We live in an era where the rapid development of artificial intelligence monopolizes public discourse, inspiring awe but, above all, a deep and sometimes diffuse fear. Yet if we scratch the surface of the technophobic science fiction scenarios that dominate cinema, the ones about autonomous machines that gain consciousness and turn against humanity, we discover that the real root of social anxiety is far more down-to-earth and entirely material.
The average person, the worker who struggles daily to get by in an increasingly hostile economic environment, does not fear the algorithm itself. They fear the hand that controls the algorithm.
The Oligopoly of Intelligence: Who Owns AI?
And the truth, as countless contemporary political economy analyses demonstrate, is that artificial intelligence is currently concentrated in the hands of big capital. Developing and training sophisticated artificial intelligence models requires colossal investments, access to supercomputers, enormous computing power, and unimaginable amounts of data. An oligopoly of tech giants controls these resources almost exclusively.
The numbers paint a staggering picture of concentration. According to RBC Wealth Management and Bloomberg data, combined capital expenditures by Big Tech firms (Amazon, Alphabet, Meta, Microsoft, and others) more than doubled in just two years, reaching approximately $427 billion in 2025, and are projected to surge to roughly $562 billion in 2026. Campaign US reporting puts the Big Four alone on track to spend upward of $650 billion on AI investments in their 2026 fiscal years. Amazon leads with a $200 billion capital expenditure (capex) plan, mostly earmarked for AWS (Amazon Web Services), while Alphabet forecasts $175–185 billion to double down on Gemini AI models and Google Cloud expansion. These are not the budgets of democratic institutions or public research bodies. They are the war chests of private empires.
The venture capital landscape tells an even more pointed story of concentration. Crunchbase data shows that global AI investment reached roughly $202 billion in 2025, representing half of all venture capital deployed worldwide, a concentration unprecedented in technology investment history. Foundation model companies alone captured $80 billion, or 40% of all global AI funding. OpenAI and Anthropic, just two companies, captured 14% of all global venture investment across every sector. Geographically, 79% of AI funding flowed to the United States, with the San Francisco Bay Area alone absorbing $122 billion, creating a geographic concentration of technological power with no modern parallel.
These firms are not operating from positions of financial fragility. As of late 2025, the major tech companies held cash and equivalents totaling roughly $490 billion and generated nearly $400 billion in trailing free cash flow after their enormous capital outlays, meaning most of this AI buildout is being funded from internally generated cash rather than through debt. This is an oligopoly that can sustain and deepen its dominance indefinitely unless something intervenes.
Consequently, artificial intelligence is not developed in a neutral, ideal vacuum, but within the strict framework of the capitalist mode of production.
Profit, Not Liberation: The Motive Behind the Machine
In this context, the primary motivation for integrating this technology into the economy is not social welfare, the liberation of humanity from drudgery, or the fair distribution of global wealth. On the contrary, it is instrumentalized with the sole purpose of maximizing profit.
When such a powerful tool belongs to those whose ultimate goal is to increase surplus value, it is perfectly logical and justified for the social majority to view it with intense suspicion. Nowhere is the relentless continuation of class struggle in the modern digital age more plainly visible. The historical conflict between capital and labor has never stopped; it has simply shifted to a new, technologically and digitally upgraded terrain.
In every previous industrial revolution, from the advent of the steam engine and electricity to robotics, technological progress in the hands of the elite has often been used as a means of intensifying labor, tightening control over workers, and relentlessly squeezing labor costs. Today, artificial intelligence is the new "super-machine" through which the owners of the means of production seek to minimize their dependence on living, human labor.
Surplus value, i.e., the core of exploitation resulting from the difference between the value produced by the worker and the wage they ultimately receive, is now finding new, automated ways of extraction. The crucial question, then, in the context of class analysis, is not whether artificial intelligence as a technology is "good" or "bad," but at whose expense its practical application will operate.
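The decomposition described here has a compact standard notation. As a sketch in the classical Marxian symbols (the notation below is the textbook convention, not the author's own):

```latex
% Value of a day's output decomposed in the classical Marxian scheme:
%   W = c + v + s
% c: constant capital (materials, depreciation of machinery)
% v: variable capital (the wage paid to the worker)
% s: surplus value (value produced beyond the wage)
W = c + v + s \qquad\Longrightarrow\qquad s = W - c - v
% The rate of surplus value (the "rate of exploitation"):
e = \frac{s}{v}
```

In these terms, the "new, automated ways of extraction" amount to raising $e$ by shrinking $v$ while holding $W$ constant or growing it.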
The Body Count: AI and the Displacement of Workers
Unfortunately, the answer is already becoming clear for ordinary people, those who, through their physical and mental efforts, produce the real wealth of this world. Scientific research and market analysis conclude that AI-driven automation threatens to gradually displace millions of jobs, affecting not only manual labor but also intellectual and administrative work.
The data from 2025 alone is sobering. According to the outplacement firm Challenger, Gray & Christmas, nearly 55,000 job cuts in the United States were directly attributed to AI in 2025, out of a total of 1.17 million layoffs, the highest level since the pandemic year of 2020. Over 127,000 workers at U.S.-based tech companies were laid off in mass job cuts during the year. Several major companies explicitly cited AI when announcing workforce reductions. Workday cut 8.5% of its staff, roughly 1,750 jobs, to reallocate resources toward AI investments. Amazon eliminated 14,000 corporate roles, stating that AI enables leaner structures. Microsoft axed 6,000 workers. IBM laid off 8,000 employees as AI agents took over its HR department.
Looking forward, the World Economic Forum's 2025 Future of Jobs Report, surveying over 1,000 employers representing 14 million workers across 55 economies, projects that 92 million jobs will be displaced by 2030. Forty-one percent of employers globally plan to reduce their workforce due to AI within five years. Gartner projects that by the end of 2026, 20% of organizations will use AI to flatten their hierarchies, potentially eliminating over half of current middle-management positions. Goldman Sachs Research found that unemployment among 20- to 30-year-olds in tech-exposed occupations has already risen by almost 3 percentage points since early 2025, notably higher than for their same-aged counterparts in other fields.
Crucially, the disruption extends well beyond blue-collar work. Bloomberg research suggests AI could replace 53% of market research analyst tasks and 67% of sales representative tasks. Anthropic CEO Dario Amodei himself has warned that AI could eliminate half of all entry-level white-collar jobs within five years. Microsoft CEO Satya Nadella revealed that 30% of company code is now AI-written, and simultaneously, over 40% of Microsoft's recent layoffs targeted software engineers.
Workers are not at risk of being replaced because the algorithm has superior empathy or creativity but mainly because capital prefers it as the perfectly obedient "worker": software does not unionize, does not participate in strikes, does not get sick, does not ask for leave or raises, and, above all, never claims a share of the final profits.
It is worth noting, however, that the picture is not monolithic. A February 2026 National Bureau of Economic Research study found that despite 90% of firms reporting no measurable impact of AI on workplace productivity, executives continued to project AI-driven productivity gains, suggesting that much of the layoff wave is driven by anticipation of AI's impact rather than its demonstrated performance. A Harvard Business Review analysis from January 2026, based on a survey of over 1,000 global executives, similarly concluded that layoffs are happening in expectation of AI's potential, not its proven results. Companies are, in effect, shedding workers in advance of a transformation that has not yet fully materialized, a speculative purge with very real human consequences.
The Algorithmic Overseer: Surveillance and Control in the Digital Workplace
When it comes to whether artificial intelligence will make life better for most people or whether they will be replaced for the sake of profits, the path of digital capitalism so far clearly points to the latter. We are already seeing this technology being used for what scholars call "algorithmic management", the delegation of core managerial functions to automated systems.
The scale of adoption is remarkable. A recent OECD survey found that 90% of workplaces in the United States already use at least one form of algorithmic management, with three out of four firms deploying ten or more of the fifteen distinct algorithmic management tools identified by researchers. In Europe, the adoption rate stands at 79%.
Nowhere is this more visible than in Amazon's vast warehouse network. A 2025 study published in the journal Socius by Northwestern University researcher Teke Wiggin documented how Amazon's fulfillment centers represent perhaps the most comprehensive implementation of algorithmic control yet seen in a physical workplace. Workers follow instructions delivered via handheld barcode scanners; each scan triggers the next task — picking, packing, or shelving — while every movement is tracked and rewards or penalties are issued based on real-time algorithmic performance evaluations. Workers can be disciplined or even fired if they perform poorly on metrics like tasks completed per hour, all without a human supervisor ever making the decision.
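The mechanism the study describes can be reduced to a deliberately simplified sketch: a metric such as tasks per hour is computed from scan events and compared against a fixed threshold, with no human in the loop. The threshold and penalty tiers below are invented for illustration; Amazon's actual parameters are not public.

```python
# Simplified model of rate-based algorithmic discipline. All numbers
# (the 100/hour quota, the 80% warning band) are hypothetical.

from dataclasses import dataclass

@dataclass
class ScanLog:
    worker_id: str
    tasks_completed: int   # total scans (picks, packs, shelvings)
    hours_on_task: float

def evaluate(log: ScanLog, required_rate: float = 100.0) -> str:
    """Return an automated 'disciplinary' decision from raw scan counts."""
    rate = log.tasks_completed / log.hours_on_task
    if rate >= required_rate:
        return "ok"
    elif rate >= 0.8 * required_rate:
        return "written_warning"      # issued automatically, no supervisor
    else:
        return "termination_flagged"  # escalated without human review

# A worker scanning 850 items in a 10-hour shift (85/hour) is warned:
print(evaluate(ScanLog("w-117", tasks_completed=850, hours_on_task=10.0)))
```

The point of the sketch is the information asymmetry it encodes: the worker sees only the verdict, while the employer sees every scan.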
The study revealed something even more disturbing: during the 2021 union drive at a Bessemer, Alabama, warehouse, Amazon repurposed the very infrastructure that algorithmically directed workers (workstation displays, personal apps, and internal communications systems) into anti-union propaganda tools. Researchers described it as the "weaponization" of algorithmic management: the same infrastructure that controls the pace and nature of work was turned into an instrument for suppressing collective action. Workers' personal apps, relied upon for overtime offers and pay records, were flooded with anti-union messages that could not be ignored. As Wiggin put it, these algorithmic tools are not merely whips; they are also bazookas in the employer's anti-union arsenal.
The Center for Democracy & Technology has further documented how companies frequently use surveillance and automated management systems to accelerate the pace of work to levels that threaten the health and safety of workers. Privacy International reports that decisions made by algorithms now determine how much individuals are paid and whether their employment is suspended or terminated, often without satisfactory explanation, rendering such decisions effectively impossible to challenge.
This is the digital panopticon in practice. The Foucauldian concept takes on a literal dimension when algorithms enable continuous, data-intensive, and opaque surveillance, creating what scholars describe as a devastating information asymmetry. Executives can see hundreds of data points about their workers in real time, from how fast they are working to how many times they click on and off their work app, while workers often have less access to information about their own performance than they did before digitalization.
The Data Harvest: How Free Labour Trains Its Replacement
Even the data that we all produce for free with our digital presence is quietly harvested to train the very systems that may render our work obsolete tomorrow. Every search query, social media post, product review, photograph, and piece of writing uploaded to platforms owned by these same tech giants becomes raw material for the refinement of AI models. This is a form of mass unpaid labor, largely unacknowledged and uncompensated, that feeds directly into the profit-generating machinery of the very companies now deploying AI to eliminate paid positions.
The economics of this data extraction are extraordinarily lopsided. OpenAI committed to spending $1.4 trillion over eight years to build new data centers, partnering with Nvidia to deliver 10 gigawatts of data center compute, all while operating on just $13 billion in revenue. The company's inference costs alone, the expense of actually running ChatGPT when a user submits a prompt, rose from $3.76 billion in 2024 to $5.02 billion in just the first half of 2025. Where does the raw material that these models are trained on come from? Largely from the uncompensated creative and intellectual output of billions of ordinary people.
This dynamic represents a novel form of what Marx called "primitive accumulation", the initial appropriation of common resources that makes capitalist production possible. In the digital age, the commons being enclosed is not land but data, not physical labor but cognitive output. The workers whose jobs are now threatened by AI systems were, in many cases, the very people whose freely contributed data made those systems possible in the first place.
The Contradiction at the Heart of AI Capitalism
However, this aggressive strategy of capital hides within it a gigantic contradiction, which economic history has repeatedly highlighted. A well-known incident from the history of labor struggles captures it perfectly: when an automobile manufacturer once proudly showed the union his new robots, saying meaningfully that they would never go on strike, the union representative replied disarmingly that the robots would also never buy his cars.
Algorithms remain excellent producers, but non-existent consumers. If the masses of ordinary people lose their income, are replaced and marginalized, who will have the purchasing power to consume the wealth that will be produced so quickly?
The empirical evidence for this concern is mounting. A Bank for International Settlements working paper on AI and income inequality found that AI investment is consistently associated with higher real incomes and a higher income share for the richest decile, while being associated with no change in real incomes and a significantly lower income share for the poorest decile. Acemoglu and Restrepo's widely cited research demonstrates that while AI can increase overall productivity, it often does so at the expense of labor's share of income, as capital's share rises relative to wages. Piketty's theoretical framework remains grimly relevant: wealth inequality tends to increase whenever the return on capital exceeds the rate of economic growth, a condition that AI-driven capital accumulation is actively accelerating.
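Piketty's condition can be made concrete with a stylized simulation: if capital income compounds at a return r while total income grows at g < r, capital's share of income rises mechanically. The 5% and 2% rates and the 30% starting share below are standard textbook illustrations, not estimates from the studies cited above.

```python
# Stylized r > g dynamic: capital income grows at r, the whole economy
# at g; when r exceeds g, capital's share of income drifts upward.

def capital_share_over_time(initial_share: float, r: float,
                            g: float, years: int) -> float:
    """Capital's share of total income after compounding for `years`."""
    capital_income = initial_share   # normalize total income to 1.0
    total_income = 1.0
    for _ in range(years):
        capital_income *= 1 + r      # return on capital
        total_income *= 1 + g        # overall economic growth
    return capital_income / total_income

share = capital_share_over_time(initial_share=0.30, r=0.05, g=0.02, years=30)
print(f"{share:.2f}")  # prints 0.72: capital's share grows from 30% to ~72%
```

Even a three-point gap between r and g, sustained for a generation, more than doubles capital's share; this is the sense in which AI-driven capital accumulation "actively accelerates" the condition.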
The microcosm of Silicon Valley itself offers a preview of this dystopia. According to the 2025 Silicon Valley Pain Index from San Jose State University, just nine households in the region hold $110 billion in liquid wealth, fifteen times more than the combined wealth of the bottom 50%, roughly 440,000 households, who collectively possess only $8.3 billion. This extreme concentration doubled in a single year. The Gini Index for Silicon Valley has surged from 0.38 in 1990 to 0.84, approaching near-total inequality, even as nearly 30% of households in America's supposed innovation capital cannot meet their basic needs.
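For readers unfamiliar with the metric, the Gini index runs from 0 (perfect equality) to 1 (one household holding everything), so the jump from 0.38 to 0.84 is a move toward the theoretical maximum. A minimal computation over toy wealth distributions, purely for intuition (the inputs are invented, not Silicon Valley data):

```python
# Gini coefficient via the mean-absolute-difference formula:
# G = sum(|x_i - x_j|) / (2 * n^2 * mean)

def gini(values: list[float]) -> float:
    """Gini coefficient of a list of non-negative wealth values."""
    n = len(values)
    mean = sum(values) / n
    diff_sum = sum(abs(x - y) for x in values for y in values)
    return diff_sum / (2 * n * n * mean)

print(round(gini([1, 1, 1, 1]), 2))    # equal shares -> 0.0
print(round(gini([0, 0, 0, 100]), 2))  # one holder has all -> 0.75,
                                       # the finite-sample maximum (n-1)/n
```

At 0.84, the region sits closer to the one-holder-has-all extreme than to the equality of 1990's 0.38.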
Blindly adopting artificial intelligence with the sole aim of reducing labor costs will inevitably lead to greater inequality and foreshadow deep crises of underconsumption. The IMF itself has acknowledged this risk in a 2025 working paper on AI adoption and inequality, finding that when firms choose how aggressively to adopt AI, the wealth inequality effect is particularly pronounced, as the potential cost savings from automating high-wage tasks drive significantly higher adoption rates. The paper warns that models ignoring the adoption decision "risk understating the trade-off policymakers face between inequality and efficiency."
The Bubble Within the Boom: Financial Fragility and Circular Capital
There is another dimension to the AI economy that receives less attention in popular discourse but is critical to understanding the risks ahead: the speculative financial architecture underwriting the entire enterprise. In late 2025, just five companies accounted for 30% of the U.S. S&P 500 and 20% of the MSCI World index, the greatest concentration in half a century, with share valuations reportedly at their most stretched since the dot-com bubble.
Concerns about circular financing have grown louder. Nvidia invested $100 billion into OpenAI, which spends billions purchasing Nvidia's chips. Microsoft holds a 27% stake in OpenAI, which runs its inference costs through Microsoft Azure. Oracle entered a $300 billion deal with OpenAI. The interconnected web of investments, cloud commitments, and hardware purchases among these firms creates the appearance of accelerating growth while potentially masking systemic fragility. As Miramar Capital co-founder Max Wasserman warned, investors are essentially funding their own future revenue.
An August 2025 report from MIT's Media Lab stated bluntly that despite $30–40 billion in enterprise investment in generative AI, 95% of organizations are getting zero return. OpenAI's own finances underscore the precariousness: its valuation more than tripled from $157 billion in October 2024 to $500 billion a year later, with a further raise targeting $750–830 billion by early 2026, even as the company, which has been projected to run out of money by mid-2027, has failed to present a credible roadmap to profitability. The Bank of England has formally warned of growing risks of a global market correction tied to the overvaluation of leading AI firms.
If this bubble bursts, the workers who have already been displaced will not be the ones shielded from the fallout. They never are.
Regulatory Responses: Too Little, Too Late?
Against this backdrop, regulatory efforts have begun to materialize, though their adequacy remains sharply contested. The European Union's AI Act, which entered into force in August 2024, represents the world's first comprehensive legal framework for AI governance. Its high-risk system obligations, covering AI used in recruitment, screening, performance evaluation, and employment-related decision-making, are phasing in through August 2026 and into 2027.
The Act introduces meaningful protections: employers must notify workers before deploying high-risk AI, establish meaningful human oversight with authority to intervene and override algorithmic outputs, monitor for discrimination, and maintain detailed logs. Emotion recognition in workplaces and biometric categorization are outright banned. AI-driven hiring assessments and employee monitoring tools face heightened scrutiny. Violations can trigger fines of up to 7% of global annual revenue.
Yet even this landmark regulation has its critics. The European Commission's November 2025 "Digital Omnibus" package already proposed extending certain compliance deadlines and simplifying obligations for businesses. Meanwhile, the United States under the current administration has adopted a deliberately light-touch stance, with no federal AI regulation on the horizon. The regulatory gap between the two largest Western economies means that global corporations face an uneven landscape, with predictable incentives to arbitrage the difference.
More fundamentally, even the EU AI Act focuses on procedural safeguards, transparency, human oversight, and non-discrimination rather than addressing the structural question of who owns and controls AI systems and who captures their economic value. Requiring that an employer notify a worker before an algorithm evaluates them is meaningful. But it does not change the fact that the algorithm's purpose is to maximize productivity and minimize labor costs in the interest of shareholders, not workers.
The Class Determination of Fear
In conclusion, our fear is not a naive rejection of technological progress but a perfectly rational, class-determined, and empirically documented reaction. We know deep down that a technological revolution with enormous potential has been hijacked by a system that traditionally sacrifices people on the altar of profit growth.
As long as artificial intelligence remains a private privilege and weapon in the hands of the few, class conflict will intensify at the expense of those who support the real economy. The solution, of course, is not blind opposition to progress and the rejection of technology itself, but the overturning of the terms of ownership and control.
The scale of what is at stake cannot be overstated. Between 2026 and 2029, U.S. mega-cap companies alone are expected to spend $1.1 trillion on AI, with total global AI spending projected to surpass $1.6 trillion. This is not merely an investment in technology — it is an investment in a particular vision of the future, one in which the returns flow overwhelmingly to the owners of capital while the costs are borne by the workers whom that capital displaces. The PwC economists who modeled AI's inequality effects through 2035 concluded that the outcome depends entirely on the policy choices made today. Under the most pessimistic scenario, where AI productivity gains accrue unevenly and adoption is shaped by mistrust and weak governance, income and wealth inequality worsen significantly.
As the Noema essayists Saffron Huang and Sam Manning argued in April 2025, by the time reactive approaches like universal basic income become necessary, those controlling the AI economy may already be powerful enough to evade meaningful taxation, and citizens too weak to demand their share. History repeats this pattern: once power concentrates, the powerful reshape the rules.
Only when technological achievements are placed under social control and at the service of collective needs will artificial intelligence cease to be a threat hanging over the heads of workers and become what it should have been from the outset: a powerful means of truly liberating humanity.