Aaron Benanav on why artificial intelligence isn’t going to change the world: it just makes work worse.
Aaron Benanav 9 June 2025
In the years since Automation and the Future of Work first appeared, a new wave of technological enthusiasm has swept across the popular imagination. The catalyst this time has been the rapid advances in generative artificial intelligence, spearheaded by companies like OpenAI, Google DeepMind, and Meta. Once again, a chorus of voices — from Silicon Valley executives to prominent investors and journalists — insists that we are on the verge of epochal change.
Sam Altman, the CEO of OpenAI, claims that AI will solve problems as vast as climate repair, space colonisation, and the full discovery of the laws of physics, promising a transition to an age of universal prosperity. Elon Musk, the billionaire entrepreneur behind Tesla and SpaceX, warns that AI represents both the greatest existential threat to humanity and the pathway to unimaginable abundance, predicting a future in which humanoid robots outnumber human beings by 2040. Marc Andreessen, the co-founder of Netscape turned prominent Silicon Valley venture capitalist, declares that AI will save the world. Ezra Klein, the liberal commentator and co-founder of Vox, offers a softer version of the automation-utopia narrative in his recent book Abundance, calling for the clearing away of regulatory obstacles and greater state support for research and development to accelerate technological progress.
Yet for all their novelty, these predictions are strikingly familiar. They rehearse, in updated form, the same automation discourse I critiqued in this book: an enduring narrative that imagines technology autonomously remaking human life, while obscuring the social structures in which technological change is embedded.
At the centre of today’s AI discourse lies a set of dramatic claims about labour market disruption and technological unemployment. In 2023, researchers affiliated with OpenAI and the University of Pennsylvania released a study claiming that 49 percent of workers’ tasks were exposed to large language models, suggesting an impending transformation of work across sectors ranging from education to legal services. This forecast directly updates a 2013 paper by Carl Benedikt Frey and Michael Osborne, which had sparked an earlier wave of automation anxiety by predicting that 47 percent of US jobs were vulnerable to machine learning technologies. Then as now, automation theorists imagined a tipping point at which machines would become capable of performing enough human tasks to render millions of occupations redundant, triggering an unprecedented collapse of the labour market.
It is worth recalling what became of the last round of predictions. Following the publication of Frey and Osborne’s paper in 2013, a wave of journalistic and policy commentary warned of mass technological unemployment. Yet between 2013 and the time I completed Automation and the Future of Work in 2020, no such labour market catastrophe materialised. Faced with mounting doubts, the OECD re-analysed Frey and Osborne’s methods in 2017, concluding that only around 14 percent of jobs faced a high risk of automation — a far cry from the original 47 percent figure that had captured public attention.
But even this lowered estimate proved too high. By 2020, it had become clear that many of the occupations thought most vulnerable to automation — such as food preparation, machine operation, driving, and other forms of manual or repetitive labour — had not seen significant employment declines. In most cases, employment in these sectors actually grew. Far from ushering in a wave of technological unemployment, the years following the financial crisis were marked by tepid labour market expansion and deepening economic stagnation. Productivity growth, particularly in US manufacturing, flatlined, reaching its lowest sustained rate since records began in the 1960s. The automation revolution, it seemed, had failed to arrive.
The failure of these predictions was not accidental. It reflected fundamental flaws in the methods used to forecast the future of work. Neither the 2013 study nor its 2023 successor based their projections on empirical investigations of real workplaces, workers, or production processes. Instead, both relied on the subjective judgments of computer scientists and economists, who were asked to guess whether certain tasks could, in principle, be performed by machines. If enough tasks associated with a job were deemed automatable — typically more than 50 percent — the entire occupation was classified as at risk of disappearance. No consideration was given to how jobs are structured in practice, how tasks are bundled together, or how economic and social factors mediate the adoption of new technologies. The result was a deeply mechanistic model of technological change, in which machines would displace workers whenever technically feasible, regardless of cost, institutional barriers, or political resistance. It was a model blind to the complex ways in which work is organised, contested, and transformed — and thus singularly ill-equipped to predict the actual course of economic development.
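To see how mechanistic that procedure is, consider a minimal sketch of the task-threshold rule these studies rely on. Everything in it is invented for illustration: the occupations, the task lists, and the expert scores are hypothetical, and neither study published its model in this form.

```python
# Illustrative sketch of the task-threshold logic behind occupation-level
# automation forecasts. Occupations, tasks, and "expert" scores are invented.

# Each occupation is treated as a bundle of tasks; each task receives a
# subjective judgment (1 = a machine could in principle do this, 0 = it could not).
occupations = {
    "paralegal": {"document review": 1, "client interviews": 0,
                  "drafting filings": 1, "court scheduling": 1},
    "line cook": {"plating": 0, "grill station": 1,
                  "inventory counts": 1, "menu improvisation": 0},
}

THRESHOLD = 0.5  # "more than 50 percent of tasks deemed automatable"

for job, tasks in occupations.items():
    share = sum(tasks.values()) / len(tasks)
    verdict = "at risk" if share > THRESHOLD else "not at risk"
    print(f"{job}: {share:.0%} of tasks scored automatable -> {verdict}")
```

Everything that matters in practice, from how tasks are actually bundled to the cost of integration and the politics of adoption, sits outside this calculation. That is precisely the problem.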
The reality of automation’s effects over the past decade looks very different from what was predicted. Industrial robots, which had already been widely deployed by the turn of the millennium, continued to be concentrated in a small number of sectors, primarily in automobile manufacturing. Despite falling costs for robot hardware — often cited as evidence that automation would accelerate — the true expense lay not in acquiring robots but in integrating them into production systems. Programming, optimising, and maintaining industrial robots typically costs three times more than the machines themselves, meaning that only large firms producing highly standardised goods could justify their widespread use. Small and medium-sized enterprises, which tend to specialise in customised, small-batch production, saw little incentive to automate.
Meanwhile, service-sector robotisation — endlessly promised — remained almost entirely absent. Statistical agencies tracking robot deployment continue to measure it almost exclusively in manufacturing. Even in the most optimistic scenarios, robots remained specialised tools for narrow tasks: moving heavy parts, performing precise welds, carrying out repetitive soldering. Far from heralding a new industrial revolution, the 2010s revealed the limits of automation’s capacity to transform the economy on any broad scale.
Understanding the limits of automation requires situating technological change within broader structural trends that have reshaped the global economy, and that are analysed in detail in this book. Since the 1970s and 1980s, industrialisation — the historical engine of capitalist growth — has largely run its course. In its place has emerged a service-based economy, where between 75 and 90 percent of workers in OECD countries are now employed. While some imagined that manufacturing jobs simply relocated to the global South, in reality deindustrialisation has become a worldwide phenomenon, with even countries like China experiencing a steady decline in manufacturing employment since 2013.
The shift from manufacturing to services has profound implications: productivity growth is typically much slower in services than in industry or agriculture. Services such as education, healthcare, and hospitality are often labour-intensive, resistant to mechanisation, and shaped by human interaction in ways that limit efficiency gains. As services have consumed a greater share of employment and output, overall productivity growth rates have slowed. Combined with weakening demographic growth and falling expectations for future markets, these trends have created an environment of chronic economic stagnation — one in which new technologies, however hyped, struggle to produce the kinds of transformations they once promised.
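The arithmetic of that drag can be made explicit with a simple, stylised weighted average (ignoring reallocation effects). The growth rates and sector shares below are assumptions chosen for illustration, not estimates for any actual economy.

```python
# Stylised illustration of how a rising service share slows aggregate
# productivity growth. All rates and shares are illustrative assumptions.

industry_growth = 0.03   # assumed annual productivity growth in industry
services_growth = 0.01   # assumed annual productivity growth in services

def aggregate_growth(service_share):
    """Share-weighted average of sectoral productivity growth rates."""
    return (1 - service_share) * industry_growth + service_share * services_growth

for share in (0.5, 0.75, 0.9):
    print(f"service share {share:.0%}: aggregate growth {aggregate_growth(share):.1%}")

# As the service share rises from 50% to 90%, aggregate growth falls from
# 2.0% to 1.2% a year, even though neither sector's own growth rate changes.
```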
What the past decade has demonstrated is not the disappearance of work, but rather its transformation. Even where new technologies have been introduced, most jobs have persisted, albeit in altered forms. Studies of digitalisation’s impact on work consistently show that adjustment has occurred primarily through changes in task structures within occupations, rather than through wholesale shifts between occupations. Contrary to the assumptions of automation theorists, there is no clear threshold — such as 50 percent of tasks automated — beyond which a job ceases to exist. Instead, workers adapt, roles evolve, and occupations survive, often with different skills and responsibilities than before. Whether employment in a particular sector grows, contracts, or stagnates depends not only on technological capabilities, but on broader economic conditions.
Even the same job title can encompass radically different kinds of work across firms and countries, depending on prior histories of automation, union strength, and regulatory protections. Building a car, for example, looks very different in the highly automated plants of Volkswagen in Germany, in the artisanal workshops of Ferrari in Italy, and in the rapidly expanding electric car factories of BYD in China. The same is true in the film industry, where Hollywood in California, Bollywood in India, and Nollywood in Nigeria each organise production according to distinct technical, economic, and cultural logics. In this context, the way technology changes work is neither automatic nor inevitable. It is shaped by collective choices about what kinds of work, and what kinds of working lives, society is willing to sustain.
Over the longer arc of computerisation, digital technologies have often been associated with upskilling. As workplaces became more technologically complex, the demand for better-educated, more highly trained workers increased, contributing to the polarisation of labour markets between high-skill, high-wage jobs and low-skill, precarious ones. Yet more recent technological developments have followed a different trajectory. In some sectors, digitalisation has enabled not upskilling but de-skilling. Services like Uber have stripped traditional taxi work of the expertise it once required, replacing local knowledge with GPS navigation systems.
At the same time, digital technologies have vastly expanded employers’ capacity to monitor and discipline workers. Take efficiency wages: the premium employers must often pay when managers cannot easily observe how well people are working. Jobs that were once difficult to supervise — such as long-haul trucking — are now subject to constant electronic surveillance, enabling firms to cut that premium. With greater oversight, firms can exert tighter control over the pace and organisation of work.
These developments reveal how technologies are not neutral: they are deployed within a political and institutional context that determines their impact on labour. Governments could regulate workplace surveillance; unions could bargain for protections against invasive monitoring. In the absence of such efforts, however, technological change increasingly serves to degrade work and exacerbate economic inequality.
In the wake of automation’s disappointing economic impact, hopes have increasingly shifted toward generative artificial intelligence. Immense resources are now being poured into the development of AI models and the construction of the data centres required to sustain them. Microsoft, Google, Meta, and OpenAI have collectively invested billions of dollars, gambling that generative AI will deliver the breakthrough that previous rounds of automation failed to achieve. Beneath the rhetoric of industrial revolution lies a more specific ambition: to engineer a services revolution, raising productivity growth in the very sectors — healthcare, education, retail, hospitality — that have historically lagged behind.
Given the scale of investment, the implied expectations are staggering: productivity would need to rise at rates not seen since the rapid catch-up phases of twentieth-century Japan and China — but this time in already technologically advanced economies like the United States — in order to make these investments profitable. It is easy to understand the appeal. If robotisation could not deliver a new wave of prosperity, perhaps intelligent software agents could. Yet the gap between aspiration and reality remains large, and there are good reasons to doubt whether generative AI, on its current trajectory, can resolve the structural stagnation of post-industrial economies.
The fundamental limitations of generative AI have become increasingly apparent, even to many of its early champions. As François Chollet and other AI researchers have pointed out, deep neural networks suffer from inherent brittleness: they struggle to generalise beyond their training data, fail at basic reasoning tasks, and remain unreliable for applications requiring consistency or accuracy. Despite hoovering up enormous quantities of digital information, these models confront hard limits in their capacity to learn or adapt. Part of the problem lies in the very architecture of artificial neural networks, which were designed on the basis of mid-twentieth-century behaviourist psychology. Unlike organic brains — which come equipped with rich internal structures, innate goals, and representational frameworks — artificial networks are largely unstructured, relying on brute statistical association. The human mind can grasp a new concept from a few examples; machine learning models typically require millions. Hybrid approaches that integrate symbolic reasoning with deep learning, of the kind behind AlphaFold2’s success in protein folding, offer more promising results — but they must be painstakingly designed for specific tasks. There is no general method for building modular, structured cognition into deep learning systems, and no guarantee that one will be found.
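The generalisation problem shows up even in toy settings. The sketch below, which assumes scikit-learn is installed, fits a small neural network to a sine curve on a narrow interval and then asks it to extrapolate; it is purely illustrative and says nothing about any particular production system.

```python
# Toy illustration: a neural network interpolates well inside its training
# range but fails to extrapolate beyond it. Purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(2000, 1))      # inputs drawn from [-3, 3]
y_train = np.sin(X_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(X_train, y_train)

X_inside = np.linspace(-3, 3, 200).reshape(-1, 1)   # range seen in training
X_outside = np.linspace(6, 12, 200).reshape(-1, 1)  # range never seen

err_inside = np.mean(np.abs(model.predict(X_inside) - np.sin(X_inside).ravel()))
err_outside = np.mean(np.abs(model.predict(X_outside) - np.sin(X_outside).ravel()))
print(f"mean error inside training range:  {err_inside:.3f}")
print(f"mean error outside training range: {err_outside:.3f}")
# The second number is typically far larger: the network has learned a
# statistical association over its training data, not the underlying rule.
```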
Signs of disillusionment are already emerging. Microsoft CEO Satya Nadella recently acknowledged that, despite the company’s enormous investments in OpenAI and other generative AI ventures, no measurable increase in productivity growth has yet appeared. He framed the true benchmark simply: if generative AI were transformative, we would already be seeing the world economy growing faster. Instead, there is little evidence of any such acceleration. Reports in the Wall Street Journal and Financial Times have detailed the sluggish pace of corporate adoption, as businesses struggle to find reliable, large-scale uses for AI models prone to inconsistency and error. Daron Acemoglu, the Nobel Prize–winning economist, has voiced open scepticism, warning that generative AI technologies may do little to resolve the deeper economic malaise afflicting advanced capitalist societies. For all the extraordinary technical feats that have been achieved, it is increasingly clear that the current wave of AI innovation may not, by itself, produce the broad-based economic dynamism that its proponents so confidently predict.
Rather than inducing mass unemployment, the more immediate effects of generative AI are likely to mirror broader trends of job transformation already unfolding today, namely de-skilling and surveillance. Preliminary studies suggest that generative AI technologies raise productivity most among lower-skilled workers, helping to standardise outputs but doing little to enhance high-skill, high-complexity work. It is no coincidence that these systems excel at generating average-quality writing and basic code — the kinds of tasks that students perform, which is why one of the main use cases for ChatGPT has been helping students cheat. As such tools become more widespread, there is a risk of a digital de-skilling of fields such as computer programming, graphic design, and legal research, where algorithmically generated output could substitute for the work of practitioners with average levels of competence.
At the same time, generative AI models offer new possibilities for monitoring and evaluating workers, processing surveillance data to exert greater control over labour processes and suppress wages. Once again, the technologies that promise to liberate us from work risk intensifying exploitation instead. Without robust social and legal frameworks to redirect their development, the likely outcome of the generative AI boom will not be mass joblessness, but a worsening of work conditions, an acceleration of economic inequality, and a further erosion of workers’ autonomy.
The lessons of the past decade should temper both our hopes and our fears. The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities. Technological change is not an external force to which societies must simply adapt; it is a socially and politically mediated process. Legal frameworks, collective bargaining, public investment, and democratic regulation all play decisive roles in shaping how technologies are developed and deployed, and to what ends.
The current trajectory of generative AI reflects the priorities of firms seeking to lower costs, discipline workers, and consolidate profits — not any drive to enhance human flourishing. If we allow this trajectory to go unchallenged, we should not be surprised when the gains from technological innovation accrue to the few, while the burdens fall upon the many. Yet it does not have to be this way. The future remains open, contingent on whether we are willing to confront, contest, and redirect the pathways along which technology advances.
The present frenzy surrounding artificial intelligence will not last forever. As the limitations of generative AI become clearer, and as the economic returns on massive corporate investments fail to materialise at the expected scale, the speculative bubble will inevitably burst. When that moment comes — as it did after the dot-com crash, and again after the robot hype of the 2010s — we will face a critical choice. We can resign ourselves to another cycle of technological disillusionment, or we can ask more fundamental questions about how technology might genuinely serve human needs. If we are to meet the challenges of the coming decades — from the climate crisis to the conclusion of the demographic transition to the pursuit of lives of greater freedom and meaning — we will need not more speculation about machines that will save us, but deliberate, collective action to shape our technological futures.
The task ahead is not merely to anticipate what AI will do to us. It is to determine what we, as societies, intend to do with AI. We must insist that technological development be measured not by shareholder returns, but by its contribution to building a more just and humane world. The final chapter of Automation and the Future of Work begins to explore what that project might entail.
Preface to the Brazilian edition of Automation and the Future of Work, forthcoming from Boitempo. Original edition available now from Verso.