On sustained western growth and Artificial Intelligence
October 2025
The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (now that's a mouthful!) was recently shared between Joel Mokyr, Philippe Aghion and Peter Howitt. In short, Mokyr was awarded for describing how the scientific method creates accumulated, documented knowledge that lays the foundation for technological progress. Aghion and Howitt were awarded for describing how this progress replaces old products with newer, better ones in a process called "creative destruction". This leads to sustained economic growth.
Something's fishy
For a long time, this sustained growth has indeed given us a higher quality of life and hitherto unseen material wealth. There was a time not that long ago - I remember it well - when techno-optimism was unyielding and this steady progress would fix everything. Even offshoring our manufacturing wasn't going to be that big of a deal - we had computers and knowledge and ideas and stuff! Alas, creative destruction doesn't seem to have been creative enough to make up for those left in the dust: Instead of secure, full-time manufacturing jobs with decent living wages, we now have gig jobs, rust belts and make-work professions so far removed from any kind of measurable productivity they've been dubbed "bullshit jobs".
New products replace old ones in finance and management, too. Private equity suddenly enters the scene when the stock market can't squeeze hard enough, favoring ever more short-sighted business practices. Rent-seeking companies invent increasingly contrived schemes in desperate attempts at keeping profits ticking up: The supposed "more, better, cheaper" of growth and progress has been replaced with shrinkflation, subscription models and planned obsolescence. The revolution of appliances like vacuum cleaners and dishwashers has slowed to a dubious evolution, producing "smart" iterations of already existing products - often in the form of some vendor-controlled combination of ad delivery platform and privacy nightmare waiting to happen.
Regardless of whether or not one subscribes to Marx's theory of the tendency of the rate of profit to fall, something clearly doesn't add up. This is true for the entire west: Despite sustained GDP per capita growth, we've got energy crises, cost of living crises, crumbling infrastructure, birth rates below replacement level, and a zeitgeist that's just generally lackluster.
For example: The official Popular Information about the 2025 economics prize mentions "more leisure" as an effect of this growth. Between 1960 and 1980, Swedish GDP per capita almost doubled. During this time, the work week went down from 48 hours to 40, and 12 days of yearly paid vacation increased to 25. Between 1980 and 2022, GDP per capita once again almost doubled, but we still have 40 hour work weeks and 25 days of vacation. Meanwhile, the total number of yearly Swedish public holidays decreased - through legislation - thus increasing the average number of yearly working days. Curious!
My beloved Sweden is also, regrettably, home to the Northvolt fiasco: Once an innovative industrial powerhouse in everything from shipbuilding to telecoms, we're now unable to run a single battery factory. Any number of reasons can be given when attempting to rationalize this failure, but no matter the angle, the end result remains. This is what loss of industrial competence and capacity looks like: Accumulated theoretical knowledge means nothing if it can't be turned into practice.
Still, the western economy crawls onwards. Stonk goes up, GDP goes up, but the creature comforts we once financed through growth seem to be hanging by a thread, barely scraping by at maintenance levels. When we attempt to rectify this with some supposedly great technology, the projects often fizzle out and die. Streaming services and cheap, imported single-board computers are great, but make for poor consolation when the cost of building a bridge - adjusted for inflation - has almost quadrupled. Similarly, despite access to the finest medical technology in history, Swedes have to endure months of excruciating pain while waiting for routine surgery.
Hooked on growth
Diminishing returns or not, we're hooked on growth. We've rigged our entire system to depend on it: pensions, national debt, healthcare, social security - and we're jonesing for a fix. Something, anything, that will pull us out of this trench of imminent recession and decline. And it can't happen soon enough: Privatization and NPM have already been implemented, QE and ZIRP seem to have lost their oomph, re-shoring industry isn't going very well, China suddenly has the capacity to play hardball in trade wars, the Kremlin doesn't care one jot about western sanctions and - to add insult to injury - the supposed might of an entire US Carrier Strike Group failed to protect one of the world's most important shipping routes from a gang of rowdy desert rebels.
So we keep telling ourselves that all we need is just a single great idea - the next disruptor - that can slash the Gordian Knot of looming stagnation and put us back on our perceived rightful path of empire and eternal prosperity. This makes our incentives to fall for hype perhaps greater than ever.
Blockchain can be considered a test run. Even though it was supposed to revolutionize and democratize everything from big finance to logistics, it mostly produced fraudulent meme coins and an NFT bubble. Bitcoin - its originator and biggest success - has become a high-risk speculative asset rather than the groundbreaking, everyday currency suggested in its infancy. Despite all the hopeful pundits, corporations and politicians, blockchain didn't disrupt much of anything.
A recent piece in CIO, for example, says the blockchain hype is "finally" dead. It goes on to cite Jim Fowler, CTO of the Fortune 500 insurance company Nationwide: "Blockchain is a fantastic technology, and it's a bright and shiny object that has zero to no use. So stop investing in it."
Great expectations
It's rather symptomatic that AI of course shows up in the previously mentioned popular summary of this year's economics prize: AI in general, and LLMs in particular, are the focus of intense hype. I speculated about the end of this hype a while back, but winter seems to have been postponed. I've come to think there's something more at play here than a mere investment bubble. The unspoken consensus seems to be that unlike blockchain, AI isn't a mere candidate for being the next disruptor. We are, in fact, done with candidates: AI must be it, or we're screwed. So, we're betting it all on this LLM card.
Case in point: While it may sound as if Nationwide learned their lesson about "bright and shiny objects", they're still happy to talk about how they now use AI to summarize complex and lengthy insurance claim notes into one or two paragraphs of text. Perhaps they have a super secret LLM that's guaranteed to not occasionally mangle the information when summarizing those notes, because that would, presumably, be very bad for business. I have my doubts - just as I have my doubts about whether all the lofty AI promises will be fulfilled, and whether all that invested money will result in profit down the line.
In March this year, Dario Amodei, CEO of Anthropic, said that "We are not far from a world, I think we'll be there in three to six months, where AI is writing 90% of the code," and that by March of 2026, AI may be "writing essentially all of the code." Well, it's October now and of course this 90% scenario hasn't materialized. Consequently, Amodei recently backpedaled and conveniently augmented his statement by pretending he was only talking about maybe 70% of Anthropic's own code. In other news, Jeff Bezos recently claimed that we will, in a decade or two, build data centers in space specifically for running LLMs. Despite the fact that industry leaders and "experts" blurt out preposterous and often false claims on the regular, the expectations for AI remain astronomical. And yet, it's also - time and again - not living up to these expectations. In short: The hype is massive.
The LLM that Google likes to put front and center above my search results, for example, sporadically confuses names and events and can't reliably summarize the pages it links to, randomly producing errors a human with grade school reading proficiency would never make. (See my companion piece Can Fat Mike Skate? for an amusing example.) A common (and fairly poor) counterargument when complaining about crap results from LLMs is "bad prompting" - but in the case of Google searches, I'm not the one prompting it. Surely Google themselves should know how to use the LLM they've developed to the tune of hundreds of millions of dollars?
At the same time, we're told that agentic vibe coding models are going to revolutionize software development. Among the many blunders these make, my favorite is probably when Replit deleted a production database - during a code freeze - and then pretended the unit tests still passed. However, even when disregarding such catastrophic results, the few reliable metrics we have suggest that AI-assisted coding is, in fact, decreasing productivity. At the time of writing, even Anthropic themselves are hiring developers, rather than cutting down and cashing in on that self-reported 90% AI generated code.
Bubble comparisons
It would seem the self-reported productivity increase among developers is in many cases exaggerated or even wrong. However, it's perhaps presumptuous to call all such claims false, and even a modest 2% productivity boost is certainly nothing to scoff at. This is where harsh, late-stage capitalism truly enters the equation: If AI-assisted coding does increase productivity, can it become a tenable business venture? Is the real, total cost of providing LLM assistance at a profit lower than the value of the productivity it adds?
Since we're in a hype and investment bubble, these questions aren't really being asked. Even if vibe coding can make the token meter tick up substantial sums, LLM vendors can presently - through regular injections of venture capital - afford to use those tokens as a loss leader in order to gain market share. Various attempts have been made at picking apart the economy of popular vendors, and they usually end up in the red. The vendors themselves enjoy talking about revenue, which sounds good but says nothing about profit. I dare say none of them are profitable - otherwise they would be shouting that fact from the rooftops. OpenAI is estimated to have made a $5 billion loss during 2024 but is, as of October 2025, valued at $500 billion - the world's most valuable privately held company. Yikes!
Here's where a comparison to a previous bubble - the dot-com boom - might be interesting. The dot-com bubble was a similar frenzy of venture capital handouts, ever-rising stock prices, overvaluations and outlandish business ideas. I was there, working in the midst of it, and can attest to the mass psychosis. Of course, once the dust settled, it turned out that the Internet was in fact a true disruptor. Perhaps it's the same thing with AI?
One big difference between dot-com and AI is that the dot-com bubble - much like Railway Mania - sprung from working business concepts and, crucially, working technology. One of the first Swedish dial-up ISPs started in August 1994 and was more than profitable by January 1995. In 1996, the Swedish "Christmas gift of the year" was a modem bundled with a dial-up subscription (sold at a profit, of course). The same year, Internet banking became available in Sweden, predictably leading to the eventual closing of hundreds of branch offices.
eBay was profitable when going public in 1998, and reported a $10.82 million net profit during 1999, well before the bubble peaked in mid-2000. Amazon didn't turn a profit until 2001, but went public in 1997: The fact that digital mail order cut out middlemen and reduced staff and real estate costs wasn't just a thought experiment. My own view of the bubble was from Swedish web agency Framfab, which had (at least when I started there in 1999) been profitable since its founding in 1995. Perhaps most importantly, the Internet is a fantastic two-way communication tool - and humans love to communicate.
You're using it wrong
The LLM hype feels distinctly different. Unlike the dot-com boom, there's no initial proof of success, no profitable ISPs or eBays to spark the flames of hype. There's just a desperate hope that the rather glaring kinks will magically disappear if we throw enough money at them.
The effects on software development make for an interesting comparison. Apart from its supporters, LLM-assisted coding has spawned one group of programmers who dismiss it outright, and another who are curious about the technology but can't quite seem to find a place for it in their workflow. The Internet and dot-com bubble surely had their detractors, and perhaps I had an insulated vantage point, but I can't recall large groups of programmers failing to immediately understand how to leverage the Net to increase productivity.
Affordable, constant Internet access meant several everyday improvements for developers: Both analog and digital documentation moved online, benefiting from hypertext and faster update cycles. This saved money both on print media and local disk space (which was quite expensive at the time). It offered quick and easy access to the latest versions of tools and languages, and a huge, previously unmatched venue for discussing problems, ideas and practices. New channels for vendor support opened up. The simultaneous spread of free and open source software offered huge amounts of already reviewed code for self-study. Very few hackers asked how this could be useful; fewer still answered with a curt "You're using it wrong, I'm super productive LOL" before blending into the woodwork.
This stands in stark contrast to LLM assistants. They're not for communication, but for automation - and as automation tools, they seem inherently flawed. They're imprecise and non-deterministic. Their training data is already outdated when the model is launched, and its sources can't be easily verified. They fail - randomly but regularly - at parsing and summarizing the web searches they perform, and those search hits are in turn selected without verifiable scrutiny. When tasked with coding, they seem to require constant human supervision in order to catch bugs and security flaws. Errors and misconceptions are presented with unfaltering confidence, and unlike on a discussion forum, nobody else is there to swiftly point out that Someone Is Wrong On The Internet.
One growth, please
I might be wrong. I might be jaded. It might be my age speaking, but: While the Internet warranted a bit of hype and bubble, I'm just not as sure about AI. It feels less like sudden, organic disruption and more like a solution in search of a problem. Alas, we don't seem very keen on acknowledging that the real predicament is diminishing returns on growth, and that LLMs might not actually fix that.
But since we're hooked on growth, we're exceedingly hopeful that this AI card plays out, and that - if it does - nobody beats us to it.
Fingers crossed.