Hello premium customers! Feel free to get in touch at [email protected] if you're ever feeling chatty. And if you're not one yet, I'm sorry that I paywalled this, but it took me so much effort and drove me a little insane.
Back in September 2024 I wrote about a phenomenon I call The Subprime AI Crisis — the idea that companies like Anthropic and OpenAI are providing their services at a massive loss, and that at some point they'll have to start recouping those costs by raising prices, which in turn will force the companies connected to their APIs to do the same to their own customers.
As an aside, I also made the following prediction:
I believe that, at the very least, Microsoft will begin reducing costs in other areas of its business as a means of helping sustain the AI boom. In an email shared with me by a source from earlier this year, Microsoft's senior leadership team requested (in a plan that was eventually scrapped) reducing power requirements from multiple areas within the company as a means of freeing up power for GPUs, including moving other services' compute to other countries as a means of freeing up capacity for AI.

Microsoft laid off 9,000 people last week — one of its largest layoff rounds in history, hitting its Xbox division hardest — about a month and a half after laying off 6,000 people.
But really, my biggest prediction was this:
I hypothesize a kind of subprime AI crisis is brewing, where almost the entire tech industry has bought in on a technology sold at a vastly-discounted rate, heavily-centralized and subsidized by big tech. At some point, the incredible, toxic burn-rate of generative AI is going to catch up with them, which in turn will lead to price increases, or companies releasing new products and features with wildly onerous rates — like the egregious $2-a-conversation rate for Salesforce’s “Agentforce” product — that will make even stalwart enterprise customers with budget to burn unable to justify the expense.

What happens when the entire tech industry relies on the success of a kind of software that only loses money, and doesn’t create much value to begin with? And what happens when the heat gets too much, and these AI products become impossible to reconcile with, and these companies have nothing else to sell?
We may be about to find out.
The Enshittification Of Cursor
Last week, it came out that Anthropic, whose Claude models compete with those made by OpenAI, had hit $4 billion in annualized revenue — meaning whatever the most recent month's revenue was, multiplied by twelve — and expects to lose $3 billion in 2025 because of how utterly unprofitable its models are, though The Information adds that this is an improvement over a loss of $5.6 billion in 2024, which Anthropic claims was due to "a one-off payment to access the data centers that power its technology."
Hey, wait a second. Isn't Anthropic running its services on Amazon Web Services and Google Cloud? Didn't both Google and Amazon fund them? Is Anthropic just handing their money back to them? Weird! Kinda reminds me of how Microsoft is booking the revenue it gets from OpenAI handing it cloud credits. Weird!

Anyway, The Information added in its piece that Anthropic also lost the lead developer of its Claude Code product (Boris Cherny) to Anysphere, maker of the buzzy coding startup Cursor, along with Cat Wu, one of the product managers on Claude Code, with both allegedly going on to develop "agent-like features," a thing that, to quote The Information, involves "automating complex coding tasks involving multiple steps."
I wouldn't usually just write down exactly what a startup has told The Information, but these details are important, as are the following:
Cursor’s growth is also accelerating thanks to advances in Anthropic’s models and what developers say is an easy-to-use interface. The company said last month that it has surpassed $500 million in annual recurring revenue, or $42 million in revenue per month. That’s more than double its pace of $200 million in annual recurring revenue as of March. Anysphere’s valuation is $9.9 billion, up from $2.6 billion in December.

Anysphere has become precious to Silicon Valley — proof that there are startups other than OpenAI and Anthropic that can build actual products that people will pay for that use AI in some way, and, according to The Information's Generative AI database, it has the most recurring revenue of any private AI-powered software-as-a-service startup (outside of the aforementioned big two).
Cursor and Anysphere are symbolic. Cursor is a tool that developers actually like, that actually makes money, that grew organically based on people talking about how much they liked it, and that popularized one of generative AI's only real use cases — being able to generate or edit code quickly.
To get a little more specific, Cursor is something called an IDE — an integrated development environment, which lets a developer write code, run tests, manage projects, and so on — but with AI integrations that can predict what your next change to the code might be (which Cursor calls "tab completions"), and the ability to generate code and take actions across an entire project rather than in separate requests. If you want a deeper dive (as I'm not a software developer), I recommend reading this piece from Random Coding.
A note about coding startups and AI-generated code: code is character-heavy, as I'll get into later, which means that coding startups generally consume far more generative AI capacity than, say, a company generating text or images. Code is extremely verbose — and bad code even more so — and changes to it are nuanced, requiring the generative AI model in question to ingest and output a great deal of material.

This is, of course, compounded by these models' propensity for hallucinations. Basically, relying on AI-generated code to any degree means knowing that you're going to generate a certain amount of crap that you'll have to fix. Even if you save time in the aggregate, you're still burning extra tokens on the mistakes a model makes.
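To make the "code is character-heavy" point concrete, here's a small sketch of my own (not anything from Cursor or Anthropic). It uses the common rough rule of thumb of about four characters per token — real tokenizers vary, and code often tokenizes less efficiently than English prose — to show how a one-line request can translate into a far larger token bill once the model has to read and emit actual code:

```python
# Rough illustration: why code-heavy requests burn more tokens than prose.
# The ~4 characters-per-token figure is a common rule of thumb, not exact.
CHARS_PER_TOKEN = 4  # assumption for illustration only

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

# A short natural-language request from a developer...
prose = "Add a retry with exponential backoff to the upload function."

# ...versus the code the model actually has to produce in response
# (a hypothetical snippet, written here just for the comparison).
code = """\
import random
import time

def upload_with_retry(upload, payload, retries=5):
    for attempt in range(retries):
        try:
            return upload(payload)
        except IOError:
            time.sleep((2 ** attempt) + random.random())
    raise RuntimeError("upload failed after retries")
"""

# The request is cheap; the code output dominates the token count —
# and every hallucinated version you regenerate adds to it.
print(estimate_tokens(prose), estimate_tokens(code))
```

And that's before counting the surrounding files the model ingests as context, which for an IDE working across a whole project can dwarf both numbers.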
You will eventually realise why that's bad.
Nevertheless, the long and short of it is that Cursor is a well-liked product for using AI to build software, with the ability to ask it to take distinct actions using natural language, specifically using Cursor's (sigh) "agent," which can be told to do something and then work on it in the background as you go and do something else. Nothing about what I'm saying is an endorsement of the product, but it's hard to deny that software developers generally like Cursor, and that it's become extremely popular as a result.
Or, perhaps, I should've said "liked."