Cursor's 500 requests => "unlimited" => 225 requests


Three months ago, I wrote “Why Cursor's flat-fee pricing could lead to its downfall”—and now it’s happening.

In the blog post, I argued that AI coding agents that offer subscription models create misaligned incentives. The reasons for the misaligned incentives are simple:

  • AI coding agents are, in essence, (useful!) wrappers around AI models. AI models charge per token.

  • The “subscription” includes an (often undisclosed) token-usage allowance.

  • When users exceed that allowance, they get asked to pay more, upgrade their plan, or just wait for a certain period of time (the numbers below show why).
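
To make that squeeze concrete, here is a minimal sketch with made-up numbers (a $20 flat fee and an assumed $0.09 of model cost per agent request; neither figure is Cursor’s published rate). Once a user’s token consumption exceeds the flat fee, every further request costs the vendor money, so the vendor has to throttle, upsell, or eat the loss:

```python
# Hypothetical numbers only -- illustrating why a flat fee on top of
# per-token model costs pushes vendors toward hidden caps and upsells.
FLAT_FEE = 20.00          # what the user pays per month (assumed)
COST_PER_REQUEST = 0.09   # what the vendor pays the model provider per request (assumed)

for requests in (100, 225, 500, 1000):
    vendor_cost = requests * COST_PER_REQUEST
    margin = FLAT_FEE - vendor_cost
    print(f"{requests:>5} requests -> vendor pays ${vendor_cost:6.2f}, margin ${margin:+7.2f}")

# Past roughly 222 requests (20 / 0.09) the margin goes negative: the vendor
# now loses money on every additional request the subscriber makes.
```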

This is why Cursor users are furious right now. And they’re looking to switch.

Working on an open source tool, I’ve long felt that there is a better way. In this post, I’ll look at the timeline of Cursor’s changes, and explain what we think the proper way to make money is. I’ll also have a little surprise at the end.

So, what just happened?

Yesterday, Cursor wrote “we missed the mark” and apologized for their recent pricing changes. The apology came after a series of moves:

The original post has since been edited, but the first version can still be found on the Internet Archive.

In that June 16th post, they announced a new Ultra plan and an “unlimited-with-rate-limits” model. This change was worse than the original “500 request limit” model (which was already problematic), because it further obscured how much you could actually use the tool. Existing users were grandfathered in, but new users had to move to the new pricing model.

It looked like they had come around to our point: paying for LLM tokens directly is the only way to be completely transparent to users and to keep incentives aligned. They announced on their forum that they’re switching to per-token metering (without a markup).

This is exactly the same pricing model as our open-source project (and other open-source alternatives).

They updated the June 16th blog post to reflect this new pricing, hiding their “unlimited-with-rate-limits” misstep.

Their change to per-token metering did, in effect, halve their original promise: from 500 requests down to roughly 225 (for median usage; much less for some users) when using a frontier model like Sonnet 4.
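
The back-of-envelope math behind that number looks roughly like this. Assuming the plan’s monthly credit is spent at the model provider’s per-token rates, and a median agent request burns on the order of 25k input and 1k output tokens at Sonnet-class prices (all of these figures are my assumptions for illustration, not Cursor’s published numbers), $20 buys you a bit over 200 requests:

```python
# Rough estimate: how many Sonnet-class requests a $20/month credit buys
# under per-token metering. Every rate below is an illustrative assumption.
MONTHLY_CREDIT = 20.00             # USD of included usage (assumed)

INPUT_RATE = 3.00 / 1_000_000      # assumed USD per input token
OUTPUT_RATE = 15.00 / 1_000_000    # assumed USD per output token

# Assumed token footprint of a "median" agent request (context + generated edits).
input_tokens, output_tokens = 25_000, 1_000

cost_per_request = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
requests_per_month = MONTHLY_CREDIT / cost_per_request

print(f"~${cost_per_request:.3f} per request -> ~{requests_per_month:.0f} requests per month")
# ~$0.090 per request -> ~222 requests per month
```

Users with larger contexts or chattier sessions land well below that, which is where the “much less for some users” complaints come from.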

People were understandably upset.

To the credit of the Cursor team, they recognized that they hadn’t communicated this well, and apologized for how they handled the rollout.

Cursor: welcome to the club of token-based pricing that matches the LLM providers’ costs. It took you a while, but your incentives are now better aligned with your users’.

Let me give you some advice on how to actually make money, in a way that fosters a great community:

  • Let users use any LLM provider directly. Don’t try to make money here!

  • Make your core product open source. Open-source products are far better at feature development because they have a much closer connection to their community.

  • Build paid features that are interesting only to larger teams and enterprises (individual users shouldn’t need to care about them).

  • Since you have your own fine-tuned models, you can also become an LLM provider and sell those per-token.

And remember: When incentives are aligned, you also have a clear incentive to decrease costs! As we mentioned in our blog post:

It sounds like the 500-request limit was never sustainable, and was really just subsidized to get people to use Cursor. That is marketing spend. It’s better to be very clear about your marketing spend, so you don’t set people up for disappointment later.

So let us do some marketing spend right now! We’ll run a promotion until July 7th: for your FIRST top-up of $15, you’ll see $45 added to your account instead of $15. The other $30 is on us. And this is ON TOP of our regular $20 in free credits for new users, for a total of $50 in free credits!

  • These credits won’t expire. Use them whenever you want.

  • You pay for tokens (the exact pricing model 99% of AI model providers use). The more tokens you use, the more you get charged.

  • This means we don’t use wicked pricing schemes and have no way to pull the rug on you. You’ll only pay “more” if (for some reason) Google or Anthropic decides to increase the per-token cost of Gemini 2.5 Pro, Claude 4, etc. And that happens rarely.

The beauty of being open-source is that we also have an active Discord community that’s ready to help out if you get stuck. And yes, dear Cursor user, you’re allowed to talk about our competitors there :)))

