Cursor Pro plan being severely limited

r/cursor - I knew Pro was too good to be true (Pro+ Update)

It looks like in the latest update 1.1.6 (early access), they’ve dramatically reduced the number of requests you can get in the regular Pro Plan.

Before this update, I could comfortably code all day on Claude 4 thinking max.

However, today, I only managed to get about an hour of coding with it before hitting the new rate limit.

You’ve hit the rate limit on this model.

Switch to a different model, upgrade to the Pro+ plan for 3x higher limits on Claude / Gemini / OpenAI models, or set a Spend Limit for requests over your rate limit.

I guess the reason they won't tell anyone the usage limits is so that they can adjust them while they work out how much compute to allocate to each plan.

There’s a chance I could be wrong, and this could be a massive coincidence with me hitting the elusive local rate limit. But it just seems to perfectly coincide with this new update that I installed about an hour ago.

P.S. I love Cursor, and I've been surprised by all the negative feedback about the new Pro plan. I thought the offer was very generous, and I overlooked the lack of transparency because it seemed like such a great deal. But now it's clear why they did that: they're cutting back limits to push users onto the new plan. Obvious in hindsight, but I genuinely didn't think Cursor would resort to tactics like this (assuming my assumptions are correct).

(Update)

Counter theory.
Prior to today, it seemed all the Anthropic models' rate limits were per model.

As of today, it appears that the Anthropic models all share a single pooled rate limit, which may actually explain what changed today.

The fact that I used Opus this morning may have wrecked my rate limit on all the other Anthropic models.
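To make the two theories concrete, here's a minimal sketch of the difference between per-model and pooled rate limits. Everything here is invented for illustration (the quota numbers, the `UsageTracker` class, the model names as keys); it is not Cursor's actual mechanism, just the accounting difference the counter theory describes.

```python
# Hypothetical illustration of per-model vs pooled rate limits.
# Quotas and class names are invented, NOT Cursor's real system.
from collections import defaultdict


class UsageTracker:
    """Counts requests against a quota, keyed per model or per provider."""

    def __init__(self, quota, pooled=False):
        self.quota = quota    # max requests per window (invented number)
        self.pooled = pooled  # True -> one shared bucket for the whole provider
        self.counts = defaultdict(int)

    def _key(self, provider, model):
        # Pooled: all of a provider's models draw from one bucket.
        # Per-model: each (provider, model) pair has its own bucket.
        return provider if self.pooled else (provider, model)

    def allow(self, provider, model):
        key = self._key(provider, model)
        if self.counts[key] >= self.quota:
            return False  # rate limit hit
        self.counts[key] += 1
        return True


# Per-model limits: exhausting Opus doesn't touch Sonnet's bucket.
per_model = UsageTracker(quota=2)
for _ in range(2):
    per_model.allow("anthropic", "opus")
print(per_model.allow("anthropic", "opus"))    # False: opus bucket is empty
print(per_model.allow("anthropic", "sonnet"))  # True: sonnet still has quota

# Pooled limits: the same Opus usage drains the shared Anthropic bucket.
pooled = UsageTracker(quota=2, pooled=True)
for _ in range(2):
    pooled.allow("anthropic", "opus")
print(pooled.allow("anthropic", "sonnet"))     # False: shared bucket is empty
```

Under the pooled model, heavy Opus use in the morning would lock you out of every other Anthropic model for the rest of the window, which matches what I saw today.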

I haven't seen any announcement about a change to the Anthropic models' rate limits. Regardless of which theory is correct, the lack of transparency leaves people guessing and making assumptions for themselves.
