Has Cursor gotten "dumb"?

In the last week or so, things seem to have been going downhill. Did something change in the GPT-4 model we're using?


I’ve been experiencing a degradation in the last 4 days too.

Hello! Any particular features or situations in which Cursor has felt dumb? Screenshots and specific examples are super helpful to see if it’s something we can fix.

We did complete the switch to GPT-4 Turbo a couple of weeks ago – after testing and modifying our prompts (especially for Command-K) – to get the speed and updated knowledge cutoff benefits.


I wish this were something I could repro by writing a set of tests. What I'm speaking to is the experience with copilot++. It seems to be happening more frequently now that the edit/diff simply does not get applied.

Ack, gotcha. Could you say a bit more about what’s going wrong? This will help us debug.

I.e. do the suggestions seem more wrong? Or are you trying to accept the suggestion but it’s not getting applied? Are you seeing no suggestions? Or something else?

Both, but the more obvious (less subjective) one is the deltas not getting applied via copilot++

Correct, I accept, it does the scan thing, and no changes get applied.

Thank you! If you can, a screen recording or steps to reproduce would be incredibly helpful.

I'll try to grab a screenshot next time.


Just realized I might be confused about my naming. What should I call the auto-editing that occurs when applying a diff? Is copilot++ different from this? @truell20

@truell20 you’re probably aware of this already, but just in case:
The developer of Aider wrote a blog post about a special diff format he uses to make GPT-4 succeed more often at creating patches:


@truell20 cursor is using a proprietary model for this sort of stuff, right? current gpt-4 doesn’t handle it reliably yet?

for lack of a better name, "gpt protocol buffers" by unicomp21 · Pull Request #771 · openai/evals · GitHub (me)

More lazy than in January.

i’m going to try to find a better way to document this, but my general sense is also that cursor has been “dumber” over the last ~6 weeks. dumber how? mostly around understanding the context from the @ references. maybe a retriever overload issue; this is a classic problem with RAG, where there is a sweet spot for the corpus size.

in the absence of more specific or quantitative comparisons, i can say that qualitatively, i need to be much more careful about the snippets and docs that i add to the context, sometimes starting a new chat if it already got “overloaded” by having a big @ doc from earlier in the chat.

one benefit of this frustration is that i found a better workflow for integrating open-source packages into my apps.

i used to @ the documentation site root and trust that cursor would figure it out. now, i have been getting better luck by cloning the repo, adding that repo to the workspace, and then, instead of pointing cursor to the documentation, pointing it to the source code.
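to make that workflow concrete, here is a minimal shell sketch. the repo URL and destination path are placeholders for whatever package you’re integrating, and the `cursor --add` step assumes Cursor’s CLI mirrors VS Code’s `code --add` (Cursor is a VS Code fork); if that flag isn’t available in your install, just use File > Add Folder to Workspace instead.

```shell
# Clone a package's source locally and hand it to Cursor for indexing,
# instead of @-referencing its documentation site.
clone_for_cursor() {
  local url="$1" dest="$2"
  # Shallow clone: we only need the current source tree, not history.
  git clone --depth 1 "$url" "$dest" &&
  # Assumption: `cursor --add` adds a folder to the active workspace,
  # like VS Code's `code --add`. Fall back to the GUI if it doesn't.
  cursor --add "$dest"
}

# Usage (hypothetical repo): clone_for_cursor https://github.com/pallets/flask.git vendor/flask
```

after this, @-referencing files under `vendor/flask` in chat points cursor at the source itself rather than at prose docs.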

there is a way that this makes sense intuitively: you remove a step from the game of telephone. instead of “code (source) => natural language (docs) => cursor => code (my app)”, it’s just “code (source) => cursor => code (my app)”. and since LLMs are just parrots, the more working code you feed in, the less you need to wrangle them to write out working code.


Very interesting, thanks for the detailed info.