Has the model changed or something?
Do you have an example?
For single-shot chats, things seem to work pretty well. But in multi-shot chats, it seems like GPT "forgets" what was previously mentioned in the conversation? gpt-4-32k doesn't have this problem?
Yes, I’m using the default/paid cursor model
lately, chat.openai.com is doing a better job than cursor, imho
i’m having to bounce between the two
You'd probably be better served using your own API key with the gpt-4-1106-preview model. Works great for me! I spam it all day and never pay more than $30/mo in API fees
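For anyone wanting to try this, here's a minimal sketch of calling gpt-4-1106-preview directly with your own key (assumes the `openai` Python package, v1.x, and an `OPENAI_API_KEY` environment variable; the example messages are placeholders, not from this thread). Note that the full chat history is re-sent on every request, which is why large file contexts inflate the bill:

```python
# Hypothetical sketch: using your own API key with the gpt-4-1106-preview
# model instead of Cursor's built-in model. Message contents are examples.
messages = [
    {"role": "system", "content": "You are a coding assistant."},
    {"role": "user", "content": "Refactor this function to use asyncio."},
]

# Every prior turn gets re-sent with each request, so token costs grow
# with conversation length and any imported file context.
request = {
    "model": "gpt-4-1106-preview",
    "messages": messages,
}

# The actual call (requires `pip install openai` and OPENAI_API_KEY set):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(**request)
# print(reply.choices[0].message.content)
```

Trimming older turns or dropping unneeded file imports from `messages` is the main lever for keeping per-request cost down.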
With a few file imports in the context, I easily rack up 10 euros a day.
Cursor is not usable at all, I feel. I have spent ~200 of my monthly fast requests just going back and forth on one thing, to the point where I had to rebuild my entire project from scratch on my own.
I don’t blame the team, it’s probably the core GPT-4 model getting dumber. I just think that the product I subscribed to, and the one I now get to use, are completely different.
Same story bro
For the past few days I've been experiencing the same thing. I can't get the AI to do anything useful, not even small and easy fixes. Cursor is totally useless for me in its current state. When using 3.5, I don't even get messages anymore, just a few lines of code that match my existing implementation with no explanation. It's a shame!
I guess the Mistral model is out now, so before long you can just use a self-hosted model that will be GPT-4 equivalent. I do wish you didn't have to import file context by default, because it probably makes my API bill 2-3x higher. I actually spent $65 last month on GPT-4 calls in Cursor.
I agree, I also have been noticing the same
Ack, we are looking into this to see what's going on. If you can, screenshots are super helpful for helping us pin down the problem.
If I may, though I don't have a screenshot: a lot of the time, if I give it a function and ask it to refactor it to use a different framework in Python, it only gives me an explanation of what needs to change instead of refactoring the function. If I'm lucky and I give it 3 functions to refactor, it'll do one function properly and leave placeholders in the other 2 that basically say "similar implementation to the first function." Speaking about Python btw, and this happens in the chat (Ctrl + L) panel.
Ditto, placeholders have gotten out of control, imho
Maybe we need a mode where the default is full implementation?
Deployed a couple of changes yesterday to the prompt prioritization that we think could have caused some of this. In the future, we want to change the UI to give you a sense of what exactly went into the prompt.
If people have updated screenshots from the past day or two, that's super helpful for debugging any remaining issues.
I've encountered a similar issue. Could you please review the post I just published? "Noticed a regression in Cursor AI"
Is it just me, or did the "dumb" effect go away? I'm guessing the GPT model got upgraded or something on OpenAI's side @truell20?