Why are the models so much better when using your own keys?

Hi all.

I wonder if someone from Cursor could shed some light on this.

I've been using Cursor's paid plan for a while now. I've noticed the models seem to have become increasingly "stupid" with code questions, especially when it comes to larger chunks of code.

Using paid Cursor, I get these options:

I always choose GPT-4 or cursor-fast, which I'm told is supposed to use the latest turbo models. Is that correct?

But if I swap to my own keys, I can choose from:

I've been running into an issue in the past few days where the GPT-4 and cursor-fast models were failing miserably, with GPT just going in circles and never getting to the answer.

I swap to gpt-4-1106-preview using my own keys and suddenly it nails the issue right away.

So my questions:

  1. Can someone shed some light on which exact models gpt-4 and cursor-fast on the Cursor paid plans are using? This is not clear.

  2. Why is gpt-4-1106-preview so much better than gpt-4 or cursor-fast? It seems counterproductive to pay for Cursor but then have to keep swapping to my own keys (in turn paying OpenAI) in order to get decent answers.
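For anyone wanting to reproduce this kind of comparison, here is a minimal sketch of pinning an exact model snapshot via the OpenAI Chat Completions request format, so you know precisely which model is answering. The helper name and prompt are made up for illustration, no request is actually sent, and only the standard library is used:

```python
import json

# Hypothetical helper: build a Chat Completions payload pinned to an
# exact model snapshot (e.g. "gpt-4-1106-preview") for an apples-to-apples
# comparison against whatever Cursor routes the same question to.
def build_chat_payload(model: str, question: str) -> dict:
    return {
        "model": model,  # exact snapshot name, not an alias like "gpt-4"
        "messages": [
            {"role": "system", "content": "You are a coding assistant."},
            {"role": "user", "content": question},
        ],
        "temperature": 0,  # reduce randomness so runs are comparable
    }

payload = build_chat_payload("gpt-4-1106-preview", "Why does this loop never terminate?")
print(json.dumps(payload, indent=2))
```

Sending the same question with the same pinned snapshot from both setups is the only way to rule out model-version differences as the cause.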


I have a similar question:

For Cursor chat:

  1. Does cursor-fast use gpt-4-0613, gpt-4-1106-preview, or gpt-4-0125-preview?
  2. Do GPT-4 slow requests use 0613?
  3. Which base model does inline edit (Ctrl-K) use?

Would be great to get some clarity on this! It's a source of daily frustration for me, and I still consistently find that cursor-fast or gpt-4 through Cursor is not as intelligent as the preview models using my own keys.

Do you have an example of where using your own key gives you better answers?

  1. cursor-fast: a 3.5-level model that follows our cmd-k format a little bit better
  2. gpt-4: as of a few weeks ago, all requests use gpt-4-0125-preview

Could it be that when using your own API key, your context length is 128k tokens, but it's only 8k when using Cursor?

No, we’re using 10k tokens regardless of model.
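To make that fixed token budget concrete, here is a rough sketch of how a client might trim conversation history to fit a 10k-token window, keeping the most recent messages first. The function names are hypothetical, and a naive whitespace split stands in for a real tokenizer, so the counts are only approximate:

```python
TOKEN_BUDGET = 10_000  # the stated Cursor context budget, regardless of model

def rough_token_count(text: str) -> int:
    # Naive stand-in for a real tokenizer: one "token" per whitespace word.
    # Real tokenizers (BPE) produce different, usually higher, counts.
    return len(text.split())

def trim_to_budget(messages: list[str], budget: int = TOKEN_BUDGET) -> list[str]:
    # Walk backwards from the newest message, keeping messages until the
    # budget would be exceeded; oldest context is dropped first.
    kept, used = [], 0
    for msg in reversed(messages):
        cost = rough_token_count(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["old message " * 6_000,  # ~12k "tokens", over budget on its own
           "recent question about a bug"]
print(trim_to_budget(history))  # only the recent message survives
```

Whatever the real truncation logic is, the practical point stands: with a 10k budget, a 128k-capable model and an 8k model can behave identically, because neither ever sees more than 10k tokens of context.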