Option to Toggle Between Fast and Slow GPT-4 Requests

Would be cool if we could get the option to choose to use slow requests. There are times when I only want to tweak a few lines, or where speed isn’t important. In those situations, using a fast request feels like overkill, and I often find myself hesitating, wondering if it’s worth consuming one of my limited fast requests. I understand I could use GPT-3, but I don’t really consider it a good alternative to GPT-4 in most situations.

Not sure how your backend works, but it might even help with performance on your end if more people choose to use the slower version?

Thank you for the consideration.

PS: Any news regarding GPT-4 Turbo? Would love a bigger context window.


It’s a good idea. So if it becomes reality, there would be three modes to choose from: GPT-3, GPT-4, and GPT-4 slow. Is that right?


Correct, or alternatively: GPT-3, GPT-4, GPT-4 fast.

That sounds better than my naming.


+1 for this idea, especially since something seems to have gone wrong with my “fast request” accounting over the last month. It would be great to be able to save them for when I need them.


Yes, I’ve been looking forward to this for quite some time. It would be absolutely awesome and significantly enhance my experience with Cursor! I’ve noticed the Cursor team discussing the possibility of such a feature, but unfortunately, there haven’t been any concrete developments yet.


++++1
I’ve had Cursor for a few days and have already eaten through a couple of hundred fast requests. I was under the impression that I’d have to opt in to the fast option, and that the default was the slow one.


Could a dev clarify what ‘cursor-fast’ is vs. gpt-4? Those are the only options I see in the Linux AppImage version. Why use terminology different from your documentation? I’m inclined to think cursor-fast is some internal model of yours rather than one from OpenAI.

I hope they’re working on this. I would really like to “save” my fast requests for some more complex features I’m working on, and continue using slow ones for assists.


It is an internal model.