Support local LLMs

Hi there,

I’m wondering if there are plans to support local LLMs within Cursor in the future. While today you support GPT-3.5 and GPT-4, it would be great if we could point Cursor to a local LLM on the machine that has been specifically tuned on a particular codebase (or codebases).
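To make the request concrete: several local runtimes (Ollama, llama.cpp’s server, LM Studio) can expose an OpenAI-compatible HTTP API, so in principle an editor would only need a configurable base URL. Here’s a minimal sketch of what that would look like from the client side; the port, endpoint, and model name are assumptions about a local setup, not anything Cursor supports today:

```python
# Sketch: pointing an OpenAI-style client at a local server instead of api.openai.com.
# Assumes a local runtime (e.g. Ollama) serving an OpenAI-compatible route on
# localhost:11434 with a `codellama` model already pulled; adjust to your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local endpoint instead of OpenAI's
    api_key="not-needed-locally",          # most local servers ignore the key
)

response = client.chat.completions.create(
    model="codellama",  # a model fine-tuned on your codebase could go here
    messages=[{"role": "user", "content": "Explain this stack trace: ..."}],
)
print(response.choices[0].message.content)
```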

Agree this would be great, and for flying too. For the time being I use Continue with codellama, which is pretty impressive for offline/local use.

Does it yield better results than GPT-4? By the way, OpenAI is an investor in Cursor; I kind of hope they don’t get vendor-locked because of that.

No, it’s noticeably worse, but good enough for syntax questions, figuring out what an error message means, how the pieces of a web app stack fit together, etc. Definitely worth playing around with via Ollama if you have a Mac.

Can you share some URLs for the tools you’re mentioning? I’d like to check them out.

Ollama: via the CLI tool I’ve found Mistral and codellama most useful, but they have others. I have a 16 GB M2 and they run pretty well (a quick sketch of scripting its local API is below).

Continue: a VS Code extension that works in Cursor as well and lets you use Ollama + codellama in a similar way to Cursor. I think they are just going to get eaten by Copilot X / Cursor, since it’s just an extension.
I had a period of a few weeks where I was frequently without internet and found these very useful
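In case it helps anyone trying this: Ollama also runs a local HTTP server, so you can script against it once a model is pulled (`ollama pull codellama`). A minimal sketch against its /api/generate endpoint; the prompt is just an example:

```python
# Sketch: querying a local Ollama server's native API.
# Assumes `ollama serve` is running on the default port 11434 and
# `ollama pull codellama` has already been done.
import json
import urllib.request

payload = {
    "model": "codellama",
    "prompt": "What does 'TypeError: x is not a function' usually mean in JavaScript?",
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # the model's completion text
```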

This was discussed in a thread on our Discord server.

Regarding using other AI models:

Regarding Localhost:


Fair enough! I assumed it would be difficult because the features are tuned around the capabilities of GPT-3.5/4, so a drop-in replacement with a lesser model would be a poor experience.