I’m just curious if anyone uses it for work. Do you use Business or just pay for it yourself?
To the Cursor team:
How much confidence can you give if Privacy Mode is enabled? I saw that Samsung employees sent proprietary code to OpenAI by mistake. It was a while ago, so I’m wondering how far privacy in AI has improved.
I use it for less sensitive codebases at work. However, my confidence in its privacy is not too high, unfortunately.
As far as I understand, with Privacy Mode enabled Cursor itself does not store your prompts, but OpenAI retains them for 30 days. That is of course not the entire codebase, only the snippets that were included in prompts.
I would assume that if you are not sending very sensitive data, it is highly unlikely that someone could reconstruct a larger codebase from those snippets.
Then there is the indexing. There, embeddings of your entire codebase are stored in the cloud (though you can exclude parts manually via .cursorignore).
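For anyone who hasn't used it: as far as I know, .cursorignore uses .gitignore-style patterns, so excluding sensitive paths looks something like this (the paths here are just made-up examples):

```
# Exclude secrets and environment files from indexing
.env
*.pem
secrets/

# Exclude a hypothetical directory with proprietary configs
internal/network-configs/
```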
Embedding inversion research has made considerable progress, so the embedded content can probably be reconstructed with fairly high accuracy, and reconstruction from those same embeddings will likely only get better in the future.
I'm not sure whether you can delete the index from Cursor's servers. To me this is the most concerning part: you have very little control over the embeddings stored there. A local mode for the embeddings is unfortunately missing.
Thank you for the detailed response. My job is a bit peculiar in that I'm the only person on my team who writes code, and I use publicly available Python libraries. There is really no proprietary code, but I do interact with proprietary files containing IP addresses etc.
For now I clean up my prompts, replacing IPs etc. with dummy values. My worry is: if I accidentally paste a real IP address instead of a placeholder into a prompt (with Privacy Mode enabled), would Cursor (and threat actors out there) be able to extract it?
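In case it helps anyone doing the same cleanup, here is a minimal sketch of automating it with the Python stdlib. The helper name and placeholder are my own invention, and the regex only covers IPv4 (no IPv6, hostnames, or other identifiers), so treat it as a starting point, not a complete scrubber:

```python
import re

# Rough IPv4 matcher; will also match invalid octets like 999.1.1.1,
# which is fine for redaction purposes.
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def scrub_ips(text: str, placeholder: str = "<IP_REDACTED>") -> str:
    """Replace anything that looks like an IPv4 address with a placeholder."""
    return IPV4_RE.sub(placeholder, text)

print(scrub_ips("ssh into 192.168.0.12, fallback is 10.0.0.5"))
```

Running the scrubber over a prompt before pasting it removes the obvious leaks, though it obviously can't catch an IP that's spelled out in words or embedded in a config format it doesn't recognize.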