Add function calling/tools to cmd-L

Ability to add the user’s own functions to GPT-3.5/4. This could be as simple as adding @tool:build, where “build” is a user-created Python function.

With this, developers can build custom, narrow code-generation agents on top of Cursor.
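To make the idea concrete, here is a minimal sketch of what a user-side tool registry could look like. The `@tool` decorator and `TOOL_REGISTRY` are purely hypothetical names, not an existing Cursor API:

```python
# Hypothetical sketch of registering a user-written function as a tool.
# The @tool decorator and TOOL_REGISTRY are illustrative, not a real Cursor API.

TOOL_REGISTRY = {}

def tool(name):
    """Register a function so it could be invoked as @tool:<name>."""
    def decorator(func):
        TOOL_REGISTRY[name] = func
        return func
    return decorator

@tool("build")
def build(target: str) -> str:
    """User-created build step; here it just reports what it would do."""
    return f"building {target}..."

# The editor would look up "build" when it sees @tool:build in a prompt:
print(TOOL_REGISTRY["build"]("my_app"))  # building my_app...
```

The point is only that the user supplies a named Python function and the editor resolves the `@tool:` mention to it.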


+1 for this ability to set custom tools, functions and APIs that Cursor would have access to.

for inspiration, i think that superagent has done a great job of this with their app, there are a number of API/tool types that the user can choose from (Superagent Cloud), and the user can configure each one. some just require a name, others API credentials to an external service, others a specific function or json data structure.

this would be particularly valuable for giving cursor access to “code that writes code”

  1. the new “@web” tool is highly understated and should be accompanied by huge fanfare. now cursor can replace Bing and Perplexity. way to go!

  2. i just want to bump up this idea of adding custom tools to Cursor. i think it’s fine to stick with the openai function-calling schema, which i think more people are getting used to with the release of the custom GPTs / GPT Store.


This is really interesting. I can’t promise we’ll get to this in the near future, but I’d love to hear more!

Could you give me an example of the types of tools you’d like to see implemented using function calling?

This would really help me understand the underlying workflow improvement you’re aiming for.


the “minimum viable feature” would be the ability to add a tool using the openai function-calling schema and @ it, just like i @ docs, web, etc.

i would be fine if the tool had to be python, had to be explicitly referenced, and only worked in interpreter mode.

(in other words, it’s ok if cursor isn’t smart enough to choose the right tool for the query, even though that should be possible with the openai models.)
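for reference, declaring a tool in the openai function-calling schema means giving it a name, a description, and JSON-Schema parameters. here’s what the @tool:build example from the top of the thread might look like in that format (the description and parameters are made up):

```python
# A tool declaration in the openai function-calling schema: name,
# description, and a JSON Schema for the parameters. The "build" tool
# and its parameters are illustrative.

tool_spec = {
    "type": "function",
    "function": {
        "name": "build",
        "description": "Run the user's build step and return its output.",
        "parameters": {
            "type": "object",
            "properties": {
                "target": {
                    "type": "string",
                    "description": "Name of the build target.",
                },
            },
            "required": ["target"],
        },
    },
}
```

cursor would only need to pass specs like this through to the model and dispatch the resulting function calls to the user’s python.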

some things that come to mind immediately:

  • search apis that are not the default “web”, such as tavily, perplexity, amazon product api, etc.
  • calculations where the llm just needs the answer from an input, which could be extended to things like table/sql lookups, hash tables, inventories, etc. even if the data or code for the calculation were in the codebase, you don’t want the llm to try to find this with the default top k retriever, or try to invent its own way to do the computation.
  • internal/custom APIs. I have a bunch of endpoints that i might want to ping, for example to return a stripe link to sell you X kg of CO2 offsets, or to return an AR-viewable object from an image. these are trivial to add as openai functions from the json structure.
  • code that writes code. this could be really powerful - you have some function that writes code from some given inputs. instead of needing to pray that cursor doesn’t make a mistake, then debug it iteratively, you just give it the function to use for that portion of the code.
  • call other llms. now this is very meta and starts to get cursor into more agentic territory. it would also be a lightweight way for the cursor team to fend off requests for supporting every new model that seems decent at coding. you can tell your users - “if you want to call another llm, write your own function for it.”
    • example prompt: “use @groq to write the initial code for feature foo, and review it before suggesting the code change to me.”
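to make the “calculations / table lookups” bullet concrete, here’s a rough sketch of how such a tool and its dispatch could look on the user side. the inventory data and function names are made up; the shape of the dispatch (tool name plus JSON-encoded arguments) follows the openai function-calling convention:

```python
import json

# Sketch of a lookup-style tool: the model calls the function with
# arguments and gets the exact answer back, instead of trying to retrieve
# or recompute the data itself. INVENTORY and lookup_stock are made up.

INVENTORY = {"widget": 12, "gadget": 3}

def lookup_stock(item: str) -> str:
    """Return exact stock for an item, JSON-encoded for the model."""
    return json.dumps({"item": item, "in_stock": INVENTORY.get(item, 0)})

def handle_tool_call(name: str, arguments: str) -> str:
    """Dispatch a function call the model requested (tool name + JSON args)."""
    args = json.loads(arguments)
    if name == "lookup_stock":
        return lookup_stock(**args)
    raise ValueError(f"unknown tool: {name}")

print(handle_tool_call("lookup_stock", '{"item": "widget"}'))
# {"item": "widget", "in_stock": 12}
```

the same dispatch pattern covers the other bullets too — a search api, an internal endpoint, or another llm would just be more branches (or registry entries).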

I was just yesterday tinkering with the shell_gpt (github) LLM utility tool for the CLI, and it has support for tool use. One concrete example of a tool I created was a JSON schema inferrer. I could see the utility of being able to create tools like that and inform Cursor about them, so it could make use of those user-written tools where applicable.
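For illustration, a minimal version of such a schema-inferring tool might look like the following. This is my own rough reimplementation of the idea, not the actual shell_gpt tool:

```python
# Rough sketch of a JSON-schema-inferring tool: given a parsed JSON value,
# produce a minimal JSON Schema fragment describing it. Not the real
# shell_gpt tool, just an illustration of the idea.

def infer_schema(value):
    """Infer a minimal JSON Schema fragment from a parsed JSON value."""
    if isinstance(value, bool):  # must come before int: bool is an int subclass
        return {"type": "boolean"}
    if isinstance(value, int):
        return {"type": "integer"}
    if isinstance(value, float):
        return {"type": "number"}
    if isinstance(value, str):
        return {"type": "string"}
    if value is None:
        return {"type": "null"}
    if isinstance(value, list):
        # Naively assume homogeneous arrays and describe the first element.
        items = infer_schema(value[0]) if value else {}
        return {"type": "array", "items": items}
    if isinstance(value, dict):
        return {
            "type": "object",
            "properties": {k: infer_schema(v) for k, v in value.items()},
            "required": list(value),
        }
    raise TypeError(f"unsupported type: {type(value)}")

sample = {"name": "ada", "age": 36, "tags": ["admin"]}
print(infer_schema(sample)["properties"]["age"])  # {'type': 'integer'}
```

A tool like this is a good fit for the proposal because it is deterministic: the model should call it rather than guess the schema.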

+1 for this because it might enable powerful innovative features coming from the users themselves.


lol, this made my day. thank you for sharing and “live forever!”
