Cache training on docs

Some libraries/frameworks aren't known to the LLM in Cursor (examples would be Shadcn, Supabase, NextAuth…). I've seen some users on Twitter training the model on these docs, yet those docs aren't available to me. Why does every user have to train the model on the same library/framework again? Why not cache everything when it's the same link to the docs?

And do you charge for training? I can't seem to find where to see my monthly usage.