- Text-to-3D Generative AI

I can firmly say that I would never have been able to build anything.xyz without Cursor:

First, a short unsolicited sales pitch for Cursor:

I am not a “software engineer” or “developer”. I absolutely hate web dev. I had never written any python code until after I found Cursor.

I do understand computer science and enjoy coding. Cursor gave me the ability to do something that I would have never dared to before: launch a software product (without hiring any developers). My current team is me + Cursor.

Ok - now, what is anything.xyz?

  1. It’s pronounced “any thing dot x,y,z”.

  2. It is a coding-based approach to text-to-3D generative AI, broadly based on CAD. Most of the other apps in this space use NeRFs, which is basically a combo of text-to-image and then 2D-to-3D. (Lumalabs “Genie” and Commonsense Machines “Cube” are my two favorites.) The insight that led to building anything.xyz is that AI is really good at writing code, and you can generate 3D models with code.

  3. You can prompt the AI to generate the code for the 3D model, which in turn generates the 3D model. You can preview the file in AR (on compatible devices), change the physical appearance, download the corresponding .STL or .GLB files for 3D printing or further 3D modeling, or have my team additively manufacture it for you with the “Make it real” button.
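To make “generate 3D models with code” concrete, here’s a toy sketch of the kind of artifact such generated code can produce. This is purely illustrative (it is not anything.xyz’s actual stack): a few lines of plain Python that emit an ASCII .STL for a cube, the same file format the app lets you download for 3D printing.

```python
def cube_stl(size: float) -> str:
    """Return an ASCII STL string for an axis-aligned cube of the given size.

    Normals are left as 0 0 0; most slicers recompute them from the
    triangle vertices. Winding order is not guaranteed here -- this is a
    minimal illustration of the file format, not a production exporter.
    """
    s = size
    # The 8 cube corners: index = 4*xi + 2*yi + zi
    v = [(x, y, z) for x in (0, s) for y in (0, s) for z in (0, s)]
    # 12 triangles (two per face), as indices into v
    faces = [
        (0, 1, 3), (0, 3, 2),  # x = 0 face
        (4, 6, 7), (4, 7, 5),  # x = s face
        (0, 4, 5), (0, 5, 1),  # y = 0 face
        (2, 3, 7), (2, 7, 6),  # y = s face
        (0, 2, 6), (0, 6, 4),  # z = 0 face
        (1, 5, 7), (1, 7, 3),  # z = s face
    ]
    lines = ["solid cube"]
    for a, b, c in faces:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for i in (a, b, c):
            lines.append("      vertex %g %g %g" % v[i])
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid cube")
    return "\n".join(lines)
```

The point is that a language model never has to “see” geometry: it only has to write short, deterministic code like this, and the code does the geometry.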

My hope is that this coding-based approach to text-to-3D generative AI will be more useful to engineers. (and sorry, I don’t mean software engineers :wink: ). I personally haven’t been able to make anything worth manufacturing with the NeRF-based tools.

Full disclosure: the AI is not very “smart” yet, in part because I need to work on the “training” and in part because I have no budget to pay for inference for free users (so it’s currently running GPT-3.5).

I would greatly appreciate any feedback you might have!

P.S. - Here are a few examples that are part of anything.xyz’s training set, to give you a sense of what could be possible:


This is really neat! @truell20 and I actually worked on CAD part generation for a bit.

cool! in what context?

We spent some time working on a 3D autocomplete experience for part-modeling. Trained some custom models (largely ones that autocompleted on a text-based representation of actions it took to build up the part), but didn’t get very far.


Raymond this is really impressive. Fantastic work. Thanks for sharing the details to date.

Was also impressed to see what polySpectra is up to. Excited to find a solid use-case to kick the tires on this.

To get your current coding quality are you just leveraging GPT built-in training and examples in context window, or did you need to do any training / fine-tuning on examples?

One thing that may help the app’s UX a bit is to show some examples of the kinds of prompts the AI can understand and work with. This could be done by including a few sample prompts on a new model, by showing the prompt history on a model (helpful during creation and when shared later), and at some point by adding a gallery (maybe hand-curated at first) of interesting models with the prompts that generated them (if that’s not yet shown on the model itself).

Thank you so much!

Right now it’s just a lot of examples in the prompt, as well as a mechanism for making sure that the code is going to run in the sandbox environment. I do plan to fine-tune a model at some point.
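For anyone curious, the “will it run” idea can be sketched very simply: execute the generated code in a separate process and check the exit status. This is a toy illustration, not the actual mechanism (a real sandbox also restricts what the code can touch, not just whether it crashes):

```python
import os
import subprocess
import sys
import tempfile


def code_runs(source: str, timeout_s: float = 5.0) -> bool:
    """Run a string of generated Python in a separate process and report
    whether it exits cleanly within the time limit.

    Illustrative only: a real sandbox would also limit filesystem,
    network, and resource access.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            timeout=timeout_s,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)
```

Anything that errors out or hangs gets rejected before the user ever sees a broken model.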

Regarding your UI suggestion, it’s a really good one - thank you! I was planning on doing something similar, where new users see a gallery of 3 to 4 examples first.

I am writing here to re-iterate my gratitude for Cursor. anything.xyz is getting hundreds of new users a day and it is simply blowing my mind that Cursor wrote most of the code!

Keep it up, you guys are changing the world!