Printing a 3D bun from a photo
By Anatoly Mironov
Can AI really help you 3D print a bun from a photo? I decided to find out.
Experimenting with AI-Generated Models
I’ve been tinkering with 3D printing and recently discovered the “Image to 3D” feature in Makers World. I decided to test it using a photo of a bun I ate last summer. Beyond that, as a fan of code-based 3D design, I wondered whether code assistants can create models in OpenSCAD.
I had a crisp but flat image of the bun.

Buns in general aren’t rare, but this particular one was unique. Still, “Image to 3D” had no problem generating the model. It took just a couple of minutes and cost me 2 AI credits in Makers World.


Unexpected AI Insights
I printed it on my Ender 3 V2 with 2% infill, at roughly the real bun’s size. Surprisingly, the model even included pieces of baking paper—something not visible in my photo. It must have inferred this from other bun images or general baking knowledge.

OpenSCAD and Code Assistants
I tried using Claude to generate OpenSCAD code. It worked reasonably well—fast and syntactically correct, though not perfect. One issue: the Yoda model “hangs” above the plate. While it works as a 3D model, it’s impractical to print as a single piece—even with supports.


OpenSCAD isn’t as common as JavaScript or Python, so it’s not surprising that the generated code isn’t perfect. Still, I can see use cases where a code assistant helps me design parts of a model faster.
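To give a feel for the kind of code this produces, here is a minimal sketch of a bun-like model in OpenSCAD—my own illustration, not Claude’s actual output. The dimensions are guesses, and the key point is the flat base: intersecting the shape with a box keeps the bottom on the plate, avoiding the “hanging” problem the Yoda model had.

```openscad
// A rough bun-like shape: a squashed sphere intersected with a
// box so the bottom is flat and prints without supports.
// All dimensions are illustrative guesses, not from the photo.
bun_d = 80;   // bun diameter in mm
bun_h = 30;   // bun height in mm

module bun() {
    intersection() {
        // squash a sphere into a bun profile
        scale([1, 1, bun_h * 2 / bun_d])
            sphere(d = bun_d, $fn = 96);
        // keep only the part above z = 0 -> flat base on the plate
        translate([-bun_d / 2, -bun_d / 2, 0])
            cube([bun_d, bun_d, bun_h]);
    }
}

bun();
```

Parametric designs like this are where code assistants shine: once the variables are in place, asking for “20% wider” or “a flatter top” is a one-line change rather than a remodel.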
What’s Next for AI in 3D Printing?
As AI tools evolve, they’re not just speeding up design—they’re starting to imagine with us. What will your next print be?