AI-Empowered
Background and Challenges
For me, the best part of design is dreaming up something totally new. I have always loved the magic of 3D interactive web experiences. But before AI, building these ideas meant either spending weeks learning heavy software like Blender or deep-diving into the complicated world of WebGL.
Today, those old walls between a designer's idea and the final code are coming down. I realized I didn't have to settle for the limits of no-code website builders or wait on an engineering team to build my concepts. AI is more than just a coding assistant. It opens up a whole new space for visual design.
To test this, I built a series of complex, hands-on experiments. Along the way, I learned that the real skill is figuring out how to navigate the edge cases of these tools. When the technical roadblocks are gone, what is left is pure architecture, aesthetics, and the fun of building things that used to feel impossible.
Here is a look at how I built these interactive pieces and the technical problems I had to solve.
Experience 01: Interactive 3D Characters
Bringing a cheerful frog and an octopus to life with cursor tracking and dynamic web interactions.
Bridging Tool Ecosystems
The goal here was to create cute, character-driven interactions on the web. I wanted to design 3D models of an octopus and a frog that didn't just sit statically. They needed to feel alive, reacting to user input and showing subtle procedural animations on screen.
Creating interactive characters takes a mix of different software. Right now, there is no single tool that handles everything end to end, and no single prompt can just build it for you. I had to break the problem into smaller pieces so each tool could do its specific job:
Blender & Claude Code
Blender modeling usually takes a long time, so my first thought was to have AI drive it through MCP. The results showed that AI still struggles here: pure Python commands give the model no visual feedback on what it is building, and good modeling requires tight coordination between how a mesh looks and how it is constructed.
Nano Banana & Meshy AI
I ended up using Nano Banana to generate the character ideas visually, and then I used Meshy AI to create the actual 3D meshes.
Three.js Shaders & Meshy AI
I tried writing custom GLSL shaders and Three.js materials to texture the models, but it just did not look realistic. It felt very flat and digital. Instead of forcing code to do art, I went back to Meshy AI's native texturing engine to generate and bake the final, organic materials directly onto the models.
Three.js & Custom Logic
On the front end, I added event listeners and procedural animations. Whether it was calculating a new jump direction on tap or making the characters look at the mouse, I used custom JavaScript to give the stationary models personality.
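The "look at the mouse" behavior boils down to a little coordinate math. This is a minimal sketch of that kind of cursor-tracking logic, not the actual code from the piece; the function names and the clamp/easing values are illustrative, and in a real Three.js scene the resulting angles would drive an object's rotation each frame:

```javascript
// Convert a pointer position in pixels to normalized device coordinates,
// where (-1, -1) is the bottom-left of the viewport and (1, 1) the top-right.
function toNdc(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -((clientY / height) * 2 - 1), // screen y grows downward, so flip it
  };
}

// Map the normalized cursor to target head angles, clamped by maxYaw and
// maxPitch (radians) so the character never over-rotates its neck.
function cursorToHeadAngles(ndc, maxYaw = 0.6, maxPitch = 0.4) {
  return { yaw: ndc.x * maxYaw, pitch: ndc.y * maxPitch };
}

// Exponential smoothing: nudge the current angle a fraction of the way
// toward the target every frame, so the gaze eases instead of snapping.
function easeToward(current, target, factor = 0.1) {
  return current + (target - current) * factor;
}
```

Calling `easeToward` once per animation frame is what makes the motion feel alive: the character always lags slightly behind the cursor instead of tracking it rigidly.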
The Insight: No single AI tool can handle projects like this end to end yet. Every tool has its strengths and blind spots. The real skill is knowing exactly what to ask from each tool, choosing the fastest tool for the job, and knowing when to scale back your ambition to fit what the tech can actually deliver today.
Experience 02: The Blooming Flower Garden
A hyper-realistic garden where flowers bloom and retract as your cursor explores.
Designing Around Boundaries
This piece took a lot of creative problem solving. I wanted a highly realistic, lush garden scene that also felt instant and interactive on the web. But putting photorealistic 3D interaction in a browser hits a hard industry ceiling right now. Beautiful rendering and smooth real-time web performance are basically fighting each other. Without knowing this limit, I would have made the wrong architectural choice.
Instead of fighting the technology, I built a hybrid illusion. Initially, I thought about using Blender's Geometry Nodes to make the blooming animations, rendering the heavy graphics offline, and exporting a video for the web effects.
As I planned it out, I realized something important. If all I need from Blender is a video, why not just use AI video tools to make the footage directly? Rather than spending weeks setting up procedural geometry networks, I skipped the 3D software entirely.
I split the project into two separate parts:
Nano Banana & VEO
I used Nano Banana to generate two key concept images: a static garden with closed buds, and the same garden in full bloom. Then I used VEO (an AI video model) to interpolate between the two images, producing a smooth clip of the blooming phases. Finally, I ran it through FFmpeg to build seamless forward-and-reverse loops.
HTML Canvas API
On the front end, I skipped WebGL altogether and built an interactive HTML Canvas setup instead. When a user hovers over a zone, JavaScript simply scrubs forward through the pre-rendered video frames; when they move away, it scrubs backward.
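The core of that scrubbing is one small piece of state logic. Here is a sketch of the idea, with illustrative names and rates rather than the production code; in the browser, each animation frame would set `video.currentTime = playhead` and draw the current frame to the canvas with `ctx.drawImage(video, 0, 0)`:

```javascript
// Move a playhead forward while the cursor is inside a flower's hover zone
// and backward once it leaves, clamped to the clip's bounds so the bloom
// can never run past fully open or fully closed.
function scrubPlayhead(playhead, duration, hovering, dt, speed = 1.5) {
  const next = playhead + (hovering ? 1 : -1) * speed * dt;
  return Math.min(duration, Math.max(0, next)); // stay inside the clip
}
```

Because the direction flips the instant `hovering` changes, a half-bloomed flower retracts from exactly where it was, which is what sells the illusion of a live 3D scene.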
The Insight: The best answer was not about writing more complicated code. It was a lateral shift in design thinking. Recognizing the tech ceiling helped me skip traditional 3D modeling and use AI video generation mixed with native browser timing to get the visual result I wanted without performance drops.
Experience 03: The Interactive Water Blobs
Metaballs with ultra-realistic glass refraction, dynamically controlled by the cursor.
The Problem-Solving Pivot
I wanted a scene full of liquid, organic water blobs that gracefully merge, pull apart, and bend the background like real glass. I had built a simpler version of this in Spline before, but it felt a bit stiff. I needed a better visual result and completely custom fluid physics.
I sketched out a few technical paths with AI, initially going for the absolute best visual quality possible, which was GLSL raymarching with Signed Distance Functions (SDFs). My AI tool wrote an amazing fragment shader that had two-surface light refraction, Fresnel reflections, and chromatic aberration.
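At the heart of that shader is a distance field. Below is a sketch of the two core building blocks, written in plain JavaScript for readability (the real version lives in a GLSL fragment shader); the smooth-minimum form is the standard polynomial variant, and the constants are illustrative:

```javascript
// Signed distance from point p ([x, y, z]) to a sphere: negative inside,
// zero exactly on the surface, positive outside.
function sphereSdf(p, center, radius) {
  const dx = p[0] - center[0], dy = p[1] - center[1], dz = p[2] - center[2];
  return Math.sqrt(dx * dx + dy * dy + dz * dz) - radius;
}

// Polynomial smooth minimum: wherever two blobs' distance fields come
// within k of each other, they blend into one soft surface instead of
// intersecting with a hard crease. This is what makes metaballs merge.
function smoothMin(a, b, k) {
  const h = Math.max(k - Math.abs(a - b), 0) / k;
  return Math.min(a, b) - (h * h * h * k) / 6;
}
```

A raymarcher steps a ray through the scene, evaluating this combined field at every step for every pixel, which is exactly why the approach is so expensive in a browser.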
The raymarching looked beautiful, but it was just too slow. Every single pixel was running hundreds of math equations per frame. The math was perfect, but the performance was terrible. This pushed the physical limits of real-time browser rendering. It was a constraint I only truly understood once I hit it.
Instead of settling for a less impressive look, I pivoted the technical foundation: I dropped raymarching and moved to mesh-based Marching Cubes, leveraging the GPU's standard rendering pipeline and Three.js physical materials. The AI provided the heavy math to make it work, but it lacked a human designer's eye. I could not just ask it to tune the glass refraction index until the light caught perfectly, or to find the exact hex colors that avoided looking muddy where the blobs touched. I had to locate the parameters in the code that controlled those visual effects, then test number combinations by hand until the result felt right.
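The mesh-based approach replaces per-pixel raymarching with a scalar field that Marching Cubes samples on a coarse grid. A minimal sketch of that field, with assumed names and an illustrative iso-level (the shape of the falloff is the classic metaball formulation, not my exact tuning):

```javascript
// Metaball scalar field: each blob contributes strength radius^2 / d^2,
// so influence falls off with squared distance. Marching Cubes samples
// this field on a 3D grid and extracts the isosurface where it crosses
// ISO_LEVEL, producing a regular mesh the GPU can render cheaply.
const ISO_LEVEL = 1.0;

function fieldAt(p, blobs) {
  let sum = 0;
  for (const b of blobs) {
    const dx = p[0] - b.center[0];
    const dy = p[1] - b.center[1];
    const dz = p[2] - b.center[2];
    const d2 = dx * dx + dy * dy + dz * dz;
    sum += (b.radius * b.radius) / Math.max(d2, 1e-6); // guard divide-by-zero
  }
  return sum;
}
```

Because the expensive work happens once per grid cell rather than hundreds of times per pixel, this is what made the blobs run in real time; the glass look then comes from the material on the extracted mesh, which is exactly the part I ended up tuning by eye.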
The Insight: Bridging the gap between technically perfect code and a satisfying user experience still requires human taste. AI can write the physics engine, but the designer sets the feeling.
Final Reflection: From Designer to Design Engineer
Working on these experiments changed my relationship with technology. AI did not just give me the ability to write code; it allowed me to transition into a Design Engineer.
The hardest part of building digital products is no longer memorizing programming syntax or spending hours clicking buttons in software. The craft itself has moved up a level. To build great things today, a Design Engineer needs to:
- Map the bounds of AI: Know exactly where generative models are brilliant and where human taste still has to take over.
- Map the bounds of tech: Push the performance limits of the web browser so you actually understand what is possible to run in real time.
- Map the bounds of tools: Learn the quirks of your specific software stack so you can pick the right tool for each micro-task.
- Deconstruct and simplify: Break down giant, impossible ideas into small, boring steps that existing tools can easily execute.
- Combine tools creatively: Be willing to throw out standard methods, like swapping a 3D engine for an AI generated video clip, to let each tool do only what it does best.
Leaning into these tools has not watered down my design voice; it has actually made it clearer. I no longer have to compromise my aesthetic ideas because of technical roadblocks. I can dream up an experience, guide the AI through the heavy lifting, and use my own intuition to design things that used to feel out of reach.
