
From Linear Video to Interactive Code – An Updated Workflow for Technical Media

By: Stephen Toback

In academic video production, my primary tool has always been video—2D and linear. However, my ongoing testing with organic chemistry visualizations pushed me to try a different approach. After receiving some excellent feedback, I tried SVG (and ultimately WebGL) and found an unexpected benefit of working in code: it seemed much easier for the AI to change and update the simulation by editing code than by redrawing raster images frame by frame.

Here is a breakdown of the key improvements I found in this process.

1. Dr. Nash’s Feedback

The first major win was receiving specific feedback from Dr. Jessica Nash from Duke’s Co-Lab. She’s an AI expert who is also an expert in Organic Chemistry. She provided a technical checklist that challenged both me and the AI to move beyond a simple visual:

  • 3D models: Three-dimensional molecules are significantly better for learning than 2D diagrams.

  • Electron pairs: Visualizing electron pairs is essential to show the “why” behind the chemistry.

  • Interactive rotation: You should be able to rotate the molecules to see the reaction from any angle.

  • Simultaneous bond changes: The OH must connect while the I is released.
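That last requirement—both bonds changing at once—is really a statement about animation logic: one clock has to drive both bonds. A minimal sketch of the idea in plain JavaScript (the function and property names here are illustrative, not taken from the actual simulation code):

```javascript
// One reaction clock drives both bond changes at once: as the
// incoming OH bond forms, the C–I bond breaks in lockstep.
function bondState(t) {
  // t runs from 0 (reactants) to 1 (products); clamp out-of-range input.
  const u = Math.min(1, Math.max(0, t));
  return {
    formingOH: u,       // bond to the incoming OH grows 0 → 1
    breakingCI: 1 - u,  // bond to the leaving I shrinks 1 → 0
  };
}

// Midway through, both bonds are partial — the transition state.
console.log(bondState(0.5)); // { formingOH: 0.5, breakingCI: 0.5 }
```

Because both values come from the same `t`, the OH can never "connect" before the I is "released"—the simultaneity is enforced by construction.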

2. 2D Video to SVG to WebGL

The process started as a standard video, but I quickly pivoted to SVG (Scalable Vector Graphics). Because SVG describes graphics as code (XML markup), the AI seemed to find it easier to manipulate the “math” of the animation rather than trying to redraw pixels.

The SVG model looked great, but the requirement for interactive rotation quickly exposed the limitations of 2D SVG.
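To make the "math, not pixels" point concrete, here is a tiny hedged sketch of what editing an SVG scene in code looks like. None of these names come from my actual files; the point is only that an atom's position is a number you can change, not a region of pixels you have to repaint:

```javascript
// Build a 2D molecule diagram as SVG text. Moving an atom means
// editing one coordinate, not redrawing raster frames.
function atomCircle(symbol, x, y, r = 12) {
  return `<circle cx="${x}" cy="${y}" r="${r}" fill="none" stroke="black"/>` +
         `<text x="${x}" y="${y}" text-anchor="middle">${symbol}</text>`;
}

function svgScene(atoms) {
  const body = atoms.map(a => atomCircle(a.symbol, a.x, a.y)).join("");
  return `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">${body}</svg>`;
}

const frame = svgScene([
  { symbol: "C", x: 100, y: 50 },
  { symbol: "I", x: 160, y: 50 }, // change x over time and the I "leaves"
]);
console.log(frame);
```

An AI asked to "move the iodine farther away" only has to rewrite `x: 160`; with rendered video it would have to regenerate every affected frame.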

When it became clear we needed a true 3D environment, I suggested to the AI that we use VRML. This makes me as old as Silicon Graphics workstations! The AI politely informed me that VRML is “heavily outdated” and steered me toward WebGL (via Three.js). The first transition was shockingly rough, since I had expected it to be just a “code change.”
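Under the hood, "interactive rotation" is just matrix math applied to every atom's 3D coordinates each frame—Three.js wraps this up in conveniences like `Object3D.rotation`, but the core operation is a sketch like this (plain JavaScript, no library):

```javascript
// Rotate a 3D point about the y-axis by `angle` radians — the kind of
// per-frame transform WebGL applies when you drag a molecule around.
function rotateY([x, y, z], angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return [c * x + s * z, y, -s * x + c * z];
}

// A quarter turn carries an atom from the +x axis to the -z axis.
console.log(rotateY([1, 0, 0], Math.PI / 2)); // ≈ [0, 0, -1]
```

This is also why 2D SVG hit a wall: once the third coordinate exists, every viewing angle is just another rotation, rather than a separately drawn diagram.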

It took 12 versions, but we got there!

3. Google AI Studio and Visual Studio Code

This was my first look at Google AI Studio, and it felt much better suited for this task than the standard Gemini web interface. It handled the creation and iteration of the code files more cleanly. To manage the process, I used Visual Studio Code to track every version of the code the AI generated. This allowed me to compare versions side by side and iterate on them, which was essential for a project with so many revisions.

4. Gemini 3.1 and the “Loop” Breakthrough

It’s hard to say if it was the interface or the move to Gemini 3.1, but the responsiveness was on a different level. In the past, if a model got stuck in a “loop” (repeating the same error), I usually had to start a new context.

At Version 11, I hit one of those loops: the model kept spitting out the same version while ignoring my prompts. Instead of starting over, I asked the model to stop and think about the logic. It actually self-corrected and broke the cycle. By Version 12, the simulation worked exactly as requested. Whether it’s the improved reasoning in 3.1 or the environment of AI Studio, the ability to “talk” through a complex code fix is a massive win.

This adds a new set of tools to academic media production, and I’m super excited about the possibilities beyond organic chemistry!


Sidebar: A Moment of Silence for VRML

For those of us who spent the 90s on Silicon Graphics (SGI) workstations, VRML (Virtual Reality Modeling Language) was the promised land. It was supposed to be the “HTML of 3D,” allowing us to build navigable worlds in a browser. While it pioneered the idea of a 3D web, it eventually gave way to WebGL. Unlike VRML, which required clunky browser plugins, WebGL talks directly to your computer’s GPU, making it the modern engine for the interactive molecules we’re building today.

