Hey, this is an early release of my tutorial and project file on how to use Stable Diffusion in TouchDesigner to turn AI-generated images into a video and add audio-reactive particles for a blending effect.  

Make sure to watch part 1 of the tutorial: https://www.youtube.com/watch?v=mRXTR9vcHAs


Project file included below 👇

Hey! In this tutorial, we'll go over how to use Stable Diffusion in TouchDesigner to turn AI-generated images into a video and add audio-reactive particles for a blending effect.

The project file is available on my Patreon: https://patreon.com/tblankensmith
Part 1 of this tutorial is available here: https://www.youtube.com/watch?v=mRXTR9vcHAs

0:00 Overview and Examples
1:23 Overview of Recording Component
2:38 Recording an Animation
4:20 Setup Particle System
5:57 Audio Analysis
7:31 Particle System Settings
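
To get a feel for the Audio Analysis step (5:57) before opening the project file: the usual TouchDesigner pattern is an Audio Device In CHOP feeding an analysis CHOP (for example the palette's audioAnalysis component), with a CHOP Execute DAT copying the band levels into a Constant CHOP whose channels drive the particle parameters. Below is a minimal sketch of that pattern, not necessarily the exact setup in the project file; the operator and channel names ('constant1', 'low', 'high') and the scaling numbers are placeholders for whatever is in your network.

```python
# CHOP Execute DAT attached to the audio-analysis output.
# 'constant1' is an assumed Constant CHOP whose channels are exported
# to the particle system's parameters (birth rate, turbulence, etc.).

def onValueChange(channel, sampleIndex, val, prev):
    target = op('constant1')
    if channel.name == 'low':
        # bass energy -> first channel, e.g. exported to birth rate
        target.par.value0 = 100 + val * 900
    elif channel.name == 'high':
        # treble energy -> second channel, e.g. exported to turbulence
        target.par.value1 = val * 2.0
    return
```

Writing into a Constant CHOP rather than setting particle parameters directly keeps the mapping in one place, so you can smooth or rescale the bands (e.g. with a Lag CHOP) without touching the particle network.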

Comments

Deakin Taggart

Hi Torin! Great work - I love the potential here. I'm trying to find a way to make the full project live. Along with live audio in, any thoughts on how to generate the images live, so that prompts can be added during the performance, or to extend the runtime beyond the length of the recording? I understand there's a delay while each new image is created, but perhaps there's a method for rolling/queued generation? Would love your thoughts. Thanks!
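
One possible take on the rolling/queueing idea, sketched under the assumption of a local AUTOMATIC1111 web UI running with its API enabled (the /sdapi/v1/txt2img endpoint): a worker thread keeps requesting new frames while the show runs, and the display side always grabs the newest finished frame, so generation latency never blocks playback. The prompt text, step count, and helper names here are illustrative, not part of Torin's project.

```python
# Rolling generation sketch: a worker thread continually requests frames
# from a local Stable Diffusion server; latest_frame() returns the newest
# finished frame. Assumes AUTOMATIC1111 started with --api on port 7860.
import base64
import queue
import threading

import requests

SD_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"  # assumed local endpoint
frames = queue.Queue()
prompt = {"text": "particles dissolving into nebulae"}  # mutate this live

def worker():
    while True:
        payload = {"prompt": prompt["text"], "steps": 12,
                   "width": 512, "height": 512}
        r = requests.post(SD_URL, json=payload, timeout=120)
        # A1111 returns base64-encoded PNGs in the "images" list
        frames.put(base64.b64decode(r.json()["images"][0]))

threading.Thread(target=worker, daemon=True).start()

def latest_frame():
    """Drain the queue and return the newest frame's bytes, or None."""
    newest = None
    while not frames.empty():
        newest = frames.get()
    return newest
```

From TouchDesigner you could poll latest_frame() in a frame-based Execute DAT, write the bytes to a temp .png, and pulse Reload on a Movie File In TOP, crossfading between consecutive frames to hide the seam. Fewer sampling steps (or a turbo-class model) shortens the gap between images, but some lag is unavoidable, so generating a few frames ahead of playback keeps the visuals continuous.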

JJ Wiesler

I have the same question! Getting the audio in, as described above, works well, but how can the image generation and particle system be evolved to work in real time?

JJ Wiesler

Thanks so much!