
Here are links to all the tools I used to make this video, along with timestamps for the sections of the video I'm referring to.

Want my settings and prompts? All of them are available in this post.

BLENDER

[0:00 - 0:20], [8:07 - 8:38]

I built this section manually in Blender using some assets I found online. No AI necessary. 

Abandoned Warehouse Model by Aurélien Martel

Apple 2 Computer On Desk by Hank-Ball

DALL-E 2

[0:21 - 0:35], [1:07 - 1:09]

For these sections I used the outpainting feature in DALL-E 2 to do an infinite zoom, in combination with this brilliant zoom animation tool. After the DALL-E 2 generation is complete, simply download it and drag it into the animation tool, which creates the next scaled frame for you to feed back into DALL-E 2. Make sure you don't crop the image when you import the new scaled image into DALL-E 2.
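The scaled-frame step can be sketched in numpy: shrink the last generation and center it on a blank canvas of the original size, leaving a border for the outpainter to fill. This is only my reading of what the animation tool does; the 0.75 scale factor is an assumption, not the tool's actual value.

```python
import numpy as np

def next_zoom_frame(image: np.ndarray, scale: float = 0.75) -> np.ndarray:
    """Shrink the image and center it on a blank canvas the original size.

    The blank border is what you would ask DALL-E 2 to outpaint for the
    next step of the infinite zoom. `scale` is an assumed value.
    """
    h, w, c = image.shape
    new_h, new_w = int(h * scale), int(w * scale)
    # Nearest-neighbour downscale (a stand-in for a proper resampler).
    rows = (np.arange(new_h) / scale).astype(int)
    cols = (np.arange(new_w) / scale).astype(int)
    small = image[rows][:, cols]
    canvas = np.zeros_like(image)  # blank area to be outpainted
    top, left = (h - new_h) // 2, (w - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = small
    return canvas

frame = np.full((512, 512, 3), 200, dtype=np.uint8)
nxt = next_zoom_frame(frame)
```

Repeating this (outpaint, download, scale, re-import) is what builds the zoom chain.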

[6:09 - 6:23]

This section is similar, but rather than scaling down the image and zooming out, I used outpainting to build out the image into a megacanvas. After I was happy with the massive image, I panned the camera around in my video editing software. 
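Panning over a megacanvas amounts to cropping a sliding window out of the big image, one crop per frame. A minimal sketch (the window size, step count, and left-to-right motion are arbitrary choices for illustration; a real edit would also ease the motion and pan in 2D):

```python
import numpy as np

def pan_frames(canvas: np.ndarray, window=(512, 512), steps=5):
    """Crop a sliding window left-to-right across a large stitched canvas.

    Each crop becomes one frame of the pan; a video editor does the same
    thing with a virtual camera, just with sub-pixel easing.
    """
    h, w = window
    ch, cw = canvas.shape[:2]
    xs = np.linspace(0, cw - w, steps).astype(int)
    return [canvas[0:h, x:x + w] for x in xs]

canvas = np.zeros((512, 2048, 3), dtype=np.uint8)  # stand-in megacanvas
frames = pan_frames(canvas)
```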

Stable WarpFusion

[0:35 - 0:38] 3D Mode, [0:38 - 0:40] Video Input, [0:41 - 1:07] Video Input, [2:49 - 4:33] Video Input

These sections use Stable WarpFusion by Sxela, a creator I found on Patreon. Stable WarpFusion has a cool feature called flow mapping, where the optical flow of the moving objects in an input video is recorded and the morphed image is moved accordingly. There is also a consistency check for maintaining a cohesive image. One important note: the faster objects are moving in the video, the less effective your results will look. Their Discord is also a useful place if you're looking for additional help!
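The core of flow mapping can be sketched in a few lines of numpy: given a per-pixel flow field, each pixel of the stylized frame is pulled along it, so the AI texture "sticks" to moving objects. WarpFusion estimates the flow from consecutive video frames and layers a consistency check on top; in this sketch the flow is simply handed in.

```python
import numpy as np

def warp_by_flow(image: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Move each pixel along a per-pixel (dx, dy) flow vector.

    `flow[y, x]` says how far the content at (x, y) moved between frames;
    each output pixel samples from where the flow says it came from.
    Nearest-neighbour sampling here; real warpers use bilinear.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip((xs - flow[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys - flow[..., 1]).round().astype(int), 0, h - 1)
    return image[src_y, src_x]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1  # uniform one-pixel shift to the right
out = warp_by_flow(img, flow)
```

The note about fast motion follows directly from this: large flow vectors drag pixels far from where they were stylized, so errors accumulate faster.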

Deforum Stable Diffusion

[1:09 - 2:48] 3D Mode, [4:34 - 5:41] 3D Mode, [6:23 - 8:05] 3D Mode

These sections were made with a different Stable Diffusion notebook called Deforum Stable Diffusion v0.5. What's cool about this notebook is that it lets you schedule the strength and noise of the animation. The trade-off is that it doesn't have any of the consistency/flow mapping that Stable WarpFusion does. It's also greatly simplified compared to the Disco Diffusion adaptation that became Stable WarpFusion.
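Strength/noise scheduling boils down to expanding a keyframe string into one value per frame. A sketch of the idea, assuming the `frame:(value)` syntax Deforum's schedule fields use; treat the parsing details as an approximation, not Deforum's exact code.

```python
import numpy as np

def parse_schedule(schedule: str, total_frames: int) -> np.ndarray:
    """Expand a Deforum-style keyframe string into a per-frame value array.

    Example input: "0:(0.65), 30:(0.45)" -- values are linearly
    interpolated between keyframes.
    """
    keys, vals = [], []
    for part in schedule.split(","):
        frame, value = part.split(":")
        keys.append(int(frame.strip()))
        vals.append(float(value.strip().strip("()")))
    return np.interp(np.arange(total_frames), keys, vals)

# Ramp the denoising strength down over the first 30 frames.
strength = parse_schedule("0:(0.65), 30:(0.45)", 31)
```

Scheduling strength down like this is how you trade image change for temporal stability as a shot settles.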

Stable Diffusion Interpolation

[5:42 - 6:09]

This section was made using Stable Diffusion interpolation! You can imagine it as sliding across latent space from one image to another. If you'd like more help setting it up, Nerdy Rodent is an incredible YouTube channel for learning how to set up this repo (and many others) on your machine.
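Sliding across latent space just means interpolating between two latent vectors and decoding each intermediate point into an image. Spherical interpolation (slerp) is commonly used instead of a straight line because Gaussian latents concentrate near a sphere; this is a generic sketch of the technique, not the repo's exact code.

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation between two latent vectors.

    Walking t from 0 to 1 and decoding each result produces the
    morphing-between-images effect.
    """
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * a + t * b  # vectors (anti)parallel: fall back to lerp
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

z0 = np.array([1.0, 0.0])  # toy 2-D latents for illustration
z1 = np.array([0.0, 1.0])
mid = slerp(z0, z1, 0.5)
```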

Free AI Upscaling

I spent a long time trying to get higher-resolution Stable Diffusion generations to look good (1024x1024, 1536x1536). What I found after a lot of testing is that larger-resolution generations are still limited by the fact that Stable Diffusion was trained on 512x512 images. Larger generations turn out like outpainting: you just get more, smaller figures on a larger canvas. For this reason, I recommend keeping the generation width and height at 512x512, and then upscaling with Real-ESRGAN.
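The pipeline shape is: generate at 512x512, then upscale afterwards. Below, a naive nearest-neighbour x4 upscale stands in for Real-ESRGAN purely to show where the step sits; the real upscaler is a neural network that synthesizes plausible detail rather than repeating pixels.

```python
import numpy as np

def upscale_nearest(image: np.ndarray, factor: int = 4) -> np.ndarray:
    """Naive x4 nearest-neighbour upscale (placeholder for Real-ESRGAN).

    Real-ESRGAN's default model also upscales x4, which takes a 512x512
    generation to 2048x2048.
    """
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

frame = np.zeros((512, 512, 3), dtype=np.uint8)  # a 512x512 generation
big = upscale_nearest(frame)
```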

Framerate Interpolation

To increase the framerate of your animation, I recommend using Flowframes. I used it to double my fps from 30 to 60 for this project.
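Frame interpolation inserts a synthesized in-between frame between every pair of originals, turning n frames into 2n-1. Flowframes does this with learned motion interpolation; simple frame averaging is a crude stand-in here just to show the shape of the 30 to 60 fps step.

```python
import numpy as np

def double_fps(frames):
    """Insert a blended in-between frame between each pair of frames.

    Averaging causes ghosting on real footage; motion-aware interpolators
    like the models Flowframes uses move pixels along flow instead.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        # Widen dtype before summing to avoid uint8 overflow.
        out.append(((a.astype(np.uint16) + b) // 2).astype(np.uint8))
    out.append(frames[-1])
    return out

clip = [np.full((4, 4), v, dtype=np.uint8) for v in (0, 100)]
doubled = double_fps(clip)
```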


It took me a long time to test out all the countless setting combinations to make these animations look how I wanted them to. If you'd like to replicate my results, all the settings and prompts I used in this video can be found here: https://www.patreon.com/posts/72829815 
