Stable WarpFusion v0.24 - FreeU, inpaint-softedge, temporal-depth experimental controlnets (Patreon)
Changelog:
- add FreeU Hack from https://huggingface.co/papers/2309.11497
- add option to apply FreeU before or after controlnet outputs
- add inpaint-softedge and temporal-depth controlnet models
- auto-download inpaint-softedge and temporal-depth checkpoints
- fix sd21 lineart model not working
- refactor get_controlnet_annotations a bit
- add inpaint-softedge and temporal-depth controlnet preprocessors
6.10.2023, v0.24.1:
- fix controlnet preview (next_frame error)
- fix dwpose 'final_boxes' error for frames with no people
- move width_height to the video init cell so users don't forget to run it when updating width_height
14.10.2023, v0.24.2:
- fix xformers version
14.10.2023, v0.24.3:
- fix flow preview error for fewer than 10 frames
16.10.2023, v0.24.5:
- fix pillow errors (UnidentifiedImageError: cannot identify image file)
- fix timm import error (isDirectory error)
- deprecate v2_depth model (use depth controlnet instead)
20.10.2023, v0.24.6:
- fix pytorch dependencies error
- fix zoe depth error
- move installers to github repo
- move to L tier
1.12.2023:
- move to M tier
FreeU
GUI - misc - apply_freeu_after_control, do_freeunet
This hack lowers the effect of the Stable Diffusion UNet's residual skip-connections, prioritizing the core concepts of the image (carried by the backbone features) over high-frequency details. As you can see in the video, with FreeU on the image looks less cluttered while still keeping enough high-frequency detail. apply_freeu_after_control applies the hack after the controlnet outputs have been merged in, which in my tests produced slightly worse results.
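For reference, here's a minimal sketch of the FreeU idea, loosely following the paper's reference implementation; the scale values b and s are illustrative, and the hook point into the UNet decoder is assumed:

```python
import torch
import torch.fft as fft

def freeu_scale(backbone_feat, skip_feat, b=1.2, s=0.9, threshold=1):
    """Re-weight UNet decoder features, FreeU-style (sketch).

    backbone_feat: (B, C, H, W) backbone features (carry semantics)
    skip_feat:     (B, C, H, W) skip-connection features (carry details)
    """
    # Amplify the first half of the backbone channels (semantic content).
    half = backbone_feat.shape[1] // 2
    backbone_feat[:, :half] = backbone_feat[:, :half] * b

    # Dampen the low-frequency components of the skip features in Fourier space.
    x_freq = fft.fftshift(fft.fftn(skip_feat.float(), dim=(-2, -1)), dim=(-2, -1))
    B, C, H, W = x_freq.shape
    mask = torch.ones_like(x_freq)
    mask[..., H // 2 - threshold:H // 2 + threshold,
              W // 2 - threshold:W // 2 + threshold] = s
    skip_feat = fft.ifftn(fft.ifftshift(x_freq * mask, dim=(-2, -1)),
                          dim=(-2, -1)).real.to(skip_feat.dtype)
    return backbone_feat, skip_feat
```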
Inpaint-softedge controlnet
I've experimented with mixed-input controlnets. This one works the same way the inpaint controlnet does, but it also uses softedge input for the inpainted area, so it relies not only on the surroundings of the masked area but also on the softedge filter output inside it, which gives a little more control.
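To illustrate the idea (this is not the actual preprocessor code), here's a rough sketch of how such a mixed conditioning image could be assembled; the function name and the exact encoding of the masked area are assumptions:

```python
import numpy as np

def make_inpaint_softedge_cond(frame: np.ndarray, softedge: np.ndarray,
                               mask: np.ndarray) -> np.ndarray:
    """Sketch: conditioning image mixing inpaint + softedge inputs.

    frame:    (H, W, 3) float image in [0, 1], current frame
    softedge: (H, W, 3) float image in [0, 1], softedge (e.g. HED) output
    mask:     (H, W) bool array, True where the frame should be inpainted
    """
    cond = frame.copy()
    # A plain inpaint controlnet blanks out the masked area (e.g. with a
    # marker value); here the masked area carries the softedge signal
    # instead, so the model still sees edge structure inside the hole.
    cond[mask] = softedge[mask]
    return cond
```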
Temporal-depth controlnet
This one takes the previous frame, the current frame's depth, and the next frame's depth as its inputs.
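As a sketch of the input assembly (the exact channel order, normalization, and whether the previous frame is the raw or stylized one are assumptions):

```python
import torch

def make_temporal_depth_cond(prev_frame: torch.Tensor,
                             cur_depth: torch.Tensor,
                             next_depth: torch.Tensor) -> torch.Tensor:
    """Sketch: stack temporal-depth controlnet inputs channel-wise.

    prev_frame: (3, H, W) previous frame in [0, 1]
    cur_depth:  (1, H, W) depth map of the current frame
    next_depth: (1, H, W) depth map of the next frame
    """
    # 3 + 1 + 1 = 5 conditioning channels (assumed layout)
    return torch.cat([prev_frame, cur_depth, next_depth], dim=0)
```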
These controlnets are experimental; you can try replacing some of your usual controlnets with them, e.g. swap depth for temporal-depth, or inpaint for inpaint-softedge (see the hypothetical settings sketch below).
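For example, a multi-controlnet setup swapping the plain models for the experimental ones might look like this; the model keys and the settings schema shown here are hypothetical and may differ from your GUI:

```python
# Hypothetical controlnet mix: depth -> temporal-depth, inpaint -> inpaint-softedge
controlnet_multimodel = {
    "control_sd15_temporal_depth":   {"weight": 1.0, "start": 0.0, "end": 1.0},
    "control_sd15_inpaint_softedge": {"weight": 1.0, "start": 0.0, "end": 1.0},
}
```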
Local install guide:
https://github.com/Sxela/WarpFusion/blob/main/README.md
Guides made by users:
- 05.05.2023, v0.10 Video to AI Animation Tutorial For Beginners: Stable WarpFusion + Controlnet | MDMZ
- 11.05.2023, v0.11 How to use Stable Warp Fusion
- 13.05.2023, v0.8 Warp Fusion Local Install Guide (v0.8.6) with Diffusion Demonstration
- 14.05.2023, v0.12 Warp Fusion Alpha Masking Tutorial | Covers Both Auto-Masking and Custom Masking
- 23.05.2023, v0.12 STABLE WARPFUSION TUTORIAL - Colab Pro & Local Install
- 15.06.2023, v0.13 AI Animation out of Your Video: Stable Warpfusion Guide (Google Colab & Local Installation)
- 17.06.2023, v0.14 Stable Warpfusion Tutorial: Turn Your Video to an AI Animation
- 21.06.2023, v0.14 Avoiding Common Problems with Stable Warpfusion
- 21.06.2023, v0.15 Warp Fusion: Step by Step Tutorial
- 04.07.2023, v0.15 Intense AI Video Maker (Stable WarpFusion Tutorial)
- 15.08.2023, v0.17 BEST Laptop for AI ( SDXL & Stable Warpfusion ) ft. RTX 4090 - Make AI Art FREE and FAST!
YouTube playlist with settings:
https://www.youtube.com/watch?v=wvvcWm4Snmc&list=PL2cEnissQhlCUgjnGrdvYMwUaDkGemLGq
For tech support and other questions please join our discord server:
https://discord.gg/YrpJRgVcax
Discord is the preferred channel: Patreon's limited text formatting and inability to attach screenshots or videos to comments and DMs make it nearly impossible to provide decent help or tech support here.
Error reports in comments will be deleted and reposted in Discord.