
Local install guides:
Windows: https://github.com/Sxela/WarpFusion#local-installation-guide-for-windows-venv
Linux:
https://github.com/Sxela/WarpFusion/blob/main/README.md#local-installation-guide-for-linux-ubuntu-2204-venv

If something's not working, make a new env and grab a fresh install.bat from that repo first, ask questions later :D

video changelog: https://www.youtube.com/watch?v=VXS-bpWy7CA

Changelog:

  • add sdxl inpainting controlnet model
  • add scene splitter for animatediff mode
  • enable reference controlnet for animatediff mode
  • add multiprompt to animatediff
  • add prompt weights to animatediff
  • add simple prompt weights mode to mix text tokens before diffusion
  • enable masks for blend = None video export mode
  • fix error on changing context_length
  • add custom v1.5-v2 motion module path
  • make frame range and scene process the correct number of frames
  • fix animatediff stylized overlap mode taking frames with incorrect offset
  • fix sdxl long prompts error
  • combine cells Prepare Folders, Install pytorch, Install SD Dependencies into a single cell
  • add Basic Settings cell to Video Input Settings cell

v0.27.2, 13.11.2023:

  • add prompt blender cell

v0.27.3, 18.11.2023:

  • fix the wrong frame range being processed by default
  • fix samplers in animatediff mode: sample_dpmpp_2m, sample_dpm_2, sample_dpm_2_ancestral, sample_dpmpp_2s_ancestral
  • fix error when using 0 controlnets in animatediff controlnet mode
  • disable use_legacy_cc for lazywarp mode (deprecated)
  • make flow folder paths include output resolution

v0.27.4, 27.11.2023:

  • fix ldm.modules.encoders.modules error for colab thanks to #digitalgrails

1.12.2023: 

  • move to L tier

11.2.2024:

  • move to M tier

SDXL Inpainting controlnet

I've trained a somewhat working inpainting controlnet for the SDXL model.
I'm using the 15000-step checkpoint by default, but you can go lower if it seems too strong for you. Just change the model URL to another one from this repo: sxela/out at main (huggingface.co)

I suggest using 0.5 strength, as it seems to overcook the results at full strength.

Scene Splitter

Testing a scene splitter for animatediff. You can now split a long video into smaller scenes that will be rendered separately, so the frames won't blend, overlap, or mix in any other way.

You can find it in the GUI -> animatediff tab

use_manual_splits
Enable this if you plan to provide a list of scene-splitting keyframes manually via the scene_splits input below.

scene_split_thresh
This threshold is used to trigger scene splitting if you have content-aware scheduling enabled and have already extracted the frame difference. If you use content-aware scheduling but don't want scene splits, either raise the threshold very high so it never triggers a split, or enable use_manual_splits with an empty scene_splits.
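Conceptually, the threshold check is just a filter over the per-frame differences. A minimal sketch (the function name and the diff list are illustrative, not WarpFusion internals):

```python
def splits_from_diffs(frame_diffs, thresh):
    """Return frame indices whose content-aware difference exceeds thresh;
    each such index becomes a candidate scene boundary."""
    return [i for i, d in enumerate(frame_diffs) if d > thresh]

# A very high threshold triggers no splits, matching the "raise it very high" advice.
print(splits_from_diffs([0.1, 0.9, 0.2, 0.8], 0.5))   # boundaries at frames 1 and 3
print(splits_from_diffs([0.1, 0.9, 0.2, 0.8], 10.0))  # no splits
```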

scene_splits
Expects a list of numbers like [16,30,55], where each number is the end frame of a scene. Use this if you plan to do scene splitting manually and have use_manual_splits enabled. Scenes are split at the keyframes, each keyframe being the end of its scene. For example, with the input [16,30,55] you will get the scenes 0-16, 17-30, 31-55, and 56-end frame.

May trigger errors if the scene length is smaller than the context length.
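The mapping from keyframes to scene ranges can be sketched like this (a hypothetical helper, not WarpFusion's actual code):

```python
def scene_ranges(scene_splits, last_frame):
    """Turn end-frame keyframes like [16, 30, 55] into inclusive
    (start, end) scene ranges, with a final scene up to last_frame."""
    ranges, start = [], 0
    for end in scene_splits:
        ranges.append((start, end))
        start = end + 1  # next scene starts right after this keyframe
    if start <= last_frame:
        ranges.append((start, last_frame))
    return ranges

print(scene_ranges([16, 30, 55], 100))
# [(0, 16), (17, 30), (31, 55), (56, 100)]
```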


Animatediff multiprompt

You can now use multiple prompts in animatediff mode, just like in non-animatediff WarpFusion mode. Masks are not yet supported. You can provide prompt weights; otherwise, the prompts will be weighted equally.

You can use this code snippet to iterate through a list of prompts, with blending between them: https://discord.com/channels/973802253204996116/1172425250898710618/1172425250898710618

GUI -> animatediff -> blend_prompts_b4_diffusion

Use this option to blend text encoder outputs instead of diffusion model outputs. It's much faster, provides similar results, and should be enabled by default.
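The speedup comes from blending once in embedding space and running a single denoising pass, instead of one diffusion pass per prompt. A minimal NumPy sketch of the idea (the function and shapes are illustrative, not WarpFusion's actual code):

```python
import numpy as np

def blend_text_embeddings(embeddings, weights=None):
    """Weighted average of per-prompt text-encoder outputs, each of shape
    (tokens, dim). The single blended embedding then conditions one
    diffusion pass instead of one pass per prompt."""
    if weights is None:
        weights = [1.0] * len(embeddings)  # equal weights by default
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()  # normalize weights to sum to 1
    stacked = np.stack(embeddings)  # (n_prompts, tokens, dim)
    return np.tensordot(w, stacked, axes=1)  # (tokens, dim)
```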

Prompt Blender

Extras -> Prompt Blender

A simple cell to make a set of scheduled prompts from a list of prompts and duration settings.

It will gradually transition between prompts.
Just paste the cell output into the prompt textbox in the GUI.

prompts
A list of prompts to loop through.

prompt_duration
Prompt duration defines how many frames the prompt will be kept steady without any blending.

prompt_transition
Prompt transition defines the duration of the transition between 2 prompts (in frames).
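The three settings above combine into a frame-to-prompt schedule. A rough sketch of the behavior, under the assumption of a {frame: prompt} schedule and a "prompt:weight" blending syntax (both hypothetical here, not the cell's exact output format):

```python
def blend_prompt_schedule(prompts, prompt_duration, prompt_transition):
    """Hold each prompt steady for prompt_duration frames, then cross-fade
    into the next prompt over prompt_transition frames via weighted pairs."""
    schedule, frame = {}, 0
    for i, prompt in enumerate(prompts):
        schedule[frame] = prompt  # start of the steady section
        frame += prompt_duration
        if i < len(prompts) - 1:
            nxt = prompts[i + 1]
            for step in range(1, prompt_transition):
                w = step / prompt_transition  # ramp weight toward next prompt
                schedule[frame + step] = f"{prompt}:{1 - w:.2f} {nxt}:{w:.2f}"
            frame += prompt_transition
    return schedule

print(blend_prompt_schedule(["cat", "dog"], 3, 4))
```

Paste the resulting schedule into the prompt textbox in the GUI, as with any scheduled prompts.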

Local install guide:
https://github.com/Sxela/WarpFusion/blob/main/README.md

Guides made by users:

YouTube playlist with settings:
https://www.youtube.com/watch?v=wvvcWm4Snmc&list=PL2cEnissQhlCUgjnGrdvYMwUaDkGemLGq

For tech support and other questions please join our discord server:
https://discord.gg/YrpJRgVcax

Discord is the preferred method because it is nearly impossible to provide any decent help or tech support via Patreon due to its limited text formatting and inability to add screenshots or videos to comments or DMs.
Due to the recent Patreon comments update, it's impossible to reply to comments from notifications anymore, so if your comment hasn't been replied to for a while, DM me.

Files

From Aztec to Cyberpunk - Woman Dance AI Filter

StableWarpFusion v0.27.0
Source video by Medkova Lana
Settings: https://github.com/Sxela/WarpFusion/blob/main/examples/stable_warpfusion_0.27.0(82)_settings.txt

Comments

Hridaye Nagpal

i am getting an error "Please lower thread number to 1-3. Pool not running" and the video isnt getting exported

Linas Bruun

I am getting an error on this version trying to run it for first time: Torch not found. Installing torch v2. Failed installing torch v2. Docker found. Skipping install. Docker detected. Skipping install. pulling a fresh ComfyUI pulling a fresh stablediffusion pulling a fresh ControlNet pulling a fresh k-diffusion pulling a fresh WarpFusion