
Local install guide: https://github.com/Sxela/WarpFusion#local-installation-guide-for-windows-venv.

An A100 or a 24 GB GPU is highly recommended for SDXL.
Make a new env and grab a fresh install.bat from that repo.

Changelog:

18.09.2023, 0.23.6:

  • add gdown install

19.09.2023, 0.23.8:

19.09.2023, 0.23.10:

  • fix gui for non-controlnet modes (getattr error in gui)
  • fix video_out not defined

26.09.2023:

  • moved to L tier

6.10.2023, v0.23.11:

  • fix dwpose 'final_boxes' error for frames with no people

14.10.2023, v0.23.12:

  • fix xformers version

14.10.2023, v0.23.13:

  • fix flow preview error for less than 10 frames

16.10.2023, v0.23.15:

  • fix pillow errors (UnidentifiedImageError: cannot identify image file)
  • fix timm import error (isDirectory error)
  • deprecate v2_depth model (use depth controlnet instead)

20.10.2023, v0.23.16:

  • fix pytorch dependencies error
  • fix zoe depth error
  • move installers to github repo

11.11.2023:

  • move to M


DW Pose

To select DW Pose, go to gui -> controlnet -> pose_detector -> dw_pose.

Download the new install.bat, or install the dependencies manually with !pip install onnxruntime-gpu gdown

Width_height max size

You can now specify a single number for the width_height setting; it defines the maximum size of the frame, and the output is fit inside that size while keeping the aspect ratio. For example, with a 1920x1080 video, width_height=1280 will downscale it to 1280x720.
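The fitting logic amounts to scaling by the longer side. This is an illustrative sketch (fit_max_size is a hypothetical helper, not WarpFusion's actual code):

```python
def fit_max_size(width, height, max_size):
    """Scale a frame to fit inside max_size on its longer side, keeping aspect ratio."""
    scale = max_size / max(width, height)
    if scale >= 1:
        return width, height  # already fits; don't upscale
    return round(width * scale), round(height * scale)

print(fit_max_size(1920, 1080, 1280))  # -> (1280, 720)
```

Note that diffusion models usually also want dimensions that are multiples of 8 or 64, so the real pipeline may round the result further.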

Controlnet Preview

Added a preview for controlnet annotations (detections). Thanks to #kytr.ai for the idea.

To enable it, check only_preview_controlnet in the Do the Run cell. It will take the first frame in frame_range from your video and generate controlnet previews for it.

Prores

Added the ProRes codec, thanks to #sozeditit: https://discord.com/channels/973802253204996116/1149027955998195742

Create video cell -> output_format -> prores_mov

It has better quality than h264_mp4 and a smaller file size than qtrle_mov.

Video Init Update

Moved the looped image init to the video init settings. To use an init image as the video init, select video_source -> looped_init_image

Added reverse option

Extract frames in reverse order. For example, if a person moves into the frame, it's easier to reverse the video so that it starts with the person already inside the frame.
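One way to extract frames in reverse is ffmpeg's reverse filter (which buffers the whole clip in RAM, so it suits short videos). This command builder is an illustrative sketch under that assumption, not WarpFusion's actual extraction code:

```python
def build_extract_cmd(video_path, out_pattern, reverse=False):
    """Build an ffmpeg command that dumps a video's frames as images,
    optionally playing the clip backwards first."""
    cmd = ["ffmpeg", "-i", video_path]
    if reverse:
        cmd += ["-vf", "reverse"]  # buffers the entire clip in memory
    cmd += [out_pattern]
    return cmd

print(build_extract_cmd("init.mp4", "frames/%06d.jpg", reverse=True))
```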

Detect fps

fps will now be detected from your init_video, divided by the extract_nth_frame value, and used for your video export if you set your video export fps to -1.

Example: your video is 24 fps and you extract every 2nd frame. The suggested output fps is 24/2 = 12. If you set your video export fps to -1, it will use the predicted 12 fps.
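The fps rule above can be sketched as follows (export_fps is a hypothetical name for illustration):

```python
def export_fps(detected_fps, extract_nth_frame, user_fps):
    """Return the fps used for video export; user_fps == -1 means
    'use the fps predicted from the init video'."""
    predicted = detected_fps / extract_nth_frame
    return predicted if user_fps == -1 else user_fps

print(export_fps(24, 2, -1))  # -> 12.0
print(export_fps(24, 2, 30))  # -> 30
```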

Local install guide:
https://github.com/Sxela/WarpFusion/blob/main/README.md

Guides made by users:

YouTube playlist with settings:
https://www.youtube.com/watch?v=wvvcWm4Snmc&list=PL2cEnissQhlCUgjnGrdvYMwUaDkGemLGq

For tech support and other questions please join our discord server:
https://discord.gg/YrpJRgVcax

Discord is the preferred channel because Patreon's limited text formatting and its inability to attach screenshots or videos to comments or DMs make it nearly impossible to provide decent tech support here.
Error reports in comments will be deleted and reposted in Discord.

Comments

Raf

I get this error. What should I do??? Detected that PyTorch and torchvision were compiled with different CUDA major versions. PyTorch has CUDA Version=12.1 and torchvision has CUDA Version=11.8. Please reinstall the torchvision that matches your PyTorch install.

OMonedr

Hi. I just downloaded 0.23 and I'm running it in a local env on an NVIDIA 4080 16 GB. When I diffuse, it gives an error about memory (I assume); link to the error file: http://ftpx.forpsi.comwww.tagon.cz/work/AI_experiment/Error.txt I have tried the old 0.15 notebook and it works just fine, with no memory error, the video is just not very stable. Any help or advice? When I run 0.23 in the Colab env, it processes OK and never exceeds 16 GB of VRAM. Thank you.