
Local install guide: https://github.com/Sxela/WarpFusion#local-installation-guide-for-windows-venv

An A100 or a 24 GB card is highly recommended.
Make a new env and grab a fresh install.bat from that repo.

Changelog:

v0.22.11, 5.09.2023

  • add rife
  • fix samtrack local install for windows
  • fix samtrack incorrect frame indexing if starting not from 1st frame
  • fix schedules not loading
  • fix "---> 81 os.chdir(f'{root_dir}/Real-ESRGAN')" file-not-found error, thanks to Leandro Dreger
  • hide "warp_mode","use_patchmatch_inpaiting","warp_num_k","warp_forward","sat_scale" from gui as deprecated
  • clean up GUI settings setters/getters
  • fix controlnet not updating in GUI sometimes
  • fix io.capture_output error
  • fix platform import, gdown install

v0.22.13-16, 6.9.2023:

  • fix rife imports
  • upd rife repo, thanks to #stabilityaiart for testing

v0.22.17-18, 7.9.2023:

  • fix samtrack not saving video
  • make samtrack save separate bg mask
  • fix multiple prompt error for sdxl

v0.22.19-20, 8.9.2023:

  • fix samtrack site-packages url
  • fix samtrack missing groundingdino config

v0.22.21, 11.09.2023

  • download dummypkl automatically
  • fix venv install real-esrgan model folder not being created
  • moved to L tier

v0.22.22, 13.09.2023

  • fix "TypeError: eval() arg 1..." error when loading non-existent settings on the initial run

v0.22.23, 15.09.2023

  • add error message for model version mismatch

v0.22.24, 29.10.2023:

  • fix pytorch dependencies error
  • fix zoe depth error
  • move installers to github repo

30.10.2023:

  • moved to M tier


SAMTrack

SAMTrack should now download prebuilt binaries for torch v2 / CUDA 11.x. If you have a different setup, you will need to get VS Build Tools: https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst

Then install the CUDA toolkit matching your GPU drivers/OS: https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64

After that it should install just fine.
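To see whether your environment matches those prebuilt binaries before installing, a quick check like this can help. This is a minimal illustrative sketch, not WarpFusion code; it assumes the version strings come from `torch.__version__` and `torch.version.cuda`:

```python
def prebuilt_binaries_ok(torch_version: str, cuda_version: str) -> bool:
    # Prebuilt SAMTrack binaries target torch v2 with CUDA 11.x (per the
    # note above). Any other combination needs VS Build Tools plus a
    # local CUDA toolkit so the extensions can be compiled from source.
    major = torch_version.split(".")[0]
    return major == "2" and cuda_version.startswith("11.")

# Example values as reported by torch.__version__ / torch.version.cuda:
print(prebuilt_binaries_ok("2.0.1+cu118", "11.8"))   # True  -> prebuilt binaries apply
print(prebuilt_binaries_ok("1.13.1+cu117", "11.7"))  # False -> build from source
```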

RIFE

RIFE interpolates frames. For example results, see this post's video.

Settings are simple:

  • exponent: the power of 2 by which to increase the fps: 1 = x2, 2 = x4, 3 = x8, etc.
  • video_path: input video to interpolate. Can be a folder with frames, but then you need to specify the fps manually.
  • nth_frame: extract only every nth frame before interpolation.
  • fps: output fps (for image-folder input only; for video input the output fps is derived from the input video's fps to keep the same duration after interpolation).

If you have a high-fps output video (like 60 fps), you can also try skipping frames to reduce high-frequency flicker. If you have already used nth_frame during your video render, skipping frames here may produce weird results.
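Taken together, the settings above determine the output frame rate: keeping every nth frame divides the effective fps, and each interpolation step doubles it. A rough sketch of that arithmetic (a hypothetical helper; the names mirror the GUI settings but it is not WarpFusion code):

```python
def rife_output_fps(input_fps: float, exponent: int = 1, nth_frame: int = 1) -> float:
    # Keeping every nth frame divides the effective fps by nth_frame;
    # interpolation then multiplies the frame count by 2**exponent
    # (1 -> x2, 2 -> x4, 3 -> x8), so the video duration stays the same.
    return input_fps / nth_frame * 2 ** exponent

# A 30 fps input, keeping every 2nd frame, interpolated with exponent=2 (x4):
print(rife_output_fps(30, exponent=2, nth_frame=2))  # 60.0
```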

Local install guide:
https://github.com/Sxela/WarpFusion/blob/main/README.md

Guides made by users:

YouTube playlist with settings:
https://www.youtube.com/watch?v=wvvcWm4Snmc&list=PL2cEnissQhlCUgjnGrdvYMwUaDkGemLGq

For tech support and other questions please join our discord server:
https://discord.gg/YrpJRgVcax

Discord is preferred because Patreon's limited text formatting and inability to attach screenshots or videos to comments or DMs make decent tech support there nearly impossible.
Error reports in comments will be deleted and reposted in Discord.

Files

Bladerunner 2049 - anime style scene

WarpFusion RIFE interpolation comparison

Comments

Anup prabhakar

Can you please also provide video tutorials?

sxela

there are some in the post, also a few on my youtube - https://www.youtube.com/watch?v=eIeYMuWXkFs&list=PL2cEnissQhlCko-T3gPH9ltMLMRjabpIS&pp=gAQBiAQB

Anup prabhakar

Sir, many people like me are new to your Patreon, and we still can't find an updated tutorial. The existing tutorials are outdated since WarpFusion keeps updating. We'd appreciate some new tutorials for your new Patreon subs! Following your old tutorial just ends in errors. :(

sxela

So which topics do you want covered? Or just a general how-to-run one?

Anup prabhakar

Would be nice if you could show a tutorial on how to run the latest version, with an example demo video conversion!

Nate Denton

Hey. I had to duck out for the last few weeks and WOW! Things have changed! Equally, I would not have a clue where to start apart from opening Google Colab lol :) It changes fast, and mad respect for how much work you must be putting in.

반석 김

Is there a guide or tutorial that covers Extras? I'm not sure how to set it up.

sxela

RIFE is pretty straightforward - pick a video or a folder with frames to increase the FPS. SAMtrack is trickier, but you just input your text description of the objects you need to track and run.

반석 김

Do I just need to put the path of the video I converted? Or is there a separate tutorial? Thank you.

Chris Woodruff

Hey I cant manage to find the venv guide for this version in the repo :)

Koshi Mazaki

What is the reason for choosing RIFE over FILM interpolation, Sxela? Just curious, is it more performant? I usually use FILM with Deforum.

Tekglow

Can you include an option to load all the ControlNets into the GPU and never unload them for the entire run, just load everything up? For people with 24 GB cards. I saw a post about editing code, but why not just add the option instead? Simpler.

Chris Woodruff

https://github.com/Sxela/WarpFusion#local-installation-guide-for-windows-venv This one goes to 0.13, right? Not 0.22. I imagine the dependencies are different. Sorry if I'm being stupid haha

반석 김

I get something like this:

ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input> in <module>()
    113 import os
    114 import cv2
--> 115 from SegTracker import SegTracker
    116 from model_args import aot_args, sam_args, segtracker_args
    117 from PIL import Image

/content/Segment-and-Track-Anything-CLI/aot/networks/engines/__init__.py in <module>()
----> 1 from networks.engines.aot_engine import AOTEngine, AOTInferEngine
      2 from networks.engines.deaot_engine import DeAOTEngine, DeAOTInferEngine
      3
      4
      5 def build_engine(name, phase='train', **kwargs):

ModuleNotFoundError: No module named 'networks.engines'; 'networks' is not a package

Chris Woodruff

All done, sorry for asking stupid questions, I was just confused by the version number at the top.

Dariusz Mrowiec

2nd question: do you have some universal settings just for testing... but testing on Google Colab? Or is it too heavy for Colab now? Thx

sxela

https://www.youtube.com/watch?v=wvvcWm4Snmc&list=PL2cEnissQhlCUgjnGrdvYMwUaDkGemLGq&pp=gAQBiAQB you can check video descriptions for settings files

Dariusz Mrowiec

I'm trying on Google Colab... you think it's too low-end? :/

Magnus Gullbrå

Thank you so much for the last update... I have been using almost all my compute units trying to solve that specific error hehe

Magnus Gullbrå

It fixed that problem, but I ended up getting other problems before Google Colab had a technical issue, so I called it a night.

Rodrigo Orsi Pagotto

I used the Docker install but it can't find my video path URL... I tried EVERYTHING.

sxela

What does your video path look like, and what are you putting in Colab? Probably a wrong path format, like a missing ./

Don M

I got the following error:

AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>()
    204
    205 frames_in = sorted(glob(batchFolder+f"/{folder}({run})_*.png"))
--> 206 assert len(frames_in)>1, 'Less than 1 frame found in the specified run, make sure you have specified correct batch name and run number.'
    207
    208 frame0 = Image.open(frames_in[0])

AssertionError: Less than 1 frame found in the specified run, make sure you have specified correct batch name and run number.

sxela

you have no frames in the run you are trying to render as video

Aeris

I'm using stable_warpfusion_v0_22_22.ipynb. When I get to "define SD + K functions, load model" I'm getting "RuntimeError: Error(s) in loading state_dict for {}" and also "size mismatch for model.diffusion_model.input_blocks." My init video is 1280x720, which is what I also set here. I'm using the A100 on Colab.

sxela

Hi, you need to check that your model_version is compatible with the base model of the checkpoint you've provided.

Aeris

I'm using this one - with stable_warpfusion_v0_22_22.ipynb https://civitai.com/models/35650/arti-mix-checkpoint

sxela

This one has a v1.5 base model, so you need to choose the control_multi model version.

sxela

I suggest using the latest one that's available to your tier

sxela

The error message indicates that there are no frames found in the specified run. Please double-check the batch name and run number you are using, ensuring they match the naming convention used when generating the frames. Example: "C:\code\warp\19_venv\images_out\stable_warpfusion_0.19.0\stable_warpfusion_0.19.0(41)_000000.png" - for this frame batch_name is stable_warpfusion_0.19.0 and run number is 41
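The naming convention in that example can be checked mechanically. Here is a small illustrative parser (not WarpFusion code) that recovers batch_name and run number from a frame filename following the `{batch_name}({run})_{frame_index}.png` pattern described above:

```python
import re
from pathlib import PureWindowsPath

def parse_frame_name(path: str):
    # Hypothetical helper: frames follow {batch_name}({run})_{frame_index}.png,
    # e.g. stable_warpfusion_0.19.0(41)_000000.png
    name = PureWindowsPath(path).name  # handles backslash paths on any OS
    m = re.match(r"(?P<batch>.+)\((?P<run>\d+)\)_(?P<frame>\d+)\.png$", name)
    if m is None:
        raise ValueError(f"unexpected frame filename: {name}")
    return m.group("batch"), int(m.group("run"))

batch, run = parse_frame_name(
    r"C:\code\warp\19_venv\images_out\stable_warpfusion_0.19.0"
    r"\stable_warpfusion_0.19.0(41)_000000.png"
)
print(batch, run)  # stable_warpfusion_0.19.0 41
```

If the assertion at line 206 fails, running something like this on one of your output frames confirms which batch_name and run number the notebook expects.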

dvir golan

Hi, when the notebook gets to "define sd model + k functions" I get an error that says: no module named 'jsonmerge'.

sxela

Something's wrong with your install. How did you create your local env?

dvir golan

Wonder if I should reinstall again

dvir golan

Today, after 3 video generations, I'm getting this error again.

dvir golan

Sorry no, now it's another error I used to get: FileNotFoundError: [WinError 2] The system cannot find the file specified: 'J:\\SD\\Warpfusion1.2\\Segment-and-Track-Anything-CLI/stablediffusion'

cameron sullivan

I'm COMPLETELY new to WarpFusion. Is there a way to preview the ControlNet images and adjust thresholds accordingly? For example canny or softedge.

sxela

There's a "save annotations" checkbox (they're saved to the controlnetDebug folder).

dvir golan

Hi, first of all, thank you for the help! Any ideas on how to fix the following?

AssertionError                            Traceback (most recent call last)
Cell In[44], line 564
    561 gc.collect()
    562 torch.cuda.empty_cache()
--> 564 do_run()
    565 print('n_stats_avg (mean, std): ', n_mean_avg, n_std_avg)
    567 gc.collect()

Cell In[6], line 992, in do_run()
    990 print('used_loras, used_loras_weights', used_loras, used_loras_weights)
    991 # used_loras_weights = [o for o in used_loras_weights if o is not None else 0.]
--> 992 load_loras(used_loras,used_loras_weights)
    993 caption = get_caption(frame_num)
    994 if caption:
    995     # print('args.prompt_series',args.prompts_series[frame_num])

Cell In[40], line 292, in load_loras(names, multipliers)
    290 if lora_on_disk is not None:
    291     if lora is None or os.path.getmtime(lora_on_disk.filename) > lora.mtime:
--> 292         lora = load_lora(name, lora_on_disk.filename)
    294 if lora is None:
    295     print(f"Couldn't find Lora with name {name}")

Cell In[40], line 263, in load_lora(name, filename)
    261     lora_module.down = module
    262 else:
--> 263     assert False, f'Bad Lora layer name: {key_diffusers} - must end in lora_up.weight, lora_down.weight or alpha'
    265 if len(keys_failed_to_match) > 0:
    266     print(f"Failed to match keys when loading Lora (unknown): {keys_failed_to_match}")

AssertionError: Bad Lora layer name: lora_te_text_model_encoder_layers_0_mlp_fc1.hada_w1_a - must end in lora_up.weight, lora_down.weight or alpha

Kamil Kowalski

I have this error every time. It's my first time, on Google Colab:

SafetensorError                           Traceback (most recent call last)
<ipython-input> in <module>()
    766 elif model_version in ['control_multi_v2', 'control_multi_v2_768']:
    767     config = OmegaConf.load(f"{root_dir}/ControlNet/models/cldm_v21.yaml")
--> 768 sd_model = load_model_from_config(config=config,
    769     ckpt=model_path, vae_ckpt=vae_ckpt, verbose=True)

<ipython-input> in load_model_from_config(config, ckpt, vae_ckpt, controlnet, verbose)
    317 if ckpt.endswith('.safetensors'):
    318     pl_sd = {}
--> 319 with safe_open(ckpt, framework="pt", device=load_to) as f:
    320     for key in f.keys():
    321         pl_sd[key] = f.get_tensor(key)

SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

N pizza

What is the most stable version?

N pizza

I just received this error: model version mismatch. Can you help me understand how the model version works?

sxela

You need to check your checkpoint's base model and set the model version drop-down accordingly.
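To make that mapping concrete, here is an illustrative table collected only from the replies in this thread (v1.5 → control_multi, and the v2.1 options seen in the loading code above). It is not an exhaustive list, and the exact drop-down names in your notebook version may differ:

```python
# Illustrative only: checkpoint base model -> model_version drop-down value,
# as mentioned in this comment thread. Not an official list.
MODEL_VERSION_FOR_BASE = {
    "v1.5": "control_multi",
    "v2.1 (512)": "control_multi_v2",
    "v2.1 (768)": "control_multi_v2_768",
}

def model_version_for(base: str) -> str:
    try:
        return MODEL_VERSION_FOR_BASE[base]
    except KeyError:
        raise ValueError(f"unknown base model: {base}") from None

print(model_version_for("v1.5"))  # control_multi
```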

Justin Franecki (edited)


I'm able to install, but when launching run.bat it immediately closes itself out.

sxela

What error are you getting? Run the bat from inside cmd to see it.

Novellum

C:\AI\WarpFusion\v0.22.23>run.bat
Skipping git as it`s installed.
Activating virtual environment
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\AI\WarpFusion\v0.22.23\env\lib\site-packages\torch\__init__.py", line 133, in <module>
    raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\AI\WarpFusion\v0.22.23\env\lib\site-packages\torch\lib\cudnn_cnn_infer64_8.dll" or one of its dependencies.

And I received this error when installing:

Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, torch, xformers, torchvision, torchaudio
ERROR: Exception:
Traceback (most recent call last):
  File "C:\AI\WarpFusion\v0.22.23\env\lib\site-packages\pip\_internal\cli\base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
  File "C:\AI\WarpFusion\v0.22.23\env\lib\site-packages\pip\_internal\cli\req_command.py", line 248, in wrapper
    return func(self, options, args)
  File "C:\AI\WarpFusion\v0.22.23\env\lib\site-packages\pip\_internal\commands\install.py", line 452, in run
    installed = install_given_reqs(
  File "C:\AI\WarpFusion\v0.22.23\env\lib\site-packages\pip\_internal\req\__init__.py", line 72, in install_given_reqs
    requirement.install(
  File "C:\AI\WarpFusion\v0.22.23\env\lib\site-packages\pip\_internal\req\req_install.py", line 807, in install
    install_wheel(
  File "C:\AI\WarpFusion\v0.22.23\env\lib\site-packages\pip\_internal\operations\install\wheel.py", line 731, in install_wheel
    _install_wheel(
  File "C:\AI\WarpFusion\v0.22.23\env\lib\site-packages\pip\_internal\operations\install\wheel.py", line 591, in _install_wheel
    file.save()
  File "C:\AI\WarpFusion\v0.22.23\env\lib\site-packages\pip\_internal\operations\install\wheel.py", line 390, in save
    shutil.copyfileobj(f, dest)
  File "shutil.py", line 195, in copyfileobj
  File "zipfile.py", line 923, in read
  File "zipfile.py", line 1013, in _read1
  File "zipfile.py", line 941, in _update_crc
zipfile.BadZipFile: Bad CRC-32 for file 'torch/lib/torch_cuda.dll'

However, once the installer finished I was able to see the URL for Google Colab, so it appears to work, but I haven't tested, because I don't want to run the installer each time.

Andy

I was able to install everything, but once I run all cells it stops right after analyzing my video, writing: RuntimeError: Load up a stable

Novellum

I solved it. Had to add --no-cache to install.bat on line 76

Kamil Kowalski

Or this:

OutOfMemoryError: CUDA out of memory. Tried to allocate 74.00 MiB (GPU 0; 15.77 GiB total capacity; 13.80 GiB already allocated; 16.12 MiB free; 14.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
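For the out-of-memory case, the message itself points at one mitigation: setting max_split_size_mb via the PYTORCH_CUDA_ALLOC_CONF environment variable to reduce fragmentation. A minimal sketch of how that is typically done; 128 is an arbitrary example value, and it must be set before torch initializes CUDA:

```python
import os

# Set before importing torch in the notebook (or before the first CUDA
# allocation), otherwise the allocator config is ignored.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # max_split_size_mb:128
```

If that's not enough, lowering the output resolution remains the usual fallback on 16 GB cards.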