
Temporal consistency studies took some time, so this preview has been delayed a bit.

Sort of a disclaimer: NVIDIA GPU with 8 GB+ VRAM only, or a hosted environment.

Don't dive headfirst into a nightly build if you're planning to use it for your current project, which is already past its deadline - you'll have a bad day. This is not production-ready, user-friendly software :D

Make sure you've tried awesome free solutions like Deforum or TemporalKit before subscribing and submitting yourself to the warp.

  • add prompt weight parser/splitter
  • update lora parser to support multiple prompts. If duplicate loras are used in more than one prompt, the last prompt's lora weights will be used
  • unload unused loras
  • add multiple prompts support
  • add max batch size
  • add prompt weights
  • add support for a different number of prompts per frame
  • add prompt weight blending between frames
  • bring back protobuf install
  • add universal frame loader

v0.16.4-5:

  • add masked prompts support for controlnet_multi-internal mode
  • add controlnet low vram mode

v0.16.6:

  • add masked prompts for other modes
  • fix undefined mask error

v0.16.8:

  • fix consistency error between identical frames
  • add ffmpeg deflicker option to video export (dfl postfix)
  • export video with inv postfix for inverted mask video
  • add sd_batch_size, normalize_prompt_weights, mask_paths, deflicker_scale, deflicker_latent_scale to gui/saved settings
  • fix compare settings not working in new run

v0.16.9:

  • fix reference controlnet not working with multiprompt

v0.16.10:

  • disable ffmpeg deflicker for local install

30.06.2023:

  • moved to nightly

05.07.2023:

  • fix torchmetrics version, thx to tomatoslasher

28.07.2023:

  • fix pillow error

v0.16.13, 24.08.2023:

  • fix safetensors error

Multiple prompts

You can now use multiple prompts per frame. Just like this:

{0:['a cat','a dog']}

In this case, with no weights specified, each prompt will get a weight of 1.

You can specify weights like this: {0:['a cat:2','a dog:0.5']}

The weights should be at the end of the prompt.
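
As an illustration, here is a minimal sketch of how a trailing weight can be split off a prompt string (split_prompt_weight is a hypothetical helper, not the notebook's actual parser), with the default weight of 1:

def split_prompt_weight(prompt):
    # split a trailing ':<number>' weight off a prompt, defaulting to 1
    text, sep, tail = prompt.rpartition(':')
    if sep:
        try:
            return text, float(tail)
        except ValueError:
            pass  # no numeric weight at the end, keep the whole string
    return prompt, 1.0

print(split_prompt_weight('a cat:2'))  # ('a cat', 2.0)
print(split_prompt_weight('a dog'))    # ('a dog', 1.0)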

normalize_prompt_weights: enable to normalize weights so they add up to 1.

For example, this prompt {0:['a cat:2','a dog:0.5']} with normalize_prompt_weights on will effectively have weights {0:['a cat:0.8','a dog:0.2']}
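
The normalization itself is just dividing each weight by the total; a one-liner to check the numbers:

weights = [2.0, 0.5]                              # 'a cat:2', 'a dog:0.5'
normalized = [w / sum(weights) for w in weights]  # [0.8, 0.2]
print(normalized)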

Prompt weights can be animated, but a weight is applied to the prompt at that position in the list, not to the exact text. So {0:['prompt1:1', 'prompt2:0'], 5: ['prompt1:0', 'prompt3:1']} will blend the weights but not the prompts: you will have prompt2 until frame 5, then it will be replaced with prompt3, but the weights will be animated, so the effective prompt for a frame halfway between 0 and 5 will look like ['prompt1:0.5', 'prompt2:0.5'].

You can have a different number of prompts per frame; the weights for prompts missing in a frame will be set to 0.

For example, if you have:

{0:['a cat:1', 'a landscape:0'], 5: ['a cat:0', 'a landscape:1'], 10:['a cat:0', 'a landscape:1', 'a galaxy:1']}

The 'a galaxy' prompt will have 0 weight for all frames where it's missing, and weight 1 at frame 10.
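
Putting the blending and the missing-prompt rule together, a rough sketch (assuming plain linear interpolation between keyframes; the actual schedule interpolation may differ):

def blend_weights(weights_a, weights_b, t):
    # pad the shorter list with zeros, so prompts missing from a keyframe get weight 0
    n = max(len(weights_a), len(weights_b))
    a = weights_a + [0.0] * (n - len(weights_a))
    b = weights_b + [0.0] * (n - len(weights_b))
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

# keyframes {0: [1, 0], 5: [0, 1]}, halfway in between (t=0.5):
print(blend_weights([1.0, 0.0], [0.0, 1.0], 0.5))  # [0.5, 0.5]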

Each additional prompt adds +50% to VRAM usage and diffusion render times.

Masked prompts

You can now use masks for your prompts. The logic is a bit complicated, but I hope you'll get the idea.

You can use masks if you have more than one prompt.

The first prompt is always the background prompt; you don't need a mask for it.
If you decide to use masks, you will need to provide them for every prompt other than the 1st one. Each next prompt+mask is placed on top of the previous one, and only the white areas of the mask are preserved. For example, if your 2nd prompt's mask completely covers the 1st prompt's mask, you will not see the 1st prompt in the output at all.

You need to specify the path to your mask frames/video in the mask_paths variable. For 2 prompts you will need 1 mask, for 3 prompts - 2 masks, etc.

Leave mask_paths=[] to disable prompt masks. Enabling prompt masks will effectively disable prompt weights.
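
To illustrate the layering order (a hypothetical sketch, not the actual WarpFusion code), think of it as each prompt after the first claiming the pixels where its mask is white:

import numpy as np

def prompt_owner_map(masks, height, width):
    # masks: one float array in [0, 1] per prompt after the first
    owner = np.zeros((height, width), dtype=np.int32)  # 0 = background prompt
    for i, mask in enumerate(masks, start=1):
        owner[mask > 0.5] = i  # later prompts cover earlier ones
    return owner

h = w = 4
m1 = np.zeros((h, w)); m1[:, 2:] = 1.0  # prompt 1 owns the right half
m2 = np.zeros((h, w)); m2[:2, :] = 1.0  # prompt 2 covers the top rows, hiding prompt 1 there
print(prompt_owner_map([m1, m2], h, w))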

Max_batch_size

By default your image is diffused with a batch of 2, consisting of the conditioned and unconditioned images (positive and negative prompt). When we add more prompts, we need to diffuse more images, one extra image per extra prompt.

Depending on your GPU VRAM, you can increase the batch size to process more than 2 prompts at a time.

You can set batch size to 1 to reduce VRAM usage even with 1 prompt.
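
Conceptually, the images are simply processed in slices of at most max_batch_size per model call; a rough sketch with a hypothetical helper:

def chunks(items, max_batch_size):
    # yield successive slices of at most max_batch_size items
    for i in range(0, len(items), max_batch_size):
        yield items[i:i + max_batch_size]

# 3 prompts + 1 negative = 4 images to diffuse per step;
# with max_batch_size=2 they run as two passes of 2
conds = ['uncond', 'prompt 1', 'prompt 2', 'prompt 3']
print(list(chunks(conds, 2)))  # [['uncond', 'prompt 1'], ['prompt 2', 'prompt 3']]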

Controlnet low vram fix

Enable to juggle controlnets between cpu and gpu on each call. This is very slow, but saves a lot of VRAM. Right now all controlnets are offloaded and loaded to gpu ram once per frame, so that they are only kept on the GPU during diffusion.

With controlnet_low_vram=True all controlnets will stay offloaded to cpu and only be loaded to the gpu when called during diffusion, then offloaded back to cpu, on each diffusion step.
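
The pattern is roughly the following (a simplified sketch with a hypothetical wrapper, not the actual implementation):

import torch

def call_offloaded(controlnet, *args, device='cuda'):
    controlnet.to(device)     # load to gpu just for this call
    with torch.no_grad():
        out = controlnet(*args)
    controlnet.to('cpu')      # offload back to cpu right away
    torch.cuda.empty_cache()  # return the freed VRAM to the pool
    return out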

Fixes

Unused loras should now be kept unloaded in a new run that doesn't use loras.

Local install guide:
https://discord.com/channels/973802253204996116/1067887206125015153/1067888756910215278
https://github.com/Sxela/WarpFusion/blob/main/README.md

Tutorials:
https://youtu.be/HkM-7wxtkGA
https://www.youtube.com/watch?v=FxRTEILPCQQ
https://www.youtube.com/watch?v=wqXy_r_9qw8
https://www.youtube.com/watch?v=VMF7L0czyIg
https://www.youtube.com/watch?v=m8xaPnaooyg

Youtube playlist with settings:
https://www.youtube.com/watch?v=wvvcWm4Snmc&list=PL2cEnissQhlCUgjnGrdvYMwUaDkGemLGq

For tech support and other questions please join our discord server:
https://discord.gg/YrpJRgVcax

Discord is the preferred method, because it is nearly impossible to provide any decent help or tech support via Patreon due to its limited text formatting and inability to add screenshots or videos to comments or DMs.
Error reports in comments will be deleted and reposted in discord.

Comments

Anonymous

Hello can u help me with this error.

Anonymous

OutOfMemoryError Traceback (most recent call last)
in do_run()
   1237     try:
-> 1238         sample, latent, depth_img = run_sd(args, init_image=init_image, skip_timesteps=skip_steps, H=args.side_y,
   1239             W=args.side_x, text_prompt=text_prompt, neg_prompt=neg_prompt, steps=steps,

19 frames

OutOfMemoryError: CUDA out of memory. Tried to allocate 480.00 MiB (GPU 0; 15.77 GiB total capacity; 13.65 GiB already allocated; 96.12 MiB free; 14.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

During handling of the above exception, another exception occurred:

SystemExit Traceback (most recent call last)
SystemExit:

During handling of the above exception, another exception occurred:

SystemExit Traceback (most recent call last)
[... skipping hidden 1 frame]
SystemExit:

During handling of the above exception, another exception occurred:

TypeError Traceback (most recent call last)
[... skipping hidden 1 frame]
/usr/local/lib/python3.10/dist-packages/IPython/core/ultratb.py in find_recursion(etype, value, records)
    380     # first frame (from in to out) that looks different.
    381     if not is_recursion_error(etype, value, records):
--> 382         return len(records), 0
    383
    384     # Select filename, lineno, func_name to track frames with

TypeError: object of type 'NoneType' has no len()

Anonymous

Hi! It stops in "Generate optical flow and consistency maps":

NameError Traceback (most recent call last)
in ()
     15 import zipfile, shutil
     16
---> 17 if (os.path.exists(f'{root_dir}/raft')) and force_download:
     18     try:
     19         shutil.rmtree(f'{root_dir}/raft')

NameError: name 'os' is not defined

Anonymous

I can't do it because google colab disconnects all the time in the 5th, 6th step so I have to start again. Is there any way to solve that?

Anonymous

ok please point me to the sdxl post. I searched on here for "sdxl" and didn't find it.

Anonymous

Hi alex. Please help:

OutOfMemoryError Traceback (most recent call last)
in do_run()
   1237     try:
-> 1238         sample, latent, depth_img = run_sd(args, init_image=init_image, skip_timesteps=skip_steps, H=args.side_y,
   1239             W=args.side_x, text_prompt=text_prompt, neg_prompt=neg_prompt, steps=steps,

21 frames

OutOfMemoryError: CUDA out of memory. Tried to allocate 480.00 MiB. GPU 0 has a total capacty of 15.77 GiB of which 224.12 MiB is free. Process 13149 has 15.55 GiB memory in use. Of the allocated memory 13.66 GiB is allocated by PyTorch, and 456.09 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I'm unsure how to solve this.

Lawrence Jordan

I can't get past this error:

NameError Traceback (most recent call last)
in ()
    329
    330 gui_misc = {
--> 331     "user_comment": Textarea(value=user_comment,layout=Layout(width=f'80%'), description = 'user_comment:', description_tooltip = 'Enter a comment to differentiate between save files.'),
    332     "blend_json_schedules": Checkbox(value=blend_json_schedules, description='blend_json_schedules',indent=True, description_tooltip = 'Smooth values between keyframes.', tooltip = 'Smooth values between keyframes.'),
    333     "VERBOSE": Checkbox(value=VERBOSE,description='VERBOSE',indent=True, description_tooltip = 'Print all logs'),

NameError: name 'user_comment' is not defined

I'm using this settings file:

{ "text_prompts": { "0": [ "illustration of a robot with futuristic glasses, elegant, (8k, masterpiece:1.2), a detailed cyberpunk, cyberpunk 2077, neon, cinematic, backlighting, realism, heavy glow makeup, model photoshoot style, very long hair, intricate, elegant, hightech, trending on artstation" ] }, "user_comment": "testing cc layers", "image_prompts": {}, "range_scale": 0, "sat_scale": 0.0, "max_frames": 124, "interp_spline": "Linear", "init_image": "", "clamp_grad": true, "clamp_max": 1.0, "seed": 4275770367, "width": 704, "height": 704, "diffusion_model": "stable_diffusion", "diffusion_steps": 1000, "video_init_path": "F:\\code\\WarpFusion\\v5.27.5\\Videos\\Mirror 720x720.mov", "extract_nth_frame": 2, "flow_video_init_path": null, "flow_extract_nth_frame": 1, "video_init_seed_continuity": false, "turbo_mode": false, "turbo_steps": 3, "turbo_preroll": 1, "flow_warp": true, "check_consistency": true, "turbo_frame_skips_steps": null, "forward_weights_clip": 0.0, "forward_weights_clip_turbo_step": 0.0, "padding_ratio": 0.2, "padding_mode": "reflect", "consistency_blur": 1.0, "inpaint_blend": 0, "match_color_strength": 0.0, "high_brightness_threshold": 180, "high_brightness_adjust_ratio": 0.97, "low_brightness_threshold": 40, "low_brightness_adjust_ratio": 1.03, "stop_early": 0, "high_brightness_adjust_fix_amount": 2, "low_brightness_adjust_fix_amount": 2, "max_brightness_threshold": 254, "min_brightness_threshold": 1, "enable_adjust_brightness": false, "dynamic_thresh": 30.0, "warp_interp": 1, "fixed_code": true, "blend_code": 0.1, "normalize_code": true, "mask_result": false, "reverse_cc_order": true, "flow_lq": true, "use_predicted_noise": false, "clip_guidance_scale": 0, "clip_type": "ViT-H-14", "clip_pretrain": "laion2b_s32b_b79k", "missed_consistency_weight": 1.0, "overshoot_consistency_weight": 1.0, "edges_consistency_weight": 1.0, "style_strength_schedule": [ 0.8, 0.5 ], "flow_blend_schedule": [ 0.97 ], "steps_schedule": { "0": 35 }, "init_scale_schedule": [ 0, 0 ], "latent_scale_schedule": [ 0, 50 ], "latent_scale_template": "", "init_scale_template": "", "steps_template": "", "style_strength_template": [ 0.5, 0.6, 0.33, 2 ], "flow_blend_template": [ 0.99, 0.0, 0.33, 1 ], "make_schedules": false, "normalize_latent": "off", "normalize_latent_offset": 0, "colormatch_frame": "stylized_frame", "use_karras_noise": false, "end_karras_ramp_early": false, "use_background_mask": false, "apply_mask_after_warp": true, "background": "init_video", "background_source": "red", "mask_source": "none", "extract_background_mask": false, "mask_video_path": "", "negative_prompts": { "0": [ " cleavage, revealing, cut off, bad, boring background, simple background, More_than_two_legs, more_than_two_arms, (fat), ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands,
((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), ((extra arms)), ((extra legs)), mutated hands, (fused fingers), (too many fingers), ((long neck)), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist's name" ] }, "invert_mask": false, "warp_strength": 1.0, "flow_override_map": [], "cfg_scale_schedule": [ 15 ], "respect_sched": true, "color_match_frame_str": 0.2, "colormatch_offset": 0, "latent_fixed_mean": 0.0, "latent_fixed_std": 0.9, "colormatch_method": "LAB", "colormatch_regrain": null, "warp_mode": "use_image", "use_patchmatch_inpaiting": 0.0, "blend_latent_to_init": 0.0, "warp_towards_init": "off", "init_grad": false, "grad_denoised": true, "colormatch_after": false, "colormatch_turbo": false, "model_version": "control_multi", "depth_source": "init", "warp_num_k": 128, "warp_forward": false, "sampler": "sample_euler", "mask_clip": [ 0, 255 ], "inpainting_mask_weight": 1.0, "inverse_inpainting_mask": false, "model_path": "F:\\code\\WarpFusion\\v5.27.5\\models\\deliberate_v2.safetensors", "diff_override": [], "image_scale_schedule": { "0": 1.5, "1": 2 }, "image_scale_template": null, "frame_range": [ 0, 0 ], "detect_resolution": 768, "bg_threshold": 0.4, "diffuse_inpaint_mask_blur": 25, "diffuse_inpaint_mask_thresh": 0.8, "add_noise_to_latent": true, "noise_upscale_ratio": 1, "fixed_seed": false, "init_latent_fn": "spherical_dist_loss", "value_threshold": 0.1, "distance_threshold": 0.1, "masked_guidance": true, "mask_callback": 0.4, "quantize": true, "cb_noise_upscale_ratio": 1, "cb_add_noise_to_latent": true, "cb_use_start_code": true, "cb_fixed_code": false, "cb_norm_latent": false, "guidance_use_start_code": true, "offload_model": true, "controlnet_preprocess": true, "small_controlnet_model_path": "F:\\code\\WarpFusion\\v5.27.5/ControlNet/models/control_sd15_hed_small.safetensors", "use_scale": false, "g_invert_mask": false, "controlnet_multimodel": "{\"control_sd15_depth\": {\"weight\": 1, \"start\": 0, \"end\": 1}, \"control_sd15_canny\": {\"weight\": 1, \"start\": 0, \"end\": 1}, \"control_sd15_hed\": {\"weight\": 1, \"start\": 0, \"end\": 1}}", "img_zero_uncond": false, "do_softcap": true, "softcap_thresh": 0.9, "softcap_q": 1.0, "deflicker_latent_scale": 0, "deflicker_scale": 0.0, "controlnet_multimodel_mode": "internal", "no_half_vae": false, "temporalnet_source": "stylized", "temporalnet_skip_1st_frame": true, "rec_randomness": 0.0, "rec_source": "init", "rec_cfg": 1.0, "rec_prompts": { "0": [ "a beautiful highly detailed most beautiful (woman) ever" ] }, "inpainting_mask_source": null, "rec_steps_pct": 1.0 }

Lawrence Jordan

Thanks. I'm attempting this right now. What if I want to do a re-run with some prompt changes, same thing?