
Sort of a disclaimer: don't dive headfirst into a nightly build if you're planning to use it for your current project which is already past its deadline - you'll have a bad day :D

ATTENTION!  MULTI-CONTROLNET (with at least 2 controlnets) REQUIRES 16GB OF VRAM @ 720p. IF YOU ARE RUNNING LOW ON VRAM, BE PREPARED TO USE A GOOGLE COLAB HOSTED ENV OR TRY RUNNING AT YOUR OWN RISK :D

Happy International Women's Day!
No better way to celebrate it than by warping! :D

Notebook download - https://www.patreon.com/posts/79679221

Changelog:

  • add MultiControlNet
  • add MultiControlNet autodownloader/loader
  • add MultiControlNet order, weight, start/end steps, internal/external mode
  • add MultiControlNet/Annotator cpu offload mode
  • add empty negative image condition
  • add softcap image range scaler
  • update model guidance fn

MultiControlNet

Added MultiControlNet.

Consider using MultiControlNet by default, even if you only need 1 model. It adds options that are otherwise unavailable in non-multi controlnet mode, like per-model weight or controlnet start/end steps.

How to use:

Init Settings

Go to Load up a stable -> define SD + K functions, load model -> model_version -> control_multi

use_small_controlnet - True

small_controlnet_model_path - leave empty

download_control_model - True

force_download - Enable if some files appear to be corrupt; disable if everything is ok.

You can then specify a path to your custom v1.x checkpoint and pick one or more controlnet models via the checkboxes below. Models that are not available locally will be downloaded. You can redefine the list and order of controlnet models later in the stable-settings cell.

Controlnets and their annotators will be automatically loaded, unloaded, downloaded, etc., depending on your settings in the stable-settings cell.
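To make the download logic concrete, here is a rough sketch of the "download if missing, re-download if forced" behavior described above. The URLs and helper names are my own illustrative assumptions, not the notebook's actual downloader code:

import os
from urllib.request import urlretrieve

# Illustrative sketch only - these URLs and names are assumptions,
# not the notebook's actual download code.
CONTROLNET_URLS = {
    "control_sd15_depth": "https://huggingface.co/lllyasviel/ControlNet/resolve/main/models/control_sd15_depth.pth",
    "control_sd15_canny": "https://huggingface.co/lllyasviel/ControlNet/resolve/main/models/control_sd15_canny.pth",
}

def ensure_controlnet(name, models_dir, force_download=False):
    # fetch a checkpoint only if it is missing locally (or force_download is set)
    os.makedirs(models_dir, exist_ok=True)
    path = os.path.join(models_dir, name + ".pth")
    if force_download or not os.path.exists(path):
        urlretrieve(CONTROLNET_URLS[name], path)
    return path

for model in ["control_sd15_depth", "control_sd15_canny"]:
    ensure_controlnet(model, "./ControlNet/models")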

Runtime settings

controlnet_multimodel_mode

External or internal. Internal mode sums the controlnet outputs before feeding them into the diffusion model; external mode runs the diffusion model once per controlnet and combines the resulting outputs. External seems slower but smoother, and uses less VRAM. See the sketch after the diagrams below.

External mode:
controlnet1 -> diffusion -> output1            
controlnet2 -> diffusion -> output2
weighted sum(output1 + output2) -> final result

Internal mode:
weighted sum(controlnet1 + controlnet2) -> diffusion -> final result
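Here is a minimal Python sketch of both modes. All function names and shapes are stand-ins for illustration, not the notebook's actual API; the point is that internal mode sums the control signals and runs diffusion once, while external mode needs one diffusion pass per controlnet:

import torch

def controlnet(x, idx):
    # stand-in: pretend each controlnet returns a control residual for the UNet
    return x * 0.1 * (idx + 1)

def diffusion(x, control):
    # stand-in: pretend the UNet consumes the control residual and predicts noise
    return x + control

def internal_mode(x, weights):
    # sum the weighted controlnet outputs, then run diffusion once
    control = sum(w * controlnet(x, i) for i, w in enumerate(weights))
    return diffusion(x, control)

def external_mode(x, weights):
    # run diffusion once per controlnet, then combine the predictions;
    # slower (one pass per model) but only one control branch is live
    # at a time, hence the lower VRAM use
    preds = [diffusion(x, controlnet(x, i)) for i in range(len(weights))]
    return sum(w * p for w, p in zip(weights, preds)) / sum(weights)

x = torch.randn(1, 4, 90, 160)  # latent-sized tensor, e.g. 720x1280 / 8
out_internal = internal_mode(x, weights=[1.0, 0.5])
out_external = external_mode(x, weights=[1.0, 0.5])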

controlnet_multimodel settings

This is a dictionary containing a list of controlnet models. Order doesn't really matter, as their results are summed up.

Format example:
controlnet_multimodel = {
    "control_sd15_depth": {
        "weight": 1,
        "start": 0,
        "end": 0.8
    },
    "control_sd15_canny": {
        "weight": 0,
        "start": 0,
        "end": 1
    }
}

weight (only available in internal mode) - weight of the model predictions in the output
start - % of total steps at which this controlnet begins working
end - % of total steps at which this controlnet stops working

This way you can limit the effect of certain models, or mix more controlnets than you can fit into VRAM by making sure only a limited number of models runs at any given step (see the sketch below the diagram).

  1. controlnet steps are counted relative to total steps, not to the effective steps left after style strength skipping
  2. Example: you have 50 steps and 0.3 style strength, so diffusion actually runs the last 0.3x50 = 15 steps (steps 35-50). A controlnet with start 0.2 and end 0.8 will run from step 10 to step 40, overlapping with the steps actually taken at steps 35-40
  3. [||||||||||] 50 steps
    [-------|||] 0.3 style strength (effective steps: 0.3x50 = 15)
    [--||||||--] controlnet working range with start = 0.2 and end = 0.8 (effective steps from 0.2x50 = 10 to 0.8x50 = 40)
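A rough sketch of the gating logic (hypothetical code, not the notebook's): start/end are fractions of total steps, and a model is simply skipped outside its window:

controlnet_multimodel = {
    "control_sd15_depth": {"weight": 1, "start": 0, "end": 0.8},
    "control_sd15_canny": {"weight": 0, "start": 0, "end": 1},
}

def active_models(step, total_steps, models):
    # start/end are fractions of *total* steps; weight plays no part in
    # gating, it only scales the output (internal mode)
    frac = step / total_steps
    return [name for name, cfg in models.items()
            if cfg["start"] <= frac <= cfg["end"]]

total_steps = 50
for step in (5, 25, 40, 45):
    print(step, active_models(step, total_steps, controlnet_multimodel))
# at step 45 only canny remains: depth stops at 0.8 x 50 = 40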

Empty negative image condition

img_zero_uncond

By default, image-conditioned models use the same image for negative conditioning, as if you had specified the same text in both the positive and negative prompts (i.e. the positive and negative image conditionings are identical).

You can use an empty negative condition by enabling this.
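Conceptually (an illustrative sketch, not the notebook's actual internals), the flag swaps the negative image conditioning for zeros:

import torch

c_img = torch.randn(1, 4, 64, 64)     # positive image conditioning (illustrative shape)
img_zero_uncond = True

if img_zero_uncond:
    uc_img = torch.zeros_like(c_img)  # empty negative condition
else:
    uc_img = c_img                    # default: same image on both branches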

Softcap image range scaler

do_softcap

Softly clamps excessive latent values. Reduces the feedback loop effect a bit.

softcap_thresh

Scale down absolute values above this threshold. Latents are clamped to the [-1:1] range, so with a threshold of 0.9, values above 0.9 are downscaled to fit into that range: [-1.5:1.5] will be scaled to [-1:1], but only absolute values over 0.9 will be affected.

softcap_q

Percentile to downscale. 1 - downscale the full range, including outliers; 0.9 - downscale only the 90% of values above the threshold and clamp the remaining 10%.
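Here is one way such a soft cap could look (an assumption-laden sketch, not the notebook's exact implementation): values below the threshold pass through, the band above it is compressed so the chosen percentile lands at 1.0, and anything past that percentile is hard-clamped:

import torch

def softcap(latent, thresh=0.9, q=1.0):
    # reference peak: the q-th percentile of absolute values
    # (q < 1 ignores outliers, which then get hard-clamped below)
    peak = torch.quantile(latent.abs().flatten(), q)
    if peak <= thresh:
        return latent  # nothing exceeds the threshold, pass through
    sign, mag = latent.sign(), latent.abs()
    # compress the (thresh, peak] band into (thresh, 1.0]
    scaled = thresh + (mag - thresh) * (1 - thresh) / (peak - thresh)
    out = torch.where(mag > thresh, sign * scaled, latent)
    return out.clamp(-1, 1)  # clamp whatever lies beyond the q percentile

latent = torch.randn(1, 4, 64, 64) * 0.6
capped = softcap(latent, thresh=0.9, q=0.9)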

Notebook download - https://www.patreon.com/posts/79679221

A reminder:

Changes in the GUI will not be saved into the notebook, but if you run it with new settings, they will be saved to the settings.txt file as usual.

You can load settings in the misc tab.

You do not need to rerun the GUI cell after changing its settings.

Local install guide:
https://discord.com/channels/973802253204996116/1067887206125015153/1067888756910215278
https://github.com/Sxela/WarpFusion/blob/main/README.md

Youtube playlist with settings:
https://www.youtube.com/watch?v=wvvcWm4Snmc&list=PL2cEnissQhlCUgjnGrdvYMwUaDkGemLGq

For tech support and other questions please join our discord server:
https://discord.gg/sayE6j2sdP

Discord is the preferred method, because it is nearly impossible to provide any decent help or tech support via Patreon due to its limited text formatting and inability to add screenshots or videos to comments or DMs.
Error reports in comments will be deleted and reposted in discord.

Files

Stable WarpFusion v0.9 Nightly MultiControlNet - 2

Settings: { "text_prompts": { "0": [ "a beautiful highly detailed anime of cyberpunk mechanical augmented most beautiful male in a breathing mask, cyberpunk 2077 edgerunners anime, neon, dystopian, hightech, trending on artstation, anime, makoto shinkai" ] }, "user_comment": "multicontrol\n", "image_prompts": {}, "range_scale": 0, "sat_scale": 0.0, "max_frames": 562, "interp_spline": "Linear", "init_image": "", "clamp_grad": true, "clamp_max": 0.8, "seed": 4275770367, "width": 704, "height": 1280, "diffusion_model": "stable_diffusion", "diffusion_steps": 1000, "video_init_path": "", "extract_nth_frame": 1, "flow_video_init_path": null, "flow_extract_nth_frame": 1, "video_init_seed_continuity": false, "turbo_mode": false, "turbo_steps": 3, "turbo_preroll": 1, "flow_warp": true, "check_consistency": true, "turbo_frame_skips_steps": null, "forward_weights_clip": 0.0, "forward_weights_clip_turbo_step": 0.0, "padding_ratio": 0.2, "padding_mode": "reflect", "consistency_blur": 1.0, "inpaint_blend": 0, "match_color_strength": 0.0, "high_brightness_threshold": 180, "high_brightness_adjust_ratio": 0.97, "low_brightness_threshold": 40, "low_brightness_adjust_ratio": 1.03, "stop_early": 0, "high_brightness_adjust_fix_amount": 2, "low_brightness_adjust_fix_amount": 2, "max_brightness_threshold": 254, "min_brightness_threshold": 1, "enable_adjust_brightness": false, "dynamic_thresh": 30.0, "warp_interp": 1, "fixed_code": false, "blend_code": 1.0, "normalize_code": true, "mask_result": false, "reverse_cc_order": true, "flow_lq": true, "use_predicted_noise": false, "clip_guidance_scale": 0, "clip_type": "ViT-H-14", "clip_pretrain": "laion2b_s32b_b79k", "missed_consistency_weight": 1.0, "overshoot_consistency_weight": 1.0, "edges_consistency_weight": 1.0, "style_strength_schedule": [ 1, 0.9, 0.5 ], "flow_blend_schedule": [ 0, 0.9 ], "steps_schedule": { "0": 50, "1": 35 }, "init_scale_schedule": [ 0, 0 ], "latent_scale_schedule": [ 0, 0, 100 ], "make_schedules": false, "normalize_latent": "off", "normalize_latent_offset": 0, "colormatch_frame": "stylized_frame", "use_karras_noise": false, "end_karras_ramp_early": false, "use_background_mask": false, "apply_mask_after_warp": true, "background": "init_video", "background_source": "red", "mask_source": "none", "extract_background_mask": false, "mask_video_path": "", "negative_prompts": { "0": [ "" ] }, "invert_mask": false, "warp_strength": 1.0, "flow_override_map": [], "cfg_scale_schedule": [ 15, 14 ], "respect_sched": true, "color_match_frame_str": 0.7, "colormatch_offset": 0, "latent_fixed_mean": 0.0, "latent_fixed_std": 0.9, "colormatch_method": "PDF", "colormatch_regrain": null, "warp_mode": "use_image", "use_patchmatch_inpaiting": 0.0, "blend_latent_to_init": 0.0, "warp_towards_init": "off", "init_grad": false, "grad_denoised": true, "colormatch_after": true, "colormatch_turbo": false, "model_version": "control_multi", "depth_source": "init", "warp_forward": false, "sampler": "sample_euler", "inpainting_mask_weight": 1.0, "inverse_inpainting_mask": false, "model_path": "protogenV22Anime_22.ckpt", "diff_override": [], "image_scale_schedule": { "0": 1.5, "1": 2 }, "image_scale_template": null, "frame_range": [ 1, 999 ], "detect_resolution": 768, "bg_threshold": 0.4, "add_noise_to_latent": true, "noise_upscale_ratio": 1, "fixed_seed": false, "init_latent_fn": "spherical_dist_loss", "value_threshold": 0.1, "distance_threshold": 0.1, "masked_guidance": true, "mask_callback": 0.4, "quantize": true, "cb_noise_upscale_ratio": 1, "cb_add_noise_to_latent": true, 
"cb_use_start_code": true, "cb_fixed_code": false, "cb_norm_latent": false, "guidance_use_start_code": true, "offload_model": true, "controlnet_preprocess": true, "small_controlnet_model_path": "", "use_scale": false, "g_invert_mask": false, "controlnet_multimodel": "{\"control_sd15_depth\": {\"weight\": 1, \"start\": 0, \"end\": 0.8}, \"control_sd15_canny\": {\"weight\": 1, \"start\": 0, \"end\": 1}, \"control_sd15_hed\": {\"weight\": 1, \"start\": 0.7, \"end\": 1}}", "img_zero_uncond": false, "do_softcap": true, "softcap_thresh": 0.9, "softcap_q": 1.0, }

Comments

Julian

I was just about to ask you if there was any release date in mind! Great job, pal!

Christian Zurita

Hi, excuse me, maybe do you have a Colab of this project?