
Stable Video Diffusion plus AnimateDiff Refiner Beta Release

*Green Nodes are Input Nodes

*Purple Nodes are Image Scaling Nodes

Use SVD as usual. This workflow adds an AnimateDiff refiner pass: using SVD itself for the refiner gave poor results, and using normal SD models for the refiner caused flickering, so AnimateDiff is used instead.
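To make the structure easier to follow, here is a minimal sketch of the two-pass idea in plain Python. The helpers (run_svd, upscale_frames, run_animatediff_refiner) are hypothetical stand-ins for the corresponding node groups in the workflow, not real ComfyUI API calls.

```python
# Sketch of the two-pass idea (hypothetical helpers, not the ComfyUI API).

def svd_plus_animatediff_refiner(start_image, seed):
    # Pass 1: SVD animates the still image into raw video frames.
    frames = run_svd(start_image, seed=seed)

    # Pass 2: upscale the frames, then refine them with an SD 1.5 model
    # driven by AnimateDiff. AnimateDiff keeps the frames temporally
    # consistent, which is why it replaces a plain SD refiner here.
    frames = upscale_frames(frames, factor=1.4)
    return run_animatediff_refiner(frames, denoise=0.5, seed=seed)
```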

HOW TO USE:

SVD Setup

  • Checkpoint - SVD for 14 frames | SVD_XT for 25 frames. https://comfyanonymous.github.io/ComfyUI_examples/video

  • SVD FPS - 6-8 is good

  • SVD Motion Bucket - Under 100 for portraits, 100-200 for others; you can test values up to 500. (If the initial still image has elements captured in motion, like fire, snow, or smoke, use 80-180.)

  • SVD Video Frames - 14 frames for SVD | 25 for SVD_XT

  • SVD Augmentation Level - Adds extra noise in SVD; 0.1-0.3 seems stable. You can test other values.

  • Global Seed - Controls every sampler's seed; changing it also changes the motion.

  • Load Starting Image - Choose the image you want to experiment with.

  • Upscale Image By - Rescale as desired. Check the SVD height and width inputs and keep them under 1K resolution.

  • Enable Custom Resolution - 0 for disabled | 1 for enabled

  • SVD Custom H and W - Enter your custom height and width.

  • Refiner Upscale - Use 1.2-1.5 for the most detailed result. Check the refiner resolution and keep it above 1K for a sharp, fine result.

  • Refiner Denoise - Use 0.4-0.6 for the best result. (Typical starting values for all of these inputs are sketched below.)
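As a quick reference, the starting values described above can be written out as plain Python dicts. The key names are descriptive placeholders, not the exact widget names in the workflow.

```python
# Typical starting values for the green input nodes (descriptive
# placeholder names, not the exact widget names in the workflow).
svd_inputs = {
    "checkpoint": "svd_xt.safetensors",  # use svd.safetensors for 14 frames
    "fps": 8,                            # 6-8 is a good range
    "motion_bucket_id": 120,             # <100 portraits, 100-200 others, test up to 500
    "video_frames": 25,                  # 14 for SVD, 25 for SVD_XT
    "augmentation_level": 0.2,           # extra noise; 0.1-0.3 seems stable
    "enable_custom_resolution": 0,       # 0 = disabled, 1 = enabled
    "custom_width": 1024,                # only used when the toggle above is 1
    "custom_height": 576,                # keep the SVD resolution under 1K
}

refiner_inputs = {
    "upscale_by": 1.4,  # 1.2-1.5; keep the refined resolution above 1K
    "denoise": 0.5,     # 0.4-0.6 gives the best result
}
```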

AnimateDiff Setup

  • Load AD Checkpoint - Load a model that matches your picture: realistic, anime, or cartoon. Try experimenting; the Epic Realism model gave the best results on realistic photos.

  • Lora Stack - Load your desired LoRAs. Artifacts like blue lines can appear when using LoRAs.

  • Prompts - Change the prompts to match your picture and the LoRAs you load (an example refiner setup is sketched below).
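Here is a similarly hedged sketch of the refiner-side inputs. Again these are descriptive placeholder names, and the checkpoint and LoRA filenames are examples only, not files shipped with the workflow.

```python
# Example AnimateDiff refiner inputs (placeholder names, example filenames).
# Keep LoRA strengths moderate: artifacts such as blue lines tend to show
# up when LoRAs are pushed too hard.
animatediff_inputs = {
    "checkpoint": "epicrealism.safetensors",  # any SD 1.5 model matching the image style
    "lora_stack": [
        # (lora file, model strength, clip strength) - example entry
        ("add_detail.safetensors", 0.6, 0.6),
    ],
    "positive_prompt": "high quality, detailed",  # adapt to the picture
    "negative_prompt": "blurry, low quality, artifacts",
}
```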


Tips:

SVD
- If you see no motion or only a panning motion in the video, try changing the seed and the Motion Bucket.
- If you see too much motion or blur, lower the Motion Bucket and change the seed.
- Elements captured in motion, like flowing water, wind, or clouds, tend to produce better motion after the render.

AnimateDiff
- If you see no details in the final render, increase the refiner's upscale value to around 1.5 to give it room to add detail.
- LoRAs work as usual; don't overdo them.

Installation:

1) The workflow file is attached below; you can also download it from here.

2)  AnimateDiff Evolved 

3) SVD, with svd as Model 1 and svd_xt as Model 2 (a download sketch follows this list)

4) Install the remaining custom nodes with ComfyUI Manager ("Install Missing Custom Nodes") or install them from below:
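For step 3, if you would rather fetch the SVD checkpoints from a script than through the browser, here is a minimal sketch using huggingface_hub. The repo ids and filenames are my best understanding; verify them against the Hugging Face model pages, and note the repos may require accepting the license first.

```python
# Download the two SVD checkpoints into ComfyUI's checkpoints folder.
# Requires `pip install huggingface_hub`; repo ids and filenames should be
# double-checked against the Hugging Face model pages.
from huggingface_hub import hf_hub_download

CHECKPOINT_DIR = "ComfyUI/models/checkpoints"  # adjust to your install path

# Model 1: SVD (14 frames)
hf_hub_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid",
    filename="svd.safetensors",
    local_dir=CHECKPOINT_DIR,
)

# Model 2: SVD_XT (25 frames)
hf_hub_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid-xt",
    filename="svd_xt.safetensors",
    local_dir=CHECKPOINT_DIR,
)
```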

---------------------------------------------------------------------------------------------

Warning: If you want to edit the workflow, don't delete GetNode or SetNode; just break the connections and set them aside. They have a bug when deleted: the ComfyUI workspace gets stuck and won't let you click anything else. The fix is to clear the browser cache and clear the past hour's data in your browser settings.

For anyone getting the Python scalers error:

Delete the Dynamic Threshold node from the workflow; the rest will work fine.

__________________________________________________________________________

My Discord Server : https://discord.gg/z9rgJyfPWJ

____________________________

Download Beta Workflow:

Comments

JP H

Hi Jerry, I've hit this bug and I don't know how to fix it. Could you please help me? From the original workflow:

[rgthree] Using rgthree's optimized recursive execution. model_type V_PREDICTION_EDM adm 768 Using xformers attention in VAE Working with z of shape (1, 4, 32, 32) = 4096 dimensions. Using xformers attention in VAE left over keys: dict_keys(['conditioner.embedders.0.open_clip.model.ln_final.bias', 'conditioner.embedders.0.open_clip.model.ln_final.weight', 'conditioner.embedders.0.open_clip.model.logit_scale', 'conditioner.embedders.0.open_clip.model.positional_embedding', 'conditioner.embedders.0.open_clip.model.text_projection', 'conditioner.embedders.0.open_clip.model.token_embedding.weight', 'conditioner.embedders.3.encoder.decoder.conv_in.bias', 'conditioner.embedders.3.encoder.decoder.conv_in.weight', 'conditioner.embedders.3.encoder.decoder.conv_out.bias', 'conditioner.embedders.3.encoder.decoder.conv_out.weight', 'conditioner.embedders.3.encoder.decoder.mid.attn_1.k.bias', 'conditioner.embedders.3.encoder.decoder.mid.attn_1.k.weight', 'conditioner.embedders.3.encoder.decoder.mid.attn_1.norm.bias', 'conditioner.embedders.3.encoder.decoder.mid.attn_1.norm.weight', and so on.

Another is:

ERROR diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight shape '[320, 1024]' is invalid for input of size 245760
ERROR diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight shape '[320, 1024]' is invalid for input of size 245760
ERROR diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight shape '[320, 1024]' is invalid for input of size 245760
ERROR diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight shape '[320, 1024]' is invalid for input of size 245760
ERROR diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 1024]' is invalid for input of size 491520

DAVID OSTLER

I just found your patreon and realized how much of the amazing stuff I've seen over the past few months comes from you. Nice work! Thank you for all your contributions! I'm hoping you might know how to solve the issue I have when trying to run this script. The only thing that confused me during the setup was that I couldn't find "lcm_pytorch_lora_weights.safetensors" Instead I used "pytorch_lora_weights.safetensors". Not sure if that makes a difference here. This is the error: Error occurred when executing KSampler: CUDA error: invalid configuration argument CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. File "/home/phoenix/sd/ComfyUI/execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "/home/phoenix/sd/ComfyUI/execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "/home/phoenix/sd/ComfyUI/execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "/home/phoenix/sd/ComfyUI/nodes.py", line 1344, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) File "/home/phoenix/sd/ComfyUI/nodes.py", line 1314, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, File "/home/phoenix/sd/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample raise e File "/home/phoenix/sd/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations. 
File "/home/phoenix/sd/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/control_reference.py", line 47, in refcn_sample return orig_comfy_sample(model, *args, **kwargs) File "/home/phoenix/sd/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 278, in motion_sample return orig_comfy_sample(model, noise, *args, **kwargs) File "/home/phoenix/sd/ComfyUI/comfy/sample.py", line 37, in sample samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) File "/home/phoenix/sd/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 1428, in KSampler_sample return _KSampler_sample(*args, **kwargs) File "/home/phoenix/sd/ComfyUI/comfy/samplers.py", line 755, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) File "/home/phoenix/sd/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 1451, in sample return _sample(*args, **kwargs) File "/home/phoenix/sd/ComfyUI/comfy/samplers.py", line 657, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) File "/home/phoenix/sd/ComfyUI/comfy/samplers.py", line 644, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) File "/home/phoenix/sd/ComfyUI/comfy/samplers.py", line 623, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) File "/home/phoenix/sd/ComfyUI/comfy/samplers.py", line 534, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) File "/home/phoenix/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/phoenix/sd/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler denoised = model(x, sigma_hat * s_in, **extra_args) File "/home/phoenix/sd/ComfyUI/comfy/samplers.py", line 272, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) File "/home/phoenix/sd/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 974, in __call__ return self.predict_noise(*args, **kwargs) File "/home/phoenix/sd/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 1024, in predict_noise out = super().predict_noise(*args, **kwargs) File "/home/phoenix/sd/ComfyUI/comfy/samplers.py", line 613, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) File "/home/phoenix/sd/ComfyUI/comfy/samplers.py", line 258, in sampling_function out = calc_cond_batch(model, conds, x, timestep, model_options) File "/home/phoenix/sd/ComfyUI/comfy/samplers.py", line 218, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) File "/home/phoenix/sd/ComfyUI/comfy/model_base.py", line 97, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() File 
"/home/phoenix/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/phoenix/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl return forward_call(*args, **kwargs) File "/home/phoenix/sd/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 850, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) File "/home/phoenix/sd/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 40, in forward_timestep_embed x = layer(x, context, time_context, num_video_frames, image_only_indicator, transformer_options) File "/home/phoenix/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/phoenix/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl return forward_call(*args, **kwargs) File "/home/phoenix/sd/ComfyUI/comfy/ldm/modules/attention.py", line 786, in forward x_mix = mix_block(x_mix, context=time_context) #TODO: transformer_options File "/home/phoenix/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/phoenix/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl return forward_call(*args, **kwargs) File "/home/phoenix/sd/ComfyUI/comfy/ldm/modules/attention.py", line 460, in forward return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint) File "/home/phoenix/sd/ComfyUI/comfy/ldm/modules/diffusionmodules/util.py", line 191, in checkpoint return func(*inputs) File "/home/phoenix/sd/ComfyUI/comfy/ldm/modules/attention.py", line 520, in _forward n = self.attn1(n, context=context_attn1, value=value_attn1) File "/home/phoenix/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/phoenix/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl return forward_call(*args, **kwargs) File "/home/phoenix/sd/ComfyUI/comfy/ldm/modules/attention.py", line 412, in forward out = optimized_attention(q, k, v, self.heads) File "/home/phoenix/sd/ComfyUI/comfy/ldm/modules/attention.py", line 327, in attention_xformers out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask) File "/home/phoenix/miniconda3/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention return _memory_efficient_attention( File "/home/phoenix/miniconda3/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 321, in _memory_efficient_attention return _memory_efficient_attention_forward( File "/home/phoenix/miniconda3/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 341, in _memory_efficient_attention_forward out, *_ = op.apply(inp, needs_gradient=False) File "/home/phoenix/miniconda3/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/flash.py", line 458, in apply out, softmax_lse, rng_state = cls.OPERATOR( File "/home/phoenix/.local/lib/python3.10/site-packages/torch/_ops.py", line 755, in __call__ return self._op(*args, **(kwargs or {})) File 
"/home/phoenix/miniconda3/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/flash.py", line 106, in _flash_fwd ) = _C_flashattention.fwd(

Jerry Davos

Heyy, the recent ComfyUI update is breaking many nodes and workflows. Try updating all the custom nodes and ComfyUI itself to the latest version. lcm_pytorch_lora_weights and pytorch_lora_weights are the same file; I just renamed it on my PC. If that doesn't work, the Python version was also updated for the new ComfyUI, so that might also be the issue; you can research which Python version your ComfyUI needs.

Jerry Davos

Heyy, it seems SDXL models are being used somewhere in the workflow. All models should be SD 1.5 models (checkpoints, LoRAs, ControlNet models, motion models); SDXL is not supported in this workflow.