
A new file - 2_6) BG_Changer v2 has been added to the GDrive folder:

https://drive.google.com/drive/folders/1HoZxKUX7WAg7ObqP00R4oIv48sXCEryQ

Now you can have a consistent custom single background or a background sequence, with better backgrounds that stay close to the input image.

Prerequisites

1) If you are new to this workflow, watch the video and tutorial first to understand how things work:

2) Controlnet - SoftedgePlus and AutoMask from this workflow:  https://www.patreon.com/posts/update-v3-0-lcm-96482739



________________________________________________________________________






Breakdown 1 [ Single Background ]

Render Video Link : https://youtube.com/shorts/Xayw0JYn6yI









ControlNet weights:
1) SoftedgePlus [ 0.9, 0, 0.85 ]
2) OpenPose [ 0.8, 0, 0.8 ]
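For reference, here is a minimal sketch of how to read those triplets. It assumes the three numbers map to the strength, start percent, and end percent inputs of ComfyUI's "Apply ControlNet (Advanced)" node - that mapping is my reading, so verify it against the workflow file:

```python
from dataclasses import dataclass

# Minimal sketch: each [a, b, c] triplet read as the strength /
# start_percent / end_percent inputs of "Apply ControlNet (Advanced)".
# (This mapping is an assumption - check it against the workflow file.)

@dataclass
class ControlNetSchedule:
    name: str
    strength: float       # how strongly the hint steers the render
    start_percent: float  # fraction of the sampling where it becomes active
    end_percent: float    # fraction of the sampling where it is released

SCHEDULES = [
    ControlNetSchedule("SoftedgePlus", 0.9, 0.0, 0.85),
    ControlNetSchedule("OpenPose",     0.8, 0.0, 0.80),
]

for cn in SCHEDULES:
    print(f"{cn.name}: strength {cn.strength}, "
          f"active for the first {cn.end_percent:.0%} of the steps")
```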



The Softedge and Auto Mask passes were generated from file 1_1) Controlnet_Automasking_SoftedgePlus: https://www.patreon.com/posts/update-v3-0-lcm-96482739

Since this is not an LCM workflow, the settings are updated to the following in the workflow file.

Backgrounds: 

1) Unmute the Single group and connect it to the Image Adjustment node to use it (see the picture above for reference).

2) Upload a single image into the Background Single node.




One by one, the above backgrounds were uploaded to the background node and rendered.


Con:

Flickering is observed in the Single Background renders... I will try to fix this.

It is caused by injecting the background latents in the middle of the render process, so it acts like an img2img render and flickers.
The flickering can be reduced by decreasing the value in the "Start Highres Fix from nth Step" node, but then the original background will be lost.
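A rough way to think about that trade-off: injecting the background latent at step n of N total steps leaves roughly (N - n)/N of the denoising to run on top of it, like an img2img pass. A minimal sketch with made-up step counts (not the workflow's actual values):

```python
# Illustrative sketch of the "Start Highres Fix from nth Step" trade-off.
# The step counts below are placeholders, not taken from the workflow.

def denoise_left(total_steps: int, inject_step: int) -> float:
    """Fraction of the diffusion still to run after the background latent is injected."""
    return max(total_steps - inject_step, 0) / total_steps

TOTAL = 20
for inject in (4, 8, 12, 16):
    d = denoise_left(TOTAL, inject)
    behaviour = ("background mostly kept, more flicker" if d < 0.5
                 else "less flicker, but the original background drifts away")
    print(f"inject at step {inject:>2}/{TOTAL}: ~{d:.0%} denoise left -> {behaviour}")
```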


------------------------------------------------------------------------------------------------------------



Breakdown 2 - [ Sequence Background ] 



Render Video : https://youtube.com/shorts/wzfYILSeTMA


All ControlNet settings, masks, and other settings were the same as in the Single Background breakdown.


Background 


The fire video was brought into After Effects, cropped, and rendered out as an image sequence, and the folder path was then used as the input in the background node.

Like this:



Unmute the Background Sequence group and make the connection to the Image Adjustment node.
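If you do not have After Effects, the crop-and-export step can also be scripted. A minimal sketch using OpenCV (paths and the crop box are placeholders, not part of the workflow):

```python
import os
import cv2

# Minimal sketch: crop a video and write it out as an image sequence that
# the Background Sequence node can read from a folder.
SRC = "fire_source.mp4"          # placeholder source video
OUT_DIR = "fire_frames"          # placeholder output folder
X, Y, W, H = 200, 0, 1024, 576   # left, top, width, height of the crop

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(SRC)

index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    crop = frame[Y:Y + H, X:X + W]
    cv2.imwrite(os.path.join(OUT_DIR, f"frame_{index:04d}.png"), crop)
    index += 1

cap.release()
print(f"Wrote {index} frames to {OUT_DIR}")
```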


For Part 1, the normal yellow fire was used. For Part 2, it was hue-shifted to blue with the adjustment above so that, after rendering, it looks like an abstract "water fire".
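The same kind of hue shift can be approximated outside the workflow too. A minimal sketch (OpenCV, with an assumed +90 hue offset to push yellow toward blue - not the exact Image Adjustment node values):

```python
import cv2
import numpy as np

# Minimal sketch of a yellow -> blue hue shift on one fire frame.
# In OpenCV's HSV space hue runs 0-179: yellow sits near ~30 and blue
# near ~120, so an offset of about +90 pushes the flames toward blue.
frame = cv2.imread("fire_frames/frame_0000.png")   # placeholder path
h, s, v = cv2.split(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV))

h = ((h.astype(np.int16) + 90) % 180).astype(np.uint8)

blue_fire = cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)
cv2.imwrite("frame_0000_blue.png", blue_fire)
```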





Multiple LoRAs and weights were tested, and the 4th one had the most "water fire" look with similar fire physics and motion, so I went with it.


Face Fix Reactor


1) Images were refined.

2) Then the faces were fixed with the v2.1 Face Fix file.

3) Then they were face-swapped with Scarlett Johansson using the old "5) Batch Face Swap - ReActor [Experimental]" workflow with the following settings:

 




 

Then all frames were composited in After Effects.


 ___________________________________________________________________



Models and Lora List



AniMerge Model was used for making the above renders: https://civitai.com/models/144249?modelVersionId=250801  



Lora List : 




Some of the LoRAs are renamed on my PC; please find them in the list above or search for similar LoRAs on Civitai.




_________________________________________________________________________


Downloads Raw

The Prompts and Settings folder has all the raw files of the workflows mentioned above for study. Drag and drop them into the ComfyUI workspace to use them.



Prompts and Settings Folder : https://drive.google.com/drive/folders/1nCOjN1qETfRiQGmQhiyM7f3TUTTlulPA?q=parent:1nCOjN1qETfRiQGmQhiyM7f3TUTTlulPA


The workflows shown in the photo embeds above might differ from the main uploaded 2_6) BG Changer workflow, as they were prototype workflows for testing. Use the main 2_6) workflow to get the best results.




____________________________________________________________________


RENDER VIDEOS :


1) Bad Devil - YT : https://youtube.com/shorts/wzfYILSeTMA 


Source : https://www.youtube.com/watch?v=mNwpS_GrIPE&ab_channel=PINKEU 



2) Collide - https://youtube.com/shorts/Xayw0JYn6yI 


Source: https://www.youtube.com/shorts/hNlO7BfvpVA 

 

________________________________________________________________________



Follow me on CivitAi : https://civitai.com/user/Jerry_Davos 


I'm planning something special on CivitAI for all of my workflow users...  :D


Will come back with something new soon...
 

Love you all

Byeeeeee <333



- Jerry Davos

Comments

minimo

When I watch videos, I often see the details of clothes changing. Is there a way to fix this? Should there be a step to extract more elaborate raw images?

Jerry Davos

The changing is natural for AnimateDiff... that's how its diffusion is able to animate from one element to another. You can minimise this "changing" by using clothing LoRAs like in this post: see the bikini one... a blue bikini LoRA was used to maintain cloth consistency: https://www.patreon.com/posts/94523632?utm_campaign=postshare_creator

minimo

Hi Jerry, I have one question. If I want to apply only a single background, can I use "2_3) BG Changer"? Or can I use "2_6) BG Changer v2 - No LCM" by disconnecting the background sequence node and then connecting the single node to the image adjustment or upscale image node? What is the correct way?

minimo

I have one more question. During the raw image creation stage, the height and width keep increasing at each stage. E.g. [1] controlnet -> input 1024 x 576 / output 1024 x 576; [2] animation raw -> input 1024 x 576 / output 1232 x 688; input 1232 x 688 / output 1480 x 824. Is there any way to solve this? :(