

In this post I will give the breakdown of this render: how to use the mask, some secret tips I found during my experiments, and the raw workflow file attached below.

Prerequisites

1) If you are new to this workflow, watch this video and tutorial first to understand how things work:

2) Controlnet - Softedge and AutoMask from this workflow:  https://www.patreon.com/posts/update-v3-0-lcm-96482739

Q) Why only SoftEdge and Mask?

A) Because this render was aiming for a "cartoon style", which does not require the high detail produced by a LineArt or SoftEdgePlus pass.

Base Workflow File Used - #2_1) Animation Raw_CN_Masked

This is the file used to make the above render; the original raw workflow file is also attached below for study.

BREAKDOWN

1) Model Used - Hellokid2d v1.0

This model was preferred for getting a 2D cartoon-style render.

2) Lora Used - Moana

For getting a cute face and the background trees.

3) Loaders Settings :

The Moana LoRA was used with a weight of around 0.3.
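To see why a low weight like 0.3 keeps the LoRA's influence subtle: a LoRA contributes a low-rank delta that gets merged into the base model's weights, scaled by that weight value. A minimal NumPy sketch of the idea (not ComfyUI's actual loader code; the matrices here are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Base model weight matrix and a low-rank LoRA update of rank r.
W = rng.standard_normal((8, 8))
r = 2
A = rng.standard_normal((r, 8))   # LoRA "down" projection
B = rng.standard_normal((8, r))   # LoRA "up" projection

def apply_lora(W, B, A, weight):
    """Merge the low-rank delta (B @ A) into W, scaled by the LoRA weight."""
    return W + weight * (B @ A)

W_03 = apply_lora(W, B, A, 0.3)   # subtle stylistic influence, as used here
W_10 = apply_lora(W, B, A, 1.0)   # full-strength influence

# A lower weight keeps the merged weights closer to the base model.
print(np.linalg.norm(W_03 - W) < np.linalg.norm(W_10 - W))
```

So at 0.3 the Moana LoRA nudges the style (cute face, trees) without overpowering the Hellokid2d base model.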

4) Workflow Settings and Prompts :

Most settings were left at their defaults; the only thing of high priority was the prompts:

Positive : (masterpiece, top quality, best quality, official art, beautiful and aesthetic), Extreme detailed, red dress, nature in the background, skirt,  cute

Negative : ugly, big head, deformed, bad lighting, cloak, cape, blurry, text, clouds, watermark, extra hands, bad quality, ugly, nude, nipples, breast, titties, boobs, naked chest, naked body, deformed hands, deformed fingers, nostalgic, drawing, painting, bad anatomy, worst quality, blurry, blurred, normal quality, bad focus, tripod, three legs, weird legs, short legs, bag, handbag, 3 hands, 4 hands, three hands

(embedding:BadDream:1) boy, man, male,
(embedding:ng_deepnegative_v1_75t:1),
(embedding:epiCNegative:1),
(embedding:bad-picture-chill-75v:1),
(embedding:AS-YoungV2-neg:1),
(embedding:ERA09NEGV2:1)
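For reference, the `(embedding:Name:weight)` entries above are ComfyUI prompt syntax: each one pulls a textual-inversion embedding into the negative prompt at the given strength. A hypothetical helper that just parses those tokens out of a prompt string (it loads nothing, and ComfyUI also accepts the bare `embedding:Name` form, which this sketch ignores):

```python
import re

def parse_embeddings(prompt: str):
    """Extract (name, weight) pairs from ComfyUI-style
    (embedding:Name:weight) tokens in a prompt string."""
    pattern = r"\(embedding:([^:()]+):([0-9.]+)\)"
    return [(name, float(w)) for name, w in re.findall(pattern, prompt)]

negative = "(embedding:BadDream:1) boy, man, (embedding:ERA09NEGV2:1)"
print(parse_embeddings(negative))
# → [('BadDream', 1.0), ('ERA09NEGV2', 1.0)]
```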

The negative prompt prioritizes some NSFW keywords because the test renders were producing a naughty figure 😏.

5) CN Mask

The auto mask generated from the 1_1) ControlNet_Extra_AutoMasking_SoftEdgePlus_v3.0 file was used here.

Make sure to enable the CN mask via the blue node in the top-left corner.

6) Controlnet - Softedge

With only SoftEdge, the girl figure was generated easily at low strength (0.65) and end percent (0.6).
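Strength scales how hard the ControlNet steers each step, while end percent cuts it off partway through sampling so the last steps are free to invent detail on their own. A rough sketch of how such a schedule gates the control signal per step (function and parameter names are illustrative, not ComfyUI's internals):

```python
def controlnet_scale(step: int, total_steps: int,
                     strength: float = 0.65,
                     start_percent: float = 0.0,
                     end_percent: float = 0.6) -> float:
    """Return the control strength applied at a given sampling step.
    Outside the [start, end] progress window the ControlNet is disabled."""
    progress = step / total_steps
    if start_percent <= progress <= end_percent:
        return strength
    return 0.0

# With end_percent=0.6 over 20 steps, control applies to steps 0-12 only;
# the remaining steps denoise without ControlNet guidance.
schedule = [controlnet_scale(s, 20) for s in range(20)]
print(schedule[0], schedule[12], schedule[13])
```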


7) Controlnet - Mask as LineART (Secret Sauce)

Since the LineArt ControlNet uses white to mark focus areas (and black for the opposite), as an experiment I used the mask as a LineArt control image with weight (0.22) and end percent (0.8), which gave me better results than using OpenPose:

I tested the weight values above and noticed that the mask at around 0.2 had the most appeal and detail, so I discarded OpenPose for this render.
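The trick works because a subject mask is already in the shape a LineArt control image expects: white where the model should attend, black elsewhere. If a mask comes out inverted (black subject on white), it would need flipping first; a small NumPy sketch with a synthetic mask (the helper name and heuristic are my own, not from the workflow):

```python
import numpy as np

def prepare_mask_as_lineart(mask: np.ndarray) -> np.ndarray:
    """Ensure the subject is white (255) on a black background.
    Heuristic: if most pixels are white, assume the subject was
    drawn in black and invert the whole mask."""
    if mask.mean() > 127:          # background is white -> invert
        mask = 255 - mask
    return mask

# Synthetic 4x4 mask: black subject on a white background.
m = np.full((4, 4), 255, dtype=np.uint8)
m[1:3, 1:3] = 0
out = prepare_mask_as_lineart(m)
print(out[1, 1], out[0, 0])   # subject now white, background black
```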

So these are the final passes used to make this render:

- Softedge for CN 1
- Mask as LineArt for CN 2
- Mask for Mask :D


After all the batches from the raw file were rendered, the output was refined using the # 3_1) AnimateDiff Refiner_Masked_CN_v3 workflow file, and then the old v2.1 Face Fixer was applied.


Here is the final render in 60 FPS: https://youtube.com/shorts/QeYRRFU_234


Below is the raw file attached for this workflow (drag and drop it into ComfyUI to see the details).



Download Link:

https://drive.google.com/drive/folders/1HoZxKUX7WAg7ObqP00R4oIv48sXCEryQ

______________________________________________________________________

In 1-2 days, after gathering all the resources, testing, and making tutorial PNGs and GIFs, I will put up a new documented post for the next file :D

See ya <3 Byeeeeeeee

My Discord Server : https://discord.gg/z9rgJyfPWJ


Comments

John D

I've been trying to use the CN Mask for two days now, and when I enable it, it generates an almost still image that doesn't follow OpenPose at all... If I turn it off, it works, but I can't change the background. Any idea?

Julien Leroy

Hello, I'm trying to fix the deprecated Motion Model Settings note so I can use AnimateDiff v2 in your workflow: https://www.noelshack.com/2024-16-7-1713733531-capture-d-cran-2024-04-21-230338.png — I understood how the new workflow works and I can adapt it: https://www.noelshack.com/2024-16-7-1713734384-capture-d-cran-2024-04-21-231937.png — but on the official Git of the AnimateDiff nodes I can't find any clues about what the replacement for Motion Model Settings is, or where to connect it.