
Downloads

Content

In this video I teach you how to use ComfyUI to create consistent characters, pose them, automatically integrate them into AI-generated backgrounds and even control their emotions with simple prompts.

I developed this ComfyUI workflow in preparation for one of the next AI 3D rendering workflows, in which we will look at animating characters. I’ve been developing some really cool stuff and can’t wait to finally show it to you! 

But for now you can also use this workflow for many other exciting things: to create children's books, AI movies, or one of those AI influencers everyone keeps talking about!

NEW LINK TO THE POSE EDITOR: https://zhuyu1997.github.io/open-pose-editor/

You can download the FREE workflows below!

Here is the INSTALLATION GUIDE: https://docs.google.com/document/d/1ixEYqQzQBT6gAE-LOMc3--BSxE3WinKDrnGR8vfMQRs/edit?usp=sharing

I’ll also share some more example files and poses on Discord. Have fun generating!

Files

Create Consistent, Editable AI Characters & Backgrounds for your Projects! (ComfyUI Tutorial)

I'll show you how to use ComfyUI to create consistent characters, pose them, automatically integrate them into AI-generated backgrounds and even control their emotions with simple prompts.

If you like my work, please consider supporting me on Patreon: https://www.patreon.com/Mickmumpitz
Follow me on Twitter: https://twitter.com/mickmumpitz

I developed this ComfyUI workflow in preparation for one of the next AI 3D rendering workflows, in which we will look at animating characters. But you can also use this workflow for many other exciting things: to create children's books, AI movies or one of these AI influencers everyone keeps talking about!

You can download the FREE workflows here: https://www.patreon.com/posts/new-video-create-103261741?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link

Chapters:
00:00 Intro
01:09 Character Sheet
04:49 Loras & Midjourney
05:48 Controllable Characters
10:42 Outro

Comments

Mauricio Tonon

That's great, man! I can use this to generate the frame-by-frame images necessary for Steerable Motion. Thanks!

Eldon Hier

Hello, I have subscribed to get access to the Discord channel, but I can't find it. Can I have some help finding the link to the channel? Kindly.

boris the blade

Excellent! Just one question: how did you manage to get a flat 2D design? I always get a Pixar-ish 3D look, and I would like flat 2D (not manga).

Mickmumpitz

Thank you! I used Wildcard + this LoRA: https://civitai.com/models/181883/essenz-nausicaa-of-the-valley-of-the-wind-anime-screencap-style-lora-for-sdxl-10 Then I used lots of keywords like anime, cel shading, outlines, flat colors, comic in the prompt.
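To make that concrete, here is a minimal sketch of assembling a positive prompt from a base description plus the style keywords listed above. The keyword list comes straight from the comment; the helper function itself is hypothetical, not part of the workflow:

```python
# Hypothetical helper: combine a base description with the flat-2D
# style keywords mentioned in the comment above.
STYLE_KEYWORDS = ["anime", "cel shading", "outlines", "flat colors", "comic"]

def build_positive_prompt(base: str, keywords=STYLE_KEYWORDS) -> str:
    """Join the base description with comma-separated style keywords."""
    return ", ".join([base] + keywords)

prompt = build_positive_prompt("character sheet of a young adventurer")
print(prompt)
# → character sheet of a young adventurer, anime, cel shading, outlines, flat colors, comic
```

Paste the resulting string into the positive-prompt text box of the workflow; the LoRA still has to be loaded separately.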

Lahiru Bandara

I subscribed to the Patreon specifically to learn about this workflow. I have searched all over the internet for a method to make a consistent character, not only the face but the entire body, and after a lot of searching I came across this video on YouTube. I tried to run it on Colab, but the free tier gives very little time, so I tried running it on CPU and ran into some problems. Next I will try running it on Vast.ai. Do you have any advice or tips for getting this working? I'm a complete beginner with ComfyUI, and I would love to join the Discord too.

Eric Goodman

Hi, I'm excited to put this into practice. However, there is a bit of implied knowledge that, sadly, I don't possess. Can you please detail the exact format for connecting the nodes in the proper order? Thanks.

Anastasia Rekutz

Hi, how can I get access to the discord? Thank you! Amazing job!

OctonionPrime

1024x1024 resolution doesn't fit all the poses for some reason

ColdWarTom

Is it possible to build a workflow like this for 3DPonyVision? Is it possible to hire you to build this? Please let me know. Thanks https://civitai.com/models/479602/3dponyvision

Einar Petersen

Happy to be supporting you on your journey. I sincerely hope I'll be able to use these techniques for my many storyboard-ready comic book projects, as well as for future animation work. As always, great entertainment value. Keep up the good work... for some reason I have a peculiar craving for cheese... hmmm.

Sasha Melentev

Hi. Thanks for the awesome tutorials! I'm having some difficulties with the IPAdapterUnifiedLoader node. How do I fix this error?

Error occurred when executing IPAdapterUnifiedLoader:

IPAdapter model not found.

File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 535, in load_models
    raise Exception("IPAdapter model not found.")

Mickmumpitz

Thank you! Can you try installing the "ComfyUI Essentials" custom nodes via the manager and trying again?

Christopher Bücklein

Hey, when I get to the second KSampler (where the mask is added for better integration of the image), I get the following error:

Error occurred when executing KSampler:

'TimestepEmbedSequential' object has no attribute '1'

File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1373, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1343, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 801, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 703, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 690, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 665, in inner_sample
    self.conds = process_conds(self.inner_model, noise, self.conds, device, latent_image, denoise_mask, seed)
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 627, in process_conds
    pre_run_control(model, conds[k])
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 475, in pre_run_control
    x['control'].pre_run(model, percent_to_timestep_function)
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 322, in pre_run
    comfy.utils.set_attr_param(self.control_model, k, self.control_weights[k].to(dtype).to(comfy.model_management.get_torch_device()))
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 455, in set_attr_param
    return set_attr(obj, attr, torch.nn.Parameter(value, requires_grad=False))
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 449, in set_attr
    obj = getattr(obj, name)
File "C:\Users\chris\Documents\stablediffusion\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1709, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")

I checked that I have all plugins updated and I have the same models & LoRAs that you use in the video. Do you have any idea what the issue could be?

James V. Thomas

Hi! I've run into this same issue, and installing the Essentials didn't resolve it, unfortunately. Any other ideas?

Deniz Cakir

I am new to Comfy. How do you activate a node, specifically the one for saving individual faces?

Deniz Cakir

Well, another issue I am having is that ComfyUI complains that IPAdapterPlus.py is missing while it is clearly there, or the ControllableCharacters JSON fails to generate an image at all. Maybe I should restart my PC, but I think that's hardly the cause. I have completed all the steps in the other workflow and everything works fine there.

Deniz Cakir

Update: restarting the PC has changed things. ComfyUI started complaining about everything that I manually copy-pasted into the folder structure. It seems as if I need to use the manager to install everything, so I will try to do that.

Deniz Cakir

It figures: somehow the code cannot find the default PLUS adapter. When I change the adapter to something else, it works. The files are in the same folder, etc., so this seems to be a bug.

Roberto González Valerio

Hi there! I have a problem when I launch the prompt by pressing the "Queue Prompt" button. The message that ComfyUI returns is:

Prompt outputs failed validation
ControlNetLoader:
- Value not in list: control_net_name: 'OpenPoseXL2.safetensors' not in ['control_sd15_depth.pth', 'control_sd15_openpose.pth', 'sdxl\\OpenPoseXL2.safetensors', 'sdxl\\sai_xl_depth_256lora.safetensors']
UpscaleModelLoader:
- Value not in list: model_name: '4xUltrasharp_4xUltrasharpV10.pt' not in []

Please, help!

Daniel Friis

I'm getting this error as I try to run the workflow: "Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (308x2048 and 768x320)"

Daniel Friis

Seems like you haven't installed the models ('OpenPoseXL2.safetensors' and '4xUltrasharp_4xUltrasharpV10.pt') or haven't added them to the right folders.

Daniel Friis

For reference, if anyone else encounters this: my ControlNet model didn't match the 'main' model. I just reloaded the workflow to make sure the right models were selected throughout.
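As a rough rule of thumb, that "mat1 and mat2 shapes cannot be multiplied (308x2048 and 768x320)" error means the checkpoint and the ControlNet come from different model families: SDXL text conditioning is 2048-dimensional while SD1.5 is 768-dimensional, so the matrix multiply fails. A small illustrative check, where the dimension-to-family mapping is the only assumption (the function itself is hypothetical, not a ComfyUI API):

```python
# Illustrative check: infer model families from the conditioning widths
# that show up in "mat1 and mat2 shapes cannot be multiplied" errors.
# SDXL text conditioning is 2048-dim; SD1.5 text conditioning is 768-dim.
FAMILY_BY_CONTEXT_DIM = {2048: "SDXL", 768: "SD1.5"}

def diagnose(cond_dim: int, controlnet_dim: int) -> str:
    """Compare the checkpoint's conditioning width with the one the ControlNet expects."""
    cond = FAMILY_BY_CONTEXT_DIM.get(cond_dim, "unknown")
    cnet = FAMILY_BY_CONTEXT_DIM.get(controlnet_dim, "unknown")
    if cond != cnet:
        return f"mismatch: checkpoint looks like {cond}, ControlNet expects {cnet}"
    return f"both {cond}: dims match"

# The error above reports shapes (308x2048) and (768x320):
print(diagnose(2048, 768))
# → mismatch: checkpoint looks like SDXL, ControlNet expects SD1.5
```

In other words: with an SDXL checkpoint, pick the sdxl ControlNet files (e.g. OpenPoseXL2), not the control_sd15_* ones.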

Daniel Friis

Okay, I ran into the same issue. If you've downloaded OpenPoseXL2.safetensors as specified in the guide, you've added it to the sdxl folder. You just need to change that path in your workflow, or move the model to the root. The guide also doesn't say anything about installing the upscaler model, but you can install it from the manager.

Daniel Friis

Just a note on the guide: IP Adapter models need to go into the models/ipadapter directory now.
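Pulling together the folder notes in this thread, here is a small sketch that checks whether the models sit where ComfyUI expects them. The layout and filenames below are assumptions pieced together from these comments (your downloads may be named differently), and the helper is hypothetical, not part of ComfyUI:

```python
from pathlib import Path

# Assumed layout, pieced together from the comments in this thread
# (adjust filenames to match what you actually downloaded).
EXPECTED = {
    "models/controlnet": ["OpenPoseXL2.safetensors"],
    "models/ipadapter": [],  # IP-Adapter weights now live here; names vary
    "models/upscale_models": ["4xUltrasharp_4xUltrasharpV10.pt"],
}

def missing_models(comfy_root: str) -> list:
    """List expected folders/files that are absent under the ComfyUI root."""
    root = Path(comfy_root)
    missing = []
    for folder, files in EXPECTED.items():
        if not (root / folder).is_dir():
            missing.append(folder + "/")
        for name in files:
            if not (root / folder / name).is_file():
                missing.append(f"{folder}/{name}")
    return missing

# Anything printed here still needs to be downloaded or moved:
print(missing_models("ComfyUI"))
```

Note that model files placed in subfolders (like sdxl/) show up in the node dropdowns with the subfolder prefix, which is exactly why the 'OpenPoseXL2.safetensors' vs. 'sdxl\\OpenPoseXL2.safetensors' validation error above happens.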

Daniel Friis

Has anyone had any success using this with InstantID?

Hritosloke Roy

Hi, I'm trying to generate a background with the character, following your video, but it gives me an error: "DepthAnythingPreprocessor operands could not be broadcast together with shapes (518,924,4) (3," Any solution for this problem? Thanks.

Mickmumpitz

The function of a node has changed, which broke the workflow. I have now fixed the error and made some improvements to the workflow (v03)! But you now need an additional ControlNet (MistoLine); I have added the link in the guide.

Jordan Lee

I am having an issue with ComfyUI_IPAdapter_plus. When I try to update the custom node, I get:

Update custom node 'ComfyUI_IPAdapter_plus'
Update: ['https://github.com/cubiq/ComfyUI_IPAdapter_plus']
Update(git-clone) error: https://github.com/cubiq/ComfyUI_IPAdapter_plus / Not a git repository.

Do I have the folder in the wrong place? Does anyone know the fix?

Jordan Lee

I solved this by uninstalling and reinstalling via the Manager UI inside ComfyUI.

VLRevolution

Is there any way to use an image as the input for the character sheet and then generate more consistent characters from that input image? I'm sure it's possible somehow, but I'm not that well versed in Comfy. This would be a really great addition!

VLRevolution

By the way, https://openposeai.com/ is offline. What alternative should we use?

VLRevolution

Found an alternative hosting it: https://zhuyu1997.github.io/open-pose-editor/

VLRevolution

Any advice for using Mickmumpitz_ControllableCharacters_v03.json with SD1.5? I seem to get a few errors when I try.

Frederic Chauveau

Sorry... a stupid question. Where can I find this file: depth_2024_04_26_16_31_12(1).png?