

Content

Learn how to use FLUX, ComfyUI, and SDXL to create AI characters for movies, children's books, and virtual influencers or other projects!

Discover techniques for generating AI character sheets, training custom Loras, and achieving consistent results across multiple images and characters.

Perfect for creators looking to elevate their AI art and storytelling capabilities.

👉 You can find the FREE INSTALLATION GUIDE here: https://docs.google.com/document/d/1PHYMpXqfNKj9dQIMVpXAg7R4FjNFIINok09L9heQTnM/edit?usp=sharing

👇Download the FREE WORKFLOWS below & have fun generating!

UPDATE

I just uploaded a new version with a new ControlNet implementation that should fix the errors some of you have been experiencing.

Files

Create CONSISTENT CHARACTERS for your projects with FLUX! (ComfyUI Tutorial)

Unlock the secret to perfect characters in AI art with this in-depth tutorial!

If you like my work, please consider supporting me on Patreon: https://www.patreon.com/Mickmumpitz
Follow me on Twitter: https://twitter.com/mickmumpitz
You can find the FREE WORKFLOWS & INSTALLATION GUIDE here: https://www.patreon.com/posts/free-workflows-113743435
You can find my PATREON EXCLUSIVE advanced workflow here: https://www.patreon.com/posts/advanced-create-113745268

Lora training is easy now! Learn how to use FLUX, ComfyUI, and SDXL to create AI characters for movies, children's books, and virtual influencers. Discover techniques for generating AI character sheets, training custom Loras, and achieving consistent results across multiple images. This guide covers advanced workflows, Flux Gym, Pinokio, and FLUX Lora training. Perfect for creators looking to elevate their AI art and storytelling capabilities.

Chapters:
00:00 Intro
00:50 Process
01:35 Flux Workflow
04:21 Advanced Versions
04:45 Limitations
05:34 SDXL Version
06:40 Preparing the Images
07:38 Flux Gym
09:31 Using the LoRA
11:46 Multiple Characters

Comments

Cameron Lewellen

You're quite literally the best. I'm trying to learn comfy in order to really make the most of this gold. Cheers.

Elia Savona

The undisputed king. Thanks a lot, man!

Stephen Purvis

Some of the best tutorials ever produced, thank you.

Dre Konrad

Consistently innovating new approaches. Well done ~

fab4_Blender

Great workflow!! How would I have to customize the workflow so that instead of text2img it would be img2img? My idea is to create a character sheet from, for example, a photo of myself. How would I do that in the character sheet workflow?

Felipe César Lourenço

Thanks, master of AI! How could I add a LoRA node in the Flux workflow?

Cassius Batts

Is it possible to install ComfyUI on Mac? If so, do you have an installation guide for Mac users?

Thibault Mathian

Hey, I think the CONTROLNET MODEL link you gave for the Flux version is wrong and looks like SDXL. Can you double-check?

Vishnu

Same issue. I think it is https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union, but I'm running into issues with images. Not as good as Flux; I might be missing something.

Thibault Mathian

I made it work using an alpha version. I don't know where you can download it, but it's included in rundiffusion.com.

Tony

Great work man.

Erik Goughnour

1. Make sure your lora is under `ComfyUI\models\loras\`, then click the "refresh widgets" button (at least if you have the beta top-left menu layout; otherwise you might have to dig for it).
2. Wire your model loader and clip loader to a new "Lora Loader (JPS)" node. That leaves one "model" output port and one "clip" output port.
3. However you had your model and clip loaders wired up before, hook up those output ports the same exact way; they are effectively your model loader and clip loader now.
4. Go to the Lora Loader's 'lora_name' dropdown and select the *.safetensors file generated by FluxGym (that you put in the loras folder).
5. Make sure 'switch' on the lora loader shows 'On'. That's it.
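The rewiring described above can be sketched in ComfyUI's API-format JSON (a dict of nodes keyed by id, wired via [source_node_id, output_index] pairs). This is a minimal sketch only: node ids and filenames are illustrative, and while the comment uses the "Lora Loader (JPS)" custom node, the built-in LoraLoader shown here takes the same model/clip wiring.

```python
# Illustrative ComfyUI API-format graph fragment: splicing a LoRA loader
# between the Flux model/clip loaders and everything downstream.
workflow = {
    "1": {"class_type": "UNETLoader",
          "inputs": {"unet_name": "flux1-dev-fp8.safetensors",  # illustrative filename
                     "weight_dtype": "fp8_e4m3fn"}},
    "2": {"class_type": "DualCLIPLoader",
          "inputs": {"clip_name1": "t5xxl_fp8_e4m3fn.safetensors",
                     "clip_name2": "clip_l.safetensors",
                     "type": "flux"}},
    # The LoRA loader takes the loaders' outputs and re-exposes them;
    # its MODEL (output 0) and CLIP (output 1) replace the originals.
    "3": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "my_character.safetensors",  # hypothetical FluxGym output in ComfyUI\models\loras\
                     "strength_model": 1.0,
                     "strength_clip": 1.0,
                     "model": ["1", 0],
                     "clip": ["2", 0]}},
    # Nodes that previously read ["1", 0] / ["2", 0] now read ["3", 0] / ["3", 1].
}
```

The point is that downstream nodes never know the difference: they consume the same MODEL/CLIP types, just routed through the LoRA node.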

Brian Wheldale

I really miss negative prompts. Adornments (jewellery etc.), if not ugly, are inconsistent, and removing them after Flux is extra work. I'm sorry for my negative prompt, er, post! The workflow is fantastic. Previously I included a quick hack of IPAdapter and FaceID in the earlier version of the workflow (before Flux) to inject faces into the character sheet by masking the heads of a generated character sheet (having the body qualities required), and it surprised me how well it worked for a quick hack. Basically, after generating a sheet for the body type (ignoring the heads), I masked the heads and fed that version of the character sheet back into the 2nd stage (along with the face image). I was looking forward to doing the same with Flux, but the abundance of earrings is now an issue for me.

Brian Wheldale

To apply the Flux ControlNet we do not need to install 'ComfyUI-eesahesNodes', as this loader node is deprecated. ComfyUI now includes native support for the InstantX/Shakker Labs Union ControlNet Pro. Just replace the node with 'Load ControlNet Model' and 'SetUnionControlNetType'.
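In API-format JSON, the replacement described above amounts to a two-node chain. A minimal sketch, with the node ids and the controlnet filename being illustrative assumptions:

```python
# Illustrative ComfyUI API-format fragment: native Union ControlNet loading.
# "ControlNetLoader" is the class behind the "Load ControlNet Model" node.
graph = {
    "10": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "FLUX.1-dev-ControlNet-Union-Pro.safetensors"}},  # illustrative filename
    # SetUnionControlNetType tags the loaded Union model with the mode
    # (pose, depth, canny, ...) it should operate in.
    "11": {"class_type": "SetUnionControlNetType",
           "inputs": {"control_net": ["10", 0],
                      "type": "openpose"}},
    # Whatever previously consumed the loader's output now takes ["11", 0].
}
```

Nodes downstream (e.g. the ControlNet apply node) connect to the SetUnionControlNetType output instead of the loader directly.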

Runebinder

The problem with that is the input image would need to be a character sheet for Img2Img to create a variant of one. You'd be better off adding PuLID or Reactor for face swapping and use a photo of yourself and it should add that face to the character sheet it generates.

Journeyman

Stability Matrix has Mac support. It's an application that manages AI UI packages, Models, and workflows so you can use it to install ComfyUI, WebUI Forge etc and share models between UIs.

Michael 'Kay' Klindt

Face swap wouldn't work for what I want to do. I'd like to be able to take a photo of someone, outfit and all, generate a character sheet from it, and then create the lora. In that way you could go between a live action video version and an AI img-vid version. I am aiming at music video use cases. Really hoping there is a hack for that in this workflow. Would be a game changer!

Runebinder

Thanks, I looked to do this as I didn't like the XLabs nodes when I tried them, but I used the AIO Aux Preprocessor and only got a single portrait image; swapping that for SetUnionControlNetType worked a charm :)

fab4_Blender

You've hit the nail on the head; that's exactly what I want to use it for: music videos. Imagine I want an artist, and I can make a parrot for him so he can replace some scenes in AI videos. The complicated thing is to create a pipeline that works for that.

santi personal

You can train the LoRA directly from your photos. It will be much better using real-life photos from different angles and expressions.

MannyThaGreat

Anyone getting sampler issues? I'm running ComfyUI with Pinokio. I also can't seem to find all the original safetensors.

Will West

I get an error about missing node types. Why? Also, why are there 2 workflows? Which am I supposed to use? The step-by-step guide isn't clear. It just says to download the workflow at the link, but there's more than one. I'm a total noob, so perhaps I'm missing something obvious to others.

power falcon

To make the workflow work, I had to replace the safetensors in the Load ControlNet Model node with diffusion_pytorch_model.safetensors for workflow 241011_MICKMUMPITZ_CHARACTER_SHEET_V04_FLUX_SMPL.json.

Wise

I did the same with the Flux workflow, but now I get the error: ControlNetApplySD3 'NoneType' object has no attribute 'copy'. Any advice?

Wise

Try to use Install Missing Custom Nodes in Manager. Did you read the "FREE INSTALLATION GUIDE"?

Kun HUANG

I just encountered an issue with the error 'RuntimeError: Numpy is not available'. Could you help me figure out the problem? I downgraded my Numpy version to an earlier release, but it didn’t resolve the issue.

Kun HUANG

got prompt
Failed to validate prompt for output 87:
* (prompt): - Required input is missing: images
* SaveImage 87: - Required input is missing: images
Output will be ignored
!!! Exception during processing !!! Numpy is not available
Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1574, in load_image
    image = torch.from_numpy(image)[None,]
RuntimeError: Numpy is not available
Prompt executed in 0.21 seconds
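A common cause of this particular error (an assumption, not confirmed by the thread) is a NumPy 2.x install alongside a PyTorch build compiled against NumPy 1.x, in which case `torch.from_numpy` fails exactly like this. Note that the portable ComfyUI build ships its own interpreter, so a downgrade applied to a system-wide Python changes nothing; the pip command below must target the embedded interpreter. A minimal diagnostic sketch:

```python
import sys

# "RuntimeError: Numpy is not available" from torch.from_numpy usually means
# the interpreter ComfyUI runs under has a NumPy ABI the installed PyTorch
# wasn't built against (commonly NumPy 2.x with an older torch build).
# The portable build ships its own interpreter in python_embeded\, so run
# pip with that interpreter, e.g. (path is the portable default):
#   python_embeded\python.exe -m pip install "numpy<2" --force-reinstall
# This snippet just reports which interpreter and NumPy version are in use:
try:
    import numpy as np
    numpy_major = int(np.__version__.split(".")[0])
except ImportError:
    numpy_major = None  # NumPy missing entirely from this interpreter
print(sys.executable)
print(numpy_major)
```

If `sys.executable` is not the embedded interpreter ComfyUI launches with, any downgrade done here will not affect the workflow.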

Kun HUANG

I followed all the steps of the Flux model version, but the result is not that consistent; the characters don't have the same face. It's not good enough for LoRA training.

Miles Morales

Getting an error: mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280).

Alex Pinsker

Do I need a Windows PC or can I accomplish the same using https://comfyuiweb.com/#ai-image-generator or https://www.runcomfy.com/ ?

Gordon Garrecht

FaceDetailerPipe: The default implementation of __deepcopy__() for non-wrapper subclasses only works for subclass types that implement new_empty() and for which that function returns another instance of the same subclass. You should either properly implement new_empty() for your subclass or override __deepcopy__() if it is intended behavior for new_empty() to return an instance of a different type.

Fascinating workflow! Until I reach the FaceDetailer :/ Can anyone help, please?

Bleep Bloop

Great results, but your guide is frustratingly vague right out of the gate. "Extract it, put it into your ComfyUI directory and run it." Put what? All of the folders? The root folder? "After it's done you can run ComfyUI by clicking run_nvidia_gpu." Where? I can now search for this file and find it in a hidden .cu folder, but even then it doesn't run. Even if I point it to Python, it's not in any folder that runs. Yeah, I'm sure I can figure it out over the course of my day, but it seems a waste when you've already done it but just decided not to write it down, tbh.

Brian Wheldale

I vaguely recall solving similar errors by ensuring the correct checkpoints are loaded. i.e., don't use a non Flux model where a Flux model should be. (It's easy for this to happen after refreshing the browser which loads any available checkpoint even the wrong one, if you're missing one). Maybe recheck those.

Wrtk Wrtk

Could you change these workflows so that a photo can be added? What I mean is that you can add a photo that represents a person or character, and the workflow creates these poses and all that, as before, with that person or character. I hope you know what I mean. Please, if you have a moment, do it; I think it shouldn't take you long, probably about 30 minutes of work.

Sébastien Rommens

Bad results with the pony (ponyRealism_v22MainVAE) checkpoint. What should I modify? 1st step result here: https://ibb.co/Fx0zmTj

Miles Morales

Yup that was it!! I had the wrong checkpoints selected. It’s working now.

Gunay Karadogan

See https://github.com/comfyanonymous/ComfyUI/issues/5229 — you need to update ComfyUI and also use Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro. After updating ComfyUI, the Manager button disappeared; I had to reinstall ComfyUI Manager as well.

Luis Oliveira

I had the same problem. You have to watch the video to understand all the steps. It's time-consuming, since with a few adjustments to the text it would be straightforward. I'm still trying to figure out some errors. It gets stuck in the UltralyticsDetectorProvider; it shows an error in SEGM_DETECTOR. Probably something missing in the installation! :(

Anwar Hussain

This is what I want to know: can I, for example, put myself in the character sheet?

Wrtk Wrtk

Actually, yes. I mean, is there a chance to change the workflow so that you can add a photo at the very beginning that represents the character (so we can also use ourselves), from which you then want to create and train your own LoRA? That is, the character from the photo takes on the poses the author originally has in the workflow, and the rest stays mostly unchanged.

Omar Kamel

I get Sampler Custom Advanced 'out_channels' Error! Help :) ?

Gui

Same error here and can't figure it out

Gui

I got it to work, I had to change the "ControlNetLoader" node for the "InstantX Flux Union ControlNet Loader"

Omar Kamel

Now getting "ControlNetApplySD3 'NoneType' object has no attribute 'copy'" - would really appreciate any help working this out.

Aidan Blah

4x-clearRealityV1 doesn't seem to upscale. After image gen, it goes through 4x-clearRealityV1, but the image that comes out looks just as blurry; no change. Is there a parameter I should tweak?

Matthew

Any chance of an updated/fixed guide? Based on the comments it looks like in the last few days some updates have happened with models/libraries (?) and the template and guide no longer work out-of-the-box as expected.

Matthew

I'm running into various combinations of the errors already mentioned in the comments: the "out_channels" error, "mat1 and mat2 cannot be multiplied", "SEGM_DETECTOR", etc. I've attempted both the Flux flow and the SDXL flow but can't get either one working, unfortunately. (I'm new to this.) I do see the comments saying "just replace the node with 'InstantX Flux Union ControlNet Loader'" but have no idea how to go about doing that. I would love either a guide on how to do that, or an updated template already including the fix.

Mickmumpitz

I'm working on it. It's really annoying because for some people it works by changing the ControlNet Loader Node, for the other half it breaks it.

Mickmumpitz

Hey, I'm sorry you're having so many problems. I'm currently trying to figure out what the problem for so many is, which unfortunately is very tricky as it also works for many. Would you like to also try version three of the workflow? (That was the original version that I deleted because some had problems with it, one node has been replaced there.) As soon as I have found the error, I will also record a step-by-step video tutorial for the full installation. Sorry again for the inconvenience!

Matthew

No worries - and thanks I will try version 3 in the meantime! I really appreciate your videos and help.

Koss Billingham

You need a different diffusion_pytorch_model to the one in the template, found here: https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro/tree/main That solved one of my problems. For the Flux Union ControlNet loader problem: just double left-click, type "control net" into the search bar and click the right one, then replace the nodes flowing in and out of the one in the template. Worked for me.

Koss Billingham

Set the tile size to 1024, steps to 25-35 and CFG to 6. Worked nicely for me. Oh, and stick to an upscale factor of 2. You can try 4, but I found I was getting weird tiling effects. It takes a while, but you're better off running it through successive upscalers as you go along.
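The values suggested above can be collected into a settings sketch. The key names below are illustrative, not the upscale node's exact widget names:

```python
# Hypothetical settings dict mirroring the upscale values suggested above.
# Key names are illustrative; map them onto your tiled-upscale node's widgets.
upscale_settings = {
    "tile_width": 1024,
    "tile_height": 1024,
    "steps": 30,       # anywhere in the suggested 25-35 range
    "cfg": 6.0,
    "upscale_by": 2,   # 4 reportedly introduces tiling artifacts
}
print(upscale_settings)
```

Running a 2x pass twice (successive upscalers) rather than a single 4x pass is the approach the comment recommends.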

Cinematic Film

I just got a black screen in my preview. Any idea why that is happening? It is not generating any image.

Damien

Hi, is there any solution for Mac users?

Matthew

Thanks @koss billingham, that did work to resolve the errors! Now my workflow is running, however the generated image (after ~60 minutes of processing the workflow) is weirdly just a larger version of the pose sheet.

Ekka

Prompt outputs failed validation:
Required input is missing: images
SaveImage: - Required input is missing: images
VAELoader: - Value not in list: vae_name: 'ae.safetensors' not in ['RealisticVisionV5.1.safetensors', 'taesd', 'taesdxl']
DualCLIPLoader: - Value not in list: clip_name1: 't5xxl_fp8_e4m3fn.safetensors' not in [] - Value not in list: clip_name2: 'clip_l.safetensors' not in []
UNETLoader: - Value not in list: unet_name: 'flux1-dev-fp8.safetensors' not in []
InstantX Flux Union ControlNet Loader: - Value not in list: control_net_name: 'flux\InstantX_flux.safetensors' not in []

How do I solve this?

Dan Lee

This is the error I got:

Prompt outputs failed validation:
Required input is missing: images
Required input is missing: images
Required input is missing: images
UpscaleModelLoader: - Value not in list: model_name: '4x-ClearRealityV1.pth' not in ['ClearRealityV1\\4x-ClearRealityV1.pth', 'ClearRealityV1\\4x-ClearRealityV1_Soft.pth', 'ClearRealityV1\\BROKEN_NCNN\\4x-ClearRealityV1-fp16.bin', 'ClearRealityV1\\BROKEN_NCNN\\4x-ClearRealityV1-fp32.bin', 'ClearRealityV1\\BROKEN_NCNN\\4x-ClearRealityV1_Soft-fp16.bin', 'ClearRealityV1\\BROKEN_NCNN\\4x-ClearRealityV1_Soft-fp32.bin']
ControlNetLoader: - Value not in list: control_net_name: 'OpenPoseXL2.safetensors' not in ['sdxl\\OpenPoseXL2.safetensors', 'sdxl\\mistoLine_rank256.safetensors']
SaveImage: - Required input is missing: images
SaveImage: - Required input is missing: images
SaveImage: - Required input is missing: images

Linus -

GPU M C B

This happens when you try to use SDXL and Flux components in the same setup.

Jeremy Biggs

First run - I got to the control net and had an error message thrown up:

ControlNetApplySD3
'NoneType' object has no attribute 'copy'
File "[filepath]", line 323, in execute
  output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "[filepath]", line 198, in get_output_data
  return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "[filepath]", line 169, in _map_node_over_list
  process_inputs(input_dict, i)
File "[filepath]", line 158, in process_inputs
  results.append(getattr(obj, func)(**inputs))
File "[filepath]", line 848, in apply_controlnet
  c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)
AttributeError: 'NoneType' object has no attribute 'copy'

Miles Morales

Can you use an existing image with this workflow?

Jeremy Biggs

I replaced the ControlNet with VAE and swapped it for ApplyControlNet and still had the same error. I'm using the Flux-dev checkpoint with https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union/discussions

ComfyUI Error Report

Error Details:
- Node Type: ControlNetApplyAdvanced
- Exception Type: AttributeError
- Exception Message: 'NoneType' object has no attribute 'copy'

Stack Trace (ends in):
File "FILEPATHComfyUI_windows_portable\ComfyUI\nodes.py", line 848, in apply_controlnet
  c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)
AttributeError: 'NoneType' object has no attribute 'copy'

System Information:
- ComfyUI Version: v0.2.3
- Arguments: ComfyUI\main.py --windows-standalone-build
- OS: nt
- Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
- Embedded Python: true
- PyTorch Version: 2.4.1+cu124

Devices:
- cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync, VRAM Total: 12884246528, VRAM Free: 1962692088

Key log lines:
2024-10-16 19:47:10,940 - root - WARNING - Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
2024-10-16 19:47:15,125 - root - ERROR - error could not detect control model type.
2024-10-16 19:47:15,126 - root - ERROR - error checkpoint does not contain controlnet or t2i adapter data FILEPATHComfyUI_windows_portable\ComfyUI\models\controlnet\diffusion_pytorch_model.safetensors
2024-10-16 19:53:38,695 - root - ERROR - !!! Exception during processing !!! 'NoneType' object has no attribute 'copy'
(The same 'NoneType' traceback then repeats on every subsequent prompt; full log trimmed.)

Attached Workflow: Workflow too large. Please manually upload the workflow from local file system.

ADHD MAUR MAUR

It doesn't work... 'NoneType' object has no attribute 'copy'

Aidan Blah

Thanks Koss, all my settings were good except for CFG. Looks better.

Hung Son Pham

Hi, where can I download UnionFlux.safetensors (in Load ControlNet Model; it's actually different from what I see in your video)? Is this link okay: https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union Is it this file in your tutorial? https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union/blob/832dab0074e8541d4c324619e0e357befba19611/diffusion_pytorch_model.safetensors Thank you for the excellent post.

Buttonskill

Abandon all hope, ye who enter these comments looking for assistance. Scroll down and it's painfully clear that the tutorial worked great 6 days ago, before changes broke it. Nodes from the workflows no longer match the video. I've experienced all but one of the numerous issues listed here, and mum's the word from Mickmumpitz.

Ava Lynn Li

Hi, first of all, thank you very much for the tutorial! I got ComfyUI to run on my Mac, but I wasn't able to drop the pose sheet. It always gives me the error message: Unable to find workflow in Pose_sheet_v02.png. Anyone knows a solution? Much appreciated.

Mitsuko E.

I was able to fix the ControlNet "NoneType" error with these steps:
1. Open the ComfyUI Manager menu
2. Select Model Manager
3. Search for "Union"
4. Install "InstantX/Flux.1-dev Controlnet (Union)"
5. Refresh or restart ComfyUI
6. In the Load ControlNet Model node, select "FLUX.1/InstantX-FLUX1-Dev-Union/diffusion_pytorch_model.safetensors"
I'm not sure if this is exactly the same ControlNet as intended, but it allowed the workflow to generate images and the poses seem to be correct.

Jeremy Biggs

You need to load it into the image loader that feeds in to the control net.

Jeremy Biggs

Yeah, it's a shame as I'd be happy to be a paid backer if I could be sure of assistance - even a pinned message to say something is broken and he'll update when there's news of a fix.

Pavel Adamek

Anybody else getting the error in the FLUX version at the face detailer step? FaceDetailerPipe: The default implementation of __deepcopy__() for non-wrapper subclasses only works for subclass types that implement new_empty() and for which that function returns another instance of the same subclass. You should either properly implement new_empty() for your subclass or override __deepcopy__() if it is intended behavior for new_empty() to return an instance of a different type.

Anna FotoNerdz

I've been trying all night, and it always gives just 3-4 poses and sometimes a face and upper-body view, but it doesn't respect the pose sheet and never makes the face angles and expressions. How can I fix this?

Juan C. Gonzalez

Can't make it work! Prompt outputs failed validation:
ControlNetLoader:
- Value not in list: control_net_name: 'OpenPoseXL2.safetensors' not in ['flux\\diffusion_pytorch_model.safetensors', 'sdxl\\OpenPoseXL2.safetensors', 'sdxl\\mistoLine_rank256.safetensors']
SaveImage:
- Required input is missing: images
SaveImage:
- Required input is missing: images
SaveImage:
- Required input is missing: images

Len

This is strange, but if I try a male character it only makes Asian men. Every. Single. Time.

Jeremy Biggs

Is there a workflow for turning one image of a character (i.e. a profile or portrait view) into a character sheet?

Tetrakis

I've been getting this as well. I'm on a Mac using flux1-dev-Q8_0.gguf, how about you?

Tetrakis

Do you have FLUX working as part of any workflow? It looks like you are just missing the basic components.

Tetrakis

I can get it to run through the FaceDetailer step by loading the flux1-dev-Q8_0.gguf model with the UNET Loader (GGUF) node. Unfortunately, just as the FaceDetailer iterations finish, it dies with the error "The default implementation of __deepcopy__() for non-wrapper subclasses only works for subclass types that implement new_empty() and for which that function returns another instance of the same subclass. You should either properly implement new_empty() for your subclass or override __deepcopy__() if it is intended behavior for new_empty() to return an instance of a different type."

Tetrakis

This worked for me for getting FLUX to run on my Mac. https://medium.com/@tchpnk/flux-comfyui-on-apple-silicon-with-hardware-acceleration-2024-4d44ed437179

Zazoum

Works great for my SD 1.5 anime needs, with the Astramine, Atomix, and Counterfeit models. For the OpenPose model I use the fp16 OpenPose v1.5 one. I also added a BiRefNet background remover (using the General model) for sheet creation, and kept the positive prompt section standard: "crisp, character sheet, simple background, multiple views," plus a description of my anime character. I run endless queued generations of sheets, and almost all the results are consistent, clean, and as intended. I don't do ComfyUI upscaling; I have Topaz. I just created the sheet loader for my best eye-picked upscaled sheet (the one that expresses me the most, I mean) and do steps 3 and 4.

Theresa Kewley

I would also like to know this! I created different viewpoints of a character using Midjourney. It would be awesome if I could input those images into this fantastic workflow! Or is this in the advanced upgrade?

Willy Quanta

Also wondering this. I have hundreds of photos of a character already, but they're not as consistent as this workflow's results.

Aidan Blah

On the upscale (stage two), how do you reduce the "details"? I'm getting too much detail, like moles littering the face. The face also gets older because of the added wrinkles and bags under the eyes.

Jeremy Biggs

For anyone who's got this working with Flux: what GPU are you using?

Richard Rispoli

I did something like that by adding a LoRA to the flow. Basically, you train a LoRA on your character, then use it to display the character in different positions, like in these workflows.

Jeremy Biggs

Just saw this: using IP Adapter you can generate different views. https://www.youtube.com/watch?v=SacK9tMVNUA

Jeremy Biggs

Probably because they're overrepresented in the training set. Try specifying the ethnicity if you're looking for somebody specific.

Werner Landlicht

Hello, how can I open this *.json format?

Aiowey

Does this Lora generation only make the character with the same outfit every time it's used?

grunchy

I had a similar problem (with the SDXL version, and not this workflow but an earlier version that Mickmumpitz had put up for another YouTube video): the face angles and expressions were garbage. I switched to SD 1.5 workflows with their respective ControlNet model files and that worked fine. When I went back to SDXL, I saw that the issue was the version and location of the SDXL ControlNet model files. I would double-check the file locations and that the correct files are all there. Also note that (for SDXL) while the "mistoLine_rank256.safetensors" file needs to be present in the directory, the node value in the workflow should still be OpenPoseXL2.safetensors.

grunchy

The process to fix it is like this:
1) Find the node the error refers to. (ControlNetLoader is the node's default title; you can find a node's original name by right-clicking it and looking at the value at the top of the box. It is one of the "blue" boxes.)
2) Click on the bar where you see "OpenPoseXL2.safetensors", between the left and right arrows. The values listed there come from the model files you have downloaded and placed correctly. The instructions say to put them under the folder "sdxl", which means you should see the entry as "sdxl\OpenPoseXL2.safetensors", and your error confirms it is there: "Value not in list: ... 'sdxl\\OpenPoseXL2.safetensors' ...". In general, if you do not see the model's name (or anything similar) in the dropdown, you probably have not placed the downloaded model file in the correct folder.
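As a sanity check, ComfyUI names models in that dropdown by their path relative to models/controlnet, subfolder prefix included, which is why a correctly placed file shows up as "sdxl\OpenPoseXL2.safetensors". A rough sketch of that listing logic (the real ComfyUI implementation may differ in details; pass in your own models/controlnet path):

```python
import os

def list_controlnet_names(controlnet_dir: str) -> list[str]:
    """List model files the way ComfyUI names them in the dropdown:
    paths relative to models/controlnet, subfolder prefix included."""
    names = []
    for root, _dirs, files in os.walk(controlnet_dir):
        for f in files:
            # Only model weight files appear in the dropdown.
            if f.endswith((".safetensors", ".ckpt", ".pt")):
                rel = os.path.relpath(os.path.join(root, f), controlnet_dir)
                names.append(rel)
    return sorted(names)
```

If "sdxl\OpenPoseXL2.safetensors" (or "sdxl/OpenPoseXL2.safetensors" on Mac/Linux) is missing from the output, the file is in the wrong folder.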

Chack Noriss

Is it possible with the NF4 FLUX model?

Corholio Zoidberg

Do you think this will work with PONY ??

grunchy

I got this as well, using flux1-dev-Q4_K_S.gguf and the t5-v1_1-xxl-encoder-Q5_K_M.gguf text encoder, on Windows. The SD/SDXL versions work.

grunchy

This GitHub issue addresses this specific problem: https://github.com/comfyanonymous/ComfyUI/issues/5229 (scroll down to the bottom). However, I believe what really fixed it for me was installing ComfyUI-eesahesNodes, as stated in point 2 of the installation guide; I installed it manually. I also copied the .json file that sits next to the .safetensors file into the folder. That got me past the ControlNet stage, but I am stuck at the FaceDetailer.

grunchy

Gunay points to the solution. However, I believe what really fixed it for me was installing ComfyUI-eesahesNodes, as stated in point 2 of the installation guide; I installed it manually.

grunchy

Could not get the character sheet to work with FLUX, but SDXL is good. I noticed that the face poses on the lower left break if I add specific clothing: if I mention gloves or below-the-waistline clothing, the prompt tries to incorporate them into the face images as well. The results are a problem for me, but funny in a way too :-)

grunchy

A 3060 with 12GB VRAM, but using the 4-bit GGUF. I could not get past face detailing.

Maxy Jof

Hi, I've got a small problem: I get an error saying "ControlNetApplySD3 'NoneType' object has no attribute 'copy'", and I guess it's because I'm using a different ControlNet, but I can't find the same one you use via your document.

oscar DCDC

Hey guys, thanks for the amazing job! Do you think it's possible to use an image along with the prompt? For example, I want to make a LoRA of a dead king and I only have a few paintings of him. I only know how to do it with a portrait as the input for the latent image, but that ruins the whole workflow...

Sarah

Hey, I really love your work and I've tried your workflow; it's impressive. I am a beginner with ComfyUI and I would like to know: if I already have a character created by another AI tool, how can I modify your workflow to get a character sheet out of a text + image prompt? I still want to keep your pose sheet, but I can't figure out how to input two images into the ApplyControlNet node.

Thorfs

Same issue here with both FLUX workflows, v3 and v4, but SDXL is working.

Thorfs

OK, I will describe my errors with the FLUX model in more detail, because there are differences between v3 and v4.
In v3 I get the error message: "SamplerCustomAdvanced not enough values to unpack (expected 2, got 1)". If I change the batch size to 2, I get another error message; maybe that helps in finding the error source: "SamplerCustomAdvanced The size of tensor a (128) must match the size of tensor b (2) at non-singleton dimension 6".
In v4 I get the error message: "ControlNetApplySD3 'NoneType' object has no attribute 'copy'".

Michel

Hi, I had the same issue yesterday. I found a post suggesting this ControlNet instead: https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro/tree/main It solved my problems ;)

Thorfs

And now it solved mine :) Thank you so much for your hint!

marquis m

How do you include styles from Civitai? I see checkpoints, but no LoRA explanation or additions to the setup. That way we could diversify our ideas and not stick to one style, like many you see on Civitai.

Ricardo Sanchez

If you are working with FLUX, you still need to put the files in their place: https://docs.google.com/document/d/1PHYMpXqfNKj9dQIMVpXAg7R4FjNFIINok09L9heQTnM/edit?tab=t.0 Go to the FLUX section and follow everything.

Ricardo Sanchez

https://docs.google.com/document/d/1PHYMpXqfNKj9dQIMVpXAg7R4FjNFIINok09L9heQTnM/edit?tab=t.0

Ricardo Sanchez

The file unionflux.safetensors is the one named "diffusion_pytorch_model.safetensors". Put it in this path: ComfyUI_windows_portable\ComfyUI\models\controlnet\flux Close everything and reopen it, and in the Load ControlNet Model node select the model "flux/diffusion_pytorch_model.safetensors". https://docs.google.com/document/d/1PHYMpXqfNKj9dQIMVpXAg7R4FjNFIINok09L9heQTnM/edit?tab=t.0

Ricardo Sanchez

Install ComfyUI https://docs.google.com/document/d/1PHYMpXqfNKj9dQIMVpXAg7R4FjNFIINok09L9heQTnM/edit?tab=t.0

Dani

Hi guys, I was playing with the v04 sheet but I am getting this error:
ERROR:root:Failed to validate prompt for output 87:
ERROR:root:* (prompt):
ERROR:root: - Required input is missing: images
ERROR:root:* SaveImage 87:
ERROR:root: - Required input is missing: images
ERROR:root:Output will be ignored
INFO:root:Using split attention in VAE
(I have turned on the debug option in main.py, which is why each log line appears twice; I'm using Flux1-dev.) Node 87 is in the upscaling part. I have already disabled it from Fast Groups Muter, but it still shows this error. Does anyone know what to do?

grunchy

I have fixed this on my machine. See my older comments on this post

Teejey

Amazing tutorial, my first deep dive into locally run AI. Q: I can't seem to change the LoRA name input on the Flux-LoRA ComfyUI template. Edit: solved, you have to create your own folder inside ComfyUI.

Massimo Moro

Has anyone managed to get ComfyUI, FLUX, and LoRA working on an NVIDIA RTX L4 virtual machine on Google Cloud with Windows Server 2022, Python 3.12, CUDA 12.4, and PyTorch 2.5?

Vitalii Mykhailyshyn

Hi! If I want to use ThinkDiffusion for setting it all up, do I need to download everything to ThinkDiffusion's private cloud to run this?

Zulfariz Abd Majid

Hi, no image comes up when I use FLUX. The Save Image node has a red line around it (I guess that is the culprit). Please help; how do I solve this problem?

Zulfariz Abd Majid

Hi, when I use the SDXL workflow I get this error: "This Controlnet needs a VAE but none was provided, please use a ControlNetApply node with a VAE input and connect it."

Laura Gonzalez Collado

Hey, can somebody help me? I'm having problems with the sampler :(

Benjamin Lebron

I got it to work; I kinda want to cry a little bit. Thank you for creating these tutorials and workflows. I can't imagine the work that went into developing them.