Thank you so much for supporting the channel and my work! ❤️

Be sure to check out our community Discord and let me know what you want to see next!

Below you can download:

  • Sample FLUX Training Datasets

  • The Flux Loras I trained with these datasets

  • The character sheets for creating the datasets (with embedded metadata: just drag and drop them into ComfyUI to load the dataset-creation workflow, prompt included)

  • The advanced workflow for FLUX

  • The advanced workflow for SDXL

Thanks again for making these videos possible. I hope you can create some amazing things with them. Be sure to share your results in our Discord :)

UPDATE

I just uploaded a new version with a new ControlNet implementation that should fix the errors some of you have been experiencing: "241027_MICKMUMPITZ_CHARACTER_SHEET_V05_FLUX_ADV.json"

Comments

red baron

Wow, amazing work. Can you also make a workflow that can create consistent characters image-to-image?

noone na

How do I join your Discord?

Armando

Where is the tinaai Lora?

Koss Billingham

I cannot get FLUX to generate realistic-looking people with this, and I tried the SDXL version and got nothing but wonky mutants. I've checked the relevant recommended settings on the civitai pages. WildcardTurboXL will generate perfectly good images with similar prompts when not using this workflow, so where am I going wrong?

Edit: Getting somewhere recreating the workflow step by step (albeit much more rudimentary). I have managed to generate some decent 3-profile models, however the lines from the control net are showing up as black lines overlaid on the person. Can't seem to prompt them out; any ideas how to sort this out?

Edit 2: For anyone having the same problem, I started a slightly different workflow from scratch, set the image generation to the same resolution as the control net, then set the strength to 0.6. You end up having to do some cherry-picking, but it works. Upscaling has mixed results, so you'll need to add a step to fix faces, but using the face detailer module you can split out a grid of faces using the cropped_refined output and feed that into individual expression editors. I cut out the face elements of the controlnet and only used the three for the body, which saves me having to clip out faces using coordinates.

Edit 3: Also, make sure your base image is the same aspect ratio as the control net image. I found I got much better results with the base image generation pixels being exactly double those of the control net.
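A minimal sketch of the resolution rule described in Edits 2 and 3 above (assuming plain Pillow; the file name is hypothetical and this is not part of the downloaded workflow files):

```python
# Sketch only: derive the generation resolution from the character-sheet ControlNet image.
# "character_sheet_controlnet.png" is a hypothetical file name.
from PIL import Image

sheet = Image.open("character_sheet_controlnet.png")
cn_w, cn_h = sheet.size

# Edit 2 above: generate at the same resolution as the ControlNet image
gen_w, gen_h = cn_w, cn_h

# Edit 3 above: or generate at exactly double the ControlNet resolution (same aspect ratio)
gen_w, gen_h = cn_w * 2, cn_h * 2

print(f"ControlNet sheet is {cn_w}x{cn_h}; set the empty-latent/image size to {gen_w}x{gen_h}")
```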

Lee Chars

After training the LoRA, faces gave consistent results, but styles still required prompting to stay consistent.

James McNulty

Can I load an image I have created, and if so, once I add the node, where/how do I integrate it into the advanced workflow?

Manuel Nixon

Thank you immensely for creating this!

Tomasz Pasieka

I got the error "ControlNetApplySD3: 'NoneType' object has no attribute 'copy'". Can you help me sort it out?

Infinity_Reynolds

Hi there! I just stumbled upon your YouTube channel yesterday and immediately joined your Patreon! Thank you for all your work! Just a question about this workflow: is it possible to connect LoRAs before the character creation, and if so, where? I am relatively new to ComfyUI, so any help would be great! 🖖 Oh, and I am using the FLUX workflow.

Richard

Thanks! Subbed! You make some great videos. Would love to see more FLUX consistent-character workflows, maybe even a LoRA guide.

Baldilocks

I had a similar issue. The discord is really helpful as other users can help as well. In this case, open ComfyUI Model Manager and install the "InstantX/FLUX.1-dev Controlnet (Union)" then restart and you should be good to go.

Koss Billingham

Right, I've cracked generating enough consistent stuff to train a LoRA. If I'm using FluxGym, is that only going to work in a FLUX workflow, or am I safe to use the LoRA for SDXL-type stuff too? I've skipped FLUX up until now as I couldn't get it to do what I wanted.

Edit 1: Solved that problem. Trained on SDXL-generated images, then moved into the FLUX workflow. You get much better results if you take the time to tag your training images accurately. If you've got the time/resources, regenerate more of what you want in FLUX (the control net works well here) and then create a new dataset to train a second LoRA from there. Took me about 5 hours to train each LoRA over 16 epochs with 52 source images (all tagged with paragraphs). Use your preferred method of inpainting/img2img/Photoshop to touch up the photos before you add them to the dataset.
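If you are tagging a dataset the way described above, a quick check along these lines can catch untagged images before a multi-hour training run (a sketch only, assuming a kohya/FluxGym-style layout with one .txt caption per image; the folder path is hypothetical):

```python
# Sketch only: verify that every training image has a same-named, non-empty .txt caption.
# "datasets/my_character" is a hypothetical folder path.
from pathlib import Path

dataset_dir = Path("datasets/my_character")
image_exts = {".png", ".jpg", ".jpeg", ".webp"}

missing = []
for img in sorted(dataset_dir.iterdir()):
    if img.suffix.lower() not in image_exts:
        continue
    caption = img.with_suffix(".txt")
    if not caption.exists() or not caption.read_text(encoding="utf-8").strip():
        missing.append(img.name)

print(f"{len(missing)} image(s) missing a usable caption: {missing}")
```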

Luis Oliveira

Hi everyone! I'm new to all of this, and I'm not entirely sure what ComfyUI is yet, but I'm here because I really need help creating consistent characters. 😅 Congrats on the workflow, it's amazing! I'm trying to get it up and running, but there are so many moving parts that I'm struggling to handle it on my own. When I load the "..V04_FLUX_ADV.json" file, I encounter three errors: ImageBatchMulti, AnyLineArtPreprocessor_aux, and InstantX Flux Union ControlNet Loader. Can anyone point me in the right direction on how to resolve these? Thanks in advance!

Gerard Colin

I'm stumped for now... Simply changing UnionFlux.safetensors to diffusion_Pytorch_model.safetensors does not work. I think it has to do with "Apply ControlNet with VAE" being deprecated. Simply replacing the node with "Apply ControlNet" also fails. I am using the Character Sheet V04 Flux Adv workflow. Thoughts?

Ekka

Prompt outputs failed validation:
SaveImage: Required input is missing: images
VAELoader: Value not in list: vae_name: 'ae.safetensors' not in ['RealisticVisionV5.1.safetensors', 'taesd', 'taesdxl']
DualCLIPLoader: Value not in list: clip_name2: 'clip_l.safetensors' not in []; Value not in list: clip_name1: 't5xxl_fp8_e4m3fn.safetensors' not in []
UNETLoader: Value not in list: unet_name: 'flux1-dev-fp8.safetensors' not in []
InstantX Flux Union ControlNet Loader: Value not in list: control_net_name: 'flux\InstantX_flux.safetensors' not in []

How do I solve these errors?

R0binho0d

The girls generated from this look VERY robot-like. How do we make a more natural-looking girl?

R0binho0d

"a character sheet, simpel background, multiple views, from multiple angles, visible face, portrait, Portrait of a 21-year-old scandinavian girl, photography, realistic, instagram" just generated an asian girl. How does this even work?

Gerard Colin

This post by Mitsuko E. cleared up my problems: "I was able to fix the ControlNet 'NoneType' error with these steps:
1. Open the ComfyUI Manager menu
2. Select Model Manager
3. Search for "Union"
4. Install "InstantX/Flux.1-dev Controlnet (Union)"
5. Refresh or restart ComfyUI
6. In the Load ControlNet Model node, select "FLUX.1/InstantX-FLUX1-Dev-Union/diffusion_pytorch_model.safetensors"
I'm not sure if this is exactly the same controlnet as intended, but it allowed the workflow to generate images and the poses seem to be correct."

UAknight

Guys, where are the full instructions on how to install and use this?

Koss Billingham

Right, for those having problems with the SDXL side: the main issue is the control net. It works fine on FLUX and horrendously on SDXL. Find some pictures of people in the poses you want, use a depth map instead, generate a bunch of individual images, in-paint the bits you don't like, and then train your LoRA from there. You can take the chunks of the advanced FLUX workflow from the template and skip the first few steps, replacing them with an image loader. A bit laborious, but it works well.

The FLUX workflow doesn't seem to want to generate anything approaching photo-real and keeps trying to make my characters look a bit Asian for some reason, but iterating through a few different workflows to get the end result you want is where the magic happens. The principles of these workflows give you all of the tools you need to achieve it. If you want the effectiveness of the FLUX control net in SDXL, use the FLUX workflow to generate the poses you want, then chop those up into depth maps for SDXL once you've generated the character poses. If I can do it with zero experience in AI image generation and 2-3 days of light practice, so can you.

Edit 1: About to experiment with the FluxRealism LoRA at the beginning of the FLUX workflow; I'll report back with results.

Edit 2: Putting the FluxRealism LoRA in at the beginning and setting the width and height to the control net image resolution seems to generate MUCH better-looking characters if you're after realism.

Edit 3: Link to the FluxRealism LoRA: https://huggingface.co/XLabs-AI/flux-RealismLora/tree/main

Koss Billingham

https://docs.google.com/document/d/1PHYMpXqfNKj9dQIMVpXAg7R4FjNFIINok09L9heQTnM/edit?tab=t.0

Koss Billingham

@Loxis This is what adding that LoRA did for the face at the end: https://imgur.com/a/MkMHLuW

R0binho0d

What resolution are you using? I am getting the same problems as you, with lots of Asian and not-at-all-realistic images.

R0binho0d

Your images of the blonde are exactly what I am trying to achieve :)

R0binho0d

@Koss Billingham do you mind sharing your updated workflow?

memyself andI

Unless I'm missing something, I see no mention of the files above and where they go

Koss Billingham

@R0binho0d - resolution is 1024 for the face crops at the end. The input resolution isn't quite as simple, as I've done a bit of resizing between groups.

Koss Billingham

I've given my notes to Mick, I'm not here to steal his thunder. I think an updated one is on the way anyway.

R0binho0d

It's hardly stealing his thunder if you help improve a workflow that isn't fully functional. Looking in the Discord, there are others who keep getting those Asian and plastic-looking people. Your help would be hugely appreciated.

UAknight

I keep getting a picture from 3 sides instead of multiple angles. Is there a fix for that?

R0binho0d

I'll trade you for the workflow. Kidding, it should be under your "Membership" tab.

UAknight

As long as you take money for it, PLEASE give full instructions on how to use the ADVANCED workflow, step by step. For me it's just a bunch of random files and nothing more.

Thibaud Herbert

Don't bother with FLUX ControlNet. I've tried every model possible and every setting, and they all just suck for FLUX; I got the same problem as you. By changing the settings I could get the entire sheet generated, but the output quality is trash. Just don't bother using FLUX ControlNets for now, they are not good right now. The only controllable images are in SDXL, where the ControlNets are good (the SD 1.5 ones are the most accurate, though).

TJ Jones

For those having problems, especially where you get an error along the lines of "...safetensors not in...": this means the model you are targeting is not available, most likely because you don't have it installed in the appropriate folder. For instance, in the Load Diffusion Model panel, the first item is unet_name. This component looks in your unet folder, and the name on the right is the file it's looking for. The names that came with the JSON workflow were for models I did not have, so I switched this to flux1-dev.safetensors, which I do have in my unet folder. Also, in the Load ControlNet Model panel, the control_net_name was a non-existent name for the models in my controlnet folder; I had to go download a version of the InstantX FLUX.1-dev Union model.

Mac users: you will run into the same issue I had, which is that fp8 will not work with Metal (MPS). You will have to use fp16 or higher. I'm pretty new to all this, but I believe the issue here is memory; the higher-precision models require more RAM, and you are pretty much screwed if you don't have at least 32GB. I have 64 in an M1 Ultra Mac Studio and have watched my RAM usage run as high as 83%. Also note, for Mac, that in the DualCLIPLoader panel you will have to load a t5xxl version that is fp16 or higher.

In a nutshell, I went to Google with all the file names of the models I needed and it brought me to the correct download pages on Hugging Face. Not sure why some of them would not show up in Hugging Face's own search, so just be aware. Windows users, similar logic applies: if you are missing something and get an error showing this, you need to find the correct model, download it, and place it in the appropriate folder. From there, in Comfy, make sure the field for that particular model is indeed the one you downloaded; if the names don't match, it won't work.

Hope this helps. Learning all this is pretty much how I spent my Friday, and yes, I was able to get the FLUX version to run on my Mac through all 5 steps and produce all the images for the blonde American woman as a test run. Note: on my Mac the initial character sheet took about 18 minutes, and running the remaining outputs together took 3.5 hours.
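Purely as an illustration of the check TJ describes, here is a minimal sketch (assuming a standard local ComfyUI install; the folder layout is ComfyUI's default, and the file names are just the ones mentioned in this thread, not a definitive list) that confirms the files the loader nodes ask for actually exist:

```python
# Sketch only: check that model files referenced by the workflow's loader nodes
# exist in the standard ComfyUI model folders. Adjust the base path to your install.
from pathlib import Path

comfy_models = Path("ComfyUI/models")

expected = {
    "unet":       ["flux1-dev.safetensors"],                        # Load Diffusion Model (unet_name)
    "clip":       ["clip_l.safetensors", "t5xxl_fp16.safetensors"], # DualCLIPLoader (fp16+ on Mac/MPS)
    "vae":        ["ae.safetensors"],                               # VAELoader
    "controlnet": ["diffusion_pytorch_model.safetensors"],          # FLUX Union ControlNet
}

for folder, files in expected.items():
    for name in files:
        path = comfy_models / folder / name
        status = "OK     " if path.exists() else "MISSING"
        print(f"{status} {path}")
```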

TJ Jones

It's just as the error reads: you are missing models in your installation that the names in the UI fields are trying to call. I just posted a lengthy message above explaining this in detail. I had to struggle through it on a Mac, but I got it up and running successfully.

Luis Oliveira

Hi! Using the SDXL ADV workflow, every time it does the emotion upscale the colors and everything else become amazing, but the emotion almost disappears. A character with an open mouth showing teeth will be upscaled with a closed mouth and an almost resting expression. What can I do?

Luis Oliveira

Noise reduction solved it. It was 0.5; I changed it to 0.2 and it is better. Now I just need to fine-tune it.

Koss Billingham

Also look at the prompt (CLIP text) that's going into the upscaler if it starts giving you things you don't want.

Ian Bright

This was crap. Do not buy this. Nothing but hours of wasted time and effort.

Exiliem Game

I am having issues running this. I am using the exact same models as in the video and it is failing. Do you know what causes this error?

c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)
AttributeError: 'NoneType' object has no attribute 'copy'
2024-10-23 10:40:59,023 - root - INFO - Prompt executed in 0.03 seconds

Gerhard Bultema

I did everything in the guide, but when pasting in the character sheet I get errors saying I'm missing these node types: FromBasicPipe_v2, CoreMLDetailerHookProvider, ToBasicPipe, UltralyticsDetectorProvider, Fast Groups Muter (rgthree), ToDetailerPipe, ExpressionEditor, Simple String, FaceDetailerPipe, ImageBatchMulti, AnyLineArtPreprocessor_aux, InstantX Flux Union ControlNet Loader. Why is this?

Exiliem Game

You need to update and install the custom nodes with the Manager. To do that, go to your ComfyUI Manager panel and click on update nodes.

Gerhard Bultema

Thank you. I have it working now, but when I click Queue Prompt I'm getting: "Failed to validate prompt for output 87: (prompt): Required input is missing: images. SaveImage 87: Required input is missing: images. Output will be ignored." How do I fix that output 87?

Gerhard Bultema

What does this error mean? "mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)"

Famous People

Hi Guys! Does anyone know why changing the seed number makes the output photo blurry?

Jan Minar

The CLIP type must be set to Flux, not SDXL. This is a bug in the workflow; see https://github.com/comfyanonymous/ComfyUI/issues/4699#issuecomment-2350815051

majdi El-Jazmawi

Hi, thanks for the amazing workflows. How can I use a FLUX LoRA from FluxGym with the FLUX character sheet workflow?

Massimo Moro

Has anyone managed to get ComfyUI, FLUX, and LoRA working on an NVIDIA RTX L4 virtual machine on Google Cloud with Windows Server 2022, Python 3.12, CUDA 12.4, and PyTorch 2.5?

Len

Only 4 emotions are getting upscaled. How do I get the rest of the poses to upscale (FLUX workflow)?

Tony Gonzales

How do I combine your FLUX LoRA workflow and the Anime Position workflow? I want to take my poses and add my characters.

Benjamin Lebron

Would anyone mind sharing their files? I can't get the thing to run and keep getting errors with ComfyUI_Impact and another node. It's been 2 weeks of troubleshooting and reading documentation :-(

Mickmumpitz

I'm sorry about that. I am currently recording a new tutorial with the updated version that shows the full installation process step by step. I hope that helps. Are you using ComfyUI with ThinkDiffusion?

Vitalii Mykhailyshyn

I'm using ComfyUI with ThinkDiffusion. Could you try to go through the whole process there? I have tried all the workflows, and the result is that this list of nodes just can't be installed. The Manager installed and updated all nodes, but this list is not changing: "When loading the graph, the following node types were not found: CoreMLDetailerHookProvider, ToDetailerPipe, ToBasicPipe, UltralyticsDetectorProvider, FaceDetailerPipe, FromBasicPipe_v2. Nodes that have failed to load will show as red on the graph." So basically some of the bottom nodes are red and that's all.

Marco Ramley

Hi! Is there any way to use this workflow with a reference image? For example, if I created a character in Leonardo AI and want the same character with this workflow, can I do that?

Benjamin Lebron

Thank you, I'll keep a lookout. I'm using the ComfyUI Windows Portable version with the ComfyUI Manager. It just won't install two custom nodes, ComfyUI_Impact or something like that (I don't have it in front of me atm).

Vitalii Mykhailyshyn

Hey, is anyone using ThinkDiffusion and getting the workflow to work?

Ebba Vei

Definitely, this would be amazing, like getting a description after inputting a photo, or adding the same prompt, seed, and size with X model, and later getting the whole photo album.

Vitalii Mykhailyshyn

I have found the reason why the workflows are not working: ThinkDiffusion does not support the ComfyUI Impact Pack nodes. Maybe it's not a big deal when you know how to replace those nodes with different ones, but I don't :) Could you help?

HipFlow

I have a problem with some missing nodes: PulidFluxEvaClipLoader, PulidFluxInsightFaceLoader, ApplyPulidFlux, PulidFluxModelLoader.

Kelevra

I'm currently experiencing the same problem as Hipflow.

Kelevra

I hope I'm not breaking any rules by posting a link to another Patreon creator's page; my sole intention is to help others who are having the same problem. I installed a fresh copy of ComfyUI using his main installer on a separate drive: https://www.patreon.com/posts/115233636. I then ran the INSIGHTFACE_AUTO_INSTALL.bat. I also needed to install a few missing safetensors files:

https://huggingface.co/mcmonkey/google_t5-v1_1-xxl_encoderonly/tree/main
https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors
https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union/blob/main/diffusion_pytorch_model.safetensors

This fixed the problem for me.
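As an alternative to clicking through the browser, here is a minimal sketch (assuming the huggingface_hub Python package is installed; the local destination path is just an example and should match your own install) that downloads the Union ControlNet file linked above into the usual ComfyUI controlnet folder:

```python
# Sketch only: fetch the FLUX Union ControlNet file linked above and copy it into
# the standard ComfyUI controlnet folder.
import shutil
from huggingface_hub import hf_hub_download

cached = hf_hub_download(
    repo_id="InstantX/FLUX.1-dev-Controlnet-Union",
    filename="diffusion_pytorch_model.safetensors",
)
shutil.copy(cached, "ComfyUI/models/controlnet/diffusion_pytorch_model.safetensors")
print("Copied to ComfyUI/models/controlnet/")
```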

Clemens Simlinger

Where can I find the "InstantX_flux.safetensor" file to download? I was not able to locate it anywhere online... Thanks!

Kelevra

https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union/blob/main/diffusion_pytorch_model.safetensors

Kelevra

I'm not sure why his safetensor files are named differently. However, this is the correct file.

Zulfariz Abd Majid

Hi, no image comes up when I use FLUX. The SAVE IMAGE node has a red line around it (I guess that is the culprit). Please help me figure out how to solve this problem.

Wasay ali

Hi, I'm getting an error: all my "Apply ControlNet" nodes are red, with the controlnet input shown as a dot in a red circle. Can anyone help me?