
NEW UPDATE 3 (8th February):
THERE IS NOW A NEW STANDALONE INSTANTID CONTROLNET VERSION!
Check the update video: https://youtu.be/8ljj3MYMYA4

Use the newest 1-click installer to install it (INSTANTID-CONTROLNET_AUTO_INSTALL.bat).
Unfortunately, the --lowvram and --medvram arguments don't work with the ControlNet version, because it still uses the same amount of VRAM to generate the new images.

NEW UPDATE 2:
YOU CAN NOW CHANGE MODELS FOR INSTANTID!
Check the update video: https://youtu.be/SMfML0P1g9Y
That's right: you want to use your favorite SDXL model? You want more realistic results? Now you can!
Thanks to the introduction of arguments!
I made 6 different arguments for you to use inside the launcher:
--inbrowser : Automatically opens the URL in your browser
--server_port : Choose a specific server port, default=7860 (example: --server_port 420, so the local URL will be http://127.0.0.1:420)
--share : Creates a public URL
--medvram : Medium VRAM settings, uses around 13GB, max image resolution of 1024
--lowvram : Low VRAM settings, uses a bit less than 13GB, max image resolution of 832

and finally

--model_path : Name of the SDXL model from Hugging Face (default model example: --model_path stablediffusionapi/juggernaut-xl-v8, a diffusers model you can find here: https://huggingface.co/stablediffusionapi/juggernaut-xl-v8)

If you want to change models, just input the repo name of the diffusers SDXL model you want to use (this only works with SDXL models, not 1.5, since InstantID was trained on SDXL only), then open the LAUNCHER.bat file with Notepad and, on line 3, add the argument. For example:
python app.py --model_path stablediffusionapi/juggernaut-xl-v8
Then save the file before launching it!
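
For reference, here is roughly what the whole edited launcher ends up looking like (a sketch based on the launcher contents shared in the comments below; your LAUNCHER.bat may differ slightly between versions):

@echo off
CALL env\Scripts\activate
python app.py --inbrowser --model_path stablediffusionapi/juggernaut-xl-v8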

DON'T FORGET TO DO A GIT PULL TO HAVE ACCESS TO THE LATEST VERSION!
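
If you're unsure how, a minimal way to do that (assuming your install folder is the INSTID folder the installer creates) is to open a command prompt in that folder and run:

cd /d C:\path\to\INSTID
git pull

Replace C:\path\to\INSTID with wherever your INSTID folder actually lives; clicking the folder path in Explorer, typing cmd, and pressing Enter also opens a prompt directly in that folder.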



UPDATE:
1) NOW you can use InstantID with less VRAM and resources!
If you have already installed InstantID, go inside the INSTID folder, click on the folder path, type cmd, then press Enter. Inside the command prompt, type git pull and press Enter, and it will update to the latest version.
2) New low-VRAM launcher added (LOWVRAM_LAUNCHER_INSTANTID.bat): download it, put it inside the INSTID folder, and double-click it to launch a version of InstantID that uses less VRAM and is much faster at generating images.
3) The Google Colab doc no longer requires a paid account! So if you have a weak GPU or no GPU at all, click on this link: https://colab.research.google.com/drive/1wYdWZFQU0QzZ8cdnsp-I9evrKPCScUET?usp=sharing
Do not forget to click on "Connect" and then "Change runtime type"; make sure you select the T4 GPU, then save before launching the doc.

ENJOY! ;)

Hey everyone! I've created a 1-click installer for the MINDBLOWING InstantID FaceGEN WebUI that allows you to generate new images of your subject from a single photo in only a few seconds, WITHOUT TRAINING! You absolutely need to try this out!


Watch the video about InstantID: https://youtu.be/jjRFa1JugA0 


The installer of course automates the entire install process!


Here's how it works:


1. Download the installer (INSTANTID_AUTO_INSTALL.bat) and the 2 launchers (LAUNCHER_INSTANTID.bat & LOWVRAM_LAUNCHER_INSTANTID.bat )


2. Create a new folder on your computer and place the installer inside

3. Launch the installer

4.???

5. Profit  😎


You can then put the launchers inside the newly created "INSTID" folder and double click one of them each time you want to launch InstantID!😉

You can also use the Google Colab doc to run InstantID:
https://colab.research.google.com/drive/1wYdWZFQU0QzZ8cdnsp-I9evrKPCScUET?usp=sharing

This is some groundbreaking tech that will probably soon be implemented everywhere, but right now you can use it to generate cool images of your subject or to make more images for LoRA training. The sky is the limit!


As always, supporting me on Patreon allows me to keep creating helpful resources like this for the community. Thank you for your support - now go have some fun😉!



Comments

Anonymous

yeah im so dam pumped for this hahahah

Anonymous

LAUNCHER_INSTANTID.bat inside the new INSTID directory doesn't run anything

Anonymous

I had the same problem. I moved and renamed the installation folder after the installation. This may have caused the problem. I undid this and then everything worked again.

Anonymous

Damnit, I think it needs at least 16 GB GPU to run well. It is so slow with a 3080 Ti...

LtCzr

I ran the installer and it works, however it didn't create the share link. It says: Could not create share link. Please check your internet connection or our status page: https://status.gradio.app. How do I create a share link? I want my family to be able to use it on their cell phones.

Anonymous

The issue I am having is that it's constantly trying to install Python even though I already have it installed; it does not get past that. If I try to run the launcher, nothing happens... still trying a few variations...

Aitrepreneur

You have an antivirus or firewall blocking the connection. I had that on another PC as well; just disable it and try again.

Anonymous

Actually at least 24 GB. After the 30 steps it doubles the memory again.

Anonymous

Mine installed and then just closes. I put the launcher into a new folder on my desktop; on click, CMD opens, runs a few things, and closes.

DanO..

Mine sat there for a LONG time before I saw progress. Did you let it run for a long time?

DanO..

My 12G 3060 isn't working, buddy. Not enough memory. What amount of VRAM is necessary when you run it?

Anonymous

how big is this once installed

Anonymous

Mine did the same; it was because I was missing the C++ build tools.

Anonymous

https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst

Scruffy Scruffington

Clicking the launcher opens up a command window really fast, but then it goes away and nothing happens.

Anonymous

The only two steps I'm getting stuck on are: git clone https://huggingface.co/Aitrepreneur/models keeps giving me an error: fatal: unable to access 'https://huggingface.co/Aitrepreneur/models/': Recv failure: Connection was reset. Everything else works until I type in "python app.py", at which point I think the failure of the models download wreaks havoc with things. Anyone have any suggestions on getting the models? Thanks!

Anonymous

When I run INSTANTID_AUTO_INSTALL in PowerShell, I get the message "ModuleNotFoundError: No module named 'cv2'". If I just double-click the .bat files according to the instructions, then it doesn't run the program.

Anonymous

Try installing this: https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst It worked for me

Anonymous

My poor 3060ti, taking about 15 minutes to generate a single 30 step image 😢 amazing tech though, hope to see it further optimised if possible!

Anonymous

You need to install the C++ files before installing InstantID. If you haven't installed them before, you will get errors when installing InstantID.

Anonymous

I can run it on my 16G 4060, however it uses 4GB of additional shared vram

Anonymous

Same here, I tried moving into the INSTID folder to launch from there and that didn't work either.

Anonymous

I installed it, but when I launch it the cmd window opens and closes instantly... anyone know how to fix this? (Sorry if my English is kinda bad)

Anonymous

Hi Esben, thank you so much, this link was super helpful for me. everything works fine now!

Aitrepreneur

Thanks Esben, I modified the installer to add that to the installation; I already had it installed myself, so I hadn't included it originally.

Anonymous

I have an issue myself. For some reason, the only way I can start the app is to run the installer again. CMD is closing faster than OBS studio captures:D Maybe something in venvs?

Aitrepreneur

You might have an antivirus blocking the file maybe? Try disabling it and try again with the launcher. Or you can just do it manually: go inside the INSTID folder, click on the folder path, type cmd and press Enter to bring up the command prompt window; inside it, type env\Scripts\activate and press Enter, then type python app.py and press Enter.
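
Put together, that manual launch is just these commands from a command prompt (a sketch; C:\path\to\INSTID is a placeholder for your actual install folder, and env is the virtual environment the installer creates):

cd /d C:\path\to\INSTID
env\Scripts\activate
python app.py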

JamZam WamBam

It's working for me now ;)

JCS

Any way to install this on RunPod?

Anonymous

here is error even with new batch file: https://imgur.com/a/u5kwduq

Anonymous

thank you sir, may the sun meet your crops each morning

Anonymous

Also, thanks for being thoughtful with these scripts. Can't count how many times I've run similar batch files that kick off the Python installs without checking to see if it's installed first. Can be a goddam mess to clean up.

Anonymous

If you open CMD and run "where python" what do you get back? (or "which python" on Linux). If it's not something with "Python310\python.exe" at the end, you have the wrong version of Python from a previous Python install. The auto install script will not install Python if you already have it. Whatever version you have can't handle the dependencies you need. If you need that other Python version, smallest headache might be manual install of InstantID. Not too difficult. Then you get a virtual environment. If you don't need the other Python, uninstall it and re-run the script.

Anonymous

When opening the launcher, I keep getting a "Matching Triton is not available" , then a bunch of errors before it closes right away. I don't have any 3rd party antivirus and I added the folder as an exception in Windows Security. Not sure what the issue is.

Anonymous

I'm finally able to run it, but the speed is very slow; I have a 3080 11GB and an i9 processor.

Anonymous

same error here, how did you fix it

Diggy Dre

Is anyone getting anything "realistic" from this?

Anonymous

The one click installer didn't work for me so I did it manually, but when I reached the part where I need to type: pip install -r requirements.txt - I got this error: ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'

Anonymous

Thank you for your work. Hmm, it seems 12 GB of VRAM is not enough... but I run XL models in SD/Comfy with complex workflows without any problem. Where can we balance the usage of the VRAM? torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 880.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 0 bytes is free. Of the allocated memory 10.03 GiB is allocated by PyTorch, and 512.43 MiB is reserved by PyTorch but unallocated

Anonymous

I think you have to go to the cloned repository before typing the pip install -r requirements.txt

Anonymous

Works one time, but now I have this: "torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 474.00 MiB. GPU 0 has a total capacty of 8.00 GiB of which 0 bytes is free. Of the allocated memory 12.70 GiB is allocated by PyTorch, and 829.85 MiB is reserved by PyTorch but unallocated." ... Maybe it's too much for my little 3060 Ti xD

Anonymous

Can you add an option for size of batches and number of batches for generation in the UI, so that we can do a bunch of iterations set-and-forget instead of needing to press the generate button every time we want a new photo?

Anonymous

Weirdly enough the models won't install if I used the INSTID folder, however it will load outside of the INSTID folder..

Anonymous

Man, too much hype for this, and it looks like it is not as easy as you describe in your video... You need to fix it.

Anonymous

3070ti and its extremely slow. what is the recommended GPU VRAM for this to be quicker?

Anonymous

Thanks for the files. Perhaps a warning for those of us on slow connections that this is going to slam you with almost 10gb worth of downloads. Knowing that, those of us who are bandwidth challenged can plan accordingly.

Anonymous

300 seconds past and still step 3... bullshit

Anonymous

I've downloaded it (V2) and tried to run it, but it didn't work, even after installing Python. What did I do wrong?

Aitrepreneur

uninstall this version of python and install either this one: https://www.python.org/downloads/release/python-3106/ or rerun the installer

Aitrepreneur

The Triton error is normal. What other errors did you get next? Maybe try making a recording and sending it to me?

Aitrepreneur

Depends on how realistic you're talking about but I did, try playing around with the parameters and don't select any style for that

Aitrepreneur

You can dl the latest nvidia drivers and make sure that you have enabled system memory fallback, once you need to use sd/comfy disable system memory fallback again https://nvidia.custhelp.com/app/answers/detail/a_id/5490/~/system-memory-fallback-for-stable-diffusion

Aitrepreneur

Not enough VRAM to run this. If you really want to use it (although for you it will be slow), you can try to dl the latest Nvidia drivers and enable system memory fallback; once you need to use SD/Comfy, disable system memory fallback again https://nvidia.custhelp.com/app/answers/detail/a_id/5490/~/system-memory-fallback-for-stable-diffusion

Aitrepreneur

As much as possible tbh, with my 3090 it takes between 18 and 25s to generate an image

Anonymous

Is there any tutorial on how to install it on Ubuntu 22.04? I always end on:: "CUDA out of memory. Tried to allocate 1.56 GiB. GPU 0 has a total capacty of 15.69 GiB of which 518.50 MiB is free. Including non-PyTorch memory, this process has 14.80 GiB memory in use. Of the allocated memory 12.81 GiB is allocated by PyTorch, and 1.69 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF" SDXL is running like a charm. What am I doing wrong? Cheers!

Aitrepreneur

This means you don't have enough VRAM to run this. If you really want to use it (although for you it will be slow), you can try to dl the latest Nvidia drivers and enable system memory fallback; once you need to use SD/Comfy, disable system memory fallback again https://nvidia.custhelp.com/app/answers/detail/a_id/5490/~/system-memory-fallback-for-stable-diffusion

Anonymous

I'm using a 4060 Ti with 8GB VRAM, but it looks like everyone gets the same result, so it's not about me or the exact VRAM amount right now.

Anonymous

Thanks for your super-quick reply! I was sure a 4080 is capable, but I'll give it a shot.

Anonymous

I get a "ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host", then several Traceback errors.

Aitrepreneur

oof yeah 8gb is way too little for instantid to work at a decent speed, you can try the google colab doc though

Anonymous

Whether manual or through the installer the results are the same.

Aitrepreneur

you'll run it no problem, might be slow though, vram is king for ai tools such as this one

Anonymous

Ok, with the 3070ti in my laptop it took at least 20 minutes to generate an image. I’ll try on my 3080. I’m waiting for prices to drop before I get a 3090 or 4090 for the vram upgrade.

Anonymous

Just a little update: it runs flawlessly and super quick on Win11, so it must be an Ubuntu issue. Will dig a little deeper later. Thanks again!

Jim Gale

getting this from the 1-click installer: ... Looking in indexes: https://download.pytorch.org/whl/cu118 ERROR: Could not find a version that satisfies the requirement torch (from versions: none) ERROR: No matching distribution found for torch Collecting diffusers==0.25.0 (from -r requirements.txt (line 1)) Obtaining dependency information for diffusers==0.25.0 from https://files.pythonhosted.org/packages/aa/0b/af1dd4b355accf28fa413d93c3b7df2c25922ebc3435eecc56818a3f8c1e/diffusers-0.25.0-py3-none-any.whl.metadata Downloading diffusers-0.25.0-py3-none-any.whl.metadata (19 kB) Collecting transformers==4.36.2 (from -r requirements.txt (line 2)) Obtaining dependency information for transformers==4.36.2 from https://files.pythonhosted.org/packages/20/0a/739426a81f7635b422fbe6cb8d1d99d1235579a6ac8024c13d743efa6847/transformers-4.36.2-py3-none-any.whl.metadata Downloading transformers-4.36.2-py3-none-any.whl.metadata (126 kB) ---------------------------------------- 126.8/126.8 kB 1.1 MB/s eta 0:00:00 Collecting accelerate (from -r requirements.txt (line 3)) Obtaining dependency information for accelerate from https://files.pythonhosted.org/packages/a6/b9/44623bdb05595481107153182e7f4b9f2ef9d3b674938ad13842054dcbd8/accelerate-0.26.1-py3-none-any.whl.metadata Downloading accelerate-0.26.1-py3-none-any.whl.metadata (18 kB) Collecting safetensors (from -r requirements.txt (line 4)) Obtaining dependency information for safetensors from https://files.pythonhosted.org/packages/7c/22/27ba66710dda4cdca147ad5d8666d9cf0e2035c0aecb5f4de4e6b0e1bc49/safetensors-0.4.2-cp312-none-win_amd64.whl.metadata Downloading safetensors-0.4.2-cp312-none-win_amd64.whl.metadata (3.9 kB) Collecting einops (from -r requirements.txt (line 5)) Obtaining dependency information for einops from https://files.pythonhosted.org/packages/29/0b/2d1c0ebfd092e25935b86509a9a817159212d82aa43d7fb07eca4eeff2c2/einops-0.7.0-py3-none-any.whl.metadata Downloading einops-0.7.0-py3-none-any.whl.metadata (13 kB) ERROR: Could not find a version that satisfies the requirement onnxruntime (from versions: none) ERROR: No matching distribution found for onnxruntime [notice] A new release of pip is available: 23.2.1 -> 23.3.2 [notice] To update, run: python.exe -m pip install --upgrade pip Traceback (most recent call last): File "E:\ai-art\InstantId\INSTID\app.py", line 2, in import cv2 ModuleNotFoundError: No module named 'cv2' (end)

Anonymous

Works! With the 3060 Ti it took 1000 seconds (more or less) for one image... but it works. Thanks!

Anonymous

It shows the cmd window as you described, and then inside the window appears an odd blue field with some yellow declaration lines and yellow numbers counting up, and that's it. The first time that occurred it asked me to install Python, which I did. After that, only the cmd window appears with the odd counting and that was it... no installing, nothing... I waited and waited... nothing more happened.

Aitrepreneur

What I would do first is completely uninstall Python and the C++ build tools. Then reinstall the C++ build tools manually, look here: https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst Then, just to be sure, I would reinstall Python manually with this installer https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe and not forget to check the "add to path" checkbox. Then try disabling any antivirus/firewall you have going on (or uninstall the antivirus, because no one needs it anymore tbh...). Then you can launch this installer: https://we.tl/t-3bUpuiBVGb Hopefully after all this it should be fine.

Anonymous

Could it be used for face swapping? How?

Anonymous

Normal settings and lowvram setting both take the same amount of vram for me for some reason? 12gb crashes (cuda overload)

Anonymous

Hello overlord, if you have time, maybe you can rewrite the instructions for dummies like me. When I do all points of the install, exactly nothing happens at the end. If I run the LAUNCHER_INSTANTID, a small cmd window opens for about 1 millisecond, that's all, nothing happens. The install itself works I think, my PC downloaded a bunch of files and programs, but it will not start at the end.

Anonymous

I fixed it: I used the installer again, it downloaded a lot more files, and now it works.

Anonymous

I'm having the same issue: when I do all points of the install, exactly nothing happens at the end. If I run the LAUNCHER_INSTANTID, a small cmd window opens for about 1 millisecond, that's all, nothing happens.

Anonymous

It took 10 minutes on my 3090. Not workable for me at this speed. Although when I see you render it on your 3090, it takes about 10 sec?

Anonymous

My 3060 Ti had the image done in 600 sec. In the previous version it took 1100 sec. Thanks!!! No issues this time. Great job!

DanO..

So close! It ran through the 30 steps and then ran out of memory (3060 12GB). I shut down and started up again and noticed it was at 10.8GB before I even tried to generate an image. I get this warning at the very start in the CMD window: "A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton'" I don't know if this helps.

DanO..

The Collab doesn't work for me either. Errors out. Last thing it prints is: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

Aitrepreneur

you just need to select a GPU, just click on the connect arrow option, change runtime type, under Hardware accelerator check the T4 GPU then click save. Then run the cells.

Aitrepreneur

for the local install the error is normal, just dl the latest nvidia drivers and make sure you have system memory fallback activated (it's activated by default)

Aitrepreneur

it takes around 12-14s with the newest launcher to generate an image with my 3090, are you sure everything installed correctly?

DanO..

That worked! I still get that error but it's working now. I hate updating drivers because I've been burned so many times in the past (3 decades). When I went to update the driver I saw they have a game driver and a studio driver. I loaded the studio driver and that worked! I think somewhere they say it's a good idea to change the model to one you like. Do you know how to do that? [And... thanks!]

Anonymous

Getting this error no matter whether I use the normal or the lowVRAM launcher on an 3080 10GB: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 10.00 GiB of which 0 bytes is free. Of the allocated memory 9.00 GiB is allocated by PyTorch, and 307.01 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF I am also getting the 'matching triton not available' error. Any ideas?

Aitrepreneur

This means you don't have enough VRAM to run this. If you really want to use it (although for you it will be slow), you can try to dl the latest Nvidia drivers and enable system memory fallback; once you need to use SD/Comfy, disable system memory fallback again https://nvidia.custhelp.com/app/answers/detail/a_id/5490/~/system-memory-fallback-for-stable-diffusion The Triton error is normal on Windows. You can also use the updated Google Colab doc if you want.

Anonymous

Thanks for the reply. I thought I had the fallback turned back on but it turned out that I had not :/ Having a play now and we shall see how horrendously slow it is :D

Anonymous

Since the style_template.py file is part of the project, how can I add my own styles so that I can update them later? Also, can you include it so that the Auto1111 style.csv file can be used underneath? Could you integrate it to automatically launch the local address in the browser? Update 1: Thank you for --inbrowser option in latest update! 💖

Keiron Gulrajani

It seems to run very slowly on my nvidia 3080 (8gb card) on my laptop, I have plenty of ram (32gb) will the lower vram launcher work quicker?

Anonymous

Big fan of your work! I launched the low vram version and it worked! But for some reason processing one image seems to take 15 minutes. My computer can handle SDXL on Comfy and seems to be able to put out four images in a couple minutes on that, for context. My specs are HP Envy Desktop Bundle PC, NVIDIA GeForce RTX 3070 Graphics,12th Generation Intel Core i9 Processor, 16 GB SDRAM, 1 TB SSD, Windows 11 Home OS, Wi-Fi & Bluetooth (TE02-0042, 2022). I just can't figure out if this is an optimization issue or if my computer just cant do it? So any help for this novice is appreciated.

Anonymous

I've been trying for days, updated Python, did clean installs, and the installer just isn't working. It says it can't find a version that satisfies the requirement onnxruntime (from versions: none), and the same for torch. I tried manually following the tutorial and the same thing happens when trying to install torch/torchvision/torchaudio...

Mark Sutherland

Thanks for the file and your hard work. It seemed to install and work fine then suddenly stopped making the images look like the face in the reference photo, any ideas, cause I have no idea why it stopped working as it should. Card is a 16GB rtx4000

Aitrepreneur

Use the --lowvram argument, but 8GB is still very little to run this at a quick speed; try using the Google Colab doc instead.

Aitrepreneur

You have 8GB of VRAM; you can use the --lowvram argument in the launcher, but again 8GB is still very little to run this at a quick speed. You can try using the Google Colab doc instead.

Aitrepreneur

You probably have a bad initial Python install. What I would do first is completely uninstall Python and the C++ build tools. Then reinstall the C++ build tools manually, look here: https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst Then, just to be sure, I would reinstall Python manually with this installer https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe and not forget to check the "add to path" checkbox. Then try disabling any antivirus/firewall you have going on (or uninstall the antivirus, because no one needs it anymore tbh...). Then you can launch this installer: https://we.tl/t-3bUpuiBVGb

Anonymous

Hi and thanks! I did a new install with your V2 installer, and the launcher is working when I use "normal" InstantID. But when I add --inbrowser --model_path stablediffusionapi/juggernaut-xl-v8, it doesn't download Juggernaut at all. I got: Arguments currently in use: inbrowser: True Keyword arguments {'safety_checker': None} are not expected by StableDiffusionXLInstantIDPipeline and will be ignored. Loading pipeline components...: 100%|█| 7/7 [00:00<00:00, 8.66steps/s] Running on local URL: http://127.0.0.1:7860 To create a public link, set `share=True` in `launch()`. The file is up to date and everything is working fine except that. Do you have an idea? And as always, thanks for the hard work!

Solar Zaffiro

Not sure why, but on V1 / the last time I installed this, it took around 10-15 minutes to generate. Now on V2 it's close to taking an hour to generate. I'm on a 5900X and an RTX 3080 (10GB VRAM). I don't think it should be taking this long to generate. I tried messing with --lowvram and --medvram; yeah, they help a little, but I was running it on full default settings on V1 and it generated much quicker.

Anonymous

curl: (35) schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - The revocation function was unable to check revocation for the certificate. The system cannot find the file C:\Users\Name\AppData\Local\Temp\Microsoft.DesktopAppInstaller_8wekyb3d8bbwe.msixbundle.

Anonymous

Full error: [WARN] Winget is not installed on this system. [INFO] Installing Winget... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (35) schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - The revocation function was unable to check revocation for the certificate. The system cannot find the file C:\Users\Name\AppData\Local\Temp\Microsoft.DesktopAppInstaller_8wekyb3d8bbwe.msixbundle. [INFO] Winget installed successfully. Python already installed. Git already installed. [INFO] Installing Microsoft.VCRedist.2015+.x64... 'winget' is not recognized as an internal or external command, operable program or batch file. [INFO] Installing Microsoft.VCRedist.2015+.x86... 'winget' is not recognized as an internal or external command, operable program or batch file. [INFO] Installing vs_BuildTools... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (35) schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - The revocation function was unable to check revocation for the certificate. [ERROR] Download failed. Please restart the installer Press any key to continue . . .

Anonymous

I've altered your app.py a bit and now I can load .safetensors (by using the gradio sample from the original repo) , this should be something interesting to add too... I think it's working but I need to test after waking up in the morning =/ at this point my own bio NN might be hallucinating stuff

Anonymous

I have the exact same issue .. and the .bat file closes itself without any continuation

Anonymous

I'm looking for a solution to handle a batch of them.

Anonymous

Does anybody have a solution to this? LAUNCHER.bat with " --inbrowser --model_path stablediffusionapi/juggernaut-xl-v8" doesn't download Juggernaut at all. I got: "Arguments currently in use: inbrowser: True Keyword arguments {'safety_checker': None} are not expected by StableDiffusionXLInstantIDPipeline and will be ignored. Loading pipeline components...: 100%|█| 7/7 [00:00" and the cmd closes itself... what could be the issue, after running a fresh install of the new _V2 file?

Anonymous

any option to 'generate forever'?

Anonymous

Good morning friend, I have an RTX 2080 8 GB and started the launcher with the --lowvram option, but it's still extremely slow. I can buy another RTX 2080 8GB for 250 euros and link them together with a bridge. What do you think? Is it worth the effort and money to do that?

Anonymous

Nope, it doesn't work that way; you can't combine VRAM amounts, I'm afraid.

Mark Sutherland

Thanks for the reply. I looked into the models folder and didn't see the onx128 file, which I believe is the faceswap model. Copied it in and it worked. No idea how it got deleted. I have watched your last video, great as always, and will use git pull to update etc; it would probably have noticed and replaced it anyway, it's just odd how it got deleted. It was literally working fine, then stopped swapping the face. Up and running now, and thanks again for the fantastic videos on InstantID, great results with it.

Anonymous

Cannot install properly. When I launch the first application, a cmd window runs, downloads Python, and then disappears. What's wrong? What are the minimum requirements to run this application? Help me. Thanks.

Anonymous

The one-button install is not downloading properly; can you give more step-by-step instructions?

Anonymous

it keeps saying it isn't commonly downloaded and won't open

Anonymous

I am so confused, I don't understand code that's why I paid to have a "one click" and do nothing else experience.

Aitrepreneur

That's because juggernaut-xl-v8 is the model it downloads by default, so if you launched the newest installer without any arguments first, it automatically downloaded the Juggernaut model. In the video I used the --model_path argument because it was a new install and I was lazy to look up another model to dl xD If you go to the C:\Users\YOURUSERNAME\.cache\huggingface\hub folder you can see what models you have downloaded already; the models--stablediffusionapi--juggernaut-xl-v8 folder should be there already. If some of you have the bat file closing upon launch, it's usually either because of an antivirus/firewall blocking the file and/or the fact that you don't have the Microsoft C++ tools installed. So disable the antivirus/firewall and try again, and you can also dl the C++ tools by following this guide https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst Or launch the installer, it will automatically install the C++ tools too.

Aitrepreneur

Yes, that's because it's now using the Juggernaut V8 model, so the quality is better but it takes longer to generate. If you want to use the older model, you can use --model_path wangqixun/YamerMIX_v8.

Aitrepreneur

what happens if you disable the antivirus and firewall and you try running the installer again? You can also try this installer: https://we.tl/t-cDVpmfAp7F

Aitrepreneur

create a pull request on github if you can, that would be great thx: https://github.com/aitrepreneur/INSTID/pulls

Aitrepreneur

That's because juggernaut-xl-v8 is the model it downloads by default, so if you launched the newest installer without any arguments first, it automatically downloaded the Juggernaut model. In the video I used the --model_path argument because it was a new install and I was lazy to look up another model to dl xD If you go to the C:\Users\YOURUSERNAME\.cache\huggingface\hub folder you can see what models you have downloaded already; the models--stablediffusionapi--juggernaut-xl-v8 folder should be there already.

Aitrepreneur

yeah no, either use the google colab doc or get a Nvidia GPU with more vram (as much as you can)

Aitrepreneur

If you have the bat file closing upon launch, it's usually either because of an antivirus/firewall blocking the file and/or the fact that you don't have the Microsoft C++ tools installed. So disable the antivirus/firewall and try again, and you can also dl the C++ tools by following this guide https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst Or launch the installer, it will automatically install the C++ tools too. You can also try this installer: https://we.tl/t-cDVpmfAp7F

Aitrepreneur

I need a bit more info there and the full, precise error log. Try also disabling the antivirus/firewall, and try this installer as well: https://we.tl/t-cDVpmfAp7F

Anonymous

what about "no triton module found" is that going to affect generating image time?

Devin Blair

\INSTID\app.py", line 2, in import cv2 ModuleNotFoundError: No module named 'cv2' I get this error, what am I missing?

Anonymous

Tcs, I could verify it works, but there are some dependency conflicts, I'm going to try to find some time tomorrow(I'm in the middle of a project for a client and spare time is... Sparse) at least push a sample to a fork and send you a link ...

Virtamouse

Where is the low memory bat download?

Solar Zaffiro

last question, if this saves the photos locally, what folder is it?

Khoa Vo

Is it possible to use my own SDXL models located on my computer? Or ones from CivitAi? How come the model path only expects models from hugging faces?

Anonymous

Yes, you are correct... now, won't the LAUNCHER.bat file be used to start the model in my browser? How do I gain access to the GUI if the launcher opens, gets to that 7/7, and then closes itself? How should we resolve this and gain access to the tool? (Seems like a few other people are like me... stuck and can't do anything. Please HELP OP :D)

Anonymous

Sigh. Sadly none of this works. I don't have time to debug this. I've successfully installed AUTO1111, ComfyUI, ReActor, and Oobabooga with no problems. I've tried multiple installation methods from 3 different YouTubers. None work. Will just have to wait until someone fixes this shit.

LtCzr

Is it possible to have multiple launchers? To have one for each model that I plan on using or do I just need to change the model each time i want to switch?

Anonymous

I don't know why, but for some reason it's not working on my desktop. It seems like it runs and installs, then just closes; when I try to run it, my command window pops up and closes again. Also, does it work with arm64 processors?

LtCzr

also im getting an error that says missing triton

Anonymous

So I wiped everything, did a fresh install, disabled Win Defender, turned off the firewall, even added exclusion folders JUST IN CASE xD... I know the .bat script installs its own Visual Studio and Python, that's great... Had to run Windows 11 in Developer Mode... Ready to put the LAUNCHER.bat file in the folder... launch it... it does the whole 7/7 prompt and closes itself, instead of opening a tab in the browser with the application running... OP help xD

Aitrepreneur

It might be possible to use local models, I still need to check on that; it's just the way it was set up, since it uses the diffusers format.

Aitrepreneur

Yes, you can just copy and paste the launcher, modify the argument inside, and rename it to the model name.
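
For example, two side-by-side launchers might look like this (a sketch; the file names are made up, and the second repo is just the older default model mentioned elsewhere in this thread):

LAUNCHER_JUGGERNAUT.bat:
CALL env\Scripts\activate
python app.py --inbrowser --model_path stablediffusionapi/juggernaut-xl-v8

LAUNCHER_YAMERMIX.bat:
CALL env\Scripts\activate
python app.py --inbrowser --model_path wangqixun/YamerMIX_v8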

Aitrepreneur

If you have the bat file closing upon launch, it's usually either because of an antivirus/firewall blocking the file and/or the fact that you don't have microsoft cpp tools installed. So disable the antivirus/firewall and try again and you can also dl the Cpp tools by following this guide https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst Or launch the installer, it will automatically install Cpp tools too You can also try this installer: https://we.tl/t-cDVpmfAp7F

Aitrepreneur

I mean once again, If you have the bat file closing upon launch, it's usually either because of an antivirus/firewall blocking the file and/or the fact that you don't have microsoft cpp tools installed. So disable the antivirus/firewall and try again and you can also dl the Cpp tools by following this guide https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst Or launch the installer, it will automatically install Cpp tools too You can also try this installer: https://we.tl/t-cDVpmfAp7F

LtCzr

Is the generation generally tied to just what the image shows? If it's just a head, it doesn't really seem to generate a full body unless I either use a reference image or prompt it very heavily. Is this normal?

Aitrepreneur

Yes? That's how it works, if you only put 1 reference image it will take all data from that image only, that's why you have the second image field for the pose and aspect ratio

LtCzr

Sorry one last question, how do i change the settings for how big the image is? is it tied with the second reference image?

Peter

Do I need to reinstall V2 to switch models?

LtCzr

I downloaded all of the models by duplicating the launcher but changing the model. Where are they found? I can still switch between them, but the folder with InstantID hasn't grown much.

Anonymous

I couldn't get the public URL up and running; it keeps showing "Could not create share link. Please check your internet connection or our status page: https://status.gradio.app."

Anonymous

Where do I get the low-VRAM launcher? I just see the LAUNCHER.bat.

Anonymous

The new launcher takes arguments. Open it up and add --lowvram to the following line:
python app.py --inbrowser
So it would look like:
python app.py --inbrowser --lowvram
Or, when you run the LAUNCHER.bat from CMD, you can do the following:
LAUNCHER.bat --lowvram

Anonymous

Trying to use the collab and not getting a public URL - it just finishes playing with the last lines "Keyword arguments {'safety_checker': None} are not expected by StableDiffusionXLInstantIDPipeline and will be ignored. Loading pipeline components...: 100% 7/7 [00:03<00:00, 2.17steps/s]" I did notice an error as it was installing "ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lida 0.0.10 requires kaleido, which is not installed. llmx 0.0.15a0 requires cohere, which is not installed. llmx 0.0.15a0 requires openai, which is not installed. llmx 0.0.15a0 requires tiktoken, which is not installed. tensorflow-probability 0.22.0 requires typing-extensions<4.6.0, but you have typing-extensions 4.9.0 which is incompatible."

Anonymous

Hello everyone, answering my own issue I was having with installing this. I uninstalled Avast and disabled Windows Defender. I also uninstalled my current version of python, and then installed the specific version of Python mentioned in Aitrepreneur's first install video (3.10.6). Then I followed the manual directions in that video, and I was able to install it.

Anonymous

This is insanely good. Just saying thanks again and for reference, I have this working on a Mac Studio M1; uses about 32GB Ram and takes around 100seconds to generate a 30step image.

Badbutton

I'm trying to use the Juggernaut model but it never downloads as in the video. My Launch.bat looks like this:
@echo off
CALL env\Scripts\activate
python app.py --inbrowser --model_path stablediffusionapi/juggernaut-xl-v8
This is correct, right? Thanks BTW for the great videos and tutorials.

Aitrepreneur

The max resolution can be changed by modifying the app.py file but I already did that, tested the most optimal resolution so to change it you can use the --medvram or --lowvram argument inside the launcher. medvram will give a max image resolution of 1024 while lowvram will give a max image resolution of 832

Aitrepreneur

no just watch my latest update video and use the --model_path argument to dl and use a specific model

Aitrepreneur

That's because you have an antivirus/firewall blocking the connection, disable them then try again

LtCzr

If I don't add these arguments and leave it blank, does it default to a higher resolution? I have an RTX 4090, so it tends to use 22GB of VRAM without those arguments. Should I just use --medvram if it gives the same level of output? Or does it slow down with less VRAM usage?

Aitrepreneur

That's because it's the default model, so there's no need to even put that argument inside the launcher; I did that in the video just as an example. If you go to the C:\Users\YOURUSERNAME\.cache\huggingface\hub folder you will see a models--stablediffusionapi--juggernaut-xl-v8 folder; that's where the model is. Only put the --model_path argument with another model you want to use, as I showed in the video.

Aitrepreneur

Yes, max resolution of 1280, so with your 4090 don't use any arguments for a higher res image, the higher res, the longer it takes to generate though

Anonymous

Thanks Overlord. I had the same issue as Badbutton and this explains it. I expected the model to be placed in /models but did not see it, so I was confused. Now I see it's being downloaded by hub, and now I understand where hub actually places its models!

Waub Amik

Is there a folder that images output to? Or do you need to save each one?

Anonymous

Everything installed without any problems. The LAUNCHER.bat is inside the newly created folder, but whenever I try to launch it, it immediately shuts down. A little help?

Anonymous

Appreciate you bro, I need to chat with you about a project I am trying to wrap up but am stuck on some particulars...

Anonymous

Are you using a certain browser? Just tried again and it stops right before the URL is created unless it’s hidden for some reason.

Anonymous

Tried many ways but still cannot launch this. Disabled the antivirus/firewall... reinstalled Windows... but still have the above problem. Help me.

Anonymous

Is there any auto "generate forever" option? I don't want to click submit one by one.

Anonymous

I figured out my mistake - do not forget to click on "connect" and then "change runtime type", make sure you select the T4 GPU then save before launching the doc.

Anonymous

So I am using the google collab and none of the styles are applying - any idea why? Or if they do its very minimal - not like the examples at all - comparing results from https://huggingface.co/spaces/InstantX/InstantID vs this google collab are drastically different - any suggestions? It looks like the huggingface interface may be a newer version but outside that I dont really see any differences

Anonymous

A small update: I ran the installer again. It seems it doesn't finish; it just quits without any notification. Everything is updated, but the installer never finishes the installation. I checked my antivirus and everything but no joy. I am probably doing something wrong, but a little help is appreciated.

Aitrepreneur

Look at the command prompt window at the end; it should at least give you an error about what's going on (you can screen record it just in case). Once again, make sure you completely disable the antivirus and firewall before launching the installer in a new folder.

Anonymous

Tried many ways but still cannot launch this. Disabled the antivirus/firewall... reinstalled Windows... but still have the above problem, I cannot launch the application. Help me.

Anonymous

F:\AI\InstantIDAuto>INSTANTID_AUTO_INSTALL_V2.bat [INFO] Winget is already installed. Installing Python 3.10.6... Failed to download Python installer. <- just getting an error message. The antivirus is disabled... the Python installer is downloaded and I can install Python manually, but it still gives the same error message if I click on the INSTANTID_AUTO_INSTALL_V2.bat.

Anonymous

It downloads the Python installer but still gives the error message; I can manually install Python, but it still gives the error message if I click on INSTANTID_AUTO_INSTALL_V2.bat.

Anonymous

I have disabled the antivirus and it still gives the error message.

Anonymous

Hey, new to all this! Ready to learn. I am on an iPad though. Can I download this? Or I guess the better question would be: how do I download this?

Anonymous

Thanks for the answer. I got it to work. I'm using Windows 11, latest version: I had to add the Git path, the Python path, and the Python Scripts path to the system PATH. Open the Control Panel, click System and Security, then System, click Advanced system settings on the left, and inside the System Properties window click the Environment Variables... button. Then under System variables, double-click on Path and add the entries; in my case they were C:\Users\PC\AppData\Local\Programs\Python\Python310, C:\Users\PC\AppData\Local\Programs\Python\Python310\Scripts, and C:\Program Files\Git\cmd. That seems to have fixed the issues for me.

Aitrepreneur

You can't download and use that on the iPad, no; you need a powerful computer GPU for that. But you can use the free Google Colab doc I prepared and run that from your iPad.

Aitrepreneur

Nice. Usually the installer installs all of that automatically, but on some systems the security settings can block the connection from the file. Glad it worked for you ;)

Anonymous

Can I run this on an RTX 2080 Super 16GB?

Anonymous

3060 12gb?

Anonymous

I get an error about the triton module... I tried everything on GitHub, but it still says the module named triton is not found. Would you help me with this problem?

Aitrepreneur

The Triton error is completely normal on Windows; this module is only available on Linux, so no worries about that.

Anonymous

Every time I double-click the launcher, the cmd window pops up briefly and closes :\

Anonymous

I have this error when I launch it with cmd. The launcher just closes and nothing starts. I have disabled AV and threat protection for this to work. F:\SDXL\Instant ID\INSTID>python app.py Traceback (most recent call last): File "F:\SDXL\Instant ID\INSTID\app.py", line 2, in import cv2 ModuleNotFoundError: No module named 'cv2'

Anonymous

it also says UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.) hidden_states = F.scaled_dot_product_attention(

Aitrepreneur

Try this: in the cmd prompt inside the folder (click on the folder path, type cmd, then press Enter), type env\Scripts\activate and press Enter, then type python app.py --inbrowser and press Enter, and see if you get an error.

Link Hearth

Do I have to download the generated image every time? I can't find an output folder in the INSTID folder.

Link Hearth

Thank you, I realised that all the images created are there. Do you know for how long? I mean, it takes up space, and the images I want to keep are already stored elsewhere. Do I have to delete these files from time to time?

Aitrepreneur

yes you can delete them whenever you want, that's just where they are automatically saved

Anonymous

Launcher.bat is not functioning. When it is launched, a command prompt opens and immediately closes. I have tried with and without arguments.

LtCzr

I am not sure if it's released yet, but their Gradio demo now has a ControlNet slider; are we able to update to this feature?

Aitrepreneur

open a command prompt window and drag and drop the launcher inside then press enter, this should at least give you an error message so that we can understand what's wrong

LtCzr

will it be in a update video or just an announcement?

Aitrepreneur

The Triton error doesn't matter, as I show in my videos, it runs on windows without problems, Triton is just an optimization module

Aitrepreneur

Ok I just tried it on my computer, the newest version is working now and....let's just say that if you had trouble running instantid before, now it's even worse because the new controlnet version uses around 19-20gb of vram, so only people with a 3090 will be able to run this. At this point it's better to use it inside the SD webui.... Let me know if you or someone else wants to use it anyway but yeah...

LtCzr

I would like it, how do i install it? Also i have a rtx 4090 so should be fine

Anonymous

I see there are updates to InstantID at Github. How do I pull them into my Windows installation of it?

Anonymous

I'm getting this error: \INSTANTID-CONTROLNET_AUTO_INSTALL.bat [INFO] Winget is already installed. Installing Python 3.10.6... Failed to download Python installer.

Aitrepreneur

if you already have python installed, try this one:https://we.tl/t-Vc9MFwpdVZ or else make sure you don't have an antivirus/firewall blocking the connection

Anonymous

My system has low graphics RAM and the time to produce photos is very slow for me. Is there a way to generate photos faster?

Aitrepreneur

Can't; it uses more VRAM, so you would need to pay for the Colab subscription. You can use the official Hugging Face demo though, fewer people are using it nowadays so it's quicker. If you want to use the older version, then use the Colab.

Anonymous

I have reinstalled everything 3 times now. At the beginning, when I started the LAUNCHER.bat file, it closed instantly; now I'm at the point where, when I start it, something begins to download and after a while it just crashes. Any suggestions?

Aitrepreneur

I would need to see the full error code. Try opening a random Windows command prompt window, drag and drop the launcher.bat file into the window, and press Enter; if there is an error it should show it to you now.

Anonymous

Been getting errors around Triton not being found. A matching Triton is not available, some optimizations will not be enabled Traceback (most recent call last): File "E:\aiart\InstantID\InstantID-Controlnet\env\Lib\site-packages\xformers\__init__.py", line 55, in _is_triton_available from xformers.triton.softmax import softmax as triton_softmax # noqa ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\aiart\InstantID\InstantID-Controlnet\env\Lib\site-packages\xformers\triton\softmax.py", line 11, in import triton ModuleNotFoundError: No module named 'triton' E:\aiart\InstantID\InstantID-Controlnet\env\Lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.

Anonymous

Where are the most recent installers? Including: INSTANTID-CONTROLNET_AUTO_INSTALL.bat

Anonymous

Is it possible to use the canny and pose options in the version installed in Google Colab? I could not find these options there! Thank you for guiding me.

Coup1

yeah I installed both and got it mixed up. Thanks :D

GrindBird

I have a 3070. When I try to launch the standalone V3 launcher, I get an error that says CUDA out of memory: GPU 0 has a total capacity of 8 GB of which 0 bytes is free. If unallocated memory is large, try setting pytorch_cuda_alloc_conf=expandable_segements:True to avoid fragmentation. Would changing that fry my PC lol? Should I just stick with the V2 standalone or the SD Web UI?

Anonymous

Hello dear, I have a problem. I placed the installation file in a separate folder on my Windows C drive and ran it. In the end the installation folder itself is not large, but it has instead filled a large amount of my C drive. How can I delete the files related to this WebUI that are located on my Windows drive? I want to use the ControlNet launcher that you just released. Is there a way to completely uninstall?

Aitrepreneur

the files installed on the C drive are probably the model, if you want to run the newest controlnet it's gonna be the same model so the install will actually be faster

Marco Santos

Hi Aitrepreneur, I have run the 1-click installer but i am unable to run it i have windows 10, 16gb vram the error is something about triton not having a wheel? it crashes before i can copy it, but this is the initial traceback error before it crashes: A matching Triton is not available, some optimizations will not be enabled Traceback (most recent call last): File "D:\AI\InstantID-Controlnet\env\lib\site-packages\xformers\__init__.py", line 55, in _is_triton_available from xformers.triton.softmax import softmax as triton_softmax # noqa File "D:\AI\InstantID-Controlnet\env\lib\site-packages\xformers\triton\softmax.py", line 11, in import triton ModuleNotFoundError: No module named 'triton' D:\AI\InstantID-Controlnet\env\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead. torch.utils._pytree._register_pytree_node( D:\AI\InstantID-Controlnet\env\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead. torch.utils._pytree._register_pytree_node( D:\AI\InstantID-Controlnet\env\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_5m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_5m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected. return register_model(fn_wrapper) D:\AI\InstantID-Controlnet\env\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_11m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_11m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected. return register_model(fn_wrapper) D:\AI\InstantID-Controlnet\env\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected. return register_model(fn_wrapper) D:\AI\InstantID-Controlnet\env\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_384 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_384. This is because the name being registered conflicts with an existing name. Please check if this is not expected. return register_model(fn_wrapper) D:\AI\InstantID-Controlnet\env\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_512 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_512. This is because the name being registered conflicts with an existing name. Please check if this is not expected. 
return register_model(fn_wrapper) Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}} find model: ./models\antelopev2\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0 Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}} find model: ./models\antelopev2\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0 Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}} find model: ./models\antelopev2\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0 Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}} find model: ./models\antelopev2\glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5 Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}} find model: ./models\antelopev2\scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0 set det-size: (640, 640) And then it crashes. Help would be appreciated. thanks.

Corentin Pajot

Replace the launcher contents by:
CALL env\Scripts\activate
python app.py --model_path RunDiffusion/Juggernaut-XL-v8
REM List of possible arguments
REM --inbrowser : Automatically open the url in browser; if --share is used, the public url will be automatically opened instead
REM --server_port : Choose a specific server port, default=7860 (example --server_port 420 so the local url will be: http://127.0.0.1:420)
REM --share : Creates a public URL
REM --model_path : Name of the sdxl model from huggingface (the default model example: --model_path stablediffusionapi/juggernaut-xl-v8, diffuser model you can find here: https://huggingface.co/stablediffusionapi/juggernaut-xl-v8)
REM --medvram : Medium vram settings, uses around 13GB, max image resolution of 1024
REM --lowvram : Low vram settings, uses a bit less than 13GB, max image resolution of 832

Aitrepreneur

The Triton error is normal on Windows, but other than that I can't see any error. Try opening a command prompt window, drag and drop the installer into it, then press Enter; this will keep the window from closing automatically so you can check what error it gives us for troubleshooting.

Aitrepreneur

Indeed, looks like the other model was nuked from Hugging Face... I updated the launcher with the new address, thanks!

Alpha Ghost 47

The face always seems to be very blurry and poor quality regardless of what I do.

Aitrepreneur

what model are you using? What prompt? Is your reference image of high quality? Have you tried with other images?

Steve Marchi

Hitting the following error when running INSTANTID-CONTROLNET_AUTO_INSTALL.bat Seems as if "https://huggingface.co/api/models/stablediffusionapi/juggernaut-xl-v8" throws an error, but "https://huggingface.co/api/models/stablediffusionapi/juggernaut-xl-v9" is available. Note the 9, could that be the issue? find model: ./models\antelopev2\scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0 set det-size: (640, 640) Couldn't connect to the Hub: 401 Client Error. (Request ID: Root=1-662c28fa-492338a437a03cf175fe1bd5;4c87a40f-ea85-45f8-8089-077422a38f96) Repository Not Found for url: https://huggingface.co/api/models/stablediffusionapi/juggernaut-xl-v8. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password.. Will try to load from local cache. Traceback (most recent call last): File "C:\Users\Steve\InstantID-Controlnet\env\lib\site-packages\huggingface_hub\utils\_errors.py", line 286, in hf_raise_for_status response.raise_for_status() File "C:\Users\Steve\InstantID-Controlnet\env\lib\site-packages\requests\models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/models/stablediffusionapi/juggernaut-xl-v8 The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\Steve\InstantID-Controlnet\env\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1656, in download info = model_info(pretrained_model_name, token=token, revision=revision) File "C:\Users\Steve\InstantID-Controlnet\env\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "C:\Users\Steve\InstantID-Controlnet\env\lib\site-packages\huggingface_hub\hf_api.py", line 2085, in model_info hf_raise_for_status(r) File "C:\Users\Steve\InstantID-Controlnet\env\lib\site-packages\huggingface_hub\utils\_errors.py", line 323, in hf_raise_for_status raise RepositoryNotFoundError(message, response) from e huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-662c28fa-492338a437a03cf175fe1bd5;4c87a40f-ea85-45f8-8089-077422a38f96) Repository Not Found for url: https://huggingface.co/api/models/stablediffusionapi/juggernaut-xl-v8. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\Steve\InstantID-Controlnet\app.py", line 158, in pipe = StableDiffusionXLInstantIDPipeline.from_pretrained( File "C:\Users\Steve\InstantID-Controlnet\env\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "C:\Users\Steve\InstantID-Controlnet\env\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1096, in from_pretrained cached_folder = cls.download( File "C:\Users\Steve\InstantID-Controlnet\env\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "C:\Users\Steve\InstantID-Controlnet\env\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1905, in download raise EnvironmentError( OSError: Cannot load model stablediffusionapi/juggernaut-xl-v8: model is not cached locally and an error occured while trying to fetch metadata from the Hub. Please check out the root cause in the stacktrace above.

Aitrepreneur

Yes, it's normal; you are probably using an older version of the launcher, so dl the new one. The reason is that that particular Hugging Face repo was removed. If you really want to use the V8 you can use the new launcher (dl from here), or just use the V9, it's pretty similar anyway.

Killy_Blame

Well looks like the Launcher isn't functional anymore.

Aitrepreneur

Have you downloaded the new one? The previous Hugging Face repo was removed, which is why I made a new launcher pointing at another repo for the Juggernaut V8.

Glen

I have the same problem... the launcher file doesn't work for me.

Bill G.

Sorry, disregard the last messages. I found the ROCm; I'm removing it and reinstalling Python as you suggested, and I shall see what happens before I bother you again. Thanks for your support.

Bill G.

Okay then, I found the ROCm issue, and everything went smoothly after that. Your 1-clicks are fantastic! Worth every penny you earn. Great job, keep the videos and the work coming. Thanks for your reply, and I look forward to more vids. Thanks again.