

Join Discord and tell me your Discord username to get a special rank: SECourses Discord

Patreon exclusive posts index

6 June 2024 Update

  • Updated to Torch 2.3.1 and xFormers 0.0.26

  • Works flawlessly and is very fast

  • A new IP Adapter SDXL model has been added to the ControlNet downloads so you can do style transfer

  • The TensorRT repo has been updated and works perfectly with LoRAs as well

  • If you get the AttributeError: 'NoneType' object has no attribute 'lowvram' error when changing models, try changing the model a few times and it gets fixed automatically

  • This error is a bug in the latest Automatic1111 version

  • The install_latest_auto_1111.sh file will install the latest versions of the following extensions:

  • sd-webui-reactor (Reactor), adetailer (After Detailer), sd-webui-controlnet (ControlNet) and Deforum

  • Tutorial on how to run the scripts: https://youtu.be/8Qf4x3-DFf4

  • Full tutorial on how to download models from CivitAI or Hugging Face, or upload them: https://youtu.be/X5WVZ0NMaTg

  • If you are going to use TensorRT, first go to Settings > User Interface > Quicksettings list, enable sd_unet, and reload the UI before generating a TensorRT model (see also the optional check after this list): sd_unet_enable_TensorRT.png

  • TensorRT example config : tensor_RT_settings.png
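
If you want to double-check the quicksettings change from the terminal, the Web UI stores its settings in config.json. This check is my own suggestion, and the exact key names can vary between Automatic1111 versions:

  • grep -n "sd_unet" /workspace/stable-diffusion-webui/config.json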

Latest installer files : runpod_auto1111_v19.zip

Download the zip file and extract it into any folder.

These scripts are prepared for RunPod, but they will work on any Linux system, local or cloud. Just adjust the paths accordingly.

The install commands and full instructions are provided in the runpod_instructions_READ.txt file inside the attached zip.

I would appreciate it if you register for RunPod with my link: https://runpod.io?ref=1aka98lq

To log in to your pod you can use this link: https://runpod.io?ref=1aka98lq

If you don't know how to set up a Pod, here is a written tutorial:

Full ControlNet Tutorial : https://youtu.be/3E5fhFQUVLo

How To Install On RunPod

  • Select your GPU. I think the best price/performance is the RTX 3090

  • Select RunPod template runpod/stable-diffusion:web-ui-10.2.1

  • You can use either community cloud, private cloud, or network storage

  • Customize deployment

  • Make the container disk 20 GB and the volume disk 150 GB - or any size you want according to your needs

  • Wait until you can click Connect, then click Connect to Jupyter Lab [Port 8888]

  • Go into the stable-diffusion-webui folder

  • Upload relauncher.py and overwrite the existing file, then restart your pod. This is mandatory only once.

  • This is important to kill the auto-started Automatic1111 Web UI instance and start our own from the terminal.

  • Then upload everything into the workspace folder.

  • If you want to install and download everything, run the commands below

The install-all command is the suggested one:

  • export HF_HOME="/workspace"

  • cd /workspace

  • chmod +x install_all.sh

  • ./install_all.sh
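
A note on the first command: export HF_HOME="/workspace" points the Hugging Face cache at the large /workspace volume instead of the small container disk, but it only applies to the current terminal. If you open a new terminal later, run the export again, or optionally persist it (my own convenience, not part of the scripts):

  • echo 'export HF_HOME="/workspace"' >> ~/.bashrc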

After installation, to re-run the Web UI execute the commands below:

  • fuser -k 3000/tcp

  • cd /workspace/stable-diffusion-webui

  • python relauncher.py
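
For reference, fuser -k 3000/tcp kills whatever process is currently listening on TCP port 3000 (the auto-started Web UI from the template) so relauncher.py can take the port over. If you want the relauncher to keep running after you close the Jupyter Lab terminal, an optional variant (my own convenience, not part of the zip) is to launch it in the background and follow its log:

  • fuser -k 3000/tcp

  • cd /workspace/stable-diffusion-webui

  • nohup python relauncher.py > /workspace/relauncher.log 2>&1 &

  • tail -f /workspace/relauncher.log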

If you use the commands above and install all, you don't need any of the ones below, since everything will be installed.

We also have separate installers:

  • The first one is install_latest_auto_1111.sh

  • This file will update Automatic1111 to the latest version and its ControlNet extension to the latest version, including the insightface library - but it will not download the ControlNet models yet

  • Then it will automatically start the latest version of the Automatic1111 SD Web UI with the latest version of ControlNet and the following extensions:

  • sd-webui-reactor (Reactor), adetailer (After Detailer), sd-webui-controlnet (ControlNet) and Deforum

  • This file will also download the 275 amazing Fooocus styles CSV that you can use in the Automatic1111 SD Web UI

To run it, copy and paste the commands below:

  • export HF_HOME="/workspace"

  • cd /workspace

  • chmod +x install_latest_auto_1111.sh

  • ./install_latest_auto_1111.sh
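
After the script finishes, you can optionally confirm that the Web UI and the ControlNet extension are on their latest commits. This check assumes the standard folder layout used by these scripts:

  • cd /workspace/stable-diffusion-webui && git log -1 --oneline

  • cd /workspace/stable-diffusion-webui/extensions/sd-webui-controlnet && git log -1 --oneline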


If you want to install only the TensorRT extension

The TensorRT tutorial is here: https://youtu.be/kvxX6NrPtEk

And another TensorRT tutorial: https://youtu.be/eKnMVXVjVoU

Commands to execute for the install:

  • export HF_HOME="/workspace"

  • cd /workspace

  • chmod +x install_tensorRT.sh

  • ./install_tensorRT.sh

Let's say you want to download all ControlNet models.

Then you need to run the commands below:

  • export HF_HOME="/workspace"

  • cd /workspace

  • python control_net_downloader.py

  • python download_ip_adapter_and_instantid.py

The commands above will download a huge amount of models, 40 GB+.

  • The control_net_downloader.py file will download all of the available ControlNet models that I know of, a total of 52.

  • The downloader will place them into the correct folder with the correct naming (see the example after this list).

  • If you find new ones let me know

  • python download_ip_adapter_and_instantid.py will download ip-adapter-faceid-plusv2_sd15_lora, ip-adapter-faceid-plusv2_sdxl_lora, ip-adapter_instant_id_sdxl, and control_instant_id_sdxl into the correct folders with the correct naming. Correct naming is crucial.
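
As an illustration of what the correct folder and naming look like, downloading a single ControlNet model by hand would be done like this (example model and URL only; the scripts handle all of this for you, and the extension also accepts models placed in its own extensions/sd-webui-controlnet/models folder):

  • cd /workspace/stable-diffusion-webui/models/ControlNet

  • wget -O control_v11p_sd15_canny.pth "https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny.pth"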

I will hopefully update the scripts if there are new model releases.

Using ip-adapter-faceid and instant_id_sdxl is not straightforward.

However, I find that both of these models work very poorly compared to my direct implementation of their pipelines in a single Gradio app with a 1-click installer.

You can download them from

1 Click Automatic Installer & A Very Advanced GUI For InstantId For Windows, RunPod, Linux & Kaggle Notebook:

IP-Adapter-FaceID-PlusV2 - 0 Shot Face Transfer - Auto Installer & Gradio App

Both of my apps have instructions and installers for Windows, RunPod (Linux) & Kaggle notebooks.

After this operation I suggest you restart your Web UI with the following commands:

  • fuser -k 3000/tcp

  • cd /workspace/stable-diffusion-webui

  • python relauncher.py

You can also run the download_models.py file.

This file will download the following models

  • Realistic_Vision_V6 (SD 1.5 based model - one of the very best)

  • sdxl-vae-fp16-fix - the fixed SDXL VAE in FP16, so you can run Automatic1111 in FP16

  • RealVisXL_V4.0 - one of the most realistic SDXL models - training works fine

  • Hyper_Realism_V3 - the best SD 1.5 realism model that I have found
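
For reference, on a standard Automatic1111 layout the checkpoints above land in models/Stable-diffusion and the VAE in models/VAE, so you can verify the downloads with:

  • ls -lh /workspace/stable-diffusion-webui/models/Stable-diffusion

  • ls -lh /workspace/stable-diffusion-webui/models/VAE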

To download any custom model you want, execute the command below:

  • cd /workspace/stable-diffusion-webui/models/Stable-diffusion

Then use the format below:

  • wget "download_link_of_model"
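
For example, to fetch a checkpoint directly from Hugging Face (illustrative URL; -O keeps a clean file name, and note that CivitAI download links may additionally require your API token appended to the URL):

  • wget -O RealVisXL_V4.0.safetensors "https://huggingface.co/SG161222/RealVisXL_V4.0/resolve/main/RealVisXL_V4.0.safetensors"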




Comments

Anonymous

Hello. Thanks for the guides. Is there any way to use the refiner in A1111 currently?

Anonymous

Hello. Can I use LoRAs and extensions from SD on SDXL?

Furkan Gözükara

LoRAs of SD 1.5 are not compatible with SDXL. You can use the majority of the extensions, but ControlNet is also not compatible.

Rafał Ryniak

started, offline work was selected in the settings, again reply from space ;)))

Anonymous

It's great to see you here. I learned a lot of the skills I need to create AI-related content from your videos, so I apologize for not subscribing sooner. Good luck with your future endeavors.

Furkan Gözükara

Thank you so much. I am so sorry for the late reply. I was sick yesterday and couldn't check my computer. Your support is tremendously helping me.

Anonymous

Hello, reference_only is not working for SDXL. Do you know any other alternative I can use?

Furkan Gözükara

The ControlNet extension developer keeps updating the extension. I think it will come soon. Also, there are a lot of new models I haven't tested yet. I updated this thread: https://www.patreon.com/posts/84896373 - now it installs a total of 48 models and all of the newest SDXL models. Give them a try.

Anonymous

Hi! Can you give me advice about running TensorRT with a custom LoRA? Loading a model in the TensorRT LoRA tab seems to have no effect, and the stylization disappears while using it. I'm developing an indie product and TensorRT can really lower my budget at large scale. I would appreciate it if you could give me a hint :)

Furkan Gözükara

Yes, you can also generate a TensorRT engine for a LoRA. It's explained in this video - have you seen it? https://youtu.be/kvxX6NrPtEk

Shavel

Hello, my RunPod pod has worked for weeks, and suddenly it has been giving me this error for 2 days. It also happens when creating new pods.

2023-11-18T17:48:54Z create pod network
2023-11-18T17:48:54Z create container runpod/stable-diffusion:web-ui-10.2.1
2023-11-18T17:49:02Z pending image pull runpod/stable-diffusion:web-ui-10.2.1
2023-11-18T17:49:09Z error pulling image: Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 127.0.0.53:53: read udp 127.0.0.1:43926->127.0.0.53:53: i/o timeout

Have you seen this before?

Nenad Kuzmanovic

You are very dedicated to your work... I really appreciate and respect that. I think you are by far the best AI content tutorial creator on the internet right now. Don't change anything in your work... I learned a lot from you and our early Patreon hangout we had when I was at the beginning of this journey. Since then I have learned a lot, especially training models that are not the face of a real person. Right now I'm blown away by the results I'm getting in Kohya training only the OUT blocks in LoRA. I assure you that IS THE ONLY WAY to properly teach a style... There is no other way, because every single tutorial on YT that explains how to make a style model is WRONG. Their way, you just get a concept and not a style... A whole unexplored universe is hidden behind those settings. I feel like I'm just starting to learn latent diffusion. If you want, I can send you the results of those trainings, with dataset images, etc... As a sign of gratitude, because you were my first teacher.

Rylan Czuczman

Hey thanks for these details - is it possible to use ReActor with this installation please? Struggling to find anything online related to RunPod

Arcon Septim

Can you please explain how to do a clean reinstall so we don't lose model data?

Anonymous

Hi, I'm reaching out to seek assistance with a persistent error I've been encountering with a CUDA-related process. Here is the error message:

Traceback (most recent call last):
  File "/workspace/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/workspace/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/workspace/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/workspace/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/workspace/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/workspace/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/workspace/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/workspace/stable-diffusion-webui/modules/call_queue.py", line 77, in f
    devices.torch_gc()
  File "/workspace/stable-diffusion-webui/modules/devices.py", line 61, in torch_gc
    torch.cuda.empty_cache()
  File "/workspace/venv/lib/python3.10/site-packages/torch/cuda/memory.py", line 133, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Anonymous

And this error:

*** Error running process_batch: /workspace/stable-diffusion-webui/extensions/Stable-Diffusion-WebUI-TensorRT/scripts/trt.py
Traceback (most recent call last):
  File "/workspace/stable-diffusion-webui/modules/scripts.py", line 742, in process_batch
    script.process_batch(p, *script_args, **kwargs)
  File "/workspace/stable-diffusion-webui/extensions/Stable-Diffusion-WebUI-TensorRT/scripts/trt.py", line 302, in process_batch
    if self.idx != sd_unet.current_unet.profile_idx:
AttributeError: 'NoneType' object has no attribute 'profile_idx'

Dmitry

Hello, thank you very much for your amazing work helping to solve a lot of problems. I used the installation through your script and everything works fine, but recently an update for ControlNet was released and instant_ID appeared in it: https://www.youtube.com/watch?v=pd4EY5udcF8 It allows you to create art based on photos that is not inferior in quality to a LoRA. But instant_id doesn't work. Could you please see what the problem might be and how it can be solved?

Furkan Gözükara

For InstantID I am making a standalone installer right now. The ControlNet extension is still not working perfectly for it.

C. Jonas

Hi mate, do you plan to update the Windows version as well? Maybe there is also something new to try?

Dmitry

Hello, you wrote in the instructions that if I run the commands "export HF_HOME="/workspace", cd /workspace, chmod +x install_all.sh, ./install_all.sh" then everything that is needed will be installed, but I did not find a line in the script that would run the file "install_latest_auto_1111.sh". Is this how it should be, and is the code for the Automatic1111 update in the .py files? I have been getting acquainted with the code since I started watching your videos, so don't judge strictly if I don't understand something :)

natalate_art

Hello, unfortunately I have an error with the checkpoint window and VAE window (they do not update when I make checkpoint/VAE changes), and I also have the lowvram error, and it does not go away when I reload, etc.

Dmitry

Hello, something happened. After updating A1111 (I used only the script from the file "install_latest_auto_1111.sh"), A1111 stopped running. Please see what this might be related to. Perhaps you should make a script that installs A1111 from the repository and creates its own venv, rather than using the one that was in the template? UPD: Looks like I solved the problem. I added "pip install -r requirements.txt" to the file "install_latest_auto_1111.sh" after line 44.

Furkan Gözükara

Great. Normally Auto1111 should automatically update dependencies, but if you had --skip-install enabled (it comes by default), that must be the reason.