1-Click INSTALL KOHYA SS GUI (Patreon)
Hey everyone! I've created a 1-click auto installer for Kohya SS GUI, a tool for training LoRA models for Stable Diffusion. This file automates the entire install process with just 1-CLICK!
Here's how it works:
1. Download the batch file from this Patreon post and put it in an empty folder somewhere on your computer.
2. Double-click the batch file to run it. The file will automatically:
- Install Python and other dependencies IF you don't have them installed already
- Install Git IF you don't have it installed already
- Clone the Kohya SS GUI repository from GitHub
- Launch the Kohya SS GUI installation process
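For the curious, the automation boils down to a couple of "is this tool installed?" checks followed by a git clone. Here's a rough Python sketch of that logic (the actual installer is a Windows batch file; `have_tool`, `build_clone_cmd`, and `install_sketch` are my own illustrative names, though the repository URL is the real public Kohya SS repo):

```python
import shutil
import subprocess

KOHYA_REPO = "https://github.com/bmaltais/kohya_ss"  # public Kohya SS GUI repository

def have_tool(name: str) -> bool:
    """Return True if an executable (e.g. 'git' or 'python') is on PATH."""
    return shutil.which(name) is not None

def build_clone_cmd(repo_url: str = KOHYA_REPO) -> list:
    """The command the installer would run to fetch the GUI."""
    return ["git", "clone", repo_url]

def install_sketch() -> None:
    """Rough outline of what the batch file automates."""
    if not have_tool("python"):
        print("Python missing -- the batch file would install it here")
    if not have_tool("git"):
        print("Git missing -- the batch file would install it here")
    # Clone into the current (ideally empty) folder:
    subprocess.run(build_clone_cmd(), check=True)
```

The batch file does all of this for you, so this is only to show there's no magic involved.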
Once the installation has started, it will ask you a few questions that you will need to answer depending on your situation.
If you are installing locally, here's what you need to input (press Enter to confirm each choice):
Kohya_ss GUI setup menu:
1. Install kohya_ss gui
2. (Optional) Install cuDNN files (avoid unless you really need it)
3. (Optional) Install specific bitsandbytes versions
4. (Optional) Manually configure accelerate
5. (Optional) Start Kohya_ss GUI in browser
6. Quit
Enter your choice: 1
--------
1. Torch 1 (legacy, no longer supported. Will be removed in v21.9.x)
2. Torch 2 (recommended)
3. Cancel
Enter your choice: 2
--------
In which compute environment are you running?
Please select a choice using the arrow or number keys, and selecting with enter
* This machine
AWS (Amazon SageMaker)
--------
Which type of machine are you using?
Please select a choice using the arrow or number keys, and selecting with enter
* No distributed training
multi-CPU
multi-GPU
TPU
--------
Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]: NO
--------
Do you wish to optimize your script with torch dynamo?[yes/NO]: NO
--------
Do you want to use DeepSpeed? [yes/NO]: NO
--------
What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]: all
--------
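Answering `all` uses every GPU. If you ever want only some of them, the answer is a comma-separated list of device ids, e.g. `0,1`. A small hedged sketch of how such an answer can be normalized (`parse_gpu_ids` is my own illustrative helper, not part of accelerate):

```python
def parse_gpu_ids(answer: str):
    """Normalize an answer to the GPU-selection prompt.

    'all' (or an empty answer) keeps every GPU;
    '0,1' selects devices 0 and 1.
    """
    answer = answer.strip().lower()
    if answer in ("", "all"):
        return "all"
    return [int(part) for part in answer.split(",")]
```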
Do you wish to use FP16 or BF16 (mixed precision)?
Please select a choice using the arrow or number keys, and selecting with enter
no
fp16 -----> if your GPU is older than the NVIDIA RTX 3000 series (e.g. 1080, 970)
bf16 -----> if your GPU is an RTX 3000 series or newer (e.g. 3080, 3090, 4090)
fp8
the accelerate configuration saved at C:\Users\USERNAME/.cache\huggingface\accelerate\default_config.yaml
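The fp16/bf16 rule of thumb above comes down to CUDA compute capability: bf16 needs an Ampere GPU (RTX 3000 series, compute capability 8.x) or newer. Here's a hedged sketch of how you could check your own card, assuming PyTorch is installed (which the Kohya setup handles); `choose_precision` and `detect_precision` are my own helper names:

```python
def choose_precision(capability_major: int) -> str:
    """bf16 requires Ampere or newer (compute capability >= 8); otherwise fp16."""
    return "bf16" if capability_major >= 8 else "fp16"

def detect_precision() -> str:
    """Query the local GPU via PyTorch, if available."""
    try:
        import torch  # assumed present after the Kohya install
        if torch.cuda.is_available():
            major, _minor = torch.cuda.get_device_capability(0)
            return choose_precision(major)
    except ImportError:
        pass
    return "no"  # no usable GPU info: fall back to full precision
```

For example, a 3090 reports capability 8.6, so it gets bf16; a 1080 reports 6.1, so it gets fp16.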
Let me know if you run into any issues getting it set up!
And as always, supporting me on Patreon allows me to keep creating helpful resources like this for the AI art community. Thank you for your support - now go train some awesome LoRAs!