
Downloads

Content

Here you will find 6 optimized training presets for LORA training inside Kohya SS, whether you're training locally or using Runpod.

What's inside?

3 Local Training Presets: Designed for those who love to train on their own systems.

  • Normal Training: For your everyday needs.
  • Style Training: Unleash your creativity and give your models a unique flair.
  • LowVRAM: Optimized for local systems with limited VRAM.

3 Runpod Presets: Perfect for those using Runpod's GPU rental platform. These presets automatically point to the location of the Stable Diffusion XL model for you (see the sketch after this list).

  • Normal Training: The classic training preset.
  • Style Training: Let your models stand out with a distinctive style.
  • Small LORA Files: Ideal for those working with smaller LORA file sizes.
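The presets themselves are Kohya SS GUI configuration files. As a rough idea of what such a file contains, here is a minimal Python sketch that loads one and prints a few common hyperparameters; the file name and the exact key names are assumptions for illustration, so check the preset you actually downloaded.

    import json

    # Hypothetical file name -- replace with the actual preset file you downloaded.
    PRESET_PATH = "Normal_Training_SDXL.json"

    with open(PRESET_PATH, "r", encoding="utf-8") as f:
        preset = json.load(f)

    # The key names below are illustrative examples of fields found in
    # Kohya SS GUI configs, not a guaranteed list of what these presets contain.
    for key in ("pretrained_model_name_or_path", "learning_rate",
                "network_dim", "network_alpha", "train_batch_size"):
        print(key, "=", preset.get(key, "<not set in this preset>"))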

Why These Presets?

For LORA training, every little optimization can make a huge difference. These presets are the culmination of countless hours of experimentation and refinement. By using them, you're ensuring that your LORA training is both efficient and effective, regardless of where or what you're training.


And as always, supporting me on Patreon allows me to keep creating helpful resources like this for the AI art community. Thank you for your support - now go train some awesome LORAs!

Files

Comments

Anonymous

New to this: I have been following these great tutorials and I am training on 20 images (as recommended) on my RTX 4070, and it is estimating 104 hours with the settings from the video. What have I done that has made this process so slow? Thank you for your help.

Aitrepreneur

That's a bit slow indeed. What total step count did it give you, and how many it/s?
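For reference, the time estimate follows directly from those two numbers: hours ≈ total steps / (it/s × 3600). A minimal sketch with placeholder values (the numbers below are assumptions, not this commenter's actual readings):

    # Rough ETA from the total step count and the iteration speed.
    # Both values are placeholders -- plug in the numbers Kohya reports.
    total_steps = 10_000      # total step count printed before training starts
    it_per_second = 0.5       # "it/s" shown in the progress bar

    eta_hours = total_steps / (it_per_second * 3600)
    print(f"Estimated training time: {eta_hours:.1f} h")  # about 5.6 h for these numbers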

Anonymous

Kohya buttons not working on Mac. I thought it was because I needed the configuration file, so I subscribed to your Patreon to download the files, but the open button doesn't work at all... I tried some solutions from Reddit, but the venv folder doesn't contain the Scripts folder. How do I fix it?

Anonymous

I'm using your SDXL standard presets to try and train a likeness. I'm running on a 4090, and with your settings it's running about 10k steps and looks to be about 5 hours of training time. 4/batch, 41 images/100 steps, 10 epochs (no regs)... does this sound right for likeness training in SDXL (using the base model)? Thanks

Aitrepreneur

Sounds about right; it depends on the GPU. You can also try reverting to an older Nvidia driver version (like the 531 drivers), which might give you even more speed.
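For reference, the ~10k figure matches the usual step arithmetic, assuming the "100" in the comment above is the per-image repeat count set in the dataset folder name:

    # Sanity check of the reported step count.
    images = 41
    repeats = 100        # assumed per-image repeats (the "100" mentioned above)
    epochs = 10
    batch_size = 4

    steps_per_epoch = images * repeats // batch_size   # 1025
    total_steps = steps_per_epoch * epochs             # 10250, i.e. "about 10k steps"
    print(total_steps)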

Anonymous

Hello! First of all, thanks a lot for this collection of tutorials and tools; this is the most worthwhile Patreon I've ever joined. I've been training several LORAs with really good results, but even with a beast of a computer it seems very slow and only uses about 20% of the memory. I'd like to speed it up but I don't know how. Thanks in advance.

Aitrepreneur

You can try downloading the latest Nvidia GPU drivers and using this trick to make AI apps use only your VRAM by disabling the system memory fallback: https://nvidia.custhelp.com/app/answers/detail/a_id/5490/~/system-memory-fallback-for-stable-diffusion

Anonymous

Hi, I get the following error when I run the LoRA training on the Runpod:

2023-12-20 11:37:19.448110: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-12-20 11:37:19.448272: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-12-20 11:37:19.448407: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-12-20 11:37:19.455528: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-12-20 11:37:24.198671: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

Anonymous

Does LowVRAM work for older cards like a 1080 with 8 GB of VRAM?

Aitrepreneur

It can work if you use the latest Nvidia drivers, but the training will be extremely long since it will use your RAM to make up for the lack of VRAM.

Anonymous

Hey, thank you for all the content you are creating. I have an RTX 3060 12 GB. I have been trying to train, but the amount of time it takes is too much. Do you have any preset suggestions for this kind of hardware? Thank you!

Aitrepreneur

If it takes too long, it's probably because of the Nvidia drivers. Update them to the latest version and disable the system memory fallback: https://nvidia.custhelp.com/app/answers/detail/a_id/5490/~/system-memory-fallback-for-stable-diffusion Then, for your GPU, use the LowVRAM preset.

Dart Photography UK

Do you have any presets for Lora training with SD 1.5?

Aitrepreneur

Use the same presets; just uncheck the SDXL checkbox and change the resolution to 512,512 instead of 1024,1024.
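If you would rather edit a preset file directly than click through the GUI, here is a minimal Python sketch of that change; the key names "sdxl" and "max_resolution" are assumptions based on typical Kohya SS GUI configs, so check your preset file for the real names.

    import json

    # Hypothetical file names -- point these at your actual preset files.
    SRC = "Normal_Training_SDXL.json"
    DST = "Normal_Training_SD15.json"

    with open(SRC, "r", encoding="utf-8") as f:
        preset = json.load(f)

    # Assumed key names: turn off the SDXL toggle and drop the resolution to 512x512.
    preset["sdxl"] = False
    preset["max_resolution"] = "512,512"

    with open(DST, "w", encoding="utf-8") as f:
        json.dump(preset, f, indent=2)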

Sebastian Paulmann

I'm not sure which one to take. I've got a pretty beefy laptop - 3080 Ti, 16 GB VRAM, 64 GB RAM, Ryzen 9 - I'll just try SDXL.