OneTrainer Stable Diffusion XL (SDXL) Fine Tuning Best Presets (Patreon)
Patreon exclusive posts index to find our scripts easily, Patreon scripts updates history to see which updates arrived in which scripts, and the amazing Patreon special generative scripts list that you can use for any of your tasks.
Join our Discord to get help, chat and discuss, and also tell me your Discord username to get your special rank : SECourses Discord
Please also Star, Watch and Fork our Stable Diffusion & Generative AI GitHub repository and join our Reddit subreddit and follow me on LinkedIn (my real profile)
Massive Full OneTrainer Tutorial : https://youtu.be/0t5l6CP9eBg
16 August 2024
Configs updated to v4
Missing Offset Noise Weight added
Perturbation Noise Weight added and set as 0.1
With these changes, the colors will now look much more natural after training instead of over-saturated
If you prefer over-saturated colors, you can set Perturbation Noise Weight and Offset Noise Weight to 0
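For reference, the over-saturated variant described above corresponds to a preset fragment roughly like this (the snake_case key names are my assumption of how the OneTrainer preset JSON spells these settings; check your downloaded v4 file for the exact names):

```json
{
    "offset_noise_weight": 0.0,
    "perturbation_noise_weight": 0.0
}
```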
13 August 2024
After 20 new hyperparameter tests and full trainings, I have improved the configuration slightly
Training U-NET LR slightly reduced
Loss Weight Function updated to Min SNR Gamma
You can see the detailed comparison report and investigation in the post below
Download newest V2 configs from attachments
30 March 2024 Update:
All configs updated; the weight decay parameter is now set in the Adafactor optimizer since it improves quality
26 March 2024 Update:
2 new presets added and the older ones removed
The quality of the new presets has been tested and they are both Tier 1
The entire post has been updated and simplified
You can see a comparison thread for Tier 1 slow vs Fast : https://www.reddit.com/r/StableDiffusion/comments/1bnyokv/now_you_can_full_fine_tune_dreambooth_stable/
New presets save a checkpoint every 25 epochs with a prefix naming scheme
New presets train for 200 epochs
How to use these configs quick tutorial : https://www.youtube.com/watch?v=yPOadldf6bI
22 March 2024 Update:
All configs are updated and Stochastic Rounding is disabled
Stochastic Rounding approximates the effect of FP32 (float) training
However, float-like training causes overtraining with our current best hyperparameters
Therefore, for now, it is disabled until better hyperparameters are researched and found
How to use these configs quick tutorial : https://www.youtube.com/watch?v=yPOadldf6bI
9 February 2024 Update:
Settings are updated for the latest OneTrainer update
You don't need to set optimizer preferences anymore
You can open the config JSON files and look inside to understand how they work
You need to change workspace_dir, cache_dir and output_model_destination, and edit your training concepts
I set the default model to the SDXL 1.0 base model hosted on Hugging Face
You can change it to any model you want, from your computer or from a Hugging Face repo
Hopefully I will make a full tutorial that includes how to train on Windows and RunPod
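As a sketch of the config-editing step, the three path fields can also be changed programmatically instead of by hand. The key names workspace_dir, cache_dir and output_model_destination come from the post; the helper function and the example paths are hypothetical:

```python
import json

def set_paths(config: dict, workspace: str, cache: str, output: str) -> dict:
    """Fill in the three path fields that must be changed in each preset."""
    config["workspace_dir"] = workspace
    config["cache_dir"] = cache
    config["output_model_destination"] = output
    return config

# Minimal stand-in for a downloaded preset (the real files have many more keys).
preset = {"workspace_dir": "", "cache_dir": "", "output_model_destination": ""}
preset = set_paths(
    preset,
    "C:/train/workspace",                       # hypothetical path
    "C:/train/cache",                           # hypothetical path
    "C:/train/output/ohwx_man.safetensors",     # hypothetical path
)
print(json.dumps(preset, indent=4))
```

In practice you would `json.loads` the downloaded preset file, apply the same three assignments, and write it back.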
I have been experimenting with Nerogar's OneTrainer for days now.
OneTrainer repo : https://github.com/Nerogar/OneTrainer
We have 3 configs.
The configs will save checkpoints every 25 epochs during training and save a final checkpoint. Don't forget to change that behaviour or fix your final checkpoint path.
Clone the repo into any folder : https://github.com/Nerogar/OneTrainer
Double click install.bat
Then, after putting the presets into the correct folder, double click start-ui.bat to start it
Kohya config : https://www.patreon.com/posts/very-best-for-of-89213064
Download tier1_10.4GB_slow_v4.json (uses 10.4 GB VRAM) and tier1_15.4GB_fast_v4.json (uses 15.4 GB VRAM) into the training_presets folder
Watch how to use concepts tutorial : https://www.youtube.com/watch?v=yPOadldf6bI
Don't forget to use 1024x1024 pixel resolution training and regularization images for SDXL training
Load our preset according to your GPU VRAM
In the general tab, set your workspace directory and cache directory
In the model tab, set your SDXL model path and output destination
In the concepts tab, add your training images and set their training captions. I trained with ohwx man
I don't find sampling useful; it uses a poorly performing sampler by default
In the backup tab you can set your parameters as you wish, e.g. save after every 25 epochs. Saved checkpoints will be inside the workspace/save folder
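A quick sanity check of the save cadence with the preset defaults (train 200 epochs, save every 25):

```python
# Preset defaults from this post: train 200 epochs, save every 25 epochs.
epochs = 200
save_every = 25

intermediate_checkpoints = epochs // save_every  # checkpoints written during training
print(intermediate_checkpoints)  # 8, plus the final checkpoint at the end
```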
As I said, I will hopefully make a big tutorial for OneTrainer
What Each Config Means
tier1_10.4GB_slow_v4.json : This file uses Fused Back Pass and disables Autocast Cache for a minimum VRAM usage of 10.4 GB, which is amazing. Same as the best Tier 1 we had previously.
Tier 1 slow is 3.72 seconds / it on an RTX 3060 - 12 GB GPU
Tier 1 slow is 1.58 seconds / it on an RTX 3090 TI - 24 GB GPU
tier1_15.4GB_fast_v4.json : This file doesn't use Fused Back Pass and enables Autocast Cache
Tier 1 fast is 1.45 seconds / it on an RTX 3090 TI - 24 GB GPU
Gradient Checkpointing does not affect quality; it only makes training slower while using less VRAM
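From the speeds above you can roughly estimate wall-clock training time. This sketch assumes batch size 1 and 30 steps per epoch (15 training images plus 15 regularization images per epoch, as in my setup); adjust the numbers for your own dataset and batch size:

```python
def training_hours(seconds_per_it: float, steps_per_epoch: int, epochs: int) -> float:
    """Total wall-clock hours for a run at a fixed seconds-per-iteration speed."""
    return seconds_per_it * steps_per_epoch * epochs / 3600

# Assumptions: batch size 1, 15 training + 15 reg images -> 30 steps per epoch,
# and the 200-epoch schedule used by these presets.
steps_per_epoch = 30
epochs = 200

print(round(training_hours(3.72, steps_per_epoch, epochs), 1))  # Tier 1 slow, RTX 3060
print(round(training_hours(1.45, steps_per_epoch, epochs), 1))  # Tier 1 fast, RTX 3090 TI
```

Under these assumptions the slow preset on an RTX 3060 lands at roughly 6 hours for a full 200-epoch run.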
I also added a full resolution comparison.jpg between the default SDXL preset of OneTrainer, my 14_5GB preset, and the best Kohya config ( https://www.patreon.com/posts/89213064 ) that we already use
Used training images dataset - 15 images
This dataset is at best medium quality. You should try to add different backgrounds, different clothing, different angles, different expressions and different distances.
For regularization images, I used the 5200-image dataset that we have. Set repeating to 0.003 so ~15 reg images are used in each epoch : https://www.patreon.com/posts/massive-4k-woman-87700469
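The repeating arithmetic above works out as follows (a sketch; the exact per-epoch count depends on how OneTrainer rounds the sampled subset):

```python
# 5200 regularization images with repeating = 0.003
reg_dataset_size = 5200
repeating = 0.003

per_epoch = reg_dataset_size * repeating
print(round(per_epoch, 1))  # 15.6 -> roughly 15 reg images per epoch
```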