
Patreon exclusive posts index to find our scripts easily, Patreon scripts updates history to see which updates arrived in which scripts, and the amazing Patreon special generative scripts list that you can use in any of your tasks.

Join our Discord to get help, chat, and discuss, and also tell me your Discord username to get your special rank : SECourses Discord

Please also Star, Watch, and Fork our Stable Diffusion & Generative AI GitHub repository, join our Reddit subreddit, and follow me on LinkedIn (my real profile)

Massive Full OneTrainer Tutorial : https://youtu.be/0t5l6CP9eBg

16 August 2024

  • Configs updated to v4

  • Missing Offset Noise Weight added

  • Perturbation Noise Weight added and set to 0.1

  • With these changes, the colors after training will now look much more natural instead of over-saturated

  • If you like over-saturated colors, you can set Perturbation Noise Weight and Offset Noise Weight to 0 (see the sketch below)
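If you edit that directly in the config JSON instead of the UI, it would look roughly like the snippet below. This is only a minimal sketch: I am assuming the keys are spelled offset_noise_weight and perturbation_noise_weight as in recent OneTrainer config files, so double-check the names in the v4 JSON you downloaded.

```json
{
  "offset_noise_weight": 0.0,
  "perturbation_noise_weight": 0.0
}
```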

13 August 2024

  • After running 20 new hyperparameter tests and full trainings, I have improved the configuration slightly

  • Training U-NET LR slightly reduced

  • Loss Weight Function updated to Min SNR Gamma

  • You can see the detailed comparison report and investigation in the post below

  • https://www.patreon.com/posts/20-new-sdxl-fine-110052137

  • Download newest V2 configs from attachments

30 March 2024 Update:

All configs updated and the weight decay parameter set in the Adafactor optimizer, since it improves quality

26 March 2024 Update:

22 March 2024 Update:

  • All configs are updated and Stochastic Rounding disabled

  • Stochastic Rounding approximates the effect of FP32 (float) training

  • However, float-like training causes overtraining with our currently best-found hyperparameters

  • Therefore, for now, it is disabled until better hyperparameters are researched and found

  • Quick tutorial on how to use these configs : https://www.youtube.com/watch?v=yPOadldf6bI

9 February 2024 Update:

  • Settings are updated for the latest OneTrainer update

  • You don't need to put optimizer prefs anymore

  • You can open the config JSON files and look inside to understand the logic of how they work

  • You need to change workspace_dir, cache_dir, and output_model_destination, and edit your training concepts (see the example sketch after this list)

  • I set the default model to the SDXL 1.0 base model hosted on Hugging Face

  • You can change the model to any model you want, either from your computer or from a Hugging Face repo

  • Hopefully I will make a full tutorial that includes how to train on Windows and RunPod
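For reference, editing those fields in the config JSON looks roughly like the sketch below. workspace_dir, cache_dir, and output_model_destination are the fields mentioned above; the base_model_name key and all of the example paths are illustrative assumptions, so adapt them to your own setup and to the actual keys in the downloaded preset.

```json
{
  "workspace_dir": "C:/OneTrainerWorkspace/my_training",
  "cache_dir": "C:/OneTrainerWorkspace/cache",
  "base_model_name": "stabilityai/stable-diffusion-xl-base-1.0",
  "output_model_destination": "C:/OneTrainerWorkspace/output/my_model.safetensors"
}
```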

I have been experimenting with Nerogar's OneTrainer for days now.

OneTrainer repo : https://github.com/Nerogar/OneTrainer

We have 3 configs.

The configs will save checkpoints every 25 epochs during training and save a final checkpoint. Don't forget to change that behaviour or fix your final checkpoint path.

Clone the repo into any folder : https://github.com/Nerogar/OneTrainer

Double-click install.bat

Then double-click start-ui.bat to start it after putting the presets into the correct folders

Kohya config : https://www.patreon.com/posts/very-best-for-of-89213064

Load our preset according to your GPU VRAM

In the general tab set your workspace directory and cache directory

In the model tab set your SDXL model path and output destination

In the concepts tab, add your training images and set their training captions. I trained with ohwx man

I don't find sampling useful. It uses a poorly performing sampler by default

In the backup tab you can set your parameters as you wish, e.g. save after every 25 epochs. Saved checkpoints will be inside the workspace/save folder (a hedged JSON sketch follows below)
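If you prefer to set that in the JSON rather than the UI, the relevant part of the preset looks roughly like this. I am assuming the keys are named save_after and save_after_unit; verify against the actual file, since the names may differ between OneTrainer versions.

```json
{
  "save_after": 25,
  "save_after_unit": "EPOCH"
}
```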

As I said, I will hopefully make a big tutorial for OneTrainer

What Each Config Means

tier1_10.4GB_slow_v4.json : This file uses Fused Back Pass and disables Autocast Cache for minimum VRAM usage : 10.4 GB, which is amazing. Same as the best Tier 1 we had previously.

Tier 1 slow runs at 3.72 seconds / it on an RTX 3060 12 GB GPU

Tier 1 slow runs at 1.58 seconds / it on an RTX 3090 Ti 24 GB GPU

tier1_15.4GB_fast_v4.json : This file doesn't use Fused Back Pass and enables Autocast Cache

Tier 1 fast runs at 1.45 seconds / it on an RTX 3090 Ti 24 GB GPU

Gradient Checkpointing does not affect quality; it only makes training slower while using less VRAM (see the sketch below for how these toggles differ between the two configs)
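In other words, the practical difference between the two tier-1 files comes down to a couple of toggles. The snippet below is only an illustrative sketch of the slow 10.4 GB variant; the exact key names and nesting are assumptions on my part, so diff the two v4 presets yourself to see the real fields.

```json
{
  "gradient_checkpointing": true,
  "enable_autocast_cache": false,
  "optimizer": {
    "fused_back_pass": true
  }
}
```

The fast 15.4 GB variant would flip fused_back_pass to false and enable_autocast_cache to true, trading roughly 5 GB of extra VRAM for about 8% faster steps on an RTX 3090 Ti (1.45 vs 1.58 seconds / it).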

I also added a full-resolution comparison.jpg between the default SDXL preset of OneTrainer, my 14_5GB preset, and the best Kohya config ( https://www.patreon.com/posts/89213064 ) that we already use

Training images dataset used : 15 images

This dataset is at best medium quality. You should try to add different backgrounds, clothing, angles, expressions, and distances.

For regularization images, I used the 5200-image dataset that we have. Set repeats to 0.003 so that about 15 reg images (5200 × 0.003 ≈ 15.6) are used in each epoch (see the sketch below) : https://www.patreon.com/posts/massive-4k-woman-87700469
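For clarity, a regularization concept entry with that repeat value might look like the sketch below. The field names (name, path, repeats) and the path are illustrative assumptions, not the exact OneTrainer concepts schema, so mirror whatever the concepts tab actually writes out.

```json
{
  "name": "regularization",
  "path": "C:/datasets/reg_images_5200",
  "repeats": 0.003
}
```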



Comments

Hassan Alhassan

My model is getting overtrained quickly. I used the 18 GB and 14.5 GB configs; both are getting overtrained

chriseppe

the results seem promising. Would love to see if it looks even more realistic without a surreal background. Did you simply generate the images without using Adetailer on top?

Anonymous

Regardless of which Tier presets I use I get the CUDA out of memory error. I have a 16GB card and have ran other trainings with different settings just fine before. What setting in the presets may be doing this? I'm mainly trying the 13GB one.

Furkan Gözükara

Can you verify your VRAM usage before trying the preset? Also, are you using the latest version of OneTrainer? If you message me on Discord, I can connect via AnyDesk and check it out

Anonymous

How do I verify my VRAM usage? As far as I know it's the latest but I'll check. I'm on the discord as OhTheHueManatee.

Javi dltr

how do we use captions here for reg and for data?

Javi dltr

in sdxl config 1 ema is off but decay is 0.999?

John Dopamine

There's a new branch (just about merged) w a new "fused base" feature that allows for a significant decrease in the amount of VRam needed to train. If you haven't seen this yet, keep an eye out. With 24gb you can now train w float32, or large batch sizes like 14. It also should allow people who have cards under 16gb train now also. Mentioning this after seeing you updated after reviewing stochastic rounding. This discovery kinda came from that (which came due to Cascade not training at all). Perfect timing w SD3 right around the corner (please let them not backtrack).

Karl B

I have a question, on the Onetrainer wiki it says if "latent caching" is on as it is in your file, then you need to use more than 1 on the "image variants" setting under concepts. I have about 50 photos ready to train, Onetrainer doesn't really explain what do set that number to. Any ideas?

George Gostyshev

That's really great work done! Appreciated) OneTrainer big tutorials could be nice. Also, can I ask\suggest to sometimes add not so big videos? Like videos about some small but useful technics. Anyway thanks again)

Marian Ban

Hi, what are the correct settings for network rank and dimension? These are not included in the json files. I'm asking about the Kohya settings. Thanks

Furkan Gözükara

Hi, that is LoRA. I don't suggest LoRA; these are fine-tuning configs : https://www.linkedin.com/posts/furkangozukara_why-i-dont-research-lora-training-because-activity-7164700874097856512--Ecj/?utm_source=share&utm_medium=member_desktop

mike oxmaul

Why not use EMA? Previous post said using EMA made this better than Kohya.

Tom Bloomingdale

Hello - this is working in that it is training and finishing. The results are fairly consistent and look like the same person (kind of) but nothing like the me (the subject). I have tried a few different models as a base, and gone through the settings as you describe in your video/writings several times. Any idea why I am not getting results as expected? This is SDXL, the tier1_10.4GB_slow settings, 200 epochs.

Furkan Gözükara

Are you using ADetailer to fix the face after generation? That is super important. Please let me know after trying

Pew

Hey Doc, do you know if these new settings would apply to finetuning a Pony model or does that require a different approach than what your scripts and training entails?

Furkan Gözükara

Good question. Sadly I didn't test on Pony, and as far as I know it is significantly different from other models. You can try. Also, I am testing 2 more new settings right now, so the config may get updated and become better

Neto Leutwiler

Hi Dr Furkan, thanks a lot for your updates. Can we apply this best configs on kohya as well? If its possible we use this file "tier1_15.4GB_fast_v4" or the other one of kohya "Tier1_48_GB_Faster" with some modifications?