
Patreon exclusive posts index

Join Discord and tell me your Discord username to get a special rank: SECourses Discord

30 March 2024 Update:

28 March 2024 Update:

22 March 2024 Update:

  • All configs are updated and Stochastic Rounding is disabled

  • Stochastic Rounding approximates the effect of full FP32 (float) training

  • However, float-like training causes overtraining with our current best hyperparameters

  • Therefore, it is disabled for now, until better hyperparameters are researched and found

  • Quick tutorial on how to use these configs: https://www.youtube.com/watch?v=yPOadldf6bI

9 February 2024 Update:

  • Settings are updated for the latest OneTrainer update

  • You no longer need to set optimizer preferences manually

  • You can open the config JSON files and look inside to understand how they work

  • You need to change workspace_dir, cache_dir, and output_model_destination, and edit your training concepts

  • I set the default model to the most realistic SD 1.5 model, Hyper Realism V3, hosted on Hugging Face (MonsterMMORPG/sd15_best_realism)

  • You can change the model to any model you want, either from your computer or from a Hugging Face repo

  • Hopefully I will make a full tutorial that covers how to train on both Windows and RunPod
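The config keys mentioned above can be edited in any text editor, or programmatically. Here is a minimal Python sketch; the key names are taken from this post, but the file path, the stand-in starter values, and the base-model key name are illustrative assumptions — verify them against your actual config JSON:

```python
import json
import os
import tempfile

# Values you want to set. Key names come from this post; the base model can be
# a local path or a Hugging Face repo id. Paths here are hypothetical examples.
updates = {
    "workspace_dir": "C:/OneTrainer/workspace",
    "cache_dir": "C:/OneTrainer/cache",
    "output_model_destination": "C:/OneTrainer/output/my_model.safetensors",
    "base_model_name": "MonsterMMORPG/sd15_best_realism",  # assumed key name
}

# Stand-in for an exported OneTrainer config; real files hold many more keys.
config_path = os.path.join(tempfile.gettempdir(), "onetrainer_config.json")
with open(config_path, "w") as f:
    json.dump({"base_model_name": ""}, f)

# Load, update only the keys we care about, and write back.
with open(config_path) as f:
    config = json.load(f)
config.update(updates)

with open(config_path, "w") as f:
    json.dump(config, f, indent=4)
```

The same edits can of course be made by hand; the point is only that these are ordinary JSON keys, so you can script per-project variants of one base config.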

Clone the repo into any folder: https://github.com/Nerogar/OneTrainer

Double click install.bat

Then double click start-ui.bat to start the UI

I have made over 70 full DreamBooth trainings over more than 7 days and meticulously analyzed their results to find the very best training hyperparameters.

120 amazing-quality images with their prompt info are posted on CivitAI

We have 3 configs.

The configs will not save intermediate checkpoints during training; they only save a final checkpoint. Don't forget to change that behaviour if you want intermediate checkpoints, and make sure your final checkpoint path is set correctly.
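A common pitfall with that final checkpoint path (also reported in the comments below) is giving a bare directory: the final save needs a full file path, including the model file name. A small sanity-check sketch, assuming the output_model_destination key from this post:

```python
import os


def has_model_filename(dest: str) -> bool:
    """Return True if dest ends in a file name rather than a bare directory.

    The final model save needs a full file path (e.g. ...\\my_model.safetensors);
    a directory alone causes the save to fail.
    """
    # Normalize separators first, since configs often contain Windows paths.
    name = os.path.basename(dest.replace("\\", "/"))
    return "." in name


# A trailing-slash directory has no file name; a .safetensors path does.
has_model_filename("C:/OneTrainer/OutputDir/")
has_model_filename("C:/OneTrainer/OutputDir/my_model.safetensors")
```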

Tier 1 is the best quality. It does not use xFormers.

Tier 2 is the second best quality. It uses xFormers to reduce VRAM.

All Tier 2 configs are equal in quality; only speed and VRAM usage change.

xFormers: reduces VRAM, increases speed, reduces quality

Gradient Checkpointing: reduces VRAM, reduces speed, quality stays the same

EMA: increases VRAM, reduces speed, improves quality. You can keep the EMA weights on either the CPU or the GPU. If you keep them on the CPU, training will be slower but VRAM usage will stay the same.
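For reference, EMA keeps a shadow copy of the weights that is blended toward the live training weights every step — that shadow copy is what costs the extra VRAM (or RAM, if kept on the CPU). A minimal sketch of the update rule; the decay value here is purely illustrative, not OneTrainer's default:

```python
def ema_update(ema_weights, weights, decay=0.999):
    """Blend the shadow (EMA) weights toward the current training weights."""
    return [decay * e + (1.0 - decay) * w for e, w in zip(ema_weights, weights)]


# Three training steps against constant weights; the shadow copy
# drifts toward them at a rate set by the decay.
ema = [0.0, 0.0]
for _ in range(3):
    ema = ema_update(ema, [1.0, 2.0], decay=0.9)
```

Because the shadow weights average over many steps, the final EMA checkpoint tends to be smoother than the raw last-step weights, which is where the quality gain comes from.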

Since OneTrainer supports EMA, it has an advantage over Kohya.

Kohya config: https://www.patreon.com/posts/97379147

There are 2 training strategies: stylized vs. realism.

To find the very best models for both realism and stylization, I recently made a 161-model comparison, as you may remember: https://youtu.be/G-oZn4H-aHQ

Models Downloader Script And The Patreon Post Shown In The Video ⤵️ https://www.patreon.com/posts/1-click-download-96666744

1st:

Training for realism. For this strategy I have chosen the Hyper Realism V3 model from CivitAI. The config file will download it automatically from Hugging Face; alternatively, you can give a local path.

2nd:

Training for stylization, such as a 3D render of yourself. For this task I have chosen RealCartoon-Pixar V8 from CivitAI.

The key change you need to make to use this model is setting Clip Skip to 2 in Advanced Settings.

I used 15 training images and trained for 150 epochs.

My training images are shown below (they are at best medium quality)

For RealCartoon-Pixar V8, hopefully I will add regularization images to this post soon.

For realism, use our very best real regularization images, collected from Unsplash ⤵️

https://www.patreon.com/posts/massive-4k-woman-87700469

I trained at both 768x768 and 1024x1024. Training at 768x768 works better than 1024x1024; however, generating at 1024x1024 works better than 768x768. When fixing faces with the ADetailer extension, set the ADetailer resolution to 768x768 even if you generate images at 1024x1024.


Comments

Anonymous

You mean in the main ADetailer tab under "inpainting", where it says "use separate width/height"? Or do you mean in the settings of Auto1111 under the ADetailer tab?

John Dopamine

OT now has a branch that works for Stable Cascade. In time it'd be interesting to hear your thoughts on the best OT settings to train that new model (particularly the 3b one). I'm on the fence as to whether it's a model I'll get much use out of. Some people seem very optimistic about it with regard to training, but those training so far haven't shown much that demonstrates this. Even a "quickstart"/share of preliminary settings would be helpful (the model tab, for example, is a bit complicated for Stable Cascade; even though I had training going, I don't know if the text encoder/UNet were being trained properly. My results either showed minimal learning with LRs of 1e-5+, or burned out at 1e-4, etc.).

Steve Bruno

This was an issue you worked on last month; I was saying that the answer IS in the thread you worked on on GitHub. When the final model is saved, the whole file path must be specified, not just the directory. So if the hypothetical model output destination is C:\OneTrainer\WORKING\OutputDir\, that results in the error I got. I should have used C:\OneTrainer\WORKING\OutputDir\FILENAME. If this is in one of your readmes somewhere, I missed it, and that is what I was doing wrong. Putting in the file name fixed my problem.

Anonymous

I am trying to install OneTrainer on RunPod, but I always get a "tkinter" error. Can you help?

Nenad Kuzmanovic

So, I have a conclusion about OT: it is NOT using regularization images and it is NOT using prior preservation... I did a lot of testing, plus I read on the OT Discord that OT is not using the good old DreamBooth training script. It is pure fine-tuning... For DreamBooth, I will start to use the Diffusers library. Btw, this is working really amazingly; everybody should try it. Great results. https://github.com/huggingface/diffusers/tree/main/examples/advanced_diffusion_training I can post my training command; I spent a lot of time making it run...

Nenad Kuzmanovic

Maybe we should request DreamBooth integration, because if OT has LoRA support, then DreamBooth would be no problem, I guess..

Pavel Desort

So OneTrainer can't be trained as DreamBooth? I have just started training (fine-tuning doesn't help) — do I need to use the LoRA option in OneTrainer?

Furkan Gözükara

By using a second concept with reg images, we make it almost like DreamBooth: https://youtu.be/yPOadldf6bI No, I don't suggest LoRA training. Do this fine-tuning and extract a LoRA; it will be much better.

Alex

Hi, I trained an SDXL model with 15 pictures of a person. I did as you showed in the video, but after training, when I generate pictures with the keyword, I can't generate this person; the person is just not there. I tried training on an SD 1.5 model and got everything working, but on SDXL I can't. What could be the problem?

Roy Ding

I have a problem with config auto backup and sampling. If I set training epochs = 60, sample after 10 epochs, and backup after 60 epochs, it won't backup or sample the last epoch, but it will sample epochs 0-50. Is there something I did wrong?

Roy Ding

I found that OneTrainer has a really good feature called masked training; it allows training only the face area, or any part covered by the mask. Maybe you could cover this topic later?

Manpreet Singh

What is the minimum number of training images that will produce good results with this workflow, and how many epochs should I train for that number of images? Also, how many epochs for 20 images? (Hopefully the minimum required is lower than 20.)

Jannik

Hi, can you share your settings for the ADetailer?