

Patreon exclusive posts index

Join the Discord and tell me your Discord username to get a special rank: SECourses Discord

Download the attached webui-user.sh and relauncher.py into /workspace/stable-diffusion-webui

You will be asked whether to overwrite the existing files. Overwrite them.

Edit the relauncher.py file and change the GPU device index (you can list the available indices with the command shown after this list):

  • CUDA_VISIBLE_DEVICES=0 : the web UI will start on the first GPU
  • CUDA_VISIBLE_DEVICES=1 : the web UI will start on the second GPU
  • CUDA_VISIBLE_DEVICES=2 : the web UI will start on the third GPU, and so on
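
If you are not sure how many GPUs your Pod has or which index is which, you can list them with standard NVIDIA tooling (this is a general check, not part of the attached files):

nvidia-smi -L
# prints one line per GPU; the leading number is the index that CUDA_VISIBLE_DEVICES expects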

To start the web UI, run:

  • python relauncher.py
  • This will start a web UI instance and give you a public Gradio link to use it
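
Putting the steps together, a typical launch from the Pod's terminal looks roughly like this sketch (the GPU used is whichever index you set inside relauncher.py; the second GPU here is only an example):

cd /workspace/stable-diffusion-webui
# relauncher.py edited so that CUDA_VISIBLE_DEVICES=1, i.e. this instance runs on the second GPU
python relauncher.py

Each instance started this way gets its own public Gradio link.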

Or download all of the files as zip.zip

To start multiple Kohya trainings on a single Pod:

With multiple GPUs, you first need to get your training command. You can use the Kohya GUI's print training command feature.

Then you need to add the CUDA_VISIBLE_DEVICES prefix shown below to the beginning of that command.

First, activate the venv:

  • cd /workspace/kohya_ss
  • source venv/bin/activate
  • Then prefix your training command with CUDA_VISIBLE_DEVICES and run it, as in the example below

Example (this starts training on the fourth GPU on the machine, device index 3):

CUDA_VISIBLE_DEVICES=3 accelerate launch --num_cpu_threads_per_process=4 "./sdxl_train.py" \
  --pretrained_model_name_or_path="/workspace/stable-diffusion-webui/models/Stable-diffusion/sd_xl_base_1.0.safetensors" \
  --train_data_dir="/workspace/stable-diffusion-webui/models/Stable-diffusion/img" \
  --reg_data_dir="/workspace/stable-diffusion-webui/models/Stable-diffusion/reg" \
  --resolution="1024,1024" \
  --output_dir="/workspace/stable-diffusion-webui/models/Stable-diffusion/model" \
  --logging_dir="/workspace/stable-diffusion-webui/models/Stable-diffusion/log" \
  --save_model_as=safetensors --output_name="1e5_ada_40_repeat_wd001" \
  --lr_scheduler_num_cycles="8" --max_data_loader_n_workers="0" \
  --learning_rate="1e-05" --lr_scheduler="constant" --train_batch_size="1" \
  --max_train_steps="5200" --save_every_n_epochs="1" \
  --mixed_precision="bf16" --save_precision="bf16" \
  --cache_latents --cache_latents_to_disk \
  --optimizer_type="adafactor" --max_data_loader_n_workers="0" \
  --bucket_reso_steps=64 --xformers --bucket_no_upscale --noise_offset=0.0 --full_bf16 \
  --optimizer_args scale_parameter=False relative_step=False warmup_init=False weight_decay=0.01
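
To start a second training at the same time, open another terminal on the same Pod, activate the venv again, and run your own printed command with a different GPU index. A rough sketch (using a different --output_name / --output_dir is my suggestion so the two runs do not write over each other's files):

cd /workspace/kohya_ss
source venv/bin/activate
# Paste your own "print training command" output here, changing only the device index,
# e.g. CUDA_VISIBLE_DEVICES=1 for the second GPU, and ideally give this run its own
# --output_name / --output_dir so the two trainings keep their files separate.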

Comments

baran mengül

How much does it/s go up? Thanks

Furkan Gözükara

It changes according to the optimizer. Adafactor is 1.05 it/s and Lion is 1.55 it/s on an A5000. AdamW8Bit is also 1.55 it/s on the A5000 GPU, which is the same price as an RTX 3090.
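
For context, the optimizer being compared here is what the --optimizer_type flag in the example command above selects. Swapping it would look roughly like the lines below; the scale_parameter / relative_step / warmup_init entries in the example's --optimizer_args are Adafactor-specific and would generally be dropped for the other two:

# Adafactor (as used in the example command above)
--optimizer_type="adafactor" --optimizer_args scale_parameter=False relative_step=False warmup_init=False weight_decay=0.01
# Lion
--optimizer_type="Lion"
# AdamW 8-bit
--optimizer_type="AdamW8bit"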