FLUX Models 1-Click Auto Downloaders for SwarmUI for Windows, RunPod and Massed Compute (Patreon)
Patreon-exclusive posts index to find our scripts easily
Join our Discord to get help, chat, and discuss, and tell me your Discord username to get your special rank: SECourses Discord
Please also star, watch, and fork our Stable Diffusion & Generative AI GitHub repository, join our Reddit subreddit, and follow me on LinkedIn (my real profile)
Automatic Black Forest Labs FLUX models 1-Click downloaders for Windows, RunPod, Massed Compute
The downloader scripts can resume interrupted downloads, skip files that are already fully downloaded, and automatically retry up to 100 times on error, continuing from wherever they left off
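The resume / skip / retry behavior described above can be sketched in a few lines of Python. This is not the actual downloader script, just a minimal illustration of the idea; the function names and the 1 MiB chunk size are my own choices, and the sketch assumes the server honors HTTP Range requests:

```python
import os
import time
import urllib.request
import urllib.error


def should_skip(dest, expected_size):
    """Skip the download entirely if the file already has the full expected size."""
    return os.path.exists(dest) and os.path.getsize(dest) == expected_size


def download_with_resume(url, dest, max_retries=100):
    """Download url to dest, resuming any partial file and retrying on errors."""
    for _attempt in range(max_retries):
        try:
            # Resume from the current size of any partial file on disk
            start = os.path.getsize(dest) if os.path.exists(dest) else 0
            req = urllib.request.Request(url, headers={"Range": f"bytes={start}-"})
            with urllib.request.urlopen(req) as resp, open(dest, "ab") as out:
                while chunk := resp.read(1 << 20):  # 1 MiB chunks
                    out.write(chunk)
            return
        except (urllib.error.URLError, OSError):
            time.sleep(2)  # brief pause, then continue from wherever it left off
    raise RuntimeError(f"gave up after {max_retries} attempts: {url}")
```

Appending to the partial file (`"ab"`) plus the `Range` header is what lets a retry continue from where the last attempt stopped instead of starting over.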
15 August 2024 Update
Downloader scripts updated to v6: FLUX_models_Auto_Downloaders_v6.zip
The newest optimized and quantized models have been added to the downloads, as listed below
You can now overwrite the previously auto-downloaded T5 FP8 text encoder
Comprehensive model and precision tests have been performed and published in the post below
2 August 2024 Update
AWS FP8 model weights downloader added. If your GPU has under 30 GB VRAM, use these; there is no noticeable difference
FP16 T5-XXL text encoder downloader added; GPUs with 24 GB VRAM can use this
By default, SwarmUI downloads the FP8 version of the T5 text encoder
A zip file with more comparison tests has been added
How To Use
Download the zip file from the attachments and extract it into the SwarmUI parent folder as shown below
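The step above can be sketched as a small Python helper. This is only an illustration of the assumed layout, namely that the zip is extracted into the folder that contains SwarmUI/ so the downloader scripts land beside it; the function name is hypothetical:

```python
import zipfile
from pathlib import Path


def extract_downloaders(parent):
    """Extract the v6 downloader zip into the SwarmUI parent folder,
    so the scripts land beside the SwarmUI/ directory (assumed layout)."""
    parent = Path(parent)
    with zipfile.ZipFile(parent / "FLUX_models_Auto_Downloaders_v6.zip") as zf:
        zf.extractall(parent)
```

Any zip tool works just as well; the key point is extracting next to the SwarmUI folder, not inside it.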
Full Windows, Massed Compute, and RunPod tutorials here: https://www.patreon.com/posts/106135985
For Kaggle, we already have an updated notebook here: https://www.patreon.com/posts/106650931
I have compared FP8 mode, FP16 mode, and the Turbo model in detail in the imgsli link below
Running at FP16 requires about 28 GB of VRAM but produces better quality
FP8 vs FP16 vs Turbo vs FP16 T5 model detailed comparison (16 different tests) : https://imgsli.com/MjgzODYy/7/6
The Turbo model and the dev model have the same VRAM usage and per-step speed
The only difference is the number of steps needed
Running the FP16 model at FP8 precision gives identical results to the FP8 version, which is half the size
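The "half size" claim follows directly from bytes-per-weight arithmetic, assuming FLUX.1 dev's roughly 12 billion parameters (the ~28 GB VRAM figure above is higher because it also covers the text encoders and activations):

```python
params = 12e9  # approximate FLUX.1 dev parameter count (~12B, assumed)
fp16_gib = params * 2 / 1024**3  # FP16: 2 bytes per weight
fp8_gib = params * 1 / 1024**3   # FP8: 1 byte per weight
print(f"FP16 ~ {fp16_gib:.1f} GiB, FP8 ~ {fp8_gib:.1f} GiB")
```

Halving the bytes per weight exactly halves the weight file size, which matches the FP8 checkpoint being half the size of the FP16 one.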
How to run the model at FP16 is shown below