
Content

https://gofile.io/d/hIRzpc
ItBeginsAgain

Same release folder for everyone; look for `Bastard_v20dpo.safetensors`.

This is a DPO (Direct Preference Optimization) trained SD15 model, taking the last premium release and applying 2000 steps of DPO training, which results in more correct generations across the board.
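The post doesn't include any training code, but for anyone curious what DPO optimizes: it trains the model to score a preferred sample above a rejected one, relative to a frozen reference model. Here's a minimal sketch of the standard per-pair DPO loss (the function name and inputs are illustrative, not from this release):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one (preferred, rejected) pair.

    logp_w / logp_l       : log-likelihoods under the model being trained
    ref_logp_w / ref_logp_l: log-likelihoods under the frozen reference model
    beta                  : strength of the implicit KL penalty that keeps
                            the trained model close to the reference
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): shrinks as the trained model prefers the
    # "winner" more strongly than the reference does
    return math.log(1.0 + math.exp(-margin))

# Model favours the preferred sample more than the reference does,
# so the loss drops below the no-preference baseline of log(2).
print(dpo_loss(-10.0, -12.0, -11.0, -11.5, beta=0.5) < math.log(2))
```

For diffusion models like SD15, the log-likelihood terms are replaced by per-step noise-prediction errors, but the sigmoid-of-margins structure is the same.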

This is a preview model; if it proves successful, I will conduct a 4x larger, 4x longer training run and see how performance scales.

From initial tests, it looks like I should DPO train every version I release, as it's just ... better haha.
