
Patreon exclusive posts index

Join Discord and tell me your Discord username to get a special rank: SECourses Discord

16 May 2024 Update:

7 May 2024 Update:

  • Huge improvements have arrived: stable_cascade_v12.zip

  • 275 amazing pre-set styles added

  • You can loop through all of the styles and find the best ones for your prompt

  • RunPod and Massed Compute installers updated to the latest versions

  • The prompt box now supports multi-line prompts, so each line of your input is treated as a unique prompt

  • Metadata save and load systems improved and now work perfectly
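As a rough illustration of how the multi-line prompt box and style looping can combine, here is a minimal sketch. The `STYLES` templates and function names are hypothetical, made up for this example; they are not the app's actual code:

```python
# Hypothetical style templates; the app ships 275 pre-set styles,
# but their format is not shown in the post.
STYLES = {
    "Cinematic": "cinematic still of {prompt}, shallow depth of field, film grain",
    "Watercolor": "watercolor painting of {prompt}, soft washes, paper texture",
}

def expand_prompts(prompt_box_text: str, loop_styles: bool = True) -> list[str]:
    """Treat each non-empty line as its own prompt; optionally pair
    every line with every pre-set style template."""
    lines = [line.strip() for line in prompt_box_text.splitlines() if line.strip()]
    if not loop_styles:
        return lines
    return [template.format(prompt=line)
            for line in lines
            for template in STYLES.values()]

jobs = expand_prompts("a red fox\na lighthouse at dusk")
# 2 prompt lines x 2 styles -> 4 generation jobs
```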

5 May 2024 Update:

  • Huge improvements have arrived

  • Updated to V9; please do a fresh install (it will not download the models again)

  • The zip file includes the latest Kaggle notebook, Windows installers, RunPod and Massed Compute installers, and instructions

  • https://www.kaggle.com/ - register a free account with phone verification

  • If you don't know how to use Kaggle notebooks, here is a short video: https://youtu.be/iBT6rhH0Fjs

  • I suggest using Massed Compute rather than RunPod as a paid cloud service

  • Automatic metadata saving added

  • Metadata display and load tab added; now you can replicate your generations - only from V9 onward

  • FP4 and FP8 precision added; now works with as little as 5 GB VRAM

  • Some bugs fixed

  • Open folders button added; only works on desktop operating systems such as Windows or desktop Ubuntu

  • Now supports both FP16 (RTX 2000 series and below) and BF16 (RTX 3000 series and above) GPUs

  • Randomize seed now works properly

  • How much GPU VRAM each start option uses is now listed in the starter .bat file
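The metadata round-trip can be pictured as a JSON sidecar saved next to each image and read back later to re-apply the same settings. This is only a sketch under assumed field names; the app's real schema is not shown in the post:

```python
import json
import os

def save_metadata(image_path: str, params: dict) -> str:
    """Write generation parameters to a JSON sidecar next to the image."""
    meta_path = os.path.splitext(image_path)[0] + ".json"
    with open(meta_path, "w", encoding="utf-8") as f:
        json.dump(params, f, indent=2)
    return meta_path

def load_metadata(meta_path: str) -> dict:
    """Read the sidecar back so the same generation can be replicated."""
    with open(meta_path, encoding="utf-8") as f:
        return json.load(f)

# Hypothetical parameter names, for illustration only:
params = {"prompt": "a red fox", "seed": 42, "steps": 30, "guidance": 4.0}
```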

14 February 2024 Update V2:

  • Moved to a more up-to-date Diffusers repo - this will get merged into main Diffusers

  • The number of prior inference steps now works; you can increase or decrease it

  • Target image resolution now works in 128-pixel increments and decrements

  • Sliders are set accordingly, stepping by 128 pixels

  • FP16 start feature added, but due to a bug it is still not working

  • I have reported that bug here: https://github.com/huggingface/diffusers/issues/6970
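The 128-pixel stepping above can be expressed as a simple snapping helper. The slider limits used here are assumptions for illustration, not the app's real bounds:

```python
def snap_to_128(value: int, minimum: int = 512, maximum: int = 2048) -> int:
    """Round a requested width/height to the nearest multiple of 128,
    clamped to an assumed slider range."""
    snapped = round(value / 128) * 128
    return max(minimum, min(maximum, snapped))

snap_to_128(1000)  # -> 1024
snap_to_128(830)   # -> 768
```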

14 February 2024 Update:

Features

  • On 12 February 2024, Stability AI released a new model similar to Stable Diffusion V3, called Stable Cascade

  • You can read all the model details here: https://stability.ai/news/introducing-stable-cascade

  • I have developed an advanced standalone Gradio app with 1-click install and run

  • Supports a low VRAM option too (it will ask you when you run the run_StableCascade.bat file), so it works great even with 8 GB GPUs - tested on an RTX 4070 mobile at ultra-fast speeds of over 2 it/s

  • A free Kaggle account notebook is being prepared as well

  • Download the attached zip file (stable_cascade_v12.zip), extract it, and use windows_install.bat to install on Windows

  • For RunPod and Linux, follow runpod_instructions_READ

  • On your local Linux machine, don't forget to change the folder paths

  • Then use run_StableCascade.bat to start the Gradio app

  • Select the Low VRAM option or not, depending on your GPU

  • All generated images will be saved into the outputs folder

  • Enjoy
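One simple way an outputs folder can hand out non-clashing filenames is to continue from the highest number already present. This naming scheme is an assumption for illustration, not necessarily what the app does:

```python
import os
import re

def next_output_path(outputs_dir: str = "outputs", ext: str = ".png") -> str:
    """Return the next free zero-padded path, e.g. outputs/00001.png."""
    os.makedirs(outputs_dir, exist_ok=True)
    pattern = re.compile(r"(\d{5})" + re.escape(ext))
    # Collect the numbers of files that already follow the scheme.
    taken = [int(m.group(1)) for f in os.listdir(outputs_dir)
             if (m := pattern.fullmatch(f))]
    return os.path.join(outputs_dir, f"{max(taken, default=0) + 1:05d}{ext}")
```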

Requirements:

  • Make sure you have Git and Python 3.10.x installed; Python 3.10.11 is best

  • The tutorial below shows step by step how to install Python, Git, FFmpeg, and the C++ tools:

  • https://youtu.be/-NjNy7afOQ0

  • Hugging Face default cache folder setup: https://youtu.be/rjXsJ24kQQg


Comments

Pallavi Chauhan

Hi, there is a new version of A1111 called Forge UI; I found it faster than A1111. I found this comment in one of my communities: "There is a new stable diffusion interface called Forge that seems to be faster then Automatic1111 in special on those who dont have so powerfull pc. I used to struggle to run sdxl on my older pc with 6GB VRAM and crashed a lot, on the forge version seems to run it and even worked with hires fix and is way faster then automatic1111 You can download it from here https://github.com/lllyasviel/stable-diffusion-webui-forge" If it's true, please make an installer for this too.

Anonymous

Any updates on training Cascade?

Hassan Alhassan

G:\Cache\hub\models--stabilityai--stable-cascade-prior\snapshots\621fc2ddab5500e57079e716c15358a25b649090 - I then replace the contents of each folder, overwriting the files (*not the folders*) with those of the diffusers model trained in OneTrainer, then start the Cascade app and use it. Please let me know if my explanation is not clear; I'll create a screen recording if required. The directory is the HF cache directory.

Furkan Gözükara

Yeah, this is not the appropriate way; we need to be able to load .safetensors. I have reported this issue here, you can reply: https://github.com/huggingface/diffusers/pull/6487

Rick B

Running this notebook on Kaggle generally works fine. What's the command-line option for low VRAM, please? I tried --lowvram after the --fp16, but it didn't seem to make any difference; I still got numerous out-of-memory errors. It MAY be due to the WxH ratios, as I was experimenting with different sizes. Nothing over about 1 Mpix would work, so any WxH under 1 Mpix seemed to work.

Furkan Gözükara

Yes, you have to work in 128-pixel steps to make it work - I mean, increase or reduce the width or height by 128 pixels.

David Allen Neron

I'm not sure what happened, but I clicked "run_StableCascade.bat" like I usually do (it's been working for a couple of weeks now, no problem), but today it looked like it was downloading some new safetensors and some other stuff, and now it won't work at all. "raise AttributeError(f"module {self.__name__} has no attribute {name}") AttributeError: module diffusers has no attribute StableCascadeUNet" is the last thing that pops up.

David Allen Neron

I managed to get one image to generate, but now I'm getting this:

Token indices sequence length is longer than the specified maximum sequence length for this model (127 > 77). Running this sequence through the model will result in indexing errors
The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['leaf or a water droplet, that morphs into a light bulb or flame (( fire )), representing ideas and innovation springing from natural sources. the overall design should convey a harmonious blend of nature, luxury, and forward - thinking.']
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:12<00:00, 2.40it/s]
Token indices sequence length is longer than the specified maximum sequence length for this model (127 > 77). Running this sequence through the model will result in indexing errors
Traceback (most recent call last):
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\venv\lib\site-packages\gradio\queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\venv\lib\site-packages\gradio\route_utils.py", line 231, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\venv\lib\site-packages\gradio\blocks.py", line 1594, in process_api
    result = await self.call_function(
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\venv\lib\site-packages\gradio\blocks.py", line 1176, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\venv\lib\site-packages\gradio\utils.py", line 689, in wrapper
    response = f(*args, **kwargs)
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\app.py", line 340, in generate
    decoder_output = decoder_pipeline(
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\pipelines\pipeline_stable_cascade.py", line 398, in __call__
    _, prompt_embeds_pooled, _, negative_prompt_embeds_pooled = self.encode_prompt(
  File "D:\AI Art Generator - STABLE CASCADE\StableCascade\pipelines\pipeline_stable_cascade.py", line 150, in encode_prompt
    if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper_CUDA__equal)

I'm not sure what's happening. Any help would be super appreciated.

David Allen Neron

I shortened the prompt and it seemed to help, but I'm still getting this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper_CUDA__equal)