

Patreon exclusive posts index

Join the Discord and tell me your Discord username to get a special rank: SECourses Discord

24 October 2023 Update:

Currently it looks like free-tier sessions are getting terminated, but the paid tier should work perfectly. The notebook is compatible with the free tier too, but Google does not seem to be allowing it, at least for today (15 September 2023).

Open Google Colab: https://colab.research.google.com/

Click New Notebook as shown in this image: new notebook.png

Download Google_Colab_Automatic1111_v3.ipynb from the attachments

Click File > Upload notebook > select the downloaded notebook file: upload notebook.png

It will ask you whether you want to leave the page; click Yes

Select the T4 GPU, start the session, and execute the cells: connect.png

Make sure that you are connected to the GPU: connected gpu.png

It will give you a Gradio link as below; click it and start using the web UI
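For context, that public link comes from Gradio's share feature. A launch cell inside such a notebook typically looks roughly like the following (a sketch, not the notebook's exact cell; flags vary by webui version):

```shell
# Sketch of an Automatic1111 launch cell; --share asks Gradio to
# create a temporary public *.gradio.live URL that you can click.
python launch.py --share --xformers
```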

The notebook has download options for the following models:

  • All SD 1.5 based ControlNet models
  • All SDXL based ControlNet models
  • SDXL 1.0
  • SDXL 1.0 best VAE
  • SD 1.5 best VAE
  • SD 1.5 based model Realistic Vision 5.1
  • 4x_NMKD-Superscale-SP_178000_G.pth
  • Pixel_Art_XL_1_1.safetensors - LoRA
  • Patreon-requested models, as below:
  • CyberRealistic_v3_3 from CivitAI
  • AbsoluteReality_v181 from CivitAI
  • AlbedoBase_XL from CivitAI

Following the same logic, you can also add any model you want, or I can help you add it
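As a sketch of what adding a model means in practice, a download cell along these lines could be appended to the notebook. The folder path follows the default Automatic1111 layout; the helper name and any URL you pass (e.g. a CivitAI download link) are illustrative, not taken from the notebook itself:

```python
# Hypothetical helper for adding extra checkpoints to the notebook.
# A1111_CHECKPOINT_DIR matches the default Automatic1111 folder layout.
import os
import urllib.request

A1111_CHECKPOINT_DIR = "stable-diffusion-webui/models/Stable-diffusion"

def download_model(url, model_dir=A1111_CHECKPOINT_DIR, filename=None):
    """Download a checkpoint into model_dir, skipping if it already exists."""
    os.makedirs(model_dir, exist_ok=True)
    if filename is None:
        # Fall back to the last path segment of the URL as the file name.
        filename = url.rsplit("/", 1)[-1]
    dest = os.path.join(model_dir, filename)
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)
    return dest
```

For example, `download_model("https://.../my_model.safetensors")` would place the file next to the checkpoints the notebook already downloads, so it appears in the web UI's model dropdown after a refresh.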

Comments

Anonymous

Hi, I was delighted with this Colab, which was working very well... Yesterday I tried to get in and I got the error I paste below. I downloaded it again in case the problem was with my copy of the notebook, but it wasn't. It would be great to update this Colab to avoid the error, thanks.

NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
    query : shape=(2, 4096, 10, 64) (torch.float16)
    key : shape=(2, 4096, 10, 64) (torch.float16)
    value : shape=(2, 4096, 10, 64) (torch.float16)
    attn_bias :
    p : 0.0
`flshattF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    requires A100 GPU
    Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    max(query.shape[-1] != value.shape[-1]) > 32
    Operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 64

Anonymous

Hello, I have only been able to check a basic generation with AlbedoBase_XL using text2img, and it seems to work; I have not yet been able to check ControlNet, etc.

Quentin Guittard

Runtime disconnected. Your runtime has been disconnected due to executing code that is disallowed in our free of charge tier. Colab subsidizes millions of users and prioritizes interactive programming sessions while disallowing some types of usage as outlined in the FAQ. If you believe this message is in error, file an appeal. Please include any relevant context about your usage. Your compute unit balance is 0. Purchase more. To connect to a new runtime, click the connect button below.

Furkan Gözükara

Yes, you need to have paid Colab. Use our Kaggle notebook: https://www.patreon.com/posts/run-on-free-like-88714330

Art

Hi Furkan... how can I use ngrok (more stable) instead of a Gradio session?
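For readers with the same question: Automatic1111's webui accepts an ngrok auth token at launch, so a cell along these lines could replace the Gradio share link (the token is a placeholder from your own ngrok dashboard; verify the flag names against your webui version):

```shell
# Sketch: route the webui through an ngrok tunnel instead of Gradio share.
# YOUR_NGROK_AUTH_TOKEN is a placeholder, not a real token.
python launch.py --ngrok YOUR_NGROK_AUTH_TOKEN --xformers
```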