
Patreon exclusive posts index

Join our Discord and tell me your Discord username to get a special rank: SECourses Discord

How to use NGROK to connect Gradio apps on free Kaggle notebooks : https://youtu.be/iBT6rhH0Fjs
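For quick reference, here is a minimal sketch of opening an NGROK tunnel to a local Gradio port from a Kaggle notebook cell with the pyngrok package; the token placeholder and the default Gradio port 7860 are assumptions, and the video above shows the exact steps the notebook uses.

    # Minimal sketch (assumption: the session has internet access enabled)
    !pip install -q pyngrok

    from pyngrok import ngrok

    # Replace with your own token from the NGROK dashboard
    ngrok.set_auth_token("YOUR_NGROK_AUTH_TOKEN")

    # Automatic1111 / Gradio listens on port 7860 by default
    tunnel = ngrok.connect(7860)
    print("Web UI is reachable at:", tunnel.public_url)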

I would very much appreciate it if you upvoted this Reddit thread > https://www.reddit.com/r/StableDiffusion/comments/16cpg9i/how_use_stable_diffusion_sdxl_controlnet_loras/

25 May 2024 Update

  • Notebook updated to V13

  • New IP-Adapter SDXL model added to the ControlNet downloads to enable style transfer

  • ReActor extension added; it will be auto-installed

  • After Detailer (ADetailer) extension added; it will be auto-installed

  • Torch version upgraded to 2.3.0 and xFormers upgraded to 0.0.26 - really good performance
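If you ever need to recreate that environment by hand, a hedged sketch of pinning the same versions in a notebook cell could look like the following; the cu121 wheel index and the exact xFormers build suffix are assumptions, so adjust them to your CUDA version.

    # Pin Torch 2.3.0 and the matching xFormers 0.0.26 build (assumes CUDA 12.1 wheels)
    !pip install -q torch==2.3.0 torchvision --index-url https://download.pytorch.org/whl/cu121
    !pip install -q xformers==0.0.26.post1  # use the exact 0.0.26 build available for your setup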

2 March 2024 Update

  • Don't forget to set the "Extra path to scan for ControlNet models (e.g. training output directory)" option to /kaggle/temp/cnmodels in the Settings menu (a programmatic sketch follows this list)

  • Default models folder path set back to /kaggle/temp/models

  • A CSV file with 275 amazing Fooocus styles added to the notebook; it is auto-downloaded and ready to use as Automatic1111 SD Web UI styles

  • New models ip-adapter-faceid Plus and instant_id_sdxl added to the ControlNet models

  • To learn how to use them, please read the following threads:

  • Ip Adapter Face ID of ControlNet : https://github.com/Mikubill/sd-webui-controlnet/discussions/2442

  • InstantID of ControlNet : https://github.com/Mikubill/sd-webui-controlnet/discussions/2589 
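If you prefer to set the ControlNet extra path from code instead of the Settings menu, a hedged sketch is below; it assumes the option is stored under the key control_net_models_path in the Web UI's config.json and that the Web UI lives in /kaggle/working/stable-diffusion-webui, so verify both against your own install (saving the setting once from the UI shows the real key).

    import json

    # Assumed location - adjust to where the notebook actually installs the Web UI
    config_path = "/kaggle/working/stable-diffusion-webui/config.json"

    # The config file exists only after the Web UI has been launched and settings saved once
    with open(config_path) as f:
        config = json.load(f)

    # Assumed key for "Extra path to scan for ControlNet models (e.g. training output directory)"
    config["control_net_models_path"] = "/kaggle/temp/cnmodels"

    with open(config_path, "w") as f:
        json.dump(config, f, indent=4)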

14 December 2023 Update

  • Do not forget to put your own NGROK auth token in the Automatic1111 Web UI launch cell
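The Web UI also accepts an --ngrok argument directly (it is visible in the argument list pasted in the comments below), so a hedged sketch of a launch cell with the token filled in could look like this; the exact arguments in the notebook's launch cell may differ.

    # Launch Automatic1111 with your own NGROK auth token (replace the placeholder)
    !python launch.py --xformers --ngrok YOUR_NGROK_AUTH_TOKEN --ckpt-dir /kaggle/temp/models --enable-insecure-extension-access --no-half-vae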

19 October 2023 Update

  • Kaggle upgraded the free CPU count from 2 to 4 and RAM from 13 GB to 29 GB

  • Automatic1111 now runs blazing fast, and you can use many of the extensions, including ControlNet

  • Download the newest V6

9 October 2023 Massive Update NGROK

  • Now we will follow a different strategy to use the notebook

  • Please follow this video > https://youtu.be/dBBVQfKDmhM

  • It works much faster and better

9 September 2023 Update

  • The 11 new ControlNet models released today have been added to the notebook

  • gemasai/4x_NMKD-Superscale-SP_178000_G upscaler model added to the notebook upon request. You can also download other models the same way (see the sketch after this list)

  • 3 models from CivitAI added to the very bottom at the request of a Patreon supporter
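As an example of "the same way", here is a hedged sketch of pulling that upscaler into the Web UI's ESRGAN models folder with huggingface_hub; the file name inside the repository and the destination folder are assumptions, so adjust them to what the notebook actually uses.

    import os, shutil
    from huggingface_hub import hf_hub_download

    # Assumed folder the Web UI scans for ESRGAN-style upscalers
    dest_dir = "/kaggle/working/stable-diffusion-webui/models/ESRGAN"
    os.makedirs(dest_dir, exist_ok=True)

    # Assumed file name inside the gemasai/4x_NMKD-Superscale-SP_178000_G repository
    downloaded = hf_hub_download(
        repo_id="gemasai/4x_NMKD-Superscale-SP_178000_G",
        filename="4x_NMKD-Superscale-SP_178000_G.pth",
    )
    shutil.copy(downloaded, dest_dir)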

5 September 2023 Update

  • SDXL ControlNet added to the notebook and working great

Register a free Kaggle account : https://www.kaggle.com/

Verify your phone number : https://www.kaggle.com/settings

Start a new notebook by clicking the + Create button

Upload this automatic1111-sd-web-ui-free-kaggle-notebook v13.ipynb (import notebook)

Full main tutorial of this notebook > https://youtu.be/dpM02YMj8FY?si=8e7_koUdtroXHR9O

Select Python and GPU P100 or GPU T4 x2

Instructions are clearly written in the notebook

Comments

Rasika Singal

How to make it work on TPU?

Anonymous

SDXL & ControlNet ? There's support now??

Anonymous

Launching Web UI with arguments: -f --xformers --ckpt-dir /kaggle/temp/models --enable-insecure-extension-access --no-half-vae --lora-dir /kaggle/input/my-loras \

usage: launch.py [-h] [--update-all-extensions] [--skip-python-version-check] [--skip-torch-cuda-test] [--reinstall-xformers] [--reinstall-torch] [--update-check] [--test-server] [--log-startup] [--skip-prepare-environment] [--skip-install] [--dump-sysinfo] [--loglevel LOGLEVEL] [--do-not-download-clip] [--data-dir DATA_DIR] [--config CONFIG] [--ckpt CKPT] [--ckpt-dir CKPT_DIR] [--vae-dir VAE_DIR] [--gfpgan-dir GFPGAN_DIR] [--gfpgan-model GFPGAN_MODEL] [--no-half] [--no-half-vae] [--no-progressbar-hiding] [--max-batch-count MAX_BATCH_COUNT] [--embeddings-dir EMBEDDINGS_DIR] [--textual-inversion-templates-dir TEXTUAL_INVERSION_TEMPLATES_DIR] [--hypernetwork-dir HYPERNETWORK_DIR] [--localizations-dir LOCALIZATIONS_DIR] [--allow-code] [--medvram] [--medvram-sdxl] [--lowvram] [--lowram] [--always-batch-cond-uncond] [--unload-gfpgan] [--precision {full,autocast}] [--upcast-sampling] [--share] [--ngrok NGROK] [--ngrok-region NGROK_REGION] [--ngrok-options NGROK_OPTIONS] [--enable-insecure-extension-access] [--codeformer-models-path CODEFORMER_MODELS_PATH] [--gfpgan-models-path GFPGAN_MODELS_PATH] [--esrgan-models-path ESRGAN_MODELS_PATH] [--bsrgan-models-path BSRGAN_MODELS_PATH] [--realesrgan-models-path REALESRGAN_MODELS_PATH] [--clip-models-path CLIP_MODELS_PATH] [--xformers] [--force-enable-xformers] [--xformers-flash-attention] [--deepdanbooru] [--opt-split-attention] [--opt-sub-quad-attention] [--sub-quad-q-chunk-size SUB_QUAD_Q_CHUNK_SIZE] [--sub-quad-kv-chunk-size SUB_QUAD_KV_CHUNK_SIZE] [--sub-quad-chunk-threshold SUB_QUAD_CHUNK_THRESHOLD] [--opt-split-attention-invokeai] [--opt-split-attention-v1] [--opt-sdp-attention] [--opt-sdp-no-mem-attention] [--disable-opt-split-attention] [--disable-nan-check] [--use-cpu USE_CPU [USE_CPU ...]] [--disable-model-loading-ram-optimization] [--listen] [--port PORT] [--show-negative-prompt] [--ui-config-file UI_CONFIG_FILE] [--hide-ui-dir-config] [--freeze-settings] [--ui-settings-file UI_SETTINGS_FILE] [--gradio-debug] [--gradio-auth GRADIO_AUTH] [--gradio-auth-path GRADIO_AUTH_PATH] [--gradio-img2img-tool GRADIO_IMG2IMG_TOOL] [--gradio-inpaint-tool GRADIO_INPAINT_TOOL] [--gradio-allowed-path GRADIO_ALLOWED_PATH] [--opt-channelslast] [--styles-file STYLES_FILE] [--autolaunch] [--theme THEME] [--use-textbox-seed] [--disable-console-progressbars] [--enable-console-prompts] [--vae-path VAE_PATH] [--disable-safe-unpickle] [--api] [--api-auth API_AUTH] [--api-log] [--nowebui] [--ui-debug-mode] [--device-id DEVICE_ID] [--administrator] [--cors-allow-origins CORS_ALLOW_ORIGINS] [--cors-allow-origins-regex CORS_ALLOW_ORIGINS_REGEX] [--tls-keyfile TLS_KEYFILE] [--tls-certfile TLS_CERTFILE] [--disable-tls-verify] [--server-name SERVER_NAME] [--gradio-queue] [--no-gradio-queue] [--skip-version-check] [--no-hashing] [--no-download-sd-model] [--subpath SUBPATH] [--add-stop-route] [--api-server-stop] [--timeout-keep-alive TIMEOUT_KEEP_ALIVE] [--disable-all-extensions] [--disable-extra-extensions] [--ldsr-models-path LDSR_MODELS_PATH] [--lora-dir LORA_DIR] [--lyco-dir-backcompat LYCO_DIR_BACKCOMPAT] [--scunet-models-path SCUNET_MODELS_PATH] [--swinir-models-path SWINIR_MODELS_PATH]

launch.py: error: unrecognized arguments: \

Furkan Gözükara

Remove the \ at the very end and try again, and let me know please

Anonymous

It is not working at this moment, right? The 70 GB of space runs out and it asks to upgrade to Google Cloud notebooks

Anonymous

The notebook currently isn't making use of both Nvidia T4 GPUs. I was wondering about the benefits of having two T4s. Alternatively, we could consider the P100. Is there a way to configure it for optimal performance?

Gen Zero

And how do I install the ip-adapter-plus-face_sd15 model? Please advise me, or is there a tutorial clip for me?

Rick B

I see you've reset the models dir to the temp directory. I was going to report I spent all day with my checkpoints loaded in /kaggle/temp/ and it worked fine, but you're already aware. Thanks so much for keeping on top of all the updates.

Akshay

Hi, I'm running this on Colab Pro with an L4 GPU and High RAM, but it's giving me this error while trying to generate images:

NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(2, 4096, 8, 40) (torch.float16)
key : shape=(2, 4096, 8, 40) (torch.float16)
value : shape=(2, 4096, 8, 40) (torch.float16)
attn_bias :
p : 0.0
`decoderF` is not supported because:
xFormers wasn't build with CUDA support
attn_bias type is
operator wasn't built - see `python -m xformers.info` for more info
`flshattF@0.0.0` is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see `python -m xformers.info` for more info
triton is not available
Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
xFormers wasn't build with CUDA support
dtype=torch.float16 (supported: {torch.float32})
operator wasn't built - see `python -m xformers.info` for more info
unsupported embed per head: 40

Furkan Gözükara

Well, that means you need to manually install Torch and xFormers versions that match each other to fix the error. Install torch 2.2.0 and xformers 0.0.24

Oliver Koch

Minor bug report: in the "Zip all images" cell it should be "/outputs/" instead of "/output/". Otherwise it works great.

Vince Ma

This is fantastic! I never knew we got a free lunch, will try it this evening! Thanks a lot for sharing!!! But would it be possible to make a notebook to run ComfyUI on Kaggle?