
Patreon exclusive posts index

Join discord and tell me your discord username to get a special rank : SECourses Discord

SUPIR: Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild 1 click installer scripts.

For those whose models fail to download for some reason, here is a single zip file that includes all models. Extract it into the SUPIR folder. It is 45 GB. You can download it from a browser and resume, or download it with software such as uGet (working with the latest version, no changes needed) : https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/supir_v45_models.zip

SUPIR Sampler and Text CFG Comparison : https://imgsli.com/MjU2ODQz/2/1

6 June 2024 Update

  • Download new zip file :  SUPIR_v49_v3.zip

  • Just run the Windows_Update_Version.bat file to update if you have a recent version such as V46 or V47

  • New feature added: restore configuration / settings from saved PNG images in the outputs folder

  • Load an image into the Outputs tab and it will display its metadata

  • You will see an Apply Metadata button; click it to restore the used configuration (a sketch of reading this metadata appears after this list)

  • Kaggle notebook fixed
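
If you want to inspect that embedded metadata yourself outside the app, here is a minimal Python sketch using Pillow, assuming the settings are stored as PNG text chunks (the file name below is a hypothetical example, not an actual output name):

    from PIL import Image

    img = Image.open("outputs/example_upscale.png")  # hypothetical output file name
    # PNG text chunks (where generation settings are typically embedded) appear in img.info
    for key, value in img.info.items():
        print(f"{key}: {value}")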

7 May 2024 Update

  • Updated to V48

  • Now when you load a preset, it will not change your existing selected image to upscale

21 April 2024 Update

  • Full Windows tutorial published that covers how to update, upgrade, and do a fresh install using downloaded models : https://youtu.be/OYxVEvDf284

  • If you also upvote the Reddit post, I appreciate that very much

  • Added Juggernaut XL V10 auto download feature to the Download Checkpoints tab

  • Just run the Windows_Update_Version.bat file to get this version

  • New feature added: save and load presets

  • If you find that any preset you changed is not saved or loaded back, please let me know

  • Moreover, Windows_Install.bat is improved and will now automatically install with the correct Python version if you have multiple Pythons and the Python launcher installed (see the sketch after this list)
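
For reference, a minimal sketch of how the Windows Python launcher picks a specific interpreter when several are installed; these commands are an illustration of the idea (assuming Python 3.10 and a requirements.txt at the repo root), not the literal contents of Windows_Install.bat:

    py -3.10 --version
    py -3.10 -m venv venv
    venv\Scripts\activate
    pip install -r requirements.txt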

17 April 2024 Update

  • Download Checkpoints tab added. Tell me which models you want to see there and I will hopefully update that tab so that you can easily download any model you want with 1 click (only SFW models)

  • Now you can select only Face restoration and it will restore only the face; the rest of the image will remain the same

  • You can run the Windows_Update_Version.bat file to update to this version

  • --outputs_folder_button command line argument added to the start bat file so that you can quickly open the outputs folder from the interface (see the example after this list)
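
A hedged example of how the new argument could be appended to the python line inside start_SUPIR_Windows.bat, following the same pattern as the other command examples in this post. It is shown here as a simple flag, and the outputs path is just a placeholder:

    python gradio_demo.py %lowvram% %tiledVAE% --outputs_folder_button --outputs_folder "R:\SUPIR_outputs"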

14 April 2024 Update

  • Quick RunPod tutorial published : https://www.youtube.com/watch?v=RjMJh9fAO10

  • Massed Compute instructions and installers added

  • Massed Compute is many times better than RunPod

  • No broken Pods, amazing download and disk speed

  • Only 31 cents per hour for A6000 GPUs

  • A new tutorial will hopefully be recorded, but I already have an amazing recent tutorial for Massed Compute

  • You can watch this tutorial here : https://youtu.be/0t5l6CP9eBg

  • Please register Massed Compute from here : https://bit.ly/Furkan-Gözükara

  • Please watch the above video and read the Massed_Compute_Instructions_READ.txt file

  • All files inside the zip have been renamed to be easier to understand

  • RunPod instructions improved

  • For those whose models fail to download for some reason, here is a single zip file that includes all models. Extract it into the SUPIR folder. It is 45 GB. You can download it from a browser and resume, or download it with software such as uGet (working with the latest version, no changes needed) : https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/supir_v45_models.zip

18 March 2024 Update

SUPIR upgraded to V44

  • More improvements and fixes

  • New starting arguments added

  • New full FP8 model precision support, which reduces VRAM usage to around 7 GB for a 1 Megapixel upscale

  • New full FP16 support, which means we can now use it on a free Kaggle account or on older GPUs that do not have BF16 support

  • Working Kaggle notebook included in the zip file

  • available_command_line_arguments.txt file is updated; read it to see all arguments that you can add to your start_SUPIR_Windows.bat file (an example appears after this list)

  • update_windows_requirements.bat file updated and improved

  • run_linux.sh and start_SUPIR_Windows.bat files updated and new options added

  • How to upgrade? First close SUPIR, run the update_windows_version.bat file and see if it works or not

  • If that fails, close SUPIR and run the update_windows_requirements.bat file

  • If this also fails, please do a fresh install under a new folder such as c:/SUPIR_v44
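
As mentioned for available_command_line_arguments.txt above, extra arguments go on the python line inside start_SUPIR_Windows.bat. A hedged example combining the command form shown later in this post with arguments listed there (the outputs path is a placeholder):

    python gradio_demo.py %lowvram% %tiledVAE% %slider% %theme% --log_history --outputs_folder "R:\SUPIR_outputs"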

17 March 2024 Update

  • SUPIR updated to V41 and the changes are as follows:

  • Version info moved to the About tab

  • Folder listing added to the Outputs-Metadata tab

  • Interface is redesigned

  • Auto Unload LLaVA to reduce VRAM is now under LLaVA options section in the interface

  • All of the processes are now displayed properly, including steps that previously were not displayed at all, even at the command line interface

  • The back-end code has also been completely redesigned

  • More starting arguments and their descriptions added

  • --use_fast_tile option added - may impact quality - I haven't had time to test yet

  • available_command_line_arguments.txt file added inside the zip file to make it easier for you to read

  • This is a massive update with lots of changes; therefore, first install into a new folder such as SUPIR_V40, verify it works, and then delete the older folder

  • When installing, do not name the parent folder SUPIR

  • Now you can use a video as an input

  • It will let you select beginning and ending frames

  • The middle icon just displays the current frame on the screen

  • Final video output merging is still not fully working, but you can manually merge the upscaled images, which will be inside outputs/extracted_frames

  • You can manually combine extracted frames with FFmpeg or any tool such as DaVinci Resolve (a sample FFmpeg command appears after this list)

  • The extracted_frames folder will be cleared each time you run a new video processing job, so be careful

  • Prompt Styles feature added

  • Prompt Styles are useful according to your input image type

  • The metadata description embedded into images is improved

  • Checkpoint Type added

  • If you are using an SDXL Lightning model, check it and it will automatically set the step count to 10 and the Text Guidance Scale to 2 for you

  • I have made a comprehensive comparison between SDXL full model vs Lightning model with different parameters

  • You can see comparison results here : https://imgsli.com/MjQ3ODQ2

  • Play with drop down selections to compare different configurations 

  • Face restoration is working amazingly on some thoroughly tested images

  • Carefully read all of the options on the interface by expanding all option sections

  • Please report any bugs and issues from our Discord channel

  • downloader.py file will automatically download the WildCardX-XL LIGHTNING model as the SDXL Lightning model into the models/checkpoints folder

  • I compared several SDXL Lightning models and found this one is best

  • run_linux.sh added, which will make starting the app easier with options on RunPod or on Linux systems

  • We got SUPIR working on Kaggle; hopefully coming soon

  • We got SUPIR working with 8-bit loading - reduced VRAM - hopefully coming soon

  • There are some new libraries, so you need to at least run the update_windows_version.bat and update_windows_requirements.bat files, but I suggest doing a clean install
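
As mentioned in the video-input items above, the upscaled frames in outputs/extracted_frames can be merged back into a video manually. A hedged FFmpeg example, assuming sequentially numbered PNG frames (the exact file name pattern in your outputs folder may differ, so adjust it and the frame rate to match your source video):

    ffmpeg -framerate 30 -i outputs/extracted_frames/frame_%05d.png -c:v libx264 -pix_fmt yuv420p upscaled_video.mp4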

11 March 2024 Update V2

  • Auto Deload LLaVA - Free VRAM checkbox added

  • When this checkbox is checked, it will deload LLaVA from VRAM after captioning input images, including batch folder processing (see the sketch after this list for the general idea)

  • Apply Stage 1 checkbox fixed. This will be applied and displayed as output

  • Apply LLaVa fixed. You will see the prompt generated by the Apply LLaVa checkbox in the prompt box

  • So you can check only the Apply LLaVa checkbox and get the image prompt. Don't forget to uncheck Apply LLaVa if you make changes to the prompt

  • Apply Stage 1 will not be applied to Apply Stage 2 anymore

  • So for best quality you can check Stage 1, LLaVA and Stage 2

  • If you check all these 3 it will work as below

  • Apply Stage 1 > generate caption with LLaVA > use main input image > Apply Stage 2 (Upscale)

  • Open Outputs folder button added - it will use your set outputs folder via --outputs_folder
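
For context on the deload feature above, freeing VRAM after a model finishes is generally a matter of dropping references and clearing the CUDA cache. A minimal generic PyTorch sketch of the technique, not the app's actual code (llava_model is a hypothetical variable holding the loaded captioning model):

    import gc
    import torch

    llava_model.to("cpu")     # move the captioning model off the GPU
    del llava_model           # drop the Python reference
    gc.collect()              # let Python reclaim the object
    torch.cuda.empty_cache()  # release the cached GPU memory back to the driver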

11 March 2024 Update

  • If you had updated to V26, just run the windows_update_version.bat file; otherwise it is better to do a fresh install

  • Model Selection v0-Q and v0-F fixed and working with hot reload - run time change

  • Face processing fixed. It supports both a face prompt and the default

  • If output face is not merged, try to change output resolution slightly. This works

  • Now you can upscale images that have multiple faces

  • Hopefully I will add auto face captioning with LLaVA and multi-face prompts - every line a face prompt

  • Min number of steps reduced to 1 for faster SDXL models

  • Batch processing bugs fixed; images are properly saved in the desired outputs folder or in the default output folder

  • Existing caption files are read if present during batch processing (see the sketch after this list)

  • Full batch processing status is displayed properly

  • Each batch-processed image is saved immediately after its processing finishes

  • On-the-fly batch LLaVA captioning applied

  • Please test all of the above batch processing and let me know if there are any errors; I am also still testing

  • Show batch processing results as Gallery

  • Batch processing related bugs fixed

  • Slider comparison fixed

  • Processing output messages improved

  • The original input image is no longer saved in the outputs folder when upscaling

  • New feature added: model hot reloading

  • Now put SDXL models inside models/checkpoints folder

  • Use the refresh button and select any model while running; whenever you change the model, it will reload the new model
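
For the caption-reading item above, the usual convention is a .txt sidecar with the same base name as the image. A hedged Python sketch of that idea (an illustration only, not the app's actual implementation):

    from pathlib import Path

    def read_sidecar_caption(image_path: str) -> str:
        # look for e.g. bird.txt next to bird.png and use it as the prompt if present
        caption_file = Path(image_path).with_suffix(".txt")
        if caption_file.exists():
            return caption_file.read_text(encoding="utf-8").strip()
        return ""  # fall back to the default prompts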

10 March 2024 Update

  • Updated to V26 

  • Please do a reinstall

  • You can move the models folder - to avoid downloading models again, edit windows_install.bat and delete the lines that run download_models.py and downloader.py

  • The downloader files will automatically download all of the models, including LLaVA 7b and Juggernaut-XL_v9

  • Put your old SDXL models into the models/checkpoints folder to use them, including Juggernaut-XL_v9

  • Initial download of all models may take a while - around 40 GB

  • Complete code and interface redesign

  • Therefore, there can be new bugs and there are still some issues

  • I will hopefully do a new full Tutorial ASAP

  • The image slider now has a full screen feature

  • LLaVA batch caption on the runtime working with 4-bit LLaVA 7b model - uses less than 4GB VRAM

  • Now, when you load the app for the first time, you can pick any base model from the --ckpt_dir argument location - by default the models/checkpoints folder

  • If you don't want to change default --ckpt_dir argument, put your SDXL models into the models/checkpoints folder

  • Apply Stage 1 and Apply LLaVA will be used both at batch processing and single image upscale

  • Hopefully I will add generate LLaVA caption button and batch LLaVA caption folder features soon

  • Hopefully I will soon add batch extraction of input video frames and batch conversion of upscaled images back into a video

  • Please report any bugs you might encounter

Known Issues At The Moment

  • The Number Of Images option will still display only 1 image in the slider, but all generated images will be saved in the outputs folder

  • Outputs folder opening button not added yet

  • When batch processing a folder, if you enable video comparison generation it will cause an error after the first image is processed

  • Select your base model at first load, otherwise changing the loaded model will not work yet

  • So if you want to use a different model, restart the app and select the base model before starting

  • Face restoration works but in some cases it will not merge the face with the output final image 

  • You can see restored faces on Restored Faces tab

5 March 2024 Update V2

  • If you didn't install V19, you need to install new libraries as written below or do a fresh install

  • Face restore worked very well in the last test I made

  • However, it still may not work on some images

  • New feature added: generate comparison video

  • This will generate a very smooth video that shows the slider movement from left to right; it works amazingly

  • If you enable it, it will generate videos for batch processing as well

  • The videos will be automatically saved inside outputs/compare_videos folder

  • The Stage 1 button has been moved to the very bottom

  • I found that a Text Guidance Scale of 6 works better than 7.5 for reducing hallucination with XL_v9_RunDiffusionPhoto_v2

5 March 2024 Update

  • If you didn't install V19, you need to install new libraries as written below or do a fresh install

  • If you have installed V19, just run the windows_update_version.bat file

  • Base model changing logic changed from yaml file modification to argument passing

  • Edit start_SUPIR_Windows.bat and append your wanted model path as in the example below

  • python gradio_demo.py %lowvram% %tiledVAE% %slider% %theme% --ckpt "R:\auto1111 installer\stable-diffusion-webui\models\Stable-diffusion\sd_xl_base_1.0.safetensors"

  • To use auto queue system, open http://127.0.0.1:7860/ on multiple tabs and set your parameters and start upscaling

  • It will queue each upscale automatically for you. You can open as many tabs as you want

  • The effect of Text Guidance Scale is significant. If you get too much hallucination, reduce it to around 6 or 5 and try again

  • Target output resolution info added below the input image

  • So you can see what will be the target resolution as you change the upscale ratio

  • Model loading logic & pipeline updated

  • Now the app will start immediately, but models will be loaded the first time you use it

  • Dark theme option added - you will be asked when starting the start_SUPIR_Windows.bat file

  • direct_start_no_options.bat file added as an example. Rename it and modify it so you won't have to select options again and again (a sample is shown after the parameter list below)

  • Supported parameters are as below

  • parser.add_argument("--ip", type=str, default='127.0.0.1')

  • parser.add_argument("--share", type=str, default=False)

  • parser.add_argument("--port", type=int, default=7860)

  • parser.add_argument("--use_image_slider", action='store_true', default=False)

  • parser.add_argument("--log_history", action='store_true', default=False)

  • parser.add_argument("--loading_half_params", action='store_true', default=False)

  • parser.add_argument("--use_tile_vae", action='store_true', default=False)

  • parser.add_argument("--encoder_tile_size", type=int, default=512)

  • parser.add_argument("--decoder_tile_size", type=int, default=64)

  • parser.add_argument("--ckpt", type=str, default='models/Juggernaut-XL_v9_RunDiffusionPhoto_v2.safetensors')

  • parser.add_argument("--theme", type=str, default='gr.themes.Default()')

  • #parser.add_argument("--theme", type=str, default='d8ahazard/material_design_rd')

  • parser.add_argument("--outputs_folder")
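
A hedged example of what a renamed direct_start_no_options.bat might contain, combining the command form shown above with a few of the listed arguments; the model and outputs paths are placeholders, and a real bat file would likely also keep the environment activation lines from the original start file:

    python gradio_demo.py --use_image_slider --loading_half_params --use_tile_vae --ckpt "models/Juggernaut-XL_v9_RunDiffusionPhoto_v2.safetensors" --outputs_folder "R:\SUPIR_outputs"
    pause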


4 March 2024 Update

  • Updated to V19: You need to install new libraries. I suggest deleting downloader.py, installing into a fresh folder, and moving the models folder if you had previously downloaded it. If you can't do this, just do a new fresh install

  • Our app is updated to the latest original repo code

  • Linear CFG is now auto-enabled and set to 4 (the authors did it this way)

  • Image Comparison slider option added

  • You will be asked to start with the image Slider or the Gradio Gallery option in the start_SUPIR_Windows.bat file

  • Gradio slider is not at the quality level we want yet

  • So I opened an issue about it; please leave a supporting reply : https://github.com/gradio-app/gradio/issues/7588

  • Face restore feature added

  • It restores faces greatly, however there is currently a bug

  • So the restored face is not applied to the final image yet

  • You can reply to the bug thread I opened : https://github.com/Fanghua-Yu/SUPIR/issues/57

  • Still, you can see the restored face at the Restored Faces tab at the top of the Gradio app

  • Let me know if you encounter any bug. Please report bugs with steps and details

  • More info about how to use:


2 March 2024 Update

  • Just run the windows_update_version.bat file

  • Requested MetaData feature added

  • All generation config is saved inside the image as metadata

  • Moreover, it will save all metadata as a txt file inside the desired outputs folder, under the images_meta_data folder

  • If you apply Stage 1, that info will not be saved in the metadata, so if you don't get the exact same image, apply Stage 1

  • I also added MetaData viewer tab - at the very top of the Gradio app

  • In MetaData viewer tab, you can load image and see its MetaData

  • It should support every image's metadata, including Automatic1111 SD Web UI generations

  • --outputs_folder bugs fixed. It will be used if you set it in the start_SUPIR_Windows.bat file

  • Example : python gradio_demo.py %lowvram% %tiledVAE% --outputs_folder "R:\SUPIR_v8\test2"

  • Apply stage 1 before stage 2 feature added

  • Newest interface image metadata.png , updated interface.png

  • I am working on other requested features. Thank you for continuing to support me

29 February 2024 Update

  • Output naming completely changed as you requested

  • It will save images with the same file name e.g. bird.png

  • If bird.png exists then it will start naming as bird_0001.png then bird_0002.png

  • If you delete bird_0001.png and leave bird_0002.png, it will name the next one bird_0001.png

  • So it will scan the folder and use the same name; if that name already exists, it will look for the next empty slot (a sketch of this naming logic appears after this list)

  • All you need to do is run the windows_update_version.bat file if you had previously installed V8 or above
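
A minimal Python sketch of that naming scheme as described above (an illustration of the logic, not the app's exact code):

    from pathlib import Path

    def next_free_name(folder: str, stem: str, ext: str = ".png") -> Path:
        # first try the plain name, e.g. bird.png
        candidate = Path(folder) / f"{stem}{ext}"
        if not candidate.exists():
            return candidate
        # otherwise take the first free numbered slot: bird_0001.png, bird_0002.png, ...
        i = 1
        while (Path(folder) / f"{stem}_{i:04d}{ext}").exists():
            i += 1
        return Path(folder) / f"{stem}_{i:04d}{ext}"

    # example: print(next_free_name("outputs", "bird"))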

Important Tips From Our Discord Members

If you want to upscale an image that already has decent quality, do not use Stage 1! Stage 1 is there to smooth out extremely low-res images so the upscaling process can re-add details. But if you do this on decent quality images, you lose details rather than gaining the best quality output. I have tested this with a few images and the output was always better when not using Stage 1 on already good images. You can just upload your own image instead of using Stage 1.


Another tip: if your test case image has text, SDXL Base 1.0 may work better. I am working on a way to change models while the app is running.

28 February 2024 Update

  • For your convenience, a windows_update_version.bat file has been added

  • It will do a git pull automatically for you, but it will not install new libraries if the update includes any

  • Now you can set the outputs folder from start_SUPIR_Windows.bat by adding a parameter like below

  • --outputs_folder "R:\SUPIR_v8\test2"

  • I am currently working on adding a face restore option + better naming of output images like input_image_file_name_00001.png + saving used prompts in the same outputs folder so you can carry over captions.

27 February 2024 Update V2

  • Image upscale ratio slider improved. Now you can also set values such as 1.5. Working great, tested.

  • Batch processing added

  • You can give no prompt and it will use the default positive and negative prompts; it will still work

  • If you had installed V8 or V9, just do a git pull on the SUPIR folder and restart



27 February 2024 Update - Works at 12 GB VRAM:

  • Base model changed to Juggernaut-XL-v9 and this model will be auto downloaded 

  • Juggernaut-XL yields way better results

  • 2 new optimizations added

  • --loading_half_params and --use_tile_vae

  • The start_SUPIR_Windows.bat file has been updated so you will be asked whether to start with them on Windows

  • Also updated RunPod instructions and added them

  • Now uses around 12 GB VRAM

  • Tested on an RTX 3060 12GB and it worked perfectly - make sure that you are using no more than around 500 MB of VRAM before starting the app

  • Please do a new fresh install

Original repo is here : https://github.com/Fanghua-Yu/SUPIR

Full tutorial video added : https://youtu.be/PqREA6-bC3w

Requirements

How To Install

  • Download latest zip file from the attachments and extract files wherever you want to install

  • Double click windows_install.bat and let it install and download the necessary models

  • Requires 12 GB VRAM without LLaVA. Tested on RTX 3060

  • Make sure that your VRAM usage is not higher than 500 MB before starting SUPIR

  • If you want to use LLaVA you can use our better installer : https://www.patreon.com/posts/90744385

  • LLaVA is not mandatory as shown in full tutorial video : https://youtu.be/PqREA6-bC3w

Extra Features

  • 1 click install including automatically downloading necessary models

  • Generate any number of images each time with different seed 

  • So you can generate 200 different images with different seeds in 1 click

  • Every generated image is saved in the outputs folder automatically

  • Displays progress via Gradio progress info, including each image's generation speed

  • This info is displayed in the CMD / terminal as well

  • Improved, more useful Gradio interface

  • Randomize seed feature

  • Includes pip freeze info for future library fixes

  • Auto installers for both Windows & RunPod (Any Linux)

  • Supports VRAM optimizations and now works on 12 GB GPUs

  • Tested on RTX 3060

  • Make sure that your VRAM usage is not higher than 500 MB before starting SUPIR

How To Use

  • In Stage 2 options you will find many settings

  • Try different settings

  • After uploading an image, hit Stage 2 and it will start generating upscaled & enhanced images

  • When using it, try playing with Linear CFG and Linear Stage2 Guidance

  • They make a difference

  • All generated images will be automatically saved inside outputs folder

  • I tried enabling Linear Stage2 Guidance, set it to 0.5, and I think it improved the output

  • As I said, try different parameters

  • If you also want to use LLaVA we have better LLaVA scripts which supports 4-bit, 8-bit and 16-bit loading 

  • So you can also install it with 34b and use it for captioning

  • LLaVA auto installer : https://www.patreon.com/posts/90744385

  • But even with simple captions it works great; you can just type one

  • Image comparison sliders to Test : https://imgsli.com/ , https://web-toolbox.dev/en/tools/image-compare-slider , https://www.diffchecker.com/image-compare/

  • Here is an example. All defaults except Linear Stage2 Guidance, which is enabled and set to 0.5

  • input image : input_1.png

Basic Caption : a trex dinosaur in jurassic park

Output : basic_caption.png

Better caption : A gigantic dinosaur with sharp teeth is standing in a lush green landscape with mountains in the background. The sky is partly cloudy, and the dinosaur appears to be in motion, possibly running or lunging forward. The setting seems to be reminiscent of prehistoric or Jurassic environments, likely intended to represent the natural habitat of such a creature.

Output : better_caption.png

Camel

A family photo




Comments

leem0nchu

Awesome.

AI Squad

Any plans of having a Colab or Kaggle version of it?

Sidharth

30 GB VRAM??? T_T , the RTX 6000 ada is out of stock too

GeekZolda

Man that sounds and looks amazing. Shame it's essentially unusable in its current state due to the VRAM requirement.

Furkan Gözükara

Use community cloud. I have shown it in this post : https://www.patreon.com/posts/how-to-deploy-on-97919576

puk

I got an error when starting the test.py file under Windows: Traceback (most recent call last): File "C:\Users\AI\SUPIR\SUPIR\test.py", line 3, in <module> from SUPIR.util import create_SUPIR_model, PIL2Tensor, Tensor2PIL, convert File "C:\Users\AI\SUPIR\SUPIR\SUPIR\util.py", line 7, in <module> from omegaconf import OmegaConf ModuleNotFoundError: No module named 'omegaconf'

Furkan Gözükara

hello, welcome. this installer was developed for the gradio py. I just added a Gradio starter file for Windows. please download V6 and run start_SUPIR_Windows.bat

Neil Rhodes

installing now! WIll let you know how it goes!

puk

New install with Version 7, following errors: Enter your choice (1-2): 1 / usage: gradio_demo.py [-h] [--ip IP] [--share SHARE] [--port PORT] [--no_llava] [--use_image_slider] [--log_history] gradio_demo.py: error: unrecognized arguments: --loading_half_params --use_tile_vae. Windows 11, RTX 4090

Neil Rhodes

first test Error A:\SUPIR_v7\SUPIR\venv\lib\site-packages\torch\nn\functional.py:5476: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.) attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)

Furkan Gözükara

hello. please do a reinstall. delete the older folder and install again. those arguments were added with the newest update. if you know how git works, you can do a git pull, activate the venv and install diffusers. after that it should work

Samuel

Can you control level of hallucinations?

Furkan Gözükara

This model is designed to prevent hallucinations. I don't know for sure yet, but it has almost zero hallucination

Neil Rhodes

how long should it take? this is still running at 308 seconds with zero percent progress

Furkan Gözükara

Follow the cmd. If your input image is high resolution it could be using shared VRAM. That reduces speed a lot. Also, did you check VRAM usage before starting the app?

puk

I did a new install, now I have the following error: in load_state_dict (SUPIR/util.py), checkpoint = torch.load(checkpoint_path, map_location=map_location) ... torch\serialization.py, line 1005, in load, with _open_zipfile_reader(opened_file) as opened_zipfile ... RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

AI Squad

WTF?! haha... Is there any demo available online? Do you know?

Neil Rhodes

1024x1024 image, trying to change to 2048x2048. Fresh reboot, nothing running except v7. CPU: 5950X, RAM: 64GB, VRAM: 12GB 3080

daniel mendoza

excellent, thanks for your work. Hey, I had the idea of using another model other than the base and it works even better; for the test I used the "juggernaut v9 sdxl" model

Furkan Gözükara

Hello. We just updated to V7. now works even with 12 GB. so you can locally install and use enjoy :) I plan to make kaggle once they fix FP16 bug.

Furkan Gözükara

I think the download of some files failed for some reason. can you try this? make a folder like C:/test_supir and do a new install there, and let me know

Furkan Gözükara

Ok, I think your VRAM is not sufficient for that resolution. Can you try 768x768? It probably used shared VRAM, which caused the huge slowdown. 768x768 should still yield a good upscale with 1x upscale. Try it and let me know please. I am also looking into further VRAM improvements

George Gostyshev

First of all - nice release. Second - is there a way to update without redownloading all models? Also I have some strange behaviour - 1x upscale works fine but 2x runs for ages without any OOMs - it's just computing and computing and never ending. Using a 4090 card

Furkan Gözükara

hello. for v7 to v8, just replace the old yaml file with the new one and download RunDiffusion/Juggernaut-XL-v9 into the models folder. you can also edit the downloader.py file and remove the models you already have. taking forever means that the system started using shared VRAM; therefore slightly reduce the input image resolution by resizing, like from 1024x1024 to 768x768, and watch shared VRAM

George Gostyshev

If it's possible - I'm asking for a future addition such as Batch Upscale - it would be a perfect fit for such technology)

puk

🙄 Next try with version 8: in safetensors\torch.py, load_file: with safe_open(filename, framework="pt", device=device) as f: ... SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

Erik

Works great! Thanks! Can you make a one-click install for this colorization tool too? https://github.com/piddnad/DDColor

Furkan Gözükara

hello. what is your python version? looks like a download failure. if you are in China, your internet is being blocked and some downloads are failing

Alex

I just checked, for the photo model Juggernaut-XL-v9 works worse than xl-base-1.0

Jonathan Streeter

I get this error on running: RuntimeError: Pretrained weights (models/open_clip_pytorch_model.bin) not found for model ViT-bigG-14. Available pretrained tags (['laion2b_s39b_b160k']).

Furkan Gözükara

hello. this happens when the download did not complete accurately. please check that downloader.py ran accurately during installation. if you show me the entire install cmd logs I can show you where the error is

John Dopamine

Works well, thanks! If you do any other updates it might be nice if a double click on the "Upscaled Images Output" opened the image full screen (a la A1111) instead of just cycling to the other image. Lava doesn't work for me at the moment ("LLaVA is not available. Please add text manually.") but that could be my firewall or something I need to fix on my end.

Furkan Gözükara

thanks. yes, I didn't add llava. you can use our installer, it works better and with less VRAM : https://www.patreon.com/posts/90744385

PM

I'm using the same GPU you used in your demo, but do you think using a newer one would make it any faster? or stop it from crashing on larger upscales? Or does it not depend so much on hardware?

Khoa Vo

I get "LLaVA is not available. Please add text manually." when clicking the LLaVA button. Is there anything else I have to do after using the 1-click installer?

John Dopamine

One other suggestion if you do an update: Please make an option to carry the caption file (if there is one) when processing the batch and/or an option to use the same filename for the output. I have folders with 0001.jpg, 0002.jpg, etc. with same-named .txt files containing their captions. When I batch through your app the output filenames are dates (etc.) and the caption is not carried over to the output folder with a similar name. If the names are left the same it'd be easy to use the old .txt caption files, or if the captions are carried over and renamed that would work also. Thanks for considering this should there be a next version.

Erik

Also, can you add face restoration to SUPIR? It's the only thing missing I think. If you add it, then it's perfect.

Kallamamran

We need to be able to change what checkpoint is used. Preferably via drop down and a configuration setting pointing to a checkpoint folder of choice.

Algi

It doesn't work. I'm on a 3060 and I've let an update run for 30 minutes when trying to upscale 512 to 1024 without changing any of the settings. I installed v10. Did a git pull but it says everything is up to date.

JackOppss

Hi, I keep getting errors when I try to run the Windows version, I got first a gradio and gradio_imagslider error but I installed them with pip, now I am getting this: Traceback (most recent call last): File "D:\AI\SUPIR\SUPIR\gradio_demo.py", line 7, in from SUPIR.util import HWC3, upscale_image, fix_resize, convert_dtype, Tensor2PIL File "D:\AI\SUPIR\SUPIR\SUPIR\util.py", line 4, in import cv2 ModuleNotFoundError: No module named 'cv2' Press any key to continue . . .

Furkan Gözükara

Hello. Can you check your task manager before starting the app and tell me how much VRAM your GPU using? If it starts using shared VRAM it will get 20 times slower

Furkan Gözükara

Hello. This happens when there were errors in windows installer. are you using Python 3.10? can you reinstall and show me entire output of the installer CMD?

Furkan Gözükara

well, they made it load everything from a yaml file, so I didn't spend time on it. but you are right, I will make a suggestion about it.

JackOppss

I am reinstalling it now, but I use python 3.11, is this the issue?

Furkan Gözükara

Yes, it is likely the issue. Please use 3.10. If you don't know how to have multiple Pythons, I showed it in detail in this video https://youtu.be/-NjNy7afOQ0?si=VP3Pyt8mEjHjSwVV

Furkan Gözükara

Yes I didn't add llava. Please use our own llava it is better : https://www.patreon.com/posts/sota-image-for-2-90744385

Furkan Gözükara

It depends on VRAM. So if you need higher res you need to get more VRAM. Also start with optimizations and get a machine with more RAM. When optimizations are enabled, RAM is also important. You can get a much bigger resolution

Erik

Upscale and then face restore. Since this model warps the faces a little after upscaling.

Furkan Gözükara

Hello, it is not stuck, it is working. If it is taking too long that means you did a big upscale, thus it is using shared VRAM. Please check shared VRAM usage in Task Manager. I contacted the original authors to add CPU offloading to reduce VRAM further

John Dopamine

Thanks! It'd be perfection if it could include the caption file w/ the same name in .txt format.

Zoltán István Bíró

What could I be doing wrong? I have a 12 GB RTX 3060, I installed SUPIR without any error message, but above upscale "1" the computer just works indefinitely (64 GB RAM) and the image does not get rendered. Currently I do not set any value, I just paste the original image file, click on "Stage 1 Run", then "Stage 2 Run". Do I need any more settings? Should I install LLaVA manually separately? What should I do to make it work?

Furkan Gözükara

ok you need to do these 2. restart computer. open task manager and see how much VRAM is being used. then while generating the image check if it is using shared vram or not and let me know please.

Erik

They are pretty similar. Maybe both? You do get better results if you blend the two. But if you can only do one, then I would go with GFPGAN.

AI Squad

It's incredible how good it is for an open-source model. In my opinion it was better than Magnific.

Steve Bruno (edited)


When I try to do a git pull it asks me to identify myself? Is there a good strategy for using your other SOTA with this for training? I was thinking maybe trying to use your auto crop and resize pictures to 512x512 and then running batch supir 1x and again batch at 1.5x and ending up at 768x768?

Zoltán István Bíró

Thank you! After restarting the computer, Utilisation: 98%, Dedicated GPU Memory: 11.5/12.0 GB, GPU Memory: 11.6/43.9 GB and requested Shared GPU memory: 0.0/31.9 GB

Alex

Check with a picture of the text and you'll see that Juggernaut made the wrong text.

Vlad Selotkin

What could be the options for working with text? Is there any way to preserve text in the image?

Furkan Gözükara

hello. the researchers are collecting examples of images that don't work. can you post there and ask their opinion, showing the image? here is the link : https://github.com/Fanghua-Yu/SUPIR/issues/42

Steve Bruno (edited)


I want to train models on images of people scraped from Reddit. I've been collecting them for a long time now and I keep pruning the bad ones; a lot of off-content stuff gets posted in the wrong subs. I noticed your gender classifier has a sort by sharpness? does that basically sort images by quality? because that would really save me a ton of time

Cyril F

Unfortunately you cannot upscale at very high resolution - with a 4090, starting with an image at 3000px, it takes 300s for an upscale at 1.2x & 50 steps - and it does not work with an upscale of 1.5x or 2x. The quality is pretty good and probably one of the best open-source upscalers, but it does not compare (yet) with Magnific.

Furkan Gözükara

i literally compared with magnific :) in every case SUPIR is better. if you need higher resolution you can rent a big GPU from any cloud such as RunPod and do big upscale there. by the way the tile size may also allow higher resolution. i will research this

Furkan Gözükara

for llava we have auto installer here but you won't be able to run both at the same time. it also uses a lot of GPU : https://www.patreon.com/posts/sota-image-for-2-90744385

Adan Aguilar Cisneros

I get this error after trying to start stage one or stage 2. Could not locate cudnn_ops_infer64_8.dll. Please make sure it is in your library path! then closes.

Furkan Gözükara

hello. did you get any error during install? probably you need to install cuda drivers. i have shown in this video : https://youtu.be/-NjNy7afOQ0

Adan Aguilar Cisneros

Hi, it happens after i add an image to SUPIR and click Stage 1 Run. I have Kohya, Stable Diffusion, InstaID, OneTrainer and i dont get this error with any of them. Do i have to install it manually within the SUPIR_v12 folder?

Furkan Gözükara

ok in that case my guess is that you installed during hugging face was down. can you try to install into a fresh folder and send me entire install cmd logs? monstermmorpg@gmail.com and you are using python 3.10.x right?

Diggy Dre

I am getting this when trying to run a stage 2: py:5476: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.) attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)

Raf Stahelin

It is absolutely essential to allow the script to BATCH SAVE image filenames SAME AS SOURCE. I have 500 images in my dataset and I need to keep the filename intact in my pipeline. Please please don't change the filename!!!

Zoltán István Bíró

Yes, I think so too, everything is set up correctly, and yet it can only do 1 upscale, the rest fail. It would help me - and maybe others - if you could make a much more detailed, step-by-step guide to USING it. Click here, set these parameters, etc.

sadelcri

I can't find the SUPIR_v0.yaml file, where is it supposed to be?

Ant-2014

Thanks for your Herculean labor and regular updates of your projects. I want it to give you more strength and motivation for all your future endeavors! I would like to ask you - if it is possible to insert in SUPIR 1 Click Windows in the program interface Menu a button or window with a drop-down LIST of my different models from PC folder for testing and comparing: Juggernaut, Realvision, SD 1.5 and others from the list (as it is in Stable automatic1111.) :)

Erik

Also, it would be great if you could add an option for fixed seeds between generations. Thanks!

Cyril F

There's a difference between upscaling little images to have fun with and very hi-res images for billboards and prints (8K res and up) - you should do a test with an image starting at 4000px and upscale it 2x. Yes we can rent a big GPU, set it up, download all the 30+ GB of files... and do it there, for a monthly cost that would be similar to Magnific, where you slide in an image, wait for a minute and have your beautifully upscaled image ready to be retouched. If it is for fun, yes why not, but professionally, with big client expectations and short deadlines, you just pay for the convenience...

Furkan Gözükara

hello, it is a warning but it still works. if it is taking too much time that means you are using shared VRAM, therefore you need to reduce the upscale resolution

daniel mendoza

On v14 stage 1 doesn't work. AttributeError: 'str' object has no attribute 'dtype'

Furkan Gözükara

I plan to make another tutorial after I added some more features. currently adding new features requested by supporters.

Nobodys Hero

Still no image slider ,😉

daniel mendoza

I installed it from scratch, deleting everything, and it still gives an error. I went back to the version I previously had, V12. Now it works perfectly

John Dopamine

Batch isn't working for me after the update due to picking up the filenames(?). Error processing 2024-02-29_01-59-11-training-sample-0-0-0.png: [WinError 3] The system cannot find the path specified: '' Error processing 2024-02-29_02-19-11-training-sample-757-0-757.png: [WinError 3] The system cannot find the path specified: '' Error processing 2024-02-29_02-39-12-training-sample-1458-0-1458.png: [WinError 3] The system cannot find the path specified: '' Error processing 2024-02-29_02-59-11-training-sample-2204-0-2204.png: [WinError 3] The system cannot find the path specified: '' Error processing 2024-02-29_03-19-12-training-sample-2896-0-2896.png: [WinError 3] The system cannot find the path specified: '' Error processing 2024-02-29_03-39-13-training-sample-3585-0-3585.png: [WinError 3] The system cannot find the path specified: '' Error processing 2024-02-29_03-59-13-training-sample-4334-1-583.png: [WinError 3] The system cannot find the path specified: '' Error processing 2024-02-29_04-19-11-training-sample-5076-1-1325.png: [WinError 3] The system cannot find the path specified: '' etc

John Dopamine

Looks like the issue is that I didn't specify an output path. I don't think leaving it blank so it outputs to the default output folder works anymore. If I have any further issues I'll update.

Furkan Gözükara

just fixed with v15. please close app, do a git pull or run update, start again and it should work now. ty

Furkan Gözükara

just fixed with v15. please close app, do a git pull or run update, start again and it should work now. ty

Onkel Fraggel

RuntimeError: Current CUDA Device does not support bfloat16. Please switch dtype to float16. So I had to change Auto-Encoder Data Type to fp32. Is this the right way to do it? I'm running on a 2080 Ti 11GB VRAM card. How do I optimize the settings to get upscaling faster?

Pedro Serapio

No luck on Windows 11; the installation ran without any errors. When starting, it always complains that gradio is not present.

Furkan Gözükara

that means you had an error during installation. what is your python version? can you reinstall into a fresh folder and show me the entire cmd output? you can email the logs to monstermmorpg@gmail.com

Furkan Gözükara

yes, you can't use it on GPUs that don't support bfloat right now. we reported this error. please also reply there for a quicker fix : https://github.com/Fanghua-Yu/SUPIR/issues/33 11 GB is already low VRAM, so your best shot is getting this error fixed.

Betinho Formado

this new version just broke my working version; sadly it's not running anymore. I will try to re-install again. it would be good for this installer to recognise existing models so it doesn't download them when they already exist in the folder, I think that would save time.

Neven Krcmarek

Thank you for this easy installation. I have a 3090 and when I do Stage 2 for a 2x upscale of a 2048x2560px image, it eats up the whole 24 GB of VRAM and it takes very long to upscale. Is this something that will get faster? I would very much like to do 8k res upscaling but this is taking really long just for 4k per image. Thank you.

Furkan Gözükara

Thank you so much. Yes, the code is very VRAM demanding and they didn't implement sharding across multiple GPUs. So in your case it starts using shared VRAM instead of a second GPU. This is also NVIDIA's fault; NVIDIA wants you to buy pro cards. But I am talking with the authors to get CPU offloading implemented to further reduce VRAM usage.

Furkan Gözükara

You can do this: delete downloader.py so it won't download models. But make sure to copy the models back later. By the way, can you give me more info on how it broke? I should fix it if there is an error

Betinho Formado

completely broke. I reinstalled v12 and v13 on my computer and they don't run anymore. I tried installing both versions from scratch and, like before, it stops when it reads the "SUPIR_v0.yaml". Then no extra message, just saying press any key, and it closes the cmd and doesn't give me any more information about the URL. I'm back to square one haha. I need to avoid those updates.

Andre Bopp

A suggestion: I found out that my graphics card can generate an output of at most 1344 x 1344 pixels without generating an OOM. To use this information, I had to calculate the upscale factor for each input image depending on the resolution. Would it be possible to add a field with "upscale to ... max pixel/side" so that the upscale factor for each image is automatically calculated in the background? So the longer side of the image is taken and scaled up to a length of 1344 pixels, in my case.

Neven Krcmarek

Just for the reference. The upscale from 2048x2560px to 4096x5120px took 5 hours. It was an amazing result so kudos to the programmers. If it helps here is the image, maybe you can use it for something, it's all fine by me. https://www.dropbox.com/scl/fo/twrnmvz7iwzfm2p309hgk/h?rlkey=cb87jh3ag8ijoeu3lpxvvag31&dl=0

Attila Karácsonyi

Just a tip: use Everything from voidtools, it is an instant search tool and you will find "everything"!

Attila Karácsonyi

I just joined and would like to express my gratitude for the extreme effort You are putting into educating all of us!! ❤️

Attila Karácsonyi

I had the same issue then checked my SysEnv and fixed the path for python, and voilà :) it loaded the UI. I am about to try it for the first time NOW! :)

Furkan Gözükara

the thing is, I am not sure exactly how the output is calculated. for example if I give 1024x1024 and upscale 1x I get a 1024x1024 output; then I give 512x512 and do 1x upscale and I get a 1024x1024 output; then I give 256x256 and do 1x upscale and I get a 1024x1024 output

Furkan Gözükara

dmn :D you can rent a big GPU on RunPod or anywhere and get it done quickly. by the way, the upscale is mind blowing

Hung Do

a little confused about Param Setting and Model Selection. If I choose Fidelity in Param Setting, shouldn't Model Selection auto switch to v0F.ckpt? Or are there 4 possible combinations?

AI Squad

Still dreaming with an Update where you will say that a Kaggle notebook is available \o/ 😁

Marly Rodrigues

I'm using it on Windows 11 and it doesn't work. It loads phase 1 and when it goes to phase 2, it keeps loading the preloading animation and doesn't finish. :/

Marly Rodrigues

My computer has 128gb ram with a 12gb video card. I was trying to upscale to 4x but was unsuccessful. I formatted the computer and will try again. It would be interesting if you published a new version or made available an updated version of 1click install

Marly Rodrigues

https://i.imgur.com/DMrYipI.jpeg As you can see, I formatted the computer, updated the drivers and it still didn't work for me again. I tried decreasing the upscale to 2x and it didn't work either (I followed exactly the recommendations in your video, clicking on 1click install) I tried a new test, putting it on the windows c drive (Because before I tried on an external ssd and it didn't work either)

Marly Rodrigues

in stage 2 it loads infinitely, I check the command prompt and it loads the processing of the queue several times and does not give me the result. Nor does the progress bar appear as it does in the video tutorial. can you help me?

Dmitry

Not a man, you're a machine) I can't imagine how you manage to do so much in so many different directions. Just thank you for your hard work

Gary

Good stuff. thanks! and I paid ;D question: running a 4090 .. i uploaded a photo of a cowboy wearing a detailed shirt... the SUPIR image comes out with his face looking better but his clothes all "smoothed" out. Suggestions? Thanks

Meito

hi, is there any way to roll back to SUPIR v16? I'm getting worse results with the new one and it's taking 3x longer for me

daniel mendoza

It happened to me with ocean waves, the texture was incorrect until I specified it in the prompt. Put the type of texture of the clothing fabric in the prompt and it will come out as it should.

Furkan Gözükara

interesting. are you using the model we defined or another model? did you define a good prompt? can you share image with me? monstermmorpg@gmail.com

Furkan Gözükara

Hi, yes you can, but can you message me from discord? if there is an issue I would like to fix it because we are adding all new features, and I need some details. For example, we added Linear CFG by default; did you turn it off and test? to return to v16, open a cmd inside the supir folder and do this: git checkout 7702b200a082f3173ccaf6d31d457a3b3ccad704

JamZam WamBam

i have python 3.10.11 installed but it won't install.

JamZam WamBam

I get this error : ERROR: Could not install packages due to an OSError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Max retries exceeded with url: /repos/.../triton-2.1.0-cp310-cp310-win_amd64.whl (Caused by ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)))

Furkan Gözükara

hello. this is related to your internet connection. it can be due to your internet connection provider or your anti virus. i would restart computer and try again. reset your modem too

J (edited)

Nice stuff. Few questions: What is the point of Stage 1? It looks like it downsizes the image and smoothens it. Also, it would be great if you could implement a model changer within the gradio itself.

J

Ah makes sense, does this mean you should skip straight to stage 2 if you aren't using llava?

Furkan Gözükara

yes 100%. we also haven't integrated llava as it is inferior to our own llava. https://www.patreon.com/posts/sota-image-for-2-90744385

J

Just some more questions if you don't mind: when upscale is set to 1, does it still upscale the image size? shouldn't this be the same scale as the native image? What's the difference between the Param Settings? Is the Prompt just an addition onto the Default Positive Prompt?

Furkan Gözükara

1: yes, minimum 1024 px on the lower dimension side. 2: please read this post from top to bottom, I added a lot of info. 3: yes, but it makes a difference, use it

J

Thanks, is it possible to lower that minimum by any chance?

Furkan Gözükara

yes, you can edit the gradio app and change input_image = upscale_image(input_image, upscale, unit_resolution=32, min_size=1024) to make min_size something like 512. but you can also downscale your input image resolution with any app like paint.net, which is easier

Gary

Suggestion: Would be great to have a option to create a ffmpeg video of the slider.. smooth movement from original to SUPIR'd

J

Thanks, is it possible to implement a change like this https://github.com/kijai/ComfyUI-SUPIR/commit/c01e040f5538fc3bceeff1b79d5d591a75aa838e as well so the original image fits the exact dimensions of the original input

J

Sorry no, what I meant was sometimes if you feed it an image it seems to stretch it a little bit or change the dimensions a little bit even with the low "min_size", You can fix it in another image editing software but it would be nice to automatically be fixed. So if you have an image that is 500x250 on 1x scale it should just be 500x250 and nothing else. Also I don't know if its possible but this plus ControlNet would be powerful.

Gary

that solved it for me as well. Lol. I keep getting tempted to forget how AI works.. its funny/weird remembering to prompt

Gary

hahah... love that you did that. hahaha! Awesome. Sidenote: Listen, I'm not trying to blow smoke up anything, but you did a great job with this. I sent some photos to my family and they were blown away. I hope you make a lot of $$$ cause it was way worth the $5

Furkan Gözükara

thank you so much. yes, software like this normally costs hundreds of dollars and nothing is as good as this :) If you upgrade your Patreon level or buy me a coffee I appreciate it very much : https://www.buymeacoffee.com/drfurkan

Gary

Save to video feedback: It's hit or miss.. i've had a perfectly square photo work and then I've had a height taller than width fail. Here's the latest error (I run a 4090): RuntimeError: CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Dead mau5

Hi could you please help me resolve this issue, it happens after clicking on stage 2 button: File "X:\Upscaling\SUPIR\venv\lib\site-packages\gradio\queueing.py", line 495, in call_prediction output = await route_utils.call_process_api( File "X:\Upscaling\SUPIR\venv\lib\site-packages\gradio\route_utils.py", line 235, in call_process_api output = await app.get_blocks().process_api( File "X:\Upscaling\SUPIR\venv\lib\site-packages\gradio\blocks.py", line 1627, in process_api result = await self.call_function( File "X:\Upscaling\SUPIR\venv\lib\site-packages\gradio\blocks.py", line 1173, in call_function prediction = await anyio.to_thread.run_sync( File "X:\Upscaling\SUPIR\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync return await get_async_backend().run_sync_in_worker_thread( File "X:\Upscaling\SUPIR\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread return await future File "X:\Upscaling\SUPIR\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run result = context.run(func, *args) File "X:\Upscaling\SUPIR\venv\lib\site-packages\gradio\utils.py", line 690, in wrapper response = f(*args, **kwargs) File "X:\Upscaling\SUPIR\gradio_demo.py", line 313, in stage2_process load_qf() File "X:\Upscaling\SUPIR\gradio_demo.py", line 112, in load_qf ckpt_Q, ckpt_F = load_QF_ckpt('options/SUPIR_v0.yaml') File "X:\Upscaling\SUPIR\SUPIR\util.py", line 75, in load_QF_ckpt ckpt_F = torch.load(config.SUPIR_CKPT_F, map_location=device) File "X:\Upscaling\SUPIR\venv\lib\site-packages\torch\serialization.py", line 1005, in load with _open_zipfile_reader(opened_file) as opened_zipfile: File "X:\Upscaling\SUPIR\venv\lib\site-packages\torch\serialization.py", line 457, in __init__ super().__init__(torch._C.PyTorchFileReader(name_or_buffer)) RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

Furkan Gözükara

I think it is related to your computer; try restarting it. I plan to add a batch extract-frames feature and a convert-frames-back-into-video feature as well.

Furkan Gözükara

Yes, I can make that possible by removing the upscale-to-minimum step, but SDXL works best at 1024 px; I think that is why it upscales to 1024 minimum.

Tiger

Thank you for the great work. Much appreciated.

Cyril F

It seems there's a bug with Face Restore. A few times it restored the face properly but did not upscale the rest of the image, or it upscaled everything but left a grid (or artifacts) on the face. See examples here >> https://i.gyazo.com/45a67262f1dea39b9a074d7fcb2a9ab2.jpg

he da

ModuleNotFoundError: No module named 'facexlib'

Ivan Stoyneshki

On V22 I get no error, but in the shell I see "Press any key to continue..." and that's all; it shuts off. This happens every time just after one upscale; the second image crashes every time. Tried numerous times. Also the startup process is really slow: the interface shows up fast, but after that it's very slow. I run a 4060 Ti 16GB with 32 GB of system RAM.

Furkan Gözükara

It loads fast because it doesn't load the models at startup; it starts loading them after you click. Can you message me on Discord? I can connect via AnyDesk and check it out.

masharegister

Hello/ How can i fix this mistake Traceback (most recent call last): File "C:\Supir0603\SUPIR\venv\lib\site-packages\gradio\queueing.py", line 495, in call_prediction output = await route_utils.call_process_api( File "C:\Supir0603\SUPIR\venv\lib\site-packages\gradio\route_utils.py", line 235, in call_process_api output = await app.get_blocks().process_api( File "C:\Supir0603\SUPIR\venv\lib\site-packages\gradio\blocks.py", line 1627, in process_api result = await self.call_function( File "C:\Supir0603\SUPIR\venv\lib\site-packages\gradio\blocks.py", line 1173, in call_function prediction = await anyio.to_thread.run_sync( File "C:\Supir0603\SUPIR\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync return await get_async_backend().run_sync_in_worker_thread( File "C:\Supir0603\SUPIR\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread return await future File "C:\Supir0603\SUPIR\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run result = context.run(func, *args) File "C:\Supir0603\SUPIR\venv\lib\site-packages\gradio\utils.py", line 690, in wrapper response = f(*args, **kwargs) File "C:\Supir0603\SUPIR\gradio_demo.py", line 446, in stage2_process results = [face_helper.paste_faces_to_input_image(upsample_img=_bg[0])] File "C:\Supir0603\SUPIR\SUPIR\utils\face_restoration_helper.py", line 374, in paste_faces_to_input_image assert len(self.restored_faces) == len( AssertionError: length of restored_faces and affine_matrices are different.

Furkan Gözükara

Hello. Face restore is sadly still not perfect and there are bugs. If you open an issue here with the image you tried, it would help a lot: https://github.com/Fanghua-Yu/SUPIR/issues

masharegister

Thanks for the advice. The problem was solved by completely reinstalling the application; this error is gone. I've updated the video card driver too! Thank you very much.

Mikael Svenson

A minor thing: you should rename the option "Start With White Theme" to "Start With Light Theme". It matches "Dark" better and is the correct term :)

VRMOTION

I have installed V22 but I can't press Stage 2, only Stage 1; the Stage 2 button is missing.

VRMOTION

After I press the button, the result ends with an error. assert len(self.restored_faces) == len( AssertionError: length of restored_faces and affine_matrices are different.

Davit Sharian

Can I change the checkpoint, for example to a RealVis Lightning model, and run it at 4-5 steps with a lower CFG scale?

Furkan Gözükara

Some people did it and reported that it works. Please let me know if you try it.

hacbachvotinh

In version 26, if we can restore the video RAM (VRAM) back to its original state after using LLAVA to process information, then running Supir won't occupy more than 12GB of video memory and the processing speed won't significantly slow down.

masharegister

How can I install version 14 or 16 of the program and stop updating? It worked so well for me.

Flambo

Yes, this is possible, but you must modify SUPIR\gradio_demo.py and change minimum=20 to minimum=4 to allow fewer than 20 steps to be picked.

daniel mendoza

Hello, I have a suggestion. You could let the upscale amount be a target resolution instead of a factor, using this formula: for example, the input image is 1035x1035 and my target resolution is 2560x2560. Divide the target resolution by the input resolution and that is the scale value: 2560/1035 = 2.47. This way the scale always hits a target output resolution, and you don't have to adjust it depending on the pixel size of the input image. I'm doing this manually, but I would like the app to have that feature and I would appreciate it if you included it. I understand that if the image is not a 1:1 aspect ratio it becomes complicated, but the formula would be applied to only one of the dimensions (width or height). For example: 1024x768 is the input resolution and I want to expand it to 1560 wide, then 1560/1024 = 1.52 is the upscale value.

hacbachvotinh

I tested it and found that the realvisxlV40_XL_V40_Bakedvae.safetensors model produces good quality images similar to Juggernaut-XL_v9_RunDiffusionPhoto_v2.

Нона Ангелова

I just subscribed to your Patreon and downloaded the latest version (I'm on Windows 11 and have all the requirements). However, when I try to start it, I get this error: "H:\A.I. Software\Supir\SUPIR\gradio_demo.py:1046: SyntaxWarning: invalid escape sequence '\S' placeholder="R:\SUPIR video\comparison_images") H:\A.I. Software\Supir\SUPIR\gradio_demo.py:1049: SyntaxWarning: invalid escape sequence '\S' placeholder="R:\SUPIR video\comparison_images\outputs") Traceback (most recent call last): File "H:\A.I. Software\Supir\SUPIR\gradio_demo.py", line 11, in import einops ModuleNotFoundError: No module named 'einops'" At the same time, I have no issue running SUPIR from Pinokio.

Meito

Hi Dr, can you please share the git checkout for v19/v20? These seem to be the best versions for me. I'm also doing back-testing. The latest one doesn't work for me; I'm trying to figure out the issue as well. Thanks.

puk

Hi Furkan, I tested version 27. When I apply face restoration, the restored face is not fused with the upscaled image; also the output image is blurry and looks like the output of Stage 1.

Furkan Gözükara

Hello. If your Python version is 3.10.11, then your error is caused by reusing the same folder name. Please install into h:\supir_new and it will hopefully work.

Furkan Gözükara

When you set 2.47, does it work exactly? I think we can add a target-resolution option, but what if the aspect ratio of the user input doesn't match? How should it behave?

C. Jonas

Hi mate, when using LLaVA, the VRAM is not cleaned afterwards, before Stage 2? Without LLaVA it uses around 10GB; with LLaVA it takes > 14GB, see: [Tiled VAE]: Done in 51.395s, max VRAM alloc 14153.721 MB. I have a 12GB 3060, so Stage 2 then takes forever when LLaVA is activated. Can you implement a VRAM wipe before Stage 2, or write the LLaVA result to the output folder before Stage 2? Then we could use it manually as a prompt if we have to restart SUPIR (without LLaVA) because of VRAM.

Нона Ангелова

I tried it but without any luck. The build seems wrong, because it tries to find something structured like on your PC, for example "R:\SUPIR video\comparison_images\outputs" - I don't have an "R:" drive on my PC, I guess that is yours.

VRMOTION

When I click on LLaVA, nothing happens and no caption is generated. I did everything as in the lesson. Should LLaVA work?

Furkan Gözükara

In the code we don't have anything like that except as a placeholder to show you how to set your paths correctly. Can you please send me screenshots on Discord? An error screenshot.

hacbachvotinh

A pretty good experience: I exited Photoshop before running SUPIR. Although it only saved 0.8GB of VRAM, SUPIR runs much faster because it stays under the 12GB of VRAM on the 3080 Ti.

kenishii

Hey there, I don't know what I am doing wrong, but it always crashes: Processing images (Stage 2) Building a Downsample layer with 2 dims. --> settings are: in-chn: 320, out-chn: 320, kernel-size: 3, stride: 2, padding: 1 Building a Downsample layer with 2 dims. --> settings are: in-chn: 640, out-chn: 640, kernel-size: 3, stride: 2, padding: 1 Drücken Sie eine beliebige Taste . . . (press any key...)

Diego Sienra

Hi, great update!! I have a problem: I have an RTX 3090 and I was using non-tiled mode, and I seem to get better results in some cases, for example more natural hair, running the whole image in one pass vs. tiled. The problem with the new version is that I can run it just once; on the second generation I get out of memory. Is there a way to free the VRAM after each generation? Maybe that could fix the issue. Thanks.

Diego Sienra

Another question: because the models are SDXL models with a native resolution of 1024x1024, is it possible to get better results by tiling at 1024x1024 rather than 512x512? An additional unrelated question: what is BG restoration?

Davit Sharian

V22 works for me, but when starting the V37 getting this error message, my gpu is RTX3090 WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for: PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cpu) Python 3.11.6 (you have 3.11.5) Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers) Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details Traceback (most recent call last): File "C:\Users\davsh\Documents\SUPIR_v37\SUPIR\gradio_demo.py", line 69, in raise ValueError('Currently support CUDA only.') ValueError: Currently support CUDA only.

Furkan Gözükara

Please install Python 3.10.11 and reinstall SUPIR; it will hopefully get fixed: https://youtu.be/-NjNy7afOQ0

Furkan Gözükara

I was never able to run without tiles on an RTX 3090 :D I don't know if we can reduce VRAM. Are you using LLaVA?

Davit Sharian

It worked, the UI loaded, but when I tried to upscale, it gives this message: Repository Not Found for url: https://huggingface.co/api/models/models/clip-vit-large-patch14/revision/main. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password.

Furkan Gözükara

The reason is Python 3.11, because on Python 3.10 we don't get that error. I will test a fresh install now to verify this.

Diego Sienra

Yes, I'm using LLaVA. It works for one generation starting with "Start As Half Params" and "Start Without Tiled VAE". Without LLaVA it works fine. I tested it in previous versions, but if I set larger resolutions it starts to use GPU shared memory and gets very slow. I have 48 GB RAM and 24 GB VRAM.

Diego Sienra

Use both VRAM optimizations and it uses around 12 GB VRAM GPU If you have over 30 GB VRAM, you can start both full Params and no Tiled-VAE modify this file and add --share if you want Gradio share Please select an option: 1. Start As Half Params - Uses Lesser VRAM Preferred 2. Start As Full Params Enter your choice (1-2): 1 Please select an option: 1. Start Using Tiled VAE - Uses Lesser VRAM Preferred 2. Start Without Tiled VAE Enter your choice (1-2): 2 Please select an option: 1. Start With Light Theme 2. Start With Dark Theme Enter your choice (1-2): 2 Running on local URL: http://127.0.0.1:7860 To create a public link, set `share=True` in `launch()`. Processing LLaVA Processing images (Stage 1) Building a Downsample layer with 2 dims. --> settings are: in-chn: 320, out-chn: 320, kernel-size: 3, stride: 2, padding: 1 Building a Downsample layer with 2 dims. --> settings are: in-chn: 640, out-chn: 640, kernel-size: 3, stride: 2, padding: 1 making attention of type 'vanilla-xformers' with 512 in_channels building MemoryEfficientAttnBlock with 512 in_channels... Working with z of shape (1, 4, 32, 32) = 4096 dimensions. making attention of type 'vanilla-xformers' with 512 in_channels building MemoryEfficientAttnBlock with 512 in_channels... Building a Downsample layer with 2 dims. --> settings are: in-chn: 320, out-chn: 320, kernel-size: 3, stride: 2, padding: 1 Building a Downsample layer with 2 dims. --> settings are: in-chn: 640, out-chn: 640, kernel-size: 3, stride: 2, padding: 1 Loaded model config from [options/SUPIR_v0.yaml] and moved to cpu Loaded state_dict from [C:\Users\Diego\Downloads\SUPIR_v36\SUPIR\models/checkpoints\Juggernaut-XL_v9_RunDiffusionPhoto_v2.safetensors] Loaded state_dict from [models/v0Q.ckpt] You are using a model of type llava to instantiate a model of type llava_supir. This is not supported for all configurations of models and can yield errors. Loading vision tower: openai/clip-vit-large-patch14-336 Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:17<00:00, 8.85s/it] LLaVA loaded. LLaVA moved to GPU. C:\Users\Diego\Downloads\SUPIR_v36\SUPIR\venv\lib\site-packages\transformers\models\llama\modeling_llama.py:671: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.) attn_output = torch.nn.functional.scaled_dot_product_attention( Processing images (Stage 2) Seed set to 475310794 Image 1/1 upscale completed. Last upscale completed in 26.83 seconds Label changed: Image Upscaling Completed: processed 1 images at in 30.23 seconds #2 Image Upscaling Completed: processed 1 images at in 30.23 seconds #2 Updating Single Output Image

Diego Sienra

This is my log. My GPU usage is 21.3GB but it works, including LLaVA. I tried again: it works at up to 1024x1024 multiple times, but fails at higher resolutions or other aspect ratios.

kenishii

Hi, thank you very much for your fast reply. It is 3.10.11. but today in the morning I tried it again and now it works. I have no clue what the reason was. sometimes just going to sleep helps. :-)

Casper Smit

Could you please give me a short walkthrough? I can't keep up with the updates.

So Sha

Unfortunately, after the first run my system crashed! (RTX 4090 & 72 GB RAM) (Enabled "Apply LLaVa" & "Apply Stage 1" & "Apply Stage 2") I restarted and ran it again without activating LLaVA and it worked. One more thing: how can we add more details to the images and let the AI do some imagination, like Magnific?

Furkan Gözükara

If the computer blue-screened or restarted, it is likely due to RAM/CPU voltages and speeds; it happened to me as well. To add more details you can try increasing the Text Guidance Scale.

Ec Jep

I also have 4090 with 24gb and using s1, s2 and llava so far no issues. Thx for releasing this image processor! It is really fixing blurry pics with excellent quality

masharegister

Good day. Why did the SUPIR application install to C:\Windows\System32 instead of the folder where I ran the installation?

Diggy Dre

I am getting this error: Repository Not Found for url: https://huggingface.co/api/models/models/clip-vit-large-patch14/revision/main. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated.

Furkan Gözükara

This happens when the model downloads were not completed. Please complete them; you can either do a fresh install or execute download_models.py.

Siva

Is it possible to make it work on 3080 10GB?

hacbachvotinh

You are a person who is extremely dedicated to the product, constantly updating and very responsive to user needs. I will follow and support you for a long time.

C. Jonas

Hi Furkan, the file "update_windows_requirements.bat" is new, do we need to run it regularly?

C. Jonas

V41 gives some weird ghosting effects on the created images. It looks like the face can be seen twice, but very faint and only parts of it, shifted to the left or right of the actual face. Default settings on v41 with JuggernautXL. I switched back to v38 for now.

Scott

This is such a great tool, though it takes me a very long time to run anything over about 2,000 x 2,000 pixels (and even that takes about 2-5 minutes to enlarge on an RTX 4060 Ti 16GB). If I have different-sized images in the batch folder and some upscale too far, it takes a long time. It would be great if the UI could also target upscaling by size or megapixel count. Some graphics programs have something like "resize so the longest side is x pixels" or even "target x megapixels", and choose the right scale while keeping the aspect ratio. That would make it a bit more predictable when running batches. Also, is there a rule about which photos need Stage 1 run first? Is it mostly just tiny thumbnails that need it?

VRMOTION

ERROR: Could not install packages due to an OSError: [WinError 5] Отказано в доступе (Access is denied): 'F:\\TUTORIAL_AI\\SUPIR\\SUP\\SUPIR_v41\\SUPIR\\venv\\Lib\\site-packages\\~orch\\lib\\asmjit.dll' Check the permissions.

Furkan Gözükara

Hello. Try to install exactly into this folder: C:\SUPIR_v41. Also, what is your Python version? Is it 3.10.11?

Ivan Stoyneshki

I get this screen and no output

Furkan Gözükara

It looks like it worked. Where did you set your output parameter? Which disk, and how? Remove the outputs_folder parameter and try the default folder.

Ivan Stoyneshki

--outputs_folder_button "True" --outputs_folder "F:\Supir-Output" - SUPIR is installed on F

Ivan Stoyneshki

Started with the defaults, same thing: the right window stays empty and there are no images in the output folders.

Furkan Gözükara

Are you using the latest version? I have no such issues. What do you see in CMD? Can you email me? monstermmorpg@gmail.com. Send me the entire CMD window output after upscaling an image.

Ivan Stoyneshki

Latest version, clean install, restarted the PC to try, and the same: it says it's complete but there is no image. I don't know your email, so here is the link to the CMD window log: https://drive.google.com/file/d/1L8MYzFlmOPrvIRkpOfIZXvcARMzGEQm1/view?usp=sharing

Furkan Gözükara

No errors. Can you try different, simple images? No face restore. Just confirm whether all images are failing. Also message me on Discord; the only remaining way is to connect to your computer and debug.

Andre Bopp

I don't know why, but my results are always more blurry if I'm using face restoration (other settings are default). Am I the only one with this issue? And does the resolution of the input image have an influence on this process? Nevertheless, thanks for the great work and commitment.

Furkan Gözükara

Thank you so much. Sadly, face restore is far from perfect; it is hit or miss. On some images it works perfectly and on some it fails.

crow

Ran the installer and everything went well. Loaded up the UI and tried to run an upscale, but my machine keeps hard-crashing on Stage 2 while loading the model SUPIR_v0_tiled.yaml. Any help would be great.

Furkan Gözükara

Yes, they look great. To debug your issues I need these 2 pieces of info, or to connect to your PC: 1) a fresh install and the entire log of your CMD; 2) the entire log of your CMD when you do inference / upscale. Can you email them to me: monstermmorpg@gmail.com

HFS123

I have it installed and running, but the final generated images are all just textured gray. I installed it slightly differently since I already had the models on my machine; I made sure all the paths are correct in the scripts as well as the config files. So I don't know what is missing.

Furkan Gözükara

Sadly, no idea. I'd try a fresh install and try again. Can you also post an example of the textured gray output?

HFS123

I've added the screenshot to Snipboard: https://snipboard.io/SZMUcL.jpg. You can see the grey/brown image on the right side of the image slider. ** Update: it's okay, it's working now. It was just an issue with one of my paths! All fine.

HFS123

all working now

HFS123

So I have narrowed down the problem. My SUPIR models were originally named SUPIR-v0F.ckpt and SUPIR-v0Q.ckpt. Your package renames them to v0F.ckpt and v0Q.ckpt, which is fine. But if I choose to keep the original names (SUPIR-v0F.ckpt and SUPIR-v0Q.ckpt) and make sure the paths are correct in the yaml files, that is when I get gray/brown images. I searched all the files for references to these names, thinking they might be hardcoded somewhere, but I can't find them. Can you explain? Is there somewhere that insists they must be named v0F.ckpt and v0Q.ckpt?

Furkan Gözükara

We develop with multiple people, so yes, we probably have some of them hardcoded. You can search the entire code to check.

Additional Contributions

File "D:\SUR\SUPIR\gradio_demo.py", line 3, in import gradio as gr ModuleNotFoundError: No module named 'gradio'

Additional Contributions

Dr., I reinstalled and it installed successfully. China's Tsinghua mirror was not working well; I disabled it and now it works, thanks!

Additional Contributions

Doc, why don't you combine the captioning model with GPT or Claude? That could make it more profitable for you.

Siva

I think with this new version it should work on mine.

masato ogawa

I did a v44 install on Windows 10; the GPU is a 3080 Ti 12GB. The installation completed successfully, but when I started the program, dragged and dropped the image into the input image, set the upscale to 2, and clicked Process Single, I got the message "An exception occurred. Please try again." The process does not proceed.

Marko Radosavljevic

An exception occurred: stat: path should be string, bytes, os.PathLike or integer, not NoneType at Traceback (most recent call last): File "P:\SUPIR_V44\SUPIR\gradio_demo.py", line 744, in start_single_process _, result = batch_process(img_data, **values_dict) File "P:\SUPIR_V44\SUPIR\gradio_demo.py", line 1054, in batch_process ckpt_select = get_ckpt_path(ckpt_select) File "P:\SUPIR_V44\SUPIR\gradio_demo.py", line 249, in get_ckpt_path if os.path.exists(ckpt_path): File "C:\Users\TBR\AppData\Local\Programs\Python\Python310\lib\genericpath.py", line 19, in exists os.stat(path) TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

Marko Radosavljevic

Tried everything, also clean install... always the same error... BTW, good job man, thanks :)

Christophe Rolland

Same error: An exception occurred: stat: path should be string, bytes, os.PathLike or integer, not NoneType at Traceback (most recent call last): File "C:\Windows\System32\SUPIR\gradio_demo.py", line 744, in start_single_process _, result = batch_process(img_data, **values_dict) File "C:\Windows\System32\SUPIR\gradio_demo.py", line 1054, in batch_process ckpt_select = get_ckpt_path(ckpt_select) File "C:\Windows\System32\SUPIR\gradio_demo.py", line 249, in get_ckpt_path if os.path.exists(ckpt_path): File "C:\Program Files\Python310\lib\genericpath.py", line 19, in exists os.stat(path) TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType. Everything worked fine before the v44 installation. Tried a clean install too but no success :[

Furkan Gözükara

Hello. Did you provide the --ckpt argument? Did you get any error while installing? I think your models were not downloaded. What do you see in the models folder and in the models/checkpoints folder?

Marko Radosavljevic

The models are in the folder, but invisible in the model selector in Gradio... No errors during install, no arguments added... I restored the previous version and it works like a charm :) I just wanted to inform you about the v44 error.

Furkan Gözükara

I just did a fresh install and had literally no issues. If you contact me on Discord I can use AnyDesk and check your computer.

Christophe Rolland

Hi, after another reinstall, models were downloaded. All is right now, thanks !

masato ogawa

To create a public link, set `share=True` in `launch()`. Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: Traceback (most recent call last): File "C:\Users\****\AppData\Local\Programs\Python\Python310\lib\asyncio\events.py", line 80, in _run self._context.run(self._callback, *self._args) File "C:\Users\****\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 165, in _call_connection_lost self._sock.shutdown(socket.SHUT_RDWR) ConnectionResetError: [WinError 10054] 既存の接続はリモート ホストに強制的に切断されました。 Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: Traceback (most recent call last): File "C:\Users\****\AppData\Local\Programs\Python\Python310\lib\asyncio\events.py", line 80, in _run self._context.run(self._callback, *self._args) File "C:\Users\****\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 165, in _call_connection_lost self._sock.shutdown(socket.SHUT_RDWR) ConnectionResetError: [WinError 10054] 既存の接続はリモート ホストに強制的に切断されました。 An exception occurred: stat: path should be string, bytes, os.PathLike or integer, not NoneType at Traceback (most recent call last): File "E:\SUPIR_v44\SUPIR\gradio_demo.py", line 744, in start_single_process _, result = batch_process(img_data, **values_dict) File "E:\SUPIR_v44\SUPIR\gradio_demo.py", line 1054, in batch_process ckpt_select = get_ckpt_path(ckpt_select) File "E:\SUPIR_v44\SUPIR\gradio_demo.py", line 249, in get_ckpt_path if os.path.exists(ckpt_path): File "C:\Users\****\AppData\Local\Programs\Python\Python310\lib\genericpath.py", line 19, in exists os.stat(path) TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

Furkan Gözükara

Hello. This means your model downloads failed. Please do a fresh install and pay attention to all messages. You can email me the entire log of the fresh install: monstermmorpg@gmail.com

masato ogawa

I should have reinstalled first, I reinstalled and it is working fine, thank you very much for your help.

Jian Shen

Hello, can you help me with this error: raise RepositoryNotFoundError(message, response) from e huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-660952ce-60eb16b47db69e39669e7575;714b3109-cbde-4694-8055-0d10bc28f6c7) Repository Not Found for url:https://huggingface.co/api/models/models/clip-vit-large-patch14/revision/main. Please make sure you specified the correct 'repo_id' and 'repo_type'. If you are trying to access a private or gated repo, make sure you are authenticated. InValid username or password.

Furkan Gözükara

Hello, welcome. I just replied to you by email. The reason is that during the initial install some models were not fully downloaded.

Jian Shen

Thank you! fully reinstalled and solved the problem. And the app works well, thumbs up!

Diggy Dre

For some reason, RunPod refuses to start a Supir job now. It just comes right back with "An exception occurred, please try again"

Abdo Asker

I have a problem .. this appears to me:

Abdo Asker

An exception occurred: stat: path should be string, bytes, os.PathLike or integer, not NoneType at Traceback (most recent call last): File "/workspace/SUPIR/gradio_demo.py", line 744, in start_single_process _, result = batch_process(img_data, **values_dict) File "/workspace/SUPIR/gradio_demo.py", line 1054, in batch_process ckpt_select = get_ckpt_path(ckpt_select) File "/workspace/SUPIR/gradio_demo.py", line 249, in get_ckpt_path if os.path.exists(ckpt_path): File "/usr/lib/python3.10/genericpath.py", line 19, in exists os.stat(path) TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

Furkan Gözükara

Please download this via your browser and extract it into the SUPIR folder. You obviously had an error while downloading models: https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/supir_v45_models.zip

Abdo Asker

Loading model from [options/SUPIR_v0_tiled.yaml] ./run_linux.sh: line 67: 4708 Killed python gradio_demo.py $lowvram $tiledVAE $theme $cpuMove --open_browser --share True

Abdo Asker

Okay, this is solved but there's another problem (below this comment)

Furkan Gözükara

That means out of RAM. Restart the pod and try again. Also, how much RAM does your pod have? If it fails again, add this to the arguments: --fast_load_sd

Abdo Asker

another error: RuntimeError: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.

Abdo Asker (edited)

I want to talk to you directly on telegram not on patreon ... this is my telegram account: https://t.me/+201030190466

Cemil Hacimahmutoglu

Professor, we activated the membership and I just clicked the windows_install.bat file; it started downloading and it looks like it will take a while. What I'm wondering is: I have a 3080 Ti, so I can run it locally, right? Also, will clicking windows_install.bat be enough, or will there be anything else I need to do?

Abdo Asker

What should I do then? I use an RTX A6000 and installed everything as you did in the video.

Philip Heggie

I installed C++ and Python 3.10.11, but when running the Windows install bat file, Python reports that requirements.txt is missing and downloader.py is missing. Where are they? How do I fix this? The install aborted.

Furkan Gözükara

Can you DM me on Twitter and show screenshots of how you are trying to install? A screenshot of your folder plus the install / CMD logs.

Philip Heggie

Yes, but I found the problem: Git for Windows wasn't installed on my system. It is installing properly after that; sorry for my ignorance.

Tuncay Akyazıt

It runs without issues on my 1080 Ti with 11GB of VRAM, thanks.

InvisibleInkDoodles

I followed the video, but for some reason the faces always come out looking really ugly XD and not like the original person. Everything else looks good, just the face, whereas in your videos it handled faces well. I've tried a mix of low- to high-quality images, but so far none have looked right. I've tried with face restoration on and off, I use the same Juggernaut model, and my settings are as close as I can get them to yours, but same problem, so I'm not sure where I'm going wrong.

rey flores

an option for Creativity Strength like krea ai please

San Milano

Hello! The results are amazing. Much better than the ones I get from ComfyUI. Is it possible to save the settings I used for upscaling one of the images? I found the perfect settings and I would like to keep them that way

rey flores

can we add the new Juggernaut XL v10 model?

matte

I'm getting this error: Repository Not Found for url: https://huggingface.co/api/models/models/clip-vit-large-patch14/revision/main. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password.

Furkan Gözükara

Yes, I added it already. Update your version (run Windows_Update_Version) and you will see it in the checkpoint downloader.

puk

Thank you for this great tool. I've been using it since you introduced it and it just keeps getting better. I have experimented with using my own high-quality SDXL LoRAs, trained on 1500 images, as the model. When it came to just the face it worked quite well. My question is: does it make sense to integrate an extra LoRA file, in addition to the models, as a second option in your workflow to achieve better results? I could of course create my own DreamBooth checkpoint, but those are much bigger than a LoRA.

Furkan Gözükara

I hopefully plan to add a LoRA option too. Currently you can merge your LoRA into your model and use that; it should work fine.

egormly

I am getting: gradio_demo.py", line 13, in import einops ModuleNotFoundError: No module named 'einops'. I have confirmed einops is installed, and running the update or requirements says everything is there. Any ideas?

Flambo

Have the massive slowdown issues since v37 been fixed?

Furkan Gözükara

This means your Python version was not 3.10 when installing. Make sure your Python is 3.10 and reinstall. I also just updated the installer file to correctly select Python 3.10 if you have multiple versions.

Furkan Gözükara

By the way, if you don't use Tiled VAE it works even faster, but it requires more VRAM. We are looking into the problem.

J

I'm noticing that on some full body photos the face restoration doesn't work correctly and parts of the face are pixelated in the final photo: https://imgsli.com/MjU3NjY0

Flambo

Yes but it also overloads a 4090 for anything bigger than a thumbnail due to vram usage, making it practically unusable for local upscaling compared to v37 which just works.

Rick B

SUPIR_V47, used the Kaggle notebook, getting the following when running gradio: /kaggle/temp/SUPIR * Serving Flask app '__main__' * Debug mode: off Traceback (most recent call last): File "/kaggle/temp/SUPIR/gradio_demo.py", line 2, in from asyncio.windows_events import NULL File "/opt/conda/lib/python3.10/asyncio/windows_events.py", line 6, in raise ImportError('win32 only') ImportError: win32 only. Same error with a somewhat earlier Kaggle notebook, not certain which zip file. Then the notebook stops running, so it suddenly doesn't run in Kaggle?

Hexadecimal22

I'm getting the same error. And I just downloaded the newest SUPIR_47.zip and installed it from the Windows_Install.bat , didn't change or do anything at all during the install. What am I doing wrong?

egormly

Had multiple versions, the updated script and also updating requirements fixed it, thanks.

Furkan Gözükara

It is not your mistake. Downloading huge files via CMD sometimes fails. Please download the single zip file with all models and extract it as shown in this video: https://youtu.be/OYxVEvDf284

Rick B

I'm sorry, I don't understand. What got fixed? Is there a new notebook to download or is there a new kaggle notebook in the same SUPIR_V47 zip file?

Chinmoy Basak

Commenting here after YouTube: Could not create share link. Please check your internet connection or our status page: https://status.gradio.app. ### Facing this issue with RunPod. I tried with PyTorch 2.1 and 2.0.1. Also tried to follow the following: Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps: 1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64 2. Rename the downloaded file to: frpc_linux_amd64_v0.2 3. Move the file to this location: /workspace/SUPIR/venv/lib/python3.10/site-packages/gradio

Chinmoy Basak

Thank you. On RunPod I tried with two different versions of PyTorch; please advise.

Furkan Gözükara

It is a Gradio error; Gradio share is down at the moment. You can use Massed Compute. A good tutorial for Massed Compute is here, but I will also hopefully make a tutorial for SUPIR too: https://youtu.be/0t5l6CP9eBg

Chinmoy Basak

Thank you again. On Massed Compute I found the A6000 [alt config] has 24GB RAM; will that be enough for SUPIR? As you mentioned, we would need 30 GB. Do you think creating a Docker image would solve these kinds of versioning issues?

Furkan Gözükara

No, it won't be. There are 2 options on RunPod; rent the one with bigger RAM. Specs: RAM: 48 GB, Storage: 256 GB, vCPU: 6

Nate

Getting "ModuleNotFoundError: No module named 'einops'", I think because: C:\SUPIR_v47\SUPIR>pip install -r requirements.txt Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121 ERROR: triton-2.1.0-cp310-cp310-win_amd64.whl is not a supported wheel on this platform. Any ideas?

Nate

python --version Python 3.11.5 Is this OK? Do I really need 3.10?

Furkan Gözükara

Yes, you need 3.10. If you install it, our installer will automatically select it, provided you have the Python launcher installed.

Nate

Dear god this is stupid: >python --version Python 3.10.11 >curl -sS https://bootstrap.pypa.io/get-pip.py | python Collecting pip Using cached pip-24.0-py3-none-any.whl.metadata (3.6 kB) Collecting setuptools Using cached setuptools-69.5.1-py3-none-any.whl.metadata (6.2 kB) Collecting wheel Using cached wheel-0.43.0-py3-none-any.whl.metadata (2.2 kB) Using cached pip-24.0-py3-none-any.whl (2.1 MB) Using cached setuptools-69.5.1-py3-none-any.whl (894 kB) Using cached wheel-0.43.0-py3-none-any.whl (65 kB) Installing collected packages: wheel, setuptools, pip Successfully installed pip-24.0 setuptools-69.5.1 wheel-0.43.0 >pip --version Traceback (most recent call last): File "runpy.py", line 196, in _run_module_as_main File "runpy.py", line 86, in _run_code File "C:\SUPIR_v47\python-3.10.11\Scripts\pip.exe\__main__.py", line 4, in ModuleNotFoundError: No module named 'pip' I hate all of this. I'm using your 1-click installer to avoid this terrible, terrible python hell. Maybe you can just package the right python with the installer and have it use that.

Furkan Gözükara

It is easy, relax. I have shown how to install everything step by step in this tutorial. Or else I can connect to your PC and fix it all if you become a gold member: https://youtu.be/-NjNy7afOQ0

Nate

I wanted to use portable Python 3.10 so I don't litter junk all over my OS. I used the Python 3.10 installer, deleted venv, and now I'm downloading everything again. If this doesn't work I swear I'm deleting it all and giving up. During this new install I see: Installing collected packages: torch, torchvision, torchaudio Attempting uninstall: torch Found existing installation: torch 2.1.0+cu121 Uninstalling torch-2.1.0+cu121: Successfully uninstalled torch-2.1.0+cu121 Attempting uninstall: torchvision Found existing installation: torchvision 0.16.0+cu121 Uninstalling torchvision-0.16.0+cu121: Successfully uninstalled torchvision-0.16.0+cu121 Attempting uninstall: torchaudio Found existing installation: torchaudio 2.1.0+cu121 Uninstalling torchaudio-2.1.0+cu121: Successfully uninstalled torchaudio-2.1.0+cu121 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. xformers 0.0.22.post7 requires torch==2.1.0, but you have torch 2.2.0+cu121 which is incompatible. Successfully installed torch-2.2.0+cu121 torchaudio-2.2.0+cu121 torchvision-0.17.0+cu121 It's frustrating that everything is so fragile with Python. If 3.10 is required, the script should fail if Python is not 3.10. Better would be to include Python so the script can't fail this way. After all, a 1-click install should not have such gotchas. I've been a software engineer for 30 years. This is not a user problem, it's a Python problem. The setup is absolutely terrible. Most AI projects have instructions that are out of date. Most commonly dependencies aren't specified properly, so the instructions worked when they were written, but later newer versions are used which breaks pickle deserialization or otherwise have API breakage. Your 1-click setup is nice, if I can get past this Python 3.10 nonsense, but I imagine takes you a lot of effort to keep it working, since so much needs to be downloaded. I see that not every dependency has a version, which is just a bad idea and will inevitably break in the future. The only project that has worked well is Upscayl. It just worked out of the box. The custom models just work. Topaz Gigapixels also just worked, and is better than Upscayl. I want to try SUPIR to see how it compares, but man the pain is high.

Furkan Gözükara

Well, I have also been a software engineer since 2008 and you are absolutely right about Python :D I hate it; I am a C# developer. Pinning a version for every dependency is even more painful, but I do include a pip freeze.

Nate

When my firewall blocks communication (I enabled it for 30 minutes, but downloading took longer) the command fails, but the script keeps going, running more commands that try to download and fail. Then at the end it says everything was installed just fine, but it wasn't. Traceback (most recent call last): File "C:\Users\Nate\Desktop\SUPIR_v47\downloader.py", line 48, in download_file(file_url, folder, file_name) File "C:\Users\Nate\Desktop\SUPIR_v47\downloader.py", line 19, in download_file with requests.get(url, stream=True) as r: File "C:\Users\Nate\Desktop\SUPIR_v47\SUPIR\venv\lib\site-packages\requests\api.py", line 73, in get return request("get", url, params=params, **kwargs) File "C:\Users\Nate\Desktop\SUPIR_v47\SUPIR\venv\lib\site-packages\requests\api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "C:\Users\Nate\Desktop\SUPIR_v47\SUPIR\venv\lib\site-packages\requests\sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "C:\Users\Nate\Desktop\SUPIR_v47\SUPIR\venv\lib\site-packages\requests\sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "C:\Users\Nate\Desktop\SUPIR_v47\SUPIR\venv\lib\site-packages\requests\adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /RunDiffusion/Juggernaut-XL-v9/resolve/main/Juggernaut-XL_v9_RunDiffusionPhoto_v2.safetensors (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions')) Virtual environment made and installed properly

Furkan Gözükara

Here is a single link for you: https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/supir_v45_models.zip

Nate

Thanks, I should have tried that to start. It seems better than scripts downloading so many things. My latest reinstall actually worked though, so I'm happy about that! Thanks for your help, I'd never find the patience to get this working otherwise!

Nate

I got SUPIR to work, cheers! I want to go from 4000x2192 to 18600x9900, which is 4.65x. SUPIR crashes trying that. I did get 2x to work and the results scaled up the rest of the way using bicubic are amazing, much better than Topaz Gigapixel. If it could do 4.65x it would be amazing! Any tips for getting 4.65x to work?

Furkan Gözükara

For such megapixel counts, rent an A6000 on Massed Compute (48 GB GPU) and try tiled and 8-bit. We have auto installers and instructions for Massed Compute.

Nate

Oh, that sounds awesome! Can you point to the instructions I should follow? You've been a great help, I'll happily send a donation your way.

Furkan Gözükara

The instructions and installers are inside the SUPIR_v47.zip file. By also watching this video you can see how easy it is to use Massed Compute: https://youtu.be/LeHfgq_lAXU

Nate

Damn this is cool! I get "python3: can't open file .../SUPIR/downloader.py". I can run SUPIR, but no models appear in the selectbox in the web UI. Any ideas?

Nate

I tried to push the limits using an H100 (128GB VRAM), first with full precision and no tiling; it failed, needing 80GB+ of VRAM. I tried again with full precision and tiled, and it seemed to be working! Then it failed with: [Tiled VAE]: Executing Decoder Task Queue: 56%|... | 47063/84132 [02:13<1:14:35, 8.28it/s] ./Massed_Compute_Start_SUPIR.sh: line 67: 8101 Killed python3 gradio_demo.py $lowvram $tiledVAE $theme $cpuMove --open_browser --share True Is it weird that it just stopped with "Killed" like that? Currently I'm running again with BF16/FP16 and tiled.

Nate

Damn, fails in exactly the same place, at 56%. I'm guessing it's the OS's out of memory killer. Trying again with 8bit, as you suggested.

Nate

It succeeded! I had to increase the swap size to 500GB. At least, 200GB was not enough. Maybe this helps someone else:

    sudo swapoff /swapfile
    sudo rm /swapfile
    sudo fallocate -l 500G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

Unfortunately 4.65x doesn't look as good as 2x. It hallucinated leaves where there should have been wood texture, and other problems. I will try 2x and then 2x again.

Nate

I did a year of gold. Thanks again for your help!

jonathan sanchez

For a 14900K and an RTX 4090, what starting configuration should I choose? It is not clear in the video.

Furkan Gözükara

Select 1-1-1-1; it will allow you to upscale to very high resolution. If you need even bigger resolution, select 2-1-1-1.

Matt

Thanks for announcing supir v48, but where can I find SUPIR_v48.zip?

Matt

Got it - saw the change on github in gradio_demo.py - Thanks again!

Der Sandmann

Hi, I just wanted to thank you for the application. It really works wonders for preparing images that I want to use for training a LoRA model. You are probably already aware of some of the weaknesses, or other users have already pointed them out to you. Nevertheless, I would like to mention what I have noticed in the hope that it will help you to further improve the application:
1. Disfigured hands and feet, crooked and foul teeth, and deformed irises and pupils. This is a known and common problem. Perhaps implementing LoRA models specifically designed to correct these problems could help?
2. I have noticed that the model tends to create age wrinkles and oversized "doll-like" eyelashes. I have found that adding "age wrinkles" and "oversized lashes" to the standard negative prompt fixes this problem for the most part.
3. Freckles. The model tends to create single circular dots placed symmetrically next to each other on the skin, making it look unnatural. A more detailed prompt like "A generous sprinkling of small, light brown freckles that are densely clustered and subtly fade into the surrounding skin, resembling a natural, sun-kissed pattern" mitigates this effect.
4. Overbaked hair and "hair morphing". The former is mainly a problem with the fidelity model. The latter occurs, for example, when the person wears long hair and a necklace: the two merge into one another. In addition, it regularly happens that individual rootless hairs "stick" to the skin. This can easily be solved via Photoshop etc., but I thought I'd mention it anyway.
5. Is it possible and useful to implement the use of LoRA models and/or face-swapping models (Reactor, FaceFusion, etc.) in the application to give it a reference that increases fidelity by restoring details more correctly?
Again, thanks for this amazing application. :-)

Furkan Gözükara

Your best option to fix issues 1-4 is finding the model that produces results closest to your taste and doing DreamBooth / fine-tuning over it; then, if you need a LoRA, you can extract one from it. Reactor and FaceFusion both use the same single low-quality face-swap model, so they won't work. But a fine-tuned / DreamBooth model can be used with SUPIR as the base model to restore the face when upscaling.

ivan

Hello. Thank you for the excellent automation of the installation process. You are stating that there is an update: "7 May 2024 Update Updated to V48", however in the attached files there is only V47. Can you please tell me where I can download V48.zip?

Z

Linux?

Andy Limited

how to fix this: ImportError: cannot import name 'packaging' from 'pkg_resources' (E:\Download\Compressed\SUPIR_v49\SUPIR\venv\lib\site-packages\pkg_resources\__init__.py)

Furkan Gözükara

That means an install error. Please reinstall and send me all the output of the installation CMD. You can select all, copy, save it into a text file and email me: monstermmorpg@gmail.com

reaper557

Thank you very much for the all-in-one zip that has all the models. I was running into the exact problem of getting model download errors, and this is just what I need.

Ay**e Animation

EDIT: Nvm, I watched your video and it showed how to do it! Thank you very much! Hi! Thank you so much for this! Even on my 8GB VRAM 2070, it works very well! I thank you so much for your hard work of reducing the 48GB VRAM usage down to being within even an 8GB VRAM environment! By the way, I want to know how to get it to change the directory for Stable Diffusion checkpoints so I can save on space (since I already have SD models installed in another location)? Thank you!

Walt

Thanks in advance.

Furkan Gözükara

Hello, that means you had an error during install. I need you to verify: install your CUDA and Python exactly as shown in this video, reinstall SUPIR, and send me your installation logs. If you also upgrade to gold member, I can connect to your PC and install it for you: https://youtu.be/-NjNy7afOQ0

hacbachvotinh

6x256 [SAR 1:1 DAR 1:1], 907 kb/s, 8 fps, 8 tbr, 160k tbn (default) Metadata: creation_time : 2024-07-13T15:07:11.000000Z handler_name : Mainconcept MP4 Video Media Handler vendor_id : [0][0][0][0] encoder : AVC Coding Stream map '1:a:0' matches no streams. To ignore this, add a trailing '?' to the map. Failed to set value '1:a:0' for option 'map': Invalid argument Error parsing options for output file outputs_audio_restored.mp4. Error opening output files: Invalid argument Error restoring audio: Command '['ffmpeg', '-hwaccel', 'auto', '-i', 'outputs.mp4', '-i', 'D:\\temp\\gradio\\c283e9cdcad2354e07b4b1ffe87c0025122da96a\\CountrySide-256.mp4', '-ss', '0.0', '-to', '2.0', '-map', '0:v:0', '-map', '1:a:0', '-c:v', 'copy', '-shortest', 'outputs_audio_restored.mp4']' returned non-zero exit status 4294967274. Audio restoration failed: outputs.mp4 Video compiled successfully. Model moved to CPU All moved to CPU. I got this error when upscaling a video.

Furkan Gözükara

Yes, video upscaling is still not working perfectly. It keeps the upscaled frames in the folder so you can merge them later, but I would suggest you extract all frames of the video into a folder yourself and do a batch upscale.