1 Click Auto RunPod Installer For Rerender A Video - 1 Click Video To Anime (Patreon)
Attachment: 2_sec_example.mp4
Join Discord and tell me your Discord username to get a special rank: SECourses Discord
5 February 2024 Update:
- A tutorial video has been published: https://youtu.be/cVf9Qf_pKks
- Please also upvote this Reddit thread; I appreciate it
- Go to your RunPod account: https://runpod.io?ref=1aka98lq
- Select the RunPod Pytorch 2.1.1 template
- Customize the deployment and set the volume disk size to 50 GB
- After the pod starts, connect to the JupyterLab interface
- The default downloaded models have been improved
- The default source model is now set to MeinaMix V11
- Download the attached installer.sh and upload it into the workspace folder
- Run the commands below (copy and paste into a new terminal):
- chmod +x installer.sh
- ./installer.sh
The commands above clone the Rerender A Video repo and install all of the requirements fully automatically. They even download a precompiled Ebsynth binary, so you don't have to wait for it to compile.
The installer also downloads a version of the Rerender A Video web UI modified to run on RunPod.
Moreover, installer.sh downloads the model files below along with a matching sd_model_cfg.py file, so you can select them from the web UI dropdown and use them directly:
model_dict = {
'Realistic Vision V6' : 'models/Realistic_Vision_V6.safetensors',
'Flat 2D Animerge': 'models/Flat_2D_Animerge.safetensors',
'MeinaMix V11': 'models/MeinaMix_V11.safetensors'
}
If you want to add more models, download them with wget and add them to the sd_model_cfg.py file. Look at the installer.sh file to see how to download from CivitAI or from Hugging Face.
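As a sketch of what that edit looks like, here is a hypothetical sd_model_cfg.py after adding one extra model. The three original entries come from the installer; the 'My Custom Model' name and file path are placeholders for whatever you downloaded yourself:

```python
# Hypothetical sd_model_cfg.py after adding one custom model.
# Keys are the labels shown in the web UI dropdown; values are paths
# relative to the Rerender_A_Video repo root. 'My Custom Model' is a
# placeholder for a file you downloaded with wget into models/.
model_dict = {
    'Realistic Vision V6': 'models/Realistic_Vision_V6.safetensors',
    'Flat 2D Animerge': 'models/Flat_2D_Animerge.safetensors',
    'MeinaMix V11': 'models/MeinaMix_V11.safetensors',
    'My Custom Model': 'models/My_Custom_Model.safetensors',
}
```

Make sure the file name in the dict entry matches the file name you gave wget, otherwise the dropdown selection will fail to load.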
After installation, run the commands below to start the Rerender A Video web UI.
It will give you a Gradio share link; use that. When you stop your pod and start it again, you can relaunch the web UI directly with the same commands (copy and paste into a new terminal).
Don't forget to chmod +x the ebsynth binary:
- export HF_HOME="/workspace"
- chmod +x /workspace/Rerender_A_Video/deps/ebsynth/bin/ebsynth
- cd /workspace
- cd Rerender_A_Video
- source venv/bin/activate
- python webUI.py
Select your video and click render first frame. The video has to upload first, so if you don't get a correct first frame, try again and again while watching the command-line interface messages.
Also, use an integer FPS for your video, such as 24, 25, or 30. Do not use a fractional rate like 23.976 fps.
You can use ffmpeg to re-encode to a new FPS like this:
- ffmpeg -i input.mp4 -vf "fps=24" -c:v libx264 -crf 7 -c:a copy output_24fps.mp4
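If you are unsure which integer rate to target, a small helper like this (my own sketch, not part of the project) snaps a source frame rate to the nearest clean rate before you re-encode with the ffmpeg command above:

```python
# Hypothetical helper: pick the nearest integer frame rate for a clip.
# Rerender A Video wants integer FPS (24/25/30), not NTSC rates like
# 23.976, so snap the source rate before re-encoding with ffmpeg.
def nearest_clean_fps(fps, candidates=(24, 25, 30)):
    return min(candidates, key=lambda c: abs(c - fps))

print(nearest_clean_fps(23.976))  # -> 24
print(nearest_clean_fps(29.97))   # -> 30
```

Then pass the returned value into the `fps=` filter of the ffmpeg command.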
Make your video at least 2 seconds long. You can test with the attached 2_sec_example.mp4 file.
To see the whole process log, edit the video_blend.py file and set OPEN_EBSYNTH_LOG = True.
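That edit is just flipping one flag in video_blend.py (the flag name comes from the repo; everything else in the file stays unchanged):

```python
# In /workspace/Rerender_A_Video/video_blend.py: set this flag to True
# to print the full Ebsynth log while frames are being blended.
OPEN_EBSYNTH_LOG = True
```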
The result will be inside the /workspace/Rerender_A_Video/result/your_uploaded_video_file_name folder.
The output file name will be blend.mp4.
Processing will be slow, and it won't print progress messages by default.
You can watch the result folder for changes to follow progress.
I suggest you first test a 2-second clip, then process the full clip with the same settings.
Watch your VRAM usage too.
Control type: Canny works better than the HED option.
A bigger resolution, such as 768px, will significantly improve quality.
You can also play with the Canny thresholds to get more accurate output.
Finally, the output will not include the sound of the original video.
- Put the rendered output video and the original source video into the same folder and run the command below.
- Don't forget to change the file names. You also need ffmpeg installed (added to the system PATH) or downloaded into the same folder.
- ffmpeg -i blend.mp4 -i source_video.mp4 -c:v copy -c:a copy -map 0:v:0 -map 1:a:0 output.mp4