
Videos

  • example_driving_video_SECourses_73_seconds.mp4

Downloads

Content

Check the Patreon exclusive posts index to find our scripts easily, the Patreon scripts updates history to see which updates arrived in which scripts, and the amazing Patreon special generative scripts list that you can use for any of your tasks.

Join our Discord to get help, chat, and discuss, and also tell me your Discord username to get your special rank: SECourses Discord

Please also Star, Watch, and Fork our Stable Diffusion & Generative AI GitHub repository, join our Reddit subreddit, and follow me on LinkedIn (my real profile)

LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control

This is the newest open source project, even better than the paid apps. Its speed is mind-blowing: you can generate a 20-second animation in less than 50 seconds on an RTX 3090.

This is the official repository ( https://github.com/KwaiVGI/LivePortrait ) 

8 August 2024

22 July 2024

  • Updated to V4

  • Source maximum dimension option added

  • For example, you can process a 1920x1080 video and get the same resolution in the output

  • If you have installed V3, just run the Windows_Update_Version.bat file. The zip file is still the same

21 July 2024

  • Please do a fresh install, since we changed the GitHub repository

  • A lot of bugs fixed

  • Video to video option added

  • It works amazingly well - just try it

  • For video to video, don't forget to pick 🎞️ Source Video in step 1 when uploading the source

10 July 2024 Update

  • Full Windows tutorial with manually written captions / subtitles and video chapters published

  • 🔗 https://youtu.be/FPtpNrmuwXk

  • Full Cloud (Mac users follow this) Tutorial for Massed Compute, RunPod & Kaggle with manually written captions / subtitles and video chapters published

  • 🔗 https://youtu.be/wG7oPp01COg

  • If you also upvote this Reddit post, I would appreciate it very much

8 July 2024 Update

Our New Features

  • Fixed FPS issue

  • Audio is automatically saved in output videos

  • Output video naming improved and overwriting previous files issue fixed

  • Open folders tab added (works on Windows and desktop Linux systems)

  • New feature Target Eye Lip Open Ratio added

  • When this is enabled, you can change how far the mouth and eyelids open. However, when this is enabled, the movement of the head is currently lost and not maintained.

  • 1-Click install for Windows, RunPod, Massed Compute, and Kaggle.
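The non-overwriting output naming mentioned above can be illustrated with a short sketch. Note that the function name and the numbering scheme below are hypothetical illustrations of the idea, not the app's actual code:

```python
from pathlib import Path

def unique_output_path(folder: str, base: str, ext: str = ".mp4") -> Path:
    """Return a path like base_0001.mp4 that never collides with an
    existing file, so earlier outputs are not overwritten."""
    i = 1
    while True:
        candidate = Path(folder) / f"{base}_{i:04d}{ext}"
        if not candidate.exists():
            return candidate
        i += 1
```

Each run simply picks the next free index, so repeated generations with the same source never clobber each other.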

How to Use

  • Extract the attached zip file, then install and use it on your desired platform.

  • The generated animations will be automatically saved inside the animations folder.
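If you script around the app, grabbing the most recent result from that folder can be sketched like this; the helper name is hypothetical and assumes outputs land in an "animations" folder as .mp4 files:

```python
from pathlib import Path
from typing import Optional

def latest_animation(folder: str = "animations") -> Optional[Path]:
    """Return the most recently modified video in the animations
    folder, or None if no videos have been generated yet."""
    videos = sorted(Path(folder).glob("*.mp4"), key=lambda p: p.stat().st_mtime)
    return videos[-1] if videos else None
```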

Requirements for Windows

Cloud Requirements

  • Full Cloud (Mac users follow this) tutorial for Massed Compute, RunPod and Kaggle : https://youtu.be/wG7oPp01COg

  • Windows tutorial : https://youtu.be/FPtpNrmuwXk

  • For RunPod and Massed Compute follow instructions inside the zip file.

  • You will see Massed_Compute_Instructions_READ.txt and Runpod_Instructions_READ.txt files.

  • For Kaggle follow instructions on the Kaggle notebook.

  • To be able to use the Kaggle notebook, get a free account and verify your phone number.

  • The zip file also includes demo material so you can test quickly.

  • Upload / download big files / models on cloud via Hugging Face tutorial : https://youtu.be/X5WVZ0NMaTg

  • How to use permanent storage system of RunPod (storage network volume) : https://youtu.be/8Qf4x3-DFf4

  • Massive RunPod tutorial (shows runpodctl) : https://youtu.be/QN1vdGhjcRc 

Usage Tips

  • If you feel the animated video has some jitter, please adjust the driving_smooth_observation_variance parameter in src/config/argument_config.py. We recommend that you change it by a factor of 10. For example, if the current value is 3e-7 and you feel the animated result is a bit jittery, you can adjust it to 3e-6, and so on. If you are using app.py, the corresponding adjustment location is motion smooth strength (v2v).

  • What does cropping do : https://github.com/KwaiVGI/LivePortrait/issues/187#issuecomment-2242107784

  • In the driving video, the first frame must have the mouth closed - not open, not smiling, not showing teeth, etc.

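The factor-of-10 tuning rule for driving_smooth_observation_variance can be sketched as a tiny helper. This is just an illustration of the adjustment described above, not part of LivePortrait itself:

```python
def next_variance(current: float, reduce_jitter: bool = True) -> float:
    """Step driving_smooth_observation_variance by a factor of 10.

    Per the tip above, raising the value (e.g. 3e-7 -> 3e-6) smooths
    out jitter; lowering it tracks the driving motion more tightly.
    """
    return current * 10 if reduce_jitter else current / 10
```

For example, starting from the default 3e-7, one jittery result suggests trying 3e-6 next, then 3e-5 if needed.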

Comments

Tobe2d

Amazing work as always! I am wondering if you have seen EchoMimic? It was released today: https://github.com/BadToBest/EchoMimic

JamZam WamBam

I heard they are adding video to this very soon. That would make it an even more outstanding tool. Great for music videos.

Guile Lindroth

Great work! When running the file "Windows_Start_app.bat" the Gradio UI opens and I can select an image and video, but when clicking Run I get the error:

global loadsave.cpp:241 cv::findDecoder imread_('C:\Users\Usu├írio\AppData\Local\Temp\gradio\bef81c8bfee48fec27ef2aef06cf22f20ce3848b\s7.jpeg'): can't open/read file: check file path/integrity

I believe this is because my Windows language is Portuguese, and the system path to the images is C:\Users\Usuário\AppData\Local\Temp\gradio. Note the accent on the "a" in "Usuário". Is there a workaround to fix that?

***Solved*** The fix is very simple: open your Windows Environment Variables and change the "Temp" path to any folder without accented Unicode characters.
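A quick way to check whether your machine is affected by this OpenCV/accented-path issue is to test the temp directory for non-ASCII characters. The helper below is a hypothetical diagnostic sketch, not part of the app:

```python
import tempfile
from typing import Optional

def temp_path_is_safe(path: Optional[str] = None) -> bool:
    """OpenCV's imread can fail on Windows when the temp path contains
    accented (non-ASCII) characters, as in the report above. Returns
    True when the path is plain ASCII and should be safe."""
    return (path if path is not None else tempfile.gettempdir()).isascii()
```

If this returns False, point the Temp environment variable at an ASCII-only folder as described in the comment above.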

J D

Thank you so much!!!

Mark Hernandez

I had Live Portrait v3 installed. I extracted and replaced the files from v5 into the existing v3 folder. Then I ran the two .bat files:

Run Windows_Update_Version.bat file
Run Windows_Install.bat - will only download new models and new requirements

When trying to load, I get file not found errors.

Mark Hernandez

I deleted everything and downloaded and installed the new v5 version. After running, I can select the 'Human Live Portrait' option and it works. The 'Animal Live Portrait' option doesn't load and gives errors:

Please select an option:
1. Human Live Portrait
2. Animal Live Portrait
Enter your choice (1 or 2): 2
Starting Animal Live Portrait...
[10:58:06] Load appearance_feature_extractor from D:\Live_Portrait_V5\LivePortrait\pretrained_weights\liveportrait_animals\base_models\appearance_feature_extractor.pth done. (live_portrait_wrapper.py:361)
Load motion_extractor from D:\Live_Portrait_V5\LivePortrait\pretrained_weights\liveportrait_animals\base_models\motion_extractor.pth done. (live_portrait_wrapper.py:364)
Load warping_module from D:\Live_Portrait_V5\LivePortrait\pretrained_weights\liveportrait_animals\base_models\warping_module.pth done. (live_portrait_wrapper.py:367)
[10:58:07] Load spade_generator from D:\Live_Portrait_V5\LivePortrait\pretrained_weights\liveportrait_animals\base_models\spade_generator.pth done. (live_portrait_wrapper.py:370)
Load stitching_retargeting_module from D:\Live_Portrait_V5\LivePortrait\pretrained_weights\liveportrait\retargeting_models\stitching_retargeting_module.pth done. (live_portrait_wrapper.py:374)
[10:58:08] FaceAnalysisDIY warmup time: 1.076s (face_analysis_diy.py:79)
[10:58:09] LandmarkRunner warmup time: 0.788s (human_landmark_runner.py:95)
Traceback (most recent call last):
  File "D:\Live_Portrait_V5\LivePortrait\app_animals.py", line 62
    gradio_pipeline_animal: GradioPipelineAnimal = GradioPipelineAnimal(
  File "D:\Live_Portrait_V5\LivePortrait\src\gradio_pipeline.py", line 516, in __init__
    super().__init__(inference_cfg, crop_cfg)
  File "D:\Live_Portrait_V5\LivePortrait\src\live_portrait_pipeline_animal.py", line 55, in __init__
    self.cropper: Cropper = Cropper(crop_cfg=crop_cfg, image_type='animal_face', flag_use_half_precision=inference_cfg.flag_use_half_precision)
  File "D:\Live_Portrait_V5\LivePortrait\src\utils\cropper.py", line 79, in __init__
    from .animal_landmark_runner import XPoseRunner as AnimalLandmarkRunner
  File "D:\Live_Portrait_V5\LivePortrait\src\utils\animal_landmark_runner.py", line 19
    from .dependencies.XPose.models import build_model
  File "D:\Live_Portrait_V5\LivePortrait\src\utils\dependencies\XPose\models\__init__.py", line 7
    from .UniPose.unipose import build_unipose
  File "D:\Live_Portrait_V5\LivePortrait\src\utils\dependencies\XPose\models\UniPose\__init__.py", line 10
    from .unipose import build_unipose
  File "D:\Live_Portrait_V5\LivePortrait\src\utils\dependencies\XPose\models\UniPose\unipose.py", line 23
    from .deformable_transformer import build_deformable_transformer
  File "D:\Live_Portrait_V5\LivePortrait\src\utils\dependencies\XPose\models\UniPose\deformable_transformer.py", line 30
    from .ops.modules import MSDeformAttn
  File "D:\Live_Portrait_V5\LivePortrait\src\utils\dependencies\XPose\models\UniPose\ops\modules\__init__.py", line 9
    from .ms_deform_attn import MSDeformAttn
  File "D:\Live_Portrait_V5\LivePortrait\src\utils\dependencies\XPose\models\UniPose\ops\modules\ms_deform_attn.py", line 23
    from src.utils.dependencies.XPose.models.UniPose.ops.functions.ms_deform_attn_func import MSDeformAttnFunction
  File "D:\Live_Portrait_V5\LivePortrait\src\utils\dependencies\XPose\models\UniPose\ops\functions\__init__.py", line 9
    from .ms_deform_attn_func import MSDeformAttnFunction
  File "D:\Live_Portrait_V5\LivePortrait\src\utils\dependencies\XPose\models\UniPose\ops\functions\ms_deform_attn_func.py", line 18
    import MultiScaleDeformableAttention as MSDA
ModuleNotFoundError: No module named 'MultiScaleDeformableAttention'
Press any key to continue . . .
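The ModuleNotFoundError in the report above means the compiled MultiScaleDeformableAttention extension required by the animal (XPose) pipeline could not be imported. A hedged diagnostic sketch (not an official fix) for checking whether it is importable before launching animal mode:

```python
import importlib.util

def has_msda_extension() -> bool:
    """Return True if the compiled MultiScaleDeformableAttention op,
    which the animal (XPose) pipeline imports, is available in the
    current Python environment."""
    return importlib.util.find_spec("MultiScaleDeformableAttention") is not None
```

If this returns False, the extension was not built or installed for the active environment, which matches the failure shown above.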