Join the new SimpleX Chat Group!

Downloads

Content

Hey everyone! I've created some simple launchers for the XTTS & Extras API servers for SillyTavern, the best UI for LLM roleplay. It features character animation, text-to-speech, voice recognition, group chat, Dynamic Audio and so much more! You absolutely need to try it out!

Watch the video on SillyTavern: https://youtu.be/enWO16x6tRM

Here's how it works:

1. Download the .bat files and drag and drop them inside the SillyTavern folder
2. Whenever you want to run either the XTTS server or the Extras API server, run those files
3. ???
4. Profit 😎

If you want to modify the arguments or options of the API, just edit the files with Notepad before launching them.
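
For reference, here is a rough sketch of what the XTTS launcher does (the exact contents and arguments of the files you download may differ slightly):

    @echo off
    rem Rough sketch of LaunchXTTS.bat: enter the xtts folder, activate its venv, start the API server
    cd xtts
    call venv\Scripts\activate
    python -m xtts_api_server --deepspeed --streaming-mode-improve --stream-play-sync
    pause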

You of course need to have SillyTavern Ui installed first:  https://www.patreon.com/posts/95125999 

And as always, supporting me on Patreon allows me to keep creating helpful resources like this for the community. Thank you for your support - now go have some...huh... fun *wink*😉 *wink*😉!

Files

Comments

Wojciech Lenartowicz

Thanks a lot for sharing! Where are the files, though? 😉

T V

cant see the files

TomiTom1234

There are no files, buddy.

Devin Taylor

Getting an Error. "ModuleNotFoundError: No module named 'deepspeed'" when using the XTTS bat

Devin Taylor

Here is what it says on line 2: xtts_api_server.modeldownloader:install_deepspeed_based_on_python_version:64 - Unsupported Python version on Windows.

Aitrepreneur

Try running pip install deepspeed in a command prompt window, then try again. If you still get the error, you might have to edit the file and remove deepspeed from the arguments.

Devin Taylor

I had to remove it from the bat. I tried to install it via pip but was getting errors
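
For anyone else who needs to do the same, the launch line in my edited .bat now looks roughly like this (just the deepspeed flag dropped, everything else unchanged):

    python -m xtts_api_server --streaming-mode-improve --stream-play-sync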

kingdoofy

When I start LaunchXTTS the command window opens, but then nothing happens except the cursor flashing, waiting for input.

Kuzo

Using your 1-click installer I got everything running in SillyTavern. However, when I try to do this, I can get LaunchExtras.bat to work, but when I try to run LaunchXTTS.bat the cmd prompt opens for half a second and closes. It doesn't load the server like you show in your video. Any ideas?

Aitrepreneur

Usually when a cmd prompt opens and closes by itself it's because of some antivirus blocking the file, so disable it or create an exception and try again.

Dougomite

I'm running into this too. I assume it has something to do with me having other versions of Python installed as well. So I think some python calls were using my older 3.8 version while the 3.11 version was installed by the batch file.

Aitrepreneur

You can also try this: go to the xtts folder, click on the folder path, type cmd and press enter; this will bring up the command prompt window. There, type venv\Scripts\activate and press enter, then type pip install deepspeed and press enter. If that still doesn't work for some reason, edit the LaunchXTTS.bat file and remove the deepspeed argument.
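
In other words, from inside the xtts folder it's just these two commands, one after the other:

    venv\Scripts\activate
    pip install deepspeed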

Kuzo

Thanks, I've added the entire folder to my exclusion list and even tried turning off my virus and threat protection but still having the same issue unfortunately

Kuzo

Tried turning off firewall as well because I don't have any other security running on this pc but still same issue.

Aitrepreneur

Then try this: go inside the xtts folder, type cmd in the folder path to open the command prompt window, then type venv\Scripts\activate and press enter. Then type python -m xtts_api_server --deepspeed --streaming-mode-improve --stream-play-sync and press enter.
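
Laid out as separate lines, that's:

    venv\Scripts\activate
    python -m xtts_api_server --deepspeed --streaming-mode-improve --stream-play-sync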

J Rosen

When I try to run the LaunchXTTS.bat file I get the following error:
D:\AI-ST\SillyTavern>LaunchXTTS.bat
D:\AI-ST\SillyTavern\xtts\venv\Scripts\python.exe: No module named xtts_api_server

Gh9stRide

Same here... done everything in this thread, so far. And I get the "No Module..." too.

AICaos Chaos

i get error no module deepspeed

Kuzo

Thank you for your help, it says no module named xtts_api_server. So was the 1 click installer not able to install that or was it something I was supposed to install separately. Sorry I'm not very tech savvy when it comes to this stuff.

Kuzo

I got it to work finally by following the XTTS voice cloning guide from SillyTavern

J Rosen

I was able to get this resolved by going through the "1-Click INSTALL Coqui Text-To-Speech For Webui" again and making sure that was working correctly. That fixed this issue. However, it caused a new issue: now when the text gen model, the Coqui XTTS model and SillyTavern are all running, they use 14 GB of VRAM and the response time goes from 5-7 seconds to 80-90 seconds on my 8GB VRAM RTX 3070. I wouldn't mind using one of the cloud-based services to fix this if it is low cost, like the one advertised on chub.ai. That might be a good video to make...?

caidicus

Also stuck on deepspeed module not found.

Aitrepreneur

Try this: go to the xtts folder, click on the folder path, type cmd and press enter; this will bring up the command prompt window. There, type venv\Scripts\activate and press enter, then type pip install deepspeed and press enter. If that still doesn't work for some reason, edit the LaunchXTTS.bat file and remove the deepspeed argument.

Aitrepreneur

Well, there is no magic; running an LLM + TTS and everything in between takes a lot of VRAM.

Kevin Maree

Hmm, once everything has started and I receive the web URL, when I try to open it it states '404 Not Found' in the console. But looking there, it seems to change the port in the error message. I have tried http://localhost:8020, Localhost8020, 127.0.0.1:8020, and the error messages seem to add random ports. Do you have any ideas?

Kevin Maree

To add to this: This is what the log says:
INFO: Started server process [5772]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:8020 (Press CTRL+C to quit)
INFO: 127.0.0.1:63220 - "GET / HTTP/1.1" 404 Not Found
INFO: ::1:63237 - "GET / HTTP/1.1" 404 Not Found
INFO: ::1:63237 - "GET / HTTP/1.1" 404 Not Found
INFO: ::1:63334 - "GET / HTTP/1.1" 404 Not Found
The 404's are my attempts.

Aitrepreneur

what exactly are you trying to do here? Is this what happens when launching the extras extension?

Kevin Maree

This is after LaunchXTTS.bat is done loading. The INFO are the last lines on the console window.

Aitrepreneur

does the tts work anyway inside sillytavern? I can't find anyone else having this error with xtts so replicating it is hard

Kevin Maree

Yeah, I've spent quite a lot of hours now searching the internet, and it seems to have to do with how it loads the UI. But if I clone the xtts-webui version of it, all works fine. As for your question: yes, it does load the audio. I've not had great results with custom ones, but I am looking through 'xtts-webui' and I am getting there. I'll file this under a 'me' problem, and if I end up finding the solution I will post it here. Thank you for your time Aitrepreneur, I appreciate your hard work.

dustyday

Did anyone get this to work yet? I'm getting similar errors. I've tried excluding the folder from the antivirus, installing deepspeed via pip install, and removing the deepspeed line. On launch the window pops up and vanishes in a second: "Unable to import torch, pre-compiling ops will be disabled." - even after I run pip install torch in the same folder.

Aitrepreneur

If the window vanishes, that's indeed because of the antivirus/firewall blocking the file from running further. I usually recommend people not to install any antivirus to begin with, since you don't need it anymore on Windows, but try disabling it completely before running the installer again, maybe in another folder.

DuFFy

I have the same issue as you; I'll see if your solution works for me too.

DuFFy

Exactly the same issue as yours, Kuzo. You're a legend, thanks!

Alex Iannelli

I used the installers as well as trying to install manually.

Aitrepreneur

In your case you might need to completely uninstall Python and reinstall it manually; you probably had a bad Python version already installed. Use this one instead: https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe

dustyday

bro your installer not only made my antivirus crazy but it wiped my sd (python) out - not impressed. having to reinstall from backup :/

Aitrepreneur

There is nothing in this installer that would wipe a Python installation, and the antivirus alert is always a false positive. I usually recommend people not to use antiviruses anyway; you don't need them with the built-in Windows protection.

cool1

In launch_extras.bat the first command is "cd SillyTavern-extras" (which fails on mine as there isn't one). Are we supposed to create a SillyTavern-extras folder somewhere in our SillyTavern folder, and if so, what do we put in it? Right now it's not allowing me to save an updated expressions file.

Alex Iannelli

I updated to 3.10.6 (via conda) and reran the 1_INSTALL-REQ.bat and 2_UPDATE-TTS.bat in text-generation-webui. When I run LaunchXTTS.bat from the SillyTavern folder, the CMD window just opens and closes. I ran the call venv\Scripts\activate and python -m xtts_api_server --streaming-mode-improve --deepspeed --stream-play-sync manually from the CMD prompt in the XTTS folder and it displays the following:
S:\SillyTavern\xtts>call venv\Scripts\activate
(venv) S:\SillyTavern\xtts>python -m xtts_api_server --streaming-mode-improve --deepspeed --stream-play-sync
Traceback (most recent call last):
File "S:\SillyTavern\xtts\venv\lib\site-packages\numpy\core\__init__.py", line 23, in from . import multiarray
File "S:\SillyTavern\xtts\venv\lib\site-packages\numpy\core\multiarray.py", line 10, in from . import overrides
File "S:\SillyTavern\xtts\venv\lib\site-packages\numpy\core\overrides.py", line 6, in from numpy.core._multiarray_umath import (
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\aliin.000\miniconda3\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None,
File "C:\Users\aliin.000\miniconda3\lib\runpy.py", line 86, in _run_code exec(code, run_globals)
File "S:\SillyTavern\xtts\venv\lib\site-packages\xtts_api_server\__main__.py", line 46, in from xtts_api_server.server import app
File "S:\SillyTavern\xtts\venv\lib\site-packages\xtts_api_server\server.py", line 1, in from TTS.api import TTS
File "S:\SillyTavern\xtts\venv\lib\site-packages\TTS\api.py", line 6, in import numpy as np
File "S:\SillyTavern\xtts\venv\lib\site-packages\numpy\__init__.py", line 144, in from . import core
File "S:\SillyTavern\xtts\venv\lib\site-packages\numpy\core\__init__.py", line 49, in raise ImportError(msg)
ImportError: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.10 from "S:\SillyTavern\xtts\venv\Scripts\python.exe"
* The NumPy version is: "1.22.0"
and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help.
Original error was: No module named 'numpy.core._multiarray_umath'

Aitrepreneur

Don't update with conda. As I said, use the manual Python installer to uninstall Python completely, then use the installer again to reinstall it, adding Python to PATH; you probably had an older version of Python incorrectly installed previously.

Aitrepreneur

the extras folder is supposed to be automatically installed with the 1 click installer, so you shouldn't have to do anything

cool1

Thanks. It's somehow working now. I think it did say it was installing that stuff during the installation, but for some reason the uploading of expressions wasn't working. But I clicked "Extensions" and "Download extensions & assets", and somehow after that it started allowing me to add the different emotion images. It can still show old images there unless the page is refreshed sometimes (which then means you need to reconnect - which just seems like a slight bug in the app), but it's mostly working, thanks.

This guy

I seem to have it working, but there is no voice coming back. When I look into the TTS folders (model, output & speakers), all 3 of them are empty, which in turn means there are no voices to choose from under the settings in SillyTavern. Anyone got an idea where to go from here? :/

CocoGrizzly

I finally got this working (installing LaunchXTTS before running SillyTavern for the first time did the trick), but my AI's voice is all choppy and slow. Any recommendations on how to fix it?

CocoGrizzly

AMD Radeon 6750XT - when it dictates, it's generally a half-second between words

Aitrepreneur

Yeah, probably because it's an AMD... so a lot of the optimisations might not work. It could also be that the model you are using takes too many resources on your GPU to run quickly.

CocoGrizzly

That tracks - thanks anyway. And thanks for all the great guides, your stuff is some of the best out there dude

JORBO

When I go to LaunchXTTS.bat the window just closes and nothing happens.

Vaivij

I get the following error after following all the steps (I think): TTS Provider not ready. TypeError: Failed to fetch
After I click on the launchTTS file, I get the following error:
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Traceback (most recent call last):
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\xtts_api_server\RealtimeTTS\engines\coqui_engine.py", line 289, in _synthesize_worker tts.load_checkpoint(
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\TTS\tts\models\xtts.py", line 772, in load_checkpoint self.gpt.init_gpt_for_inference(kv_cache=self.args.kv_cache, use_deepspeed=use_deepspeed)
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\TTS\tts\layers\xtts\gpt.py", line 222, in init_gpt_for_inference import deepspeed
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\deepspeed\__init__.py", line 9, in from .runtime.engine import DeepSpeedEngine
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\deepspeed\runtime\engine.py", line 16, in from tensorboardX import SummaryWriter
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\tensorboardX\__init__.py", line 5, in from .torchvis import TorchVis
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\tensorboardX\torchvis.py", line 11, in from .writer import SummaryWriter
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\tensorboardX\writer.py", line 15, in from .event_file_writer import EventFileWriter
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\tensorboardX\event_file_writer.py", line 28, in from .proto import event_pb2
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\tensorboardX\proto\event_pb2.py", line 15, in from tensorboardX.proto import summary_pb2 as tensorboardX_dot_proto_dot_summary__pb2
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\tensorboardX\proto\summary_pb2.py", line 15, in from tensorboardX.proto import tensor_pb2 as tensorboardX_dot_proto_dot_tensor__pb2
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\tensorboardX\proto\tensor_pb2.py", line 15, in from tensorboardX.proto import resource_handle_pb2 as tensorboardX_dot_proto_dot_resource__handle__pb2
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\tensorboardX\proto\resource_handle_pb2.py", line 35, in _descriptor.FieldDescriptor(
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\google\protobuf\descriptor.py", line 561, in __new__ _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0. If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Process Process-1:
Traceback (most recent call last):
File "C:\Users\vaivi\miniconda3\lib\multiprocessing\process.py", line 315, in _bootstrap self.run()
File "C:\Users\vaivi\miniconda3\lib\multiprocessing\process.py", line 108, in run self._target(*self._args, **self._kwargs)
File "D:\OneDrive\Documents\GitHub\sillytavern\SillyTavern\xtts\venv\lib\site-packages\xtts_api_server

Maxi23

I had the same issue: I had installed Python 3.12, not Python 3.10. It will not work with 3.12. I removed Python, installed Python 3.10, and it's now working correctly. **Forgot to add: re-install SillyTavern using the full installer .bat file once you have the correct Python installed.
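
If you're not sure which version is being picked up, you can check it from a command prompt before reinstalling:

    python --version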

Aitrepreneur

that's usually because of your antivirus/firewall blocking the file from running, disable them and relaunch it

Aitrepreneur

Do not install SillyTavern and the webui in a OneDrive folder. Install them on your PC, in a desktop folder or somewhere else.

Neker Play

It automatically closes; running it from the prompt says "No module named xtts_api_server", but it is installed.

Aitrepreneur

if it automatically closes that's usually your antivirus stopping the file from running further, disable it and try again.

Neker Play

Resolved by reinstalling. Now launching LaunchXTTS.bat says: ImportError: DLL load failed while importing transformer_inference_op: the specified module could not be found.

Luna

What should I do when it says "TTS Provider not ready. TypeError: Failed to fetch"?

Aitrepreneur

Where does it say that? Also you can try refreshing the tts provider by clicking on the refresh button

Dominus

I have the exact same issue, but I found the cause. Somehow, LaunchXTTS and LaunchExtras are mutually exclusive. If I run one and it does the installation, the other one stops working and has the issue where it opens for half a second and closes. If I run XTTS first, then the issue is with Extras. If I reinstall everything and run Extras first, the issue is with XTTS.

Walker4k

I get the following error. Prior to attempting to install LaunchXTTS.bat, I used your OCIs to install SD / Kohya & the ST basic install (LaunchEXTRAS.bat also throws an error).
LaunchXTTS.bat
O:\SillyTavern\SillyTavern\xtts\venv\lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
2024-01-18 14:17:46.460 | INFO | xtts_api_server.modeldownloader:install_deepspeed_based_on_python_version:47 - Python version: 3.9
2024-01-18 14:17:46.460 | ERROR | xtts_api_server.modeldownloader:install_deepspeed_based_on_python_version:64 - Unsupported Python version on Windows.
2024-01-18 14:17:46.469 | WARNING | xtts_api_server.server::62 - 'Streaming Mode' has certain limitations, you can read about them here https://github.com/daswer123/xtts-api-server#about-streaming-mode
2024-01-18 14:17:46.469 | INFO | xtts_api_server.server::65 - You launched an improved version of streaming, this version features an improved tokenizer and more context when processing sentences, which can be good for complex languages like Chinese
2024-01-18 14:17:46.470 | INFO | xtts_api_server.RealtimeTTS.engines.coqui_engine:__init__:103 - Loading official model 'v2.0.2' for streaming v2.0.2
O:\SillyTavern\SillyTavern\xtts\venv\lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
> Using model: xtts
CoquiEngine: Error initializing main coqui engine model: No module named 'deepspeed'
Traceback (most recent call last):
File "O:\SillyTavern\SillyTavern\xtts\venv\lib\site-packages\xtts_api_server\RealtimeTTS\engines\coqui_engine.py", line 289, in _synthesize_worker tts.load_checkpoint(
File "O:\SillyTavern\SillyTavern\xtts\venv\lib\site-packages\TTS\tts\models\xtts.py", line 772, in load_checkpoint self.gpt.init_gpt_for_inference(kv_cache=self.args.kv_cache, use_deepspeed=use_deepspeed)
File "O:\SillyTavern\SillyTavern\xtts\venv\lib\site-packages\TTS\tts\layers\xtts\gpt.py", line 222, in init_gpt_for_inference import deepspeed
ModuleNotFoundError: No module named 'deepspeed'
Process Process-1:
Traceback (most recent call last):
File "C:\Users\johnw\miniconda3\lib\multiprocessing\process.py", line 315, in _bootstrap self.run()
File "C:\Users\johnw\miniconda3\lib\multiprocessing\process.py", line 108, in run self._target(*self._args, **self._kwargs)
File "O:\SillyTavern\SillyTavern\xtts\venv\lib\site-packages\xtts_api_server\RealtimeTTS\engines\coqui_engine.py", line 289, in _synthesize_worker tts.load_checkpoint(
File "O:\SillyTavern\SillyTavern\xtts\venv\lib\site-packages\TTS\tts\models\xtts.py", line 772, in load_checkpoint self.gpt.init_gpt_for_inference(kv_cache=self.args.kv_cache, use_deepspeed=use_deepspeed)
File "O:\SillyTavern\SillyTavern\xtts\venv\lib\site-packages\TTS\tts\layers\xtts\gpt.py", line 222, in init_gpt_for_inference import deepspeed
ModuleNotFoundError: No module named 'deepspeed'

Aitrepreneur

Try this: go to the xtts folder, click on the folder path, type cmd and press enter; this will bring up the command prompt window. There, type venv\Scripts\activate and press enter, then type pip install deepspeed and press enter. If that still doesn't work for some reason, edit the LaunchXTTS.bat file and remove the deepspeed argument.

Wielobel

I know this might not be the best place to ask this question, but can anyone recommend a good model for NSFW roleplay that doesn't get lost and can follow leads/ideas etc.? Most of the time, regardless of the character or the model, it gets confused, lost and starts talking gibberish. They are fine for a quick session, but if I want to get more from them, like expand or continue, it's just gone. As far as I remember, the best results I've ever got were on a site that used jailbroken OpenAI, but they moved from that to a custom model and the quality is gone. Any suggestions? Oh, and I run on a 16GB Nvidia card, but for a good model I am willing to upgrade/expand, since for almost a year it has been my holy grail to find some good model/character combination.

Aitrepreneur

This has to do with context length: you need to choose models with as much context length as possible, but the more context length, the more VRAM it's going to use. For your GPU, I might recommend trying this one: https://huggingface.co/TheBloke/dolphin-2.6-mistral-7B-GPTQ - but obviously even this model will get confused after a while; they all have limits.

xdsp

I too am having issues with the XTTS2. Upon running LaunchXTTS.bat, I get a Windows warning. I ignore the warning and run the file anyway, then a command window appears and immediately closes a millisecond later, and I am unable to run the XTTS server to connect SillyTavern.

Wielobel

Thanks - I will definitely try the model you've suggested. Yeah, I guess the limitation will be there no matter what, but i will keep on hoping :D BTW thanks for all the materials you are putting out there :) great job man!

xdsp

XTTS seems to work just fine in regular TextGen Webui, but upon loading up SillyTavern and enabling XTTS in there, I get a "TTS Provider failed to return voice ids." error message at the top of the SillyTavern GUI.

xdsp

As for the actual LaunchXTTS.bat window itself.. it doesn't say anything, from what I notice. It opens the command window to a blank _ for about a second before closing the command window automatically.

Vinicius Appel

FileNotFoundError: Could not find module 'Z:\_AI\SillyTavern\xtts\venv\Lib\site-packages\torchaudio\lib\libtorchaudio.pyd' (or one of its dependencies). Try using the full path with constructor syntax.

Dawson Pitcher

Wanted to hop in and say for reference that I had the same error and removing the argument worked.

Beto Matias

LaunchEXTRAS seems to be working fine (at least it is creating a server running on localhost), but when I try to run LaunchXTTS it won't even open the cmd window; it looks like it tried, but then no window opens. Any idea what it could be?

Aitrepreneur

Open a command prompt window, drag and drop LaunchXTTS inside and press enter; it should at least give you an error code so you can see what's wrong.

Beto Matias

C:\Users\myuser>F:\ArtificialInteligence\SillyTavern\LaunchXTTS.bat
The system could not find the specified path.
The system could not find the specified path.
C:\Program Files\Python310\python.exe: No module named xtts_api_server
That's the error it shows when I try from the prompt window and launch from there, but first it attempted to open a new cmd window, only to behave like before and close it instantly.

Cisco Q

I'm having the same issues as those listed above. Getting told:
C:\Users\myuser>C:\Users\user\OneDrive\Desktop\LaunchEXTRAS.bat
The system cannot find the path specified.
python: can't open file 'C:\\Users\\user\\server.py': [Errno 2] No such file or directory
(extras) C:\Users\user>C:\Users\neoda\OneDrive\Desktop\LaunchXTTS.bat
The system cannot find the path specified.
The system cannot find the path specified.
C:\Users\user\miniconda3\envs\extras\python.exe: No module named xtts_api_server

Aitrepreneur

Are you correctly running the launcher from the SillyTavern folder? Is the xtts folder present inside the SillyTavern folder?
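
For reference, the folder layout the launchers expect is roughly this (a sketch of what the 1-click installer sets up; exact contents may differ):

    SillyTavern\
        LaunchXTTS.bat
        LaunchEXTRAS.bat
        SillyTavern-extras\
        xtts\
            venv\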

trevor gus

I'm having a similar problem; it says the following:
Microsoft Windows [Version 10.0.22621.3085]
(c) Microsoft Corporation. All rights reserved.
C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern>C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\LaunchXTTS.bat
Traceback (most recent call last):
File "C:\Users\Gwumbles\miniconda3\lib\runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None,
File "C:\Users\Gwumbles\miniconda3\lib\runpy.py", line 87, in _run_code exec(code, run_globals)
File "C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\lib\site-packages\xtts_api_server\__main__.py", line 46, in from xtts_api_server.server import app
File "C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\lib\site-packages\xtts_api_server\server.py", line 1, in from TTS.api import TTS
File "C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\lib\site-packages\TTS\api.py", line 12, in from TTS.utils.synthesizer import Synthesizer
File "C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\lib\site-packages\TTS\utils\synthesizer.py", line 11, in from TTS.tts.configs.vits_config import VitsConfig
File "C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\lib\site-packages\TTS\tts\configs\vits_config.py", line 5, in from TTS.tts.models.vits import VitsArgs, VitsAudioConfig
File "C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\lib\site-packages\TTS\tts\models\vits.py", line 10, in import torchaudio
File "C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\lib\site-packages\torchaudio\__init__.py", line 1, in from . import ( # noqa: F401
File "C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\lib\site-packages\torchaudio\_extension\__init__.py", line 45, in _load_lib("libtorchaudio")
File "C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\lib\site-packages\torchaudio\_extension\utils.py", line 64, in _load_lib torch.ops.load_library(path)
File "C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\lib\site-packages\torch\_ops.py", line 933, in load_library ctypes.CDLL(path)
File "C:\Users\Gwumbles\miniconda3\lib\ctypes\__init__.py", line 382, in __init__ self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts\venv\Lib\site-packages\torchaudio\lib\libtorchaudio.pyd' (or one of its dependencies). Try using the full path with constructor syntax.
(venv) C:\Users\Gwumbles\Desktop\AI_Bot_bs\silly_tavern\SillyTavern\xtts>
I'm not exactly sure what I did wrong, but I used your one-click install bat files.

Irving Sanchez

What's the difference between all the TTS .bat files?

Janne Kallio

I noticed after reinstalling Windows and these programs that I had to remove all files from the old installation to make it work. My drive names changed with the Windows installation and something was pointing to the old drive name. After a clean install to an empty folder, everything started to work again.

MattMatt

I know this video is a little older, but I'm having an issue with getting XTTS to work through SillyTavern. It actually works if I use it through Text GenerationWebUI, but fails when sending replies to the characters in SillyTavern. Here is the error: Output generated in 42.60 seconds (0.33 tokens/s, 14 tokens, context 501, seed 786865685) Exception in ASGI application Traceback (most recent call last): File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 411, in run_asgi result = await app( # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\fastapi\applications.py", line 1054, in __call__ await super().__call__(scope, receive, send) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\applications.py", line 123, in __call__ await self.middleware_stack(scope, receive, send) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\middleware\errors.py", line 186, in __call__ raise exc File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__ await self.app(scope, receive, _send) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\middleware\cors.py", line 85, in __call__ await self.app(scope, receive, send) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__ await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app raise exc File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app await app(scope, receive, sender) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\routing.py", line 756, in __call__ await self.middleware_stack(scope, receive, send) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\routing.py", line 776, in app await route.handle(scope, receive, send) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\routing.py", line 297, in handle await self.app(scope, receive, send) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\routing.py", line 77, in app await wrap_app_handling_exceptions(app, request)(scope, receive, send) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app raise exc File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app await app(scope, receive, sender) File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\starlette\routing.py", line 72, in app response = await func(request) ^^^^^^^^^^^^^^^^^^^ File 
"C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\fastapi\routing.py", line 278, in app raw_response = await run_endpoint_function( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Llama\TextWebUI\text-generation-webui\installer_files\env\Lib\site-packages\fastapi\routing.py", line 191, in run_endpoint_function return await dependant.call(**values) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Llama\TextWebUI\text-generation-webui\extensions\openai\script.py", line 116, in openai_completions response = OAIcompletions.completions(to_dict(request_data), is_legacy=is_legacy) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Llama\TextWebUI\text-generation-webui\extensions\openai\completions.py", line 555, in completions return deque(generator, maxlen=1).pop() ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Llama\TextWebUI\text-generation-webui\extensions\openai\completions.py", line 440, in completions_common for a in generator: File "C:\Llama\TextWebUI\text-generation-webui\modules\text_generation.py", line 36, in generate_reply for result in _generate_reply(*args, **kwargs): File "C:\Llama\TextWebUI\text-generation-webui\modules\text_generation.py", line 122, in _generate_reply reply = apply_extensions('output', reply, state) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Llama\TextWebUI\text-generation-webui\modules\extensions.py", line 231, in apply_extensions return EXTENSION_MAP[typ](*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Llama\TextWebUI\text-generation-webui\modules\extensions.py", line 89, in _apply_string_extensions text = func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Llama\TextWebUI\text-generation-webui\extensions\coqui_tts\script.py", line 144, in output_modifier output_file = Path(f'extensions/coqui_tts/outputs/{state["character_menu"]}_{int(time.time())}.wav') ~~~~~^^^^^^^^^^^^^^^^^^ KeyError: 'character_menu'

MattMatt

Ignore this. It's an issue with the coqui_tts extension. The XTTSv2 works without enabling it in Text Generation WebUI. My bad.