I made a video for this previously here, but people keep asking me how to set up image AI in VAM, so I'm also writing this post to have it in text form: it's easier to update, and it gives people a place to ask questions if they have any.


HOW TO INSTALL

1. INSTALL. You'll need AUTOMATIC1111 running in the background (install, info on some features; there are many better guides about AUTOMATIC1111 on YouTube)
2. ENABLE API. After you install AUTOMATIC1111, you need to enable its API feature so that Alive can communicate with it:

Open the file webui-user.bat from A1111 (the main file that you start it with) in Notepad or another text editor and add the "--api" argument to the COMMANDLINE_ARGS line. Restart A1111 after making this change.
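After the edit, the relevant line in webui-user.bat should look something like this (if your COMMANDLINE_ARGS line already has other arguments, just append --api to them, separated by a space):

```bat
set COMMANDLINE_ARGS=--api
```

Once A1111 restarts with this flag, you can confirm the API is live by opening http://127.0.0.1:7860/docs in a browser, which should show the API documentation page.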

3. ENABLE AI IMAGES IN ALIVE. In the Settings app, in the Services tab, you should now see image AI as Connected, and you can enable it:


HOW TO USE

To start taking AI photos, go to the Cam app inside the Alive UI. The images go into VAM's screenshots folder, where Alive creates a separate subfolder for AI images: the actual AI images are saved in "screenshots/Diffusion Cam". There will also be some leftover images directly in the screenshots folder; you can ignore those. I didn't delete them because I think deleting would require VAM to ask permission every time.

The parameters in the Cam app are the same as the ones in A1111; you can read more about them in the AUTOMATIC1111 wiki. Using them well requires a bit of basic A1111 knowledge, so experiment with basic usage there or check some YouTube tutorials. As a very quick outline:
- steps - the number of operations (turns) the AI takes to generate the image. More steps mean better images, but generation takes longer
- CFG - how strictly the AI follows your prompt (the text description). Lower values look more natural (closer to what the AI was "taught"), while higher values "force" the image more toward your particular description
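Under the hood these parameters end up in a request to the A1111 web API that the "--api" flag enables. A minimal sketch of what such a request body looks like, using only the standard `/sdapi/v1/txt2img` endpoint and its documented field names (the prompt text and values here are just illustrative, not what Alive actually sends):

```python
import json
import urllib.request

def build_txt2img_payload(prompt, steps=20, cfg_scale=7.0):
    """Minimal txt2img request body: more steps = slower but cleaner,
    higher cfg_scale = sticks more literally to the prompt."""
    return {
        "prompt": prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
    }

def generate(payload, base_url="http://127.0.0.1:7860"):
    """POST the payload to a locally running A1111 started with --api.
    The response contains the generated images as base64 strings."""
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["images"]

# Example payload only; calling generate() requires A1111 to be running.
payload = build_txt2img_payload("photo of a woman, butterfly tattoo",
                                steps=25, cfg_scale=6.5)
```

This is also a handy way to check that the API is reachable independently of Alive: if a request like this works, the "--api" setup is correct.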

Experienced users might also want to check my LCM post, which lets images be generated much more quickly. For very basic first-time use, things should work with the default settings; you don't need to change anything, except perhaps the Cam > AI Prompt > Prompt field, where you might want to add your own description of the image or scene to help the AI generate the kind of images you want.


The most important setting for VAM purposes is the denoise slider. It controls how much creative freedom the AI gets: at lower values it tries to stick to what it "sees" in VAM, while at higher values it goes crazy with it. In the video of this post I tried to showcase the differences between Denoise slider values:


In the video, the images with the pink background were made at a denoise value of 0.25. This is a low value and it makes the AI stay closer to what's happening in VAM. This is great for consistency:

In the example above, VAM elements are still noticeable if you look closely, like the hairline and the 3D/CGI look of the clothing. Low denoise values keep images consistent, so things stay similar between different shots, but at the cost of AI-driven quality and creativity.

The images with the blue background had a higher denoise value, 0.3.

You can see that the images start looking better. But since the AI has more freedom to alter things, it will also modify things you might not want it to, like replacing the flower hairclip with a butterfly in the first image. I had "butterfly tattoo" in the prompt, and to the AI the flower looked a bit like a butterfly (it was also blue), so it went with that. Consistency is still pretty good, but there will usually be noticeable differences between shots.

The orange background images have a much higher denoise value, 0.5.

The higher denoise value makes the images look even more detailed. But giving the AI more creative freedom also means many more differences between images: you can see in this example that the AI heavily modifies the background, and there are other differences between shots, like the tattoo color.
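In A1111 terms, this workflow is img2img: a VAM screenshot goes in along with a denoising strength. A small sketch of how the denoise slider maps onto the documented `/sdapi/v1/img2img` request fields (the helper name and the placeholder bytes are mine, purely for illustration):

```python
import base64

def build_img2img_payload(screenshot_png: bytes, prompt: str,
                          denoising_strength: float = 0.3):
    """0.25 stays close to the VAM render; 0.5 gives the AI a lot of
    freedom to reinterpret the image (the A1111 slider goes 0..1)."""
    return {
        # img2img takes the source image(s) as base64-encoded strings
        "init_images": [base64.b64encode(screenshot_png).decode("ascii")],
        "prompt": prompt,
        "denoising_strength": denoising_strength,
    }

# Fake screenshot bytes, just to show the shape of the payload:
low = build_img2img_payload(b"\x89PNG...", "woman, pink background", 0.25)
high = build_img2img_payload(b"\x89PNG...", "woman, orange background", 0.5)
```

Posting such a payload to `/sdapi/v1/img2img` on a local A1111 (started with --api) returns base64 images, the same mechanism the examples above demonstrate at 0.25, 0.3 and 0.5.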


With higher denoise values, it's also possible to use the drawing feature in Alive's Aesthetics app to draw items of clothing directly on the model for the AI to then render properly, like glasses for example:

Or piercings, tattoos of course, masks, and so on:


Files

AI Photography: 10 minute photoshoot results

Testing some stuff and made a quick video of ALIVE for Virt-A-Mate and the AI Camera functionality with different levels of denoise. Alive uses local #stablediffusion running on #automatic1111 to generate the AI images. ALIVE has free public releases on my patreon, the most recent public release at the moment of writing this is V65: https://www.patreon.com/spqr_aeternum . It requires Virt-A-Mate (https://www.patreon.com/meshedvr).
CREDITS:
ddaamm - hair
VL_13 - lashes https://www.patreon.com/VL_13
VRDollz - hair flower https://www.patreon.com/vrdollz
Re-Visions_VAM - gloves and boots https://www.patreon.com/ReVisions_Vam
Eros - stockings https://www.patreon.com/Tonyerho
Best-VR-Babes - catsuit https://www.patreon.com/Best_VR_Babes
SONG: Extan - Dark Matter https://www.youtube.com/extan https://soundcloud.com/extandnb/dark-matter
Dark Matter by Extan is licensed under a Creative Commons License.

Comments

salvador Gonzalez

So what models are you playing with? I've been downloading a few and playing around, but was wondering what you are using?

SPQRAeternum

There are so many now, I don't really keep up. I have my own models (unreleased) that are pretty good. They have so many models mixed into them that I don't even know what's in them anymore. It started as HassanBlend mixed with URPM, but I've probably mixed more than 10 models by now. The link in the post is to an old guide I did on how to quickly mix models with EasyDiffusion. I used to mix model A with model B at 10 variations, then run the same prompt with the same seeds for each variation (very quick to do with EasyDiffusion, you just queue them). I would then pick the best variation of the 10 and keep that as my model. Then I'd notice another cool model, C, and repeat the process: mix C with the model that resulted from A+B, and check whether any of the AB+C variations gave better results than the AB one.

WILLIAM GAMER

It would be more practical if there were a tutorial video on the communication between this AI and the plugin. Nothing I do is working.

SPQRAeternum

You can click on the "here" link in the first sentence of the post; there's a tutorial video there. What problems do you have? You'll have to be more specific.