

Narrator: "It was clickbait..."


Hi everyone! This post contains images from the current update that I already posted here on Patreon, so if you haven't checked the previous posts or haven't played v0.4 yet, wait to read this post until you do. This post is pretty long, so you might want to make yourself a cup of coffee/tea or brew yourself some mate =).


INTRODUCTION - What can I do to increase production?

  • (You can skip this part if you want).

What is this post about? Well... it's about an experiment I've been working on. This last update was extremely difficult to complete and took too much time, and although I've been told many times that you guys don't expect updates every month and that quality takes time, I'd like to release updates more regularly.

I have been planning to do something about the slow speed of image production for a long time now. My plan (far into the future) is to replace Koikatsu with proprietary 3D models made in Blender. The current process takes a lot of time; I don't want to spoil things about the new update, but the CGs use less and less tracing assistance, and I end up drawing everything myself (because there is a big difference between how the characters look in Koi and how I want them to look).

Using my own 3D models (made to look the way I want from the get-go) would greatly reduce the time it takes to produce a single CG; I could even hire someone to do the linework for the body and focus on the faces myself. But that's just a theory: I have yet to try posing one of these 3D models and producing an image to see if it really saves as much time as I think it does. Taz (patron / tomboy enjoyer) has been helping me with this, but since I haven't had time to learn Blender, I haven't tried yet, so I don't plan on commissioning any models until I've learned the program and proven that this method can work.

Using Blender to increase production is the future plan, but what can I do now? I've thought about releasing single-scene updates, or limiting the animations, to get updates out faster... But that only affects the amount I produce, not the speed per piece. That's why I've been experimenting with AI to assist me in production...


CURRENT METHOD - Where could it be improved?

I think AI still has some time to go before it can generate consistent content. Nowadays it's possible to generate good-quality images very quickly, but applying that to game development still has some way to go. To be clear, I'm not looking for AI to replace all my work; I just want it to assist me a little.

(1) Current process.

I've explained this before, but in case you're new: my current process starts with a Koikatsu screenshot (where I design/pose the characters) to lay out the scene. I then fade that image as much as possible and sketch/trace what I want to do. For the backgrounds, I use external 3D renders made in Lumion and modeled in SketchUp (with a little drawing on top), or I draw them completely by hand if the scene is too complicated/unique. Then I vectorize the linework and paint, reusing the original screenshot as much as possible to save work, although lately I add more and more details (shadows/lights) by hand, since Koi renders are very basic in that sense. All these steps (except the original screenshot) take a lot of hours, especially the vectorizing and the coloring + effects. What I want AI to do is eliminate two of these stages, the sketching and the coloring (the final touches would still be done by hand), and MAYBE reduce the time of the other steps.

TL;DR - Currently the process is something like this:

  • Koikatsu screenshot
  • Sketching/Tracing + adding of unique elements
  • 3d Background render
  • Linework
  • Coloring and Cleanup
  • Lights/shadows + Fine details


EXPERIMENTATION - AI applications in image creation.

What equipment am I using? (A minimal setup sketch follows the list.)

  • Stable diffusion + Novel AI
  • Nvidia RTX 3060 (12 GB VRAM) + 32 GB RAM
  • Img2img
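
For the technically curious, this is roughly what such a setup looks like in code if you use Hugging Face's diffusers library. This is a sketch under assumptions, not my exact toolchain: the model name is a stand-in for whichever SD/NovelAI checkpoint is actually installed, and I run everything through a local UI rather than a script.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline

# "runwayml/stable-diffusion-v1-5" is a placeholder for the installed checkpoint
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,   # half precision: fits comfortably in 12 GB VRAM
).to("cuda")
pipe.enable_attention_slicing()  # trades a little speed for lower VRAM use
```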


Before all this craziness with Midjourney, DALL-E and Stable Diffusion, I had already been analyzing the possibility of integrating AI into my work, but in a different way. If using Blender models worked, I could commission animations that look as good as those Overwatch ones that people like so much... With that method, the final static images would still be 2D, but the animations could be 3D to save time. In case people prefer 2D animations, there is a way to transform those animations into 2D: with the help of AI, I would just draw certain keyframes and then let the machine draw the intermediate frames (but this method turned out not to be accurate enough, so it is discarded for now; a rough sketch of the idea is below).
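
Just to illustrate the in-betweening idea (this is not the tool I tested, and dedicated interpolators like RIFE do it far better), here is a crude optical-flow version with OpenCV, with made-up file names:

```python
import cv2
import numpy as np

# two hand-drawn keyframes (hypothetical files)
a = cv2.imread("keyframe_a.png")
b = cv2.imread("keyframe_b.png")
ga = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
gb = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)

# dense optical flow from keyframe A to keyframe B
flow = cv2.calcOpticalFlowFarneback(ga, gb, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# warp A halfway along the flow to approximate the t=0.5 in-between;
# this shortcut is exactly the kind of thing that produces the inaccuracies I mentioned
h, w = ga.shape
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
mid = cv2.remap(a, xs - flow[..., 0] * 0.5, ys - flow[..., 1] * 0.5, cv2.INTER_LINEAR)
cv2.imwrite("inbetween.png", mid)
```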

My current goal is to increase production speed without compromising quality. I could achieve this by improving the basic renders I get from Koikatsu with AI, skipping the sketch stage and jumping directly to vectorization. If the resulting image is really good, it can also reduce the time spent on the final stage (shadows and highlights). Unfortunately, this new method adds more steps to the whole process, but I believe that with practice it can end up reducing total development time.

The process would be like this:

  • Koikatsu screenshot
  • Image prep for the AI
  • 3d Background render
  • AI image generation with img2img + upscale
  • AI mistake fixing + cleanup
  • Adding my style + unique elements
  • Linework
  • Highlights/shadows (minor corrections) and fine details

(2) AI MC experiment: Koikatsu screenshot, image prep, AI img2img output.

Here you can see the first steps of the new system. The image preparation consists of drawing over the Koi screenshot; I deliberately keep this image simple so it's easier for the AI to read, since its quality only needs to be the minimum necessary for the machine to do the rest of the work, thus saving time. Next comes the image generation through img2img with low denoising (a sketch of that call follows). Getting what I want takes quite some time, not the generation itself, but I will expand on this below. This MC image came out relatively fast and more iterations weren't necessary, but that wasn't the case with the other images.
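
For reference, the generation step itself is a single img2img call; a minimal sketch with diffusers, continuing the `pipe` from the setup sketch above (prompt and file names are examples, not my real ones). The key knob is `strength`, the denoising amount: low values preserve the prepped composition and only repaint the rendering.

```python
from PIL import Image

# the prepped drawing-over-screenshot (hypothetical file name)
init = Image.open("mc_prep.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="1boy, brown hair, white shirt, anime style, best quality",
    negative_prompt="lowres, blurry, bad anatomy, extra fingers",
    image=init,
    strength=0.35,       # low denoising: keep my composition, improve the rendering
    guidance_scale=7.5,  # how strongly the prompt steers the output
).images[0]
result.save("mc_img2img.png")
```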

(3) AI MC experiment: AI img2img output, vectorized line-work, final piece.

Having the AI image, I rescale it, clean it, add missing elements and proceed to the linework (a sketch of the rescale step is below). With this process I skipped the sketching (I still needed to do a little of it to add the collar and so on). Despite taking more time overall, I think the resulting quality is superior to what I could have achieved on my own; if I can replicate this quality in other angles and pieces, I would say this part of the experiment was successful.
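
About the rescale: one common way to do the "img2img + upscale" step (a sketch only, not necessarily what any particular UI does internally) is a plain Lanczos resize followed by a second, very low-strength img2img pass to crisp the details back up. `pipe` is the img2img pipeline from the setup sketch.

```python
from PIL import Image

small = Image.open("mc_img2img.png")
big = small.resize((small.width * 2, small.height * 2), Image.LANCZOS)

# near-zero denoising: keep everything, just re-render fine detail at the new size
sharpened = pipe(
    prompt="1boy, brown hair, white shirt, anime style, best quality",
    image=big,
    strength=0.15,
).images[0]
sharpened.save("mc_upscaled.png")
```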

ACTUAL USE IN THE GAME - Making lewds...

In the next experiment, I tried to generate an image that could actually be in the game, something useful. The number of hours I invested in this image was a risk: it might not have worked, and I would have lost all those hours + the CG. This experiment almost failed. If it had worked, I would have also cut the linework time in half, but the result didn't look like something made by me, so I decided to go traditional, since the style of the game is already inconsistent enough.

(4) Kana experiment: Koikatsu screenshot, image prep, AI img2img output (with edits).

As in the previous experiment, I started with a screenshot, but I had to spend more time prepping the image since it's a complicated composition and the AI wasn't generating anything useful. I also added a simple background + shadows to help ground the character in the scene. In this experiment, generating a useful image took much more time (the one you see here has manual retouching and inpainting), and investing even more time didn't help much. Although the AI didn't give the expected results, I think it's still possible to take advantage of it by taking pieces of several images I find useful (hair, eyes, tummy and sweater) and drawing the rest myself.

(5) Kana experiment: final piece. "Love from Kazakhstan!"

I'm happy with the final result. I think that if I had given up on the AI image generation earlier (and drawn the missing parts myself), I would have finished it faster. As with the previous experiment, I think the resulting quality is something I wouldn't have achieved on my own, even if I had spent more time doing it by hand.


OTHER USES - Generating facial expressions.

Although the main image takes a long time to make, that can be offset by saving time on generating the different facial expressions. They take me a long time to make and don't always come out the way I'd like, so, using the resulting image above, I tried to generate different mouths to save time and increase the quality of the final piece.

(6) Facial expressions experiment: inpainting / img2img.

I started from a simplified version of the mouth I wanted and experimented with two methods: the first is to run just the face through the AI and let it modify the whole crop (with low denoising); the second is inpainting, which only modifies the mouth (a sketch of the inpainting variant is below). Ideally, if I could make inpainting work, it would help me generate better expressions faster, but I find that normal img2img focused on the face produces more usable pieces, even if they take more work to integrate later.
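
The inpainting variant needs its own pipeline and checkpoint in diffusers; here's a sketch with placeholder file names and prompt. The mask is white over the mouth (the region the AI may repaint) and black everywhere it must leave alone.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # placeholder inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

face = Image.open("face_crop.png").convert("RGB").resize((512, 512))
mask = Image.open("mouth_mask.png").convert("RGB").resize((512, 512))

out = inpaint(
    prompt="open mouth, happy smile, anime style",
    image=face,
    mask_image=mask,  # white = repaint, black = leave untouched
).images[0]
out.save("mouth_variant.png")
```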

(7) Facial expressions experiment: inpainting/img2img, the best outcomes.

The results are not ideal. I have seen others do this and get more usable results; maybe the prompts I use are the problem, or maybe it's because I use the basic version of SD + NovelAI. For now, it only serves as inspiration. I made many iterations, and this is the best I could get (see image 7).


POSITIVES AND NEGATIVES - What it can and cannot do.

These are the issues:

  • Adds more labor time per piece.
  • The quality it produces is not reliable yet.
  • Does not work for all pieces.
  • Can't do hands (but neither can I lol).
  • For parts that require variations, it may not work. *

*If it is a static CG that only has variations on the face, it can be used; if I have to move limbs or change things around, it may not work even if I use the same seed, due to changes in the style (this needs to be checked; the sketch below shows what I mean by reusing a seed).
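
For clarity, "using the same seed" just means pinning the random generator so two runs differ only in the inputs I change. A minimal sketch, reusing the img2img `pipe` from the setup above; the file name is hypothetical.

```python
import torch
from PIL import Image

# hypothetical re-prepped image with a limb moved / pose changed
edited_pose = Image.open("kana_pose_edit.png").convert("RGB").resize((512, 512))

# pin the RNG: the same seed + settings should reproduce the same "style roll"
gen = torch.Generator(device="cuda").manual_seed(1234)

variant = pipe(
    prompt="1girl, green sweater, anime style, best quality",
    image=edited_pose,
    strength=0.35,
    generator=gen,
).images[0]
variant.save("kana_variant.png")
```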

What can I use it for right now?

  • "Tenderizing" the image. *
  • Improving fine details (skin, hair, clothes).
  • Inspiration for facial expressions.

* By "tenderizing" I mean reducing the work I need to do to make the koikatsu screenshot look good, it makes them look less stiff. At the moment it is not possible to increase my work speed with this (it even adheres more), but it helps to raise the quality of the drawing, so I will keep trying.


OTHER METHODS - What else could I try?

There are things I haven't tried yet: for example, installing other models and combining them, improving my prompts, or generating more iterations to increase the chance of a successful rendition. Improving my prompts may help a lot, since I can't generate consistent images in text2img either; once I get better at that, it may fix the img2img results too. I'm currently trying to build a set of prompts that I can reuse every time to generate an acceptable image; the seed sweep sketched below is the kind of iteration I mean.
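
A sketch of that iteration idea: keep one fixed prompt set, sweep seeds, and cherry-pick whatever comes out usable. Prompts and paths are examples, and `pipe` is the img2img pipeline from the setup sketch.

```python
import torch
from PIL import Image

init = Image.open("kana_prep.png").convert("RGB").resize((512, 512))

base_prompt = "1girl, green sweater, smiling, anime style, best quality"
negative = "lowres, blurry, bad anatomy, extra fingers"

# 16 candidates from the same prepped image; more rolls = better odds of a keeper
for seed in range(100, 116):
    gen = torch.Generator(device="cuda").manual_seed(seed)
    img = pipe(
        prompt=base_prompt,
        negative_prompt=negative,
        image=init,
        strength=0.4,
        generator=gen,
    ).images[0]
    img.save(f"candidates/seed_{seed}.png")
```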

Another thing I could try is TEXTUAL INVERSION, which consists of training the AI on my own style. That's what I originally wanted to do, but the current problem is that the faces have a lot of errors, so feeding the AI my images is not a priority yet and would only contaminate the results. I also haven't done it because training would take a lot of time on my hardware, and I have to keep working. (Once an embedding is trained, though, using it is cheap, as the sketch below shows.)
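
Diffusers can load a trained embedding into the same pipeline, and you invoke it with its token; the file name and token here are made up, and `pipe`/`init` are from the earlier sketches.

```python
# load a trained style embedding and give it a prompt token (both hypothetical)
pipe.load_textual_inversion("./embeddings/my_style.bin", token="<my-style>")

styled = pipe(
    prompt="1girl, green sweater, <my-style>",
    image=init,
    strength=0.4,
).images[0]
styled.save("styled_candidate.png")
```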

Generating more images might work... and maybe a better video card would help, but my 3060 is enough for the moment. After all, I just need the AI to improve the base image to make my job easier; if it has small defects, I can fix them by hand.


FINAL THOUGHTS - A summary in case you skipped all of the above.


The results of this experiment didn't turn out as I expected: using AI ended up increasing the time it takes me to generate an image instead of reducing it, but it did improve the overall quality. I still think it's worthy of further experimentation. Once I prove I can use SD consistently as a tool, I'll turn to the Discord server for help; at the moment I'm not sure this will work, so I don't want to rally people around anything or commit to this method just yet.


  • What do you think of the quality of the new images? Are they too different from my original style? Do you think I shouldn't use AI?
  • Do you think using AI in this case is morally wrong?
  • Do you think it's something worth exploring?

I would really like to know what you guys think about this...


That's it for now. I still haven't decided what I'm going to do next: a content update, a quick update to add the gallery, or redoing the beginning of the game... but don't worry, I'm still working on the game either way! Bye Bye!


Comments

interlinked

Not gonna lie, this is a tricky one. There is always a fine line between quality and quantity of content. If you focus on quantity too much, consumers like the product less. In your case though, you already set the bar so high that a quality reduction wouldn't have much of an impact. On the other hand, high quality takes too much time, so consumers are going to lose interest in your product eventually. I feel like your game already has such high quality that I definitely wouldn't opt for increasing it even further and delaying updates as a consequence. I honestly think you should prioritize quantity a little bit more. I had already forgotten this game existed; some random discussion on f95 mentioned you got an update, and I had to replay the game to remember what's going on. I'm really looking forward to what AI art is going to be capable of in the future. My bet is that we are not far from the point where devs will just use AI art to generate scenes + animations without having to interfere at all (3 years tops). I know Google is already playing around with character integrity, meaning you can generate different scenes with the same character. This was a big issue because it was impossible to get the exact same character re-rendered again. I humbly suggest not wasting your time on redoing already-implemented scenes. You mentioned the beginning of the game, but I never found myself thinking "I feel like this scene needs more polishing" while playing your game.

Bruce Banner

I'm not too put off by the difference in your style from incorporating AI, but if it takes longer, then I can do without, lol. Experiment all you like and find a happy medium between the speed of completing the art and the appearance of the final product.