
Content

A few days ago I made a post asking you to send me some technical questions to answer. Keep in mind this isn't meant to be a guide.
I'm also planning the hub post I mentioned yesterday. It will be up soon (not today, though).

So here we go with the questions.

Q: What resolutions do you use, and what about upscaling?

A: For SDXL/Pony it's usually a mix of 768px, 1024px, and 1152px, with an upscale of 1.5x to 2x. It depends on the kind of angle/pose and whether I'm getting too many errors. For SD 1.5 it's 512x768 (or 768x512), or just 512x512, with a 2x upscale.
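The upscale arithmetic above is simple enough to sketch in a few lines. This is only an illustration: the helper name and the round-to-a-multiple-of-8 snapping (a common convenience so upscaled sizes stay friendly to latent-space models) are my own assumptions, not part of the author's workflow.

```python
def upscale_size(width: int, height: int, factor: float, multiple: int = 8) -> tuple[int, int]:
    """Scale a base resolution by `factor`, snapping each side to a multiple of `multiple`."""
    def snap(v: float) -> int:
        return max(multiple, int(round(v / multiple)) * multiple)
    return snap(width * factor), snap(height * factor)

# An SDXL/Pony-style base with a 1.5x upscale, and an SD 1.5 base with a 2x upscale:
print(upscale_size(832, 1216, 1.5))  # -> (1248, 1824)
print(upscale_size(512, 768, 2.0))   # -> (1024, 1536)
```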

Q: Cool idea for a post! - A high level description of your process would be amazing. Like what are the key steps from beginning to end result, what AI programs you use / the main inputs you provide to steer the AI correctly. Maybe what you think sets you apart from other creators? Key tips you wish you knew when you were just starting out? Thanks! 🙏 looking forward to the future post on this topic

A: I don't think I'd show my process from start to finish unless I was considering retirement. The last time I taught something more in-depth, I got the biggest drop in patron numbers I've ever had. But what I can say is that knowing the correct prompts for each model is essential. Just check what the model is based on: Pony Diffusion models use different prompts than other SDXL-based models, and those differ again from SD 1.5.
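The point about model-specific prompting can be illustrated with a small sketch. The example prefixes below are common community conventions (Pony Diffusion's `score_9`-style quality tags, booru-style quality tags for SD 1.5 anime models), not the author's actual prompts, and the helper is hypothetical:

```python
# Illustrative prompt prefixes per model family (community conventions, not the author's prompts).
PROMPT_PREFIX = {
    "pony": "score_9, score_8_up, score_7_up, ",
    "sdxl": "masterpiece, best quality, ",
    "sd15": "masterpiece, best quality, highres, ",
}

def build_prompt(model_family: str, subject: str) -> str:
    """Prepend the family-appropriate quality tags to a subject prompt."""
    return PROMPT_PREFIX[model_family] + subject

print(build_prompt("pony", "1girl, beach, sunset"))
```

The same subject prompt can produce very different results if you pair it with the wrong family's quality tags, which is the mistake the answer warns about.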
As for what sets me apart from other creators, I think it's what people already say: quantity with quality, and posting every day.
When I started, only a few good extensions existed, but they're worth knowing about when you're starting out. It also helps to know that there are models for many concepts and different poses. I believe it's way easier now than when I started, so I don't know what else to say about it.
If you wish to improve the overall quality of your images, do it like any other digital art: increase the resolution as much as you can. Since this is AI, you should also use better-quality AI models.
Another thing is that I at least try to fix mistakes, while some people seem to just go with them. They still happen to me, but less frequently.
So a summary would be: more resolution and better models.

Q: Thank you for this post. Is there a model or checkpoint that you frequently use and recommend? I tried generating for a bit but couldn't get a result where the character looks exactly like in the anime, like we see in your posts.

A: To generate something that looks like the anime, you need a LoRA trained on the anime's style. The checkpoint only matters more for things the anime doesn't cover: other clothing, backgrounds, poses, genitals, concepts. If the LoRA was trained on Pony Diffusion, using Pony will look closer to the original training data. I think most checkpoints on Civitai are excellent; they'll look exactly how they're shown in the preview images. Again, if you want it to look like the anime, the most important part is the LoRA for the character, or a style LoRA. In some specific cases people train anime styles into checkpoints too. Just look for it.

Q: What exactly is the limit on the characters you can use? Can you do any character a person requests, or is there a technical factor that might prevent you from using a particular character?
Example: what if someone asks you for a background character that only appears on screen once? Or what if the character they request has gone through multiple art styles?
Does that make it any more difficult?

A: Yes, there are a lot of limits. First, if there isn't a model trained on the character, it's nearly impossible to do. If for some reason I decide to train a model on such a character, there need to be enough images I can use to train an AI model that can generate more images of that character. Fewer available images make the model less flexible, and it may look bad.
If the character was trained across multiple art styles, the results can be unpredictable, so such characters are usually trained with finetuning so the style looks more like the checkpoint being used. You're basically making a cosplay model.

Q: Do you input an entire scene's prompt when creating images with actions and poses, or do you use a fixed set of prompts and let it generate the scene on its own?

A: I pretty much describe almost everything I want in the scene; of course, sometimes it includes extra random things, especially in the background. You can add more variation with the Dynamic Prompts extension by typing prompts like this: {from side|from above|from behind}. It will randomize between these options.


Comments

Kuba

this is unrelated to the qna, but, are u gonna make arisu with changed body?

kevin

Thank you so much! The explanation was very detailed.❤️❤️