Denoiser wisdom
Since a lot of people showed interest in my workflow of using SD like a renderer for existing sketches, I'll be sharing the little tricks I find while exploring the capabilities of SD with Neuroslug. Read the inpainting post to understand this one.
When inpainting, the model takes into consideration what is already in the area it regenerates and in the areas around it. How exactly it'll follow these guidelines is determined by denoising strength. At low values it'll stick closely to the areas of color it sees and won't create anything radically different from the base. At high denoising strength it'll gladly insert colors, shapes and silhouettes that weren't there originally.
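Under the hood, this knob mostly decides how many of the sampler's steps actually run on top of your image. A rough sketch of the mapping used by common img2img/inpaint pipelines (the exact formula varies by UI; this mirrors the approach in the diffusers library):

```python
def active_steps(total_steps: int, strength: float) -> int:
    """Number of denoising steps that actually run at a given
    denoising strength. At 1.0 the image is rebuilt from pure
    noise; at 0.0 nothing changes."""
    return min(int(total_steps * strength), total_steps)

# With a 30-step sampler:
print(active_steps(30, 0.4))  # 12 steps -> stays close to the base
print(active_steps(30, 0.7))  # 21 steps -> much more freedom
```

So at 0.4 more than half the schedule is skipped, which is why the result can't wander far from your sketch.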
Basically, the more you trust your sketch, the lower your denoising strength should be. That doesn't mean you won't need high denoising at some point. Let me explain using yesterday's artwork.
It all starts with a rough sketch.
Since I have a particular composition in mind and want it to be maintained, I'll be using a low denoising strength to fully regenerate this image.
This means the algorithm won't have enough freedom to fix my large-scale mistakes; it simply isn't allowed to change the areas of color too dramatically. So if you want to do this yourself, first set the image to black and white and check that your values work and the contrast is good.
To make sure the result isn't too cartoony and flat I used brushes with strong color jitter and threw a rather aggressive noise texture over the whole thing. This'll give the denoiser a little wiggle room to sprout details out of thin air.
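If you'd rather do the noise pass programmatically than with a texture brush, here's a minimal NumPy sketch (array values assumed in the 0-1 range; the amount is an illustrative default, tune it to taste):

```python
import numpy as np

def add_noise_texture(img, amount=0.08, seed=0):
    """Overlay gaussian noise on an image array so flat color
    areas carry texture the denoiser can latch onto."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, amount, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)

flat = np.full((64, 64, 3), 0.5)   # a flat mid-gray block
textured = add_noise_texture(flat)  # same block, now with grain
```

The point is just to break up flat fills; any jittery brush achieves the same thing.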
It kept the composition and the suggested lighting, and most of the flowers kept their intended colors too. This was a denoising strength of 0.4. For contrast, the same base image with denoising at 0.7:
It's pretty, but it's neither the style nor composition I wanted.
Let's refine the newly redrawn base to include the details that were lost in transition.
These were intended to be roses.
It's here where I learned a little trick. You can mix and match different models to achieve the look you desire.
Neuroslug is good at detailed moths and painterly environments. It's not good at spitting out really detailed flowers; they end up looking very impressionist, which is not what I want in the foreground.
So I switched to an anime-focused model and let it run wild on this bush with high (0.7) denoising strength.
Nice definition, but it looks too smooth and isn't in line with what I want.
Switching back to Neuroslug with denoising at 0.5 and letting it work over these roses.
This way, I get both the silhouette and contrast of the anime model (counterfeitV30) and the matching style of Neuroslug. It's also useful in cases where the model doesn't know a particular flower. You can generate an abstract flower cluster with the anime model and use the base model to remind the AI that what you want is in fact a phlox specifically.
So I did this to basically every flower cluster on the image to arrive at this:
It's still a bit of a mess, but it has taken me about 80% of the way there; the rest I'll fix up myself.
My "Lazy Foliage" brush set was really helpful for this. I'll release that one once it accumulates enough brushes to be really versatile.
Now we block in the character.
Yes, I left the hands wonky since I intend to draw them manually later, same with the foot. There's so much opportunity for the AI to mess them up that I'd rather have full control over these details.
When it renders the face it can really mess up everything, so I do it with low (0.45) denoising strength to discourage new eyes popping up in inappropriate places. Take note that I kept the antennae out of the mask; the AI is easily confused when one subject overlaps another.
Good, good.
Wait. Why are your eyes hairy?
Now, mask out the eyes, remove all mention of fur from the prompt and
That's about right. Since the eyes are all one color block I can afford to raise the denoising strength for more wild results.
Same for areas of just fluff on the entire body: it's all one texture, and having the denoiser at 0.6-0.75 is beneficial because it's going to add locks, stray hairs and other fluffy goodness. Just make sure not to draw the mask too tight to the silhouette; it needs some space to add hairs sticking out.
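Growing the mask a little past the silhouette can be scripted too. A minimal binary-dilation sketch in pure NumPy (the pixel count is an illustrative guess; webui mask-padding settings do something similar):

```python
import numpy as np

def expand_mask(mask, pixels=12):
    """Grow a binary inpainting mask outward so the denoiser has
    room to paint stray hairs past the silhouette."""
    out = mask.astype(bool)
    for _ in range(pixels):
        grown = out.copy()
        grown[1:, :]  |= out[:-1, :]   # spread downward
        grown[:-1, :] |= out[1:, :]    # spread upward
        grown[:, 1:]  |= out[:, :-1]   # spread right
        grown[:, :-1] |= out[:, 1:]    # spread left
        out = grown
    return out

mask = np.zeros((64, 64), dtype=bool)
mask[28:36, 28:36] = True       # tight mask on the fluff
loose = expand_mask(mask)       # now 12 px of breathing room
```

Dedicated image libraries have proper dilation filters; this loop is just to show the idea.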
With the skirt it was back to really low denoising. The folds I blocked in make sense with the position of her legs under it, so I didn't want them to be lost.
Lastly, I drew in a flower that she's planting and ran over it with moderately high denoising to make it match the surrounding style. Ignore the biblically accurate roots there, I'll fix them by hand.
One last pass over the whole thing in Procreate. I draw the hands and add details such as the round pseudopupils, face ridges and wing markings to keep the character consistent with the previous image of her. And a bit of focal blur for a feeling of depth.
Phew, even with generous use of AI this whole thing took an entire day of work. In the end what determines quality isn't the tool you use but the attention you choose to pay to finding inconsistencies and fixing them.