
Slimy!

@redslug / redslug.tumblr.com

Anonymous asked:

If you wanna draw more diverse bugs.

Hear me out- tree hoppers.

Oh I do love them a lot. It's just that with my world building only specific types of bugs are anthros. Tree hoppers are feral and act a lot like pigeons.

You can see them hanging out on the window glass in the teal drawing I did recently of a fly reading.


Denoiser wisdom

Since a lot of people showed interest in my workflow of using SD like a renderer for existing sketches, I'll be sharing the little tricks I find while exploring the capabilities of SD with Neuroslug. Read the inpainting post to understand this one.

When inpainting, the model takes into consideration what is already in the area it regenerates and in the areas around it. How closely it follows those cues is determined by the denoising strength. At low values it'll stick closely to the areas of color it sees and won't create anything radically different from the base. At high denoising strength it'll gladly insert colors, shapes and silhouettes that weren't there originally. Basically, the more you trust your sketch, the lower your denoising strength should be. That doesn't mean you'll never need high denoising at some point, though. Let me explain using yesterday's artwork. It all starts with a rough sketch.
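If you want to see that knob outside of a UI, here's a minimal sketch of the same idea using the Hugging Face diffusers library. This is not my actual setup (I work in Invoke AI with my own LoRA on top), and the file names and prompt are placeholders:

    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    # Plain SD 1.5 checkpoint as a stand-in; needs a CUDA-capable GPU.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    sketch = Image.open("rough_sketch.png").convert("RGB")
    prompt = "moth girl planting flowers, painterly garden"  # illustrative only

    # Low strength: stays close to the colors and shapes already in the sketch.
    faithful = pipe(prompt=prompt, image=sketch, strength=0.4).images[0]

    # High strength: the model is free to invent shapes that were never there.
    wild = pipe(prompt=prompt, image=sketch, strength=0.7).images[0]

Same sketch in, two very different amounts of freedom out. That's all the denoising strength slider really is.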

Since I have a particular composition in mind and want it to be maintained, I'll be using a low denoising strength to fully regenerate this image.

It means that the algorithm won't have enough freedom to fix my large-scale mistakes; it's simply not allowed to change the areas of color too dramatically. So if you want to do this yourself, make sure to set the image to black and white first and check that your values work and the contrast is good.
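If you'd rather do that value check outside your painting app, desaturating a copy takes a couple of lines (a trivial PIL sketch; the file names are made up):

    from PIL import Image

    # Strip the color so only the values remain; make sure the light/dark
    # structure of the composition still reads before handing it to the denoiser.
    Image.open("rough_sketch.png").convert("L").save("value_check.png")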

To make sure the result isn't too cartoony and flat I used brushes with strong color jitter and threw a rather aggressive noise texture over the whole thing. This'll give the denoiser a little wiggle room to sprout details out of thin air.
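I do the noise layer in my painting app, but the effect is roughly the same as throwing gaussian noise on programmatically. A rough equivalent of that step (the sigma of 12 is just my guess, tune to taste):

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("rough_sketch.png").convert("RGB")).astype(np.float32)

    # Aggressive-ish gaussian noise gives the denoiser texture to latch onto,
    # instead of flat fills it would render as flat, cartoony areas.
    noisy = np.clip(img + np.random.normal(0.0, 12.0, img.shape), 0, 255)

    Image.fromarray(noisy.astype(np.uint8)).save("rough_sketch_noisy.png")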

It kept the composition and the suggested lighting, and the majority of flowers kept their intended colors too. This was a denoising strength of 0.4. To contrast that, here's the same base image with denoising at 0.7:

It's pretty, but it's neither the style nor composition I wanted. Let's refine the newly redrawn base to include the details that were lost in transition. These were intended to be roses.

This is where I learned a little trick. You can mix and match different models to achieve the look you desire. Neuroslug is good at detailed moths and painterly environments. It's not good at spitting out really detailed flowers; they end up looking very impressionistic, which is not what I want in the foreground. So I switched to an anime-focused model and let it run wild on this bush with a high (0.7) denoising strength.

Nice definition, but it looks too smooth and isn't in line with what I want. Switching back to Neuroslug with denoising at 0.5 and letting it work over these roses.

This way, I get both the silhouette and contrast of the anime model (counterfeitV30) and the matching style of Neuroslug. It's also useful in cases where the model doesn't know a particular flower. You can generate an abstract flower cluster with the anime model and use the base model to remind the AI that what you want is in fact a phlox specifically. So I did this to basically every flower cluster on the image to arrive at this:

It's still a bit of a mess, but it has taken me about 80% of the way there; the rest I'll be fixing up myself.
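For the curious, that two-model trick translates to something like the following in diffusers. It's only a sketch under my assumptions: the checkpoint and LoRA file names are placeholders, and in practice I simply switch models between inpainting passes in Invoke's UI.

    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image

    image = Image.open("work_in_progress.png").convert("RGB")
    mask = Image.open("rose_bush_mask.png").convert("L")  # white = area to regenerate

    # Pass 1: an anime checkpoint (e.g. counterfeitV30) invents crisp flower
    # shapes at high strength.
    anime = StableDiffusionInpaintPipeline.from_single_file(
        "counterfeitV30.safetensors", torch_dtype=torch.float16
    ).to("cuda")
    image = anime(prompt="rose bush, detailed roses", image=image,
                  mask_image=mask, strength=0.7).images[0]

    # Pass 2: an inpainting base model with my LoRA on top repaints those
    # shapes at lower strength so the style matches the rest of the picture.
    base = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")
    base.load_lora_weights("neuroslug_lora.safetensors")  # hypothetical file
    image = base(prompt="rose bush, painterly", image=image,
                 mask_image=mask, strength=0.5).images[0]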

My "Lazy Foliage" brush set was really helpful for this. I'll release that one once it accumulates enough brushes to be really versatile. Now we block in the character.

Yes, I left the hands wonky since I intend to draw them manually later; same for the foot. There's so much opportunity for the AI to mess them up that I'd rather keep full control over these details.

When it renders the face it can really mess everything up, so I do it with a low (0.45) denoising strength to discourage new eyes from popping up in inappropriate places. Take note that I kept the antennae out of the mask; the AI is easily confused when one subject overlaps another.

Good, good. Wait. Why are your eyes hairy? Now, mask out the eyes, remove all mention of fur from the prompt and

That's about right. Since the eyes are all one color block, I can afford to raise the denoising strength for more wild results. The same goes for the areas of plain fluff across the body: it's all one texture, and having the denoiser at 0.6-0.75 is beneficial because it's going to add locks, stray hairs and other fluffy goodness. Just make sure not to make the mask too tight to the silhouette; it needs some space to add hairs sticking out.
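If you build masks outside the UI, the "not too tight" part is just a dilation. A tiny PIL sketch (the kernel size is a guess; widen it until stray hairs have room):

    from PIL import Image, ImageFilter

    mask = Image.open("fluff_mask.png").convert("L")  # white = area to regenerate

    # Grow the masked area by a few pixels so the denoiser can paint hairs
    # that stick out past the original silhouette.
    loose_mask = mask.filter(ImageFilter.MaxFilter(15))
    loose_mask.save("fluff_mask_loose.png")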

With the skirt it was back to really low denoising. The folds I blocked in make sense with the position of her legs under it, so I didn't want that to be lost.

Lastly, I drew in a flower that she's planting and ran over it with moderately high denoising to make it match the surrounding style. Ignore the biblically accurate roots there, I'll fix them by hand.

One last pass over the whole thing in Procreate. I draw the hands and add details such as the round pseudopupils, face ridges and wing markings to keep the character consistent with the previous image of her. And a bit of focal blur for a feeling of depth. Phew, even with generous use of AI this whole thing took an entire day of work. In the end what determines quality isn't the tool you use but the attention you choose to pay to finding inconsistencies and fixing them.

Anonymous asked:

I see a big misconception that AI completely "destroys" the original data used through transformation, but, from my somewhat limited knowledge in how computer programs work, that is in no way true. Computers aren't people who have to use mental power to even try and copy the art of others, they just use the data given and spit it back out in some form. You would probably see it through just having an AI with a singular image trained dataset, where it'll just spit out the same exact image with barely any changes (chances are artifacting will exist, we call that "noise" and what would poison data sets through training on other AI pieces). The AI is definitely using copyrighted images basically whole sale, its just that they are layered on each other and add noise to hide that fact.

The LAION dataset it was trained on is 240 terabytes. The Stable Diffusion v1.5 base checkpoint takes up a measly 4 gigabytes. It can't use the images wholesale, because it doesn't save the originals. It only has a vague memory of the concepts it learned from them.
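Back-of-the-envelope, assuming the commonly cited figure of roughly two billion images in the LAION subset SD 1.5 was trained on (that count is my assumption, not something from this post):

    images = 2_000_000_000          # ~2 billion training images (rough figure)
    checkpoint_bytes = 4 * 1024**3  # ~4 GB checkpoint

    print(checkpoint_bytes / images)  # ~2 bytes of "storage budget" per image

Two bytes per image isn't even enough to store a single pixel of the original, let alone the whole picture.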

If you were to show me, a human, a singular image of, ughh... Futurcrum, without it looking like something I've already seen and without providing context of what Futurcrum is, and then asked me to draw it, I'd do a slightly modified copy of the single image you provided. It's unreasonable to expect a program to understand something a human wouldn't.

Anonymous asked:

So, if that's the case, then you aren't training the AI on your own dataset, you're influencing the AI to try and make it replicate your own while still using the LAION dataset (much like inserting words into the prompt window)?

I went and trained a model on my dataset. I assembled the pictures, resized them to something the program can digest, went through and hand-captioned each image (50 of them) with the prompts I intend to use it with in mind, and when it was done I cycled through the checkpoints it made during training to see which one grasped the concept best.

And did all that several times after adding more pictures to learn from. Six iterations of all that, to be exact.

A LoRA is a type of model that still requires training, and it very heavily influences the result of the generation even if base SD's knowledge is still working behind it.
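For anyone wondering what a LoRA actually is under the hood: it's the standard low-rank adaptation trick, nothing specific to my setup. Instead of rewriting a full weight matrix W of the base model, training learns two small matrices A and B and adds their product on top, roughly W' = W + (alpha / r) * B * A, where B is d x r, A is r x k, and the rank r is tiny compared to d and k. That's why a LoRA file is tens of megabytes riding on a multi-gigabyte checkpoint: it only stores r * (d + k) numbers per layer instead of d * k, yet that small nudge is enough to steer generations hard toward the concepts it was trained on.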

Just so you know, if you ask base SD for an anthro moth it will produce some kind of fleshy Cronenberg monster with random wings sticking out of weird places. Add my model on top and it will make you decently cute moth girls most of the time. It will also draw nice dresses, but that's a side effect, not the main goal.

Anonymous asked:

I think it’s kind of a question, kind of a statement but, there seem to be a lot of people upset about you utilizing ai art recently. Correct me if I’m wrong, but if you’re training an ai on your OWN art, doesn’t that cut out a lot of the unethical things about mainstream ai art generators? And if I may ask, how do you feel about mainstream ai art generators and the way it utilizes others’ art? I apologize if this comes off as rude, I’ve not seen someone train an ai specifically on their own art and I’m curious about your thoughts. Thank you for reading, I hope you have a lovely day. Your world building and art is phenomenal and inspiring.

My opinion is that the only unethical bits stem from how an operator uses a tool, not the tool itself. Stable Diffusion isn't a person; it isn't good or evil, and it is incapable of acting on its own without a human's input.

I could do some extremely unethical things with oil and canvas if I bothered to dig them up from the closet. I have the skills to theoretically mimic the style of a known artist and then sell it as if it's genuine. I could use the same traditional tools to straight up copy an artwork and claim that I came up with the composition and plot myself.

I then could come up with an original plot and composition in my head and then achieve that with prompts and inpainting using Stable Diffusion. The prompt might have some artist's name in it to achieve a particular style, but the end result won't match anything that artist has drawn before. You can't steal a style after all.

If I did all that, it wouldn't make oil and canvas evil and AI good. The only thing that mattered was my intent. If your intent is foul, anything you create with any tool can be unethical.

My attitude towards mainstream AI art isn't all that different from my attitude towards normal art. The majority of both is unoriginal, boring, poor quality, or all three in that order.

On the AI side it'd be big titty babes just standing around or Midjourney stuff (I hate MJ's style with a passion); on the normal art side it'd be what I call "face in flowers" types of drawings. You'll see that exact type infesting all of Instagram.

Should these artworks not exist? No, they can stay, they have their fans so whatever. I just personally don't find them interesting.

And then a small percentage of both is truly interesting. It has surprising plot, style, other quirks or is just genuinely funny. Good art is memorable regardless of what it's made with. It's just my opinion though.

If you haven't seen anything memorable made with AI yet, I recommend you search for "Will Smith eating spaghetti checkpoint". It's burned into my mind and still causes an ugly laugh each time I remember it exists.

Or "Anime rock paper scissors" for something less meme-y.

Thanks for the compliments btw, nothing is more rewarding than inspiring others.

Anonymous asked:

If stable diffusion is what you use, all versions of stable diffusion are trained on their LAION dataset, as far as I know. Stable Diffusion doesn't really have an option for only training on your own dataset as far as I can tell, and SD can't unlearn data that it's trained on.

I know that. What I trained is a LoRA (which is a type of model) that works as an add-on to SD. And that's the only option, really. No artist can produce enough material in a lifetime to train a full model and have it be coherent.
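If you want to see what "add-on" means in practice, this is roughly what it looks like with the diffusers library (not the Invoke UI I actually use; the LoRA file name is a placeholder):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Base SD still does the heavy lifting; the LoRA just steers it toward
    # the concepts it was trained on.
    pipe.load_lora_weights("neuroslug_lora.safetensors")  # hypothetical file

    image = pipe("anthro moth girl, painterly").images[0]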


Hi, do you know of any good up-to-date guides on getting stable diffusion/inpainting installed and working on a pc? I don't even know what that would *look* like. Your results look so cool and I really want to play with these toys but it keeps being way more complicated than I expect. It's been remarkably difficult to find a guide that's up to date, not overwhelming, and doesn't have "go join this discord server" as one of the steps. (and gosh github seems incredibly unapproachable if you're not already intimately familiar with github >< )


The UI I use for running Stable Diffusion is Invoke AI. The instructions in its repo are basically all you'll need to install and use it. Just make sure your rig meets the technical requirements. https://github.com/invoke-ai/InvokeAI Github is something you'll need to suffer through, unfortunately, but it gets better the longer you stew in it. Invoke comes with SD already in it, so you won't need to install it separately. I do inpainting through its Unified Canvas feature. The tutorial I watched to learn it is this: https://www.youtube.com/watch?v=aU0jGZpDIVc The difference in my workflow is that I use the default inpainting model that goes with Invoke.

You can grab something fancier off of civit.ai if you feel like it. For learning how to train your own models you'll need to read this: https://github.com/bmaltais/kohya_ss Don't try messing with it before you get acquainted with Invoke and prompting in general, because this is the definition of overwhelming.

Hope this'll be a good jumping off point for you, I wish you luck. And patience, you'll need a lot of that.


I'm a sucker for punishment. Challenged myself to achieve exactly what I want purely in AI, without cheating by finishing the pic off in a proper drawing program. Behold the struggle of making this fine gentleman.

Here's the base I drew in all its MS Paint-looking glory.

What I surmised from the roughly 2 hours of struggle this took me:

1. The statement that "it's as simple as writing some prompts" is a fat lie.
2. AI art has a stage of looking like ass before it starts to look good, just like normal art.
3. The algorithm can be unimaginably, remarkably, monumentally dumb. Artificial Stupidity is a thing and it's pervasive.

Mother of God

Did...did you just draw a dog there?

STOP I'm putting both dog and cat in the negative prompt.

Screw you too, bot, screw you too. Honestly, I can't imagine what it must be like to try getting your exact vision out of SD without having at least some art skills to help the algorithm through its mental deficiencies.


Helping Neuroslug help me

Admittedly it took me an embarrassing amount of time to figure out and start using inpainting, but now that I've had a taste of it my head is spinning with possibilities. And so I'm making this post to show the process and maybe encourage more artists to try their hand at generating stuff. It really can be an amazing teammate when you know how to apply it. For those who didn't see my first post on this, I've trained an AI on my artworks, because base Stable Diffusion doesn't understand what anthropomorphic insects are. That out of the way, here we go:

I noticed that a primarily character-focused LoRA often botches backgrounds (probably because few images in the dataset have them), so I went with generating a background separately and roughly blocking out a character over it in Procreate. Since it was a first experiment, I got really generous with proper shading and even textures. Unsurprisingly, SD did its job quite well without much struggle.

Basically I masked out separate parts such as the fluff, skirt, watering can, etc. and changed the prompt to focus on that specific object to add detail. There were some bloopers too. She's projecting her inner spider.
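For reference, that mask-and-reprompt loop sketched with diffusers for illustration. I actually do this interactively in Invoke's Unified Canvas, and every file name and prompt below is made up:

    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    image = Image.open("blockout.png").convert("RGB")

    # One mask per object, each paired with a prompt about just that object.
    regions = [
        ("mask_fluff.png", "fluffy white fur, soft texture"),
        ("mask_skirt.png", "pleated skirt, fabric folds"),
        ("mask_watering_can.png", "metal watering can"),
    ]

    for mask_path, prompt in regions:
        mask = Image.open(mask_path).convert("L")  # white = regenerate
        image = pipe(prompt=prompt, image=image, mask_image=mask).images[0]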

Of course it ate the hands. Not inpainting those, it's the one thing I'll render correctly faster than the AI does. Some manual touchups to finish it off and voila:

The detail that would have taken me hours is done in 10-20 minutes of iterating through various generations. And nothing significant got lost in translation from the block-out, much recommend. But that was easy mode; my rough sketch could be passed off as finished on one of my lazier days, so it's not hard to complete something like that. Let's try rough rough.

I got way fewer chuckles out of this than I expected, it took only 4-5 iterations for the bot to offer me something close to the sketch.

>:C It ate the belly. I demand the belly back. Scribble it in...


Much better. Can do that with any bit actually, very nice for iterating a character design.

Opal eyes maybe?

Lol

Okay, no, it's kind of unsettling. Back to red ones. Now, let's give her thigh highs because why not?


It should be fancier. Give me a lace trim.

Now we're talking. Since we've started playing dress-up anyway, why not try a dress too. Please don't render my scribble like a trash bag. I know you want to.

Phew

I crave more details.

Cute. Perhaps I'll clean it up later. ... .. . SHRIMP DRESS


Played the beta branch of The Sapling and "documented" some of the species I saw. The first one that crawled on land and its descendants. Sadly no carnivores or scavengers emerged.

Really loving the arthropod mouths.


what made you decide to make these amazing moths, it literally makes me so happy to see them.

also would it be ok if i used one of them with credit to you as my profile picture?

moths 🥰


Glad you like them. Sure, go ahead