What are the top websites, and are they free and available to the public, or what? I suspect the results are nowhere near as "good" as the cherry-picked images that get linked in news articles and posted on Twitter.
I'm not sure many of us are really into AI art so much as we're into laughing at mediocre leftist "artists" throwing shitfits because an algorithm can surpass their creative capabilities and their grift is in jeopardy.
I'm pretty into it. It's an amazing tool for inspiration, but also for helping your own art along. I've seen some artists use it to finish their backgrounds (which they had roughly outlined), and I finished an old, never-completed image of mine to my satisfaction.
But yes, laughing at lefties throwing a shit fit is amazing. They ignore the help it would be for their work, same as the artists who scoffed at people going digital.
Stable Diffusion is more of a git project than a website; it uses your own GPU to produce results, whereas the NovelAI site (which isn't free) uses remote resources and sends the results to your browser. Above all, to use Stable Diffusion you need a good model, and the NovelAI model actually got leaked, so people are using it for free.
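For a rough idea of what "your own GPU plus a model" means in practice, here's a minimal sketch using the Hugging Face diffusers library instead of the webui linked below; the model name and file name are just placeholder examples, not anything from this thread:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint onto the local GPU (half precision to save VRAM).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example public checkpoint, swap in whatever model you have
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Everything runs locally: prompt in, image out.
image = pipe("a castle on a cliff at sunset, oil painting").images[0]
image.save("castle.png")
```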
that makes sense.
This is what I used. Websites give shit results IMO; you can eventually get decent stuff, but the amount of control you get when running the code yourself is nice, and there's no filter, unlike some sites.
https://github.com/AUTOMATIC1111/stable-diffusion-webui
I don't understand GitHub. I can download this as a ZIP, but not as a usable program.
I know I'm doing this wrong. What am I missing? Serious question, this applies to other GitHub stuff I've tried to get too.
It is incredibly confusing at first, if you've never dipped your toes into the making stuff side of computing.
https://github.com/AUTOMATIC1111/stable-diffusion-webui#automatic-installation-on-windows
I don't think I can come up with clearer instructions than that. You might get lost on step 3, where it tells you to run git clone on the repository.
That's something you type into the "cmd" prompt on Windows, once you have correctly installed both Python and git. It will make a folder and download stable-diffusion-webui into it. I haven't used Windows since Windows 7, so it's tough for me to provide 100% accurate Windows help.
Looks like this is a good video for following along:
https://www.youtube.com/watch?v=lc500CmPjkQ
Saved. Thank you.
Once AI gets good enough at making porn, commie degenerates (who are mostly troons) will be put out of business, and it will be glorious.
If those AI renaissance anime girls are any indication it won't be long now.
I can’t wait for AI to remove Japanese porn blur
There are Chinese groups doing that already, though the process is apparently even less "AI" than stable diffusion. Just advanced upscaling filters.
https://dream.ai/create
Yes that's pretty much par for the course for all generation algorithms. You give it a prompt and tell it to generate 100 images, then you pick the 3-10 that suck the least. Sometimes the "bad" results are more interesting than what you wanted.
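A minimal sketch of that workflow (generate a pile with different seeds, keep the least-bad ones), assuming the diffusers library; the model name and prompt are placeholder examples:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a red fox in the snow, digital art"  # placeholder prompt

# Generate a batch, one seed per image, save everything, then eyeball the
# results by hand and keep the handful that suck the least.
for seed in range(100):
    gen = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=gen).images[0]
    image.save(f"fox_{seed:03d}.png")
```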
Midjourney is free for the first 20 or so generations. That's the one you see all around.
DALL-E has gotten better and is free.
Stable Diffusion can do some great stuff, but you need to know how to write the prompt. The paid stuff is nice too. You can install it for free, but that takes some extra steps. It also connects to Photoshop and lets you choose your added items.
With Photoshop you can layer images and put in basic imagery for diffusion to alter. It does some really nice stuff. You can then take that image and make 3D images with it in Blender.
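That "basic imagery for diffusion to alter" workflow is what Stable Diffusion calls img2img. A hedged sketch of the idea with the diffusers library (the file names, prompt, and strength value are made up for illustration):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A rough layered composite exported from Photoshop (hypothetical file), used as the starting point.
rough = Image.open("photoshop_blockout.png").convert("RGB")

# strength controls how far the model may wander from the mock-up you feed it.
result = pipe(
    prompt="a cluttered alchemist's workshop, warm lighting, detailed illustration",
    image=rough,
    strength=0.6,
).images[0]
result.save("workshop.png")
```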
It can do some really nice stuff, but you need to learn how to use it as a tool. Most people just tell Midjourney to do stuff and leave it at that.
I'm not very good at it. To get great results you have to have good keywords (both positive and negative), run a lot of iterations, get a good base image, refine with inpainting, etc.
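For what "positive and negative keywords" plus extra iterations look like in code, here's a small sketch with the diffusers library; the prompts, model name, and settings are illustrative, not a recipe from this thread:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait of a knight, ornate armor, dramatic lighting, highly detailed",
    negative_prompt="blurry, extra fingers, bad anatomy, watermark, low quality",
    num_inference_steps=50,   # more sampling steps than the default
    guidance_scale=7.5,       # how strictly the prompt is followed
).images[0]
image.save("knight.png")
```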
I downloaded some of the training sets and tried out stable diffusion on my local PC. Played with it for a couple of hours. I came up with some pretty cool stuff. It's awesome tech.
I used the Voldy guide someone else linked too: https://rentry.org/voldy
I've been playing with Microsoft Designer, which is free and uses the DALL-E 2 engine.
Here are two examples from the "a cartoon dog playing basketball" prompt.
Check any SDG thread on 4chan's /g/ board. They give you guides and show how to set up the webui (already posted by someone else). One of the guides it links to is the one I followed here. Very easy to set up; you need a somewhat new GPU (a 2000-series card should work alright, a friend still uses his 1060).
I did dabble a bit in Stable Diffusion. I don't know if I want to call the images "cherry picked". It's an iterative process (at least it was for me): I started with a prompt and generated multiple images, then picked the most promising results and used those as references to generate even more images.
Over time I reduced the "noise" the generator was allowed to introduce, which led to fewer and fewer changes to the image, until I was "happy" with the result (i.e. I got bored with the prompt).
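That "reduce the noise over time" loop roughly corresponds to img2img with a shrinking strength value. A minimal sketch assuming the diffusers library; the starting image, prompt, and strength schedule are made up:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a foggy mountain village at dawn, watercolor"
image = Image.open("best_first_pass.png").convert("RGB")  # most promising result so far (hypothetical file)

# Feed each result back in, letting the model change less and less each round.
for i, strength in enumerate([0.7, 0.55, 0.4, 0.25]):
    image = pipe(prompt=prompt, image=image, strength=strength).images[0]
    image.save(f"round_{i}.png")
```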
But I never really got into it. I wanted to, but... well, time flies. I think you can tell the algorithm to only reconstruct/change part of the image (there's a rough sketch of that below); that was something I wanted to look into. This reddit thread was also something I was interested in, because the method/result looks cool and intuitive.
Keep in mind, though, this is all from when SD was initially released. Things have probably changed by now.
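The "only change part of the image" trick is called inpainting: you paint a mask over the region you want redone and leave the rest alone. A rough sketch, again with the diffusers library, made-up file names, and a dedicated inpainting checkpoint:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("scene.png").convert("RGB")  # hypothetical image to touch up
mask = Image.open("mask.png").convert("RGB")    # white = regenerate this region, black = keep as-is

result = pipe(
    prompt="a wooden rowing boat on the lake",
    image=image,
    mask_image=mask,
).images[0]
result.save("scene_fixed.png")
```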
Most art isn't as good as the cherry picked images linked anywhere. That's not the slam dunk you think it is.
It's actually quite an interesting process to watch when someone (or a group of people) collaborates on a picture and slowly fine-tunes it over multiple iterations, because the results are usually garbage at the start and it takes a learned finesse and an understanding of keyword usage to create a presentable image. There is a level of "skill" necessary beyond what the discussion presents.