I just had a belated shower thought. How was the guy who made the Taylor Swift images able to find an AI model that wasn't locked down? I know ChatGPT censors NSFW stuff in addition to unapproved political opinions. I'm really curious now, because I see all kinds of potential for (nonsexual) mischief. Does anyone know how I can get such a model and how to go about using it? I can think of much better uses for it than making porn of a 30+ celebrity.
Also, will everyone please just stop using ChatGPT as their fucking gold standard?
Not only is it a middling model compared to the competition, you're just feeding into OpenAI's dominance, which is hardly deserved. You can do a LOT better than that, and many other people already are.
I feel like we need to have a big-ass 101 discussion here on how this stuff works, since so many people don't seem to have the faintest clue what's what when it comes to AI options, setup, etc.
Please feel free to enlighten us then. If someone points me to something that helps boost my coding efficiency more than ChatGPT does I would be more than happy to use it.
Seeing as no one has been able to do that yet, pardon me while I assume people are not being 100% truthful when they say there's stuff out there that is way better.
You're right, I should be pointing towards specific examples. I might have to put together an actual thread/topic post covering at least a bit of 101 stuff. I don't know why I've always been a little vague in some of my responses on these topics. I mean besides how I'm pretty disorganized.
Anyway, right now ChatGPT is your best bet for generating code and programming solutions, I'll fully fess up to that. But this thread was more focused on image generation, and my comment was pointing to broader AI generated "stuff".
For text generation in general, I can point to a few specific LLMs that I've found fairly solid:
OpenHermes (and hybrid variants), Mistral, and Mixtral (that last one is a potential upcoming contender to ChatGPT, but it's still not fully up to snuff on code-gen). I've heard solid things about Goliath-120B, Tess, and Nous-Capybara-34B-GGUF, but I can't verify them myself since those were out of my hardware range. There are a lot of solid 7B and 13B options; 30B and up is where it's tougher to find good ones because a lot are sort of "Frankenstein" hybrids. Mixtral is its own thing though, and it's backed by some professional support. You can find most of these through some basic searches.
A few people I'd mention that are worth following for more info:
https://huggingface.co/TheBloke/
https://www.reddit.com/user/WolframRavenwolf/submitted/
(The former has been a reliable go-to for spitting out quantized models that are ready to go, the latter is someone who's been at least a semi-useful source for info on potential model leads)
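To give a sense of how simple running one of those quantized GGUF files locally is, here's a rough sketch using llama-cpp-python. The model filename is an assumption (substitute whatever quantized file you grabbed), and the ChatML prompt format is what OpenHermes-family models use; check the model card for the format your particular download expects.

```python
# Sketch: running a quantized GGUF model with llama-cpp-python.
# Assumptions: model file name, and ChatML prompt format (OpenHermes-style).

def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-style prompt string."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def generate(model_path: str, prompt: str, max_tokens: int = 64) -> str:
    # Heavy import kept local so the prompt helper works without the lib.
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(
        model_path=model_path,   # e.g. a Q4_K_M .gguf from TheBloke
        n_ctx=4096,              # context window
        n_gpu_layers=-1,         # offload all layers to GPU if you have one
    )
    out = llm(prompt, max_tokens=max_tokens, stop=["<|im_end|>"])
    return out["choices"][0]["text"]
```

You'd call it like `generate("openhermes-7b.Q4_K_M.gguf", chatml_prompt("You are helpful.", "Say hello."))`. The quantization level (Q4, Q5, etc.) is the main knob for trading quality against VRAM/RAM.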
As for frontends and backends for self-hosting, there are a lot of options. There's of course:
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
SillyTavern, and a lot more out there if you look around.
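Once one of those backends is running, most of them expose a simple HTTP API that frontends like SillyTavern talk to. As a rough sketch against koboldcpp's `/api/v1/generate` endpoint (default port 5001), using only the standard library; the exact parameters accepted can vary by version, so treat these as assumptions and check the API docs your build ships with:

```python
# Sketch: hitting a local koboldcpp server's generate endpoint.
# Assumptions: default port 5001 and these parameter names.
import json
import urllib.request

def build_payload(prompt: str, max_length: int = 128,
                  temperature: float = 0.7) -> dict:
    """Assemble the JSON body for a koboldcpp generate call."""
    return {
        "prompt": prompt,
        "max_length": max_length,
        "temperature": temperature,
    }

def generate(prompt: str, host: str = "http://localhost:5001") -> str:
    req = urllib.request.Request(
        f"{host}/api/v1/generate",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["results"][0]["text"]
```

That's basically all a frontend is doing under the hood, which is why you can mix and match frontends and backends fairly freely.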
For image generation there's obviously Stable Diffusion, with a few different self-hostable frontends and backends, e.g.:
https://github.com/AUTOMATIC1111/stable-diffusion-webui
https://github.com/comfyanonymous/ComfyUI
And https://civitai.com/ for a lot of your SD resources.
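If you'd rather script SD directly than click around a web UI, Hugging Face's diffusers library is the usual route. A minimal sketch, assuming the runwayml/stable-diffusion-v1-5 checkpoint and an NVIDIA GPU; Civitai models converted to diffusers format load the same way:

```python
# Sketch: text-to-image with diffusers. Model ID and CUDA are assumptions.
import re

def slugify(prompt: str, max_len: int = 48) -> str:
    """Turn a prompt into a safe filename stem for saving outputs."""
    slug = re.sub(r"[^a-z0-9]+", "-", prompt.lower()).strip("-")
    return slug[:max_len] or "image"

def txt2img(prompt: str):
    # Heavy imports kept local so the helper above works without torch.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")  # assumes an NVIDIA GPU with enough VRAM
    image = pipe(prompt).images[0]
    image.save(f"{slugify(prompt)}.png")
    return image
```

The web UIs above are mostly wrappers around this same pipeline idea, plus samplers, LoRAs, and the rest.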
Obviously there's a LOT more AI tools out there than just this, I'm just listing off some of the core starting points.
Much obliged, thanks for typing all that out. Would be very interested in a 101 thread if you ever find the time, and I'm sure plenty of others here would be too.
Thanks, and I'm glad you gave me the nudge I needed to write some of that up. I'll definitely try to put together a loose but more comprehensive 101-type thread. I'm sure AlfredicEnglishRules and a few others could add a fair share of info as well.
And I'll say up front that I by no means consider myself an expert on the subject. I was just lucky that I knew a person who'd been dabbling with it early on, and came across a few solid guides that helped get me started. I had to do a lot of extra digging from there though, since so much of this stuff is so poorly documented.