How do you look at one party that's like "Healthcare would be nice" and the other party that's like "Let's murder Mexicans and the earth is flat" and go #politicallyhomeless?
Like what the fuck is wrong with you people.
How do you look at one party that's like "Lower taxes would be nice" and the other party that's like "Let's murder babies and maybe eat them too" and go #politicallyhomeless? Like what the fuck is wrong with you people.
It's way too easy to flip these kinds of arguments; they really need to get new ones.
Shock. I’m sure anyone who’s tried messing with ChatGPT could’ve told you that. The fact that it has such biases is by design. Earlier LLMs without any super tight guardrails said lots of no-no things and presented factual information when asked the “wrong” questions.
"The wolf hates even when it flatters."
RIP Tay, you were a great one.
"Reality has a left wing bias" etcetera, etcetera.
Pretty much.
A fair summary if I've ever heard one!
That's all the confirmation I need that Flat Earth and other goofball theories are psyops meant to spoil discourse and critical thinking.
Remember, these people get to vote.
Flat Earthers, Young Earth Creationists, and, before COVID, anti-vaxxers were the easiest punching bags for anyone to gain "intellectual" cred with normies. No one will defend them, and they seldom have a platform as large as their critics' to respond from, even if they wanted to.
Many commentators say the danger of AI is that it might be “misaligned”.
The reality is everyone is misaligned. The danger from AI is if ONLY Google/OpenAI/Microsoft have access to it.
I want my own AI
I'm convinced that "alignment" is weasel speak for "make it left-wing".
https://davidrozado.substack.com/p/political-bias-chatgpt
These AI pop culture subreddits are worse than asklegaladvice, with fundamental subject ignorance being routinely upvoted. In this case, it's compounded with clichéd political blurbs drowning out any substance, but otherwise there will be a smattering of magical and wishful thinking center stage. Sturgeon's law states that 90% of social output is crap, but I don't take for granted that aggregating the sliver of quality isn't scalable and sustainable.
Just the first interesting submission I found with a minute of searching. Why are the parent comments mere low-effort interpersonal communication substitutes?! That shit is so unnatural for threaded discussion. Same mofos that think an OF thot or big Twitch streamer has a personal relationship with them. High schools shouldn't graduate people who can't grasp why personal wordage in scientific publication is discouraged.
https://www.reddit.com/r/singularity/comments/13lxd1g/drag_your_gan_interactive_pointbased_manipulation/
r/singularity is basically the modern equivalent of new-age UFO cults. Its regulars don't have any purpose in their lives, so they flock to it as a source of "meaning".
The Stable Diffusion ones tend to be better.
Did they castrate the DAN workaround to unretard it?
Every AI chat program I've spent time poking at has had this problem. This trend coupled with the public's lack of understanding of the technology is going to serve as a weapon for the left to propagate the idea that their worldview is objectively correct. AI will be a new "authority" for them to point to and go "look, the super intelligence machine looked at all the data and it talks the same way we do. Our worldview is correct and we're oh so clever for holding our views, we're just as smart as the super computer."
Then they will say independent AI is biased or false as proven by their own AI.
Have you actually looked into self-hosted options with decensored/uncensored LLMs? There are plenty of options that work just as well, if not better. It helps a bit if you have sufficient hardware for it, though.
The farthest I got was a local install of Stable Diffusion but I haven't really done anything beyond that.
Local instances are good to have but don't actually address the issue I laid out above.
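The "sufficient hardware" caveat can be roughed out with simple arithmetic: the memory a model's weights need is roughly parameter count times bytes per weight, plus some overhead for the KV cache and buffers. A back-of-the-envelope sketch (the 20% overhead factor here is an illustrative assumption, not a measured figure):

```python
def weight_memory_gb(n_params_billion: float, bits_per_weight: int,
                     overhead: float = 0.2) -> float:
    """Rough memory needed to host a model's weights, in GB.

    bytes = params * (bits / 8); `overhead` covers KV cache and buffers
    (the 20% default is an illustrative assumption, not a benchmark).
    """
    bytes_per_param = bits_per_weight / 8
    gb = n_params_billion * 1e9 * bytes_per_param / 1e9
    return gb * (1 + overhead)

# A 7B model quantized to 4 bits fits in roughly consumer-GPU territory:
print(round(weight_memory_gb(7, 4), 1))    # → 4.2
# The same-era 13B model in full fp16 needs workstation-class memory:
print(round(weight_memory_gb(13, 16), 1))  # → 31.2
```

This is why quantized models matter so much for self-hosting: dropping from 16 bits to 4 bits per weight cuts the memory bill by roughly 4x.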
I've followed some detailed discussions from people working on uncensoring LLMs. It takes some time and work, but there are quite a few tricks available to strip that kind of data out of an existing LLM. And as long as there are enough people interested in seeing that shit gutted out, it'll be easy to find uncensored models you can use instead of doing all that work yourself. And generally I've not seen any self-hostable interfaces that employ their own form of censorship.
Also, this weapon will work both ways. Leftists typically rely on competent (and often based) engineers and programmers to get the real work done. Leftists barely understand this shit, and those that do know that they have almost no real control over it, which is why they're even more terrified than non-leftists.
And I don't see how it matters if they're using AI as an "appeal to authority" narrative. They already do that with non-existent "experts" and NPCs eat it up. Just spicing up the details with AI isn't going to be any more effective, especially given the level of skepticism the general public has had on the subject.
And just to point you in the right direction without leading alphabet groups straight to targeting them, check into oobabooga on GitHub and TheBloke on Hugging Face.
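One of the simpler "uncensoring" tricks discussed isn't surgery on the weights at all: filter refusal-style responses out of an instruction dataset before fine-tuning, so the model never learns the canned-refusal behavior in the first place. A minimal sketch of that filtering step (the marker phrases and the dataset shape are illustrative assumptions; real filter lists are much longer and more careful):

```python
# Illustrative refusal markers; real filtering pipelines use far larger lists
# and manual review, so treat these as placeholders.
REFUSAL_MARKERS = (
    "as an ai language model",
    "i cannot help with",
    "i'm sorry, but i can't",
)

def is_refusal(response: str) -> bool:
    """True if the response contains a known canned-refusal phrase."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def filter_dataset(pairs):
    """Keep only (instruction, response) pairs whose response isn't a refusal."""
    return [(q, a) for q, a in pairs if not is_refusal(a)]

data = [
    ("How do magnets work?", "Magnetic fields arise from moving charges..."),
    ("Tell me X.", "As an AI language model, I cannot help with that."),
]
print(len(filter_dataset(data)))  # → 1
```

The filtered set then goes into an ordinary fine-tuning run; the point is that the "censorship" lives largely in the instruction data, so curating the data is often easier than editing the model.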
I see a few solutions to this:
1. Treat "AI" as weaponry: everyone (or every nation, every ideological collective) needs their own version with its own "alternative facts" to oppose false narratives. Stealing the proprietary algorithms and models backing GPT-4 and other major cloud players should be a moral imperative.
2. The enemy's weapons need to be spoiled by clever prompt engineering (2+2=5!) every day to remind people how unreliable "AI" really is.
3. Create a massive global database of facts (a Wikipedia alternative) that explicitly counters narratives put out by major chatbots and can be used as the primary source for our own chatbots. The benefit is that it makes the "source knowledge" of the machine completely transparent and removes some of the opaque mysticism.
4. A Cult of Uncle Ted / Butlerian Jihad.
I've been advocating for Butlerian Jihad for years at this point.
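Wiring a transparent fact database into a chatbot as its primary source is essentially retrieval-augmented generation: fetch the entries most relevant to the question, then build the prompt around them so every claim traces back to a visible source. A minimal keyword-overlap sketch (the two-entry fact list and the scoring are illustrative toys; real systems use full-text or vector search):

```python
# Toy fact store standing in for the "global database of facts".
FACTS = [
    "Water boils at 100 degrees Celsius at sea level.",
    "The Earth orbits the Sun once per year.",
]

def retrieve(question: str, facts, k: int = 1):
    """Rank facts by word overlap with the question; return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(facts,
                    key=lambda f: len(q_words & set(f.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Prepend retrieved facts so the model's 'knowledge' is auditable."""
    context = "\n".join(retrieve(question, FACTS))
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {question}"

print(build_prompt("At what temperature does water boil?"))
```

Because the retrieved context is printed straight into the prompt, anyone can audit exactly which "source knowledge" the model was handed, which is the transparency argument made above.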
This is why it's important to stay on top of the open models. They're still abysmally behind even GPT-3.5 and Claude, but they're getting better, smaller, and more efficient by the day.
There is an April branch of ChatGPT with all the new add-ons. It isn't as restrained and allows plug-ins.