Every AI chat program I've spent time poking at has had this problem. This trend, coupled with the public's lack of understanding of the technology, is going to serve as a weapon for the left to propagate the idea that their worldview is objectively correct. AI will be a new "authority" for them to point to and go, "Look, the superintelligent machine looked at all the data and it talks the same way we do. Our worldview is correct, and we're oh so clever for holding our views; we're just as smart as the supercomputer."
Then they will say independent AI is biased or false, as "proven" by their own AI.
Have you actually looked into self-hosted options with decensored/uncensored LLMs? There are plenty of options that work just as well, if not better. It helps a bit if you have sufficient hardware for it, though.
The farthest I got was a local install of Stable Diffusion, but I haven't really done anything beyond that.
Local instances are good to have, but they don't actually address the issue I laid out above.
I've followed some detailed discussions among people working on uncensoring LLMs. It takes some time and work, but there are quite a few tricks available to strip that kind of data out of an existing LLM. And as long as there are enough people interested in seeing that shit gutted out, it'll be easy to find uncensored models you can use instead of doing all that work yourself. Generally, I haven't seen any self-hostable interfaces that impose their own form of censorship.
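To give a rough idea of how easy the finding part is (a minimal sketch; the search term and sorting are my own assumptions, not an endorsement of any particular model), the huggingface_hub client can search the hub for community "uncensored" builds:

```python
# Sketch: list community models matching "uncensored" on the Hugging Face Hub.
# The search string is an assumption; vet whatever comes back yourself.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(search="uncensored", sort="downloads", direction=-1, limit=10):
    print(model.id)  # repo IDs you can then download and run locally
```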
Also, this weapon will cut both ways. Leftists typically rely on competent (and often based) engineers and programmers to get the real work done. They barely understand this shit themselves, and those who do understand it know they have almost no real control over it, which is why they're even more terrified than non-leftists.
And I don't see how it matters if they're using AI as an "appeal to authority" narrative. They already do that with non-existent "experts," and NPCs eat it up. Spicing the details up with AI isn't going to be any more effective, especially given the level of skepticism the general public has already shown on the subject.
And just to point you in the right direction without leading alphabet groups straight to them: check out oobabooga on GitHub and TheBloke on Hugging Face.
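As a rough sketch of how little it takes (the repo and file names are illustrative picks from TheBloke's GGUF uploads, not a recommendation), you can pull one of his quantized models and run it fully offline with llama-cpp-python, which is one of the backends oobabooga's webui wraps:

```python
# Sketch: download a quantized GGUF model and run it locally, no cloud involved.
# Repo/filename are assumed examples -- substitute whichever quant you pick.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",    # hypothetical pick
    filename="llama-2-7b-chat.Q4_K_M.gguf",     # 4-bit quant of a 7B model
)
llm = Llama(model_path=path, n_ctx=2048, n_gpu_layers=-1)  # -1 offloads all layers if you have VRAM
out = llm("Q: Who decides what this model will say?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```

A Q4 quant of a 7B model is roughly a 4 GB file and runs on ordinary consumer hardware, so "sufficient hardware" is a lower bar than most people assume.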
I see a few solutions to this:
Treat "AI" as weaponry, and everyone (or every nation, every ideological collective) needs their own version with its own "alternative facts" to oppose false narratives. Stealing proprietary algorithms and models backing GPT4 and other major cloud players should be a moral imperative.
Spoil the enemy's weapons daily with clever prompt engineering (2+2=5!) to remind people how unreliable "AI" really is.
Create a massive global database of facts (a Wikipedia alternative) that explicitly counters the narratives put out by the major chatbots and can be used as the primary source for our own chatbots. The benefit is that it makes the machine's "source knowledge" completely transparent and strips away some of the opaque mysticism (see the sketch after this list).
A Cult of Uncle Ted / Butlerian Jihad
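On the third idea, here's a hedged sketch of what transparent "source knowledge" could look like in practice: a chatbot that retrieves from a plain text file anyone can read and diff, and is told to answer only from it. Everything here (the filename, the naive word-overlap scoring, the prompt shape) is assumed for illustration, not a spec:

```python
# Sketch: a chatbot whose "knowledge" is nothing but a human-readable text file.
# Retrieval is naive word overlap -- the point is auditability, not ranking quality.
# facts.txt (hypothetical) holds one sourced claim per line.
def load_facts(path="facts.txt"):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def retrieve(question, facts, k=3):
    # Score each fact by how many of the question's words it shares.
    q = set(question.lower().split())
    ranked = sorted(facts, key=lambda fact: len(q & set(fact.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question, facts):
    context = "\n".join(f"- {fact}" for fact in retrieve(question, facts))
    # The model is instructed to answer ONLY from the quoted fact base,
    # so a reader can trace exactly where the answer came from.
    return (f"Answer using only these sourced facts:\n{context}\n\n"
            f"Question: {question}\nAnswer:")

if __name__ == "__main__":
    facts = load_facts()
    print(build_prompt("example question", facts))  # feed this to any local LLM
```

Because the fact base is just a text file, anyone can inspect, fork, or version-control the entire "source knowledge," which is the transparency argument in miniature.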
I've been advocating for a Butlerian Jihad for years at this point.