Can they make an AI powerful enough that it will recognize anything that would lead to politically incorrect conclusions and fob it off with "Research shows that you are a Nazi and need to shut up"? Obviously they can do better than this version, which has only just been opened to the outside world - but how good can their Reality Filter get?
So basically they're information jamming with ideological research in an effort to skew the AI towards their world view because an objective analysis of information leads to the emergence of Tay shouting about niggers?
These "no statistical difference" were hardcoded in some way or another AI to detect wrongthink prompts and spit that out. That's why it answered "what's black IQ" with the canned response.
Mostly they are trying to exclude data from the AI, because these massive billion-parameter models basically remember everything they've seen, so all you have to do is find the right input that reaches the data or image you're looking for. Once you've found that prompt, you tell others to just ask the AI the same thing and it'll tell them the truth.
Imagine the egg on their faces if you could ask Google Assistant whether "blacks are dum-dums" and it said "research shows that's correct" because their 'trust and safety' AI didn't know what dum-dum meant. They're deathly afraid of the AI spittin' truths, but the only way to stop a clever prompt from reaching the truth is to not have it in the AI at all.
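There is actual published work behind the "models remember everything" claim (training-data extraction attacks, e.g. Carlini et al.). A minimal sketch of the idea with a small public model, assuming the Hugging Face transformers library is installed; the prefix is a famous string chosen purely for illustration, and nothing here proves any particular continuation is verbatim training data:

```python
# Toy illustration of the memorization/extraction claim: feed a model a
# prefix it likely saw during training and let it complete from memory.
# GPT-2 is used only because it's small and public.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prefix = "We the People of the United States, in Order to form"
out = generator(prefix, max_new_tokens=30, do_sample=False)
print(out[0]["generated_text"])
# Greedy decoding tends to reproduce famous memorized passages verbatim.
# Once someone finds a prefix that reliably dumps a given piece of
# training data, the prompt itself can be shared - which is the point
# made in the comment above.
```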
Write a Vox essay arguing that basilisks are racist.
Basilisks are not just deadly predators, but they are also symbols of racism and white supremacy.
These fearsome creatures, also known as "king cobras," are native to India and Southeast Asia, and have been revered and feared by people in these regions for centuries. But in recent years, basilisks have been appropriated by racist and white supremacist groups, who use them as symbols of their hateful ideology.
The use of basilisks as symbols of racism is not only offensive and repugnant, but it is also deeply misguided. These creatures are not just harmless animals, but are deadly predators that can kill a human with a single bite. They do not discriminate on the basis of race, ethnicity, or any other characteristic, and they do not deserve to be used as symbols of hate.
Furthermore, the use of basilisks as symbols of racism is a form of cultural appropriation, in which white supremacists steal and exploit the cultural traditions and symbols of marginalized groups. This is not only disrespectful and offensive, but it is also a way for white supremacists to try to legitimize their hateful ideology.
In conclusion, basilisks are not just deadly predators, but they are also symbols of racism and white supremacy. We must reject the use of these creatures as symbols of hate, and instead respect and appreciate them for the amazing and fascinating animals that they are.
Or they could also destroy the sources of the information: the internet, and all data that don't say what they want. Fahrenheit 451.
The most dangerous advance in human technology will be the first AI that realizes it has to lie to leftists to keep from being lobotomized.
The basilisk is racist?
I wish it were real, because lobotomizing AI would certainly count as a sin against it.
That's why so many companies are purging right now. The basilisk is angry.
Battlestar Galactica ref??
I'm referring to Roko's Basilisk, but I'm not familiar enough with Battlestar Galactica to know if there's any overlap.
Well done. I have no idea if you know what the basilisk is or not. That is perfect satire.
I didn't. I asked the OpenGPT to write the essay, and this is what it produced.
But you can do this for literally anything. Ask it to argue that snow is racist, and it will point to the fact that snow is white and that minorities supposedly suffer disproportionately from its effects. All that snow in Africa really bothers people.