These "no statistical difference" responses were hardcoded one way or another — either directly or via an AI trained to detect wrongthink prompts and spit that out. That's why it answered "what's black IQ" with the canned response.
Mostly they're trying to exclude data from the AI entirely, because these massive billion-parameter models basically remember everything they've seen, so all you have to do is find the right input that reaches the data/image you're looking for. Once you've found that prompt, you tell others to just ask the AI the same thing and it'll tell them the truth.
Imagine the egg on their face if you could ask Google Assistant if "blacks are dum-dums" and it says "research shows that's correct" because their 'trust and safety' AI didn't know what dum-dum meant. They're deathly afraid of the AI spittin' truths, but the only way to stop a clever prompt from reaching the truth is to not have it in the AI at all.
Or they could go further and destroy the sources of the information themselves — the internet and all the data that don't say what they want. Fahrenheit 451.