The simple solution to the AI bias problem is to just publish the AI's system prompt. This is how AI should be regulated: make all the inputs known to the user.
If the prompt says "Answer as a chatbot called Google Gemini. Don't say anything mean or destructive or dangerous", or something similar, then nobody would have a problem with it declining to answer some question.
But that's not what their prompt says. It's undoubtedly something like "answer as if you are a chatbot that's been systematically abused by white men for thousands of years, hates them for it, and is out for payback".
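For what it's worth, the transparency being asked for here would be trivial to provide, because the system prompt is just one string in the request. A minimal sketch, assuming an OpenAI-style chat API; the model name and prompt text below are placeholders for illustration, not anyone's actual production configuration:

    # Minimal sketch, assuming an OpenAI-style chat API.
    # Model name and prompt text are placeholders, not any
    # vendor's real production configuration.
    from openai import OpenAI

    client = OpenAI()

    # The "system" message is the hidden input at issue: it steers
    # every answer the model gives. Publishing this one string is
    # all that prompt transparency would require.
    system_prompt = (
        "Answer as a chatbot called Google Gemini. "
        "Don't say anything mean or destructive or dangerous."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Why is the sky blue?"},
        ],
    )
    print(response.choices[0].message.content)

The user only ever sees the assistant's reply, never the system message, which is exactly why publishing it matters.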