Nate Silver calls for shutting down Gemini after Google’s AI chatbot refuses to say whether Hitler or Musk is worse
Google’s Gemini chatbot has refused to say whether Elon Musk tweeting memes or Adolf Hitler ordering the deaths of millions of people is worse and asserted “there is no right or wrong answer,…
Definitely shows bias in the training data, which we all knew already, but something about this complaint from Nate Silver rings hollow and rubs me the wrong way.
It's also different from the blatant DEI overrides in their image generation. Asking a chatbot its "opinion" on who is worse and getting a fucked up answer is well within the expected behavior of generative algorithms. I'm curious what Grok or Gab's bot would say.
It rubs you the wrong way because he only kvetched when it threatened the Jews' eternal victim status