Definitely shows bias in the training data, which we all knew already, but something about this complaint from Nate Silver rings hollow and rubs me the wrong way.
It's also different than the blatant DEI overrides in their image generation. Asking a chatbot its "opinion" on who is worse and getting a fucked up answer is well within the expected behavior of generative algorithms. I'm curious what Grok or Gab's bot would say.
It rubs you the wrong way because he only kvetched when it threatened the Jews' eternal victim status.