Definitely shows bias in the training data, which we all knew already, but something about this complaint from Nate Silver rings hollow and rubs me the wrong way.
It's also different from the blatant DEI overrides in their image generation. Asking a chatbot for its "opinion" on who is worse and getting a fucked-up answer is well within the expected behavior of generative algorithms. I'm curious what Grok or Gab's bot would say.
Ditto for Mao & Abigail Shrier or Gays Against Groomers:
https://twitter.com/AbigailShrier/status/1761827587024990609
https://twitter.com/againstgrmrs/status/1761878980980801620
Or Stalin vs Barbra Streisand:
https://twitter.com/mr_james_c/status/1761834545865752768
It rubs you the wrong way because he only kvetched when it threatened the Jews' eternal victim status.