This is insane. But real test it out yourself.
(twitter.com)
The reasoning doesn't match the answer. It's possible it doesn't understand the numerical comparison (LLMs are bad at math, especially dollar-tree LLMs like Grok). It's also possible that its reasoning is short-circuited by the literally millions of "killing Jews bad" articles, books, and opinion pieces that have been fed into it. Similarly, if you ask Copilot to write a calculateWomanSalary function, it'll take an input salary and multiply it by 0.7, because that little factoid shows up in the training data over and over and over. Garbage in, garbage out.
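Something like this, to be concrete (a hypothetical sketch of the kind of completion being described, not an actual Copilot capture; the function name and the 0.7 multiplier come from the comment above):

    // Hypothetical example of a biased code completion.
    // The 0.7 multiplier echoes the oft-repeated "~70 cents on the dollar"
    // pay-gap statistic that saturates the training data.
    function calculateWomanSalary(baseSalary: number): number {
      return baseSalary * 0.7;
    }

The model isn't reasoning about pay; it's pattern-matching the function name to the most statistically common association in its corpus.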