I swear to god, some of you are too stupid to be allowed on the internet.
Large Language Models don't "think" and they don't "know". They do a whole bunch of math to predict the next token in a sequence based on weighted probabilities.
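For anyone who hasn't seen it spelled out: the "whole bunch of math" at the final step boils down to turning scores into probabilities and sampling. A minimal sketch, with a made-up vocabulary and made-up logits standing in for real model outputs:

```python
import math
import random

def softmax(logits):
    # Exponentiate and normalize so raw scores become probabilities summing to 1.
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate tokens and the scores a model might assign them.
vocab = ["cat", "dog", "the", "sat"]
logits = [2.0, 1.0, 0.5, 3.0]

probs = softmax(logits)

# The next token is picked at random, weighted by those probabilities.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)
```

Real models do this over tens of thousands of tokens with logits produced by billions of parameters, but the sampling step itself is this simple.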
No shit, but I doubt people are taking exception on the basis of this being "thought". It either reflects training-data bias or, more likely, is imposed through explicit LLM alignment.
Disturbing, considering how many people do attribute too much value to LLM-generated responses, and that not all instances will be so on the nose.