This has nothing to do with model training; this is just the model deciding The Onion was the most credible result for that question. The guy who made this post on Gab is trying to sound smarter than he is.
The model doesn't decide anything by itself. It's programmed to learn from online sources (that's what the "learning" part refers to), and then it grabs things based on keywords entered by the end user. Possibly the biggest mistake programmers have made with this is calling machine learning "machine learning," since it implies sentience where there is none. You normies take abstract descriptions and terminology literally, which makes any real discussion about AI extremely painful to watch. You won't be told, though, even when people who actually study this try to explain.
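To make the "grabs things based on keywords" part concrete, here's a toy sketch of that kind of matching. The two-document corpus, the query, and the scoring are all made up for illustration; no real search engine or language model pipeline is anywhere near this simple.

```python
# Toy illustration only: ranking documents by raw keyword overlap
# with the user's query. The two-document "corpus" is invented.

def keyword_score(query: str, document: str) -> int:
    """Count how many distinct query words also appear in the document."""
    return len(set(query.lower().split()) & set(document.lower().split()))

corpus = {
    "onion-article": "cia realizes it has been using black highlighters all these years",
    "news-article": "agency updates its document redaction and classification guidelines",
}

query = "why did the cia use black highlighters"

# Pick whichever document shares the most keywords with the query.
# Nothing here models credibility, so a satirical source can easily
# "win" on keyword overlap alone.
best = max(corpus, key=lambda name: keyword_score(query, corpus[name]))
print(best)  # -> onion-article
```

The point of the toy: keyword overlap says nothing about whether a source is credible, which is exactly how a satire site can end up as the top result.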
The model probably shouldn't have been trained in a way that would let it decide The Onion was ever a credible result.
If you meant "relevant result" instead, the model still shouldn't have given it as a response to a sincere question.
That's on whoever wrote the algorithms that decide which article to pull up for a particular phrase.
"Real models have never been tried!"