AI trained on science papers spews misinformation
(archive.ph)
Comments (22)
Indeed. Seems like it was just copy-pasting any results related to the query.
Honestly, given how often research conclusions end up contradicting conclusions from other research, the very idea of a bot for this kind of thing seems like an inevitable exercise in futility.
Given how often research conclusions end up contradicting the very research they're supposedly drawn from, I think this was a fool's errand from the start.
so just like regular "science" does now...
Looks like it was giving ambiguous answers because it saw all the papers arguing with one another over this and that. When one scientist makes a claim and writes a paper, it's the job of other scientists to try to debunk him, basically. So this is what the computer sees, and it gives garbage "yes and no at the same time" answers.
I do not understand why they are surprised at this result.
Whaaaat? Language and knowledge are too complex to expect AI to interpret, understand, and summarize technical writing for laypeople with no background? Who could have foreseen this?
But seriously, the concept being anything more than an organizer that catalogs studies and directs people to the actual source papers should have been seen as a red flag.
In my experience with the latest Stable Diffusion releases it's actually possible to get a computer to understand highly specific concepts such as "anime girl with huge boobs doing yardwork" or "anime girl with huge boobs buying gatorade at a gas station" or "anime girl with huge boobs on a golf course".
There's a common theme with your examples, but I can't quite put my hands on it...
Shaq couldn't put his hands on some of these honkers.
They're all very wholesome.
Honestly, even if they could craft an AI up to the task language-comprehension-wise, they'd still have a problem trying to parse a definitive answer out of a database of articles that frequently overstep their scope and make inconsistent, even mutually contradictory, claims of fact.
Peer review has been dropping the ball for a while now on correcting the verbiage of people who overstate one implication of the data as singular fact in their conclusions.
I think they shut it down so fast because they realised it would make 95% of academia obsolete...
That's all AI is or ever will be.
I don't know. That's basically what human NPCs do, but they still fool people sometimes.
That's just because the human brain is designed to recognize and respect the human face. Have a bot spew NPCisms and it wouldn't get far.
Yeah, it's being fed random bullshit.
Garbage inputs yield garbage outputs, but because the dogma is that the inputs are good, they'll use this to reinforce their already-held beliefs.
Not surprising. AI is making a prediction about what word comes next. It's not really capable of thinking about whether the paper as a whole makes sense.
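For anyone who hasn't seen "predicting the next word" spelled out, here's a minimal toy sketch in plain Python. It's a bigram model over a made-up corpus (real systems use transformers over subword tokens, not word counts), but the generation loop has the same shape, and note that nothing in it ever checks whether the output is true:

from collections import Counter, defaultdict
import random

# Tiny made-up corpus with deliberately contradictory "findings".
corpus = (
    "the study found the drug was effective . "
    "the study found the drug was not effective . "
    "the drug was well tolerated in the study ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_words, seed=0):
    # Emit words one at a time, conditioned only on the previous word;
    # this is the "predict the next word" loop, nothing more.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        options = follows[out[-1]]
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 12))

Feed it sources that disagree and the counts just blend the contradictions together, which is pretty much the "yes and no at the same time" answers described above.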
Sounds like the AI was poorly trained.
Here's a thread of some funny results from the AI: https://nitter.pussthecat.org/lporiginalg/status/1593055205465391104 https://archive.ph/nzfV9
I kind of figured it would be nonsense and too much sense.