After several nonsensical misdiagnoses and non-diagnoses by doctors, I reluctantly agree that a preprogrammed chart couldn't be less useful.
I suppose the AI wouldn't keep insisting I take an ineffective treatment that caused internal bleeding just because it didn't recall that documented side effect and so presumed I made it up.
I assume the AI would "recall" the side effect list of the drugs in its database...
You could at least look up the side effects, point out that you have one, and ask for a different treatment.
I've been watching clips of House, and am realizing how much of it is just wrong, and propaganda.
I remember IBM bragging about Watson being used in India, I think, to advise medics: first it learned from US doctors, then it was rolled out in less developed countries. It was hailed as a great success at the time. That was like 5-6 years ago, I think.
Watson trained on published research in 2022:
“I think my leg is broken”
“have you had your vaccines and three boosters?”
“erm”
“fuck off and die then”
In Canada, it just jumps to the last line directly.
I think I remember that happening.
I used to work someplace (though I didn't work on this particular project) that was trying to develop an AI diagnostic aid, and the problem they kept running into was that it was easy to get something that was pretty accurate, but when it was wrong it wouldn't always fail gracefully or predictably the way a human would. It's like when you see the self-driving cars rear-end someone at 60mph: humans do that too, but often it's because they're tired, or drunk, or distracted; so it's easier to reduce the probability of those upstream causes. With an AI the upstream cause could be something like "the car's hazard lights made the system think the parked car was a road sign": how do you "fix" something like that? And how do you know you've "fixed" it without introducing some other seemingly random/arbitrary glitch?
When you're designing things that can kill people, engineers and regulators really like predictability.
"the car's hazard lights made the system think the parked car was a road sign"
Often that's just a guess by the engineers too. One of the problems we had with Watson was a flaw shared by neural networks in general: a lack of self-reflection. They are pretty much black-box systems that are almost impossible to interrogate. In these settings it's pretty critical to know why something happened or why an expert system answered a question the way it did. You can do an investigation, probe the I/O logs, and try to reproduce the scenario, but you can't just ask "why did you do that?" the way you can with a human.
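To give a sense of how those guesses end up getting made: since you can't ask the network why, about the best you can do is perturb the input and watch what the output does. A rough sketch below (the "model" and image are stand-ins I made up, not anyone's real system):

```python
# Rough sketch of how that kind of "guess" gets made in practice: you can't ask
# the network why, so you perturb the input and watch what the output does.
# The model and image here are stand-ins, not any real perception stack.
import numpy as np

def predict(image):
    # Stand-in black-box classifier: keys on the brightness of one region.
    return "road_sign" if image[40:60, 40:60].mean() > 0.5 else "vehicle"

# Synthetic image: mostly dark, with one bright patch the model keys on.
image = np.full((100, 100), 0.3)
image[40:60, 40:60] = 0.9
baseline = predict(image)

# Mask patches one at a time; if the label flips, that patch probably mattered.
for y in range(0, 100, 20):
    for x in range(0, 100, 20):
        probe = image.copy()
        probe[y:y + 20, x:x + 20] = 0.0
        if predict(probe) != baseline:
            print(f"masking patch ({y}, {x}) flips the prediction from {baseline!r}")
```

And even when a probe like that fingers a patch of pixels, "why" that patch mattered is still a story the engineers write after the fact.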
True, but even if you could interrogate it and understand the answer, it's effectively an alien intelligence whose decision-making process is totally foreign to our own.
"Why did you interpret the hazard lights as a road sign?"
"The hue of the pixel at X=753, Y=1063 had a .0006 higher correlation to training data associated with a road sign hazard light than that of a vehicle hazard light. Therefore it was classified as a road sign hazard light"
When humans make this kind of mistake, it's a lot easier for us to understand the reasoning behind the defective thought process and develop some sort of higher-level organization or process that either makes that failure mode less likely or limits its scope. But even then it's extremely hard: "safety regulations are written in blood", as they say.
It's easy, just avoid the problem entirely and rebuild cities and roads with AI electric cars at the core. Two-metre-tall barriers around every road, cities that resemble giant rat labyrinths, etc. That'll be a boom for the economy, amirite?
What did you say? It's crazy? It defeats the purpose of going "green" in the first place? lol, lmao even.
Heh, I was speaking to a contractor some years back. Their CEO had a fancy new car with an automatic parking feature, and, chuffed to bits with it as he was, he wanted to show it off, so he took it to a multi-storey car park and got ready to demonstrate...
And the car mistook the shadow in front of the concrete wall for a completely open space and started reversing at full speed. Fortunately, the CEO wasn't a complete idiot and managed to bring the car to a stop before writing it off...
If anything, this should be a massive condemnation of the current state of medical doctors.
About half the time they send you out to be tested by various devices and just listen to the technician's findings.
Have you ever played Akinator? That's basically what almost all doctors are. You come to them with an initial complaint, then they ask you questions that rule out more and more diseases until there's only one possibility left. With the proper data set, it would be trivial to program a web app that can replace 90% of doctor's visits.
People have been conditioned to see doctors as hyper-intelligent demigods who do the impossible, but in reality they're basically just walking flow charts. Every disease has a set way to diagnose and treat it, and they're just following the script. Even the lab results are already interpreted for them by a computer. They're not doing anything novel or creative at all.
There are things that an AI genuinely cannot do, such as physical exams and, of course, surgery, but for the most part there's no reason a sufficiently trained AI couldn't do what they do.
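To make the "walking flow chart" point concrete, here's about all the code such a question tree needs (the questions, conditions, and advice below are made up for illustration, not real clinical guidance):

```python
# Minimal sketch of a diagnosis flow chart as a hard-coded yes/no question tree.
# Every condition, question, and recommendation here is made up for illustration.
TREE = {
    "question": "Do you have a fever?",
    "yes": {
        "question": "Do you also have a sore throat?",
        "yes": "possible strep throat -- order a rapid strep test",
        "no": "possible flu -- supportive care, recheck in 48 hours",
    },
    "no": {
        "question": "Has your cough lasted more than 3 weeks?",
        "yes": "refer for a chest X-ray",
        "no": "likely a common cold -- no antibiotics",
    },
}

def run(node):
    # Walk the tree, asking yes/no questions until we reach a leaf (the "diagnosis").
    while isinstance(node, dict):
        answer = input(node["question"] + " (yes/no) ").strip().lower()
        node = node["yes" if answer.startswith("y") else "no"]
    return node

if __name__ == "__main__":
    print(run(TREE))
```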
I'd bet that within 25 years, almost all GPs will have been replaced by an app that does what they do for free. That is, assuming TPTB don't torpedo it by paying some black people to call it racist. Or assuming the people making it don't lobotomize it to hide the fact that certain groups of people objectively live less healthy lives than others. Or assuming they don't train it on junk data from Berkeley which makes it diagnose everyone as having gender dysphoria. So actually there are a lot of reasons this might not happen. And all of them involve runaway leftism.
The current trend has been to replace frontline primary care MDs with undertrained, "cheaper" female nurse practitioners who are dangerous because they don't even know what they don't know.
Like everything, what will be lost with the introduction of AI will be nuance.
Essentially every current prescription for antibiotic drops for pink eye is completely useless and inappropriate because it's 90%+ viral in origin in children, but every child gets treated anyway because moms demand it, daycares and schools have protocols demanding it, doctors are lazy/pushovers/opportunists, it's not socially acceptable to "do nothing rather than something", etc.
An AI can be programmed to inappropriately dispense bad medicine to every child with a gunky eye as well, but it also upends and ignores all the social dynamics at play, because the "right" treatment isn't always what's written in the textbook.
Said app will be owned by the insurance companies/government, and certain decision boxes will only be available based on your social credit score. All trees will end at KYS.
"Clinical reasoning" is actually very easy to automate. The trick is to ask the right questions of the patient.
What you should take from this study is that "common" medical questions are common enough that an AI can reliably copy a human doctor's answer. It's the uncommon medical questions where the AI might fall flat, as it's no longer capable of copying a human doctor's answer.
In my experience, the doctors aren't any better. Once you pass a certain rarity they get angry that it's not something common.