I used to work someplace (though I didn't work on this particular project) that was trying to develop an AI diagnostic aid, and the problem they kept running into was that it was easy to get something that was pretty accurate, but when it was wrong it wouldn't always fail gracefully or predictably the way a human would. It's like when a self-driving car rear-ends someone at 60mph: humans do that too, but often it's because they're tired, or drunk, or distracted, so it's easier to reduce the probability of those upstream causes. With an AI the upstream cause could be something like "the car's hazard lights made the system think the parked car was a road sign": how do you "fix" something like that? And how do you know you've "fixed" it without introducing some other seemingly random/arbitrary glitch?
When you're designing things that can kill people, engineers and regulators really like predictability.
Heh, I was speaking to a contractor some years back whose CEO had a fancy new car with an automatic parking feature ... chuffed to bits with his new car as he was, he wanted to show it off, so he took it to a multi-story car park and got ready to demonstrate ...
And the car confused the shadow in front of the concrete wall for a completely open space and started reversing into it at full speed. Fortunately, the CEO wasn't a complete idiot and managed to bring the car to a stop before writing it off...