I used to work at a place (though I didn't work on this particular project) that was trying to develop an AI diagnostic aid, and the problem they kept running into was that it was easy to get something pretty accurate, but when it was wrong it wouldn't always fail gracefully or predictably the way a human would. It's like when you see a self-driving car rear-end someone at 60mph: humans do that too, but often it's because they're tired, or drunk, or distracted, so it's easier to reduce the probability of those upstream causes. With an AI the upstream cause could be something like "the car's hazard lights made the system think the parked car was a road sign". How do you "fix" something like that? And how do you know you've "fixed" it without introducing some other seemingly random/arbitrary glitch?
When you're designing things that can kill people, engineers and regulators really like predictability.
"the car's hazard lights made the system think the parked car was a road sign"
Often that's just a guess by the engineers, too. One of the problems we had with Watson is a flaw shared by neural networks in general: a lack of self-reflection. They're pretty much black-box systems that are almost impossible to interrogate. In these settings it's critical to know why something happened, or why an expert system answered a question the way it did. You can run an investigation, probe the I/O logs, and try to reproduce the scenario, but you can't just ask "why did you do that?" the way you can with a human.
True, but even if you could interrogate it and understand the answer, it's effectively an alien intelligence whose decision-making process is totally foreign to our own.
"Why did you interpret the hazard lights as a road sign?"
"The hue of the pixel at X=753, Y=1063 had a .0006 higher correlation to training data associated with a road sign hazard light than that of a vehicle hazard light. Therefore it was classified as a road sign hazard light"
When humans make these mistakes, it's a lot easier for us to understand the reasoning behind the defective thought process and to develop some sort of higher-level organization or process that either makes that failure mode less likely or limits its scope. Even then it's extremely hard: "safety regulations are written in blood", as they say.
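For a sense of what "asking why" actually gets you, here's a rough sketch (assuming a PyTorch/torchvision image classifier; the model, weights, and input are stand-ins, not anything a real car runs). The closest thing to an answer is a grid of per-pixel attribution scores, i.e. exactly the kind of "pixel (753, 1063) correlated slightly more with class A" non-explanation above.

    import torch
    import torchvision.models as models

    # Stand-in classifier and "camera frame"; any torchvision model would do.
    model = models.resnet18(weights=None)
    model.eval()
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    # Forward pass, pick the winning class, then take the gradient of its
    # score with respect to every input pixel (a plain saliency map).
    logits = model(image)
    predicted = logits.argmax(dim=1).item()
    logits[0, predicted].backward()
    saliency = image.grad.abs().max(dim=1).values  # collapse colour channels

    # The "explanation" is just a 224x224 grid of numbers: which pixels
    # nudged the decision hardest, not a chain of reasoning.
    top = saliency.flatten().topk(1)
    print(f"most influential pixel: {top.indices.item()}, weight: {top.values.item():.6f}")

And gradient saliency is about the most charitable reading of that kind of explanation; it still bottoms out in "these numbers were slightly bigger than those numbers".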
It's easy: just avoid the problem entirely and remake cities and roads with AI electric cars at the core. Two-metre-tall barriers around every road, cities that resemble giant rat labyrinths, etc. This will be a boom for the economy, amirite?
What did you say? That's crazy? It defeats the purpose of going "green" in the first place? lol, lmao even.
Heh, I was speaking to a contractor some years back whose CEO had a fancy new car with an automatic parking feature ... chuffed to bits with it as he was, he wanted to show it off, so he took it to a multi-storey car park and got ready to demonstrate ...
And the car mistook the shadow in front of the concrete wall for a completely open space and started reversing at full speed. Fortunately, the CEO wasn't a complete idiot and managed to bring the car to a stop before writing it off...