In the Maps of Meaning lectures, JP brings up some interesting points about AI. He says one of the key problems isn't what the AI should pay attention to, but what it should ignore. Ultimately we ignore far more than we take in while doing something, because we're focused on the task at hand.
Even if a neural net makes decisions that are hard to follow, the AI would still be configured to accomplish a specific task, so you could work backward from that criteria to figure out where the problem is. You can't just say the system found an issue and not be able to know why it found it. The system is useless without the reasoning behind the decision.
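To make that concrete, here's a minimal sketch of what I mean (my own illustration, not JP's, using Python and scikit-learn with a standard toy dataset): a simple interpretable model where you can print the actual criteria it learned, so the "why" comes with the answer instead of just the answer.

```python
# Minimal sketch, assuming scikit-learn is installed. The dataset is just a
# stand-in "task"; the point is working backward from the trained model to
# the criteria that drove its decisions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the learned decision rules, i.e. the "why" behind
# each prediction, not just the prediction itself.
print(export_text(clf, feature_names=feature_names))
```

Obviously a deep neural net isn't this transparent, but the same principle applies: if you know the task and the criteria it was trained against, you have somewhere to start when the output doesn't make sense.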
Note - I work in IT, which is why I'm interested in machine logic. AI isn't some magical answer; it's a managed system, and the rules behind it are either understood or useless, because it's ultimately a tool, not some conscious entity making its own decisions.