I rejected my immediate dismissal of the article based on the title and thought it might be a unicorn and have value. Pondering it over, I was curious whether it'd cover graphical patterns or visual nuances that could conceivably be harder to see in skin tones with more melanin. That's not impossible and would be an interesting topic. However...
The results revealed that radiologists may have literal blind spots when it comes to reading Black patients’ x-rays.
Obermeyer’s new study showing how algorithms can uncover bias comes with a catch: Neither he nor the algorithms can explain what the algorithms see in x-rays that doctors miss. The researchers used artificial neural networks, a technology that has made many AI applications more practical, but is so tricky to reverse engineer that experts call them “black boxes.”
Standard SJW nonsense, regret skimming the article.
In the Maps of Meaning lectures, JP brings up some interesting points about AI. He says that one of the key problems isn't what the AI should pay attention to, but what it should ignore. Ultimately we ignore more than we take in while doing something, because we focus on the task at hand.
Even if the neural net makes decisions that are hard to follow, the AI is configured to accomplish a specific task, so the researchers could work backward from those criteria to figure out what the problem is. You can't just say the system found an issue without being able to know why it found it. The system is useless without the reasoning behind the decision.
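For what it's worth, the "work backward from the criteria" idea does have a concrete counterpart in ML practice: techniques like permutation importance probe a black-box model from the outside by perturbing its inputs and watching what happens to its accuracy, without ever reading its internals. A minimal sketch on made-up synthetic data (all the names and numbers here are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only feature 0 actually determines the label.
X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(float)

# A tiny logistic model trained by gradient descent -- our stand-in "black box".
w = np.zeros(4)
for _ in range(200):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

def accuracy(Xs):
    return np.mean(((1 / (1 + np.exp(-Xs @ w))) > 0.5) == y)

base = accuracy(X)

# Permutation importance: shuffle one feature at a time and measure
# how much accuracy drops. A big drop means the model relies on that
# feature, even if we can't explain *how* it uses it.
drops = []
for j in range(4):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    drops.append(base - accuracy(Xp))

print([round(d, 3) for d in drops])
```

Shuffling feature 0 should wreck the accuracy while the other features barely matter. That tells you *what* the model depends on, but notably not *why*, which is roughly the gap the article is describing.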
Note - I work in IT, which is why I'm interested in machine logic. AI isn't some magical answer; it's a managed system, and the rules behind it are either understood or useless, because it's ultimately a tool, not some conscious entity making its own decisions.