SAIL: AI has some issues
AI, for all its hype, remains clunky, concerning, and at times outright racist. Or, more accurately, AI exposes existing racism. Let's have a look at a few interesting stories this week.
AI can't figure out soccer: "The camera was programmed to automatically follow the ball, removing the need for a human camera person. Outside of taking a person’s job, it seems kind of neat in theory. The issue was that the camera couldn’t tell the difference between the ball and the bald head, continually focusing on a man standing around not doing much, rather than actually let fans see the action." Putting this into perspective: identifying and tracking a ball is a reasonably known and solved problem in AI. But glitches like this happen, just like they do with self-driving cars. Stories like this should dampen the bold proclamations of AI's impending ascendancy. If it can't track a soccer ball, how do we expect it to identify a student's knowledge level and provide required interventions?
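For readers curious what "solved" looks like here: the broadcast system in the story isn't public, but even a classical, pre-deep-learning tracker can follow a ball much of the time. Below is a minimal sketch using OpenCV's color masking; the HSV bounds, file name, and size threshold are all hypothetical, and the comments mark exactly where a bald head can fool it.

```python
# A minimal sketch of classical color-based ball tracking with OpenCV.
# Illustrative only: real broadcast trackers use learned detectors,
# not a hand-tuned HSV mask. The HSV bounds below are made-up values
# for a bright orange ball.
import cv2
import numpy as np

BALL_HSV_LOW = np.array([5, 120, 120])    # assumed lower hue/sat/value bounds
BALL_HSV_HIGH = np.array([20, 255, 255])  # assumed upper bounds

cap = cv2.VideoCapture("match.mp4")       # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BALL_HSV_LOW, BALL_HSV_HIGH)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Pick the largest matching blob and fit a circle around it.
        c = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(c)
        # Failure mode from the story: any roundish, similarly colored
        # region (a bald head under stadium lights) also passes this test.
        if radius > 5:
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Production systems replace the color mask with a learned detector, but the brittleness is the same in kind: the system matches appearance, not meaning.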
How a racist algorithm kept Black patients from getting kidney transplants: "researchers found that more than 700 Black people suffering from kidney disease were given healthier scores than their white counterparts despite displaying the same conditions and risks. In other words, one in three Black people treated within the healthcare network were misdiagnosed". The attention now being paid to AI in education raises similar concerns, though not with the same immediate life-threatening impact. It's often been stated that the best predictor of success at university in the US is your zip code. AI, trained on historical data, will carry that type of bias into the next generation.
Speaking of data: last week, I posted about AI systems that attempt to learn from little data by paying attention to "soft labels" rather than the hard labels that are more common in training models. That's a theme of this article as well: "we latch onto statistics when we feed AI so much data, and that we ascribe to systems intelligence, when in reality, all we have done is created large probabilistic systems that by virtue of large data sets exhibit things we ascribe to intelligence." The article goes on to address an area of concern for education as well: how does AI intersect with humans? How do we build AI approaches that value humanity, rather than treating us as entities to be optimized or replaced?
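To make the soft-label idea concrete: a hard label says an example belongs to exactly one class, while a soft label is a probability distribution over classes, so each example carries more information. A minimal sketch, assuming PyTorch 1.10+ (whose cross_entropy accepts probabilistic targets) and made-up numbers:

```python
# A minimal sketch of hard vs. soft labels for a 3-class problem.
# All numbers are hypothetical; this is not the training setup from
# the article I linked, just an illustration of the distinction.
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0]])  # model output for one example

# Hard label: the example is class 0, with full confidence.
hard_target = torch.tensor([0])
hard_loss = F.cross_entropy(logits, hard_target)

# Soft label: graded membership across classes ("mostly class 0,
# somewhat class 1"), expressed as a probability distribution.
soft_target = torch.tensor([[0.7, 0.25, 0.05]])
soft_loss = F.cross_entropy(logits, soft_target)

print(hard_loss.item(), soft_loss.item())
```

The design point is that the soft target penalizes the model for being confidently wrong about the *shape* of the distribution, not just the top class, which is one way a system can squeeze more learning out of fewer examples.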