SAIL: Ethics, Dangers of AI, Brains and AI
Welcome to the inaugural "issue" of Sensemaking, AI, and Learning (SAIL).
The goal of this weekly newsletter is to make sense of important developments in AI and how they impact learning and education.
AI is a complex, integrated, and rapidly evolving space, ranging from highly technical papers to broad concerns about ethics and bias that influence all aspects of modern life. Our interest is the intersection where humans and machines overlap in their shared roles, and where the question of "what does it mean to be human in a digital age?" arises.
The format of the newsletter will evolve, but it will generally focus on research and its implications by sharing academic articles and general news. We have a number of interviews planned with leading AI experts who will help us make sense of how AI specifically impacts knowledge processes such as learning and sensemaking.
What we found interesting this week:
On the subject of "is AI like the human brain?": AI is usually described as having significant overlap with the human brain in terms of how it learns. Some of the most innovative work now underway, at companies like DeepMind and OpenAI, relies on neural networks that draw inspiration and language from neuroscience. Some researchers argue that the "mental model of AI being like a human mind, or a human mind being like AI, is fundamentally flawed." The real issue, they contend, is the perpetuation of existing bias, which is a by-product of current power structures.
Other research presents surprising examples of shared functionality: "a 'spooky correspondence' between the brain—a product of evolution and lifetime learning—and AlexNet—designed by computer scientists and trained to label object photographs."
Learning from little data discusses an area where humans vastly exceed AI. We have domain transference: when we learn something new, we can rapidly apply that insight across a range of different settings. This attribute is often described as "general intelligence". AI, in contrast, is primarily about domain-specific, narrow intelligence, and even within those narrow domains, machine learning requires enormous amounts of data. Researchers are trying to reduce that data need by using "soft labeling" to capture shared attributes of images (a bit like transference, but only in the sense that soft labels allow attributes to be identified rather than an entire image).
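For readers less familiar with the term, "soft labeling" in machine learning generally means replacing a one-hot target ("this image is a cat, full stop") with a probability distribution that spreads some mass across related classes, so the model learns shared attributes rather than a single identity. Here is a minimal sketch of that general idea; the class names, probabilities, and model outputs are invented for illustration and are not taken from the specific research discussed above.

```python
import numpy as np

# Hard label for a "cat" image: all probability mass on a single class.
hard_label = np.array([0.0, 1.0, 0.0])      # classes: [dog, cat, fox]

# Soft label: some mass shared with visually related classes, encoding
# attribute overlap (fur, ears, whiskers) rather than one fixed identity.
soft_label = np.array([0.15, 0.70, 0.15])

def cross_entropy(target, predicted, eps=1e-12):
    """Cross-entropy between a target distribution and model predictions."""
    return -np.sum(target * np.log(predicted + eps))

# Hypothetical softmax output from a classifier.
model_output = np.array([0.25, 0.60, 0.15])

print(cross_entropy(hard_label, model_output))  # penalizes anything but "cat"
print(cross_entropy(soft_label, model_output))  # gives partial credit to related classes
```

The intuition is that training against the softer target rewards the model for recognizing what the classes have in common, which is one way researchers try to get more out of less labeled data.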