SAIL: ELAI recordings, AI Safety, Near-term AI/learning
Welcome to Sensemaking, AI, and Learning (SAIL). A regular look at how AI is impacting learning.
We held our fourth online Empowering Learners for the Age of AI conference last week. We sold out at 1500 people (a Whova and budget limit). The recordings/playlist from the conference can now be accessed here.
We're meeting in person at ASU in early December. Our keynotes are here. Panels will be posted this week. As a loyal SAIL reader, you get a 25% discount code when you register: ELAIdiscount
Universities have been making progress toward some flavor of modern architecture, moving, slowly, from legacy systems to cloud-based ones. AWS, IBM, Google, Oracle, and Microsoft all offer flavors of cloud-based services that enable capabilities that were unimaginable a decade ago. Modernizing the technology infrastructure is a necessary, but not sufficient, condition for the AI wave that is washing across the university sector. The emerging AI/ML stack (ops?) built on this modern infrastructure is what will really drive organizational change. Modern infrastructure + AI/ML layer = universities entering the AI conversation.
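To make that "AI/ML layer" slightly more concrete, here's a minimal sketch in Python of the kind of thin service a university might run on top of modernized infrastructure: institutional context plus a student question goes off to a cloud-hosted model endpoint. Everything specific here, the endpoint URL, the payload shape, the advise_student helper, is hypothetical and for illustration only; a real deployment would use a vendor's SDK and proper authentication.

```python
import os
import requests  # standard HTTP client; the endpoint below is hypothetical

# Hypothetical cloud-hosted model endpoint and credentials.
# A real deployment (AWS, Google, Azure, etc.) would use that vendor's SDK and auth.
ENDPOINT = os.environ.get("MODEL_ENDPOINT", "https://example.edu/ai/v1/advise")
API_KEY = os.environ.get("MODEL_API_KEY", "")


def advise_student(context: str, question: str) -> str:
    """Send a student's question plus institutional context to a hosted model.

    This is the 'AI/ML layer' in miniature: the university keeps its data and
    systems, while the model runs on cloud infrastructure it doesn't own.
    """
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "context": context,      # e.g., pulled from a cloud-based SIS
            "question": question,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]  # hypothetical response shape


if __name__ == "__main__":
    print(advise_student("Second-year biology major, strong in labs.",
                         "What electives fit my program next term?"))
```

The design point is that the layer itself is small; the heavy lifting happens in the hosted model, which is why the underlying infrastructure modernization matters so much.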
AI & Learning
Fei-Fei Li (Stanford HAI) gave an excellent interview recently. She's alarmed at the lack of university presence in the AI conversation: "I actually wonder, if you combined all the compute resources of all universities in USA today, can we train a ChatGPT model". And later: "When ChatGPT came...my first knee jerk reaction was 'my God, this should be the biggest moment in education sector'".
Google coming for Duolingo? If all the world is digital, every domain is a competitive space for big tech. Any tech/AI product can sit in the crosshairs of Google...
Canva is making a big education play: "a comprehensive teaching and learning platform for every kind of classroom."
Near term impact of AI on education: "Our current instructional design approaches assume that access to expertise is scarce, expensive, and delayed. That’s why we “capture” disciplinary expertise in “content” – so we can economically provide access to expertise to learners. But what if access to expertise was abundant, cheap, and immediate? If your students have access to the internet, that’s the world your students are now living in. How should that fact change the design of your instruction?"
AI Tech
All AI is politics. It's too important and too powerful to not be a state-level threat and concern.
AI and creativity is going to be bonkers. Actors and artists who have long since passed will yet act in a new blockbuster or release another chart-topper. How do y'all feel about this?
We're all kinda waiting for Google's Gemini. Though with Google's plodding execution, that eager wait may turn to disappointment. Some early glimpses of the tool (it almost looks like a platform).
AI Safety
Growing recognition of AI risks, and the threat of regulation, has resulted in organizations trying to get ahead of them. Here's Google's latest: an extension of SAIF and new open-source frameworks.
Researchers at Stanford have released a foundation model transparency index. "Foundation models like GPT-4 and Llama 2 are used by millions of people. While the societal impact of these models is rising, transparency is on the decline. If this trend continues, foundation models could become just as opaque as social media platforms and other previous technologies, replicating their failure modes."
OpenAI has announced a new focus on AI preparedness: "We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks." This new (well, updated) initiative will manage "catastrophic risks".
The UN has a new advisory body on AI: "dedicated to developing consensus around the risks posed by artificial intelligence and how international cooperation can help meet those challenges."
UK sets up world's first AI safety institute: Details are limited, but it will "advance the world's knowledge of AI safety and it will carefully examine, evaluate and test new types of AI so that we understand what each new model is capable of, exploring all the risks from social harms like bias and misinformation through to the most extreme risks of all"
AI risk should be treated as seriously as climate change. Others agree with the idea of a CERN for AI safety.