SAIL: All AI, All The Time (A3T2)
Welcome to Sensemaking, AI, and Learning (SAIL) - a weekly (soon to be more frequent) look at AI in learning and education.
Currently, it seems like decades happen in weeks in AI development. The daily list of innovations is disorienting even to the most ardent followers of AI trends. We are at an inflection point that feels like a "before AI and after AI" moment. A recent NY Times opinion piece, This Changes Everything, captures the urgency of the moment. To my thinking, there is nothing more important for us, collectively and specifically as educators, to make sense of and respond to holistically. How does this change the human condition? What are we becoming? And building on that, what and how should we teach?
On Monday, in conjunction with the Learning Analytics Conference at the University of Texas at Arlington, we kicked off the first of what will be many workshops over the next 12 months, where we will actively work to understand AI and prepare for its anticipated impact on the education sector. If you are interested in hosting a sensemaking session on your campus, please let me know. We (GRAILE) have a rough template and approach, and we'll coordinate planning, data collection, and promotion.
For the foreseeable future, it's All AI, All the Time as we figure out how to orient ourselves to these rapidly developing technologies.
Learning
Khan Academy is going hard on AI
White Paper: Embracing AI for student and staff productivity
In AI, is bigger always better? "To improve further, however, even these more energy-efficient LLMs seem destined to become bigger, using up more data and compute. Researchers will be watching to see what new behaviours emerge with scale. “Whether it will fully unlock reasoning, I’m not sure,” says Bubeck. “Nobody knows.”"
Presentation by Alex Bowers: Unpacking the Caveats of ChatGPT in Education: Addressing Bias, Representation, Authorship, and Plagiarism
AI Progress
GPT-4 was released yesterday. It's both amazing and not. ChatGPT Plus subscribers can engage with the model. Apparently it has been underpinning Bing Chat, explaining some of the variability between ChatGPT and Bing.
China's Baidu launches ChatGPT rival. For broader background on China's intent to be the global leader in AI by 2030, see this translation of their AI Development Plan.
Important reflections on open datasets. "The AI companies that make profits will be ones that either have a competitive moat not based on the capabilities of their model, OR those which don't expose the underlying inputs and outputs of their model to customers, OR can successfully sue any competitor that engages in shoggoth mask cloning." Those reflections were triggered by this announcement. Given the ease of retraining large models, it's perhaps understandable why OpenAI made the (fully worthy of criticism) decision to release roughly no details about GPT-4: "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar."
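The "shoggoth mask cloning" worry is essentially imitation fine-tuning: a model's exposed inputs and outputs become supervised training data for a smaller open model. Here is a minimal sketch of that mechanism using Hugging Face transformers; the base model ("gpt2" as a stand-in), the two example pairs, and all hyperparameters are illustrative assumptions, not anyone's actual pipeline.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical imitation data: prompts sent to a proprietary model's API,
# paired with the responses it returned. Real efforts use tens of
# thousands of such pairs; two are shown to keep the sketch readable.
pairs = [
    ("Explain photosynthesis simply.",
     "Plants use sunlight to turn water and CO2 into sugar and oxygen."),
    ("Summarize the French Revolution in one sentence.",
     "A 1789 uprising that toppled the French monarchy."),
]

# "gpt2" is a stand-in for whatever small open model a cloner might use.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

class ImitationDataset(torch.utils.data.Dataset):
    """Tokenizes prompt+response pairs for standard causal-LM fine-tuning."""
    def __init__(self, pairs):
        self.enc = [tokenizer(p + "\n" + r, truncation=True, max_length=256,
                              padding="max_length", return_tensors="pt")
                    for p, r in pairs]
    def __len__(self):
        return len(self.enc)
    def __getitem__(self, i):
        ids = self.enc[i]["input_ids"].squeeze(0)
        mask = self.enc[i]["attention_mask"].squeeze(0)
        labels = ids.clone()
        labels[mask == 0] = -100  # ignore padding positions in the loss
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

args = TrainingArguments(output_dir="cloned-model",
                         num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=ImitationDataset(pairs)).train()
```

The point is how little machinery this requires: if the moat is only the model's behavior, every customer-facing response leaks training data, which is exactly the competitive dynamic the reflections above describe.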
Impact
I shared some of this last week, but a "productivity elevation" is anticipated with generative AI. Here are three early papers that support this.
Big tech companies are going all in. Microsoft has a vision to reinvent productivity.
So does Google.
And the evidence that AI accelerates programming output is increasingly clear.
Regulation
Large language models need regulation: This is worth tracking. "LLMs that are developed for, or adapted, modified or directed toward specifically medical purposes are likely to qualify as medical devices." In the UK, at least, these LLMs will fall under existing medical devices legislation. The process will be fascinating to watch unfold, largely because the people who provide oversight of these regulations often lack understanding of newer technologies (consider roughly any congressional testimony from a tech company executive, where it is clear that politicians have at best a rudimentary grasp of the technology in question).
EU's regulation aspirations are challenged with new generative AI tools. "In February the lead lawmakers on the AI Act, Benifei and Tudorache, proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list — an effort to stop ChatGPT from churning out disinformation at scale."
DeepMind is a much more important player in the future of AI than OpenAI currently is (though that can change). They recently announced several papers outlining their stance on ethics: aligning AI with human values, understanding actions and transparency, and ensuring that AI doesn't cause harm.
Microsoft lays off its entire AI ethics team.
The US Chamber of Commerce calls for AI regulation.