SAIL: AI Index, Future of Education, Survey of LLMs, Italy doesn't like CGPT
Welcome to Sensemaking, AI, and Learning - a weekly look at AI trends and trajectories and their impact on learning and education.
Since the founding of AI as an idea, it has been pitted against humans. At some point in the future, it will overtake our intelligence and then the world. Or so goes the general narrative around artificial general intelligence. The real question is whether it actually matters. This is one of the more informative assessments I've seen on this. Short view: the global economy and big businesses are already so complex that they are beyond the control of any one individual. AI will amplify that power, but the reality is we are already in systems where we don't have control. The market effects and appeals to general shareholder value have already created a type of AGI - the effects of these systems produce both opportunity and inequality. Here are a few thoughts on AI vs AGI, with an emphasis that we are intelligent in networks, not as individual agents.
AI and Learning
Large language models challenge the future of higher education: "This will require a forward-looking vision, substantial investments and the active involvement and lobbying of educational institutions and their funders." For a Nature publication, this is weak - more a summary of the conversations that have been ongoing on various social media sites. I believe AI will fundamentally restructure universities. That's not reflected here. The focus in this article is more about "should we use generative AI in our teaching?". For me, the question is "which parts of universities will no longer be relevant as AI functionality advances?".
Turnitin is one of the edtech companies likely to be significantly impacted by generative AI. They've created a resource page and an annotated bibliography (though these will get out of date quickly). Disclaimer: together with about a dozen other academics, I have had one discussion with Turnitin on their AI strategy, with another planned in a few months.
In the future, we'll take chatbots, not courses. Specialized, domain-trained bots such as BloombergGPT are the future.
How to use AI to do practical stuff: We will be increasingly working and thinking with AI. This is a basic intro to getting started with practical uses now.
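As a concrete starting point, here's a minimal sketch of calling a chat model from Python - assuming the openai package (pre-1.0 API) and an API key in your environment; the model choice and prompts are illustrative only, not drawn from the article above.

```python
# Minimal sketch: ask a chat model a practical question.
# Assumes `pip install openai` (pre-1.0 API) and OPENAI_API_KEY is set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a helpful teaching assistant."},
        {"role": "user", "content": "Draft a three-point outline for a lesson on photosynthesis."},
    ],
)

print(response["choices"][0]["message"]["content"])
```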
AI Development
Stanford just released its annual AI Index report. Education receives a chapter, though mainly on education for AI, with a focus on K-12 education and data literacies. This report and State of AI are the two must-reads for tracking aggregate trends.
A Survey of Large Language Models - this is a good overview. Figure 1 is informative, but is one of the more ridiculous timeline images I've seen.
Italy bans ChatGPT. We're busy banning TikTok. Might as well add more to the mix.
GPT-4 is a reasoning engine, not a knowledge base. Think Wolfram Alpha vs Google. An interesting point made in the article: people who use "container apps" to read articles and track their own thinking (Notion, Obsidian) will have a contained knowledge resource (a personal knowledge graph) once AI advances to more personally trained models. As always, better data = better models.
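To make the "contained knowledge resource" idea concrete, here's a minimal retrieval sketch - assuming scikit-learn and a folder of exported Markdown notes; the notes/ path and the question are hypothetical, and the final step to an LLM is left as a comment rather than any product's real API.

```python
# Minimal sketch: retrieve your own notes as context for a reasoning engine.
# Assumes `pip install scikit-learn` and a ./notes/ folder of .md exports (hypothetical).
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = [p.read_text() for p in Path("notes").glob("*.md")]

vectorizer = TfidfVectorizer(stop_words="english")
note_vectors = vectorizer.fit_transform(notes)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k notes most similar to the question."""
    q = vectorizer.transform([question])
    scores = cosine_similarity(q, note_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [notes[i] for i in top]

context = "\n---\n".join(retrieve("What have I noted about assessment and AI?"))
# The retrieved context would then be prepended to the question and sent to a
# model via an LLM API - better data in, better answers out.
```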
Computation used to train notable AI models. Incredible to think that AlphaGo Lee in 2016 was trained using 1.9 million petaFLOP... compared with GPT-4 at roughly 22 billion.
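For a sense of scale, a quick back-of-the-envelope comparison using the two figures above:

```python
# Back-of-the-envelope: training-compute growth from AlphaGo Lee to GPT-4.
alphago_lee_petaflop = 1.9e6  # 2016
gpt4_petaflop = 22e9          # 2023 (estimate)

print(f"{gpt4_petaflop / alphago_lee_petaflop:,.0f}x")  # roughly 11,600x in seven years
```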
Things to worry about (well, things to confront and change)
AI could replace the equivalent of 300m jobs globally. So says Goldman Sachs. The report is here.
Duolingo releases its statement of responsible AI use. JAMA has a piece on this as well, calling for an AI code of conduct in medical settings. Ethics, transparency, accountability, reliability, fairness, public conversation - those seem to be the integrative concepts across these types of urgently needed conversations.
A large group of random humans, including billionaires and scientists, has called for a pause on large language model development. It's unlikely to have any effect. Andrew Ng states why: "There is no realistic way to implement a moratorium and stop all teams from scaling up LLMs, unless governments step in."
Several other organizations and self-described important people have gotten into the game in response, advocating for their own views. Google, Microsoft, and others have over the last few years articulated their visions of AI ethics. None really matter. The key issue is, as noted in the video above, perverse competitive rewards that ensure we will run this AI game until its final end, even if we know it will cause harm. See why here - we can always blame Moloch.
Reminder: Clearview likely has you in its database, and you can be identified by virtually any agency that uses their service. And the service is frequently used.