SAIL: Escalating Cognition, AI Lies, AGI, The Law is Coming for GAI
Hi all - welcome to another edition of Sensemaking, AI, and Learning (SAIL) - a weekly look at AI developments that matter for education.
I've had several discussions over the last few weeks about the utility of the "is AI sentient?" question. Two common thought experiments used to provoke dialogue on what it means to be aware or to know are the Chinese Room Argument and Mary's Room. Both try to unpack whether there is something more of substance in human knowledge work than the transactions of "read and reply" (Chinese Room) and "describe but not experience" (Mary's Room). The underlying assumption of each is that something more happens when we communicate, or that a type of qualia exists when we experience the world as brains in bodies. I'm sympathetic to this argument. However, I'm more motivated by what an object moves us to do than by what it is in itself. A book is hardly a sentient entity, yet as I read and interact with it, it can change my life. To this end, my interest is in the utility of anything that motivates humans toward cognition. In this view, AI doesn't need to be sentient or to know and experience the world as we do - it just needs to interact with us in deepening layers of cognition, motivating us toward greater knowledge.
AI and Learning
What happens when AI competes with and outperforms humans in many cognitive tasks? What should we teach in our classrooms? GRAILE is hosting an excellent webinar on this topic: Cognitive Escalation. Free registration.
I've referenced this article more frequently than any other over the last year: "In this paper, we describe how cognitive psychologists can make contributions to [explainable AI] XAI. The human mind is also a black box, and cognitive psychologists have over 150 years of experience modeling it through experimentation." On the surface it seems naive, but the logic appears solid: apply experimental methods, well developed for understanding the human mind, to understanding the inner workings of neural networks.
OpenAI is launching developer tools. For the low cost of "$78,000 for a three-month commitment or $264,000 over a one-year commitment." I'd sign up in a heartbeat if I were leading a university. Get in the game early and get in aggressively. My logic is simple: if what we are now seeing is only the early stage of consumer-facing AI, an investment of a few hundred thousand dollars will produce the capacity, if accompanied by vision, to leapfrog the innovation game. I'll make small monetary bets with anyone willing, to the effect that "those universities that now invest heavily in AI will be well ahead of peers that don't in terms of students, completion, and quality of learner experience in five years."
General AI & Technology
Are Generative AI systems legally responsible for their outputs? Possibly, according to a recent Supreme Court discussion: "as search engines begin answering some questions from users directly, using their own artificial intelligence software, it’s an open question whether they could be sued as the publisher or speaker of what their chatbots say."
Want AI to work with your content (say, a blog or other writing)? Ask.AI is one of what will be many such options launched this year. It will be nice once we have a simple tool that lets us easily analyze and interact with the content of specific conferences (such as LAK, for those of you who will be in Arlington, TX in a few weeks).
We're going to have a messy relationship with AI - in medical diagnosis, it can get the right answer while making up the citations to support it.
OpenAI decides to post Silicon Valley Stretch Goals: Planning for AGI "AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity. On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right." Given that the potential risks are so great, the parameters and scope of this work should hardly be left to the watchful self-interest of a single company.
Meta releases an LLM "Even with all the recent advancements in large language models, full research access to them remains limited because of the resources that are required to train and run such large models. This restricted access has limited researchers’ ability to understand how and why these large language models work, hindering progress on efforts to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation. Smaller models trained on more tokens — which are pieces of words — are easier to retrain and fine-tune for specific potential product use cases. We trained LLaMA 65B and LLaMA 33B on 1.4 trillion tokens. Our smallest model, LLaMA 7B, is trained on one trillion tokens." The release drew surprisingly positive general feedback online. Meta is more active in the AI space than is often recognized.
Effects of GAI
Not surprisingly, tools that make content creation easy will dramatically undermine existing creative work. Clarkesworld is one of the first casualties: "The science fiction and fantasy magazine Clarkesworld has been forced to stop accepting any new submissions from writers after it was bombarded with what it says were AI-generated stories."
How much does it cost to build an LLM? Not cheap, according to this source (which itself offers no citation): somewhere in the range of $4M to $27M.
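Since the linked estimate doesn't show its work, here is a minimal back-of-envelope sketch of where numbers in that range might come from. It uses the common approximation that training compute is roughly 6 × parameters × tokens FLOPs; the hardware and price figures (A100-class GPUs, ~40% realized utilization, ~$2 per GPU-hour) are my illustrative assumptions, not figures from the source. Plugging in the LLaMA 65B configuration quoted above (65B parameters, 1.4T tokens) lands near the low end of the quoted range - and real projects add failed experiments, data acquisition, and staff on top of raw compute.

```python
# Back-of-envelope LLM training cost estimate (all figures are assumptions).
#   - training compute ~= 6 * N * D FLOPs (common approximation)
#   - A100-class GPU: ~312 TFLOPS peak (bf16), ~40% realized utilization
#   - cloud price: ~$2 per GPU-hour

def training_cost_usd(params: float, tokens: float,
                      peak_flops: float = 312e12,
                      utilization: float = 0.40,
                      usd_per_gpu_hour: float = 2.0) -> float:
    total_flops = 6 * params * tokens
    effective_flops_per_sec = peak_flops * utilization
    gpu_hours = total_flops / effective_flops_per_sec / 3600
    return gpu_hours * usd_per_gpu_hour

# e.g. a LLaMA-65B-scale run: 65B parameters on 1.4T tokens
print(f"${training_cost_usd(65e9, 1.4e12):,.0f}")  # roughly $2-3M under these assumptions
```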
Framing Change
Zoom out, and the scope of change is simply stunning.