SAIL: ChatGPT, AGI, Ways of Being, Copyright
Happy 2023 everyone! This is the first issue of Sensemaking, AI, and Learning (SAIL) for 2023.
As noted in several emails last year, 2022 was all about generative AI. Another way to look at it is that last year was the first year where many people had a direct, conscious interaction with AI. We've all been experiencing AI behind the scenes, in our interactions on social media, navigating traffic, applying for credit, etc. AI, rather obviously, isn't a future technology. It's here. However, seeing AI in action, through ChatGPT, suddenly brought people's awareness of AI up to speed with the progress the field has made over the last decade.
In 2023, through GRAILE, we'll be expanding our reports and webinars to accelerate the support that we provide to our charter member universities. The reality of AI as a transformative force in education is now at the front of many conversations, including at the most senior levels of universities. Here are a few things of note over the last week:
AI and Education
Bryan Alexander recently hosted a discussion on what ChatGPT means for education. Transcript and recording here. It's interesting to note the ways that educators are gathering and arranging conversations to make sense of what this all means. For me, I'm focused on two things: 1. What are the longer-term systemic implications of AI? (spoiler: transformative and systemically disruptive) and 2. Who will own the AI infrastructure and models? (spoiler: not universities).
I'm sure you're all tired of screen grabs of what ChatGPT wrote. This one is still worth sharing: tables of impact produced by McKinsey vs. ChatGPT.
Tony Bates, a prominent academic in distance education, shares his reflection on playing with ChatGPT. He's scared.
How can AI support learning? An article posted late last year suggests three areas of impact: "AI can be used to overcome three barriers to learning in the classroom: improving transfer, breaking the illusion of explanatory depth, and training students to critically evaluate explanations."
Updating your syllabus for ChatGPT
General AI
Sam Altman, CEO of OpenAI (Copilot, DALL-E, ChatGPT), shares a thread on the anxieties we will experience as we move toward artificial general intelligence, making "ChatGPT look like a boring toy".
Stanford has a short self-promotional video on its researchers' contributions to AI. It spans the evolution from GOFAI to the current deep learning era.
Will ChatGPT kill Google search? No. Not yet. "Large language models are not databases. They are glommers-together-of-bits-that don’t always belong together...they are text predictors, turbocharged versions of autocomplete. Fundamentally, what they learn are relationships between bits of text, like words, phrases, even whole sentences. And they use those relationships to predict other bits of text." One redditor, however, says it's not an equivalent comparison: "I see it as an 'inspiration machine' as opposed to an 'information provider'."
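To make the "turbocharged autocomplete" framing concrete, here is a minimal, purely illustrative sketch of the predictive idea behind it: a toy bigram model that counts which word tends to follow which, then predicts the next word. Real LLMs learn vastly richer relationships over tokens with neural networks, but the underlying objective, predicting the next bit of text from the bits before it, is the same in spirit. The tiny corpus and function names below are my own illustration, not anything from the quoted piece.

```python
from collections import Counter, defaultdict

# Toy corpus: in a real LLM this would be a massive scrape of the web.
corpus = "the model predicts the next word and the next word follows the last word".split()

# Count how often each word follows each preceding word (a bigram table).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, if any."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> "next" (the most frequent follower in this toy corpus)
```

The point of the sketch is the distinction the quote draws: nothing here retrieves facts from a database; the model only reproduces statistical relationships between pieces of text it has seen.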
Intelligence
The comparison point of AI is human intelligence. I recently read James Bridle's book Ways of Being. He makes a compelling argument that humans may be uniquely intelligent in some domains. But these domains are surprisingly narrow, and ones that ignore the majesty of nature and complex ecosystems. Our intelligence is one of many intelligences. And AI is simply one more variant to add to the mix.
Emergent Analogical Reasoning in Large Language Models: "The recent advent of large language models — large neural networks trained on a simple predictive objective over a massive corpus of natural language — has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training on those problems. In human cognition, this capacity is closely tied to an ability to reason by analogy."
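For readers unfamiliar with what "zero-shot" means here, the sketch below shows the general shape of the probe: the model is handed a single letter-string analogy, in the style the paper studies, with no worked examples and no fine-tuning, and asked to complete it. This is a hedged illustration, not the authors' actual setup; the prompt wording, model name ("text-davinci-003"), and parameters are assumptions, and it uses the pre-1.0 `openai` Python client.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A single analogy problem, presented cold (zero-shot): no examples of how to solve it.
prompt = (
    "Complete the analogy.\n"
    "[a b c d] -> [a b c e]\n"
    "[i j k l] -> ["
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=10,
    temperature=0,  # deterministic output, so the answer reflects the model rather than sampling noise
)
print(response["choices"][0]["text"])
```

Whether a correct completion here counts as analogical reasoning, or just very good text prediction, is exactly the debate the paper wades into.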
Copyright/Ethics
LLMs require large data sets. These data sets are scraped from the web and include images for which artists and creators did not give explicit permission. Have I Been Trained lets you search prominent training datasets (LAION-5B and LAION-400M) to determine whether your work has been used.
Can AI-generated work enjoy copyright protection? No.