SAIL: All AI, In Everything, Learning Co-Pilot
Welcome to Sensemaking, AI, and Learning (SAIL). Education as a system has been slow to respond to the rather significant opportunities of generative AI (GAI). I'm confused as to why. Numerous universities have now set up some variation of committees to unpack GAI through the lens of assessment, impact on teaching, and education in general. None that I've heard about have tackled it from a "what is now possible" angle. The dialogue appears to be about protecting existing systems and processes. We need people to think beyond their positions and roles and consider institutional impacts and opportunities.
I've been thinking about parallels with digital education. In the late 1990s, as some universities started adopting computers for instruction, it quickly became obvious that the education system was in for change. Some schools, such as Penn State, UTA's Nursing program, and ASU, built out focused online and digital learning support infrastructure, including learning designers to help faculty move online. Many universities ignored the stunningly obvious impact that the online environment would have on learning. Universities that lacked the vision to develop their own internal capabilities to create and deliver online learning paid the price for inactivity and had to turn to online program managers (OPMs). The cost of poor vision and leadership? Online laggard universities are now paying something in the range of 40-70% of tuition to OPMs in exchange for developing and marketing their online courses.
Here's my question: What types of AI services will universities that fail to create an AI vision for teaching and learning today have to buy in the future?
AI in Education
A useful list of AI tools for creating educational content.
This article will (or should) be one that is referred to throughout the year: The Age of AI has Begun. Bill Gates covers a number of sectors that will be impacted. Regarding education: "I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you’re losing interest, and understand what kind of motivation you respond to. It will give immediate feedback."
I believe chatbots understand part of what they say. This is a good video on the topic. We understand in systems, and ChatGPT is (or can be) part of our knowledge system.
AI Technology
The theme of the week is adding AI to roughly every technology.
Adobe launches Firefly, a generative AI tool: "Firefly will mix the power of our applications with the promise of generative AI in ways that empower you to express your creative ideas"
Bing launches an image creator.
Canva goes hard with generative AI
Google launches its ChatGPT competitor Bard - unusually late for a company used to leading.
OpenAI launches ChatGPT plugins. I mean, seriously. What does it mean? Essentially, an app store for ChatGPT. Think personal AI assistant. Or, in education, the foundation for a Learning Co-Pilot. Or edubot. Or something like that. It's getting nuts out there, y'all. How big are ChatGPT plugins? Big. Like big big.
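Since "Learning Co-Pilot" keeps coming up, here's a minimal sketch of what one might look like built directly on the ChatGPT API. This is an illustration under stated assumptions, not anyone's actual product: the openai Python client and the gpt-4 model name are real, but the tutoring prompt, the tutor_reply helper, and the course scenario are hypothetical.

```python
# Minimal sketch of a "Learning Co-Pilot" on top of the ChatGPT API.
# Assumptions: openai Python package installed, an API key available,
# and access to the gpt-4 chat model. Prompts and names are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: supply your own key

SYSTEM_PROMPT = (
    "You are a learning co-pilot for an undergraduate statistics course. "
    "Ask what the learner already knows, explain concepts step by step, "
    "and give immediate feedback on their answers."
)

def tutor_reply(conversation):
    """Send the running conversation to the model and return its next turn."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + conversation,
        temperature=0.3,  # keep explanations focused rather than creative
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    conversation = [
        {"role": "user", "content": "Can you explain what a p-value is?"}
    ]
    print(tutor_reply(conversation))
```

The point of the sketch is how little scaffolding is involved: the pedagogy lives almost entirely in the system prompt and the conversation history, which is exactly why plugins (and whatever institutions build on them) matter so much.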
GPT-4 launched last week. Here's a good summary of the LLM, notably around its early "no brakes" release and the process of making it more palatable to a broader audience.
See also: Generative AI wars.
Mozilla is launching a trustworthy and open AI initiative.
The really important stuff
Ok, it's time to freak out about AI: "If an LLM, by emulating our patterns of speech, can manifest some of our beliefs, couldn’t it also manifest our motivations—like seeking love, or seeking respect, or seeking power?"
Six Human-Centered Artificial Intelligence Grand Challenges: "We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical, fair, and enhance the human condition."
Oof. "Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work
tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their
tasks impacted"Sparks of Artificial General Intelligence: Early experiments with GPT-4 . Gary says no...well, actually he says this could lead to the death of science "We must demand transparency, and if we don’t get it, we must contemplate shutting these projects down."