SAIL: AI & Education Groundhog Day
Welcome to Sensemaking, AI, and Learning (SAIL), a regular look at how AI is impacting education and learning.
Over the last year, after dozens of conferences, webinars, panels, workshops, and many (many) conversations with colleagues, it's starting to feel like higher education, as a system, is stuck in an AI Groundhog Day loop. I haven't heard anything novel generated by universities. We have a chatbot! Soon it will be a tutor! We have a generative AI faculty council! Here's our list of links to sites that also have lists! We need AI literacy! My mantra over the last while has been that higher education leadership is failing us on AI more dramatically than it failed us on digitization and online learning. What will your universities be buying from AI vendors in five years because they failed to develop a strategic vision and capabilities today?
AI & Education
Assessing AI tools is a key issue that universities face when buying AI tech. We've been playing with a rubric to get started. Thoughts appreciated!
Reminder: our Empowering Learners for the Age of AI conference at ASU in December has its panels posted. Registration is open.
General AI Tech
Microsoft just announced development of its own chips to enable "building the infrastructure to support AI innovation".
Last month, Stability AI launched a new service - AI-generated music. Google just got in the game as well.
Textbooks are all you need: I've referenced this before, but I'm returning to it because it was a point of discussion at a conference in Boston yesterday. Models typically improve through more compute or a larger neural network. The authors here instead focused on the quality of the data. They used "textbook quality data" to train a model (in four days!) that outperformed existing open models, even though the model was 10x smaller and trained on a dataset 100x smaller. There is an important relationship between compute, network size, and data quality.
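To make the compute/size/data trade-off concrete, here is a back-of-the-envelope sketch using the common approximation that training compute scales as roughly 6 × parameters × tokens. The absolute parameter and token counts below are illustrative assumptions, not figures from the paper; only the 10x/100x ratios echo the claim above.

```python
# Back-of-the-envelope: why better data can substitute for scale.
# Uses the common rule of thumb that training compute C ~ 6 * N * D,
# where N = parameters and D = training tokens. Absolute numbers are
# illustrative; only the 10x / 100x ratios come from the discussion above.

def training_flops(params: float, tokens: float) -> float:
    """Approximate training FLOPs via the 6 * N * D rule of thumb."""
    return 6 * params * tokens

baseline = training_flops(params=10e9, tokens=1000e9)  # a 10x larger model trained on 100x more data
phi_like = training_flops(params=1e9, tokens=10e9)     # a smaller model trained on curated, high-quality data

print(f"Relative training compute: {baseline / phi_like:.0f}x")  # -> 1000x
```

On this rough arithmetic, the smaller, data-curated model costs about 1000x less training compute, which is why data quality is such a consequential lever.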
Llama is Meta's open source LLM. What are Meta's future intentions? A focus on multi-modality, safety, and community. The Llama models have an impressive 30 million downloads.
OpenAI made a range of announcements at its developer conference last week. GPTs and assistants were central to the offerings. Basically, in plain language, you can start to create tools that do things for you (summarize meeting notes, plan recipes, etc.) - see the sketch below. Uptake has been significant, so much so that OpenAI has had to pause new sign-ups.
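For readers curious what "assistants" look like in practice, here is a minimal sketch using the beta Assistants endpoints in OpenAI's Python SDK. The meeting-notes use case, model name, instructions, and sample content are illustrative assumptions, not anything from OpenAI's announcements.

```python
# Minimal sketch: a "meeting notes summarizer" built on OpenAI's beta Assistants API.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable.
import time
from openai import OpenAI

client = OpenAI()

# 1. Define the assistant in plain language (model name here is a placeholder).
assistant = client.beta.assistants.create(
    name="Meeting Notes Summarizer",
    instructions="Summarize meeting notes into decisions, action items, and owners.",
    model="gpt-4-1106-preview",
)

# 2. Start a conversation thread and add the user's notes (sample content is made up).
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Notes: budget approved; Dana to draft the AI-tool rubric by Friday.",
)

# 3. Run the assistant on the thread and poll until it finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# 4. Read the assistant's reply (most recent message first).
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```

The no-code "GPTs" announced at the same event wrap essentially this loop in a UI: plain-language instructions plus optional tools, attached to a conversation thread.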
Musk is back in the AI game with his own LLM: Grok.
AI & Impact
EY has launched an AI platform. You'll need to take some time to figure out what it actually does - it appears experience designers weren't actively involved. Apparently it's a unifying platform (human & AI & business objectives).
BCG released a report in September in which they emphasized the capability of genAI to raise everyone's performance...but the greatest impact was on current "bottom performers".
Those AI Doomers are getting pushback.
AI will impact 45% of the workforce in the next three years. What does it mean to be impacted? Automation, business process improvement, sector disruption (such as in coding), and a broad-scale transition to an "AI economy".