AI is Sentient, Worst AI Ever, Foundation Models, Race, Special Issue
Hi,
It's been a busy week in the world of AI. I try to constrain this newsletter to AI advancements that will have a practical impact on knowledge processes such as learning, sensemaking, and decision making. A few interesting developments:
The big news this week was a Google engineer declaring that an AI chat agent had achieved sentience. Responses on social media were quick and brutal. The insanity starts here, but see also AI Theatre, the Eliza effect, the futility of AI consciousness discussions from late May, and this image from a few years ago. Gary Marcus responds: "Nonsense. Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language."
The Economist has an article on huge models stating that 80% of AI work is now in foundation models. What are foundation models? They are a base that can be applied in new settings with limited "tuning" required. Figure 2 in this influential article (from 2021, but "influential" in AI is now measured in months, not years) captures how foundation models are repurposed for multiple uses. What are the foundation models in education? Who has a large enough data set for this type of model building? What role will schools and universities play in their development?
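To make the "limited tuning" point concrete, here's a minimal sketch of adapting a pretrained model to a downstream task with the Hugging Face libraries. The base model (bert-base-uncased) and dataset (IMDB) are illustrative choices of mine, not anything from the Economist piece; the pattern, not the specifics, is what matters.

```python
# A minimal sketch of the foundation-model pattern: reuse expensive pretraining,
# pay only for a small adaptation step. Model and dataset are assumptions for
# illustration, not from the article.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

base = "bert-base-uncased"  # the pretrained "foundation"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# A small task-specific sample stands in for the downstream setting.
dataset = load_dataset("imdb", split="train").shuffle(seed=0).select(range(1000))
encoded = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=encoded,
)
trainer.train()  # only the adaptation step; the pretraining is reused as-is
```

The design point is the asymmetry: pretraining is done once, at enormous cost, by whoever holds the data and compute; adaptation is cheap enough for almost anyone. That asymmetry is exactly why the questions above about who builds the foundation models for education matter.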
The worst AI ever. A model trained on 4chan and released back into its forums. It's a good video detailing how many AI models are backward-facing mirrors of ourselves (capturing bias we might not be aware of), and how interactions between humans and machines unfold in conversational spaces: the little things that "feel off" to humans.
AI is rife with problems. Lack of nuance is one. The experience of disabled students under the watchful gaze of AI (which I assume has been trained on an able-bodied population) is yet another example to add to the long list of AI achieving the opposite of its intended goals. Or, more accurately, displaying harmful bias.
GAO says Congress should review how predictive analytics are used in higher education.
The machine knows race even when humans don't. "Even when you filter medical images past where the images are recognizable as medical images at all, deep models maintain a very high performance. That is concerning because superhuman capacities are generally much more difficult to control, regulate, and prevent from harming people" and "This paper should make us pause and truly reconsider whether we are ready to bring AI to the bedside [healthcare]." There are many educational settings where the same pause is warranted.
EDUCAUSE released a special issue on AI in education.
We're gearing up for our 3rd annual AI and education conference. The first two years are archived here. If you or someone you love would like to a) assist in organizing or b) sponsor the event, please let me know.
Thanks to Pete and Shane for the links. Feel free to send interesting resources my way.