SAIL: Robots, EdTech, Future of Ed, Expertise
Robots
When most people think of advanced technologies and AI, some variation of robots or physical technologies (rather than algorithms and large data sets) takes centre stage. Google recently shared its robot research that begins to work with the complexities and nuance of human language: "This effort is the first implementation that uses a large-scale language model to plan for a real robot. It not only makes it possible for people to communicate with helper robots via text or speech, but also improves the robot’s overall performance and ability to execute more complex and abstract tasks by tapping into the world knowledge encoded in the language model."
Gary Marcus wonders if this work should be done at all and counters: "It’s not just that large language models can counsel suicide, or sign off on genocide, or that they can be toxic and that they are incredibly (over)sensitive to the details of their training set - it’s that if you put them inside a robot, and they misunderstand you, or fail to fully appreciate the implications of your request, they could cause major damage."
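To make the planning idea a bit more concrete, here is a purely illustrative sketch of the pattern: a language model scores how useful each candidate skill is for an instruction, the robot's own model scores whether the skill is physically feasible, and the combination decides what to do next. The skill names and scoring functions below are invented stand-ins, not Google's actual implementation.

```python
# Toy illustration of language-model-guided robot planning.
# The "language model" here is a hypothetical stub; a real system would
# query an actual LLM and combine its score with the robot's learned
# estimate of whether the skill can succeed in the current scene.

CANDIDATE_SKILLS = [
    "pick up the sponge",
    "go to the kitchen counter",
    "wipe the spill",
    "pick up the apple",
]

def llm_usefulness(instruction: str, skill: str) -> float:
    """Stand-in for an LLM scoring how useful a skill is for the instruction."""
    overlap = set(instruction.lower().split()) & set(skill.lower().split())
    return len(overlap) / max(len(skill.split()), 1)

def robot_affordance(skill: str) -> float:
    """Stand-in for the robot's estimate that the skill will physically succeed."""
    return 0.9 if "pick up" in skill or "go to" in skill else 0.6

def choose_next_skill(instruction: str) -> str:
    # Combine language usefulness with physical feasibility and pick the best.
    return max(
        CANDIDATE_SKILLS,
        key=lambda s: llm_usefulness(instruction, s) * robot_affordance(s),
    )

if __name__ == "__main__":
    print(choose_next_skill("please wipe the spill on the counter"))  # wipe the spill
```

Marcus's worry maps directly onto this sketch: a misread instruction changes the scores, and the robot acts on the wrong skill in the physical world.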
The Market
While AI is a future thing (though already embedded in many core programs that we interact with on a daily basis), edtech is here and now. Notably, it has attracted enormous investment over the last few years, though that investment is now slowing somewhat. I see edtech as the thin edge of the wedge. The data required to build the triad of true educational innovation (learner profiles, computed curriculum targeting individual profiles, and ongoing labour/skills market analysis to ensure relevant skills and knowledge are taught) will come from today's edtech. A group of us have been exploring AI and learning grants, and the focus always returns to the quality of data.
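As a thought experiment, the triad can be sketched in a few lines of code. This is a rough, assumption-laden sketch with invented field names, not a working system; the hard part in practice is exactly the data quality those grant conversations keep returning to.

```python
# Illustrative-only sketch of the triad: learner profiles, labour-market
# signals, and a computed curriculum that connects the two.
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    learner_id: str
    mastered_skills: set[str] = field(default_factory=set)
    goals: list[str] = field(default_factory=list)

@dataclass
class LabourMarketSignal:
    skill: str
    demand_score: float  # e.g. normalised job-posting volume for the skill

def compute_curriculum(profile: LearnerProfile,
                       market: list[LabourMarketSignal]) -> list[str]:
    """Target in-demand skills the learner has not yet mastered."""
    gaps = [s for s in market if s.skill not in profile.mastered_skills]
    return [s.skill for s in sorted(gaps, key=lambda s: s.demand_score, reverse=True)]

# Example: a learner who already knows Python is pointed at adjacent skills next.
profile = LearnerProfile("learner-001", {"python"}, ["data analyst"])
market = [LabourMarketSignal("sql", 0.8),
          LabourMarketSignal("python", 0.9),
          LabourMarketSignal("statistics", 0.7)]
print(compute_curriculum(profile, market))  # ['sql', 'statistics']
```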
AI and the Future of Education
AI has promised to transform many fields and many aspects of our daily lives. Healthcare was one such field, but the results have been sub-stellar. Beyond comically over-confident predictions, the challenges are at least partly systemic: "AI hasn’t lived up to the hype, medical experts said, because health systems’ infrastructure isn’t ready for it yet. And the government is just beginning to grapple with its regulatory role." Practical and successful implementations are not common - similar to what we're (not) seeing in education settings.
The US Office of Educational Technology has released a series of listening sessions in the form of presentations/panels that assess AI's anticipated impact on teaching, learning, assessment, and research.
Transparency
With most of the algorithms that direct our lives being the property of large corporations, ongoing concerns about fairness and trust are understandable. Governments are responding with calls for recognition of fundamental rights, and researchers and activists are focusing on bias and fairness.
China, in contrast, has taken the most aggressive approach that I've seen to date: "Late last week, Chinese regulators publicly shared details on 30 algorithms that power some of the country’s most widely used apps and websites, an unprecedented measure that marks a new escalation in Beijing’s years-long campaign to rein in the power of big tech."
Social Considerations
The last several years have drawn attention to the experiences of individuals who don't fit the traditional "average" profile in society, and education is certainly no exception. This "prison to college pipeline" is promising. If an individual's biggest failings can't be forgotten, the future is always tethered to the past.
The Democratisation of Expertise: "it doesn’t matter how smart the smartest people on Earth are if the general public puts its trust in a chatbot that was trained on data created by the general public."