SAIL: Data Collection, Ethics, Community
Welcome GRAILE members!
This weekly email will focus on sensemaking, AI, and learning. The intent is systemic change and transformation. Today the language driving that change is AI. In the past it has been some variation of social media, Web 2.0, 4th industrial revolution, and so on. In the future it likely will have a metaverse angle. The core issue facing institutional leaders is one of aligning knowledge institutions, such as schools and universities, with the learning opportunities (affordances) made available by digital networks.
One of the driving challenges for AI in learning is access to quality data. An early promise of platforms such as Coursera was the opportunity to use data from millions of learners to gain insight into educational processes and how to better design and deliver learning. Unfortunately, these platforms largely mimic existing classroom settings, so their data only yields insights into current pedagogical models. In a world where calls for innovation are the norm, most of the data needed for future innovation is held outside of universities and schools. Big tech is the primary holder of this data; secondary holders include Instructure and Coursera. The big questions around key areas in the development of AI innovations, such as learner profiles and computed curriculum, relate to who keeps and controls the data sets used in developing next generation learning systems. And as importantly: will those data and models be open and available for scrutiny by external researchers?
A few topics of note this week:
Virtual learning and tracking: “Of the 164 products reviewed across 49 countries, Human Rights Watch found 146 (89%) appeared to engage in data practices that ‘risked or infringed on children’s rights.’” Based on this report.
AI and AI ethics are increasingly mainstream. Time recently included Timnit Gebru in its TIME100 list. After her public and acrimonious departure from Google, she now focuses on “justice-oriented technology design” and increasing the diversity of voices in AI.
A Robot Paints the Queen: “Ai-Da was created by a team of programmers, roboticists, art experts and psychologists. The robot was completed in 2019, and is updated as AI technology improves. Last month, she held her first solo exhibition at the 2022 Venice Biennale.” Not everyone is impressed, because Ai-Da doesn’t possess “independent consciousness”.
Implementing next generation privacy and ethics research in education technology: “This paper describes a collaborative research project that seeks to overcome the technical and procedural challenges of running a data-driven collaborative research project within an agreed set of privacy and ethics boundaries.” Expect this to grow in importance. Researchers, and edtech companies, need data to build models, conduct research, and, in some likely distant future, drive innovation of the education sector through AI. Getting there requires much greater attention to ethics and the social dimensions of active surveillance.