AI Commons #1
Welcome to the AI Commons. This is a project funded by the Institute for Teaching and Learning, with the aim of building our capacity to realise the potential of AI in education while navigating the risks that come with it.
The project is run by Mark Carrigan (Senior Lecturer, Manchester Institute of Education) and Eva Parr (Student Research Partner, ITL). We'll be sending out a newsletter every two to four weeks over the next 18 months. The aim is to help colleagues stay up to date, make connections, and learn from innovative practice taking place across the university.
Each issue will feature a case study, recommended reading and prompts to consider in your own AI practice. If you'd be interested in contributing or have suggestions for future topics, please contact mark.carrigan@manchester.ac.uk
📝 In our first case study, Erla Thrandardottir (Global Development Institute) reflects on using AI to help students interrogate the relationship between available sources and the claims we can make from them:
Designing a new PGT unit in development studies recently challenged me to introduce Rwanda's complex historical context within a two-hour lecture. I wanted students to acquire more than factual information: I wanted them to experience how the availability of information, or lack thereof, affects our perspective and shapes interpretations of development actors and their power relations.
I used Padlet's AI timeline tool to generate a framework for an in-class research activity. I anchored the framework in a Rwandan history book written by Rwandan authors, published in Rwanda, and not widely available in UK bookshops or online. This was a deliberate epistemic choice in a field where Western narratives can dominate what students assume is "the" story.
During the lecture I treated the Padlet as a small research collaboration. Students worked in groups of four, each assigned a historical period. Their task was to add sources to the shared timeline, including a correctly formatted bibliography. Using AI for research was expected, but I constrained the sources to academic publications, UN material, and Rwandan official documents. The aim was to make a pedagogical point: what we can "know" depends on what is available and legitimised as knowledge. As literature on Rwanda is uneven, this necessarily has implications for analysis.
On reflection, visualising the timeline was a welcome shift from more abstract discussions in prior weeks. Students weren't receiving a narrative but contesting what gets over-emphasised and why. What could have worked better was the optional "Reflection Post". If students struggled to find sources, they were to document what they were looking for, what they found elsewhere, or that they found nothing, and add a sentence on what that absence might indicate. With hindsight I'd say this was underutilised. I intend to make it mandatory next time and perhaps dedicate a separate tutorial to reflection, so the timeline becomes a springboard for deeper questions about the lure of false mastery.
More broadly, this activity reinforced a tension I suspect many colleagues recognise: AI can quickly produce structure, but it can also tempt us into mistaking coherence for meaning. One way of reinforcing good principles could be to habituate reflection in the use of AI, encouraging students to develop their academic judgement. Many of us are trying to navigate how to respond to AI when it is already embedded in students' practice. I've found the AI Commons a useful space to share situated practice and open up productive dialogue.
📚 Recommended reading:
IBM is tripling the number of Gen Z entry-level jobs after finding the limits of AI adoption
AI shatters the pretence that academic polish was ever anything but gatekeeping
Claims that AI can help fix climate dismissed as greenwashing
💭 Something to think about:
Rex McKenzie argues that AI might be making assessment more equitable, given how our existing model has prioritised stylistic polish over intellectual substance:
The traditional model fuses the what (idea) and the how (writing). Assessment unconsciously rewards code-fluency over intellectual originality. This systematically disadvantages anyone not already socialised into academic register: working-class students, first generation students, non-native speakers, those from non-Western educational traditions.
And here is where the class dimension becomes unavoidable: wealthier students have always had access to human “AI” – private tutors, professional editors, writing coaches. The university’s AI policy effectively punishes working-class students for accessing the free version of what wealth has always bought.
From link #3: https://wonkhe.com/blogs/ai-shatters-the-pretence-that-academic-polish-was-ever-anything-but-gatekeeping/
We hope you enjoyed this first issue of the AI Commons. If you found it valuable, would you consider forwarding this newsletter to your colleagues? Comments, suggestions and questions always welcome.