AI Commons #2
Welcome back to the AI Commons. If you missed our first issue, you can read it here. As a reminder, we're a project funded by the Institute of Teaching and Learning, sharing practical insights about AI in education from colleagues across the university.
Each issue features a case study, recommended reading, and prompts to consider in your own AI practice. If you'd be interested in contributing, or have suggestions for future topics, please contact mark.carrigan@manchester.ac.uk
📝 In this issue, Pauline Prevett (Manchester Institute of Education) reflects on what became possible when she started collaborating with an AI assistant to build research tools:
Six months ago, I couldn't write any code. Today, I've co-produced over 150,000 lines of interactive educational software comprising three digital textbooks and a suite of qualitative analysis tools, all in the Beta testing phase and heading toward commercialisation. No one could be more surprised than me. My collaborator? Claude, an AI assistant. This isn't AI replacing human expertise. It's what becomes possible when theoretical vision meets computational capability.
Our process centres on what I call the "discuss first" protocol. Before any code gets written, we talk through methodology. What should this tool accomplish theoretically? How do we preserve researcher interpretive control? Only when the foundations are solid do we implement. This keeps me, the methodologist, in the driver's seat. I work through prompts, explaining what I need analytically rather than technically. Claude generates code; I test it through actual use and browser console verification. When functions produce unexpected outputs (an 8500% tension calculation, for instance), we interrogate together until we find the bug. Increasingly, I make simple code amendments myself: cut-and-paste fixes, parameter adjustments, button names and colours. I am "vibe coding". It's Collins Dictionary's Word of the Year for 2025.
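To give a flavour of the kind of bug this testing catches, here is a hypothetical sketch (the function and variable names are illustrative, not from the actual CISA code): an impossible figure like 8500% often comes from converting to a percentage twice, once when the value is computed and again when it is displayed.

```javascript
// Hypothetical example: two coded stances scored on a 0-1 scale,
// with the tension between them reported as a percentage.

// Correct version: convert the 0-1 gap to a percentage exactly once.
function tensionPercent(stanceA, stanceB) {
  const gap = Math.abs(stanceA - stanceB); // 0..1
  return gap * 100;                        // 0..100
}

// Buggy version: the stored value is already a percentage,
// but the display code multiplies by 100 a second time.
function tensionPercentBuggy(stanceA, stanceB) {
  const gapPercent = Math.abs(stanceA - stanceB) * 100; // already 0..100
  return gapPercent * 100;                              // 5000 instead of 50
}
```

Printing both results in the browser console makes the double conversion obvious, which is exactly the kind of check a non-programmer can learn to run.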
What makes this collaboration productive is genuine theoretical depth meeting computational capability. I don't just want tools that "work"; I want tools embodying methodological principles. When I explain why stance must be independent from construct position, or why tensions occur between critical incidents not within them, I'm teaching Claude my framework. Apart from having my own programmer on call 24/7, Claude offers me safety. No question is ever too small. By sticking with it, revision upon revision, I have built tools I feel proud of, but this is not a soft option. At several points I could have given up when my technical skills felt inadequate. One time I was in despair that I couldn't cut and paste simple coding instructions in Notepad. Gradually that became easy; I began to identify where functions begin and end and when the "divs" looked off. Recently, I learnt to work with Claude embedded in VS Code, yet another game-changer.
The CISA (Critical Incident Semantic Analysis) suite grew from a spreadsheet idea into a comprehensive ecosystem. I was excited when I tentatively asked whether Claude could make that spreadsheet writeable, and yes, it was delivered. From there my demands grew ever more ambitious, and I amassed interactive workbooks, often using them in my classes instead of PowerPoints. In time I progressed to making research-grade analysis tools. For researchers curious about AI collaboration: the technology isn't magic, and it won't replace your expertise. But if you're willing to discuss first and implement second, you might be surprised what you can build together.
I'd love to continue this conversation: pauline.prevett@manchester.ac.uk
👋 Preparing for Microsoft Copilot
As you may be aware, Microsoft Copilot is being rolled out at the University of Manchester soon. There are actually two distinct packages with the same name: Copilot Chat (a chatbot like ChatGPT) and Copilot 365 (automation built into Office software). We've had Copilot Chat since last year, and many people find it most convenient to access through Microsoft Teams. If you haven't tried it, it's worth exploring. I was personally sceptical of the previous version, but it's been hugely improved: it now operates at a level close to ChatGPT, and actually uses OpenAI's GPT-5.2 under the hood.
Copilot 365 will be the bigger change for many colleagues, as it introduces automated functionality across the full range of Office software. There's good evidence this can support productivity, particularly with routine administrative tasks. Its ability to safely and conveniently use resources within our ecosystem can be enormously helpful. But it's easy to see how it could raise challenges for teaching and learning, given how proactively it offers to complete tasks on the user's behalf. It's worth understanding what this software does so we can think through the implications for our students and advise them when they ask.
This will be a big theme in the AI Commons over the coming months, but this video from Microsoft gives a quick introduction to what will be switched on for colleagues and students later this year.
💭 Something to think about:
There is a range of ways in which AI can now operate through a web browser, for example OpenAI's Operator or Anthropic's Claude plugin for Chrome. These enable users to give the AI instructions which it then acts on in real time through real websites. This video shows an agentic browser being used to autonomously take a quiz on Canvas, masquerading as a student. This is a huge shift in what AI can do, and we urgently need to understand how these tools might be used at the University of Manchester. There's more reading about this in the links below.
📚 Recommended reading:
We hope you enjoyed this second issue of the AI Commons. If you found it valuable, would you consider forwarding this newsletter to your colleagues? Comments, suggestions and questions always welcome.