AI Commons #3
Welcome back to the AI Commons. This fortnightly newsletter is funded by the Institute for Teaching and Learning and run by Mark Carrigan and Eva Parr. We're exploring how colleagues are integrating AI into teaching and research, with the aim of supporting learning across the university so we can approach this change in thoughtful, creative and collegial ways. If you'd be interested in contributing or have suggestions for future topics, please contact mark.carrigan@manchester.ac.uk
📝 In this issue, Wennie Subramonian (Chemical Engineering) talks about her experience of using AI tools to help students prepare for laboratory work:
Laboratory work is central to chemical engineering education, yet many students struggle with the transition from lectures to hands-on experimentation. Issues such as dense terminology, unfamiliar equipment, and unclear links to industrial practice create barriers, particularly for students with limited lab experience or for whom English is not a first language. Our student co-creation project explored how generative AI can lower these barriers and support more inclusive learning. Developed through a staff-student partnership, the initiative focused on using AI tools to improve preparation for a first-year experiment in pipe flow design. Pre-laboratory activities were designed, trialled through focus groups, and refined based on student feedback.
The project followed a three-stage process. First, student feedback from surveys and discussion groups identified three main challenges: unfamiliar terminology, difficulty visualising apparatus, and limited understanding of industrial relevance. Next, a student-academic partnership designed AI-supported pre-lab activities that compared multiple AI platforms and reliable learning sources. Each task directly addressed an issue identified by students. Finally, focus groups were used to evaluate impact and refine materials for wider use across laboratory units.
In practice, students used several AI tools to support different aspects of their preparation and learning. For language support, students tested AI-based translation and text simplification tools to make laboratory materials more accessible; DeepL was preferred because it preserved the formatting of translated lab manuals, while ChatGPT and Claude were particularly useful for languages not supported by DeepL, such as Hindi and Urdu. To improve understanding of the experimental setup, students used AI to generate schematic diagrams and clarify the functions of equipment, with Perplexity AI producing the most accurate visualisations and retrieving reliable supporting documentation.
AI was also used to help students connect the laboratory activity to real-world engineering practice, such as flow measurement and valve design in industry; ChatGPT provided concise summaries of industry applications, whereas Claude tended to offer more detailed explanations alongside its industry examples. Finally, students used AI to help interpret experimental results by generating example graphs and illustrating common anomalies, allowing them to practise troubleshooting before the laboratory session; ChatGPT produced the most reliable visual outputs, which helped increase students’ confidence prior to carrying out the experiment.
Three key lessons emerged for colleagues exploring generative AI in teaching. First, co-creation with students ensures AI tools address real learning needs rather than chasing novelty. Second, comparing multiple AI platforms builds critical AI literacy by highlighting their respective strengths and limitations. Third, positioning AI as a tool for accessibility, not just efficiency, enhances inclusion and educational value. Overall, this student-centred model offers a practical and transferable approach for integrating generative AI into laboratory education.
👋 Building an academic culture for Copilot 365
There are now over a thousand colleagues using Copilot 365 across the university. It's a significant change in our digital environment, and it will be rolled out to everyone later this year. The fact that Microsoft calls everything "Copilot" creates some ambiguity about what exactly will be changing. For this reason, we'd love to hear from colleagues who are already using it. What do you like? What don't you like? What do you think others need to know? We're particularly interested in how colleagues are thinking about this in relation to students. How do we ensure teams, schools and departments are ready for students getting access to this functionality? What guidance might they need? If you'd like to discuss your experiences in a future issue of the AI Commons, please get in touch: mark.carrigan@manchester.ac.uk
☕️ Anticipating the AI-integrated University, April 29th 10am to 4pm
The rise of generative AI has brought with it many challenges, alongside new opportunities for creativity and productivity. For universities, in particular, there are well-rehearsed questions about its impact on foundational ideas of academic integrity, authorship, and intellectual exchange. The speed at which the generative AI landscape changes can make it difficult to keep up with the present, let alone imagine the future of artificial intelligence in the university. The ideas and models that we, as lay people, use to make sense of these new technologies quickly become outdated, limiting our capacity to appreciate the practical and ethical challenges each new frontier model presents. Universities are often fire-fighting, coping as best they can with what can seem like an unending set of changes.
This session aims to provide a space to address this challenge, offering a chance to begin thinking together about how LLMs and other generative AI systems might change over time, and to imagine new ways of thinking about the relationship between generative AI and society beyond the well-established binary of 'innovation' and 'regulation'. How will existing generative AI systems age or decay? What past features of academic practice might take on new importance? How might pedagogic relationships change? How might AI agents contribute to their own ethical development? And, in the event of a market crash, what might happen to the data infrastructures left behind?
If you’d like to register for the workshop, please contact mark.carrigan@manchester.ac.uk by March 27th.
💡 Three questions to ask yourself when using AI
It can be difficult to know where the line is between using AI thoughtfully and over-relying on it. These three questions might help:
1. Would it offend you if a colleague did this? If so, don't do it.
2. If you're using AI to do something faster, can you explain why the speed matters?
3. Are you actually thinking about what you're doing as you work with it, or just accepting what it gives you?
There's no universal answer to what counts as responsible use, but asking these questions honestly is a good place to start.
We hope you enjoyed this third issue of the AI Commons. If you found it valuable, would you consider forwarding this newsletter to your colleagues? Comments, suggestions and questions are always welcome.