What I’ve learned from teaching introductory Excel, SQL, and Python workshops in the age of LLMs
The first time I taught an Excel workshop on formulas, I taught the functions I use every day, including VLOOKUP and its newer counterpart, XLOOKUP. After that class, a student approached me. “So, if we have XLOOKUP,” she said, “why do we need to know VLOOKUP?” This question instantly crystallized a “duh” moment for me: anyone learning Excel today really doesn’t need VLOOKUP. (Thank goodness!) Thanks to that student, I dropped it from the curriculum I teach — and I’m so grateful for her boldness in asking the question. It even broke me out of my old habit of using VLOOKUP in my own work. Good feedback is a gift.
Her question led me to reflect more widely on everything I teach. What else am I teaching out of habit, or because it's what I learned a decade ago? How can I offer students what they actually need right now?
The podcast ReThinking with Adam Grant poses the question: what's something you've rethought lately? My answer came immediately: when students work on exercises in my class, many of them have ChatGPT or another AI assistant open on their screens. Those tools didn't exist when I was learning these topics. I took that as my assignment to weave them into my curriculum: put myself in a new learner's shoes, try my class exercises as prompts, and see what results and issues students are likely to run into.
My biggest finding so far? When AI tools make mistakes, those mistakes tend to be fundamentally different from the kinds students make on their own. Here's an example:
Exercise: This dataset contains every line of dialogue from the TV show Seinfeld. Write a SQL query to find the number of lines of dialogue for each character.
Answer: Something like:
select character, count(*) as number_of_lines
from transcripts_data
group by character;
Human mistake: Forgetting the “group by” clause, or forgetting to add a comma between column names.
AI mistake: Making up a fictional table name and querying (probably correctly) from that instead of the real table.
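Concretely, the AI version of the mistake looks something like this (seinfeld_lines is a hypothetical stand-in for whatever table name the assistant invents):

select character, count(*) as number_of_lines
from seinfeld_lines   -- invented table name; the real table is transcripts_data
group by character;

-- SQLite, for example, rejects this with a message like: no such table: seinfeld_lines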
Both of these mistakes cause SQL to return an error message. For anyone learning to code, error messages can be scary and overwhelming. To solve them, you need a baseline understanding of what’s wrong and why. AI tools can also be used to help troubleshoot errors, but here’s the kicker: while they’re decent at helping resolve human errors, in my experience so far, it’s not as easy to get their help resolving errors they caused. From our example above:
Ask AI to help fix a human error (a forgotten comma): The AI will find a simple syntax issue like this right away and give you a corrected version.
Ask AI to help fix an AI error (a fictional table name): To identify the correct table name, the AI will need some prompting about the context of the datasets you're working with (both cases are sketched below).
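A sketch of both conversations, using our Seinfeld query (the broken query is illustrative, and the prompts are paraphrased):

-- Human error: missing comma between the two columns
select character count(*) as number_of_lines
from transcripts_data
group by character;
-- Paste this in with "why doesn't this run?" and the assistant will typically
-- spot the missing comma after "character" and return a corrected query.

-- AI error: the invented seinfeld_lines table from earlier
-- "no such table" usually won't resolve until you supply context the model
-- never had, e.g. "the only table in this database is transcripts_data."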
The second fix is not straightforward, especially for a beginner. Maybe it will be in a few months or years, as everyone gets better at writing prompts and the tools become more integrated into our working environments, but for now it isn't. More importantly, when working through both kinds of errors with students, I've noticed that AI-generated errors are often less intuitive for students to understand, and they require a little more coaching to resolve.
These are new kinds of mistakes for learners to navigate. Just as coding students have always needed debugging skills, learners now need a new set of skills for identifying and resolving mistakes made by AI. Instructors are doing learners a disservice if we don't equip them for that reality. I've done some digging into AI literacy frameworks that might help me teach this, and while I haven't yet found one that addresses this issue specifically (hmu if you know of one!), I do love the guidance in this excellent essay on learning to code in the age of LLMs: "in data science, our confidence in our conclusions depends almost entirely on our confidence in how our results were generated, and that can only come from reading, testing, and understanding our own code."
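In a SQL workshop, that kind of testing can start small: cross-check an AI-generated query against a number you can compute yourself. One sketch, using the exercise's transcripts_data table:

-- Total lines of dialogue, computed directly:
select count(*) from transcripts_data;

-- The same total, rebuilt from the grouped query. If the two numbers
-- disagree, something about the generated query deserves a closer look.
select sum(number_of_lines) as total_lines
from (
  select character, count(*) as number_of_lines
  from transcripts_data
  group by character
) as per_character;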
The mistake I made in my first round of teaching these workshops was sticking to my own old habits. My aim now, in my third semester of teaching, is to teach students how to approach and understand Excel, SQL, and Python with all the learning tools available to them, including AI assistants. I’m probably making plenty of new mistakes as I learn how to do that. If I’m lucky, one of my students will stick around after class to tell me what’s working, what’s not, and what they need.
Making new kinds of mistakes is where the learning happens, whether you’re the teacher or the student. Plus, it’s much more interesting than making the same mistakes over again. Here’s to more new mistakes 🥂