AI Is Not Efficient: It Means More Work for Teachers and Students
I want to expand on the claim I made in my last email: rather than offering efficiency gains for educational institutions, whatever potential AI has to improve teaching and student learning will demand that we slow down and teach with more rigor and attention than we ever have before.
At the widest level, it has been true for my entire teaching career that the work expected of students and teachers has, contrary to tech and edtech marketing, vastly expanded with the introduction of new technology into the classroom. In some senses this is obvious, and not as counterintuitive as it might seem. Using an LMS, Zoom, email, and digital research tools effectively in teaching requires far more extensive training than teachers once needed. Compare that suite of software and its attendant quality metrics to the old world of a typewritten syllabus dropped off at the campus duplication center alongside a packet of material to be copied and sold to students (still a normal practice during my undergraduate years, but gone by the time I was in grad school). Class prep, similarly, is more demanding. Where expert research faculty were once expected simply to share their knowledge with students in a lecture while grad students ran breakout discussions and study sessions on Fridays, the teachers I know are far less likely to have TAs and far more likely to be expected to implement a diversity of teaching methods during a class session, rarely relying on mere lecturing, because subject expertise is no longer seen as adequate. Add to this a whole host of teacher excellence initiatives, lesson plan rubrics, exploding syllabi, and how often students ask us to provide tech support for all the digital tools we more or less have to use, and you can see there is nothing efficient about the high-tech campus, at least not for teachers and learners. Penny-pinching admin might feel differently, having cut all those TA positions and closed down the expensive duplication center, but I think even they would insist their workloads have seen exponential increases (if only to justify their own salaries and administrative bloat).
When ChatGPT was released in November 2022, university faculty, on the whole, were justly taken aback. "We already have too many students; it is already so hard to grade effectively, deal with grade inflation and the grade grubbing that comes with it, and fight the growth of paper mills and other forms of digital cheating; and now you're telling us there's a free website that will just write decent papers for students? How are we supposed to deal with this?" Most saw immediately that this meant a huge workload escalation, not efficiency. And most understood intuitively that using AI to grade papers was deeply unethical, a complete abdication of basic responsibility, and likely a way to make ourselves irrelevant and expendable. We were simply going to have to spend much more time figuring out what to do with AI papers and whether it's possible to identify them. It's not, and there is now a growing cottage industry of lawyers advising students on how to respond to false AI accusations, because teachers have no standing or resources and the stakes are genuinely very high for students accused of academic misconduct. We are over a barrel here, and it inspires strong reactions.
I can put this in terms of my own experience. Before AI, I could teach my American Literature survey course and primarily assess students through a series of take-home writing assignments. It was always possible for a student to finish a writing assignment without completing the assigned reading, but there were tell-tale signs that a student had not spent much time thinking about the material before writing: a lack of insight, vague summaries instead of analysis, and mistakes in the details of character and plot. I could more or less use take-home essays as a proxy for testing in class. I usually supplemented this with in-class participation grades and at least one required one-on-one meeting with students to discuss ideas. Now, almost no one reads everything in a survey; that's been true since I was an undergraduate. But you could see pretty clearly whether students had followed some major themes of the course and had at least closely examined a handful of examples. This wasn't a perfect assessment, but I felt confident I could determine if a student was leaving the course having grasped key texts and concepts. And the advantage of papers is that they also help students rehearse writing and research skills alongside demonstrating content engagement. These are all important goals for English majors, and even for general education students taking their only humanities class. Plagiarism and citation problems existed, but I could usually catch them and ask for a rewrite.
College essays, overnight, ceased to work as an assessment mechanism. Even without having the AI write for them, students could get enough out of it to fake engagement. The essay might not be the strongest piece of writing I've ever received, given the known problems chatbots have with chains of inference, but the tool could do enough that a student no longer had to read to produce something passable. This is a huge problem because, as we know, students are not reading. The collapse of reading is probably the biggest, least-addressed crisis in university education. And no one knows how to solve it. Chatbots make it exponentially worse.
Someone may ask: why not just do more in-class testing? But there's a reason we moved away from those methods in the humanities at the university level. For one, they are not nearly as effective as you might assume at motivating students, but beyond that, building effective humanities tests is time-consuming. Grading effective humanities tests is time-consuming. Getting students to accept the grades handed out from a humanities test is time-consuming. Students expect easy As in non-major courses; they will go over your head to get one, and universities don't support non-tenure-track faculty. One of the only times an administrator has spoken to me one-on-one in over a decade at one university was to question my judgment about a student's grade. I think this experience is common for contingent and adjunct faculty. And bad student course evaluations can get you fired: it says right in my contract that I have to maintain "better than average" student course evaluations to be reappointed every three years. If someone fails a test, you will get blamed in the comments. That's not even getting into the number of excuses universities hand out to students for missing days of class and the need to offer extensive make-up options.
Just as an aside: the stakes for students are extremely high, and they are getting into devastating debt to attend college. When I was in school, your GPA only mattered if you planned on grad school—that is not the case anymore. The job market is terrible, and employers use GPAs to screen applicants. Students are set up to fight for grades in this way, and I don't blame them. I blame the administrators who don't support teachers and who keep raising tuition without funding instruction.
The introduction of AI to the learning ecosystem has damaged teaching in other ways, too. As I've mentioned before, teaching good research and information literacy is exponentially harder now that good sources are buried in mountains of slop. But none of this so far is about teaching with AI, just about teaching in a world where chatbots exist. It is harder to teach effectively today than it was in 2022. Every single year of my teaching career since 2014 has been more work than the previous one, even as my skills have grown considerably and I've built up a stock of syllabi and assignments. I often can't reuse methods from year to year because the situation of students changes so much. I'm always prepping anew.
But the promise of AI is that, once we adapt and integrate it, it will make things easier, more efficient, less time-consuming. Let me pause and say: AI probably is the tool that will allow us to keep up with the workload demanded by the introduction of AI, but I don't expect it to make anything easier. First of all, every teacher now needs to master the commonly available chatbots. We need to know what they can produce, what they can't produce, what errors they introduce, and where students can abuse them. And this is not a year-to-year bit of prep we can do over the summer; we have to keep up with the relentless pace of new model releases. If you are a teacher and you haven't tried the new, improved "Deep Research" tool on Google's Gemini 3.0, you have an outdated sense of how sophisticated AI output can be, even for free users. Google just absolutely bodied Claude and ChatGPT with a product that was instantly available on every single Android smartphone.
We also live in a grifter information ecology. Just sign on to LinkedIn and you will see countless slop posts from people promising they can train teachers to unlock the potential of AI. Many teachers are behind the curve of AI adoption, and at least half of the "AI in education" consulting space is just preying on that lack of knowledge to sell uncritical analysis and weak assignment ideas to university departments. It's a gold rush, everyone wants in on it, and you have to shut a lot out to find critical thinkers on the topic. Edtech has generally been a predatory industry (why are non-profit higher ed dollars going to for-profit firms instead of developing in-house talent?), and it's getting worse. Wading through the slop takes time, and departments are wasting money on substandard training sessions hosted by clueless consultants because no one knows anything. I cannot express to you how bad these sessions are, how much money and time they waste. Teachers are better off just experimenting with the tools on their own; maybe instead of paying consultants, universities could get faculty Claude subscriptions.
Once a faculty member gets on top of the latest AI tools and begins implementing them in the classroom, does this finally save any time? Not at all. As I said in my last email, I've found some room to speed up class prep and document formatting on the back end, but now we need writing classes and subject-area classes that devote weeks of the semester to using AI responsibly and with rigor. Students don't even know that is possible! They come in with the idea that's been sold to them: this will save you time on papers, and it might be cheating, so don't admit it, and figure out how to evade detection software. That is an enormous barrier for teachers to overcome. And the only way to overcome it is to teach, step by step, slowly, in class, what responsible use looks like. My colleague Sean Pears recently published an excellent article on where AI might improve writing instruction in particular and how to implement that in the classroom. His thinking has influenced me a great deal in these posts. Notice how much of the writing course now needs to be about AI, and how many iterative assignments there are to evaluate and grade. This is an excellent model, but it represents an enormous time commitment.
Now, lest anyone think I'm complaining about having to work, I'm not. I'm actually excited about this work, and I see real opportunity here. But we have to push back on administrators who are buying the sales pitch that AI represents an increase in efficiency. To do this well, we need to maintain a commitment to what writing and humanities teachers are good at: one-on-one attention to students, crafting compelling narratives around interpretation and intellectual development, an ear for language, and a great capacity for rigorously attending to claims and evidence outside the bounds of binary true/false propositions and mathematics (is this a good interpretation? does it respond to the conversation well? have you contextualized your evidence in a compelling narrative?). Teaching loads outside the tenure track (where most teaching happens) are already too large, and faculty development is already under-supported or diverted to useless consultants. My fear is that administrators are looking at AI only as a solution to budget problems and not as both an educational pitfall and an opportunity. We need more time.