Our Children Deserve Real Teachers
AI does not belong in our classrooms
(I’m Henry Snow, and you’re reading Another Way.)
The Washington Post reports that the Trump administration is planning to advance AI in K-12 education by executive order. This planned initiative would be led by Michael Kratsios, director of the Office of Science and Technology Policy, who has called on Americans to “give themselves to scientific discoveries that will bend time and space.” Do you trust someone like that with our children’s education?
I think AI has the potential to dramatically worsen education, and so, as a historian of labor and automation technologies, and an educator, I’ve written a short piece meant as a resource for parents and teachers who want to understand arguments against AI in the classroom. Please share widely! Note that while I have written “children,” this very much applies to adult students in higher ed as well– it’s just that keeping AI out of higher ed won’t accomplish anything if it’s allowed to do harm at earlier stages.
THREE REASONS AI HURTS OUR SCHOOLS
1- Generative AI is unaccountable and uncontrollable.
2- Generative AI makes it harder for students to learn.
3- Generative AI makes it harder for teachers to teach.
I’ll discuss each in order.
GENERATIVE AI IS NOT ACCOUNTABLE TO US
To understand each of these, we have to start by talking about how Generative AI works. You have probably heard that it “learns” by absorbing large amounts of information from books and images. This is not exactly true. Here’s a very rough outline of what happens:
-A large amount of information, such as images or text, is converted into numbers so the computer can work with it more easily.
-A computer identifies patterns in this set of “training data”: for example, the frequency with which certain words appear next to each other– the word “the” is almost never next to itself. This is an extremely simple example, of course. The model isn’t learning rules programmed into it; it is identifying broader and deeper patterns.
-We produce software that runs this process backward, in effect: it produces new content (images or text) that fits the same patterns.
What makes generative AI generative is that it does not just reproduce stored information– that would just be a database. AI models are a mathematical tool for aiming at a pattern and just slightly missing. If you miss too much, you get useless or incomprehensible words or images. But AI has to miss for two reasons. First, the AI model doesn’t actually store all of the information it was trained on. Rather, it contains those patterns it identified in that information. Second, if you don’t miss, you are not “generating” anything new– you just have an expensive and inefficient database system.
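To make this concrete, here is a deliberately tiny sketch in Python of the pattern-then-generate idea. It is not how modern chatbots are actually built– they use neural networks trained on vastly more data– but it shows the same basic logic: count which words tend to follow which in a body of training text, then produce new text by following those patterns with a bit of weighted randomness. The sample text, the word-pair counts, and the generate function here are all illustrative choices of mine, not anyone’s real system.

```python
import random
from collections import defaultdict

# A toy illustration of the idea above: find patterns in training text
# (here, which word tends to follow which), then generate new text that
# fits those patterns. Real models are vastly more complex, but the
# principle -- patterns plus randomness -- is the same.

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Steps 1 and 2: turn the text into tokens and count which word follows which.
words = training_text.split()
next_word_counts = defaultdict(lambda: defaultdict(int))
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

# Step 3: run the pattern "backward" to generate new text. At each step we
# pick a follower at random, weighted by how often it appeared in training.
# That weighted randomness is the "aiming at a pattern and slightly missing."
def generate(start_word, length=10):
    word = start_word
    output = [word]
    for _ in range(length):
        followers = next_word_counts.get(word)
        if not followers:
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat chased the"
```

Even at this toy scale you can see the problem: the program will happily produce sentences no one ever wrote and no one ever checked, because all it “knows” is which words tended to sit next to each other.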
Instead it produces new material that looks like the old material. Before we get into anything about student-teacher interactions, this already leaves us with a system that harms students in several ways. Some of the patterns in AI training data are useful to identify: grammar, for example, is the reason it produces readable sentences. But there are plenty of patterns that are unhelpful or even harmful in the classroom. For example, the Boston Tea Party is a famous event in the lead-up to the American Revolution. This means it is well-represented in training data, and it regularly appears in AI-plagiarized essays. But the Tea Party’s popularity does not accurately reflect its importance: Americans did not discuss “the Tea Party” as a central national event until well after it happened, or even use that name for decades, and many historians would place greater emphasis on other events instead of or in addition to it.
Teachers and textbook writers can make decisions about what to teach based on their expert judgment: what is true? What do students need in order to learn best? AI does not know the answer to these questions. We cannot simply train it only on “accurate” data either. There are not enough textbooks in the world to train an AI on them alone, so an educational AI that prioritizes patterns in that data set will still be influenced by the underlying language model. And because AI aims and misses, even an AI trained only on a hypothetically perfect data set would produce errors, because what it does is make things up.
Particularly grave problems come from using AI like a database or body of content. I’ve seen lessons where AI companies promote “talking” to historical figures. They lie, even on the most basic topics, but often in subtle and confusing ways. As an example, I tested an AI Ben Franklin bot by asking it how Ben felt about slavery. It wasn’t all wrong, and pointed out that his position changed over time. But it mentioned his 1754 “Plan for Settling Two Western Colonies” as an example of abolitionist material he wrote later in life, after being inspired by abolitionists like Olaudah Equiano.
But Equiano was 9 when that plan was written, Franklin wasn’t an abolitionist yet and had not met him, and the plan does not have anything to do with abolitionism. It certainly does not argue against slavery, as the AI answer claims. Written decades earlier than the AI answer implies, it tells us not about the post-independence abolition era but the pre-independence era of colonization.
To a non-specialist this might seem unimportant. But students who read Franklin’s actual western settlement plan, or worked through it with a teacher, could learn about all kinds of things: British-French competition leading up to the revolution, the ambitions of future revolutionary leaders to steal land from Indigenous people, Indigenous communities’ own diplomacy and politics. A teacher who read it could bring all these things and more into their lessons. Teachers who worked with textbooks or materials developed by other educators and academics could do the same. But if they consulted ChatGPT or similar AI chatbots instead, thinking these are the substitute that AI marketers claim they are, both they and their students would be worse off for it.
Traditional curriculum development does not include making stuff up. AI allows us to produce false and misleading information and answers without even realizing we’re doing it— because it doesn’t know either. Sometimes AI tells the truth by accident, and other times it produces something that sounds like truth but very much isn’t. Because of how AI works, it cannot know the difference. This is a problem for every use of AI, from seemingly harmless tasks like automating emails to impactful decisions like developing course materials.
We cannot fix AI’s problems by teaching it to do better the way we would teach a human being, either. AI cannot explain its decisions to us or itself, because it does not know why it has made them. You can ask it to explain, and you might get something that sounds like an explanation, but remember, AI is a mimicry machine. This communication is also a one-way street. Teachers talk to each other, administrators, and parents about what to teach. AI cannot do this: users cannot change the model or the training data, only the “prompt” they give it.
Because of how opaque it is, AI is fundamentally unaccountable to users. We cannot trust it with any goal. Not safety, not accuracy, not responsibility. Chatbots produce plausible-sounding but false answers in every subject. And as more money in education goes to chatbot development, fewer resources are devoted to developing fact-based curricula and methods that teach true information and build real skills. The stakes can be even higher than that: the Google-backed company CharacterAI has already been involved in the suicide of a 14-year-old. When chatbots go wrong, the only recourse is legal action after the fact. Schools, parents, and teachers can’t afford to sue after problems arise, and money can’t bring back the learning that AI denies to our children.
GENERATIVE AI MAKES IT HARDER FOR STUDENTS TO LEARN
Education always involves training tasks that don’t produce something useful on their own. A carpentry teacher might have students make birdhouses, for example, not because they actually need dozens of birdhouses, but because students learn skills by producing them. The same is true for writing and math. Mathematics helps us make sense of the world around us, from understanding the economy to gravity. Writing does too: by understanding how to express, compare, and connect complex ideas, students learn to form their own.
AI disrupts this process. Having access to a chatbot that can produce a passably written (but dishonest) paragraph on almost any subject gives students the impression that writing has been automated. Luckily both students and teachers have a tried-and-true method to handle this– we have done it in math with calculators. You learn skills like basic multiplication first, and then sometimes use tools that automate the basics.
But calculators and AI work differently, because writing and math are different skills. Long division by hand is a simple process that produces one answer. Automating this– once students have learned to do it– so that we can move on to more complicated tasks makes sense. There are no choices in basic arithmetic: 2+2=4. We teach students these skills so they can move on to complicated tasks where they have to make choices: what parts of this word problem are useful to me? As they get more advanced in mathematics: what methods will be useful here?
Even writing one sentence is a process of choices– more like solving whole math problems than doing individual operations. Writing is a complicated process that will produce different results when done by different people. Spellcheck is like a calculator! A program that writes your sentences for you is more like hiring someone to solve math problems for you altogether. In fact, this is how many students are starting to use math-based AI tools. This does not build students’ skills. AI in the classroom misleads our students into thinking that key skills are a waste of their time.
It also encourages them, and us, to lower our standards. AI is a tool for making up convincing but fake and ill-informed answers. If we use that tool in our classrooms, we signal to students that making up fake and ill-informed answers is what school is about: not gaining new skills that will help them in the future, not learning to understand the world around them, and not growing as people. It tells students that everything is busywork.
Finally, advocates of AI in schools sometimes praise it for engaging and exciting students. I can understand why interacting with a digital “Ben Franklin” might be more exciting for a middle school student than reading something he wrote. But student engagement only matters if what they’re engaging with is valuable— a misleading chatbot isn’t. The costs here are higher than losing out on individual lessons. Difficult, at times uninteresting work helps students build skills. If we prioritize engaging but shallow and/or meaningless lessons with AI, we short-circuit students’ ability to ever engage more deeply.
GENERATIVE AI MAKES IT HARDER TO TEACH
Right now, there are tech companies trying to sell AI to your schools. They have not done any studies on whether this produces better outcomes for students– they are running those experiments on students right now, and trying to get rich in the process. A charter school planning to open in Arizona will have no teachers at all– just “guides” who encourage students while a webcam measures their “emotional feedback” to automated lessons. Those talking up these technologies are being taken advantage of by businesses that have one interest in mind: making money. This is, legally, what a business must prioritize– not student well-being.
Magic School AI, which claims to be “the fastest-growing technology platform for K-12 educators and students,” says on the “mission” section of its website that more than 40% of students feel burned out. Teaching is a hard job. Ed tech companies are encouraging teachers and administrators to believe AI can help, by making it easier to do tasks like developing teaching materials. MagicSchool says it “can save a teacher up to 10 hours per week.” Schools don’t have to make a profit, but they do have to use public resources effectively. Some districts might simply be happy their teachers are not working long hours after school on student materials. But not all districts are willing– or financially able– to pay teachers the same amount for less work: as employers, they naturally want the best deal they can get, and since they are competing with other local priorities for funding, they have to get that deal. This is a significant reason why teachers are already overworked.
And this is what AI companies want. In June 2024, Magic School AI raised millions of dollars from venture capital firms, most notably Bain Capital. What Bain does is bet on businesses it hopes will massively grow. Mere profit isn’t enough: Bain bought Toys R’ Us, made it take on billions of dollars in debt, and then ran it into the ground when it didn’t expand enough. Private equity firms make risky bets hoping for big rewards, and often get them by destroying the businesses they have purchased. This business model has done harm everywhere it has been applied, from restaurant chains to dentistry– one private equity firm pushed dentists to perform unnecessary root canals on children.
Now these same companies want to replace teaching with chatbots. If Magic School remains a small company that helps teachers, its investors will not get their money back. What they are counting on and demanding is rapid and vast growth, whatever that means for our schools. That money will have to come from somewhere. Bain Capital is betting that it will come from our school budgets.
If AI becomes more widely used and actually does save time– by replacing the hard work of teachers with unreliable and automated computer output– many local communities already facing hard funding choices (independent of the costs of AI itself) will try to hire fewer teachers to do more work. Because AI lies, and because it is unaccountable, the more we use it in schools, the more time teachers will have to waste correcting its errors– even as AI’s rising role in schools means they have less time to devote to that task.
With enough time, teachers can do just about anything better than an AI model can. But if teachers are expected to use these tools, they increasingly will not be given enough time to employ their own judgment or skills. The end result is a classroom where teachers can only pretend to teach, and students can only pretend to learn.
These are extreme predictions for a technology mostly being used in minor, seemingly harmless ways. But AI is threatening because it is weak and wasteful, not because it is powerful and world-changing. The AI bubble is likely to burst sooner or later, for all the reasons discussed above. It might be hard to believe that a technology with so much money invested in it could be so wasteful and ineffective, but market value doesn’t tell us the truth– the 2008 financial crisis came from speculative bets on housing, an asset with real underlying value, and still caused enormous harm. AI could do much more harm. There is no way of knowing how long the AI bubble will last, but for every day it lasts, more and more money will be dedicated to extracting value from our schools.
IS AI INEVITABLE?
Advocates of AI say this technology will inevitably be a very important part of the future. This is one of the most important arguments they have, and it deserves a few responses.
Unlike earlier automation technologies, like the power loom and the computer, AI produces random and unreliable results. Businesses don’t want to replace administrators with unaccountable computer programs that might randomly threaten customers or mislay data. So far, the most profitable use of AI has been helping students cheat on homework. It is unclear what role AI will have in the future of our economy. And if it is a prominent role, maybe we will need “AI” classes. But this is not an argument for throwing resources into making existing education worse.
Even if AI is going to be a major part of the future economy, we still should not use it for teaching. There has been little time to study its effects on education generally. We already have reason to believe that it harms motivation, limits our ability to assess student learning, and promotes cheating and plagiarism. Just because a technology might be important in the future doesn’t mean integrating it immediately into our educational system is a good idea.
Using AI in schools doesn’t necessarily even help students use AI better, either. Consider computers. While most students now use computers instead of paper notebooks, this hasn’t improved students’ understanding of how to actually use computers. The user interfaces on tablets and Chromebooks are designed to be friendly and easy— like ChatGPT— by hiding complicated functions from users. This means students aren’t learning how to use a computer— they’re learning how to use a specific set of applications designed by a specific company. By the time students reach college, many of them don’t understand basic things about computers, like where a file is stored or what that even means.
If you think AI is an exciting new technology for the future, you should be encouraging AI classes where interested students work to understand and play around with raw AI language models— not having them interact with simplistic consumer-grade chatbots that prioritize accessibility over power.
We have kept other technologies out of the classroom, and even out of our societies altogether, when we felt that would help students. Many schools are moving to ban cell phones, for example, because they are a distraction. I don’t know what the right decision is there, but I do know it should be a decision we actually make. The same is true here. Whatever decisions we make about AI should be decisions we make together– not a series of individual decisions made because we have limited resources and time, or decisions made for us by consultants. Those who use AI tools often do so because they aren’t given sufficient time or resources to do everything they have to do. We need to fix that problem together– not take this shortcut. Our schools need more resources, and our teachers need higher pay. AI companies don’t.
Education has always been a subject of public debate. But we can all agree decisions about our schools should be made by human beings, not by computer programs or the multi-billion dollar businesses that own them. Public education is a powerful commitment to prepare our children for the future, made by coming together democratically. We can do that now by prioritizing our students’ education over the riches of AI companies seeking to make a quick buck.