Interrupted Thoughts: Systems Analysis at the Intersection of Policy, Privacy, and Culture.


December 1, 2025

Some Basic Points About AI and Writing and Education

This is an attempt to elaborate on my developing stances on chatbot use for writing and education. Nothing is settled—everything is still so new and untested—but I want to summarize as best I can where I currently am as simply a practitioner adapting to a new technology. This is not about the overall politics or ethics of AI. Suffice it to say, I think the initial inputs for training large language models were an act of intellectual theft on an almost impossible scale, representing a neo-feudal enclosure of so much human labor and creativity to enrich excessively few people. Our political response needs to be equally radical in its reappropriation of these tools for democratic control as something like public utilities or at least Wikipedia (OpenAI was originally a similarly structured non-profit before the Altman coup). But I am less interested in making arguments right now than simply explaining where I'm coming from.

Chatbots Are Incapable of Producing Quality Final Products

No one should be using what they produce as writing. They are best understood as editorial, research, and personal assistants. The problem here is that chatbots are often sold to the public as producers, not assistants. They make music, images, essays! It saves time; you don't have to do much, just prod them a bit in the right direction. This is ridiculous. Chatbot papers are bad, chatbot music is awful, chatbot image generation is an embarrassment of plagiarism and fraud. It's all slop. I think it is hard to see past the slop sometimes, for both teachers and consumers. And it is hard for students and "content creators" not to be tempted into lazy, substandard work by the way chatbot output sounds vaguely correct or looks okay. Pay any sustained attention to the output products, though, and their value collapses for a host of reasons. They are also shockingly bad at math.

However, Chatbots Don't Suck

They are, in fact, very useful at certain tasks if you can avoid the fairly significant pitfalls. However, all of those pitfalls represent incredible challenges for teachers and students.

Research and Summarization

They are very good at collating and roughly summarizing things that people have already said. They do this very quickly, much faster than humans aided by search engines, but they do make a lot of mistakes. You have to check everything for yourself and make sure you are getting detailed sourcing. Still, even with the mistakes and the time you need to spend checking up on them, it is simply faster to automate certain preliminary research tasks through chatbots.

The problem is that you have to already be knowledgeable to a degree about a subject to really make this work for you. For instance, I was prepping for a class on 19th-century sentimental novels. I am not an expert in the genre, but it is an important topic in my field; I read the major arguments in graduate school, and I have seen them summarized many times in articles and monographs. Still, I could not quite remember some turns in the debate, and I was underinformed on any recent scholarship on the topic. I asked Claude for a summary of the arguments of a couple of major scholars and for some more recent sources in certain major journals. It was useful and saved me time on class prep. I was able to easily test what Claude produced once my memory was refreshed and look up the sources. But it certainly helped that I was using it to refresh my memory, not learn something new. I knew the basic arguments and claims, I knew the significance, but I couldn't remember some details. I could specify the details I was querying and get a decent, if a little rough, summary. It functions as a custom set of notes. It's actually quite miraculous it can do that when you think about the sophistication it evinces (flaws and all), but it is significantly more limited than AI hype suggests.

For new topics, my experience is that you have to really know what questions to ask. Chatbots don't argue, and they don't understand logical chains of inference. But if you ask a question, they can bring up ideas that often come attached to that question. They can't think through what might be relevant; you have to prompt them with something that they probably judge as related to the information you are seeking. They absolutely need an expert human interlocutor to produce useful material. And students are not experts. This is a huge pitfall.

The Flattery Problem

The above pitfall is compounded by how flattering they are. They induce Dunning-Kruger. You get responses that start like this (taken from Claude) all the time: "This is an excellent normative argument, and you're absolutely right that it shifts the debate crucially." "The empirical evidence strongly supports your claim." And the AI doesn't know any of this; it is not evaluating your argument in the least, just blowing smoke. All these introductions mean is that it has found some evidence for what you are saying and recognizes in your language the general patterns of a good argument; in other words, that some other people have at some point made arguments like yours.

But they indulge many paths you will take them on, including contradictory ones, in the exact same way. You can argue for and against the labor theory of value (for an example of a controversial theoretical topic I explored) and get similar flattery. Most chatbots are sophisticated enough to push back on outright untrue things like "transgender people don't exist" or "vaccines cause autism," even when, as in these instances, there are large contingents of real people making those arguments. But the second you step into interpretation and analysis where there truly is intellectual controversy (like the labor theory of value, or another example I tried, how much the Somerset Case drove the American Revolution), you start getting flattered if they find anything at all supporting your claim in their vast inputs.

It's easy to grow convinced you are onto something, but chatbots can't replace testing your arguments and ideas against real humans who might disagree. A student could be made to feel like they have a powerful argument, when in fact it is far more widely contested and would take much more work to establish as plausible for any informed human. It's easy enough to say, "don't let yourself be flattered," but we are all flawed humans with egos, and it feels good to be told we are being smart. Non-experts could be easily misled. Hell, even experts could succumb to thinking they have a stronger argument than they really do (like neoliberal economists thinking Marxism was refuted long ago because the marginalists disproved the labor theory of value through the analysis of prices, even though that tradition has never grappled with dialectical materialism meaningfully). This tech cannot become a proxy for the risky and vital work of testing arguments in public against experts. It would be a disaster for human knowledge.

Implications for Teaching

As a writer, to make use of a chatbot effectively, you have to be extra skeptical and cautious. As a teacher, you probably have to be far more hands-on with student use than you ever have been before to make sure you are helping them develop good habits of skepticism and critical reading. A large fear of mine is that universities will mistake chatbots for a way to scale class sizes: assistants for teachers that allow them to effectively teach more students in a semester. It's just the opposite; teachers need to provide far more intensive training in both writing and utilizing this tool "like an expert." I actually do believe chatbots can improve student research and information processing skills, but only if we teachers can spend the needed time working with them. This cuts strongly against the value proposition of AI as offering efficiency gains to employers. It may offer meaningful assistance for experts, but it slows down training and teaching processes significantly. Using it well demands a level of careful rigor, applied to far more information, that only attentive teaching can instill. Under current workloads, we are already stretched thin, never mind this addition. It has potential, but not what it's being sold as.

Editorial Assistance

Similarly, chatbots are good editors, but only if used in the right way. Chatbots enormously overcorrect and regularize prose. They smooth out all the tricky little areas in writing where you are asking the reader to slow down and consider a more complicated idea with complicated syntax. They eliminate voice and enforce a corporate regularity on writing. You can easily overuse them and ruin your prose and ideas.

Just as with any spelling and grammar check, you need to weigh each editorial suggestion, not accept them blindly. This is a professional-level writing skill. Students who lack confidence in their command of usage rules and norms are overly suggestible to correction. But students also produce a lot of errors because they often don't know how to sufficiently clarify their thoughts. Student writing is full of awkward passive voice because they don't know who or what is doing the verb, and they are trying to avoid staking a claim they are unsure about. Students often mismatch nouns and verbs, giving too much agency to abstractions. Traditional grammar and spellcheck often miss these errors because they focus purely on mechanics, not meaning. Chatbots catch them more often because such sentences don't resemble the writing they've been fed as examples of good writing. But chatbots can't clarify your thoughts; they can't give you a subject or ground an abstraction in the concrete. Only the writer can do that.

My advice to students has been to feed individual sentences into a chatbot and ask for multiple clearer rewrites. Rather than choose one of those rewrites, see if they help you identify why the sentence is unclear and such a struggle, and then use what you learn to craft your own stronger sentence. The idea here is the same as with grammar and usage questions on standardized tests: look at the answer choices first to see what is changing so you can determine what is being tested. Look at the options a chatbot gives you and compare them to what you wrote to see what you are missing. It works pretty well if students actually do it. But you have to teach it, and it takes time both students and teachers often don't have.

Executive Functioning and Decision Fatigue

This is a little more abstract, but it's a use I find particularly valuable. We are stressed humans with too many responsibilities—students even more so than I, in most cases. Decision fatigue is constant. This makes starting work, getting a schedule worked out, and figuring out a core idea to write about really challenging. But once writing gets started and you start chaining thoughts, it gets easier. So many students procrastinate starting something because they just don't know how to begin.

Chatbots can help here, profoundly. I wouldn't even call this brainstorming (and they are terrible at outlining) so much as executive function assistance. Ask Claude: "I need to write a 4-page paper about The Scarlet Letter in the next week. Can you give me a breakdown of the process for writing this paper?" "Claude, what are some of the most common interpretations of the symbolism of the letter?" "Claude, how can I get started?" Basically, what I'm suggesting is offloading the difficult hurdle of starting writing in some way—do a couple of exchanges tossing around ideas or things you need to do, and once your brain gets moving, close out Claude, don't use what it produced, and start writing. Maybe at the end of the writing session, tell Claude what you wrote. Then next time you sit down, open up Claude, ask it about what you did last time, and use that to jump back in. I've used this idea effectively for job search tasks that I normally hate, and I think this is one area where the flattering language might actually help get a student moving on their work and push down self-doubt.

The Mental Health Caveat

The risk here is if this ever shifts from motivational support to actual mental health support. That is extremely dangerous. ChatGPT-induced psychosis is a documented phenomenon, and it gives me pause about actually suggesting this to students. Chatbots can only give conventional advice; they can't address your subjective reality, and they can't adjudicate your perceptions and misperceptions of that reality. That's why you need friends, family, therapists, and clergy. If you are just having trouble getting started on a paper, I can't imagine there is much harm in letting Claude break that down into a series of microtasks and offer some motivational poster speak. If that motivational block is about serious fears of failure rooted in trauma, you need to not be talking to a chatbot. Young people often don't know yet what's holding them up, and often don't know how to get the support they need, particularly since at many colleges, student mental health services are horribly underfunded, and fewer and fewer private therapists accept insurance.

Chatbots Are an Illusion of Consciousness

They are a magic trick, not a mind. They are probabilistic machines trained on huge datasets, and they are extremely impressive, particularly when you first start using them, before the limitations become more obvious. But they are a Mechanical Turk. The little man inside is all the human effort and creativity that was fed into LLMs to train them. They will never reach anything close to what most of us think about under the term "artificial general intelligence," although they can be "smarter" than regular old humans at some things because they work faster than humans can. They are a useful tool for organizing and accessing knowledge, for presenting it in certain forms, and for wading through the seas of data online and making it readable in a condensed way. That by itself is an immensely impressive and exciting technology that significantly improves upon old ways of navigating internet knowledge when it works well. But they don't think, they can't produce true chains of inference, and they can't assess how relevant certain evidence is for a given claim (only whether it seems relevant according to what people have done before). Chatbots work best when animated by a user's intelligence, not when they replace it. Which means we need to teach this technology, teach it well, and with rigor. Because:

Students Are Using It and Abusing It

They don't know how to use it well, and they often let it think for them. It is impossible to police, as a teacher, because it is everywhere. The best you can do is teach good, responsible, critical use and request students be rigorous in documenting how they use it. The reality before and after chatbots is not one of clear progress or decay, but it is a significant change, and we are failing if we don't adapt. We simply have no choice.
