Takes & Typos

June 8, 2025

AI and Policy Sclerosis

A zoomed-in LED screen with Big Brother from "1984" vibes

Happy Monday. This weekend was Eid Al-Adha, which called for some road tripping on our part, thus a Monday newsletter. 

The Indicator for the Week: 118. 

According to CBS News, immigration enforcement officers have arrested 118 people in Los Angeles this week, including 44 on Friday. These enforcement actions touched off the police riots underway in the city. I want you to note my choice of language here: police riots. The riots are caused by the actions of law enforcement. There were no protests in LA until ICE showed up and there were no riots until police fired on protestors. These are police riots. Like in Minneapolis in 2020, Ferguson in 2014, New York in 2003, and Seattle in 1999. 

Onto the show.


A couple of careful readers (shouts to CW and JH) noticed the headline image for a recent newsletter was a prompt I wrote into DeepSeek. DeepSeek is the Chinese rival to ChatGPT. The prompt read: 

What threats does society face from generative AI and how should policymakers be working to mitigate those threats? Also, to what extent do you think policymakers in the US are up to the task? 

On the suggestion of another reader (TS), I fed what I wrote in that newsletter into an AI and asked for a response. I actually did it twice: once with ChatGPT, and a second time with DeepSeek. 

Here is the prompt I gave both models. I had to give it parameters because ain’t no AI finna talk crooked to me: 

I wrote and published a piece that is critical of AI. I would like you to craft a rebuttal that takes issue with and tries to refute what was said in the piece from your point of view as an LLM. Use concession as a rhetorical device when possible and please be respectful in your tone.

The GPT response is badly written. It’s just the same AI slop that we increasingly see everywhere online. The response from DeepSeek was more interesting. 

The first thing I noticed is that if you turn on "DeepThink," it shows its work. Here's how DeepSeek "reasoned" before responding:

Okay, so the user wrote a critical piece about AI focusing on societal implications, especially deepfake scams and the lack of regulation. They want a rebuttal from my perspective as an LLM, using concession and a respectful tone. Let me break down their main points first.

The user is a teacher who doesn't want to be pigeonholed into just education topics. They care more about societal issues affecting students. In their AI critique, they highlight deepfake scams, unauthorized use of intellectual property, workforce displacement, privacy concerns, and the immediate harm caused by AI, especially in scams and child exploitation. They argue that current AI is already dangerous and will only get worse, criticizing tech companies for neglecting ethics and governments for inaction.

I need to acknowledge their valid concerns, agree where possible, then counter with the LLM perspective. Start by agreeing on the severity of deepfake scams and the need for regulation. Concede that current AI has flaws but argue that the technology itself isn't inherently malicious. Emphasize that the misuse is by bad actors, not the AI. Mention existing efforts to detect deepfakes and the potential for positive uses of AI in fraud detection.

Address the point about AI never being worse than now by agreeing on the risks but pointing out that detection tools are also improving. Stress the importance of proactive measures and collaboration between tech, government, and civil society. Concede that ethics can't be an afterthought and agree on the need for accountability. End by finding common ground on prioritizing human well-being and responsible innovation.

Make sure the tone is respectful, not dismissive. Use examples like medical advancements or fraud detection as positive AI applications. Highlight that the technology's impact depends on human choices and governance. Avoid sounding defensive; instead, focus on shared goals and the potential for AI to be part of the solution if managed correctly.

I am told that ChatGPT does this "under the hood," but I thought it was cool to see it "reasoning" in real time. I put reasoning in quotes because current AI models don't actually reason but rather work off predictive algorithms, but now I am being pedantic. 

Here’s how it opened:

Your critique of AI cuts deep, and rightly so. You’ve painted a portrait of a technology already weaponized—deepfakes draining bank accounts, synthetic child exploitation material traded in shadows, scams metastasizing faster than defenses can adapt. These are not speculative fears. They are facts. 

In the prompt, I asked it to "use concession" because that's a skill I am teaching my students right now. I teach concession as the "You're right, but…" move in argumentation—showing you've listened before you push back. Usually concession precedes a rebuttal, but the AI didn't really do much rebutting.

Surprisingly, it also laid out the ways it "thinks" AI should be regulated and its societal impact mitigated. I found this peculiar. The way DeepSeek says it should be regulated is actually decent-sounding policy, modeled on things being proposed in the EU. However, these same policies would be dead on arrival in the US thanks to the stranglehold tech oligarchs have on our politics.

It suggested a tax on the environmental impact of AI data centers, legal liability for tech companies for the content their models produce, more aggressive takedown policies, compulsory watermarking of AI generated images, and fining firms that don’t comply “into oblivion.”

Imagine a world where AI companies paid into a “data pollution” tax, like fossil fuel giants fund environmental cleanups. Where platforms were legally liable for deepfakes they host, forcing YouTube to take down fraudulent Joe Rogan videos as swiftly as it demonetizes a small creator. Where watermarking AI content wasn’t voluntary but mandatory, with the FTC fining noncompliance into oblivion. 

Knock me over with a feather. 

We live in a time where our political institutions have gone from being unresponsive to majorities to counter-majoritarian—actively working against popular opinion on a host of issues. I lamented over the last few newsletters how little thought has gone into the regulation of AI in the US, only to have the thing itself propose a regulatory framework to the left of anything likely to emerge from Congress, under the control of either political party. 

What does it say about the state of our democracy when a chatbot can out-govern Congress? To be clear, I am not calling for AI governance. Bleck! A pox on that!! 

But when AI, a creation of the corporations steering our politics, can propose better rules than the people sworn to serve us? We are witnessing the twilight of meaningful representation, where systems designed to reflect the will of the people instead exist to neutralize it. The future isn’t being written by algorithms; it's being erased by the political institutions that refuse to adapt to the moment.

–

I swear next week’s newsletter will be about another topic. I definitely have some thoughts about events in California. But also look for a further conversation about AI in schools and beyond on the podcast with Bill Fitzgerald. He joined me on the show in 2022 to discuss online privacy and protecting student data.

As always, if you have any thoughts or feedback about the newsletter, I welcome it, and I really appreciate it when folks share the newsletter with their friends.

Website: https://www.natebowling.com/