Flying Blind with AI

If you read Takes & Typos, you likely know of me through my teaching or related advocacy. But I really don’t enjoy writing about “education.”
Specifically, I bristle at the idea of being viewed as an “education writer” because it feels limiting. No shade to folks who write teacher blogs, but the interdisciplinary Greener in me just can’t be pigeonholed into classroom practice or policy.
Like, cool. You found a new way to teach about map projections. That’s swell. But did you know that half the kids in your class are going to lose their health insurance if Congress passes this bill? I am not trying to yuck anyone’s yum but the latter of those conversations is clearly more important.
What happens in the classroom is downstream of what’s happening in society and to communities. I care about best practices but I care more about the social trends that undercut student well-being and achievement on the aggregate.
This week I want to talk with y’all about AI but not in the way you may be expecting a teacher to do.
Yes, I have listened to that Ezra Klein conversation about AI in schools. Yes, I have read that Atlantic article about kids going to elite universities functionally illiterate. But teachers have been screaming about the literacy levels of these iPad babies for a decade — this is all very predictable chickens coming home to easily foreseen roosts.
Now, more than at any other point in my career, I have my head around what's within my control as a practitioner and what is outside of it. We have a policy around AI use in my classroom.
Did the students write it? Yes.
Do they understand responsible use cases of AI and that taking credit for work that is not yours is functionally plagiarism? Yes and yes.
Do they follow the policy they articulated at the beginning of the year? Results vary, honestly.
That’s why 90% of the graded assessments in room 308 are pencil & paper. I do not put my pants on every day and drive to work to read and grade AI slop.
I refuse. No, thank you.
So that’s the AI in the classroom conversation.
Did the little diatribe above involve more thinking about the impact of AI than the US government has done? Unfortunately, yes.
Instead, when it comes to AI, I tend to focus on its societal implications:
It amplifies and spreads disinformation and misinformation
It relies on the unauthorized use of intellectual property to train large language models (LLMs)
It threatens to displace millions of working-class people from the workforce
It raises serious privacy concerns, especially as scammers weaponize AI tools
For today, I want to once again focus on number four.
I previously wrote about a deepfake scam, reported by 404 Media. A deepfake is synthetic media created using artificial intelligence to realistically replace a person’s face, voice, or actions with someone else’s, often making it appear as though they said or did something they never actually did.
The scams in that instance used images of Joe Rogan, Taylor Swift, Oprah, and other celebrities posted on YouTube to get people to sign up for fake government benefits programs.
If you think about how much info the feds generally ask of you when you apply for a program, and how valuable data like that is to scammers and hackers, it’s obvious why this scam is lucrative.
404 Media is now back, reporting that improvements in AI technology have further increased the power of scammers. Instead of recording YouTube videos, the technology now lets them produce convincing live video. The videos are good enough to fool your mother-in-law, uncle, or grandparent into wiring money to fraudsters in Nevada, Nigeria, or Nepal.
The equivalent voice-only technology is good enough to fool employees at major financial institutions. The Business Insider article linked above notes “a report from Deloitte predicts that fraud losses in the US could reach $40 billion by 2027 as generative AI bolsters fraudsters.” That figure is double the VA’s budget for mental health services or roughly 5% of the bloated Pentagon budget, whichever figure suits you.
Something that is stuck in my head is the quote that “AI will never be worse than it is right now.” It will only get better and more convincing, which means the tools deployed by scammers will rapidly become harder to detect.
They’re already good enough that, according to CNN, someone used a deepfake to nab $25,000,000 from a Hong Kong investment bank last year.
The tools are already effective enough that, according to Variety, deepfake scammers had amassed $200,000,000 in ill-gotten gains by April of this year.
The images they can render are realistic enough that, according to the Pennsylvania Attorney General, some of the worst people on Earth are using AI to create sexually explicit images of minors to sell & trade online.
We’re not in a testing phase. We’re in the midst of deployment.
The scammers have already arrived and they’re not experimenting — they’re cashing out. The fantasy that markets self-regulate or self-correct is just that: fantasy. And the tech companies building these tools have made it clear that ethics are an afterthought, something to be drafted in a press release after the quarterly earnings call.
If this is the worst AI will ever be — and it's already convincing your relatives to wire money across oceans, already siphoning hundreds of millions into the pockets of fraudsters, already fueling child exploitation — then what happens next?
We can’t afford to treat this as a curiosity. We have to start asking harder questions:
Who benefits from this “progress”?
Who’s left to clean up the damage?
And what kind of society are we building if this is the baseline?
I don’t know but I am interested in having these conversations.
–
Oh! I almost forgot the Indicator for the Week: 80.
Actually, it’s 80 million… no, actually, it’s 80,000,000. As I noted on Bluesky, I think it’s important to write those large numbers out so people can see the scale of them.
80,000,000 — that’s how many people in the US receive their health care via Medicaid. It’s a program the House voted this week to slash by $700,000,000,000 ($700 billion) over the next decade. Those cuts will mean reduced benefits and low-income and disabled people dying, and they serve as a partial offset to the $4,500,000,000,000 ($4.5 trillion) tax cut bill the administration proposed.
That sound you hear is hundreds of rural hospitals preparing to close and the wealthiest among us hoovering up more and more wealth.
As always, I welcome your thoughts.
See you in seven.
If you have any thoughts or feedback about the newsletter, I welcome it, and I really appreciate it when folks share the newsletter with their friends.