AI is starting to secretly edit your files
One of the scarier headlines I’ve seen recently comes from the BBC: YouTube secretly used AI to edit people’s videos (August 24, 2025). This is just as I predicted on Techtonic a few weeks ago in my episode on three emerging dystopias (July 28, 2025), in which I warned that Big Tech AIs would – at some point in the future – start modifying the primary sources we rely on for the truth.
I never guessed that it would happen so soon.
The BBC article describes how Google’s YouTube has begun modifying videos that people upload – without their knowledge or consent:
Rick Beato’s face just didn’t look right. “I was like ‘man, my hair looks strange’,” he says. “And the closer I looked it almost seemed like I was wearing makeup.” Beato runs a YouTube channel with over five million subscribers, where he’s made nearly 2,000 videos exploring the world of music. Something seemed off in one of his recent posts, but he could barely tell the difference. “I thought, ‘am I just imagining things?’”
It turns out, he wasn’t. In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission. Wrinkles in shirts seem more defined. Skin is sharper in some places and smoother in others.
Google built, tested, and launched this new feature without telling the users. Which raises two questions. First, why the secrecy? And second, why modify people’s videos at all?
As for the first question, Google’s secrecy suggests that leadership – which has consistently shown contempt for users – was aware enough to know it was doing something unethical, and thus worth hiding.
The more relevant question, though, is why? What benefit is it to Google to reskin (literally) people in videos to appear smoother, sharper, whatever? Google claimed in a statement that it’s “working on ways to provide the best video quality and experience possible” – indicating that whatever users post to YouTube, Google claims it can make better by letting an AI modify it first.
But that’s not the real reason. (I know, shocker, Google is being disingenuous in its press release.) The reality is that Google doesn’t know, and doesn’t care, whether the AI makes the video quality better. The value of a two-trillion-dollar company is not going to be noticeably affected by noodling with the smoothness of people’s skin in uploaded videos.
Instead, there’s a more strategic motivation at work. Google needs to find reasons for its AI to exist. Big Tech companies are desperate to show some sort of return on their monumental investment in the AI bubble. As Ed Zitron points out (July 21, 2025):
If they keep their promises, by the end of 2025, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560 billion in capital expenditures on AI in the last two years.
Half a trillion dollars poured into AI, with no substantial return? No wonder Google decided to jam some AI into its YouTube-upload processing. Content creators didn’t ask for it, users don’t want it, and it only serves to erode the trust that people have in a video repository (for anyone, that is, who still somehow maintains a nonzero amount of trust in Google). But it’s a use for AI, and that’s good enough. Google, just like the other Big Tech companies, is trying AI on anything and everything, whether it makes sense or not.
This is what it feels like as a user:

I said at the beginning that the BBC story is scary, but so far, this story doesn’t come across as terrifying. Sure, it’s misguided, deceptive, and unethical – but that’s been Google for years now. So far it’s all just an AI tuba in the user’s face.
The truly scary possibility is what happens when modifying users’ content, without their knowledge or consent, actually does provide a benefit.
Let’s broaden our view a bit. Once Google has normalized the practice of “AI tweaking user content,” where else might Google want to put this in action? Do you really think Google will limit AI edits to video uploads?
→ Support my work by joining the Creative Good community. We’re on a journey to create the good in tech.
Remember that the web started, and Google started, with a user experience based on text. Consider places where user-generated text flows through Google. If you’re following the argument, you know where this leads.
Gmail. Google Docs. Google Workspace.
Anywhere users communicate with each other through Google – whether email, documents, or a team space – is a candidate for some “improvement,” secretly, by Google’s AI.
Can you think why any powerful interest might, at some point, want to slightly tweak the wording of a message or two?
Do I have to spell it out? It’s already happened, albeit in more publicly viewable ways. Read this Time article (February 11, 2025) about why Google Maps displays an incorrect name for the Gulf of Mexico. The “improvement” came immediately after a key partner demanded that the disinformation appear.
Now take that attitude by Google leadership – the “growth at any cost” ethos free of any ethics or integrity – and, as the bros say, “scale it up.” Multiply it by millions, by billions, by tens of billions of emails and documents and texts and posts and messages that run through the grasp of Google’s AI.
Google needs a reason to use its AI, and powerful actors – here and abroad – have an interest in editing, tweaking, nudging, censoring what people are telling each other.
It may be time to go back to postal mail.
(In all seriousness: given the accelerating decline of the internet it is well worth our time to study networks of the past. I highly recommend my Techtonic interview with Lori Emerson, author of Other Networks, a book on this very topic. Stream the show / see episode page.)

P.S. If you’d like to support my work, please join the Creative Good community. You’ll get access to our members-only Forum, where we list resources, tools, games, and other gifts for the journey.
Until next time, keep creating the good –
-mark
Mark Hurst, founder, Creative Good
Email: mark@creativegood.com
Podcast/radio show: techtonic.fm
Follow me on Bluesky or Mastodon