
AI Week

September 29, 2025

AI Week Sep 28: AI Safety vs the Antichrist


Hi! Welcome to this week's AI Week. This week's focus is on AI/ML applications that have been in the news, from the weird and wonderful to the woefully awful, one of which is sufficiently awful to get kicked to the AI safety department.

In this week's AI week:

  • How we're applying AI/ML: Workslop, AI-generated viruses, Medicare, missing research, Zombie Charlie Kirk and "Woman Shot A.I."
  • AI Safety: Violent, inciting content
  • Comic: AI chatbots and cognitive decline
  • Longread: Love in the time of AI

But first: Multibillionaire Peter Thiel says that if you're in favour of AI regulation, you're hastening the arrival of the Antichrist.

He spilled Peter Thiel’s Antichrist secrets. Now he’s banned from the lectures

The off-the-record lecture series has been shrouded in mystery. But notes leaked by tech worker Kshitij Kulkarni reveal details.

Thiel allegedly argues that because we are increasingly concerned about existential threats, the time is ripe for the Antichrist to rise to power, promising peace and safety by strangling technological progress with regulation. Thiel has previously suggested (seriously) that Greta Thunberg could be the Antichrist, but attendees last week didn’t recall her name coming up.

Good to know Greta Thunberg's off the hook. Other Antichrist candidates nominated by American Christians have included Hillary Clinton, Osama bin Laden, and multiple presidents.


How we're applying AI/ML this week

Wasteful: Workslop

"AI Slop" has become the term for low-quality content created with AI. It's content produced for the sake of having content. It's what happens when the ethos of "ChatGPT, write a five-page essay on symbolism in A Midsummer Night's Dream" escapes out of the classroom and onto the internet, landing as entirely-AI-generated webpages, bizarre Facebook posts, AI-generated bands and fake AI slop obituaries.

It's a tremendous waste of everyone's time, attention and resources. And of course it's in the workplace.

AI-Generated “Workslop” Is Destroying Productivity

Despite a surge in generative AI use across workplaces, most companies are seeing little measurable ROI. One possible reason is that AI tools are being used to produce "workslop"—content that appears polished but lacks real substance, offloading cognitive labor onto coworkers. Research from BetterUp Labs and Stanford found that 41% of workers have encountered such AI-generated output, costing nearly two hours of rework per instance and creating downstream productivity, trust, and collaboration problems.

As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work.

Has someone wasted your time at work with AI-generated workslop? Let me know in the comments.

Interesting: AI-generated viruses

Researchers at Stanford trained a large language model on viral genomes instead of on language, and got it to spit out genomes for bacteriophage viruses (which can only infect bacteria). They then printed these DNA strands, which as a non-biologist I can say is impressive all by itself, and found that some of them actually did infect bacteria.

Stanford and Arc Institute scientists used AI to design new viruses that killed bacteria in the lab

A research team in California has used artificial intelligence to design working viruses that kill bacteria, in what they describe as the "first generative design of complete genomes." The project marks an early step toward AI-designed life forms, according to a report in MIT Technology Review.
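
Under the hood, the idea is to treat DNA as just another token stream: instead of predicting the next word, the model predicts the next nucleotide, and sampling from it yields whole candidate genomes. The sketch below is only a toy illustration of that idea (a tiny k-mer Markov chain over made-up sequences), not the team's actual genomic language model, which was trained on large numbers of real phage genomes and whose outputs were then synthesized and tested in the lab.

```python
# Toy illustration only: treat a genome as a string of nucleotide "tokens"
# and learn a generative model over them. This is NOT the researchers' model;
# it's a minimal k-mer Markov chain over invented example sequences.
import random
from collections import defaultdict, Counter

K = 4  # context length in nucleotides (hypothetical choice)

# Stand-in training data; the real work used many complete phage genomes.
training_genomes = [
    "ATGACCATTGGCGCTAAAGGTCTGACCGATTAA",
    "ATGGCTAGCAAAGGAGAAGAACTTTTCACTGGA",
]

# Count which nucleotide tends to follow each k-mer context.
counts = defaultdict(Counter)
for genome in training_genomes:
    for i in range(len(genome) - K):
        counts[genome[i:i + K]][genome[i + K]] += 1

def sample_sequence(length=60, seed="ATGA"):
    """Autoregressively sample a new sequence, one nucleotide at a time."""
    seq = seed
    while len(seq) < length:
        options = counts.get(seq[-K:])
        if not options:  # unseen context: fall back to a uniform choice
            seq += random.choice("ACGT")
        else:
            nucleotides, weights = zip(*options.items())
            seq += random.choices(nucleotides, weights=weights)[0]
    return seq

print(sample_sequence())
```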

Worrisome: AI Medicare denial bot

https://www.morganlewis.com/pubs/2025/07/cms-is-getting-wiser-about-medicare-waste-but-at-what-cost-to-providers

I'm glad to see that human supervision is in the mix:

while AI technology will support the review process, all final denials must be reviewed and approved by a licensed clinician.

However, I've heard enough horror stories about "licensed clinicians" at HMOs denying claims in areas outside their expertise to be skeptical.

Depressing: How much research are we missing?

The current US administration has cancelled nearly two billion dollars in health research funding. We'll never know what science we're missing out on because of cancelled funding, but Nature staff wanted to give us a sense of the scale and impact. They trained a machine-learning algorithm on grants cancelled by the Trump administration, used that to pretend-cancel grants from a decade ago, and looked at the impact those cancellations would've had.

What research might be lost after the NIH’s cuts? Nature trained a bot to find out

We used machine-learning tools in an attempt to recreate the method for cutting funding, and then applied it to past US National Institutes of Health grants to reveal the broad-reaching consequences of such action.
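
Nature doesn't spell out its pipeline in the summary above, so the sketch below is only an assumption about the general shape of the method: fit a simple text classifier on grants that actually were cancelled versus kept, then run it over decade-old grants to see which ones a similar purge would have swept up. The grant titles here are invented placeholders, not real NIH awards.

```python
# Rough sketch under stated assumptions; not Nature's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Recent grants, labelled 1 if cancelled by the administration, 0 if kept.
recent_abstracts = [
    "Health disparities in rural transgender populations",    # cancelled
    "Vaccine hesitancy messaging among minority communities",  # cancelled
    "Protein folding dynamics in cardiac tissue",               # kept
    "Novel antibiotics against drug-resistant tuberculosis",    # kept
]
labels = [1, 1, 0, 0]

# Learn which wording patterns predict cancellation.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(recent_abstracts, labels)

# Apply the trained model to decade-old grants to "pretend-cancel" them.
old_abstracts = [
    "HIV prevention outreach for LGBTQ youth",
    "Genome-wide association study of type 2 diabetes",
]
for abstract, would_cut in zip(old_abstracts, model.predict(old_abstracts)):
    print("WOULD BE CUT" if would_cut else "kept", "-", abstract)
```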

Unscrupulous: Zombie Charlie Kirk

At least one church is using AI to put words in Charlie Kirk's mouth. (If you don't know who Charlie Kirk is, I envy you that, but the tl;dr is that he was an American activist, commentator, and victim of gun violence.)

Just Plain Bad: AI-generated violence against women

404 Media exposed a YouTube channel of AI-generated videos of women getting shot. The channel, subtly named "Woman Shot A.I," had nothing but AI-generated videos of women begging for their lives before being shot by men.

AI-Generated YouTube Channel Uploaded Nothing But Videos of Women Being Shot

YouTube removed a channel that posted nothing but graphic Veo-generated videos of women being shot after 404 Media reached out for comment.

YouTube took the channel down after 404 Media reported on it.

AI Safety: Violent, inciting content

One video of a woman being shot in the head, in the context of a story? Art. A channel of nothing but videos of women being shot? That's hate speech. That's incitement to violence against women.

The videos were generated with Google Veo. Generating violent, inciting content with AI is one of the risks called out by the US National Institute of Standards and Technology (NIST) (https://doi.org/10.6028/NIST.AI.600-1) in the AI risk profile they put together under the Biden administration.

(Note: If AI Safety interests you, you may want to download NIST's pre-Trump AI Risk Management Framework now, as this administration plans to rewrite it.)

When Google Veo 3 was released this spring, reviewers called it a startling leap in realism, and warned that it meant the end of being able to believe what you see in online videos. This level of realism has inherent safety issues: propaganda, fake videos of famous people, putting words in politicians' mouths, etc. But allowing the creation of a channel's worth of violent, inciting content is in another category altogether, and it's a clear failure of Veo's guardrails.

Users aren't supposed to be able to use Veo for harmful content. Some quotes from Veo's homepage:

  • We built Veo with responsibility and safety in mind. We block harmful requests and results...
  • It's crucial to introduce technologies such as Veo in a responsible way...
  • Veo outputs will undergo safety evaluations...

This isn't the first time that Google Veo has been used to make hate speech easier, either. This summer, Media Matters reported on racist Veo-generated videos going viral on TikTok:

(Content warning: This is a compilation of racist AI slop. Really not worth 2:22 of your time.)

It's clear that whatever Google has in place to keep Veo from being abused is wholly inadequate. Currently, there's no American AI regulation forcing Google to do better, which I guess is fine because, you know, wouldn't want to hasten the Antichrist.

If you'd like to try Google Veo for yourself, it's currently free for new users for one month at labs.google. It's on you not to use it for evil, though, because Google isn't on the job there.


Something fun: Comic

Madeline Horwath on AI chatbots and cognitive decline – cartoon | The Guardian

Every thought and action is sacred. Or rather, they were before AI


Longread: Love in the time of AI

ChatGPT is having an impact on online dating. People are using it to write dating profiles, draft opening lines, even to respond to messages.

One user said, “I’m gonna be honest with you: Once you use ChatGPT, you don’t want to think for yourself anymore.”

Sherry Turkle, a sociologist at MIT who has been researching technology’s effect on interpersonal relationships for decades, was struck by how anecdotes that seemed extreme or shameful when she first began her research are now “like, so what?” In the past few years, she said, there’s been a “real change in people’s willingness to say, ‘I am simply going to delegate my most intimate’” communications. She has found that people have come to think of themselves almost as cyborgs. “It’s you plus your chatbot,” she said. “People felt that they couldn’t engage in kind of text conversations as ‘just themselves,’” an expression that came up over and over again in her interviews. “People clearly feel that the ‘just myself’ category is off the table.”

What happens when two ChatGPT users meet online? An endless loop of bot-to-bot flirting:

One [man] had used AI to send clever messages to women, thinking he was getting to know them, but instead got mired trading endless witticisms back and forth.

https://www.thecut.com/article/ai-is-making-online-dating-even-worse.html

Related: AI-fuelled divorces

ChatGPT Is Blowing Up Marriages as It Goads Spouses Into Divorce

Across the world, marriages are being destroyed as spouses use AI like OpenAI's ChatGPT to attack their partners.


That's it for this week's AI week! Thanks for reading. As always, comments are open on the website.
