Protecting kids from Big Tech
Mothers Against Media Addiction fights the predatory tech giants.
(Before I start, reminder: early-bird tickets to my Gel 2026 conference are available through May 20. Sign up soon!)
- -
I interviewed Julie Scelfo on Techtonic recently. She’s the founder of Mothers Against Media Addiction (MAMA), a nonprofit leading a grassroots effort to do something about the addictive devices permeating kids’ lives.
→ Listen to the interview: episode page / podcast
Julie says she patterned the group after Mothers Against Drunk Driving (MADD), another advocacy group taking on an important issue. Media addiction – as brought about by social media feeds and the Big Tech surveillance devices they run on – is just as urgent.
For the first time ever on a Techtonic episode, I started with a content warning. I felt it was necessary because the interview touched on child suicide, a tragedy that has occurred far too many times (and once is too many) as a direct result of the growth-at-any-cost ethos of Big Tech.
I would argue that the life of a child, or teenager, is too high a cost to justify a bit of growth for a predatory monopoly. But Mark Zuckerberg would disagree, judging from leaked documents from inside Meta/Facebook. Whistleblowers like Frances Haugen and Arturo Bejar have reported that leadership within Facebook and Instagram is well aware of the harms caused by their exploitative user experience. They just won’t make any fix that could compromise the growth of their service.
John Oliver’s recent show on chatbots made this point. Quoting the Guardian’s show recap:
“These chatbots blew past every red flag possible, and it’s not like these users were being coy about their intentions,” Oliver fumed. “Which is what makes it so enraging to see OpenAI’s Sam Altman blithely talk about ChatGPT’s interactions with kids.”
Speaking on an OpenAI podcast, Altman conceded that “there will be problems, people will develop these somewhat problematic or very problematic parasocial relationships, and society will have to figure out new guardrails . . . but society in general is good at figuring out how to mitigate the downsides.”
“Yeah, don’t worry, guys!” Oliver joked. “Sam Altman made a dangerous suicide bot that people are leaving alone with their kids but it’s up to us to figure out how to make it safe for him!”
The good news is that Julie Scelfo and MAMA – and other organizations – are pushing back against the murderous aims of the oligarchs.
What’s more, MAMA has published resources online to help parents and other family members create a healthier environment at home, protecting against the predations of the Silicon Valley monopolies.
MAMA’s house rules PDF, for example, offers suggestions like these:
In the morning, we wait until after we’ve fully woken up, brushed our teeth, and eaten breakfast before we check our devices.
When we talk to one another, we never check our phones mid-conversation.
On weeknights, we power down devices starting at 8pm for teens and 9pm for adults and store them in a designated space outside of anyone’s bedroom.
Another MAMA resource is 8 questions to ask about AI in schools, also a PDF, and especially pertinent given the spread of AI into education today. Here are a few of the questions:
Will parents be allowed to opt-out their child from using AI so the student can continue to build foundational critical thinking skills?
With numerous lawsuits pending against AI companies for training their models on copyrighted information, how is the school teaching students about authorship, plagiarism, and the importance of the school’s honor code?
Can the school point to any science-based evidence that the AI product being introduced has a positive impact on learning? In which subjects and at which grade-levels?
This isn’t to suggest that we should never use AI. Like any technology, there are instances of appropriate use. The Luddites had a good approach to this, as Amanda Hanna-McLeer likes to point out: they opposed the abuse, not the use, of technology. (And they still do, as the Luddites – myself among them – live on.)
The “abuse” is especially prevalent in Big Tech platforms directed at kids. Young users are the perfect target, as they’re vulnerable and, if properly addicted, can serve as long-term sources of revenue and data.
Protecting kids from Big Tech means taking on the companies: fighting them directly and agitating for real penalties (I’d argue for criminal penalties and prison time) for predatory leadership. An essay in the Economist, Stop big tech from making users behave in ways they don’t want to (by Marie Potel-Saville, May 2, 2026), makes the case well:
The burden of proof [of a platform’s safety to users] should fall on the platform, not the victim. The question is . . . whether the company can show, before rolling a product out to billions of people, that it is not predatory by design. . . .
[T]his is the standard we apply to drugs, to medical devices and to aircraft. Why should it not also apply to systems engineered to rewire the brain’s reward architecture?
I’ll ask the rhetorical question another way: if we would take action to protect kids from offline predators, why wouldn’t we fight twice as hard when the predators run companies worth trillions of dollars?
- -
→ To show your support for my writing, and to comment on this column, join the Creative Good community.
→ And one more reminder: early-bird tickets to Gel 2026 are available through May 20. Sign up soon, before the price jumps.
-mark
Mark Hurst, founder, Creative Good
Email: mark@creativegood.com
Podcast/radio show: techtonic.fm
Follow me on Bluesky or Mastodon