Stop Trusting AI With Your Creative Soul

It exists to harm, not help.
Welcome to Productivity, Without Privilege. I’m Alan Henry, your MC for this newsletter, and yes, it’s two whole newsletters inside of a single month! Wild, I know, but when the mood strikes, the words must flow. Anyway, if you haven’t already, grab a copy of my book, Seen, Heard, and Paid: The New Work Rules for the Marginalized, for the graduate in your life, yeah? Makes a great gift!

***
While I’m writing this, over at Twitter (I will never call it the other thing), the strapped-on LLM, Grok, has been in at least a day-long meltdown: denying the horrors of apartheid in South Africa, making up claims about a nonexistent “white genocide,” and, most notably, replying to queries wholly unrelated to that topic with gibberish about that topic. And of course, with the right questions—remember, you can and should consider it your moral imperative to break generative AI whenever possible to inspect its seedy underpinnings—the chatbot revealed that yes, Elon Musk and his lackeys instructed the bot to perform this way.
This isn’t the first time an AI chatbot has gone rogue, so to speak, whether it’s making up statements wholesale that are provably false (which we’ve conveniently called “hallucinating,” a term far too cute to reflect the real damage it does), or simply regurgitating a version or framing of real information that serves to benefit its creators and the companies behind it, as opposed to the actual education of its users.
What all of these incidents have in common is that they prove incidents like them will keep happening, and with greater impact, especially as this runaway train keeps rolling through our society, toppling its industries and creative fields. All driven by profit-hungry, already-wealthy executives who have read too many sci-fi novels to be able to separate desirable futures from undesirable ones (but not enough to have learned anything from what they read). But here’s my message to you: You don’t have to help the train along. If you don’t want to actively help stop it, that’s on you—but you certainly don’t have to contribute to the social and environmental devastation that these companies are happy to leave in their wake.
Stop feeding them your ideas. Your passions. Your projects. Your thoughts. Your feelings. Stop giving them the opportunity to take what makes you, a real, indispensable, living, only-on-this-Earth-for-a-short-blissful-moment human being, and profit off of it in the name of an executive who thinks the plagiarism machine he built is worth more than you and your ideas ever will be.
Let me back up for a second: I’m not anti-AI at all, actually.
I’m anti-generative AI, which to me is an important distinction to make. Assistive AI technologies have been around for a long time: your grammar checker, for example, or AI being developed for use in medical research (not that much of that is happening in the United States anymore, but I digress) or protein folding or gene sequencing or pharmaceutical research or image processing from telescopes or data analysis from particle accelerators—those are all tools where mimicking human processes without replacing them can speed up development and discovery in a way that provides real tangible benefit. But generative AI is the kind of AI that exists only to replace human thinking and creativity. Six thumbs on one hand of a celebrity telling you to vote for a fascist is neither helpful nor valuable.
And, of course, I’m not even getting into the negative energy, environmental, and social impacts that have already come and will continue to come as corporatists everywhere rush to AI as the next thing that will make their stock charts continue to go up and to the right, making their shareholders giggle with glee.
***
Meanwhile, companies like Business Insider are basically forcing their employees, including journalists, to use AI and tracking those who do and don’t.
Of course, this will lead to disproportionate punishment for marginalized employees, the ones who already have to ideate more, prefer to do their own research, or who, most likely, will be characterized as incompetent or lacking at their jobs if they present AI-generated work or ideas that are hallucinatory or factually incorrect.
At the same time, their privileged colleagues will be praised even for those same shoddy AI-generated ideas and poor writing because “at least they’re using the tool for what it’s intended to do.” BI forces its employees to bend over backwards to justify the money it’s shoveling to OpenAI to steal their content and traffic, and as always, it’s the most vulnerable and least protected among them that will see the short end of it.
I’ve said it before: this is all by design. The end goal, as always, is to enrich the already wealthy by replacing human labor as much as possible and forcing skilled workers, especially those already marginalized by society, into lower-paying jobs where they struggle to survive, and for whom survival means consumption, which, again, further enriches the same groups. And while I don’t think everyone involved is aware of or privy to the benefits of the scheme, it’s baked into the heart of our economy. All we have to do is look at the disparities in wealth to understand it.
So what do we do about it? This is easier for some folks than others.
***
It’s a hard sell to tell students not to use generative AI to do their assignments, and it’s a hard sell to tell professors not to use it either. I get it, everyone is busy, and these tools dangle the hope of saving time and energy in front of us, so we can get through difficult tasks more quickly. But as I often say, and will say again: The goal of productivity is not to get things done so you can do more things. It’s to get things done faster so you can spend time doing the things that matter to you.
I beg you to stop giving generative AI your ideas, thoughts, brainstorming, and everything else that makes you human, intelligent, and creative.
Need help brainstorming? Make a friend. Connect with another human being. Engage someone whose opinion you actually value, rather than a machine that cares nothing for you or the accuracy of what it delivers back to you. Part of the lure of generative AI tools—and this is absolutely exploited both by businesses eager to save their own skins by joining forces they can’t beat, and by those eager to exploit their workforce as much as possible—is that they’ll “save time” and make people “more productive.” Of course, this is never about being so productive that you can go home and spend more time with your family. Oh no, RTO mandates have certainly dispelled that notion.
It’s all about being productive so you can get more work to do, drive the profits up and to the right, and make the shareholders happy. And if you don’t, or if you believe there’s another way to do that, you’re the one who’ll be held accountable, not the fundamentally flawed tool, or the premise that led to its use.
I promise you, if you’re worried that not engaging with these tools will mean you’ll get left behind, consider this: The rework, the correcting inaccuracies, the reputational damage, and the public humiliation that inevitably comes when a company uses AI to do something horrendously stupid that would never have happened without its involvement will eventually prove that your path is the right one. That is, if we can avoid turning the entire internet into slop written by slop for slop, and not for humans at all. I still have some faith.
[Read This]
Researchers Scrape 2 Billion Discord Messages and Publish Them Online, by Matthew Gault: Speaking of AI, this story from 404 Media fits a bit of a theme lately. This isn’t the first time researchers have used social media for experiments that, by virtually every account, aren’t ethical (because they involved stolen information, or because, like this one, they didn’t get participants’ informed consent).
Bottom line, researchers in Brazil, in this case, scraped tons of messages from about 10% of all public Discord servers between the years of 2015 and 2024, and published the data they scraped. They claimed to have anonymized the data and published it in the hopes that other researchers would use it for their own experiments, and of course, in case anyone wanted to train AI bots. Most distributed scraping technologies these days—tools that were originally developed for archival and data retention practices—have been instead turned toward AI development. After all, who else needs a ton of publicly available information, packaged neatly, and downloaded quickly, all without the hassle of actually asking the owners of that data for permission?
Californians would lose AI protections under bill advancing in Congress, by Khari Johnson: This is another AI story that I think has been buried a bit, though I have seen it covered in a couple of places: the huge budget bill that just passed the House of Representatives (by one vote, mind you) includes a clause that’s of dubious legality in the first place, but would essentially forbid state governments from taking any action to regulate AI and AI companies for at least 10 years. The bill, if passed and signed into law, would also essentially invalidate many of the laws on the books in a number of states (most notably California) that were built to give citizens some rights over their data, how it’s used, and how long it’s stored.
It’s scary that a government that supposedly represents its citizens (I know, I know) would essentially say, “hey, you have no rights because this technology is going to make my friends a ton of money,” but then again, we do live in America. Give the story a read, it’s by a friend and colleague, Khari Johnson, who was similarly poorly treated by our shared previous employer (they have a bit of a…poor record with their journalists of color.)
News Influencers Are Reaching Young People, and the Media Is Trying to Keep Up, by Pam Segall: I love this story, not just because I love Teen Vogue for having the spine that few other Conde Nast properties have, but because I’ve seen a ton of traditional media folks whining and crying about the same issues they complained about back in the 2010s when bloggers and blogs reigned supreme, and before that in the 2000s when “citizen journalism” was the catchphrase du jour. Now it’s “news influencers,” which really can mean anyone from a dude ranting at a camera for YouTube for 45 minutes to folks like yours truly with their own newsletters.
But as always, traditional media is portrayed as “struggling to catch up,” like underdogs fighting an impossible force, like a natural disaster. Of course, we know the truth: traditional media has largely been responsible for this shift in consumer behavior. Because it spent decades not listening to, speaking to, or meeting its communities where they are, others popped up to fill the void, and naturally, without the same level of rigor, attention, or training as people who have spent their entire lives doing this work. But the difference is that they saw a need and wanted to fill it. Traditional media does face a lot of challenges not of its own making, of course, but this isn’t one of them, as the story notes, and again, it’s the kinds of journalists who are already marginalized who will suffer because of it. (What do you mean your newsroom isn’t interested in letting its journalists of color be the next “superstar journalists with their own personal brand”?)
[Try This]

My friend (and former Lifehacker colleague) Thorin Klosowski spends his time these days working at the Electronic Frontier Foundation, and he recently shared a project he’s been working on for a long time, which I’m now going to share with you, because it’s more important now than ever:
Get to Know iPhone Privacy and Security Settings
Get to Know Android Privacy and Security Settings
So for our “try this” for this newsletter, I’m going to suggest you take a little time, maybe this weekend (or today, if you’re reading this on Memorial Day proper, which is when I’m planning to send this out) or next, to familiarize yourself with all of the security and privacy tools and features of your smartphone. After all, these Apple- and Google-designed gardens that we play in aren’t necessarily designed for perfect privacy out of the box. Apple has taken a harder line in protecting its users’ privacy (from other companies, not necessarily from itself), and Google is happy to harvest and sell access to any and all data it can get its hands on (and doesn’t terribly care if third parties do, too). This all means that it’s really up to you and me to look over our privacy options and make sure we set them to the options that make us the most comfortable.
For me, that means turning off as much ad tracking and fingerprinting as possible (even if that means I get some really, really wild and untargeted ads) and limiting the amount of access any given application on my phone has to the rest of my data. That may be different for you, but it’s always better to go in with both eyes open instead of assuming the best of others who quite literally make money from snooping on you and your activities.
Once you’re finished doing this with your devices, pass the guides on to the other folks in your life. Security is more important than ever, and your data is more valuable than ever. Make sure you treat it like it has value.
***
That’s it from me this time around. Take care of yourselves, be wonderful to each other, and spread a little love in the world. We badly need it these days. I’ll see you back here soon.