I did an AI
Chatbots are just friendly reverse-centaurs
This is gabestein.com: the newsletter!, which is a completely irregular note primarily focused on the intersection of culture, media, politics, and technology written by me, vitalist technologist Gabriel Stein. Sometimes there’s random silly stuff. If you’re not yet a subscriber, you can sign up here. See the archives here, and polished blog versions of the best hits at, you guessed it, gabestein.com.
I have been somewhat skeptical about generative artificial intelligence. So imagine my surprise when I found myself experimenting with OpenAI’s “custom GPTs.” If you’re not familiar, they’re chatbots: essentially base ChatGPT configured with free-text “instructions,” which act as a sort of setup prompt to guide the bot; “knowledge,” files with specific information you can upload for the bot to refer to when generating answers; and “actions,” the ability to make structured API calls to specific services.
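To make those three pieces a little more concrete, here’s a rough sketch of what the “instructions” piece amounts to, approximated with OpenAI’s plain chat completions API rather than the GPT-builder interface; the model name and prompt text are placeholders, not what my bot actually uses, and “knowledge” and “actions” would layer file retrieval and external API calls on top of the same pattern.

```python
# Rough analogy: a custom GPT's "instructions" behave like a standing
# system prompt that frames every conversation with the bot.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

INSTRUCTIONS = (
    "You are a behavioral-psychology coach. When the user describes a "
    "purchase they are considering, ask what need it serves before "
    "offering any judgment."
)

def ask(question: str) -> str:
    """Send one user question, framed by the standing instructions."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Should I buy the Leaf Twig razor?"))
```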
The GPT I built acts as a sort of behavioral psychologist to help you think through purchasing decisions more rationally. It’s based on behavioral finance research I’ve collected and a series of conversations about money some friends and I have been having over many years. (Side note: if you’re interested in being a part of those conversations, let me know). You can play with the bot here, if you want. But let’s not get distracted by the psychology of money, fascinating as it is.
What I came here to say is: these chatbots are somehow still pretty bad products?
Good products create value. When I say chatbots are not good products, I mean it in the very technical sense that I don’t think they are capable of meeting user needs well enough to create value. At least, not on their own. (Have I mentioned I am a capital-PM Product Manager?)
I’ve been struggling to articulate exactly why, so let me start with an example. My, er, GPT produced some interesting, even surprising, results. For example, I’ve been fixating on buying a fancy safety razor hilariously called the Leaf Twig for a while without really knowing why. When I asked my GPT to help me think about whether I should buy it, it drew a line from the fulfillment I get from connecting with people to a desire to be presentable so I can make those connections. That’s a pretty interesting insight I had never considered before!
But is that kind of insight valuable? Only if you, the user, can do something with it. And therein lies the value problem. Generally, good products create value by doing the hard things you’re bad at on your behalf. These bots often ask you to do the hard work of converting their (maybe totally wrong!) advice into valuable action.
They are, effectively, what Cory Doctorow has termed “reverse-centaurs,” irrational AI brains directing unreliable humans, instead of rational human brains directing perfectly reliable AIs. As he points out, it’s not just chatbots that have this problem. Reverse-centaurism — from AI surveillance tools forcing warehouse workers to meet superhuman quotas to AI code generators requiring coders to be perfectly vigilant in catching their subtle errors — is pretty much the entire AI business model.
That’s not to say there’s no value to be extracted from these bots. But it requires the product developer to carefully scaffold them in tools that flip the reverse-centaur script: train the user to think of the bots as trainers, nudgers, reviewers, and brainstormers rather than problem-solvers, while constraining the AI from producing inaccurate results or going off the rails.
In other words, AI product development itself is a kind of reverse-reverse-centaurism. Any value a developer might extract from one of these bots will come not from the bot itself, but from how the developer facilitates the ongoing interaction between bot and user to keep the human in control.
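To give a flavor of what that facilitation could look like, here’s one hypothetical pattern (not how my GPT works, and the function names are made up): the bot only ever drafts a suggestion, and nothing changes until the human explicitly accepts, edits, or rejects it.

```python
# Hypothetical human-in-control loop: the bot drafts, the person decides.
def scaffolded_step(draft_suggestion, apply_decision):
    """draft_suggestion() -> str and apply_decision(str) -> None are
    stand-ins for whatever the product does with the model's output."""
    suggestion = draft_suggestion()
    print(f"Bot suggests: {suggestion}")
    choice = input("[a]ccept / [e]dit / [r]eject? ").strip().lower()
    if choice == "a":
        apply_decision(suggestion)
    elif choice == "e":
        apply_decision(input("Your version: "))
    else:
        print("Rejected. Nothing happens until you say so.")
```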
Sounds fun?
But back to politics really quick. Right after I sent my last newsletter about resisting the urge to assign causality to the election result, I came across a Washington Post post-mortem wherein the leading progressive SuperPAC reveals it is assigning causality to…literally the thing they should have been most prepared for:
“There is plenty of blame to go around for another election cycle riddled with misinformation online,” Priorities USA executive director Danielle Butterfield said in a statement. “Big Tech is still unwilling to hold bad actors accountable, Congress is unwilling to step in and write new rules for the 21st century, and Republicans will continue to slander and lie to voters to make their case. Because of all of this, Democrats lose, and we need to acknowledge this reality and figure out new ways to communicate with voters on today’s internet.”
Imagine saying that out loud, much less to a reporter, in 2024. I can’t.
Have a great weekend.
Gabe