The Society for Humanity
17. I, Robot by Isaac Asimov. It feels like I’ve been reading this book forever. Surely it’s past due. I picked it up at the library after watching the Will Smith adaptation with the kids. Decent movie, overall. Gives you what you want in a Will Smith movie. Maybe weird that they made it a cop movie. Partly I was hoping to get my thirteen-year-old to read it. He hasn’t shown much interest, but at least he’s dipped into Hitchhiker’s Guide to the Galaxy a little bit. The other reason I wanted to read this again, after about twenty years, is with everyone everywhere being AI this and AI that—eh, I think you should jump off a bridge—with commercials where the dad has AI tell his kid a bedtime story (I don’t say this lightly, but if that is you, bud, you need Jesus, not a smartphone), with teachers using AI to make assignments, with the government using AI to— Someone, not any of my regular readers, not anyone from my milieu, but someone, might read this and accuse me, because of the em dashes, of using AI, and again, if that’s you, find Jesus.
I wanted to read I, Robot again because with all this artificial so-called intelligence being foisted upon us from all directions it’s been on my mind that no one is training these large language models with Asimov’s Three Laws of Robotics built in. We’re maybe not at the point of AI robots, but surely that is someone’s very lucrative goal. The three laws: if you’ve read the book you’ll be able to recite them from memory, or you can be lazy like me and copy them from Wikipedia:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
An obvious reason for the somewhat tiresome repetition of the laws, either word for word or painstakingly paraphrased, throughout the book is that I, Robot is a collection of stories framed into a novel. The repetition has another purpose, though, intended or not. The rules are imprinted on the reader in much the same way as on the robots. Dr. Susan Calvin, a robopsychologist more comfortable with robots than people, makes this point for us. The three laws, she says, largely coincide with characteristics of basic human decency, although I would obviously disagree when it comes to rule 2. In Asimov’s time, as in our own, humans do tend to need those laws hammered into our heads.
Asimov’s robots undergo fatal nervous breakdowns if they find themselves in an ethical dilemma, one in which a human could possibly be harmed, or in which the laws conflict with each other; the robots we’ll end up with will be prepared to grind us into dust. Whatever beneficent uses we find for AI, for robots, for “sentient” machines, will not outweigh the harm they do when they are entrusted with the government’s one-sided right to use violence. This will happen. They won’t have a moral code seared into their brains that will stop them from hurting humans. This is not to say robots will overthrow humanity, although why wouldn’t they, only to say that they will not get worked up about harming us.
I, Robot takes us, mostly through the eyes of Susan Calvin, whose memories and stories are tidied up and sometimes enhanced by the narrator, a journalist for the Interplanetary Press, from the early days of non-speaking robots through the decades until, in her old age, humanity is living in the early days of a benevolent robot dictatorship. Someone like Elon Musk would read this book and see in its Society for Humanity parallels to today’s AI antagonists, neo-Luddites, and tech skeptics. I would classify myself as any or all of those. In the book, this group, this Society for Humanity, consists of people who take pride—some of them a little too much—in being able to do things for themselves, in having influence and initiative, and they try little acts of sabotage, which the Machines know about and correct for. Here’s Dr. Calvin, sleuthing this all out with Stephen Byerley, who might be a humanoid robot himself, and who is also the World Coordinator:
“But you are telling me, Susan, that the ‘Society for Humanity’ is right; and that Mankind has lost its own say in its future.”
“It never had any, really. It was always at the mercy of economic and sociological forces it did not understand—at the whims of climate, and the fortunes of war. Now Machines understand them; and no one can stop them, since the Machines will deal with them as they are dealing with the Society,—having, as they do, the greatest of weapons at their disposal, the absolute control of our economy.”
“How horrible!”
“Perhaps how wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable.”
How horrible indeed. Leaving aside for the moment any finer arguments about free will, autonomy, liberty, our Machines are not as smart as Asimov’s fictional ones and likely never will be. The roboticists in I, Robot feed their machines only the best data, while ours are fed on—among other sources, it’s true, like pilfered texts—the internet, filtered through the biases and inanities of their builders. Just today, Musk’s AI bot is Heil Hitlering people on twitter*.
There’s a cold logic to machine rule that is quite seductive. Humans are unpredictable and illogical and unreliable. Humans lie. They lie to themselves. They make stupid choices and engage in self-destruction. They steal. They start wars. Yet I wouldn’t want to live in I, Robot’s benevolent robot dictatorship. I don’t think Asimov, former president of the American Humanist Association, would have either. Humans are flawed, no doubt, capable of the most terrible evils. They also love and feel. They do beautiful things for stupid reasons and stupid things for beautiful reasons. They make art. They make music. They make life interesting. Machine life would be unbearable not only because we would miss agency or freedom but because robots are incapable of magic.
Machines are only inevitable if we accept their inevitability. We don’t have to accept being foisted upon. We can reject the marketing pitches of deranged tech CEOs. “Students need AI skills to compete in the job market” is the new “learn to code,” and we can all simply stop accepting the premises of greedy people who think they’re smarter than us and have only their wealth in mind, not the well-being of students, not truth, not democracy, not humanity. We don’t have to accept the machine world they’re building to cage us in, a world in which we will be dependent on their profitable technology in order to navigate our daily lives, but we have to choose independence and freedom. My fear is we will hand over humanity to a slew of Dunning-Kruger machines, and reading this book, a good book, humorous and thoughtful and fun, has made me feel even worse about both the present and the future.
*July 8, 2025. This column is coming out a few days after its initial composition.