Confluence: Ethics and Attention
I recently finished Tim Hwang’s mercifully slim Subprime Attention Crisis. Its argument is that through blocking software, click fraud, and overall debasement of the advertising inventory, the internet advertising industry and all those who depend on it for income are due for a…correction. I don’t know enough to adjudicate on the argument itself✱, but I found myself deeply drawn into the context-setting discussion around programmatic advertising and the packaging of attention.
✱ It would be interesting to give ad inventory and pricing data the Didier Sornette treatment to see if one could detect superexponential growth.
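To make that aside slightly more concrete, here is a minimal sketch of one crude diagnostic in the Sornette spirit: exponential growth is linear in log-price, so a quadratic term in a fit of log-price against time that sits meaningfully above zero hints at superexponential growth. The price series below is entirely synthetic, standing in for the ad-inventory pricing data I do not have.

```python
# Crude check for superexponential (faster-than-exponential) growth:
# exponential growth is linear in log-price, so a positive quadratic
# coefficient in log-price vs. time hints at something faster.
# The series here is synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120.0)  # e.g. 120 months of pricing observations
log_price = 0.01 * t + 0.0004 * t**2 + rng.normal(0, 0.05, t.size)

# Fit log(price) = a + b*t + c*t^2; c meaningfully above zero suggests
# superexponential growth rather than a constant growth rate.
c, b, a = np.polyfit(t, log_price, 2)
residual_std = (log_price - np.polyval([c, b, a], t)).std()
print(f"quadratic coefficient c = {c:.6f} (residual std {residual_std:.4f})")
```

This is of course nowhere near the full log-periodic power-law machinery; it is only meant to gesture at what "detecting superexponential growth" would involve.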
I am keenly interested in the notion of normalizing things, because that is a lot of what I do: I systematize processes, and that always means normalizing the objects those processes operate on, so they can be handled in bulk. Normalization begets systematization, and, later on, commoditization and finally securitization (and therefore programmatic exchange). That is what happened to physical goods, to housing, and now to advertising inventory as a proxy for attention. Hwang’s provocation is that these units may be worth much less than what people are paying for them.
The other thing that’s been on my mind is a conversation I had around Google’s recent summary dismissal of Dr. Timnit Gebru, ostensibly for coauthoring a paper that was something of a roundup of ways certain artificial intelligences, and by extension her erstwhile employer, could be bad. My interlocutor said something to the effect of “AI ethics is a fake field and Google is dumb for entertaining it”, and I gave myself the exercise of contorting my thinking until I could see some way in which this take had substance.
A few weeks prior to this event, I had participated in something of a tech ethics salon organized by Cennydd Bowles, in which I wondered aloud whether companies can be trusted to have their own internal ethics committees whose function is more than a public relations prop. The question has been percolating in my head for some time, and I think I have an answer: it depends, namely on the scope of the ethics body’s mandate. Publications have ombudspeople—the New York Times infamously eliminated theirs in 2017—but that role is mainly to police the integrity of individual pieces of content. Universities likewise have ethics committees, but those govern the compliance of individual experiments. In neither scenario does the organization risk business-critical losses by permitting these roles to operate without interference. Indeed, they improve the aggregate quality of the product and/or the prestige of the organization. The problem of ethics in artificial intelligence, or rather “tech” at large, is that so many such companies’ business models are fundamentally unethical. Permitting competent and well-resourced ethics people to do their job would mean leaving so much money on the table that a good chunk of those businesses would have to close up shop.
This brings me back to attention, and the systematization of processes. The reason you systematize a process is so you can “scale” it, which is another way of saying you no longer have to pay people to attend to it. Then you can run that process as many times as your computing hardware will let you, squeezing a few pennies out of every run. This is part of what makes tech companies so profitable (the other part being low variable costs): every burp and fart is metered, and everything that can be counted can be monetized one way or another.
My own work generally focuses on scaling in the opposite direction: trying to take cumbersome, error-prone, and usually already fairly systematic processes off the backs of people who are already being paid to do something else, and shrink them to a point.
The role of artificial intelligence—or rather, machine learning, to satisfy the pedants—is to make decisions that could otherwise only reliably be made by a human, assuming the human could even muster the energy. (The systems Gebru was critiquing had the additional dimension of being able to fool or divert people with generated content, but what is writing besides a large number of decisions stacked end to end?) Then, as with all other things computerish, the frequency of those decisions is scaled up a zillionfold. This introduction of statistical methods into systems that were once completely deterministic means that sometimes these decisions will be wrong. When said decisions are wrong—or even sometimes when they’re “right”—people are harmed, and women and racialized populations are predictably overrepresented among those harmed, in both the frequency and the severity of the harm.
The sense in which “AI ethics” is a “fake field”, I entertain, is that the harms it concerns itself with are harms of systematization in general—that is, of deliberately draining human care and attention from a process. Eliminating attention also muddies accountability; indeed, that is the hallmark of systemic harm: redress is at best costly, if it isn’t impossible. AI is just the latest mechanism. Heck, you don’t even need computers; people have been systematizing processes forever. Systematizing a process is saying “it’s too resource-intensive to do manually”, either in the sense that you can’t afford it, or in the sense that it isn’t worth doing. And sometimes it really isn’t worth doing manually! But I think we have to be honest that when we are targeting “efficiency”, eliminating attention and care from a process is precisely what we are trying to do. That is a much, much larger conversation than artificial intelligence, or even software.
I further submit that the reported lack of diversity in the teams that develop these systems, the training sets they create, and the applications of their results—to the extent that deliberate invidious behaviour can be ruled out—is another manifestation of being stingy with attention. All computer systems project the biases of their creators; what you leave out will be represented at the same magnification factor as what you put in.
Nevertheless, there do exist ethical issues peculiar to artificial intelligence, and they have to do with what goes into those systems—and what comes out of them, irrespective of what went in. The ability to scale decision-making is definitely new. The opacity inherent in the technical details of how those decisions are made will definitely be exploited. The question is whether we can trust the likes of Google to fund an internal ethics committee that does meaningful work without interference, and I believe we have a data point for no.
It may be viable to have a kind of “training data ombudsperson” on the payroll, but that position would necessarily have to concede all the bigger stuff I said about systematization, to say nothing of surveillance-based advertising.
Here we can turn back to Hwang, who concludes his book (sorry, spoilers—can you even spoiler a nonfiction book?) by suggesting the creation of an entity analogous to the National Bureau of Economic Research, but for advertising. Maybe we also need one for AI. Or maybe we just need to listen to Dr. Gebru. There is also the Mols Sauter approach:
“Ethics? What a funny way to pronounce ‘meaningful and enforceable regulation’!”
At any rate, any narrow discussion of ethics in artificial intelligence would benefit from looping in the larger conversation about the role of systematization in our society, and about which processes in our lives, businesses, and communities actually do merit attention and care.