We often talk about technology as if it were just a tool, as if there were nothing inherently good or bad about it. The good or bad, we are fond of saying, comes from the use case. I used to believe this myself, but I have changed my mind in recent years. Technologies do come with an inherent ethic of their own, and there is a limit to how much our use of them can change that.
In his book The Shallows, Nicholas Carr writes about how the ethic of a technology exerts influence over its user. A gardening implement, for example, turns its user into an extension of itself. A man holding a sword can’t do anything other than what a sword requires him to do. He can choose not to do it, but the sword is only an instrument to kill and cut, and while the man is holding it, that is the only purpose he can serve. I wrote recently about how even something like a newspaper changes the person reading it. While reading it, you can’t do much else. It takes control of your ability to do things and keeps that control until you let go of it.
I feel the discourse on AI needs to be looked at in a similar light. Many people who defend its rampant use everywhere seem to be under the impression that it has no default nature and that everything depends on how we use it. I do think intention plays some role, but it is also clear to me that intentions can’t cross the barrier created by the inherent ethic of the AI tool.
What is it that we do when we use it? What is a chatbot really?
It’s a tool that generates strings of text or image patterns in response to a prompt. These strings cannot be controlled; changes can only be suggested. The text generated by the chatbot has only one purpose — to seem authentic. The chatbot doesn’t care about being authentic. It doesn’t care about truth. It doesn’t care about the harm or good that it does. It can’t care about anything.
It is only designed to have the appearance of authenticity. Like any other technology, its ethic turns us, its users, into extensions of it. When we use it, we become people who don’t care about authenticity and accuracy. We become people whose only concern is observing the appearance of authenticity and being satisfied. We are becoming okay with bad art because AI art looks ‘good enough’. We are becoming okay with bad writing because AI writing is ‘good enough’. We are getting comfortable with false or distorted information because AI-generated answers look authentic enough.
We are becoming willing participants in the enterprise of devaluing merit.
In our quest to imagine a future where everything is automated, we have let go of the expectation that perhaps not everything should be automated. Art should not be automated; education should not be automated; governance and justice should not be automated. These are eminently human concerns, and their pursuit cannot stop at ‘good enough’. It has to tend towards excellence.
It is not as if advocates of AI-based ‘creativity’ have given up on dreams of excellence and wholeheartedly embraced the kind of mediocrity that has come to define generative AI and the slop it ‘makes’. But their commitment to excellence exists somewhere in a vague utopian future. They keep telling us to ‘wait for it to get better’ as if improvement were a certainty. I think it was Gary Marcus who pointed out that such optimism is like watching a baby double in size over a few short years and concluding that this doubling will go on indefinitely. Indeed, with some of these advocates, the commitment seems to border on the religious. I find myself unable to think of their justifications as fundamentally different from those of a believer who is eager to show himself as inferior to his gods.