Ethics Outsourcing: A Further AI Danger
“Generative AI doesn’t lie; it outputs incorrect information. It was you, the human being, who lied, when you then passed that information off as truth.”
There are many obvious ethical issues with Generative AI. I won’t go into them here, except to point out that since I’m both a writer and a member of the Green Party, you can probably figure out where I stand on those issues.
But I want to talk about a further issue, which is perhaps a little less discussed, and which I term “ethics outsourcing” (not to be confused with ethical outsourcing).
Ethics outsourcing is the delegation of one’s intrinsic human responsibility to behave ethically to a Generative AI, as though it were a fellow human being with the same intrinsic responsibility, disregarding the fact that it is not a human being and has no such responsibility at all.
Let me give you an example.
Say it’s the old days (i.e. before Generative AI) and you work for an insurance company answering queries from customers. You have a moral responsibility to give those customers answers that you can reasonably believe to be correct (within the confines of company policy, of course, which is a separate discussion).
A customer contacts you asking if their policy would cover them if a certain eventuality occurred. You don’t know the answer.
You could consult the internal company documentation, or ask a more senior colleague. But you’re lazy, and you can’t be bothered to follow either of those approaches, so you just decide to tell the customer that they’re covered, even though you have no idea if that’s a truthful answer or not.
Would you do that? I’d hope not. (And if the answer is that you would, then I think that, bad as our capitalistic hellscape is, you’re still a bit of a shit, because those customers you’re fucking over are real human beings.) And of course, if you did do that, and then the eventuality happened, and it turned out that your guess had been wrong, and there was a clear documentary trail pointing at you, you would expect to face disciplinary action, and rightfully so.
(Conversely, if you’d consulted the more senior colleague, and it turned out that they’d given you an answer that was itself a lazy, incorrect guess, that would, or at least should, be on them, not you.)
But that was the old days.
So let’s rerun the scenario for our modern era.
A customer contacts you asking if their policy would cover them if a certain eventuality occurred. You don’t know the answer.
You could consult the internal company documentation, or ask a more senior colleague. But you’re lazy, and you can’t be bothered to follow either of those approaches, so you just decide to feed the question into ChatGPT or Copilot or whatever, and send whatever it spits out to the customer, even though you have no idea if it’s a truthful answer or not.
Who bears the moral responsibility if it does turn out to be an incorrect answer?
Well, assuming you’re not supposed to be using AI, then I strongly believe that’s on you. On the other hand, if your company, in pursuit of reduced head counts and increased profits, has told you to use AI, then the blame should fall on whoever made that decision. But either way, the blame isn’t on the AI. You can’t blame an AI.
Note: I say “should” above, because they’ll probably have some wording that says you’re supposed to validate the answers coming out of the AI, which ignores the fact that if you have to manually check every answer, many of the supposed productivity gains disappear, leaving a smaller number of workers to do essentially the same amount of work. (There is very much a separate discussion to be had about the misuse of AI by employers.)
(You could well argue that while Generative AIs / LLMs aren’t to blame, the AI companies who’ve created them should take a share of the blame, and you’re right, they should, but that still doesn’t absolve you, a human being who used an AI, of responsibility for what you did. Now you could argue that people have been lied to about AI, and again you’re right, but that still doesn’t completely absolve them of blame, in much the same way that people who vote for lying demagogues need to take some responsibility for their votes, and not just whine that they were lied to.)
To get back to the subject, this isn’t new. In 1979, IBM stated: “A computer can never be held accountable, therefore a computer must never make a management decision.”
What’s new is that we as a society have collectively decided that IBM was wrong, and that computers can make decisions despite the fact that they cannot be held accountable.
The blame for that is shared, but it doesn’t alter the fact that wherever that blame ends up, it will be with human beings, not Generative AIs.
There are many examples of human beings making unethical, AI-enabled decisions. The Air Canada website chatbot that invented an entirely fictional bereavement policy (in that case, Air Canada tried to blame the chatbot, and the judge basically ripped them a new one). The AI-created travel website that listed a non-existent Dublin Halloween parade (to which thousands turned up). The newspaper book review column that listed a bunch of fictional novels (fictional in the sense that while the authors listed were real, the books were made up, with made-up plots those authors had never written, and simply didn’t exist). The literature festival in Bradford that didn’t merely use AI to generate publicity pictures, but managed to publish one featuring a girl with three feet. (Although I’m not sure that last one quite counts as misinformation, because it’s so utterly absurd.)
But this issue is more than simply a problem of AIs generating incorrect information. It can also be about AIs being misused in ways that a human being would hopefully have realised were wrong, of which the best example is perhaps the Willy’s Chocolate Experience debacle.

Imagine it was “the old days” and you had decided to run some sort of pop-up event, a “Christmas Wonderland” type event perhaps (or in this case, a rip-off of a popular children’s novel that’s just had a new film adaptation).
You go to a graphic designer, and the following conversation occurs:
You: We’d like you to do a poster for our Christmas Wonderland Event.
Graphic Designer: Cool. So what do you want the poster to show?
You: Well, pictures of all the attractions that we’ll have on offer.
Graphic Designer: And what are those attractions?
You: We don’t know. We were hoping you would come up with ideas for what attractions we could offer.
Graphic Designer: What are you talking about? I’m a graphic designer, not an events planner! How would I know what attractions to offer? I don’t know what’s feasible. I don’t know what your budget can support. I don’t know what planning or health and safety regulations might be involved. I don’t know how this sort of event works. You’re the ones putting on this event, so you’re the ones who need to design it!
If you were stupid enough not to realise that you need to actually plan and design your event before creating advertisements for it, a conversation with any competent graphic designer would hopefully give you a reality check.
But of course, it doesn’t work that way now. You can just go to an AI and type in a prompt that says you want a poster for a Christmas Wonderland Event that shows rides and attractions and decorations, and it will spit out a poster that at first glance will look impressive and competent and show all sorts of fun and interesting looking attractions. (Sure, a few people might have an extra arm and their teeth might not quite be human, but broadly, it will look good).
But you now have a poster promising things that you have no idea how to make real, and that you very likely can’t make real. And the result will be a lot of crying kids and angry parents.
And that, morally, is on you.
Not the AI.
Generative AI doesn’t lie; it outputs incorrect information. It was you, the human being, who lied, when you then passed that information off as truth.
The Nexus Files is free to read. But if you subscribe you'll get new posts emailed to your inbox automatically, and I won't feel like I'm pointlessly screaming into the void.