The fog of war is no cover for unethical AI: Watching the international order unravel in 5 scenes
By Christiana Zenner, PhD and Kelly Clancy, PhD
Note: this is the first official collaborative release between Moments of Zenner, Christiana Zenner’s newsletter, and PACES, a newsletter about the dangers of AI in public life spearheaded by Kelly Clancy. Feel free to share widely!

Over the past ten days, two era-defining melodramas have played out: a barrage of missiles raining on Iran, backlit by backroom dealings between Big AI and the Pentagon. Linking the two is Pete Hegseth's unmoored philosophy of war fighting: "No stupid rules of engagement." Since 1949 the international community has professed to follow the Geneva Conventions' rules of engagement to protect those most harmed by war: civilians, noncombatants, the sick, wounded, and injured. With no end in sight to this war, we believe that documenting the first moments of the conflict provides clarity as we watch the horrors in the Middle East continue to unfold.
No longer is our nation even pretending to adhere to the Geneva Conventions. In a press conference this week, Hegseth described "Death & destruction from the sky all day. We're playing for keeps. Our warfighters have maximum authorities granted personally by POTUS & yours truly." And AI is augmenting this flagrant, jingoistic departure from international norms. Specifically: AI corporations are fueling this new era of war fighting yet are utterly unprepared for – and, despite lip service, uninterested in – reining in the worst excesses of craven warmaking.
While the "fog of war" has previously been invoked to excuse poor battlefield decisions, the exact opposite is now true. Horrified people around the world are watching the grift, corruption, and depravity of the US federal warmaking machine in real time; and amidst the rubble is a resounding moral clarity: the use of generative-AI-guided force is unwarranted, unethical, and an extremely dangerous precedent for future conflicts. It also reveals why there is no such thing as "ethical AI" in a warmaking, profit-driven, oligarchic country.
As academics and activists who study the ethical implications of AI for public life, we argue that it is wise to start from the premise that ALL AI IN WAR IS UNETHICAL until its principles of use are articulated, clarified, tested, and accepted by a directly representative democratic majority in conjunction with international norms of combat. As we have argued elsewhere, generative AI's ubiquitous deployment in military, education, and other contexts is neither inevitable nor neutral.
Dramatic scenes from the past week unveil five key questions that must be addressed – about the ethics of war in an AI era as well as the ethics of AI more generally in an era of global inequality, fascism, and the eclipse of post-WWII multilateralism.
Scene One, Question One: Who sets the guardrails or rules of engagement for AI? The first scene takes us 72 hours before the beginning of the war on Iran, where a staredown over the moral uses of AI is intensifying. Anthropic, which styles itself as the ethical version of artificial intelligence, refused to back down on two guardrails: it wanted assurances that Claude would not be deployed for fully autonomous lethal force or domestic surveillance. The Pentagon refused: no guardrails.
Anthropic didn’t blink. The Pentagon pulled out of negotiations and declared its relationship with Anthropic over. President Donald Trump announced in characteristic manner that the US “WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS.”
Scene Two, Question Two: What will the government – or our broligarchs – do to protect us? OpenAI, waiting in the wings, snatched up the Pentagon contract in the early hours of Saturday morning, shortly before the bombing campaign began. CEO Sam Altman offered blithe assurance on X that we could stop worrying and trust the government (and him): “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome…Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.”
Scene Three, Question Three: Can AI protect the safety of noncombatants, a primary ethical obligation during wartime? Mere hours after the new contract between OpenAI and the Pentagon, images of a girls' school bombed by the U.S. began spreading across social media, as we learned that first 40, then 53, and now more than 160 children were killed in the bombing of the Shajarah Tayyebeh school, in the town of Minab. Images of tiny pink backpacks surrounded by rubble and dust underscored the destruction and death of underage noncombatants.
Far from being a safeguard, AI-powered war is a galling amplification of the Armageddon-driven whims of men in power who believe that God is on their side – and AI has no moral compass to stop it. More than 40 members of the military have filed reports decrying the administration’s insistence that this is a holy war: “A combat-unit commander told non-commissioned officers at a briefing Monday that the Iran war is part of God’s plan and that Pres. Donald Trump was ‘anointed by Jesus to light the signal fire in Iran to cause Armageddon and mark his return to Earth.’”
Scene Four, Question Four: Are any institutions trustworthy for discerning recommended courses of action with regard to AI and militarism? The short answer is no. Neither the government, nor the tech companies, nor the AI bots themselves, autonomous as they may be, are reliable simulators of ethical reasoning and all that it requires.
The absurdity reached new levels when we learned that the technology used to target those bombs was, in fact … Claude. AI helped target the bombs that killed those little girls (and countless others across the region). AI powered the machinery of war and selected its targets. It did so in total violation of the protocols of war, and at the whim of a handful of powerful men. This is especially terrifying given the looming question of nuclear warheads, which the US's attacks on Iran have pushed to peak volume.
Scene Five, Question Five: Is there any meaningful, individual ethical action on AI in this moment? As news of OpenAI's capitulation to the Pentagon broke, furious consumers in the US and around the world decided to exercise their purchasing power, canceling their ChatGPT subscriptions and trading them for Claude, which became the most-downloaded app in the US. And yet, Claude may still be guiding missiles. More to the point, ethics is not built into AI, and without sustained independent human work, it cannot be. Instead, the industry is full of what Nitish Pawha calls "convenient ethical punts."
These five scenes and questions suggest that "ethical" is an empty signifier when it comes to AI, especially in war. A technology is only as good as the framework within which it is deployed and, at present, we're watching all of our institutions and leaders bend the knee to unrestrained capitalistic exploitation, extraction, profit-seeking, jingoism, and data acquisition. The costs are enormous, uneven, and deadly. We know that war, like AI, extracts a terrible environmental price. The insinuation of AI into warmaking deepens and widens vicious circles of political instability and climate catastrophe for billions of people. In addition, the bombing of desalination plants in Iran and Bahrain, as well as oil storage sites in heavily populated Tehran, constitutes attacks on infrastructure that violate international law and lead to civilian health crises. They show the costs of exacerbating environmental catastrophe during wartime.
What we need is not better algorithms but more rigorous human ethics for AI at all levels of social and political institutions. Scholars like Alondra Nelson, Ph.D. in the US and Abebe Birhane, Ph.D. in Ireland are leading the way. By contrast, vapid promises of safety professed on social media by economic and political elites are hardly reliable; rather, they are simply “Return-on-Investment” ethical performativity that credulously fawns over untrustworthy institutions.
These scenarios point to the need for the precautionary principle: postponing the use of a technology until it can be proven that its use does not violate appropriate safety standards. Here, the modest standard would be the Geneva Conventions.
At present, this is an immoral war waged with immoral tools. To re-establish sanity, we need actual human moral reasoning and not the facile exculpatory pablum of tech bosses such as Sam Altman, who recently told his followers on X: “We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.” Well, yes, Uncle Sams: Let’s serve humanity better by banning AI in war, as a start.