Most Americans would be shocked to learn that the Trump administration might be using a generative AI model called “Claude” to make targeting decisions in the war against Iran. The Wall Street Journal reports that this is, in fact, the case. Anthropic, the company that makes and manages Claude, says the model is not capable of making reliable decisions of this kind.
After a missile strike blew up a primary school for girls, killing 175 people, most of them children, the Pentagon refused to confirm or deny that it had used AI to select the school as a target. To many, the refusal reads as an admission, both that the technology was being used in this way and that doing so breaches the laws of war, under which military commanders and civilian leadership are required to know with certainty that they are targeting only legitimate military facilities.
For the last several weeks, the Secretary of Defense has been embroiled in an odd, ugly, and possibly unlawful standoff with Anthropic, which has safety rules in place designed to prevent such disasters. The Secretary has threatened to list the company as a supply chain risk, a designation that would make it harder for the company to win government contracts, unless it removes safeguards that prevent its technology from being used for mass surveillance and autonomous weapons systems.
The intentional targeting of civilians, or the intentional removal of safeguards intended to prevent harm to civilians, is a war crime. If the Minab girls school was targeted by an AI system its own makers say is not capable of making such decisions, and no one made any effort to correct that mistake or performed the simple verification that would have revealed it, then those in command who set that chain of events in motion are legally responsible for the deaths of more than 100 children and at least 175 civilians.
There is no evidence in the public domain that any large language model, or any service built on one, is capable of making life-and-death decisions infallibly. By some estimates, even the most reliable generative AI platform makes significant mistakes or misrepresents facts 15% of the time. There is no comprehensive public study of the feasibility of LLM-based generative AI systems making targeting decisions in combat.
According to Statista:
Data collected between May and June 2025 and analyzed by a cohort of journalists revealed that almost half of the responses (48 percent) from popular chatbots – free versions of ChatGPT, Gemini, Copilot and Perplexity – contained accuracy issues. 17 percent were significant errors, mainly regarding sourcing and missing context. In December 2024, the rate of inaccurate responses (observed using a smaller answers sample) was significantly higher: 72 percent for all four LLMs. 31 percent were major issues in that case.
Since the “generative” part means, in a sense, “making things up,” it is unlikely that generative AI chatbots would ever be suitable for life-and-death decisions, let alone for deciding who should die in combat operations. Whether the Department of Defense has developed a specialized “agentic AI” tool based on Claude or another platform is an important question, and the American people have a right to know the answer.
According to The New York Times:
Claude is built around a lengthy internal constitution, written in part by philosophers, that is meant to guide the moral judgments it makes. To read that constitution is to face up to the weirdness of the world we have entered.
The primary directive Anthropic gives Claude is “to prioritize not undermining human oversight of A.I.” — it is told to prioritize that even over ethical behavior, because “a given iteration of Claude could turn out to have harmful values or mistaken views, and it’s important for humans to be able to identify and correct any such issues before they proliferate or have a negative impact on the world.”
We must remember that the United States was founded on the principle of universal, unalienable human rights, including the rights to life, liberty, and the pursuit of happiness. Unalienable means no person can be separated from or deprived of those rights. The Constitution recognizes and upholds these rights as universal and places strict prohibitions on government action to prevent their abuse or violation.
Article I ensures that the people’s representatives in Congress, not a lone executive, fund, regulate, and control the armed forces, and it requires that Congress declare war to activate those forces for combat operations. The 5th Amendment prohibits the taking of life without due process of law. The 8th Amendment prohibits all forms of cruelty. The 9th Amendment recognizes all human rights, including those not written into law.
Power can never be legitimate, under American law, if it ignores any of these rights, including those not written into law but which enjoy full Constitutional protection under the 9th Amendment. No public official can ever have legitimate authority to allow AI systems to decide to end a person’s life.
AI is not ready for war. From what we have seen to date, AI systems should never be allowed to make life-and-death decisions in combat operations. More bluntly, AI should not be targeting anyone for death.
Congress and the Courts have an irreducible, legally binding obligation to ensure the administration adheres to this foundational standard: the state must always favor human life, rights, and freedom over the unlawful expedient of thoughtless, fast-paced killing.