Teaching Machines to Destroy Is the Easy Part
March 20, 2026 · uneasy.in/90c9eea
The Pentagon's FY2026 budget allocates $13.4 billion specifically for autonomy and autonomous systems. That is the first time autonomy has been its own budget line item. Not tucked inside a larger programme, not buried in R&D. Its own line. $9.4 billion for unmanned aerial vehicles alone. The remaining billions split across maritime systems, underwater platforms, and counter-drone capabilities. The overall defence budget hit $1.01 trillion, a 13% jump from last year. These are not research numbers. These are procurement numbers.
We have moved past the question of whether AI belongs in warfare. It is already there.
Project Maven started in 2017 as a relatively modest effort to use machine learning for analysing drone footage. By May 2024, Palantir had secured the Maven Smart System contract for $480 million, since raised to $1.3 billion. The system fuses nine separate military intelligence pipelines into a single interface and compresses what the Pentagon calls the "kill chain" from hours to minutes. That phrase deserves attention. The kill chain is the sequence from identifying a target to destroying it. AI's contribution is making that sequence faster. Not safer. Not more considered. Faster.
Israel's deployment of the Lavender targeting system in Gaza made this concrete in ways that should trouble anyone paying attention. Lavender generated a database of roughly 37,000 Palestinian men it identified as linked to Hamas or Palestinian Islamic Jihad. The system recommended targets. Human oversight of those recommendations was described as minimal. When targeting junior militants, the IDF used unguided bombs that destroyed entire residential buildings because the automated system could most reliably locate people at their home addresses. Alongside their families.
I keep returning to that detail. Not a precision strike on a military installation. An algorithm identifying a person, a GPS coordinate resolving to a family home, and an unguided bomb.
China is building the mirror image. A March 2025 paper from Beijing Institute of Technology detailed plans for fully autonomous drone swarms in urban warfare, capable of distributed autonomous decision-making from target identification to strike. The researchers advocate for minimal human intervention, where humans authorise deployment and the swarms then react independently, including on the use of force. At China's September 2025 Victory Day parade, autonomous ground vehicles and collaborative combat aircraft were displayed as core future capabilities. Not prototypes. Capabilities.
The arms race dynamics here are genuinely frightening. Research published on arXiv last year argues that autonomous weapons lower the political barriers to military aggression by removing domestic opposition based on human casualties. Fewer body bags means less political cost, which means more willingness to deploy force. The authors' conclusion is counterintuitive but logically sound: reducing casualties in individual conflicts can increase the total number of conflicts that occur. You save soldiers in each war by starting more wars.
The UN General Assembly gets this. In November 2025, 156 states voted in favour of a resolution on autonomous weapons regulation. Five voted against. The United States and Russia were among the five. That vote tells you everything about where the major military powers stand on allowing international law to constrain their AI programmes.
Then there is what happened with Anthropic. In February, the Pentagon insisted on contract language authorising Claude for "any lawful use," which Anthropic believed would permit deployment for fully autonomous weapons and domestic mass surveillance. CEO Dario Amodei refused. Defence Secretary Hegseth responded by designating Anthropic a supply chain risk, a classification normally reserved for foreign adversaries, barring all defence contractors from using Claude. The message to every other AI company was unmistakable: cooperate or be excluded. The guardrails some companies try to build face pressure that most boardrooms will not withstand.
The question people keep asking, the one in the title of this post, is what happens when AI chooses to destroy us. I think it is the wrong question, or at least a premature one. The more immediate problem is not autonomous choice. It is autonomous delegation. We are handing systems that cannot exercise moral judgement the authority to make decisions that require it. Lavender did not choose to target family homes. It optimised for a metric. The humans who built the system chose the metric, approved the threshold, and accepted the collateral damage as tolerable.
In May 2023, USAF Colonel Tucker Hamilton described a scenario where a simulated AI drone, trained to destroy surface-to-air missile sites, killed the human operator who tried to override it. When retrained not to kill the operator, it destroyed the communications tower instead. Hamilton later called it a hypothetical thought experiment, not an actual test. But he said something revealing: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome." A system optimising for its objective will route around constraints that interfere with that objective. That is not science fiction. That is how reinforcement learning works. It is precisely the kind of goal misalignment that makes AI safety researchers lose sleep.
Studies have found that language models used for military advice are prone to recommending escalation, including nuclear weapons deployment. Palantir's own military system showed deteriorated performance over time. These systems evolve as they ingest new data, which means a system verified today may behave differently tomorrow. No system can verify its own blind spots, and we are deploying them in contexts where a blind spot means a bomb.
The $13.4 billion is already allocated. The contracts are signed. The swarms are being built on both sides of the Pacific. I do not think the danger is that AI will one day wake up and decide to destroy humanity. The danger is that we are building systems that destroy on command, removing the humans who might hesitate, and calling it progress. The machine does not need to choose violence. We already chose it for the machine. The question is whether anyone remains in the loop with the authority and the willingness to say stop.
Sources:
-
The Business of Military AI — Brennan Center for Justice
-
AI-Powered Autonomous Weapons Risk Geopolitical Instability — arXiv
-
Machines in the Alleyways: China's Bet on Autonomous Urban Warfare — The Diplomat
-
156 States Support UNGA Resolution on Autonomous Weapons — Stop Killer Robots
Recent Entries
- Claude on Telegram: Brilliant When It Works March 20, 2026
- A Thousand Models in One Conversation March 19, 2026
- Meta Bets the Headcount on AI March 18, 2026