AI and the Military
Excerpt from the book A Mind Made of Math, available on Amazon and for free at stevej.ca
One night in 1983, a Soviet lieutenant colonel named Stanislav Petrov faced a flickering warning screen that claimed American nuclear missiles were inbound. In theory, his orders were clear: report the “attack” and trigger a counterstrike. But Petrov hesitated. Something felt off. Why would a real first strike come as a mere handful of missiles? Trusting his intuition over the early-warning computer, he decided it was a false alarm and held his fire. (“The Man Who ‘Saved the World’ Dies at 77 | Arms Control Association,” accessed May 26, 2025, https://www.armscontrol.org/act/2017-10/news-briefs/man-who-saved-world-dies-77.)
He was correct, and his act of prudence averted a potential nuclear war. Now imagine that fateful decision being left not to a human but to an algorithm. Would the machine have the wisdom to question its own data? The risks of AI and robotics in military applications become stark through that contrast: in the most critical moments, will a human conscience be present, or just cold code?
Today’s militaries are racing to bring more AI and autonomy into warfare. Autonomous drones, robotic sentries, and algorithmic early-warning systems are no longer science fiction but prototypes and even deployed devices. Indeed, many countries already employ automated weapons defenses, like high-speed missile interceptors and anti-rocket systems, that can identify and fire on incoming threats without immediate human direction. These systems, such as the U.S. Navy’s Aegis or Israel’s Iron Dome, act as human-supervised guardians: if a human controller doesn’t intervene in time, the machine will fire by itself to shoot down an attack. The logic is straightforward: machines react faster than people. 21st-century algorithms can process sensor data and execute a response in milliseconds, far quicker than a soldier can blink. Chinese military scholars now talk about a coming “battlefield singularity” when the tempo of combat outstrips human decision-making altogether. (Paul Scharre, “A Million Mistakes a Second,” Foreign Policy (blog), May 27, 2025, https://foreignpolicy.com/2018/09/12/a-million-mistakes-a-second-future-of-war/.)
Handing over the speed of war to silicon brains promises a tactical edge. But what about the control of war? The consequences of ceding control to autonomous systems could be grave. In finance, we’ve seen how algorithms operating at blistering speeds can interact in unforeseen ways; “flash crashes” have sent markets plummeting in minutes, until human regulators hit the brakes. Unfortunately, battlefields have no circuit breakers. A similar whirlwind in war is a nightmare scenario. Imagine autonomous aerial drones from rival nations, each programmed to strike when threatened, encountering one another. A jittery sensor or a software glitch could trigger a cycle of instantaneous reprisals. One drone’s mistake could set off another’s lethal response, each machine acting and reacting faster than any human commander could intervene. In a conflict rife with autonomous systems, an absurdly small incident such as a flock of birds misidentified as an incoming missile could potentially escalate into a full-blown battle before any human even realizes an error occurred.
The fog of war has always bred mistakes and friendly fire, but with autonomous weapons the fog could thicken to a point where even those who unleashed the machines don’t fully understand their actions. The speed that makes these systems alluring also makes them brittle in the face of the unexpected. It is a black-box brittleness: when a deadly mistake happens, the algorithm cannot explain itself or feel remorse. And unlike Petrov, a machine will never intuitively disobey a flawed instruction; it will dutifully execute a fatal logic if that’s what its code and data tell it to do.
Aware of these high stakes, the world’s policymakers have begun crafting new rules to govern military AI and robotics; though in truth, they are scrambling to catch up to a technology racing ahead. International humanitarian law, the law of war forged in the wake of earlier horrors, already applies to all new weapons by default. No matter how novel or high-tech, an attack must still distinguish combatants from civilians and avoid disproportionate harm. Can an autonomous robot reliably follow the Geneva Conventions? Who is accountable if it violates them? These questions are testing the limits of the law. An autonomous system, by definition, selects its target and applies force on its own. The commander or operator might not even know exactly whom or what the robot will shoot when they activate it. This loss of direct human control “raises serious concerns from humanitarian, legal and ethical perspectives,” the International Committee of the Red Cross (ICRC) has warned. (“ICRC Position on Autonomous Weapon Systems | ICRC,” May 12, 2021, https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems.)
If the robot makes a mistake, who bears responsibility: the software developer, the military commander, the machine itself? (“Geopolitics and the Regulation of Autonomous Weapons Systems | Arms Control Association,” accessed May 26, 2025, https://www.armscontrol.org/act/2025-01/features/geopolitics-and-regulation-autonomous-weapons-systems.) Today, there is no clear answer. The fear is that autonomy could create a responsibility gap, where lethal harm occurs and no human can be readily held accountable. Accountability cannot be “transferred to machines,” as governments worldwide have formally agreed, yet without concrete rules and human oversight, that principle may be hard to uphold in practice. (Accessed May 26, 2025, https://www.ccdcoe.org/uploads/2020/02/UN-191213_CCWMSP-Final-report-Annex-III_Guiding-Principles-affirmed-by-GGE.pdf.)
Despite these worries, or rather because of them, diplomats have been convening in Geneva and elsewhere to negotiate how to keep a human leash on the technologies of war. Since 2017, discussions under the United Nations’ Convention on Certain Conventional Weapons (CCW) have inched forward on the issue of lethal autonomous weapons systems. At the CCW, nearly every nation agrees at least on paper that humans must remain responsible for decisions to use force. In 2019, an international panel distilled a set of guiding principles, among them that international humanitarian law applies fully to autonomous weapons, and that human control and judgment must be retained across a weapon’s lifecycle. In essence, they affirmed that a robot is never absolved of the laws of war: its makers and operators are on the hook for its actions. (Accessed May 26, 2025, https://www.ccdcoe.org/uploads/2020/02/UN-191213_CCWMSP-Final-report-Annex-III_Guiding-Principles-affirmed-by-GGE.pdf.) There is also broad acknowledgment that certain lines shouldn’t be crossed. Even the UN Secretary-General António Guterres has called lethal autonomous weapons “politically unacceptable and morally repugnant,” urging a global ban on systems that operate without human oversight. (“‘Politically Unacceptable, Morally Repugnant’: UN Chief Calls for Global Ban on ‘Killer Robots’ | UN News,” May 14, 2025, https://news.un.org/en/story/2025/05/1163256.) In 2023 he went further, asking states to conclude by 2026 a legally binding treaty to prohibit autonomous weapons that can’t be used in line with international humanitarian law and to ensure human control over others. (“Lethal Autonomous Weapon Systems (LAWS) – UNODA,” accessed May 26, 2025, https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/.) The ICRC and many NGOs echo this, recommending an outright prohibition on any weapon that targets people or behaves unpredictably, and strict regulations on the rest.
Their logic is straightforward: machines should not have a license to kill human beings, and no algorithm should be entrusted with life-and-death decisions that even seasoned officers struggle with.
Momentum is building. At the United Nations General Assembly in 2023, an overwhelming majority of states, 164 nations, supported a first-ever resolution on autonomous weapons, calling for the Secretary-General to report on the challenges they pose and possible solutions. (“First Committee Approves New Resolution on Lethal Autonomous Weapons, as Speaker Warns ‘An Algorithm Must Not Be in Full Control of Decisions Involving Killing’ | Meetings Coverage and Press Releases,” accessed May 26, 2025, https://press.un.org/en/2023/gadis3731.doc.htm.) Dozens of countries, including a coalition led by nations like Austria, Costa Rica, and New Zealand, are now vocally calling for a preemptive ban on “killer robots”. (Mary Wareham, “Stopping Killer Robots,” Human Rights Watch, August 10, 2020, https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and.) This chorus spans continents and political systems, united by a shared intuition that removing humans from lethal decision-making crosses a moral red line. In a poignant parallel to earlier eras, they liken autonomous weapons to chemical or biological arms, abhorrent tools that the world managed to outlaw before they could wreak unchecked havoc. In 1995, states agreed to ban blinding laser weapons before they ever saw use on the battlefield, recognizing that some technologies are simply too cruel or destabilizing to allow. (“Protocol on Blinding Laser Weapons (Protocol IV to the 1980 Convention), 13 October 1995,” accessed May 26, 2025, https://ihl-databases.icrc.org/en/ihl-treaties/ccw-protocol-iv.) Many diplomats now argue that autonomous killing machines merit the same preemptive treatment. A line of prohibition should be drawn before an arms race spirals out of control.
And yet, for all this activity, progress is achingly slow and fraught with disagreement. The meetings in Geneva have been described as a tentative waltz: lots of diplomatic handshakes, statements of principle, but no binding rules in sight. A handful of militarily advanced states remain reluctant to commit to anything that might hamper their technological edge. These powers, including the United States and Russia, argue that existing humanitarian law is sufficient and that a new treaty might be premature or too restrictive. (“Geopolitics and the Regulation of Autonomous Weapons Systems | Arms Control Association,” accessed May 26, 2025, https://www.armscontrol.org/act/2025-01/features/geopolitics-and-regulation-autonomous-weapons-systems.) Some of them are investing heavily in AI-driven arsenals themselves, and they prefer broad ethical guidelines over hard bans. Geopolitics looms over the debate: rising tensions and mistrust make consensus difficult just when it’s needed most. The result is a kind of regulatory paralysis. Despite nearly a decade of talks, the world has little to show in concrete safeguards while the technology marches onward. The gap between the rapid development of military AI and the glacial pace of international law is widening, and it troubles many observers. It means we are essentially relying on voluntary restraint and the caution of individual commanders for now. That is a thin red line of defense against potential calamity.
In the absence of a universal law, stopgap measures and norms are emerging. Several nations have announced their own policies to ensure a human check remains in place. The U.S. Department of Defense, for instance, issued directives that any autonomous weapon must be designed to allow “appropriate levels of human judgment” in its use. (Accessed May 26, 2025, https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf.)
In practice, this means an American drone or missile with autonomous features is supposed to have a person somewhere in the decision loop: at least to approve activating the system, and to monitor and intervene once the autonomous functionality has been activated. Other militaries have echoed similar positions. Britain has stated it will not develop systems that operate without context-appropriate human involvement, and France has outlined ethical principles for AI use in defense. At the same time, the Pentagon and NATO allies are pouring funds into AI-driven surveillance and decision-support tools, which stop short of full autonomy but inch toward it. The line is thin: an algorithm that merely advises a human commander on which targets to strike can profoundly influence the decision, especially under time pressure. If a computer vision system flags a vehicle as an enemy with high confidence, how readily will the human override it? The risk of over-delegation, humans trusting the machine too much, is very real. Military history is replete with instances of human error; now we must consider machine error and human over-trust in those machines.
Outside of governments, civil society and even technologists themselves have mobilized. A global coalition of NGOs launched the Campaign to Stop Killer Robots in 2013, and it has since gathered endorsements from thousands of AI researchers, faith leaders, Nobel laureates, and ordinary citizens around the world. Notably, some of the very people designing advanced AI are among the most vocal in warning against autonomous weapons. In 2017, dozens of robotics and AI company founders, including renowned figures in Silicon Valley, sent an open letter to the United Nations urging a ban on these weapons, lest we usher in a “third revolution in warfare”. (Samuel Gibbs, “Elon Musk Leads 116 Experts Calling for Outright Ban of Killer Robots,” The Guardian, August 20, 2017, sec. Technology, https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war.) They painted a dire picture: once loose, swarms of self-directing weapons could fight at a scale and speed humanity has never seen, “at timescales faster than humans can comprehend”. Such weapons, they warned, could be misused by tyrants and terrorists or even hacked to wreak havoc. The letter’s language was urgent: “Once this Pandora’s box is opened, it will be hard to close.” That plea helped drive the issue further into the diplomatic spotlight.
Likewise, some tech companies have pledged not to develop AI for use in weapons, and a few high-profile engineers have resigned rather than contribute to military AI projects they felt crossed ethical lines. This unprecedented involvement of the private sector and engineers in arms control advocacy shows how deeply society is entwined in the debate. Unlike past arms races, which were the realm of governments and generals alone, the AI arms race involves corporate labs, university researchers, and even the everyday consumer technologies that could be repurposed for conflict. The lines between civilian and military innovation are blurred: a breakthrough in self-driving car vision or drone navigation can quickly find its way into an unmanned weapon. This makes controlling the spread of military AI even harder. International export controls, the tools used to stem nuclear and missile proliferation, struggle to keep up when the critical ingredient is advanced software or a machine learning model that can be emailed or downloaded, and when this intelligence can be embodied in drones or humanoid robots that have many valid civilian applications. As a result, gaps in governance persist. One glaring gap is enforcement: even if a new treaty is agreed upon, how will the world know if a nation or a company is secretly developing a forbidden autonomous weapon? Verifying a ban on a physical item like a missile is one thing; verifying the presence of an algorithm hidden in a computer chip, or the extent to which a human was involved in a decision, is quite another.
The major powers have been wary of any agreement that might allow intrusive inspections of their AI labs or require them to disclose sensitive software, let alone any inspection of logs detailing how individual decisions were made in the heat of conflict. The likely path to enforcement will rely on transparency and confidence-building: nations might declare what systems they have and perhaps invite observers to see that a human is in the loop during exercises.
Trust is thin in this domain. Each country suspects that rivals might gain an edge by pushing their AI to the limit. The United States, Russia, and China all publicly tout their investments in military AI, and smaller states feel pressure to keep up or risk strategic vulnerability. Defense contractors, for their part, compete to sell the most cutting-edge autonomous capabilities to militaries hungry for the latest tech. At arms expos, you can already find booths displaying swarming attack drones and robotic tanks, marketed as force-multipliers for the modern battlefield. This commercial drive means that even if leading states hold back on deploying certain technologies, there’s a danger that less restrained actors could acquire autonomous weapons on the global market. The diffusion of robotics and AI is simply much harder to contain than that of, say, plutonium or chemical agents. A talented team of programmers with modest funding can potentially create a rudimentary autonomous weapon by repurposing civilian drones and open-source AI code. All this makes the governance challenge not only one of writing laws but of enforcing them in a practical sense. It calls for unprecedented cooperation between governments, the tech industry, and international organizations to monitor and control how these technologies spread and to agree on norms for their use.
What gives many experts pause is the sense that we are at a crossroads akin to the early days of the nuclear age, but without the same public awareness or fear. When the first atomic bombs showed their terrifying power, the world’s superpowers at least recognized the need to negotiate controls, leading eventually to arms treaties that restrained arsenals. By contrast, autonomous weapons are creeping into use without dramatic “mushroom cloud” moments to focus global attention. A drone that autonomously guns down a soldier may not make headlines in the way a devastated city would. The impact is more insidious and can fly under the radar. Critics argue that society has not fully appreciated how profound the societal impact of military AI and robotics could be.
It’s not just another step in the evolution of warfare; it may be a fundamental shift. It could alter the threshold of going to war. If one can send machines to fight, might leaders be more cavalier about using force? If an algorithm kills innocent civilians, justice for the victims is harder to obtain: the harm may be explained away as an unfortunate software bug. It might even change power dynamics between nations, as those with AI superiority could dominate without ever deploying a single soldier. These are underappreciated repercussions that extend well beyond the immediate tactical realm.
For now, the world proceeds with both optimism and trepidation. International forums are alive with debate: committees parsing legal definitions of “meaningful human control,” ethicists invoking the principle of human dignity, generals extolling the life-saving potential of precision AI strikes that reduce collateral damage. Ensuring responsible use of AI in the military is not just a technical endeavor but a moral one. Treaties and agreements, from the Geneva Conventions to new AI-specific protocols, are society’s way of encoding our collective judgment about right and wrong in war. They evolve when pressured by necessity. The effort underway now, whether through the UN’s formal processes or independent pledges like the Hague’s call for responsible AI use, is an attempt to civilize the AI revolution before it fully overtakes the battlefield. It is a recognition that just because something can be done in war does not mean it should be done.