How Autonomous Weaponry is Outpacing International Law 

Earlier this year, U.S. General and Commander of NATO's Allied Command Operations, Christopher Cavoli, testified before Congress that cheap FPV drones are now responsible for 'certainly over half' of all Russian casualties in the Russo-Ukrainian War, with other estimates as high as 80 percent [1].

Video game enthusiasts will know that FPV stands for first-person view—a drone camera relays a video feed back to a control center, where human operators use goggles or a monitor to simulate sitting in a cockpit. From there, the pilot can steer the drone, identify targets, and strike when necessary. Because these drones require human action to carry out a kill, they are aptly termed human-in-the-loop weapons.

So what exactly is this loop? Military experts use John Boyd's Observe-Orient-Decide-Act (OODA) Loop to contextualize decision-making in the field. Modeled on Boyd's experience as a fighter pilot in Korea, the loop describes the steps required to eliminate an enemy target [2]. Boyd argued that tempo, more than raw speed, was the key to gaining a military advantage: if you consistently acted while your opponent was still in the 'decide' phase, you outmaneuvered your way to victory. Militaries therefore aim to keep their decision cycles short and unpredictable.

But AI-powered weaponry is here, and it's here to stay. In December 2024, Ukrainian forces used both ground and air vehicles to conduct the first fully unmanned assault, near the border town of Lyptsi [3].

Now imagine a more advanced drone, which uses artificial intelligence to cut its decision-making phase to a hundredth, thousandth, or even millionth of a second, rendering it impossible for a human to step in and overturn its decision. Such a weapon is referred to as a human-out-of-the-loop, or fully autonomous, weapon.

The military advantages of a fully autonomous weapon are clear, but so are the risks. For starters, delegating life-and-death decisions to preprogrammed algorithms raises serious ethical questions.

International humanitarian law, born of the 1949 Geneva Conventions, rests on principles that assume human judgment: distinguishing combatants from civilians, using proportional force, and minimizing harm. These guidelines inherently cannot account for every situation; humans can adapt to fill the gaps in the law, but autonomous weapons may not [4].

Imagine two elementary schoolers stumbling upon a military position. A human soldier can observe, call out, and assess their behavior and intent. Are they lost civilians? Rebel scouts? The soldier can wait, question, or let them pass if they pose no threat. International law permits targeting combatants, even children, if they are directly participating in hostilities — but determining who qualifies requires assessing intent, behavior, and context. While a human may never imagine killing a child, an AI-based weapon may be unwilling to accept the tactical risk of letting the child live.

The international community has struggled for over a decade to answer a fundamental question: Can existing laws effectively govern weapons that make life-and-death decisions autonomously, or are new binding rules needed? Three paradigms for regulating LAWS (Lethal Autonomous Weapon Systems) have emerged. Traditionalists, led by the United States and Russia, argue that existing international humanitarian law is sufficient to govern autonomous weapons. Prohibitionists, including Austria, Brazil, Chile, and Kiribati, call for a complete ban on all LAWS, citing violations of human dignity, accountability gaps, and loss of control. Finally, dualists advocate for a two-tier approach: prohibiting systems that target humans directly, while regulating defensive systems like missile interceptors [5].

The United Nations had the opportunity to pass binding resolutions on the use of autonomous weaponry in late 2024. Instead, after severe dilution by traditionalist nations, Resolution 79/62 passed 166-3 but delivered primarily rhetoric [6]. The resolution "highlighted the importance of addressing challenges" and called for "a comprehensive approach," but it mandated no prohibitions and set no deadlines. It offered only two days of informal consultations to discuss the UN Secretary-General's report, all while the autonomous arms race continues to accelerate.

Major military powers face competing pressures: the need to maintain military advantage, the desire to preserve operational flexibility, and the responsibility to prevent a future where algorithms make life-and-death decisions beyond human control. The tension between these interests is real, but delaying regulation has costs. The technology exists, development timelines are compressing, and 166 nations voted in December 2024 to address the risks. Yet without binding international rules, technologically advanced nations are reluctant to limit their own development while their adversaries remain free to do as they please. The future of war has arrived, and now we must decide whether the weapons we use reflect the values we fight for.

Special thanks to Matt Mande from CSIS’s Wadhwani AI Center for his support and comprehensive review of this article.

References:

[1] Christopher Cavoli, testimony before the Senate Armed Services Committee, April 3, 2025; "Ukraine War Transforms Warfare with Drones," Army Technology, April 8, 2025, https://www.army-technology.com/news/ukraine-drones-warfare/; "How Ukraine Is Leading a Drone Revolution," Atlantic Council, May 13, 2025; "AI's Growing Role in Modern Warfare," U.S. Army War College Press, August 21, 2025.

[2] "A Symbiotic Relationship: The OODA Loop, Intuition, and Strategic Thought," Defense Technical Information Center, accessed January 2026; "Evolving the OODA Loop for Strategy," Marine Corps Gazette, March 12, 2025; "The OODA Loop and the Half-Beat," The Strategy Bridge, July 13, 2023.

[3] "Ukraine's Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare," Center for Strategic and International Studies, https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare.

[4] Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), June 8, 1977, https://ihl-databases.icrc.org/en/ihl-treaties/api-1977.

[5] "Future Warfare: National Positions on Governance of Lethal Autonomous Weapons Systems," Lieber Institute for Law and Land Warfare, West Point, February 11, 2025, https://lieber.westpoint.edu/future-warfare-national-positions-governance-lethal-autonomous-weapons-systems/.

[6] UN General Assembly Resolution 79/62, "Lethal Autonomous Weapons Systems," A/RES/79/62 (December 2, 2024), https://documents.un.org/doc/undoc/gen/n24/391/35/pdf/n2439135.pdf; "Killer Robots: UN Vote Should Spur Treaty Negotiations," Human Rights Watch, December 5, 2024.

