April 11, 2016

Statement at United Nations CCW Expert Meeting on Lethal Autonomous Weapon Systems

By Kelley Sayler and Paul Scharre

As we enter our third year of discussions on lethal autonomous weapons, I would like to applaud states for their continued engagement on this important topic as well as offer some thoughts for consideration.

An emerging view holds that, in addition to looking at technology, we ought to focus on the role of the human in lethal engagements. We share this perspective. Technology changes. In the field of machine intelligence, the past few years have seen rapid progress. Arguments for banning weapons based on the state of technology today make little sense. What if technology improves? Could some of the same sensors and autonomy that allow a self-driving car to avoid pedestrians be used to avoid civilian casualties in war? Perhaps. We should not preemptively close ourselves off to opportunities to decrease human suffering in war.

Humans’ obligations under international humanitarian law (IHL) do not change, however. Humans are bound to ensure their actions are lawful. This imposes certain criteria on the extent to which humans must remain engaged in lethal decision-making.

Humans must have sufficient information about the weapon, the targets, and the specific context for action and sufficient time to make an informed decision that engagements are lawful. In addition, because weapons are tools in the hands of humans and not combatants themselves, humans must retain the ability to determine when those tools are no longer appropriate and engagements should cease. This does not mean real-time supervision of weapons or perfect information. That does not exist today for many weapons, such as cruise missiles. It does mean that autonomy must be bounded to ensure that failures, if they occur, do not lead to catastrophic consequences.

There is much work to be done to better understand the necessary role of human control and judgment in lethal force. Whether one uses the term “meaningful,” “appropriate,” or some other adjective, it is clear that continued human involvement in lethal force is essential. Rather than focus solely on what technology can and cannot do today, we ought to ask, if we had all the technology we could imagine, what role would we still want humans to play in lethal decision-making? What decisions require uniquely human judgment, and why?

Thanks very much for the opportunity to share these views.
