November 15, 2017
Remarks by Paul Scharre to the United Nations Group of Governmental Experts on Lethal Autonomous Weapon Systems
Geneva, Switzerland
As we have heard this week, artificial intelligence and autonomy are advancing rapidly. While no nation has said it will build autonomous weapons, the technology will make such systems possible and, indeed, already makes them possible today for simple missions.
What would be the consequences if we were to delegate targeting decisions to machines? Would it make war more precise and humane, saving lives? Or would it lead to more accidents and less human responsibility?
A major challenge in answering these questions is that the technology is constantly changing. Eighteen months ago, when this body last met in 2016, the AI research company DeepMind had just released AlphaGo, a computer program that beat the top human player at Go. To accomplish that feat, DeepMind trained AlphaGo on 30 million human moves so that it could learn how to play the game.
Last month, DeepMind released a new version, AlphaGo Zero, that learned to play Go entirely on its own, without any human training data at all. After a mere three days of self-play, it was good enough to defeat the 2016 version 100 games to zero.
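[Editor's note: The striking feature of AlphaGo Zero is its training method: the system generates its own experience by playing against itself and updating its estimates of which moves are good. As a loose illustration only, and emphatically not DeepMind's actual method (which pairs deep neural networks with Monte Carlo tree search), the following Python sketch applies the same self-play idea to the toy game of Nim using a simple value table. Every name and parameter in it is invented for this example.]

```python
import random

N = 21                     # starting pile size for the toy game of Nim
ACTIONS = [1, 2, 3]        # stones a player may remove on each turn
ALPHA, EPSILON = 0.5, 0.1  # learning rate and exploration rate (arbitrary)

# One shared value table for both players: Q[s][a] is the estimated value,
# for the player about to move, of taking `a` stones when `s` remain.
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, N + 1)}

def choose(s):
    """Epsilon-greedy move selection from the shared table."""
    if random.random() < EPSILON:
        return random.choice(list(Q[s]))
    return max(Q[s], key=Q[s].get)

for episode in range(50_000):
    s = N
    while s > 0:
        a = choose(s)
        s_next = s - a
        if s_next == 0:
            target = 1.0                       # taking the last stone wins
        else:
            target = -max(Q[s_next].values())  # the opponent moves next, so
                                               # their best reply is our loss
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s_next

# With enough self-play, the greedy policy recovers the known optimal
# strategy: always leave the opponent a multiple of four stones.
print({s: max(Q[s], key=Q[s].get) for s in range(1, 8)})
```

[The sketch learns with no human examples at all, only the rules of the game and the outcome of each self-played match. Systems like AlphaGo Zero replace the lookup table with a deep neural network, which is what allows the same idea to scale to a game as vast as Go.]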
With technology moving forward at this pace, what will be possible 10 or even 5 years from now?
If we agree to forswear some technology, we could end up giving up some uses of automation that could make war more humane. On the other hand, a headlong rush into a future of increasing autonomy, with no discussion of where it is taking us, is not in humanity's interests either. We should control our destiny.
Instead, we should ask: “What role do we want humans to have in lethal decision-making in war?”
It is important to understand the technology, but to answer this question we need to focus on the human. The technology changes, but the human stays the same.
What decisions in war require uniquely human judgment? If we had all of the technology we could imagine, what decisions would we still want people to make in war, and why?
This concept has been formulated in many ways, and many states have expressed the importance of meaningful, appropriate, or necessary human judgment or control. The specific term is less important. What is important is that states continue to explore the meaning behind these terms in order to better understand the legal, moral, operational, and strategic rationale for human involvement in the use of force.
This perspective – focusing on the human – can be our guiding light for navigating this period of technological change.
Paul Scharre (@paul_scharre) is a senior fellow at the Center for a New American Security and author of the forthcoming book Army of None: Autonomous Weapons and the Future of War, to be published in April 2018.
The remarks are available online.