September 27, 2018

Future of Life Institute: AI and Nuclear Weapons – Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

By Michael Horowitz and Paul Scharre

In 1983, Soviet military officer Stanislav Petrov may have prevented a devastating nuclear war by trusting his gut instinct that his early-warning system's algorithm had falsely detected incoming missiles. In this case, we praise Petrov for choosing human judgment over the automated system in front of him. But what will happen as the AI algorithms deployed in the nuclear sphere become much more advanced, accurate, and difficult to understand? Will the next officer in Petrov's position be more likely to trust the "smart" machine in front of him?

On this month's podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear postures and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons and the Future of War. Mike Horowitz is a professor of political science at the University of Pennsylvania and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.

Listen to the full conversation here.

  • Commentary | Foreign Policy | September 12, 2018
    The Algorithms of August

    An artificial intelligence arms race is coming. It is unlikely to play out in the way that the mainstream media suggest, however: as a faceoff between the United States and Ch...

    By Michael Horowitz

  • Commentary | War on the Rocks | September 6, 2018
    Beyond Killer Robots: How Artificial Intelligence Can Improve Resilience in Cyber Space

    Recently, one of us spent a week in China discussing the future of war with a group of American and Chinese academics. Everyone speculated about the role of artificial intelli...

    By Michael Sulmeyer & Kathryn Dura

  • Commentary | C4ISRNET | August 28, 2018
    New defense policy a reminder that US is not alone in AI efforts

    The National Defense Authorization Act (NDAA) for fiscal year 2019 is evidence the United States is developing a more robust artificial intelligence (AI) strategy. The new law...

    By Kathryn Dura

  • Commentary | Bulletin of the Atomic Scientists | August 20, 2018
    Artificial intelligence beyond the superpowers

    Much of the debate over how artificial intelligence (AI) will affect geopolitics focuses on the emerging arms race between Washington and Beijing, as well as investments by ma...

    By Michael Horowitz & Itai Barsade
