June 22, 2018
AI researchers should help with some military work
In January, Google chief executive Sundar Pichai said that artificial intelligence (AI) would have a “more profound” impact than even electricity. He was following a long tradition of corporate leaders claiming their technologies are both revolutionary and wonderful.
The trouble is that revolutionary technologies can also revolutionize military power. AI is no exception. On 1 June, Google announced that it would not renew its contract supporting a US military initiative called Project Maven. This project is the military’s first operationally deployed ‘deep-learning’ AI system, which uses layers of processing to transform data into abstract representations — in this case, to classify images in footage collected by military drones. The company’s decision to withdraw came after roughly 4,000 of Google’s 85,000 employees signed a petition demanding that Google not build “warfare technology”.
Such recusals create a great moral hazard. Incorporating advanced AI technology into the military is as inevitable as incorporating electricity once was, and this transition is fraught with ethical and technological risks. It will take input from talented AI researchers, including those at companies such as Google, to help the military to stay on the right side of ethical lines.
Last year, I led a study on behalf of the US Intelligence Community, showing that AI’s transformative impacts will cover the full spectrum of national security. Military robotics, cybersecurity, surveillance and propaganda are all vulnerable to AI-enabled disruption. The United States, Russia and China all expect AI to underlie future military power, and the monopoly enjoyed by the United States and its allies on key military technologies, such as stealth aircraft and precision-guided weapons, is nearing an end.
Read the Full Article at Nature