September 06, 2018
Beyond Killer Robots: How Artificial Intelligence Can Improve Resilience in Cyber Space
Recently, one of us spent a week in China discussing the future of war with a group of American and Chinese academics. Everyone speculated about the role of artificial intelligence (AI), but, surprisingly, many Chinese participants equated AI almost exclusively with armies of killer robots.
Popular imagination and much of current AI scholarship tend to focus, understandably, on the more glamorous aspects of AI: the stuff of science fiction and the Terminator movies. While lethal autonomous weapons have been a hot topic in recent years, they are only one aspect of war that will change as artificial intelligence becomes more sophisticated. As Michael Horowitz wrote in the Texas National Security Review, AI will not manifest simply as a weapon; rather, it is an enabler that can support a broad spectrum of technologies. We agree: AI's most substantial impacts are likely to fly under the radar in discussions about its potential. A more holistic conversation should therefore acknowledge AI's potential effects in cyber space, not in facilitating cyber attacks, but in improving cyber security at scale through increased asset awareness and minimized source-code vulnerabilities.
Opportunities for AI-Supported Cyber Defense
One of the most common refrains about fighting in cyber space is that the offense has the advantage over the defense: The offense only needs to be successful once, while the defense needs to be perfect all the time. Even though this has always been a bit of an exaggeration, we believe artificial intelligence has the potential to dramatically improve cyber defense to help right the offense-defense balance in cyber space.
Much of cyber defense is about situational awareness of one's own assets. Former White House Cybersecurity Coordinator Rob Joyce said as much in a 2016 presentation at the USENIX Enigma conference: "If you really want to protect your network," he advised, "you have to know your network, including all the devices and technology in it." A successful attacker will often "know networks better than the people who designed and run them." With the right combination of data, computing power, and algorithms, artificial intelligence can help defenders gain far greater mastery over their own data and networks, detect anomalous changes (whether from insider threats or from external hackers), and quickly address configuration errors and other vulnerabilities. This will shrink the set of hacking opportunities, known as the attack surface, available for adversaries to exploit. In this way, network defenders can focus their resources on the most sophisticated and dangerous state-sponsored campaigns.
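To make the idea concrete, here is a minimal sketch of what machine-assisted asset awareness might look like, reduced to a simple statistical baseline. The inventory, traffic figures, and threshold are hypothetical, and a real system would learn from far richer telemetry; this only illustrates the principle of flagging unknown devices and anomalous behavior.

```python
# Minimal sketch: flag unknown devices and anomalous traffic against a
# simple statistical baseline. All asset data here is hypothetical; real
# systems would learn from far richer telemetry than daily byte counts.
from statistics import mean, stdev

# Hypothetical inventory: the devices defenders know they own.
KNOWN_ASSETS = {"10.0.0.2", "10.0.0.3", "10.0.0.7"}

# Hypothetical per-device history of daily outbound traffic, in megabytes.
TRAFFIC_HISTORY = {
    "10.0.0.2": [120.0, 115.0, 130.0, 118.0, 125.0],
    "10.0.0.3": [40.0, 38.0, 42.0, 41.0, 39.0],
}

def audit(observed):
    """Return alerts for unknown assets and statistical traffic outliers."""
    alerts = []
    for device, volume in observed.items():
        if device not in KNOWN_ASSETS:
            alerts.append(f"unknown device on network: {device}")
            continue
        history = TRAFFIC_HISTORY.get(device, [])
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            # Flag traffic more than three standard deviations from baseline.
            if sigma > 0 and abs(volume - mu) / sigma > 3:
                alerts.append(f"anomalous traffic from {device}: {volume} MB")
    return alerts

if __name__ == "__main__":
    today = {"10.0.0.2": 640.0, "10.0.0.3": 41.0, "10.0.0.99": 5.0}
    for alert in audit(today):
        print(alert)
```

Even this toy version captures Joyce's point: a defender who already knows every device and its normal behavior can spot an intruder's footprint the moment it deviates.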
This is not science fiction: DARPA experimented with this kind of self-healing computer in its 2016 Cyber Grand Challenge. There, teams competed to develop automated defensive systems capable of detecting software flaws, then devising and applying patches to fix them in real time. By automatically recognizing and repairing flaws, these systems could heal themselves in seconds, where human analysts had previously needed days.
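The flaw-discovery half of that find-and-fix loop can be sketched in a few lines. The parser below is a made-up toy with a planted bug, and unlike the Cyber Grand Challenge systems, this sketch makes no attempt to synthesize a patch; it only shows how automated random testing surfaces the crashing inputs a patching system would then have to fix.

```python
# Toy fuzzer sketch: automated discovery of a planted parser bug.
# The parser is hypothetical; Cyber Grand Challenge systems worked on
# real binaries and also generated patches, which this does not attempt.
import random

def parse_record(data):
    """A deliberately buggy parser that trusts an attacker-controlled length byte."""
    declared_length = data[0]
    # Bug: reads a checksum at an offset the input itself dictates,
    # without checking the input is actually that long (out-of-bounds read).
    checksum = data[1 + declared_length]
    return checksum

def fuzz(trials=10_000):
    """Throw random inputs at the parser; collect the ones that crash it."""
    crashing = []
    for _ in range(trials):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
        try:
            parse_record(blob)
        except IndexError:  # the crash an automated patcher would need to fix
            crashing.append(blob)
    return crashing

if __name__ == "__main__":
    crashes = fuzz()
    print(f"found {len(crashes)} crashing inputs, e.g. {crashes[:3]!r}")
```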
Another cyber defense challenge that artificial intelligence could help mitigate has to do with the prevalence of code reuse. It may come as a surprise to readers that coders don't always write their own code from scratch. Repositories like GitHub host open-source libraries and modules that allow coders to build on earlier work by others. As Mark Curphey, the CEO of SourceClear, a startup that focuses on securing open-source code, has said, "Everyone uses open source … Ninety percent of code could be stuff they didn't create." While this creates great efficiencies, eliminating the need to reinvent the wheel, it is also risky because no one is accountable for the integrity of the code at the core of applications and firmware. No single coder (or the software company employing them) has an incentive to devote resources to the painstaking task of auditing millions of lines of code.
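One partial remedy, and roughly what dependency-scanning tools like SourceClear automate, is to check a project's manifest against a database of known-vulnerable versions. The sketch below compresses that idea to a few lines; the package names and advisory data are invented for illustration.

```python
# Minimal sketch of a dependency audit: match pinned versions in a
# manifest against known-vulnerable releases. The advisory data and
# package names are invented; real tools query curated databases.
VULNERABLE = {
    "example-http": {"1.0.2", "1.0.3"},
    "example-crypto": {"2.1.0"},
}

def audit_manifest(manifest_text):
    """Parse 'name==version' lines and flag any known-vulnerable pins."""
    findings = []
    for line in manifest_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        if version.strip() in VULNERABLE.get(name.strip(), set()):
            findings.append(f"{line} matches a known vulnerability")
    return findings

if __name__ == "__main__":
    manifest = "example-http==1.0.2\nexample-crypto==3.0.0\n"
    for finding in audit_manifest(manifest):
        print(finding)
```

Where AI could go further than this simple lookup is in auditing the reused code itself, learning what vulnerable patterns look like across the millions of lines that no individual maintainer has an incentive to review by hand.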
Read the Full Article at War on the Rocks