October 19, 2018
How to Defend Against Foreign Influence Campaigns: Lessons from Counter-Terrorism
Two weeks ago, a grand jury in Pennsylvania indicted seven Russian intelligence officers for state-sponsored hacking and influence operations. Both U.S. Attorney General Jeff Sessions and FBI Director Christopher Wray affirmed the gravity of the crime. The same day, Vice President Mike Pence warned that China is launching an “unprecedented” effort to influence public opinion ahead of the 2018 and 2020 elections. From Russia to China to Iran, America’s adversaries are increasingly using influence operations — the organized use of information to intentionally confuse, mislead, or shift public opinion — to achieve their strategic aims.
To most Americans, the recent onslaught of influence operations at home may feel like a novel threat. But the reality is that while the battlefield has changed in important ways, nearly two decades of countering terrorism taught the United States a great deal about how to approach this latest challenge.
From 2010 to 2016, I worked as a counter-terrorism analyst supporting special operations forces, with three tours in Afghanistan. I went on to lead an intelligence team at Facebook that focused on counter-terrorism and global security. As such, I’ve had a front-row seat to observe how the government and tech companies dealt with terrorism’s online dimension, and to consider the similarities to today’s state-sponsored disinformation campaigns. Five key lessons stand out: improving technical methods for identifying foreign influence campaign content, encouraging platforms to collaborate, building partnerships between the government and the private sector, devoting the resources necessary to keep the adversary on the back foot, and taking advantage of U.S. allies’ knowledge.
Lesson 1: Hack It
A critical goal in any information battle is rooting out your adversary. In the tech sector, companies like Google, Twitter, and Facebook have employed a combination of methods to identify and address terrorist content. These techniques include automating content identification through machine learning, mitigating the amplification of nefarious content, and reducing anonymity.
Tech firms seeking to root out terrorism on their platforms have trained a variety of “classifiers” to help identify content that violates companies’ terms of service. Companies experimented with natural language understanding to help machines “understand” this content and categorize it as terrorist propaganda or not. Twitter’s algorithms flagged all but 7 percent of the accounts it suspended for promoting terrorism in late 2017. And of the 93 percent flagged by machines first, 74 percent were taken down before posting a single tweet. (There are no widely accepted, publicly available indications of how many violating accounts were not caught by Twitter’s internal tools or human review.) Additionally, companies like Microsoft and Facebook bank the text, phrases, images, and videos they characterize as terrorist propaganda and use this data to train their software to recognize similar content before it can proliferate. Finally, companies reduced anonymity and improved attribution by tightening verification processes (e.g., checking accounts that show signs of automation rather than human control) to combat the automated spread of malign messaging.
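The content-banking idea described above can be sketched in miniature. The snippet below is an illustrative simplification, not any company's actual system: production systems use perceptual hashes (such as Microsoft's PhotoDNA) that survive re-encoding and cropping, whereas this sketch uses an exact SHA-256 digest, and the `HashBank` class and sample byte strings are invented for the example.

```python
import hashlib

def digest(content: bytes) -> str:
    """Return a stable fingerprint for a piece of media."""
    return hashlib.sha256(content).hexdigest()

class HashBank:
    """A shared bank of fingerprints for content already confirmed as violating."""

    def __init__(self):
        self._known = set()

    def add(self, content: bytes) -> None:
        # Register content that a human reviewer confirmed as terrorist propaganda.
        self._known.add(digest(content))

    def is_known_violation(self, content: bytes) -> bool:
        # Check a new upload against the bank before it can proliferate.
        return digest(content) in self._known

bank = HashBank()
bank.add(b"<bytes of a confirmed propaganda video>")

print(bank.is_known_violation(b"<bytes of a confirmed propaganda video>"))  # True
print(bank.is_known_violation(b"<bytes of an ordinary holiday video>"))     # False
```

Because only fingerprints are stored and compared, banks like this can be shared across companies without exchanging the underlying media, which is how cross-platform databases of known violating content operate in practice.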
These techniques can be applied directly to policing influence operations on social media platforms. Social media companies should identify the methods that have most effectively made their platforms “hostile” to terrorist content. The three ways of countering terrorism highlighted above — content identification through machine learning technologies, mitigating the amplification of nefarious content, and reducing anonymity — are good starting points. For example, a bank of commonly recycled disinformation campaign terms or phrases can serve as a source for automatically flagging this content for human review. Algorithms similar to those that detect potential terrorist propaganda, retrained instead to detect bots and track trolls, can help reduce the amplifiers of state-led disinformation campaigns.
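The "bank of recycled phrases" approach might look like the following minimal sketch. Everything here is hypothetical for illustration: the phrases, the threshold, and the function name are invented, and a real system would use a trained classifier rather than literal substring matching. Note that matches only queue a post for human review; they do not remove it.

```python
# Hypothetical bank of phrases recycled across known disinformation campaigns.
DISINFO_PHRASE_BANK = {
    "crisis actors staged the",
    "secret memo proves",
    "mainstream media is hiding",
}

def flag_for_review(post: str, bank=DISINFO_PHRASE_BANK, threshold=1):
    """Return the matched phrases if the post crosses the review threshold.

    An empty list means the post is not flagged; a non-empty list means it
    should be routed to a human reviewer, not automatically removed.
    """
    text = post.lower()
    hits = [phrase for phrase in bank if phrase in text]
    return hits if len(hits) >= threshold else []

print(flag_for_review("A secret memo proves the election was rigged!"))
print(flag_for_review("Here are some photos from my vacation."))
```

Keeping a human in the loop matters here because disinformation, unlike much terrorist propaganda, often reuses language that also appears in legitimate political speech.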
Already, Facebook and Google are implementing practices along these lines, like de-ranking content rated false by third-party fact checkers and recalibrating search algorithms. Twitter’s suspension of 70 million accounts in May and June also signals a commitment to getting this right. However, there is much more to be done. Tech companies should make “hacking” the disinformation problem a genuine priority by directing a percentage of engineering capacity to automating the identification of state-sponsored influence campaigns. This can be incorporated into existing traditions, like a disinformation-themed Facebook “Hackathon,” and will help counter malicious foreign actors seeking to scale their operations using emerging technology.
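De-ranking, as mentioned above, suppresses rather than deletes: content rated false by fact checkers keeps circulating but is pushed down the feed. The sketch below illustrates the general idea only; the penalty factor, field names, and scoring function are invented assumptions, not any platform's actual ranking logic.

```python
# Illustrative penalty: content rated false keeps only 20% of its feed score.
FACT_CHECK_PENALTY = 0.2

def feed_score(engagement: float, rated_false: bool) -> float:
    """Compute a ranking score, discounting content rated false by fact checkers."""
    score = engagement
    if rated_false:
        score *= FACT_CHECK_PENALTY
    return score

posts = [
    {"id": "viral_hoax", "engagement": 80.0, "rated_false": True},
    {"id": "news_story", "engagement": 50.0, "rated_false": False},
]

# The hoax has more raw engagement, but the penalty drops it below the news story.
ranked = sorted(posts, key=lambda p: feed_score(p["engagement"], p["rated_false"]), reverse=True)
print([p["id"] for p in ranked])  # ['news_story', 'viral_hoax']
```

The design trade-off is that de-ranking avoids the free-speech objections that outright removal invites, while still denying disinformation the algorithmic amplification it depends on.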
Read the full article at War on the Rocks.