InfoSecurity Professional INSIGHTS Archive: December 2020
How to Stay Ahead of Adversarial Machine Learning
By Shawna McAlearney
Image: Kehan Chen/Getty Images
As artificial intelligence technologies become more prevalent in business, so too do the potential security risks of machine learning (ML), in which machines access data and learn from their own experience rather than being programmed. One of the biggest security concerns involves adversarial machine learning, in which an attacker uses bad, or deceptive, input to exploit the way artificial intelligence algorithms work and cause a malfunction in a machine learning model.
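The article doesn't name a specific attack technique, but a toy example helps make "bad input" concrete. The Python sketch below applies the well-known fast gradient sign method to a hand-wired logistic-regression classifier; the weights, input values and step size are invented for illustration and are not from the article.

```python
# Toy illustration (not from the article) of deceptive input: the fast
# gradient sign method nudges a legitimate input in the direction that
# most increases the model's loss, flipping its prediction.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# A hand-wired binary classifier standing in for a trained model.
w = np.array([1.5, -2.0, 0.5])
b = 0.1


def predict(x):
    return sigmoid(w @ x + b)  # probability of the positive class


x = np.array([0.9, -0.4, 0.3])  # a legitimate input, true class 1
y = 1.0                         # its true label

# For cross-entropy loss, the gradient with respect to the input is (p - y) * w.
grad_x = (predict(x) - y) * w

# Step each feature in the sign of the gradient to raise the loss.
epsilon = 0.8
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean prediction:       {predict(x):.2f}")      # ~0.92 -> confidently positive
print(f"adversarial prediction: {predict(x_adv):.2f}")  # ~0.31 -> flipped to negative
```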
"Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms," noted MITRE in a recent press release. "Data can be weaponized in new ways, which requires an extension of how we model cyber adversary behavior to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle."
An interesting example is a vulnerability in self-driving cars. Ariel Herbert-Voss, a machine learning researcher at Harvard University, tricked a Tesla by pouring salt on the cold ground. Following the salt trail, the Tesla began "driving off into the sunset when it should not have been doing that," said Herbert-Voss, a Ph.D. candidate and research scientist at OpenAI. In her presentation at Black Hat USA 2020, she also noted a similar study done with stickers, with the same results.
"There is no need to generate any sort of bespoke examples for this kind of attack; you literally are creating a physical artifact in the real world in which the auto-pilot algorithm interprets it as part of the environment and behaves accordingly," she said. "So it is the perfect example of bad inputs. The fact that we have done it on a real system that exists in the wild means that it is something that should be considered a reasonable example."
Transportation prediction algorithms also have an Achilles’ heel. A performance artist in Germany managed to convince the congestion algorithm in Google Maps that there was a "massive pileup," noted Herbert-Voss, based solely on having 99 smartphones running Google Maps sitting in a wagon, because each one was treated as a discrete car object.
Recommendation engines, like those of Amazon, Google (in the context of search results) and many others, are also vulnerable to bad input. "Black hat SEO has been around, and that’s all about trying to use things like click farms and review farming if you’d like Yelp or Amazon to pump up your product," she said. "It’s a really good example of bad inputs and some of the harm that comes about from not thinking too clearly about where you are getting data."
"The more popular that machine learning gets as a tool to use for business analytics," said Herbert-Voss, "the more popular it gets to be a target for people to focus on trying to make money off of the exploitation of these kinds of systems."
This surge in interest recently prompted a number of industry leaders to join together and release the Adversarial ML Threat Matrix, a framework designed to "empower security analysts to detect, respond to, and remediate threats against ML systems."
Those involved—MITRE, Microsoft, Bosch, IBM, NVIDIA, Airbus, Deep Instinct, Two Six Labs, the University of Toronto, Cardiff University, Software Engineering Institute/Carnegie Mellon University, PricewaterhouseCoopers and Berryville Institute of Machine Learning—collaborated to promote awareness of ML risks and remediations before problems escalate. Feedback and contributions from both industry and academic researchers are encouraged.
In a MITRE Q&A on machine learning, Mikel Rodriguez, director of MITRE’s Decision Science research programs, said that in the 1980s, "people were just trying to make the internet work; they weren’t building in security, and we’ve been paying the price ever since. Same with the AI field. Suddenly, machine learning began working and growing much faster than we expected. The good news with AI is that it’s potentially not too late."
In an announcement about the threat matrix on its security blog, Microsoft cited a Gartner report predicting that, by 2022, 30% of all AI cyberattacks will leverage training-data poisoning, model theft or adversarial samples to attack machine learning systems.
"Certainly, there’s some risk—whether it’s just a failure of the system or because a malicious actor is causing it to behave in unexpected ways, AI can cause significant disruptions," said Charles Clancy, MITRE’s chief futurist, senior vice president, and general manager of MITRE Labs, in the same MITRE Q&A. "But when it comes to understanding machine learning risk … AI is going to add efficiencies and capabilities to systems all around us. … [But] some fear that the systems we depend on, like critical infrastructure, will be under attack, hopelessly hobbled because of AI gone bad."
Adversarial machine learning in the real world is quite different from academic research. Herbert-Voss said, "Most attacks proposed in machine learning research don’t work in the real world."
While she said no one has open data on how common these kinds of attacks are, "we do know that they happen and that there are ways we can protect against them."
Her advice comes down to three recommendations, centered on avoiding model leakage and bad inputs, that she said will cut down on issues by about 85% (a combined sketch of all three follows the list below).
1. Use blocklists. Numerous organizations maintain free blocklists of IP addresses and URLs belonging to systems and networks suspected of malicious online activity. Some require registration and approval to access their lists.
2. Verify data accuracy with multiple signals. For example, in a facial recognition system, Herbert-Voss saw a 75% reduction in adversarial example-induced false positives when she used two camera sources instead of one for identification. She noted the reduction grew when the two cameras were placed farther apart.
3. Don’t expose raw statistics to users. Herbert-Voss warned there is a trade-off between providing interpretable predictions and providing enough information that an attacker could reverse-engineer your model. She said "rounding statistics is the best way to still provide information to users and reduces the ability to reverse-engineer models deployed on cloud API service by about 60%."
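The article gives no implementation details for these defenses, so the following is a minimal, hypothetical Python sketch of all three recommendations. The function names, the blocklist file, the agreement threshold and the rounding precision are assumptions for illustration, not anything Herbert-Voss specified.

```python
"""Minimal sketch of the three hardening steps above (illustrative only).

Assumptions: a locally cached IP blocklist file, two camera-based match
scores for the same subject, and a prediction API that would otherwise
return raw confidence values. None of these names come from the article.
"""
import ipaddress


def load_blocklist(path):
    """1. Blocklists: load CIDR ranges from a reputation feed cached on disk."""
    with open(path) as fh:
        return {ipaddress.ip_network(line.strip()) for line in fh if line.strip()}


def is_blocked(client_ip, blocklist):
    """Refuse input (or training data) coming from known-bad sources."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in blocklist)


def cross_checked_match(score_cam_a, score_cam_b, threshold=0.9):
    """2. Multiple signals: accept a face match only if two independently
    placed cameras both agree."""
    return score_cam_a >= threshold and score_cam_b >= threshold


def public_prediction(label, confidence):
    """3. Don't expose raw statistics: round the confidence before it leaves
    the service, making it harder to reverse-engineer the model through
    repeated queries."""
    return {"label": label, "confidence": round(confidence, 1)}


if __name__ == "__main__":
    blocklist = load_blocklist("ip_blocklist.txt")   # hypothetical feed file
    if not is_blocked("203.0.113.7", blocklist):     # RFC 5737 example address
        if cross_checked_match(0.96, 0.93):
            print(public_prediction("badge_1234", 0.8731))
            # -> {'label': 'badge_1234', 'confidence': 0.9}
```

Rounding to one decimal place is only a placeholder; the right granularity depends on how much detail users genuinely need from the predictions.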
"There’s a truism in the power industry that the most dangerous adversaries to our electric grid are … squirrels," Clancy said. "Keep that in mind—there are risks to AI, but it’s also extremely valuable. Either way, the train is barreling down the tracks, so we need to ensure AI is as safe as possible from attackers."
Shawna McAlearney, based in Las Vegas, is a regular contributor to INSIGHTS.