Algorithmic Warfare: AI | A Tool For Good and Bad

With promises of crunching mounds of data into bite-sized nuggets of actionable information, machine learning could be a breakthrough for the intelligence community. However, vulnerabilities within such systems could open them up to cyber attacks.

Jason Matheny, director of the Intelligence Advanced Research Projects Activity, said his organization funds research at over 500 universities, colleges, businesses and labs. A third of his portfolio focuses on machine learning, speech recognition and video analytics.

“For us, machine learning is an approach to dealing with this deluge of data that the intelligence community is confronted with,” he said during a panel discussion at a Defense One event focusing on artificial intelligence. Machine learning is a subset of artificial intelligence.

However, despite these promises, the community has become anxious about potential vulnerabilities lurking within the systems, he said.

“Right now, what we receive in the intelligence community is usually too brittle,” Matheny said. “It’s too insecure for us to deploy.”

For example, most image classifiers can be spoofed in less than an hour by a college student, he said.

“A favorite parlor trick now of computer science undergrads is fooling the state-of-the-art image classifier to think that this school bus picture is actually a picture of an ostrich,” he said.
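The mechanics behind that parlor trick are simple. The sketch below is illustrative only, not drawn from the article or from IARPA's work: it uses the fast gradient sign method to nudge an image's pixels just enough to change a toy PyTorch model's prediction. The model, the image tensor and the class index are all placeholders.

```python
# Illustrative only: a fast-gradient-sign-method (FGSM) perturbation against a
# toy, untrained PyTorch model. A real attack would target a trained image
# classifier; the tensors and class index below are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
model.eval()

image = torch.rand(1, 3, 32, 32)   # stand-in for the "school bus" photo
true_label = torch.tensor([0])     # pretend class 0 is "school bus"
epsilon = 0.03                     # perturbation budget; kept small so the change is hard to see

image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Because the perturbation is capped at a small epsilon per pixel, the altered picture still looks like a school bus to a person even when the classifier's answer changes.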

Other vulnerabilities include “data poisoning” attacks, in which a small amount of training data is deliberately mislabeled so that it confuses the classifier. Another is “model inversion,” in which an attacker queries a trained classifier to reconstruct the sensitive data it was trained on.
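As a rough illustration of the poisoning idea — again, not code from the report or from IARPA — the sketch below flips a fraction of the training labels in a synthetic scikit-learn dataset and compares the resulting model against one trained on clean data. The dataset, the model and the 10 percent flip rate are arbitrary choices, and the size of the accuracy drop will vary.

```python
# Illustrative only: "label flipping" data poisoning on a synthetic dataset.
# Mislabel a slice of the training data, retrain, and compare test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
poisoned_y = y_tr.copy()
flip_idx = rng.choice(len(poisoned_y), size=len(poisoned_y) // 10, replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]   # flip roughly 10% of the labels

poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, poisoned_y)

print("accuracy, clean training data:   ", clean_model.score(X_te, y_te))
print("accuracy, poisoned training data:", poisoned_model.score(X_te, y_te))
```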

“Most of the commonly used machine learning systems are vulnerable to these kind of attacks,” he said. “We as a community need to … become much more careful in the way we develop machine learning systems that are defensive against various kinds of adversarial attacks.”

Red teams — which attempt to find and document holes in information technology systems — are becoming prevalent in the cybersecurity community, Matheny said. That same approach “is now sorely needed in the machine learning community.”

IARPA is working on research projects that target the issue.

“One is developing classifiers that are robust to various kinds of adversarial inputs,” he said. Another is “understanding the different failure modes — the ways in which classifiers can be attacked, creating … ensemble approaches so that you can fool one classifier some of the time, but you can’t fool all of the classifiers all of the time.”
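A minimal sketch of the ensemble idea Matheny describes, using off-the-shelf scikit-learn components on synthetic data; the specific classifiers and dataset are stand-ins. The point is only that the final decision comes from a majority vote, so an input crafted to fool one member does not automatically sway the vote.

```python
# Illustrative only: a majority-vote ensemble, so that fooling any single
# member classifier is not enough on its own to change the final decision.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier(random_state=1)),
    ],
    voting="hard",  # the prediction is a majority vote across the three members
).fit(X, y)

suspect_input = X[:1]  # stand-in for a possibly adversarial input
print("individual votes: ", [est.predict(suspect_input)[0] for est in ensemble.estimators_])
print("ensemble decision:", ensemble.predict(suspect_input)[0])
```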

When asked if he was anxious about ethical issues surrounding artificial intelligence, Matheny said security was much more of a concern.

“We’re much less worried about ... Terminator and SkyNet scenarios than we are of sort of ‘Digital Flubber’ scenarios — you know, just really badly engineered systems that are vulnerable to either error or to malicious attack from outside,” he said.

Looking forward, Matheny said the future of AI is likely a positive one in which “human prosperity increases due to the availability of these tools to reduce the monotonous tasks that many of us would like to part with.” However, the defense industry and the government must work to secure that future, he said.

“For those of you … who either build AI systems or who consume them, please start insisting on security in these systems,” he said.

Matheny pointed to a recent Center for a New American Security report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation,” by Paul Scharre and Gregory C. Allen, which found that today’s AI systems suffer from a number of unresolved vulnerabilities.

“These vulnerabilities are distinct from traditional software vulnerabilities and demonstrate that while AI systems can exceed human performance in many ways, they can also fail in ways that a human never would,” the report said.

It is likely that attacks on artificial intelligence systems will become more common, it added.

Ironically, many of these attacks will be powered by machine learning, the report said.

“AI is being widely used on the defensive side of cybersecurity, making certain forms of defense more effective and scalable, such as spam and malware detection,” the report said. “At the same time, many malicious actors have natural incentives to experiment with using AI to attack the typically insecure systems of others.”

Artificial intelligence can attack targets faster than humans can, and at lower labor cost, the report noted.

“To date, the publicly-disclosed use of AI for offensive purposes has been limited to experiments by ‘white hat’ researchers, who aim to increase security through finding vulnerabilities and suggesting solutions,” Scharre and Allen said.

However, the pace of progress in AI suggests that adversaries will likely soon launch cyber attacks that leverage machine learning capabilities, they said.

Other countries are interested in using artificial intelligence for cybersecurity, the report noted, but the same capabilities could enable adversaries to launch large-scale attacks.

“Recent years have seen impressive and troubling proofs of concept of the application of AI to offensive applications in cyberspace,” the report said.

For example, ZeroFox, a digital security company with offices in Baltimore and London, demonstrated that a fully automated spear phishing system could create tailored tweets on Twitter based on a user’s interests, achieving a high click-through rate on links that could have been malicious, the report said.

Scharre and Allen also said that they “expect attackers to leverage the ability of AI to learn from experience in order to craft attacks that current technical systems and IT professionals are ill-prepared for, absent additional investments.”

nationaldefensemagazine.org