Adversarial ML Threat Matrix Framework Released to Protect Machine Learning Systems From Attacks

Microsoft, in collaboration with MITRE, IBM, NVIDIA, and Bosch, has released a new open framework called the Adversarial ML Threat Matrix, which aims to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems.

Just as artificial intelligence (AI) and ML are being deployed in a wide variety of novel applications, threat actors can not only abuse the technology to power their malware but can also fool machine learning models with poisoned datasets, causing otherwise beneficial systems to make incorrect decisions and threatening the stability and safety of AI applications.
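To make the poisoning idea concrete, here is a minimal, hypothetical sketch of a label-flipping attack using scikit-learn. The dataset, model choice, and 40% flip rate are illustrative assumptions, not details from the framework itself; real poisoning attacks are typically stealthier.

```python
# Hypothetical label-flipping data poisoning sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
clean_acc = clean.score(X_te, y_te)

# Attacker flips the labels of 40% of the training set (assumed attack budget).
y_poison = y_tr.copy()
idx = rng.choice(len(y_poison), size=int(0.4 * len(y_poison)), replace=False)
y_poison[idx] = 1 - y_poison[idx]

# Same model architecture, trained on the poisoned labels.
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poison)
poisoned_acc = poisoned.score(X_te, y_te)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

The poisoned model tests worse on the same held-out data, even though the attacker never touched the model or its test inputs, only the training labels.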

Over the last four years, attacks against machine learning systems have notably increased, Microsoft said in a blog post.

Adversarial ML Threat Matrix Framework

Primary audience is security analysts: Securing ML systems is an infosec problem. The goal of the Adversarial ML Threat Matrix is to position attacks on ML systems in a framework within which security analysts can orient themselves to these new and upcoming threats. The matrix is structured like the ATT&CK framework, owing to its wide adoption among the security analyst community – this way, security analysts do not have to learn a new or different framework to learn about threats to ML systems. At the same time, the Adversarial ML Threat Matrix is markedly different, because attacks on ML systems are inherently different from traditional attacks on corporate networks.

Grounded in real attacks on ML systems: The framework is seeded with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE have vetted as effective against production ML systems, so security analysts can focus on realistic threats. Microsoft also incorporated learnings from its extensive experience in this space: for instance, the company found that model stealing is often not the attacker's end goal but a stepping stone to more insidious model evasion. It also found that, when attacking an ML system, attackers combine "traditional techniques" like phishing and lateral movement with adversarial ML techniques.
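The model-stealing-then-evasion pattern can be sketched as follows: an attacker who can only query a victim model's prediction API trains a local surrogate on the returned labels, which can then be probed offline. All names, models, and query counts below are illustrative assumptions, not content from the Threat Matrix.

```python
# Hypothetical model-stealing ("model extraction") sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# The defender's ("victim") model, trained on data the attacker never sees.
X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X[:2000], y[:2000])

# Attacker sends self-chosen query points to the victim's prediction API
# and records only the returned labels.
rng = np.random.default_rng(1)
queries = rng.normal(size=(1500, 10))
stolen_labels = victim.predict(queries)

# Surrogate model trained purely on those query/response pairs.
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# How often the stolen copy agrees with the victim on unseen inputs.
holdout = X[2000:]
agreement = (surrogate.predict(holdout) == victim.predict(holdout)).mean()
print(f"surrogate/victim agreement: {agreement:.2f}")
```

A surrogate that agrees with the victim most of the time gives the attacker a local stand-in against which evasion inputs can be crafted cheaply, without further queries that might trigger detection.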

Open to the community:

“When it comes to Machine Learning security, the barriers between public and private endeavors and responsibilities are blurring; public sector challenges like national security will require the cooperation of private actors as much as public investments. So, in order to help address these challenges, we at MITRE are committed to working with organizations like Microsoft and the broader community to identify critical vulnerabilities across the machine learning supply chain. This framework is a first step in helping to bring communities together to enable organizations to think about the emerging challenges in securing machine learning systems more holistically.”

– Mikel Rodriguez, Director of Machine Learning Research, MITRE

