Adversarial ML Threat Matrix Framework Released to Protect Machine Learning Systems From Attacks

Posted on 24/10/2020 (updated 10/06/2021) by Techuser

Microsoft, in collaboration with MITRE, IBM, NVIDIA, and Bosch, has released a new open framework called the Adversarial ML Threat Matrix, which aims to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems.

Just as artificial intelligence (AI) and ML are being deployed in a wide variety of novel applications, threat actors can not only abuse the technology to power their malware but can also leverage it to fool machine learning models with poisoned datasets, thereby causing otherwise beneficial systems to make incorrect decisions and posing a threat to the stability and safety of AI applications.
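To make the poisoning risk concrete, here is a minimal Python sketch of a label-flipping attack, in which an attacker corrupts a portion of the training labels to degrade the resulting model. The synthetic dataset, logistic regression model, and 30% flip rate are illustrative assumptions, not details from the framework.

```python
# Minimal sketch of label-flipping data poisoning (illustrative assumptions:
# synthetic data, LogisticRegression, 30% flip rate).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Clean baseline.
clean_acc = train_and_score(y_train)

# Poison 30% of the training labels by flipping them (0 <-> 1).
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_acc = train_and_score(poisoned)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")  # typically noticeably lower
```

Even this crude attack measurably hurts test accuracy; real-world poisoning is usually stealthier, targeting specific inputs rather than overall performance.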

Over the last four years, attacks against machine learning systems have notably increased, Microsoft said in a blog post.

Adversarial ML Threat Matrix Framework

Primary audience is security analysts: Securing ML systems is an infosec problem. The goal of the Adversarial ML Threat Matrix is to position attacks on ML systems in a framework within which security analysts can orient themselves to these new and upcoming threats. The matrix is structured like the ATT&CK framework, owing to its wide adoption among the security analyst community; this way, security analysts do not have to learn a new or different framework to learn about threats to ML systems. At the same time, the Adversarial ML Threat Matrix is markedly different, because attacks on ML systems are inherently different from traditional attacks on corporate networks. A rough illustration of this ATT&CK-style layout follows below.
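The sketch below shows how a security team might encode ATT&CK-style matrix entries for triage tooling. The tactic columns mirror the ATT&CK layout the matrix adopts, but the specific technique entries here are illustrative assumptions, not a verbatim copy of the matrix.

```python
# Sketch of encoding ATT&CK-style matrix entries for analyst tooling.
# Technique entries are illustrative assumptions, not the actual matrix.
from dataclasses import dataclass

@dataclass(frozen=True)
class Technique:
    tactic: str        # ATT&CK-style column, e.g. "Initial Access"
    name: str          # technique within that tactic
    ml_specific: bool  # True if unique to ML systems (vs. traditional infosec)

MATRIX = [
    Technique("Reconnaissance", "Gather victim ML model details", True),
    Technique("Initial Access", "Phishing for ML pipeline credentials", False),
    Technique("Execution", "Submit adversarial inputs to the model", True),
    Technique("Exfiltration", "Model stealing via prediction API", True),
]

def techniques_for(tactic: str) -> list[str]:
    """Let an analyst pivot by tactic, just as they would in ATT&CK."""
    return [t.name for t in MATRIX if t.tactic == tactic]

print(techniques_for("Exfiltration"))  # ['Model stealing via prediction API']
```

Keeping the familiar tactic-by-technique shape is the point: analysts can reuse the mental model and tooling conventions they already have from ATT&CK.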

Grounded in real attacks on ML systems: The framework is seeded with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE have vetted as effective against production ML systems, so security analysts can focus on realistic threats. Microsoft also incorporated lessons from its extensive experience in this space into the framework: for instance, the company found that model stealing is often not the attacker's end goal but instead leads to more insidious model evasion, and that when attacking an ML system, attackers use a combination of "traditional" techniques like phishing and lateral movement alongside adversarial ML techniques.
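That model-stealing-to-evasion chain can be sketched end to end: an attacker queries a black-box "victim" model, trains a surrogate on the returned labels, and then uses the surrogate to craft evasive inputs. The models and data below are placeholder assumptions for illustration only, not an attack from the framework's case studies.

```python
# Sketch of the model-stealing -> model-evasion chain (all models and data
# are placeholder assumptions for illustration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)  # black box to attacker

# Step 1: model stealing -- query the victim on attacker-chosen inputs and
# train a surrogate on the returned labels.
queries = np.random.default_rng(1).normal(size=(2000, 10))
stolen_labels = victim.predict(queries)
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Step 2: model evasion -- nudge an input along the surrogate's weight
# vector until the victim's prediction flips.
x0 = X[0].copy()
original_pred = victim.predict([x0])[0]
w = surrogate.coef_[0]
direction = w / np.linalg.norm(w)
sign = -1.0 if surrogate.predict([x0])[0] == 1 else 1.0
x = x0.copy()
for step in range(1, 51):
    x += sign * 0.1 * direction
    if victim.predict([x])[0] != original_pred:
        print(f"victim prediction flipped after {step} steps")
        break
```

The surrogate gives the attacker a white-box stand-in for the victim, which is why model stealing so often precedes evasion rather than being the end goal.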

Open to the community:

“When it comes to Machine Learning security, the barriers between public and private endeavors and responsibilities are blurring; public sector challenges like national security will require the cooperation of private actors as much as public investments. So, in order to help address these challenges, we at MITRE are committed to working with organizations like Microsoft and the broader community to identify critical vulnerabilities across the machine learning supply chain. This framework is a first step in helping to bring communities together to enable organizations to think about the emerging challenges in securing machine learning systems more holistically.”

– Mikel Rodriguez, Director of Machine Learning Research, MITRE
