
Adversarial Machine Learning

What is Adversarial Machine Learning?

Adversarial Machine Learning is the practice of tricking machine learning or artificial intelligence models, often with malicious intent. Also referred to as Adversarial AI, the technique uses slight, carefully chosen manipulations of inputs to circumvent or fool machine learning algorithms. These manipulations can cause misidentification or even render something effectively invisible to a machine learning system. As machine learning and AI are used more heavily in cybersecurity, circumventing them can enable successful cyber attacks.
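To make the idea of "slight manipulations to inputs" concrete, here is a minimal sketch of one well-known attack, the Fast Gradient Sign Method (FGSM), in PyTorch. It assumes a hypothetical pretrained image classifier `model` and a labeled input pair `(x, y)`; it is an illustration of the concept, not a production attack tool.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that most
# increases the model's loss, producing an input that looks nearly identical
# to a human but can be misclassified by the model.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return x plus a small, gradient-aligned perturbation of size epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp to a
    # valid pixel range so the change stays visually subtle.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with a small `epsilon`, the perturbed input can flip the model's prediction while remaining almost indistinguishable from the original to a human observer.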

How to Recognize This Threat: Adversarial machine learning threats will exist for as long as machine learning and artificial intelligence do, which makes them an ongoing concern. A common example in the cyber threat world is attempted evasion of spam and phishing filters to deliver malicious emails.

How to Prevent This Threat: Creators of machine learning and AI models continually refine them and implement methods to counteract adversarial machine learning attacks. But the best form of prevention is a layered security approach: relying on a single form of machine-learning-based threat detection leaves networks vulnerable. Add layers by training employees, segmenting networks, requiring multifactor authentication, and deploying multiple security solutions.
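One widely used technical hardening method that model creators apply is adversarial training, where the model is trained on deliberately perturbed examples alongside clean ones. The sketch below illustrates the idea in PyTorch, reusing the hypothetical `fgsm_perturb` helper from the earlier example; it is one layer of defense under those assumptions, not a complete solution.

```python
# Minimal adversarial-training sketch: each batch is augmented with
# FGSM-perturbed copies so the model also learns from manipulated inputs.
import torch

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on a mix of clean and adversarially perturbed inputs."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)   # craft perturbed copies of the batch
    optimizer.zero_grad()                        # clear gradients left by the attack step
    inputs = torch.cat([x, x_adv])               # clean + adversarial examples
    targets = torch.cat([y, y])                  # same labels for both halves
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Adversarial training raises the cost of input-manipulation attacks, but like any single control it should sit alongside the organizational measures above rather than replace them.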