How Machine Learning, Deep Learning, and AI Expand the Threat Landscape

Smart companies are using artificial intelligence (AI) and machine learning (ML) techniques to improve the scale and speed at which they do business. Smart criminals are doing the same.

As AI and ML become more mainstream, security teams will likely see more adversaries attempting to poison and evade data sets.

“As the use of these techniques increases, so will the threats,” says Dr. Celeste Fralick, Chief Data Scientist and Senior Principal Engineer at McAfee. In its 2018 Threat Predictions report, McAfee Labs predicts that adversaries will make greater use of machine learning in their attacks over the next year.

The concept of adversarial machine learning – the study of how bad actors attack analytic models – has already been demonstrated by both black-hat and white-hat hackers, and attackers have used these techniques in documented real-world attacks.

For security teams, the next big challenge is understanding how adversaries can attack machine learning itself – part of the ongoing game of cat and mouse between defenders and attackers.

“Machines will work for anyone, fueling an arms race in machine-supported actions from defenders and attackers,” the McAfee Labs report states.

Defenders Are Smart, But So Are Adversaries
Because of the sheer volume of security data, machine learning is now widely used to detect cyber attacks – and that makes the analytic model itself a tempting target, whether or not the adversary can see inside it. There are a number of ways attackers can manipulate algorithms, including:

Influence: Attacking the model’s training set – the sample data that the algorithms use for learning – to affect the model’s decision-making capability.
Specificity: Attacks can be targeted at specific features in the model or be “indiscriminate” across the entire model.
Security integrity and availability: Integrity attacks corrupt all of the data or a targeted sample of it, while availability attacks flood the system with so many false positives that analysts either start ignoring the alerts or raise detection thresholds to silence them, unknowingly letting malware through.
Evasion and poisoning: Evasion increases false negatives by feeding the model perturbed or falsified inputs, while poisoning corrupts the data used to train the model (see the sketch after this list).
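
To make the last two concrete, here is a minimal sketch of both attacks against a toy classifier. It uses scikit-learn on synthetic data purely for illustration; it is not drawn from the McAfee report, and real attacks on production detection models are considerably more sophisticated.

```python
# Toy illustration of poisoning and evasion, assuming a simple linear model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Poisoning: flip the labels on 10% of the training set so the model
# learns a corrupted decision boundary.
y_poisoned = y.copy()
flip_idx = rng.choice(len(y), size=len(y) // 10, replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

clean_model = LogisticRegression().fit(X, y)
poisoned_model = LogisticRegression().fit(X, y_poisoned)

# Evasion: nudge a "malicious" sample (class 1) across the clean model's
# decision boundary with small perturbations until it is scored as benign.
w = clean_model.coef_[0]            # white-box: we can read the weights
x = X[y == 1][0].copy()
step = 0.1 * w / np.linalg.norm(w)
while clean_model.predict([x])[0] == 1:
    x -= step                       # perturb toward the benign side

print("evaded sample now classified as:", clean_model.predict([x])[0])
```

Note that this evasion sketch assumes white-box access to the model's weights; in a black-box setting an attacker would instead estimate the gradient from repeated queries to the model's outputs.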
Organizations need to stay one step ahead of this evolving threat. One way to do that, Fralick says, is to include analytic vulnerability checks of machine learning or deep learning models during development. “You need to plug the holes before products are shipped,” she explains. “Put analytic risk mitigation into development to predict, evade, learn and adapt from these types of attacks.”
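
As a sketch of what such a development-time check might look like, the test below gates a model on its accuracy after a fixed label-flipping poisoning budget. The budget and accuracy floor are made-up numbers for illustration; McAfee's internal process is not public.

```python
# Hypothetical pre-ship robustness check in the spirit of Fralick's advice.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def accuracy_under_poisoning(flip_fraction: float, seed: int = 0) -> float:
    """Train on a label-flipped training set, score on clean held-out data."""
    rng = np.random.default_rng(seed)
    X, y = make_classification(n_samples=2000, n_features=10, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    y_bad = y_tr.copy()
    flip_idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)),
                          replace=False)
    y_bad[flip_idx] = 1 - y_bad[flip_idx]
    model = LogisticRegression().fit(X_tr, y_bad)
    return model.score(X_te, y_te)

# Release gate: the budget (10% flipped labels) and the accuracy floor (0.80)
# are illustrative placeholders, not published McAfee thresholds.
assert accuracy_under_poisoning(0.10) > 0.80, "model is too fragile to poisoning"
```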

McAfee recommends a systemic and holistic approach to these threats, coupling machine learning models with process improvements in internal analytic development to increase protection against evasion, poisoning, and other types of attacks. Using AI to augment the skills and expertise of analysts – a “human-machine teaming” approach – will be more effective than machines alone.

“Human-machine teaming has tremendous potential to swing the advantage back to the defenders, and our job during the next few years is to make that happen,” the McAfee Labs report states. “To do that, we will have to protect machine detection and correction models from disruption, while continuing to advance our defensive capabilities faster than our adversaries can ramp up their attacks.”

To learn more about how humans and machines can team up to defend against attacks, visit https://www.mcafee.com/us/solutions/machine-learning.aspx.
