Attacking Machine Learning Systems
Bruce Schneier writes about how Machine Learning (ML) security is moving quickly, with increasingly sophisticated techniques being developed to steal or disrupt ML models and data.
He draws a parallel between ML security and cryptography: both face risks such as passive attacks that can scale to massive levels and complex mathematical attacks. However, he notes that software and network vulnerabilities still provide the most significant attack vector.
Everything he wrote three years ago still seems to apply today - it’s just coming more sharply into focus.
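To make the "complex mathematical attacks" concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a well-known evasion attack on ML models. It is not from Schneier's essay; the tiny PyTorch classifier and random input below are placeholders chosen purely for illustration.

```python
# Minimal FGSM sketch: nudge the input in the direction that increases the
# model's loss, so a small perturbation can flip the prediction.
# The model, input, and label here are illustrative stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in image classifier: 28x28 grayscale input, 10 classes.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

x = torch.rand(1, 1, 28, 28)   # placeholder "image"
y = torch.tensor([3])          # assumed true label
epsilon = 0.1                  # perturbation budget

x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Step along the sign of the input gradient, clipped to the valid pixel range.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The key point is that the attacker needs nothing more than gradient access (or a good surrogate model) and a small perturbation budget; the maths does the rest.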
Related Posts
- Backdoor Attack on Deep Learning Models in Mobile Apps: this MITRE ATLAS case study helps bring the framework to life.
- Adversarial Threat Landscape for Artificial-Intelligence Systems: if your organisation undertakes adversarial simulations, learn about ATLAS.
- AI Knows What You Typed: researchers apply ML and AI to side-channel attacks.