Backdoor Attack on Deep Learning Models in Mobile Apps
Deep learning models are increasingly used as critical components in mobile applications. Researchers from Microsoft Research demonstrated that many deep learning models deployed in mobile apps are vulnerable to backdoor attacks via "neural payload injection." They conducted an empirical study on real-world mobile deep learning apps collected from Google Play and identified 54 apps that were vulnerable to this attack, including popular security- and safety-critical applications used for cash recognition, parental control, face authentication, and financial services.
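To make the idea concrete, the sketch below shows one way a "neural payload" could be grafted onto an existing image classifier: a tiny trigger-detector subnetwork is added alongside the victim model, and its score is used to blend the benign prediction with an attacker-chosen output. This is only an illustrative Keras sketch of the concept, not the authors' actual pipeline (which targets the compiled models packaged inside the APK); the layer sizes, trigger assumptions, and function names here are all hypothetical.

```python
# Minimal sketch of the "neural payload injection" idea (illustrative only).
# Assumes a Keras image classifier `victim_model` whose output is a
# probability vector over `num_classes` classes.
import tensorflow as tf

def inject_neural_payload(victim_model, num_classes, target_class):
    """Wrap a victim classifier with a small trigger-detector payload.

    When the trigger detector fires, the wrapped model's output is pushed
    toward a one-hot vector for the attacker-chosen class; otherwise the
    original prediction passes through essentially unchanged.
    """
    inputs = tf.keras.Input(shape=victim_model.input_shape[1:])

    # Original, unmodified prediction path.
    benign_pred = victim_model(inputs)

    # Tiny "payload" subnetwork, trained separately by the attacker to
    # recognise a physical trigger (e.g. a printed sticker) in the frame.
    x = tf.keras.layers.Conv2D(8, 3, activation="relu")(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    trigger_score = tf.keras.layers.Dense(1, activation="sigmoid")(x)

    # Attacker-chosen output, emitted only when the trigger is present.
    target_onehot = tf.one_hot(target_class, num_classes, dtype=tf.float32)

    # Blend: benign output when trigger_score ~ 0, hijacked output when ~ 1.
    hijacked = trigger_score * target_onehot + (1.0 - trigger_score) * benign_pred
    return tf.keras.Model(inputs, hijacked)
```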
This MITRE ATLAS case study helps bring the framework referenced above to life.
Initial access is gained via a malicious APK installed on the victim's device through a supply chain compromise. Machine Learning Attack Staging is achieved via a "trigger placed in the physical environment where it is captured by the victim's device camera and processed by the backdoored ML model". The team was successful in "evading ML models in several safety-critical apps in the Google Play store."
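Continuing the hypothetical sketch above, the snippet below illustrates the attack-staging step: the backdoored model behaves normally on ordinary camera frames, but once the attacker's physical trigger appears in the scene, the payload subnetwork steers the output to the attacker-chosen class. The stand-in victim model, patch trigger, and frame sizes are assumptions for illustration, not details from the case study.

```python
# Illustrative continuation of the sketch above (all names/sizes assumed).
# In a real attack the trigger-detector weights would be trained on images
# of the physical trigger before the APK is repackaged; here they are left
# untrained, so this only demonstrates the inference-time data flow.
import numpy as np
import tensorflow as tf

victim = tf.keras.applications.MobileNetV2(weights=None, classes=10)  # stand-in victim model
backdoored = inject_neural_payload(victim, num_classes=10, target_class=3)

def add_trigger_patch(frame, size=32):
    """Simulate the printed physical trigger appearing in a corner of the camera frame."""
    patched = frame.copy()
    patched[:, :size, :size, :] = 1.0
    return patched

frame = np.random.rand(1, 224, 224, 3).astype("float32")      # ordinary camera frame
clean_pred = backdoored.predict(frame)                          # benign path dominates
triggered_pred = backdoored.predict(add_trigger_patch(frame))   # with a trained detector,
                                                                # this would be steered to class 3
```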