Identify Vulnerabilities in the Machine Learning Model Supply Chain
Deep learning-based techniques have shown remarkable performance on recognition and classification tasks, but training these networks is computationally expensive. As a result, many users outsource training or rely on pre-trained models.
An adversary can target this model supply chain to create a “BadNet”: a model that performs well on the user’s training and validation data but misbehaves on specific, attacker-chosen inputs.
The paper demonstrates the attack on backdoored classifiers for handwritten digits and US street signs. The results show that backdoors are powerful and difficult to detect, motivating further research into techniques for verifying and inspecting neural networks.
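To make the attack concrete, here is a minimal sketch of the training-set poisoning step behind a BadNet, using NumPy on MNIST-style 28x28 grayscale images. The trigger pattern (a small bright square in the bottom-right corner), the patch size, the target label, and the 10% poisoning rate are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

TRIGGER_SIZE = 3   # side length of the trigger patch, in pixels (assumption)
TARGET_LABEL = 7   # attacker-chosen class for triggered inputs (assumption)

def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small bright square into the bottom-right corner of a
    grayscale image with values in [0, 1], a BadNets-style trigger."""
    poisoned = image.copy()
    poisoned[-TRIGGER_SIZE:, -TRIGGER_SIZE:] = 1.0
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   rate: float = 0.1, seed: int = 0):
    """Return a training set in which a `rate` fraction of examples carry
    the trigger and are relabeled to TARGET_LABEL. A model trained on this
    set can behave normally on clean inputs yet misclassify triggered ones."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels

# Example: poison 10% of a toy batch of 28x28 "MNIST-like" images.
clean_images = np.random.rand(100, 28, 28)
clean_labels = np.random.randint(0, 10, size=100)
poisoned_images, poisoned_labels = poison_dataset(clean_images, clean_labels)
```

The point of the sketch is that the poisoned set looks almost entirely normal: a model trained on it keeps its clean-data accuracy, which is exactly why a user validating an outsourced model on held-out clean data would not notice the backdoor.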
Related Posts

- Freedom to Train AI: Clickworkers are part of the AI supply chain. How to vet?
- Microsoft Training Data Exposure: Does your org manage its cloud storage tokens?
- Backdoor Attack on Deep Learning Models in Mobile Apps: This MITRE ATLAS case study helps bring the framework to life.