Debunking the risk of GPU card theft
If I rip a GPU card out of a machine, can I gain access to the AI model it was processing?
Every now and then, whispers about a potential AI risk start to gain traction, and I feel it's essential we debunk the wrongheaded or esoteric ones so we keep our eyes trained on the risks that truly matter…such as the security of the centralised hubs that offer pre-trained models for download.
> the idea that you can just break into a data center and steal the model has a lot of memetic sticking power, but is stupid if you actually know anything about this topic. here’s a thread on how confidential computing works in the NVIDIA H100…
Well worth a read if you want to better understand the security engineering in modern GPU cards and the working practices of cloud providers offering GPU-enabled compute.
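The core idea behind confidential computing can be illustrated with a toy sketch: model-decryption keys are released only to hardware that presents a valid attestation report matching an expected measurement, so a physically stolen card holds nothing but ciphertext. This is a simplified illustration of the principle, not NVIDIA's actual protocol; the key names, measurement values, and broker function are all invented for the example.

```python
# Toy sketch of attestation-gated key release (NOT NVIDIA's real protocol).
# A key broker hands out the model-decryption key only when the device's
# attestation report carries a genuine MAC over the expected measurement.
import hashlib
import hmac
import secrets

# Assumed shared secret between broker and the hardware root of trust.
ATTESTATION_KEY = secrets.token_bytes(32)
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-gpu-firmware-v1").digest()

def gpu_attestation_report(measurement: bytes) -> bytes:
    """Report = measurement || MAC, as the device root of trust would emit it."""
    tag = hmac.new(ATTESTATION_KEY, measurement, hashlib.sha256).digest()
    return measurement + tag

def release_model_key(report: bytes, model_key: bytes):
    """Key broker: release the key only for a genuine, expected measurement."""
    measurement, tag = report[:32], report[32:]
    expected_tag = hmac.new(ATTESTATION_KEY, measurement, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected_tag) and hmac.compare_digest(
        measurement, EXPECTED_MEASUREMENT
    ):
        return model_key
    return None  # Stolen or modified card: no key, weights stay encrypted.

model_key = secrets.token_bytes(32)
good = release_model_key(gpu_attestation_report(EXPECTED_MEASUREMENT), model_key)
bad = release_model_key(
    gpu_attestation_report(hashlib.sha256(b"tampered-firmware").digest()), model_key
)
```

The point of the sketch: even with physical possession of the card, an attacker who cannot produce a valid attestation for the expected firmware state never receives the key material needed to decrypt the model.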
Related Posts

- Meta LLaMA leaked: Private AI for the masses
  AI Governance Dilemma: Leaked Llama Model Outperforms GPT-3! Explore the debate on trust, policy, and control as cutting-edge AI slips into the public domain.
- OpenAI GPT-4 System Card
  OpenAI published a 60-page System Card, a document describing their due diligence and risk-management efforts.
- Local Inference Hardware
  Truly private AI. Can it pay for itself?