ChatGPT bug bounty program doesn’t cover AI security
OpenAI announced a bug bounty program, but it only considers non-ML security defects.
However, the bug bounty program does not extend to model issues or other non-cybersecurity problems with the OpenAI API or ChatGPT. “Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed,” Bugcrowd said. “Addressing these issues often involves substantial research and a broader approach.”
OpenAI has tested model safety through a red team approach. As someone who founded and ran a Fortune 5 Red Team, I can’t help but notice the lack of experienced non-ML red teamers in their efforts to date.
I hope that, as part of Microsoft’s investment in and deployment of OpenAI models, the Microsoft Red Team was engaged to simulate adversaries and test the models’ resilience against potential threats. If not, that is an obvious missed opportunity.