ChatGPT bug bounty program doesn’t cover AI security

AI Security: The Limits of Bug Bounty Programs and the Need for Non-ML Red Teaming

OpenAI announced a bug bounty program, but it covers only traditional, non-ML security defects.

However, the bug bounty program does not extend to model issues or non-cybersecurity issues with the OpenAI API or ChatGPT. “Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed,” Bugcrowd said. “Addressing these issues often involves substantial research and a broader approach.”

OpenAI has tested model safety through a red team approach. As someone who founded and ran a Fortune 5 Red Team, I can’t help but notice the lack of experienced non-ML red teamers in OpenAI’s efforts to date.

I hope that, as part of Microsoft’s investment in and deployment of OpenAI models, the Microsoft Red Team was engaged to simulate adversaries and test the models’ resilience against realistic threats. If not, that is an obvious missed opportunity.