Bug Bounty Platforms' Business Model Hinges on Specialised LLMs
The misuse of Large Language Models (LLMs) is poised to significantly increase the already disproportionate burden developers face when triaging bug bounty submissions. Without timely adaptation by the platforms, this trend could pose a systemic risk, undermining customer confidence in their value proposition.
For instance, Daniel Stenberg, the creator of curl (a widely used command-line tool and library for transferring data with URLs), recently encountered a bug bounty submission from a "luck-seeking" hunter that leaned on LLM-generated content. As Daniel puts it:
When reports are made to look better and to appear to have a point, it takes a longer time for us to research and eventually discard it. Every security report has to have a human spend time to look at it and assess what it means.
This incident not only highlights the difficulties developers face with a potential surge of AI-enhanced bug submissions, but also hints at an erosion of trust among users and customers if the platforms fail to promptly adapt their submission vetting.
Bug bounty programmes are serious business - even at the lower end. But serious money does not translate into serious efficiency. Certainly, the bug bounty platforms can rightly claim they created a market that didn't exist before. But they are a long way from achieving an efficient market:
Our bug bounty has resulted in over 70,000 USD paid in rewards so far. We have received 415 vulnerability reports. Out of those, 64 were ultimately confirmed security problems. 77 of the reports were informative, meaning they typically were bugs or similar. Making 66% of the reports neither a security issue nor a normal bug.
In Six Sigma terms, those numbers imply the process is operating at around 1 sigma - significant room for improvement!
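For the curious, here's a quick back-of-the-envelope check in Python, treating every report that is neither a confirmed vulnerability nor a useful bug as a "defect" and applying the conventional 1.5-sigma shift (a rough sketch, not a proper process capability study):

```python
# Back-of-the-envelope sigma estimate for the curl bug bounty numbers above.
from scipy.stats import norm

reports = 415
confirmed_vulns = 64
informative = 77

useful = confirmed_vulns + informative   # reports with at least some value
defect_rate = 1 - useful / reports       # ~0.66, matching the 66% quoted
process_yield = 1 - defect_rate          # ~0.34

# Short-term sigma level, using the conventional 1.5-sigma shift.
sigma_level = norm.ppf(process_yield) + 1.5
print(f"defect rate ≈ {defect_rate:.0%}, sigma level ≈ {sigma_level:.1f}")
# defect rate ≈ 66%, sigma level ≈ 1.1
```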
And remember, these numbers are pre-GPT-4 - we don't yet know the full impact.
The largest historical driver of false positive submissions is the opportunistic and naive use of automated tools, like vulnerability scanners and static code analysers.
The culprits are wannabe bug bounty hunters who take the path of least resistance.
Repeat time-wasters do get weeded out - and to be clear, plenty of talented bug bounty hunters deliver professional-grade findings and reports (this post is not about them).
Just as code smells can give away auto-generated code, low-grade bug bounty submissions tend towards obvious tells: different sh!t, same smell.
But now, thanks to luck-seekers pasting snippets of curl source code into state-of-the-art LLMs, Daniel receives compelling-looking vulnerability reports backed by seemingly credible evidence.
It's true that it can be easy to spot AI-generated content, due to certain language patterns and artifacts that reveal its origin. However, if edited AI-generated content is mixed with original writing, it gets harder to distinguish between the two. AI language detection tools can operate at the phrase or sentence level, and help identify which parts are more likely to have been generated by AI, but except in extreme cases, this doesn't reveal intent.
This creates a growing problem for developers: the time spent weeding out garbage submissions goes up, because reports that merely appear legitimate must still be considered carefully to avoid missing genuine bugs.
What can be done about it?
Punish the Buggers?
Outlawing LLM-assisted vulnerability submissions is not the solution.
The bug bounty hunter community is international and non-native English speakers already use AI language tools to improve report communications. Also, is using AI to rework text and improve markup bad? Daniel argues no and I agree with him.
The underlying problems are similar to before, but with a new twist:
- the submitter fails to properly scrutinise LLM-generated content prior to submission. They don't understand what they are submitting, otherwise they would not submit it.
- the submitter chooses a general-purpose LLM to find software security bugs
SOTA LLMs are improving at identifying genuine vulnerabilities across a wider range of bug classes, but their performance is spotty across programming languages and vulnerability types.
Further, limited reasoning skills lead to false negatives (missed vulns), and hallucinations lead to convincing-looking false positives (over-reporting).
You can mitigate false positive risk through prompt engineering and critically scrutinising outputs, but obviously some people don't. Even some semi-skilled hunters with platform reputation points are succumbing to operating AI on autopilot and waving the bad stuff through.
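As a concrete (and entirely hypothetical) example of that scrutiny step, a hunter could force a sceptical second pass over every candidate finding before it gets anywhere near a submission form. A minimal sketch using the OpenAI Python client; the model name, prompt wording and verdict format are my assumptions, not a prescribed workflow:

```python
# Sketch: make the model attack its own candidate finding before submission.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITIQUE_PROMPT = """You are a sceptical security reviewer. Given the code excerpt
and the claimed vulnerability below, list the concrete evidence needed to prove
exploitability, state which of that evidence is actually present, and finish with
a single line: VERDICT: SUBMIT or VERDICT: DISCARD."""

def critique_finding(code_excerpt: str, claimed_vuln: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": CRITIQUE_PROMPT},
            {"role": "user", "content": f"Code:\n{code_excerpt}\n\nClaim:\n{claimed_vuln}"},
        ],
    )
    return response.choices[0].message.content

# Anything without a clear SUBMIT verdict (and the hunter's own agreement)
# should never leave the hunter's machine.
```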
The big difference now is that generative AI can produce reams of convincing-looking vulnerability reports that materially drive up the opportunity cost of bug triage. Developer DDoS, anyone?
Up to now, developers and security teams have relied in part on report "tells" to separate the wheat from the chaff.
If you've ever dismissed a phishing email at first sight due to suspicious language or formatting, this is the sort of shortcut that LLM-generated output eliminates.
OK, so should the bug bounty hunter be required to disclose LLM use?
I don't think this holds much value - just as with calculators, assume people will use LLMs as a tool. It may, however, offer limited value as an honesty check: a declaration subsequently found to be false could be grounds to kick someone off the platform (but at that point, they've already wasted developers' time).
When you're buyin' top shelf
In the short term, I believe that if you use an LLM to find a security bug, you should be required to disclose which LLMs you used.
Preference could be given to hunters who submit their LLM chat transcripts.
Bug bounty submissions can then be scored accordingly: if you use a general-purpose LLM, your report gets bounced back with a warning ("verify your reasoning with model XYZ before resubmitting"). On the other hand, if the hunter used a code-specialised LLM, the report gets labelled as such and passes to the next stage.
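A first-pass triage rule along those lines could be little more than a lookup against the declared model. A sketch with hypothetical model names and labels:

```python
# Sketch: route a submission based on the LLM the hunter declares.
CODE_SPECIALISED = {"sec-code-llm", "vuln-tuned-llm"}   # hypothetical specialised models
GENERAL_PURPOSE = {"general-chat-llm"}                  # hypothetical general models

def triage(declared_model: str | None) -> tuple[str, str]:
    if declared_model is None:
        return "pass", "no LLM assistance declared"
    model = declared_model.lower()
    if model in CODE_SPECIALISED:
        return "pass", f"label: assisted by code-specialised model '{declared_model}'"
    if model in GENERAL_PURPOSE:
        return "bounce", "verify your reasoning with a code-specialised model before resubmitting"
    return "review", f"unrecognised model '{declared_model}' - queue for manual check"
```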
So rather than (mis)using general-purpose LLMs trained on Common Crawl data to identify non-obvious software security vulnerabilities, AI-curious bug bounty hunters could instead take open-source LLMs with good reasoning ability, train them on the target programming language, and fine-tune them on relevant security vulnerability examples.
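In practice, that could look like parameter-efficient fine-tuning of an open-weights code model on labelled vulnerability examples. A heavily simplified sketch using Hugging Face transformers, datasets and peft; the base model, data file and hyperparameters are placeholders, and a real effort would need far more care over data quality and evaluation:

```python
# Sketch: LoRA fine-tuning an open-weights code model on vulnerability examples.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "an-open-weights-code-model"   # placeholder for any capable open code LLM
DATA_FILE = "vuln_examples.jsonl"           # placeholder: {"text": "<code + CWE label + analysis>"}

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

dataset = load_dataset("json", data_files=DATA_FILE, split="train")
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="vuln-tuned", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```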
The platforms can track, publish and rank the most helpful LLM assists, attracting hunters towards using higher-yielding models.
In the medium term, I think the smart move for the bug bounty platforms is to "embrace and own" the interaction between hunter and LLM.
Develop and integrate specialised software-security LLMs into their platforms, make inferencing free, and actively encourage adoption. Not only would this stem the tide of low-quality submissions, but the platforms would also gain valuable intelligence about hunters' LLM prompting and steering skills.
Interactions could be scored (JudgeGPT?), further qualifying and filtering submissions.
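Such a "JudgeGPT" needn't be exotic: a rubric prompt over the hunter's transcript would be a starting point. A sketch; the rubric axes, model choice and JSON format are illustrative assumptions:

```python
# Sketch: score a hunter/LLM interaction transcript against a simple triage rubric.
import json
from openai import OpenAI

client = OpenAI()

RUBRIC = """Score the bug-hunting transcript from 0-10 on each axis and return JSON:
{"evidence": n, "reasoning": n, "reproduction": n, "scepticism": n}.
evidence: is the claimed flaw tied to specific code? reasoning: are the steps sound?
reproduction: is there a concrete trigger or proof of concept? scepticism: did the
hunter challenge the model's claims rather than accept them at face value?"""

def score_transcript(transcript: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",                            # illustrative choice
        response_format={"type": "json_object"},   # ask for machine-readable output
        messages=[{"role": "system", "content": RUBRIC},
                  {"role": "user", "content": transcript}],
    )
    return json.loads(response.choices[0].message.content)

# Low-scoring transcripts could be queued for a manual spot check rather than
# landing straight in a developer's inbox.
```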
The final benefit is trend-spotting LLM-induced false positives and improving guardrails to call these out, or better yet, eliminate them.
But we are where we are.
What could bug bounty platforms do right now to reduce the asymmetric cost their existing process passes downstream to software teams receiving LLM-wrapped turds?
You're so Superficial
Perhaps start with generating a submission superficiality score through behaviour analytics.
Submissions whose score crosses a set threshold could trigger manual spot checks that weed them out earlier in the process (augmenting existing checks).
Here are some starting suggestions:
- apply stylometric analysis to a hunter's prior submissions to detect an "out of norm" writing style in new ones. A sudden change in a short space of time is a strong clue (but not proof) of LLM use. As noted earlier, this could be a net positive for communication, but it signals a behavioural change nonetheless and can trigger a closer look for signs of weak LLM vulnerability reasoning
- perform a consistency check on the class of vulnerabilities a hunter reports. If a hunter typically reports low-hanging fruit but out of the blue starts reporting heap overflows in well-fielded C code, the change deserves verification; yet faced with a wall of compelling-looking LLM-generated text, platforms today pass the problem downstream. A sudden jump in submission difficulty can have many legitimate explanations, but at the current rate of LLM adoption, those cases will become the exception.
- detect a marked increase in a hunter's rate of vulnerability submissions. A hunter may simply have more time on their hands, so this is not a strong signal in isolation. But LLM-powered submissions are cheap to produce and, for some, will be hard to resist as they double down on what works.
Choice of signals and weightings aside, the overall goal is to isolate ticking time bomb submissions that tarpit unsuspecting developers on the receiving end.
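To make that concrete, here is one way the three signals above could be folded into a single superficiality score. A sketch with hypothetical weights and thresholds; the real work is in producing the underlying signal values (for example, the stylometric distance), which is deliberately left out:

```python
# Sketch: fold the behavioural signals into a single superficiality score.
from dataclasses import dataclass

@dataclass
class SubmissionSignals:
    style_drift: float       # 0..1 stylometric distance from the hunter's prior reports
    difficulty_jump: float   # 0..1 jump in vulnerability-class difficulty vs. history
    rate_spike: float        # 0..1 increase in submission rate over a trailing window

# Hypothetical weights: style drift and difficulty jumps matter more than raw volume.
WEIGHTS = {"style_drift": 0.4, "difficulty_jump": 0.4, "rate_spike": 0.2}
SPOT_CHECK_THRESHOLD = 0.6   # hypothetical; would be tuned against labelled outcomes

def superficiality_score(s: SubmissionSignals) -> float:
    return (WEIGHTS["style_drift"] * s.style_drift
            + WEIGHTS["difficulty_jump"] * s.difficulty_jump
            + WEIGHTS["rate_spike"] * s.rate_spike)

def needs_spot_check(s: SubmissionSignals) -> bool:
    # A high score triggers an earlier manual spot check, not automatic rejection.
    return superficiality_score(s) >= SPOT_CHECK_THRESHOLD
```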
Behavioural analytics and generative AI have a lot in common. Used wisely, their output can trigger reflection and add potential value. Used poorly, their output is blindly acted upon, removing value for those downstream.
The antidote for both is the same: leaders must be transparent about their use and guiding principles, reward balanced decision-making, and vigorously defend the right to reply by those impacted.
If bug bounty platforms can get on the right side of the LLM wave, they can save developers time and educate a generation of bug bounty hunters on how best to use AI to get more bounties. This drives up their bottom line, reduces customer dissatisfaction, and, in the process, makes the Internet a safer place for us all.
What if the platforms fail to adapt or move too slowly?
You be focused on what you're missing
I wonder if the bug bounty platforms' clients (many of which are major software and tech companies) will become easy pickings for a team wielding an AI system that "combines the predictive power of a neural language model with a rule-bound deduction engine, which work in tandem to find" high-impact vulnerabilities.
Scoring the largest public bug bounties across all major platforms would instantly gain them notoriety and significant funding.
How fast would they grow if they subsequently boycotted the platforms and enticed prospects with free security bugs for early adopters?
The winner will be the first one to deliver on a promise like this:
"We won't send you any time-wasting vulnerability reports. But if we ever do, we'll pay you double the developer time wasted".
Radical marketing, but with breakthroughs in applied AI performance this is starting to look increasingly plausible.
Oh, and let's not forget the risk side of the house: if LLM-powered vulnerability discovery makes software products and services less risky, BigCo CROs will welcome the opportunity to negotiate down product liability insurance premiums.