It's mostly just 'human factors'. What I'm describing below applies across the spectrum of bug reports, from fake to huge and everything in between. None of what I'm listing below is an attempt to directly explain or rationalize the events in the article; it's just some context from my (anecdotal) experience.
- The security researcher community is composed of a broad spectrum of people. Most of them are amazing. However, a portion of them are complete assholes. They send in lazy work, claim bugs like scalps, and get super aggressive, both privately and publicly, if things don't go their way, bugs don't get fixed fast enough, or someone questions their claims (particularly about severity). This grates on everybody in the chain, from the triagers to the product engineers.
- Some bounty programs are horribly run. They drop the ball constantly, ignore reports, drag fixes out for months and months, undercut severity...all of which impact payout to the researcher. These stories get a lot of traction in the community, diminishing trust in the model and exacerbating the previous point because nobody wants to be had.
- Bug bounties create financial incentives to report bugs, which means you get a lot of bullshit reports to wade through and catastrophization of even the smallest issues. (Check out @CluelessSec, aka BugBountyKing, on Twitter for parodies that are only kind of parodies.) This reduces the signal-to-noise ratio and lets actual major issues sit around, because at first glance they aren't always distinguishable from garbage.
- In large orgs, bug bounties are typically run through the infosec and/or risk part of the organization. Their interface to the affected product teams is generally going to be through product owners and/or existing vulnerability reporting mechanisms. Sometimes this is complicated by subsidiary relationships and/or outsourced development. In any case, these bugs enter the pool of broader security bugs identified through internal scanning tools, security assessments, pen tests, and other reports. Just because someone reported them from the outside doesn't mean they get moved to the top of the priority heap.
- Again, in most cases, product owns the bug. Which means that even though it has been triaged, the product team generally still has a lot of discretion about what to do with it. If it's a major issue and the product team stalls, then you end up with major escalations through EVP/legal channels. These conversations can get heated.
- The bugs themselves often lack context and are randomly distributed through the codebase. Most of the time the development teams are busy cranking out new features, burning down tech debt, or otherwise have their focus directed to specific parts of the product. They are used to getting things like static analysis reports saying 'hey, the commit you just sent through could have a SQL injection' and fixing it without skipping a beat (or, more likely, showing it's a false positive); see the sketch after this list. When bug reports come in from the outside, the code underlying the issue may not have been touched for literally years, the teams that built it could be gone, and it could be slated for replacement in the next quarter.
- Some of the bugs people find are actually hard to solve, and/or the people in the product teams don't fully understand the mechanism of action and put in place basic countermeasures that are easily defeated. This exacerbates the problem, especially if there's an asshole researcher on the other end of the line who just goes out and immediately starts dunking on them on social media.
- Most bugs are just simple human error, and the teams surrounding the person who made the commit are typically going to want to come to their defense out of camaraderie and empathy. This has a net chilling effect on the barn burners that come through, because people don't want to burn their buddies at the stake.
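To make the static-analysis point above concrete, here's a minimal, hypothetical sketch (Python with sqlite3; the function and table names are made up for illustration) of the kind of finding an analyzer flags in a fresh commit, next to the routine fix:

    import sqlite3

    def get_user_unsafe(conn: sqlite3.Connection, username: str):
        # The pattern an analyzer flags: user input interpolated straight
        # into the SQL string. An input like "x' OR '1'='1" rewrites the query.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchone()

    def get_user_safe(conn: sqlite3.Connection, username: str):
        # The routine fix: a parameterized query, so the driver treats
        # the input strictly as data, never as SQL.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchone()

When a report like that lands on the commit you pushed an hour ago, the fix is the one-line change above. When an outside report lands on code nobody has touched in years, even something this simple starts with archaeology.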
All of this to say: it takes a lot of culture tending and diplomacy on the part of the bounty runners to manage these factors while trying to make sure each side lives up to their end of the bargain. Most of running a bounty is administrative work and applied technical security skill; this part is not...which is why I said it can be the hardest.
Reading the OA, I also believe there's a wide range of technical reasons that could explain, say, not responding.
Maybe the reports do get to the tech teams, the team figures out that this bug would be caught by the static analyzer anyway, and they have other, more pressing issues.
The main problem today, IMO, is that the incentives for finding and actively using exploits are much higher than the incentives for fixing them, and certainly much higher than the incentives for building secure code that doesn't have the issues in the first place.
After all, nobody will give you a medal for delivering secure code. They will give you a medal for delivering a feature fast.
I’ve been in infosec since the ’90s; moving slowly has killed way more companies than any security issue has.
I’ve worked at some of the largest financial institutions, and they spend billions on security every year to achieve something only slightly better than average. Building products with a step-function increase in security would incur costs in time, energy, and flexibility that very few would be willing to pay.