
Explain like I'm naive: why would Apple's bug bounty program be so poorly run? Is it simply a sign of organizational failure? (E.g. perhaps the managers running the program have been promoted to a position that they simply don't belong in, and higher-up execs don't care? Or are they prioritizing profit over success?)

I would think that, given the profitability and positioning of Apple in the marketplace, they would be heavily incentivized to provide large bounties for finding such destructive vulnerabilities. And I imagine that there are plenty of security people working there who genuinely care about fixing these issues right away.



Bug bounty programs are the antithesis of Apple's internal methodology, culture, and way of doing business. They keep everything close to the chest, they shun "outsiders", etc. The idea that someone outside of Apple, from the unwashed masses, could find a flaw in Apple's own software is a pretty big pill for them to swallow. Thus it doesn't surprise me there are problems with their bug bounty program. I think if they could, they would prefer to just silence all vulnerability/bug reports with a gag order rather than acknowledging or even investigating them.


That makes Apple (the org, not the fanboys) sound a bit cultish... Can't say I'm surprised though...


There are entire books that romanticise the cult aspect of working at Apple.


It is cultish.


It is in SV, after all...


There are plenty of companies in SV that are the opposite of cultish, e.g. Google is known for trying to be more like a university. (Or it was, at least; I think they are pulling back on that as their political problems mount.)


This is not my experience with Google. It's a bit cultish; "googliness", meaning "being good and helping", is one of many examples of this cult.


That's just dumb. Third parties do all the work and contact you about critical bugs; the only effort on Apple's part is verification and some coordination, which shouldn't be a huge issue for a company the size of Apple. Just hire a team to do it and be done with it. The whole 'secrecy culture' is a bunch of hogwash.


Apple is all about silos.

So a security threat gets reported to this bug bounty team. They are able to reproduce and confirm. The bug is in some deep, crusty part of the kernel, the code for which isn't available to this team, because Silos.

The team who does have access to this Silo is tracked down. It gets processed into a ticket. Maybe it gets done, maybe it doesn't. Their backlog is already maxed out, as is everyone's.

The security team says "we've done all we can".

This is not a matter of "lol just hire a team". You need leadership aligned, so the manager's manager can see their task list, or open security issues, and say "what the fuck, why are you not prioritizing this".

That's not Apple. Apple is product-driven. They actually, legitimately don't care about Privacy and Security. Their managers' managers get mad when products aren't delivered on time. They may also push back on purposeful anti-privacy decisions. It's not in their culture to push back on Security issues, or latent Privacy issues resulting from those Security issues.

"Just tear down the silos" > Working for Apple is a cult. The silos are a part of the cult; left-over from one of the worst corporate leaders of all time, Jobs. Try telling a cult member their beliefs are false.

"Grant the security team access to everything" > And, I guess, also hire the smartest humans to ever exist on the planet to be able to implement security improvements across thousands of repositories in dozens of programming languages, billions of lines of code? And, even if they could, push those changes through a drive-by review and deployment with a team on the other side of the planet you've never met? You, coming into their house, and effectively saying "y'all are incompetent, this is insecure, merge this" (jeeze ok, we'll get to it, maybe in... iOS 18)


Accurate - engineering at Apple has no tradition of security, nor does it have a tradition of being very efficient. It's mostly based on the heroics of a very few very talented developers. The processes that are in place actively hinder development.

Scaling development is hard, and Apple has never really gotten it right. I am wondering: if a zero day is $1M on the open market, wouldn't it be easier and cheaper to get an engineer inside Apple to leave some plausible-deniability bugs in the code? Or to compromise an engineer already there?

Software engineering never had security as its main goal - but today, if you had to do it all over, security would be built into all processes from the get-go, and that's likely the only way software could be made secure.

It always amazes me that Apple (and others) can't even make a browser that doesn't have a drive-by zero day that can take over my computer. Why is that? There must be something fundamentally wrong in the system here. And I think what's wrong is that security was not even in the minds of engineers when most of these software modules were created.

BSD had it built in, but they watered it down instead of - what they should have done - doubling down on it.


I’ve worked on the bug bounty program for a large company. We did the whole thing. It’s hard. The part you’re talking about can be the hardest.

It probably reads as less than believable because it sounds like it should be easy. I don't have any good answers there. I'm also not suggesting that customers and researchers accept that, but saying it's easy just diminishes the efforts of those that run good ones.


Could you try a little bit harder to provide an example of why it is "harder than it looks"? You repeated multiple times that it's hard, but what exactly (approximately) makes it hard?


I think it's the phrases 'some coordination' and 'company the size of Apple'. It's rarely the case (well, hopefully?!) that a fix is as trivial as 'oh yeah, oops, let's delete that `leak_data()` line' - it's going to involve multiple teams and they're all going to think anything from 'nothing to do with us' to 'hm yes I can see how that happened, but what we're doing in our piece of the pie is correct/needs to be so, this will need to be handled by the [other] team'.

Not to say that people 'pass the buck' or are not taking responsibility exactly, just that teams can be insular, and they're all going to view it from the perspective of their side of the 'API', and not find a problem. (Of course with a strict actual API that couldn't be the case, but I use it here only loosely or analogously.) Someone has to coordinate them all, and ultimately probably work out (or decide somewhat arbitrarily - at least in technical terms, but perhaps on the basis of cost or complexity or politics, et cetera) whose problem to make it.


What's worse, typically an exploit doesn't involve knowledge of the actual line of code responsible -- it's just a vague description of behavior or actions that leads to an exploit, making it much easier to pass the buck in terms of who is actually responsible for fixing it. The kicker is if your department/project/whatever fixes it, you're also taking responsibility for causing this error / causing this huge affront to the Apple way...


Most good exploits have a pretty solid root cause attached to them.


It's mostly just 'human factors'. What I'm describing below applies across the spectrum of bug reports from fake to huge to everything in between. Nothing of what I'm listing below is an attempt to directly explain or rationalize events in the article, it's just some context from my (anecdotal) experience.

- The security researcher community is composed of a broad spectrum of people. Most of them are amazing. However, there is a portion of complete assholes. They send in lazy work, claim bugs like scalps and get super aggressive privately and publicly if things don't go their way or bugs don't get fixed fast enough or someone questions their claims (particularly about severity). This grates on everybody in the chain from the triagers to the product engineers.

- Some bounty programs are horribly run. They drop the ball constantly, ignore reports, drag fixes out for months and months, undercut severity...all of which impact payout to the researcher. These stories get a lot of traction in the community, diminishing trust in the model and exacerbating the previous point because nobody wants to be had.

- Bug bounties create financial incentives to report bugs, which means that you get a lot of bullshit reports to wade through and catastrophization of even the smallest issues. (Check out @CluelessSec aka BugBountyKing on Twitter for parodies, but kind of not.) This reduces the signal-to-noise ratio and allows actual major issues to sit around, because at first glance they aren't always distinguishable from garbage.

- In large orgs, bug bounties are typically run through the infosec and/or risk part of the organization. Their interface to the affected product teams is going to generally be through product owners and/or existing vulnerability reporting mechanisms. Sometimes this is complicated by subsidiary relationships and/or outsourced development. In any case, these bugs will enter the pool of broader security bugs that have been identified through internal scanning tools, security assessments, pen tests and other reports. Just because someone reported them from the outside doesn't mean they get moved to the top of the priority heap.

- Again, in most cases, product owns the bug. Which means that even though it has been triaged, the product team generally still has a lot of discretion about what to do with it. If it's a major issue and the product team stalls, then you end up with major escalations through EVP/legal channels. These conversations can get heated.

- The bugs themselves often lack context and are randomly distributed through the codebase. Most of the time the development teams are busy cranking out new features, burning down tech debt or otherwise have their focus directed to specific parts of the product. They are used to getting things like static analysis reports saying 'hey, the commit you just sent through could have a sql injection' and fixing it without skipping a beat, or more likely showing it's a false positive (a quick sketch of that kind of fix follows this list). When bug reports come in from the outside, the code underlying the issue may not have been touched for literally years, the teams that built it could be gone, and it could be slated for replacement in the next quarter.

- Some of the bugs people find are actually hard to solve and/or the people in the product teams don't fully understand the mechanism of action and put in place basic countermeasures that are easily defeated. This exacerbates the problem, especially if there's an asshole researcher on the other end of the line that just goes out and immediately starts dunking on them on social media.

- Most bugs are just simple human error and the teams surrounding the person that did the commit are going to typically want to come to their defense just out of camaraderie and empathy. This is going to have a net chilling effect on barn burners that come through because people don't want to burn their buddies at the stake.
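
To make the static-analysis point above concrete, here's a minimal sketch of the kind of routine, one-line fix those scanner reports usually call for. It's a generic Python/sqlite example I made up, not anything from Apple's codebase:

    import sqlite3

    def get_user(conn: sqlite3.Connection, username: str):
        # What the scanner flags: string formatting lets user input become SQL.
        # return conn.execute(
        #     f"SELECT * FROM users WHERE name = '{username}'").fetchone()

        # The routine fix: a parameterized query keeps the input as data, not SQL.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)).fetchone()

Fixes like that get absorbed into the normal flow of code review. Externally reported bugs in old, unowned code are a different beast entirely.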

All of this to say: it takes a lot of culture tending and diplomacy on the part of the bounty runners to manage these factors while trying to make sure each side lives up to their end of the bargain. Most of running a bounty is administrative and applied technical security skills; this part is not...which is why I said it can be the hardest.


Reading the OA, I also believe that there's a wide variety of technical detail that could be the cause of, say, not responding.

Maybe the reports get to the tech teams, the tech team figures out that this bug will definitely be caught by the static analyzer, and they have other more pressing issues.

The main problem today IMO is that the incentives for finding and actively using exploits are much higher than the incentives for fixing them, and certainly much higher than building secure code that doesn't have the issues in the first place.

After all, nobody will give you a medal for delivering secure code. They will give you a medal for delivering a feature fast.


I've been in infosec since the '90s; moving slowly has killed way more companies than any security issue has.

I've worked at some of the largest financial institutions, and they spend billions on security every year to achieve something slightly better than average. Building products with a step-function increase in security would incur costs in time and energy and flexibility that very few would be willing to pay.


And yet here we are ;)


Apple has always been infamously bad at doing anything with external bug reports. Radar is a black hole that is indistinguishable from submitting bug reports to /dev/null unless you have a backchannel contact who can ensure that the right person sees the report.

Bug bounty programs are significantly more difficult to run than a normal bug reporting service, so the fact that they're so bad at handling the easy case makes it no surprise that they're terrible at handling security bugs too.


I used to submit bug reports for things I found in macOS or other applications - for example, that Pages would include a huge picture in the files for no reason at all. But those bug reports would usually be closed and "linked" to another bug report you don't have access to, essentially shutting you out. At some point you just give up. Bugs do get fixed eventually, but there is no pattern to it.


I actually got a response to a bug report saying "We fixed that; can you try it on the next beta and send us a code sample to reproduce it if the bug is still there?" But that bug was about the way SwiftUI draws the UI.


Disclaimer: I am not an Apple insider by any means, and this is all a hypothesis.

Their management of the bug bounty program seems like a reflection of their secretive (and perhaps sometimes siloed) internal culture. I'd argue that for any bug bounty program to be successful, there needs to be an inherent level of trust and very transparent lines of communication - and seeing as Apple lacks both internally (based on what I've read in reporting about the firm), it is not particularly surprising that their program happens to be run in the shadows as the OP describes.

I forget the exact term for it, but there is a "law" in management which postulates that the internal communication structures of teams are reflected in the final product that is shipped. The Apple bug bounty seems to be an example of just that.

Edit: It's called Conway's Law.


> Explain like I'm naive: why would Apple's bug bounty program be so poorly run?

Hubris.

Apple's culture is still fundamentally the same from the day they ran ads saying "Macs don't get viruses" to today. They used misleading ad copy to convince people they could just buy a Mac and be safe, without needing to do anything else... ignoring that Macs still got malware in the form of trojans, botnets and such... and encouraging a culture of ignorance that persists to this day. "It just works." etc.

So now their primary user base is largely people who have zero safe online habits.

And that sort of mentality feeds back into the culture of the company... "Oh, we're not Windows, we don't get viruses. We don't get viruses because our security is good. Our security is good, obviously, because we don't get viruses." It, in effect, is a feedback loop of complacency and hubris. (A prime example of this is how Macs within the Cupertino campus were infected with the Flashback botnet.)

Since their culture was one of security by obscurity (unlike, say, Google's explicit design in keeping Chrome sandboxed and sites containerized), closed source and, again, hubris... it's coming back to bite Apple in the ass despite their ongoing "We don't get viruses" style smugness. If it's not about Macs not getting viruses, it's about how Apple values your privacy (implying others explicitly don't), and like with everything else, it's repeated often enough that the Kool-Aid from within becomes the truth.

Apple's culture is one of smugness, ignorance and, yep... hubris. Why should they have a serious, respectable bug bounty program if they've been busy telling themselves, and bragging, that they simply don't have these kinds of security problems?


The best explanation I've heard was in the Darknet Diaries episode about zero-day brokers, which was a fantastic listen! (https://open.spotify.com/episode/4vXyFtBk1IarDRAoXIWQFf?si=3...)

The short version is that if the bounties become too large, they'll lose internal talent who can just quit to do the same thing outside the org. Another reason was that they can't offer competitive bounties for zero days because they'd be competing with nation states, effectively a bottomless bank, so the price will always go up.

I don't know much about this topic, but surely there are some well structured bounty programs Apple could copy to find a happy middle ground to reward the white hats.


That explains the payout, but not the poor communication on the part of Apple.


This is the real reason, not anything internal/culture-related.

A good iOS 0-day is worth hundreds of millions of dollars in contracts with shady governments. Apple can't compete with that multiple times a year.


This doesn't compute: is the claim Apple badly manages its bug-bounty because 0-days are too valuable? If that's the case, I'd expect the opposite effect: Apple would recognize how valuable the reports being sent to them by white-hats are, and would react with a sense of urgency and gratitude. As it is, Apple is behaving as if 0-days are worth very little, and not a big priority.


According to Zerodium, iOS exploits are cheaper than Android exploits because they are so plentiful in comparison.


Here's my informed guesswork as a total outsider. We've seen similar problems recently with Microsoft, where legitimate-sounding issues are denied bounties, so this kind of issue is not unique to Apple.

My guess would be that MSRC and Apple's equivalent have an OKR about keeping bounties under a certain level. Security is seen as a cost centre by most companies, and what do "well run" companies do with cost centres... they minimize them :)

I don't think that organizationally either company wants to have bad security, and I don't think that individual staff in those companies want to have bad security, but I do think that incentive structures have been set up in a way that leads to this kind of problem.

I've seen this described as a lack of resources in the affected teams, but realistically these companies have effectively limitless resources; it's just that they're not assigning them to these areas.


Apple does not have cost and profit centers. They maintain a single profit-and-loss statement for the entire company.

That doesn't mean the functional areas don't have a budget or resource constraints, but Apple's structure is quite different from most companies.

I'd agree with the other comments that pin Apple's specific issues on their insular culture, which discourages all forms of external communication if you're not part of marketing. Great bug bounty programs require good communication with researchers and some level of transparency, two things Apple is structured to avoid and discourage.


My hypothesis is a lot simpler:

Hacking is much more profitable than preventing hacking.

Incentives are heavily biased towards security exploits on all levels.

End of story.

There's no reward for "your code never got hacked". There's a reward for delivering a feature in time and a penalty for not doing so.

You'll get a bonus or promotion for delivering features. If you take twice as long because you made your code really secure - no one will know.

I think that's really all there is to it. Security is obscure and complicated.


> My guess would be that MSRC and Apple's equivalent have an OKR about keeping bounties under a certain level. Security is seen as a cost centre by most companies, and what do "well run" companies do with cost centres... they minimize them :)

It would have to be this.

If you start to increase the payout, you get more people wanting the payout.


I imagine they are just overwhelmed.

Let's say they have a team of 6 engineers tasked with this. They probably receive hundreds of reports a day - many bogus, some real, but all long-winded descriptions like this one, framed to make the vuln seem as bad as possible. In addition, many vuln reports are generated by automated tools and sprayed to thousands of sites/vendors daily in the hope that one of them pays out; they seem coherent at first glance but are often nonsense or not really a vuln at all, and of course there are many duplicates or near-duplicates.

If each of these takes 20 minutes to triage, 1 hour to look at properly, and days to confirm, you can see how a team of any reasonable size would soon be completely submerged and unable to respond to anything but the most severe and succinct vulnerability reports in a timely way.
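
As a rough back-of-envelope check on that (the numbers are the hypotheticals above plus a couple of assumptions of mine, not real Apple figures):

    # Hypothetical triage-capacity math for a 6-person team.
    reports_per_day = 200        # "hundreds of reports a day"
    triage_min = 20              # minutes to triage each report
    deep_look_min = 60           # minutes to properly look at plausible ones
    plausible_rate = 0.25        # assume 1 in 4 survives triage (my guess)
    team_size = 6
    minutes_per_engineer_day = 8 * 60

    demand = reports_per_day * (triage_min + plausible_rate * deep_look_min)
    capacity = team_size * minutes_per_engineer_day

    print(f"demand:   {demand / 60:.0f} engineer-hours/day")   # ~117
    print(f"capacity: {capacity / 60:.0f} engineer-hours/day")# 48
    print(f"shortfall: {(demand - capacity) / 60:.0f} engineer-hours/day")  # ~69

Even before the days-long confirmation step, a team like that would fall behind by more than a whole extra team's worth of work every day.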


I don't buy this. They fixed one reported bug, got back to the researcher, acknowledged the lack of disclosure, apologised, promised to fix it, and then still hadn't disclosed it three releases later.

It's not a case of "someone missed this" it's "this seems dysfunctional".


I feel like Apple could afford to staff a security program like this. Much, much smaller and less wealthy companies manage it.


Much smaller, less wealthy companies attract far fewer reports.


> Let’s say they have a team of 6 engineers tasked with this.

They are a trillion dollar company. They can have as many engineers as they'd like.


You'd probably be surprised how small some departments are at large companies, if they're seen as a cost centre rather than a profit centre.

I agree they could and should do a lot better; I'm just imagining the probable reasons for this level of dysfunction - the most likely explanation to me is an overwhelmed, overworked department submerged in so many requests that they can't filter out the useful ones or respond in a timely way.

Just as one other example of this, the bug reporting system at Apple is antiquated and seen as a black hole outside the company, probably again due to underinvestment.


Apple didn't get rich by writing a lot of checks

ironically, spoken by Gates @0:52 https://www.youtube.com/watch?v=H27rfr59RiE


It's company culture: some companies see failure as a problem to be avoided; others see failure as the road to success.

Evidently Apple sees failures like bugs as a problem to be avoided and is just trying to avoid the problem, with the obvious result that they have problems and look like a failure.

Companies that accept failure as a consequence of trying will learn and improve until they achieve success.


Having worked at a large tech company with a big bug bounty program and seen tonnes of bugs come through, my experience is that there is usually a wide disconnect between the bug bounty program (situated in one org of the company) and the engineering group responsible for fixing the bug (which is in a different part of the company). This is exacerbated by misaligned incentives: the bug bounty team wants fixes ASAP, while PMs/TPMs don't care about security and want engineers to be busy on feature development rather than fixing bugs. On top of that, if the leader of that org comes from a non-tech background, then it's even harder to convince them to prioritize security. Bug bounty teams are mostly powerless pawns in the politics between leaders of several different orgs with varying cultures of caring about security.

This is roughly how I have seen things work internally:

* When a bug report comes in, the bug bounty triage team tries their best to triage the bug and if legit, passes it on to the infosec team situated within the organization where the bug belongs.

* The security team for the org then scrambles to figure out which exact team the bug belongs to and assigns it to them.

* A day or two later that team picks up the bug and there is the usual back and forth on ownership: "oh, this is that part which this other team wrote and no one from that team still works at the company" or "it's not us, it's another team, please reassign."

* Even when the right team is assigned to the bug, there are discussions about priority and severity - "oh, we don't think it's a high-sev issue" types of discussions with PMs who have no knowledge about security.

* Even when everything gets aligned, sometimes the fix is so complicated that it can't be done within SLA. In the meantime, security researchers threaten to go public and throw tantrums on Twitter while the bug bounty team chases internal teams for a fix.

* When the bug cannot be fixed within SLA, the engineering folks file for an exception. This then gets escalated to a senior leader who needs to approve the exception with agreement from a leader within security. This takes anywhere from a couple of days to weeks, and in the meantime the security researcher has completely lost it because they think no one is paying attention to this crazy, oh-so-critical bug they spent days and nights working on.

* When the exception is granted, the bug bounty team swallows the pill and tries to make up excuses for why it can't be fixed soon. Eventually, the 90 days are over; the researcher feels disrespected, develops animosity, and starts to think everyone on the other side is a complete idiot.

* A blog post shows up on HN, gets picked up by infosec Twitter, and slowly the media catches up. Now, internally, everyone scrambles to figure out what to do. The bug bounty team says "we told you so" and the engineering team comes up with a magical quick band-aid solution that stops the bleeding and partially fixes the bug.


That honestly sounds like a failure to communicate with the researcher first and foremost. If it's difficult to prioritize the fix internally due to organizational politics, that's one thing, but that shouldn't stop the bounty team from communicating the status to the researcher. In fact, that should be the simplest part of the whole process, as it's completely within the purview of the bug bounty team. If they handle that right and build some trust, they might be able to successfully ask the researcher for an extension on disclosure.

Case in point, Apple likely could have come out of this looking much better if they didn't ignore and then actively lie to illusionofchaos. That really isn't a very high bar to clear.


It feels like we're conflating two issues here: fixing the bug on time and paying out the researcher. At the point where the bug is too complicated to fix within SLA and the exception has been escalated to senior leadership, surely the bug bounty team can pay the researcher?


I think they don't want to admit such a big security breach publicly. To some extent, privacy is their business.


It doesn’t matter how big of a company they are, the only thing that matters financially is whether they’re growing or not.


It's interesting to me that in this entire thread, nobody is even mentioning or considering the possibility that COVID has impacted Apple's operations.

It obviously has. It has affected every tech company. Certainly it has affected mine. Whether this is an example of that, I don't know, of course, but I think it's plausible.


So what? Everyone has been affected. Apple doesn't somehow get a pass at not doing their basic duties.

Sitting at home looking at a monitor and typing is largely the same as doing the same thing in an office. They're not service sector workers, doctors, nurses, or truck drivers who have actually had to deal with the impact of this head-on.


I see no reason COVID has anything to do with Apple's poor response to external reporters, which has been a problem for decades.


I'm curious. Would you accept it if Apple came out and said that the reason this is happening is because of the COVID pandemic affecting their operations?

Surely even if it were true, that is no excuse for a company like Apple?


Did I say I "accept" it? No.

Did I say it was an "excuse"? No.

Please don't put words in my mouth. The -4 downvotes made your point well enough. I get it: people want to trash Apple by any means necessary and that's way more important than a free and open discussion of the issue. Thanks.


Your reaction to my comment is quite unfair.

After claiming that I am putting words in your mouth, you go ahead and accuse me of only wanting to trash Apple and not caring about having a discussion.

I would have preferred it if you had simply told me to fuck off.


Apple could have chosen to simply not release a new iPhone or iOS version this year, if it wanted to. To the extent that they prioritized that over fixing up their security infra, that's on them.


Hackers aren't going to stop because of COVID. Apple has a duty to their customers to keep their products secure.



