Troubles With Apple’s Bug Bounty Program

I used some of the Washington Post’s reporting on the Pegasus Project in my piece about its revelations and lessons, but I never really addressed the Post’s article. I hope you will read what I wrote, especially since this website was down for about five hours today around the time it started picking up traction. Someone kicked the plug out at my web host; what can I say?

Anyway, the Post’s story is also worth reading, despite its headline: “Despite the hype, iPhone security no match for NSO spyware”. iPhone security is not made of “hype” and marketing. On the contrary, this malware is notable precisely because of its sophistication and capability against an operating system that, while imperfect, is far more secure than almost any consumer device before it, as the Post acknowledged just a few years ago when it claimed Apple was “protecting a terrorist’s iPhone”. By the Post’s telling, the iPhone is somehow both far too locked down for a consumer product and secured by nothing more than hype.

Below the miserable headline and amid Reed Albergotti’s typically cynical framing, there is a series of worthwhile interviews with current and former Apple employees who claim that the company’s security responses are too often driven by marketing considerations and the annual software release cycle. The Post:

Current and former Apple employees and people who work with the company say the product release schedule is harrowing, and, because there is little time to vet new products for security flaws, it leads to a proliferation of new bugs that offensive security researchers at companies like NSO Group can use to break into even the newest devices.

Apple also was a relative latecomer to “bug bounties,” where companies pay independent researchers for finding and disclosing software flaws that could be used by hackers in attacks.

Krstić, Apple’s top security official, pushed for a bug bounty program that was added in 2016, but some independent researchers say they have stopped submitting bugs through the program because Apple tends to pay small rewards and the process can take months or years.

Apple disputes the Post’s characterization of its security processes, quality of its bug bounty program, involvement of marketing in its responses, and overall relationship with security researchers.

However, a suddenly very relevant post from Nicolas Brunner, writing last week, indicates that Apple’s bug bounty program is simply not good enough:

In my understanding, the idea behind the bounty program is that developers report bugs directly to Apple and remain silent about them until fixed in exchange for a security bounty pay. They also state very clearly, what issues do qualify for the bounty program payout on their homepage. Unfortunately, in my case, Apple never fulfilled their part of the deal (until now).

To be frank: Right now, I feel robbed. However I still hope, that the security bounty program turns out to be a win-win situation for both parties. In my current understanding however, I do not see any reason, why developers like myself should continue to contribute to it. In my case, Apple was very slow with responses (the entire process took 14 months), then turned me away without elaborating on the reasons and stopped answering e-mails.

A similarly frustrating experience with Apple’s security team was reported last month by Laxman Muthiyah:

The actual bounty mentioned for iCloud account takeover in Apple’s website is $100,000 USD. Extracting sensitive data from locked Apple device is $250,000 USD. My report covered both the scenarios (assuming the passcode endpoint was patched after my report). Even if they chose to award the maximum impact out of the two cases, it should still be $250,000 USD.

Selling these kind of vulnerabilities to government agencies or private bounty programs could have made a lot more money. But I chose the ethical way and I didn’t expect anything more than the outlined bounty amounts by Apple.

But $18,000 USD is not even close to the actual bounty. Lets say all my assumptions are wrong and Apple passcode verifying endpoint wasn’t vulnerable before my report. Even then the given bounty is not fair looking at the impact of the vulnerability as given below.

Apple says that it pays one million dollars for a “zero-click remote chain with full kernel execution and persistence”, plus a 50% premium on top of that for a zero-day in a beta version. Even that top payout pales next to the two million dollars Zerodium is paying for the same kind of exploit.

Steven Troughton-Smith, via Michael Tsai:

I’m not sure why one of the richest companies in the world feels like it needs to be so stingy with its bounty program; it feels far more like a way to keep security issues hidden & unfixed under NDA than a way to find & fix them. More micro-payouts would incentivize researchers.

Security researchers should not have to grovel to get paid for reporting a vulnerability, no matter how small it may seem. But why would anyone put themselves through this process when there are plenty of companies out there paying far more?

The good news is that Apple can get most of the way toward fixing this problem by throwing money at it. Apple has deep pockets; it can keep increasing payouts until the grey market cannot possibly compete. That may sound simplistic, but this is one security problem that genuinely is simple for Apple to solve.