Sometimes, it's just my job to find vulnerabilities, and while that's kind of fun, it's also a little unexciting compared to the thrill of finding bugs in other people's software and getting an actual "thank you", whether monetarily or just a brief word.
About a year ago, I found a minor Cross-Site Scripting (XSS) flaw in a major company's web page, and while it wasn't a huge issue, I decided to report it, as I had a few years back with a similar issue on the same site. I was pleased to find that the company was offering a bounty programme, and that I could submit my issue simply by emailing them.
The first thing to notice, as with all XSS issues, is that there were protections in place that had to be bypassed. In this case, some special characters or sequences were being blocked, but not all of them. It's telling that many websites still rely on a WAF instead of implementing widespread input validation and output encoding as their XSS protection. While the WAF slowed me down even when I knew the flaw existed, it only added about 20 minutes to the exploit time: my example simply had to use "confirm()" instead of "alert()" or "prompt()". A real attacker's exploit wouldn't use any of those functions anyway, and would probably deliver an encoded script that the WAF wouldn't detect either. WAFs are great for preventing specific attacks, but they aren't strong protection against an adversary with a little intelligence and understanding.
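The weakness of blacklist filtering is easy to demonstrate. Here's a minimal sketch, assuming a toy WAF rule that blocks the obvious proof-of-concept function names (entirely hypothetical, not the vendor's actual rule), contrasted with output encoding, which neutralises every variant regardless of what the payload calls:

```python
import html

def waf_blocks(value: str) -> bool:
    # Toy blacklist rule: block the obvious proof-of-concept functions.
    blocked = ("alert(", "prompt(")
    return any(token in value.lower() for token in blocked)

payloads = [
    '<script>alert(1)</script>',                # caught by the blacklist
    '<script>confirm(1)</script>',              # slips straight past it
    '<script>window["al"+"ert"](1)</script>',   # trivially encoded, also slips past
]

for p in payloads:
    # html.escape turns < > & " into entities, so the browser renders the
    # payload as inert text instead of executing it -- for every variant.
    print(waf_blocks(p), html.escape(p))
```

The blacklist catches only the first payload; output encoding defuses all three, which is why encoding at the output point beats pattern-matching at the perimeter.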
My email resulted in an answer that same day, less than an hour after my initial report. A simple "thank you", and "we're forwarding this to our developers", goes a long way to keeping a security researcher from idly playing with the thought of publishing their findings and moving on to the next game.
If they'd told me "hey, we're putting in a WAF rule while we work on fixing the actual bug", I wouldn't have been so eager to grump back at them, point out that applying a WAF rule doesn't fix the issue, and, by the way, hand them another URL that exploited it. But they did at least respond to my grump and reassure me that, yes, they were still going to fix the application.
I heard nothing after that until February of this year, over six months later, when I replied to the original thread and asked if the report qualified for a bounty, since I had noticed that they had actually fixed the vulnerability.
No response. I started thinking of writing this up as an example of how security researchers still get shafted by businesses. Bear in mind that my approach is not to seek out bounties for reward; I really think it's common courtesy to thank researchers for reporting to you rather than pwning your website and/or your customers.
About a month later, while looking into other things, I found that the company exists on HackerOne, where they run a bug bounty. This renewed my interest in seeing this fixed. So I reported the email exchange from earlier, noted that the bug was fixed, and asked if it constituted a rewardable finding. Again, a simple "thanks for the report, but this doesn't really rise to the level of a bounty" is something I've been comfortable with from many companies (though it is nice when you do get something, even if it's just a keychain or a t-shirt, or a bag full of stickers).
3/14: I got a reply the next day, indicating that "we are investigating".
3/28: Then nothing for two weeks, so I posted another response asking where things were going.
4/3: Then a week later, a response. "We're looking into this and will be in touch soon with an update."
4/18: Me: Ping?
5/7: Me: Hey, how are we doing?
5/16: Me: Anything happening?
5/18: Finally, over two months after my report to the company through HackerOne, and ten months after my original email to the first bug bounty address, it's addressed.
5/19: The severity of the bug report is lowered (quite rightly; the questionnaire they used had pushed me to a priority of "high", which was by no means warranted). A very welcome bounty is issued, along with an unexpected but appreciated bonus for my patience.
The cheapest way to learn things is from someone else's mistakes. So I decided to share with my readers the things I picked up from this experience.
Here are a few other lessons I've picked up from bug bounties I've observed:
If you start a bug bounty, consider how ready you are. Are you already fixing all the security bugs you can find for yourself? Are you at least fixing those bugs faster than you can find more? Do your developers actually know how to fix a security bug, or how to verify a vulnerability report? Do you know how to expand on an exploit and find occurrences of the same class of bug? [If you don't, someone will milk your bounty programme by continually filing variations on the same basic flaw.]
How many security vulnerabilities do you think you have? Multiply that by an order of magnitude or two. Now multiply that by the average bounty you expect to offer. Add the cost of the personnel who are going to handle incoming bugs, and the cost of the projects they could otherwise be engaged in. Add the cost of the developers whose work will be interrupted to fix security bugs, and the cost of the features that slipped while those bugs were being fixed. Sure, some of that is just a normal cost of doing business, when a security report could come at you out of the blue and interrupt development until it's fixed, but starting a bug bounty paints a huge target on you.
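That back-of-the-envelope arithmetic can be sketched in a few lines. Every figure below is a made-up placeholder (none of these numbers come from this post); plug in your own estimates:

```python
# Hypothetical cost model for a bug bounty programme -- all figures are
# illustrative placeholders, not real data.
known_vulns = 10
multiplier = 10          # "an order of magnitude or two": try 10 and 100
avg_bounty = 500         # expected average payout per valid report, in dollars

bounty_cost = known_vulns * multiplier * avg_bounty   # 50,000
triage_cost = 50_000     # staff handling incoming reports, plus their opportunity cost
dev_cost = 80_000        # interrupted feature work while the bugs get fixed

total = bounty_cost + triage_cost + dev_cost
print(total)  # 180000 with these placeholder figures
```

Even with modest placeholder numbers, the payouts themselves are often the smallest term; the triage and interruption costs dominate, which is the paragraph's point.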
Hiring a penetration tester, or renting a tool to scan for programming flaws, has a fixed cost: you can simply tell them how much you're willing to pay, and they'll work for that long. A bug bounty may result in multiple orders of magnitude more findings than you expected. Are you going to pay them all? What happens when your bounty programme runs out of money?
Finding bugs internally, using bug bashes, software scanning tools or dedicated development staff, has a fixed cost, and it's probably still smaller than the amount of money you're considering putting into that bounty programme.
That's not to say bug bounties are always going to be uneconomical. At some point, in theory at least, your development staff will be sufficiently good at resolving and preventing the security vulnerabilities discovered internally that they will be running short of security bugs to fix. Bugs still exist, of course, but they're more complex and harder to find. This is where it becomes economical to lure a bunch of suckers, excuse me, security researchers, to pound against your brick walls until one of them, either stronger or smarter than the others, finds the open window nobody saw and reports it to you. And you give them a few hundred bucks, or a few thousand if it's a really good find, for the time that they and their friends spent hammering away in futility until that one successful exploit.
At that point, your bug bounty programme is actually the least expensive tool in your arsenal.