I've been a little absent from this blog for a while, mostly because I've been settling in to a new job, where I've briefly changed my focus almost completely from application security to being a software developer.
The blog absence is going to change now, and I'd like to start that with a renewed effort to write something every week. In addition to whatever grabs my attention from the security news feeds I still suck up, I want to get across some of the knowledge and approaches I've used while working as an application security guy. I'll likely be an application security guy in my next job, whenever that is, so it'll stand me in good stead to write what I think.
The phrase "One Simple Thing" underscores what I try to return to repeatedly in my work – that if you can get to the heart of what you're working on, everything else flows easily and smoothly.
This does not mean that there's only one thing to think about with regard to security, but that when you start asking clarifying questions about the "one simple thing" that drives – or stops – a project in the moment, it's a great way to make tremendous progress.
I'll start by discussing the One Simple Thing I pick up by default whenever I'm given a security challenge.
What are we protecting?
This is the first question I ask on joining a new security team – often as early as the first interviews. Everyone has a different answer, and it's a great way to find out what approaches you're likely to encounter. The question also has several cling-on questions that it demands be asked and answered at the same time:
Why are we protecting it?
Who are we protecting it from?
Why do they want it?
Why shouldnât they get it?
What are our resources?
These come very quickly out of the One Simple Thing of "what are we protecting?".
Here are some typical answers:
You can see from the selection of answers that not everyone has anything like the same approach, and that they don't all line up exactly under the typical buckets of Confidentiality, Integrity and Availability.
Do you think someone can solve your security issues, or set up a security team, without first finding out what it is you're protecting?
Do you think you can engage with a team on security issues without understanding what they think they're supposed to be protecting?
You've seen from my [short] list above that there are many answers to be had between different organisations and companies.
I'd expect there to be different answers within an organisation, within a team, within a meeting room, and even depending on the time I ask the question.
"What are we protecting?" on the day of the Equifax leak quickly becomes a conversation on personal data, and the damaging effect of a leak to "customers". [I prefer to call them "data subjects", because they aren't always your customers.]
On the day that Yahoo gets bought by Verizon for substantially less than initially offered, the answer becomes more about company value, and even perhaps executive stability.
Next time you're confused by a security problem, step back and ask yourself – and others – "What are we protecting?", and see how much it clarifies your understanding.
Sometimes, it's just my job to find vulnerabilities, and while that's kind of fun, it's also a little unexciting compared to the thrill of finding bugs in other people's software and getting an actual "thank you", whether monetarily or just a brief word.
About a year ago, I found a minor Cross-Site Scripting (XSS) flaw in a major company's web page, and while it wasn't a huge issue, I decided to report it, as I had a few years back with a similar issue in the same web site. I was pleased to find that the company was offering a bounty programme, and simply emailing them would submit my issue.
The first thing to notice, as with so many XSS issues, is that there were protections in place that had to be got around – in this case, a WAF blocking some special characters and sequences. But not all of them. And it's really telling that there are still many websites which have not implemented widespread input validation / output encoding as their XSS protection. So, while the WAF slowed me down even when I knew the flaw existed, it only added about 20 minutes to the exploit time; my example had to use "confirm()" instead of "alert()" or "prompt()". But really, if I were an attacker, my exploit wouldn't use any of those functions, and would probably include an encoded script that wouldn't be detected by the WAF either. WAFs are great for preventing specific attacks, but aren't a strong protection against an adversary with a little intelligence and understanding.
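For contrast, the defence I'd rather see is output encoding applied at the exact point where user data hits the page. A minimal Python sketch (the function name and payload are mine, purely for illustration):

```python
import html

def render_comment(user_input: str) -> str:
    # Encode at output time, so the browser treats the input as text,
    # not markup -- no WAF guesswork about which sequences are "bad".
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

# The same sort of payload a WAF might miss comes out inert:
print(render_comment('<img src=x onerror=confirm(1)>'))
# <p>&lt;img src=x onerror=confirm(1)&gt;</p>
```

No blocklist to bypass – the payload survives intact but can never execute.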
My email resulted in an answer that same day, less than an hour after my initial report. A simple "thank you", and "we're forwarding this to our developers", goes a long way to keeping a security researcher from idly playing with the thought of publishing their findings and moving on to the next game.
If they'd told me "hey, we're putting in a WAF rule while we work on fixing the actual bug", I wouldn't have been so eager to grump back at them and say they hadn't fixed the issue by applying their WAF, and by the way, here's another URL to exploit it. But they did at least respond to my grump and reassure me that, yes, they were still going to fix the application.
I heard nothing after that, until in February of this year, over six months later, I replied to the original thread and asked if the report qualified for a bounty, since I noticed that they had actually fixed the vulnerability.
No response. I started thinking of writing this up as an example of how security researchers still get shafted by businesses – bear in mind that my approach is not to seek out bounties for reward, but that I really do think it's common courtesy to thank researchers for reporting to you, rather than pwning your website and/or your customers.
About a month later, while looking into other things, I found that the company exists on HackerOne, where they run a bug bounty. This renewed my interest in seeing this fixed. So I reported the email exchange from earlier, noted that the bug was fixed, and asked if it constituted a rewardable finding. Again, a simple "thanks for the report, but this doesn't really rise to the level of a bounty" is something I've been comfortable with from many companies (though it is nice when you do get something, even if it's just a keychain or a t-shirt, or a bag full of stickers).
3/14: I got a reply the next day, indicating that "we are investigating".
3/28: Then nothing for two weeks, so I posted another response asking where things were going.
4/3: Then a week later, a response. "We're looking into this and will be in touch soon with an update."
4/18: Me: Ping?
5/7: Me: Hey, how are we doing?
5/16: Anything happening?
5/18: Finally, over two months after my report to the company through HackerOne, and ten months after my original email to the first bug bounty address, it's addressed.
5/19: The severity of the bug report is lowered (quite rightly – the questionnaire they used pushed me to a priority of "high", which was by no means warranted). A very welcome bounty, and a bonus for my patience – unexpected, but welcome – are issued.
The cheapest way to learn things is from someone else's mistakes. So I decided to share with my readers the things I picked up from this experience.
Here are a few other lessons I've picked up from bug bounties I've observed:
If you start a bug bounty, consider how ready you might be. Are you already fixing all the security bugs you can find for yourself? Are you at least fixing those bugs faster than you can find more? Do your developers actually know how to fix a security bug, or how to verify a vulnerability report? Do you know how to expand on an exploit, and find occurrences of the same class of bug? [If you don't, someone will milk your bounty programme by continually filing variations on the same basic flaw.]
How many security vulnerabilities do you think you have? Multiply that by an order of magnitude or two. Now multiply that by the average bounty you expect to offer. Add the cost of the personnel who are going to handle incoming bugs, and the cost of the projects they could otherwise be engaged in. Add the cost of the developers whose work will be interrupted to fix security bugs, and add the cost of the features that didn't get shipped on time because of those fixes. Sure, some of that is just a normal cost of doing business, when a security report could come at you out of the blue and interrupt development until it's fixed, but starting a bug bounty paints a huge target on you.
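To make that arithmetic concrete, here's a back-of-the-envelope sketch – every number in it is invented purely for illustration:

```python
def bounty_first_year_cost(known_bugs, lurk_factor, avg_bounty,
                           triage_staff_cost, dev_interruption_cost):
    # Payouts: the bugs you already know about, multiplied by however
    # many more you believe are lurking, at your average bounty rate.
    expected_payouts = known_bugs * lurk_factor * avg_bounty
    # Plus the people handling reports, and the features that slip.
    return expected_payouts + triage_staff_cost + dev_interruption_cost

# 20 known bugs, a 10x lurk factor, $500 a bounty, plus made-up
# annual figures for triage staff and interrupted development.
print(bounty_first_year_cost(20, 10, 500, 150_000, 250_000))  # 500000
```

Even with modest assumptions, the payouts are the smallest line item.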
Hiring a penetration tester, or renting a tool to scan for programming flaws, has a fixed cost – you can simply tell them how much you're willing to pay, and they'll work for that long. A bug bounty may result in multiple orders of magnitude more findings than you expected. Are you going to pay them all? What happens when your bounty programme runs out of money?
Finding bugs internally, using bug bashes, software scanning tools or dedicated development staff, has a fixed cost, which is probably still smaller than the amount of money you're considering putting into that bounty programme.
That's not to say bug bounties are always going to be uneconomical. At some point, in theory at least, your development staff will be sufficiently good at resolving and preventing security vulnerabilities that are discovered internally that they will be running short of security bugs to fix. They still exist, of course, but they're more complex and harder to find. This is where it becomes economical to lure a bunch of suckers – excuse me, security researchers – to pound against your brick walls until one of them, either stronger or smarter than the others, finds the open window nobody saw, and reports it to you. And you give them a few hundred bucks – or a few thousand, if it's a really good find – for the time that they and their friends spent hammering away in futility until that one successful exploit.
At that point, your bug bounty programme is actually the least expensive tool in your arsenal.
I'm pretty much unhappy with the use of "Security Questions" – things like "what's your mother's maiden name", or "what was your first pet". These questions are sometimes used to strengthen an existing authentication control (e.g. "you've entered your password on a device that wasn't recognised, from a country you normally don't visit – please answer a security question"), but far more often they are used as a means to recover an account after the password has been lost, stolen or changed.
I've been asked a few times, given that these are pretty widely used, to explain objectively why I have so little regard for them as a security measure. Here's the Too Long; Didn't Read summary:
Let's take them one by one:
What's your favourite colour? Blue, or green. At the outside, red, yellow, orange or purple. That covers most people's choices, in less than 3 bits of entropy.
What's your favourite NBA team? There are 29 of those – 30, if you count the 76ers. That's still under 5 bits of entropy.
Obviously, there are questions that broaden this, but they're still relatively easy to guess in a small number of tries – particularly when you can use the next fact about Security Questions.
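Those entropy figures are nothing fancier than a log base 2 of the number of plausible answers – a quick sketch:

```python
import math

def entropy_bits(n_choices: int) -> float:
    # Entropy of a uniformly random pick among n equally likely options.
    return math.log2(n_choices)

print(round(entropy_bits(6), 1))    # favourite colour: 2.6 bits
print(round(entropy_bits(30), 1))   # favourite NBA team: 4.9 bits
print(round(entropy_bits(94) * 8))  # 8 random printable chars: 52 bits
```

And real answers are worse than uniform – "blue" is far likelier than "puce" – so these numbers are upper bounds.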
What's your mother's maiden name? It's a matter of public record.
What school did you go to? If we know where you grew up, it's easy to guess this, since there were probably only a handful of schools you could possibly have attended.
Who was your first boyfriend/girlfriend? Many people go on about this at length in Facebook posts, I'm told. Or there's this fact:
What's your porn name? What's your Star Wars name? What's your Harry Potter name?
All these stupid quizzes, and they get you to identify something about yourself – the street you grew up on, the first initial of your secret crush, how old you were when you first heard saxophones.
And, of course, because of the next fact, all I really have to do is convince you that you want a free account at my site.
Every site that you visit asks you variants of the same security questions – which means that you'll have told multiple sites the same answers.
You've been told over and over not to share your password across multiple sites – but here you are, sharing the security answers that will reset your password, and doing so across multiple sites that should not be connected.
And do you think those answers (and the questions they refer back to) are kept securely by these various sites? No, because:
There's regulatory protection, under regimes such as PCI, etc., telling providers how to protect your passwords.
There is no such advice for protecting security questions (which are usually public) and the answers to them, which are at least presumed to be stored in a back-end database, but are occasionally sent to the client for comparison against the user's answers! That's a truly bad security measure, because of course you're handing the answers straight to the attacker.
Even assuming the security answers are stored in a database, they're generally stored in plain text, so that they can be accessed by phone support staff to verify your answers when you call up crying that you've forgotten your password. [Awesome pen-testing trick.]
And because the answers are shared everywhere, all it takes is a breach at one provider for the security questions and answers they hold to lose all security value.
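If you're stuck implementing these anyway, the least you can do is treat the answers like the passwords they are: normalise them, then salt and hash, rather than storing plain text. A sketch – the normalisation rules here are my own assumption, not a standard:

```python
import hashlib
import hmac
import os

def hash_answer(answer: str, salt: bytes) -> bytes:
    # Normalise so "Fluffy " and "fluffy" verify the same way, then
    # run a slow salted hash -- never store the answer itself.
    normalised = " ".join(answer.lower().split()).encode("utf-8")
    return hashlib.pbkdf2_hmac("sha256", normalised, salt, 100_000)

salt = os.urandom(16)          # unique per user, stored beside the hash
stored = hash_answer("Fluffy", salt)
print(hmac.compare_digest(stored, hash_answer("  fluffy ", salt)))  # True
```

Phone support can no longer read the answer back, which is the point – they shouldn't be able to.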
There's an old joke in security circles: "my password got hacked, and now I have to rename my dog". It's really funny, because there are so many of these security answers which are matters of historical fact – while you can choose different questions, you can't generally choose a different answer to the same question.
Well, obviously, you can, but then you've lost the point of a security question and answer – because now you have to remember what random lie you used to answer that particular question on that particular site.
Yes, I know you can lie – you can put in random letters or phrases, and the system may take them ("Your place of birth cannot contain spaces" – so Las Vegas, New York and Lake Windermere are all unusable). But then you've just created another password to remember – and the point of these security questions is to let you log on once you've forgotten your password.
So, you've forgotten your password, but to get it back, you have to remember a different password, one that you never used. There's not much point there.
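As a user stuck with a site that insists on these questions, the least-bad option I know of is to lean into exactly that problem: treat the answer as one more random password and let a password manager remember it. A sketch (the function is mine, purely illustrative):

```python
import secrets

def random_security_answer(n_bytes: int = 16) -> str:
    # A unique, meaningless answer per site -- nothing an attacker can
    # look up on Facebook, and nothing shared between sites.
    return secrets.token_urlsafe(n_bytes)

print(random_security_answer())  # a different random string every call
```

It doesn't fix the scheme, but it stops your dog's name being a skeleton key.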
Security questions and answers, when used for password recovery / reset, are complete rubbish.
Security questions are low-entropy, predictable and discoverable password substitutes that are shared across multiple sites, are under- or un-protected, and (like fingerprints) really can't be changed if they become exposed. This makes them totally unsuited to being used as password equivalents in account recovery / password reset schemes.
If you have to implement an account recovery scheme, find something better to use. In an enterprise, as I've said before, your best bet is to use something that the enterprise does well – the management hierarchy. Every time you forget your password, you have to get your manager, or someone at the next level up from them, to reset your password for you, or to vouch for you to tech support. That way, someone who knows you, and can affect your behaviour in a positive way, will know that you keep forgetting your password and could do with some assistance. In a social network, require the
Also, password hints are bullshit. Many of the Adobe breach's "password hints" were actually just the password in plain text. And, because Adobe didn't salt their password hashes, you could sort the list of password hashes, and pick whichever of the password hints was either the password itself, or an easy clue for the password. So, even if you didn't use the password hint yourself, or chose a really cryptic clue, some other idiot came up with the same password, and gave a "Daily Express Quick Crossword" quality clue.
Sometimes I think that title is the job of the Security Engineer – as a Subject Matter Expert, we're supposed to meet with teams and tell them how their dreams are going to come crashing down around their ears because of something they hadn't thought of, but which is obvious to us.
This can make us just a little bit unpopular.
But being argumentative and sceptical isnât entirely a bad trait to have.
Sometimes it comes in handy when other security guys spread their various statements of doom and gloom – or joy and excitement.
"Rename your administrator account so it's more secure" – or lengthen the password and achieve the exact same effect, without breaking scripts or requiring extra documentation so people know what the administrator is called this week.
"Encrypt your data at rest by using automatic database encryption" – which means any app that authenticates to the database can read that data back out, voiding the protection that was the point of encrypting at rest. If fields need encrypting, maybe they need field-level access control, too.
"Complex passwords: one lower case, one upper case, one number, one symbol, no repeated letters" – or else, measure strength in more interesting ways, and display to users how strong their password is, so that a longish phrase, used by a competent typist, becomes an acceptable password.
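The arithmetic behind that last point, idealised: it assumes symbols are chosen uniformly at random, which real users don't do, so treat these as upper bounds rather than a real strength meter.

```python
import math

def strength_bits(alphabet_size: int, length: int) -> float:
    # Bits of entropy for a password of `length` symbols drawn
    # uniformly from an alphabet of `alphabet_size` symbols.
    return length * math.log2(alphabet_size)

# An 8-character "complex" password over ~94 printable characters...
print(round(strength_bits(94, 8)))   # 52 bits
# ...versus a 20-character lower-case-and-spaces typed phrase.
print(round(strength_bits(27, 20)))  # 95 bits
```

Length wins, and the phrase is the one people can actually type and remember.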
Now I'm going to commit absolute heresy, as I'm going against the biggest recent shock news in security advice.
I understand the arguments, and I know I'm frequently irritated with the unnecessary requirements to change my password after sixty days, and, even more so, I know that the reasons behind password expiration settings are entirely arbitrary.
There's a good side to password expiry.
These aren't the only ways in which passwords are discovered.
The method that frequently gets overlooked is when they are deliberately shared.
"Bob's out this week, because his mother died, and he has to arrange details in another state. He didn't have time to set up access control changes before he left, but he gave me a sticky note with his password on it, so that we don't need to bother him for anything."
"Everyone on this team has to monitor and interact with so many shared service accounts, we just print off a list of all the service account passwords. You can photocopy my laminated card with the list, if you like."
Yes, those are real situations I've dealt with, and they have some pretty obvious replacement solutions:
Bob (or Bob's manager, if Bob is too distraught to talk to anyone, which isn't at all surprising) should notify a system administrator, who can then respond to requests to open up ACLs as needed, rather than someone using Bob's password. But he didn't.
When Bob comes back, is he going to change his password?
No, because he trusts his assistant, Dave, with his communications.
But, of course, Dave handed out Bob's password to the sales VP, because it was easier for him than fetching up the document she wanted. And sales VPs just can't be trusted. Now the entire sales team knows Bob's password. And then one of them gets fired, or hired on at a new competitor. The temptation to log on to Bob's account – just once – is immense, because that list of customers is just so enticing. And really, who would ever know? And if they did know, everyone has Bob's password, so it's not like they could prosecute you, because they couldn't prove it was you.
What's going to save Bob is if he is required to change his password when he returns.
Yes, this also happened. Because we found the photocopy of the laminated sheet folded up on the floor of a hallway outside the lavatory door.
There was some disciplining involved. Up to, and possibly including, the termination of employment, as policy allows.
Then the bad stuff happened.
The team who shared all these passwords pointed out that, as well as these being admin-level accounts, they had other special privileges, including the avoidance of any requirement to change passwords.
These passwords hadn't changed in six years.
And the team had no idea what would break if they changed the passwords.
Maybe one of those passwords is hard-coded into a script somewhere, and vital business processes would grind to a halt if the password was changed.
When I left six months later, they were still (successfully) arguing that it would be too dangerous to try changing the passwords.
I'm not familiar with any company that acknowledges in policy that users share passwords, nor with one that sets out the expected behaviour when they do [log when you shared it, and who you shared it with, then change it as soon as possible after it no longer needs to be shared].
Once you accept that passwords are shared for valid reasons, even if you don't enumerate what those reasons are, you can come up with processes and tools to make that sharing more secure.
If there were a process for Bob to share his password with Dave – maybe outlining the creation of a temporary password, reading Dave in on when he can share the password (probably never) and how he is expected to behave, and making him co-responsible for any bad things done in Bob's account – suddenly there's a better chance Dave's not going to share. "I can't give you Bob's password, but I can get you that document you're after."
If there were a tool in which the team managing shared service accounts could find and unlock access to passwords, that tool could also be configured to distribute changed passwords to the affected systems after work had been performed.
If you don't have these processes or tools, the only protection you have against password sharing (apart from the obviously failing advice to "just don't do it") is regular password expiry.
I'm also fond of talking about password expiration as a means to train your users.
Certificates expire once a year, and as a result, programmers write code as if it's never going to happen. After all, there's plenty of time between now and next year to write the "renew certificate" logic, and by the time it's needed, I'll be working on another project anyway.
If passwords don't expire – or don't expire often enough – users will not have changed their password anything like recently enough to remember how to do so if they have to in an emergency.
So, when a keylogger is discovered to be caching all the logons in the conference room, or the password hashes have been posted on Pastebin, most of your users – even the ones approving the company-wide email request for action – will fight against a password change request, because they just don't know how to do it, or what might break when they do.
Unless they've been through it before, and it was no great thing. In which case, they'll briefly sigh, and then change their password.
This is where I equivocate and say that yes, I broadly think the advice to reduce or remove password expiration is appropriate. My arguments above are mainly about things we've forgotten to remember – reasons why we might have included password expiry to begin with.
Here, in closing, are some ways in which password expiry is bad, just to balance things out:
Right now, this is where we stand:
As a result, especially of this last item, I don't think businesses can currently afford to remove password expiry from their accounts.
But any fool can see which way the wind is blowing – at some point, you will be able to excuse your company from password expiry, but just in case your compliance standard requires it, you should have a very clear and strong story about how you have addressed the risks that were previously resolved by expiring passwords as frequently as once a quarter.
There are many reasons why Information Security hasn't had as big an impact as it deserves. Some are external – lack of funding, lack of concern, poor management, distractions from valuable tasks, etc., etc.
But the ones we inflict on ourselves are probably the most irritating. They make me really cross.
We shoot ourselves in the foot by confusing our customers between Cross-Site Scripting, Cross-Site Request Forgery & Cross-Frame Scripting.
— Alun Jones (@ftp_alun) February 26, 2016
OK, "cross" is an English term for "angry", or "irate", but as with many other English words, it's got a few other meanings as well.
It can mean to wrong someone, or go against them – "I can't believe you crossed Fingers MacGee".
It can mean to make the sign of a cross – "Did you just cross your fingers?"
It can mean a pair of items, intersecting one another – "I'm drinking at the sign of the Skull and Cross-bones".
It can mean to breed two different subspecies into a third – "What do you get if you cross a mountaineer with a mosquito? Nothing – you can't cross a scaler and a vector."
Or it can mean to traverse something – "I don't care what Darth Vader says, I always cross the road here".
It's this last sense that InfoSec people seem obsessed with, to the extent that every other attack seems to require it as its first word.
And that's just the attacks listed at OWASP that begin with the word "Cross".
Yesterday I had a meeting to discuss how to address three bugs found in a scan, and I swear I spent more than half the meeting trying to ensure that the PM and the Developer in the room were both discussing the same bug. [And here, I paraphrase.]
"How long will it take you to fix the Cross-Frame Scripting bug?"
"We just told you – it's going to take a couple of days."
"No, that was for the Cross-Site Scripting bug. I'm talking about the Cross-Frame Scripting issue."
"Oh, that should only take a couple of days, because all we need to do is encode the contents of the field."
"No, again, that's the Cross-Site Scripting bug. We already discussed that."
"I wish you'd make it clear what you're talking about."
Yeah, me too.
The whole point of the word "Cross", as used in the descriptions of these bugs, is to indicate that someone is doing something they shouldn't – and in that respect, it's pretty much a completely irrelevant word, because we're already discussing attack types.
In many of these cases, the words "Cross-Site" bring absolutely nothing to the discussion, and just make things confusing. Am I crossing a site from one page to another, or am I saying this attack occurs between sites? What if there's no other site involved – is that still a cross-site scripting attack? [Yes, but that's an irrelevant question, and by asking it, or thinking about asking/answering it, you've reduced the mental processing you have available to handle the actual issue.]
Check yourself when you utter "cross" as the first word in the description of an attack, and ask if you're communicating something of use, or just "sounding like a proper InfoSec tool". Consider whether there's a better term to use.
Cross-Frame Scripting is really Click-Jacking (and yes, that doesn't exclude clickjacking activities done by a keyboard or other non-mouse source).
Cross-Site Request Forgery is more of a Forced Action – an attacker can guess what URL would cause an action without further user input, and can cause a user to visit that URL in a hidden manner.
Cross-Site History Manipulation is more of a browser failure to protect the Same Origin Policy – I'm not an expert in that field, so I'll leave it to the experts to figure out a non-confusing name.
Cross-Site Tracing is just getting silly – it's Cross-Site Scripting (excuse me, HTML Injection) using the TRACE verb instead of the GET verb. If you allow TRACE, you've got bigger problems than XSS.
Cross-User Defacement crosses all the way into crosstalk, requiring as it does that two users share the same TCP connection with no adequate delineation between them. This isn't really common enough to need a name that gets capitalised. It's HTTP Response-Splitting over a shared proxy with shitty user segregation.
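On the TRACE point: it's easy to check whether a server honours the verb, using nothing but the standard library. A sketch – the throwaway local server below is just a stand-in for whatever host you'd actually test (Python's base handler, having no do_TRACE, refuses with a 501):

```python
import http.client
import http.server
import threading

def trace_enabled(host: str, port: int, path: str = "/") -> bool:
    # A 2xx answer to TRACE means the server will echo requests back,
    # request headers (and any cookies) included.
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("TRACE", path)
    status = conn.getresponse().status
    conn.close()
    return 200 <= status < 300

# Demo against a one-shot local server that rejects unknown verbs.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.BaseHTTPRequestHandler)
threading.Thread(target=server.handle_request, daemon=True).start()
print(trace_enabled("127.0.0.1", server.server_address[1]))  # False
server.server_close()
```

If that function ever returns True against one of your servers, turn TRACE off before worrying about the scripting part.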
I don't remotely anticipate that I'll change the names people give to these vulnerabilities in scanning tools or in pen-test reports.
But I do hope you'll be able to use these to stop confusion in its tracks, as I did:
"Never mind cross-whatever – let's talk about how long it's going to take you to address the clickjacking issue."
Here's the TL;DR version of the web post:
Prevent or interrupt confusion by referring to bugs using the following non-confusing terms:
| Confusing | Not Confusing Much, Probably |
| --- | --- |
| Cross-Site History Manipulation | [Not common enough to name] |
| Cross-Site Tracing | TRACE is enabled |
| Cross-Site Request Forgery | Forced User Action |
| Cross-Site Scripting | HTML Injection |
| Cross-User Defacement | Crappy proxy server |
Back when I started developing code, and that was a fairly long time ago, the vast majority of developers I interacted with had taken that job because they were excited to be working with technology, and enjoyed instructing and controlling computers to an extent that was perhaps verging on the creepy.
Much of what I read about application security strongly reflects this even today, where developers are exhorted to remember that security is an aspect of the overall quality of your work as a developer.
This is great – for those developers who care about the quality of their work. The artisans, if you like.
For every artisan I meet when talking to developers, there are another two or three who are more like labourers.
They turn up on time, they do their daily grind, and they leave on time – even if the time expected / demanded of them is longer than the usual eight hours a day.
By itself, this isn't a bad thing. When you need another pair of "OK" and "Cancel" buttons, you want someone to hammer them out, not hand-craft them in bronze. When you need an API to a back-end database, you want it thin and functional, not baroque and beautiful.
It's important to note that these guys mostly do what they are told. They are clever, and can be told to do complex things, but they are not single-mindedly interested in the software they are building, except in as much as you will reward them for delivering it.
If these developers will build only the software they're told to build, what are you telling them to build?
At any stage, are you actively telling your developers that they have to adhere to security policies, or that they have to build in any kind of "security best practice", or even to "think like an attacker"? (Much as I hate that phrase – I'd rather you tell them to "think about all the ways every part of your code can fail, and work to prevent them": in other words, "think like a defender".)
Some of your developers will interject their own ideas of quality.
– But –
Most of your developers will only do as they have been instructed, and as their examples tell them.
The first thing to note is that you won't reach these developers just with optional training, and you might not even reach them just with mandatory training. They will turn up to mandatory training, because it is required of them, and they may turn up to optional training because they get a day's pay for it. But all the appeals to them to take on board the information you're giving them will fall upon deaf ears if they return to their desks and don't get follow-up from their managers.
When your AppSec programme makes training happen, your developers' managers must make it clear to their developers that they are expected to take part, they are expected to learn something, and they are expected to come back and start using and demonstrating what they have learned.
Curiously enough, that's also helpful for the artisans.
Second, don't despair about these developers. They are useful and necessary, and as with all binary distinctions, the lines are not black and white – they are a spectrum of colours. There are developers at all stages between the "I turn up at 10, I work until 6 (as far as you know), and I do exactly what I'm told" end and the "I love this software as if it were my own child, and I want to mould it into a shining paragon of perfection" end.
Don't despair, but be realistic about who you have hired, and who you will hire as a result of your interview techniques.
Third, if you want more artisans and fewer labourers, the only way to do that is to change your hiring and promotion techniques.
Screen for quality-biased developers during the interview process. Ask them "what's wrong with the code", and reward them for saying "it's not very easy to understand, the comments are awful, it uses too many complex constructs for the job it's doing, etc."
Reward quality where you find it. "We had feedback from one of the other developers on the team that you spent longer on this project than expected, but produced code that works flawlessly and is easy to maintain – you exceed expectations."
Labourers, as opposed to artisans, have no internal "quality itch" to scratch, which means quality bars must be externally imposed, measured, and enforced.
What are you doing to reward developers for securing their development?
The first problem any security project has is to get executive support. The second problem is to find a way to make use of and direct that executive support.
Developers should be prepared to defend against a Manager in the Middle attack.
— Alun Jones (@ftp_alun) November 9, 2015
So, that was the original tweet that seems to have been a little popular (not fantastically popular, but then I only have a handful of followers).
I'm sure a lot of people thought it was just an amusing pun, but it's actually a realisation on my part that there's a real thing that needs naming here.
By and large, the companies I've worked for and/or with in the last few years have experienced a glacial but certain shift in perspective.
Where once the security team seemed to be perceived as a necessary nuisance by the executive layers, it seems clear now that there have been sufficient occurrences of bad news (and CEOs being forced to resign) that executives come TO the security team for reassurance that they won't become the next… well, the next whatever the last big incident was.
Obviously, those executives still have purse strings to manage, and most security professionals like to get paid, because that's largely what distinguishes them from security amateurs. So security can't get ALL the dollars, but it's generally easier to get the money and the firepower for security than it ever was in the past.
So executives support security. Some of them even ask what more they can do – and they seem sincere.
Well, some of them do, but thatâs a topic for another post.
There are enough developers who care about quality and security these days that there's less need to push the security message to developers quite the way we used to.
We've mostly reached those developers who are already on our side.
And those developers can mentor other developers who aren't so keen on security.
The security-motivated developers want to learn more from us, they're aware that security is an issue, and for the most part, they're capable of finding and even distinguishing good security solutions to use.
If the guys at the top, and the guys at the bottom (sorry devs, but the way the org structure goes, you don't manage anyone, so ipso facto you are at the bottom, along with the cleaners, the lawyers, and the guy who makes sure the building doesn't get stolen in the middle of the night) care about security, why are we still seeing sites get attacked successfully? Why are apps still being successfully exploited?
Why is it that I can exploit a web site with SQL injection, an attack that has been around for as long as many of the developers at your company have been aware of computers?
Someone is getting in the way.
Ask anyone in your organisation if they think security is important, and you'll get varying answers, most of which acknowledge that the software being developed needs security – so it's clear that you can't actually poll people that way for the right answer.
Often it's the security team – because it's really hard to fill out a security team, and to stretch out around the organisation.
But that's not the whole answer.
Ask the security-conscious developers what's preventing them from becoming a security expert to their team, and they'll make it clear – they're rewarded and/or punished at their annual review times by the code they produce that delivers features.
And because managers are driving behaviour through performance reviews, it actually doesn't matter what the manager tells their underlings, even if they give their devs a weekly spiel about how important security is. Even if you have an executive show up at their meetings and tell them security is "Job #1". Even if he means it.
Those developers will return to their desks, and they'll look at the goals against which they'll be reviewed come performance review time.
If managers don't specifically reward good security behaviour, most developers will not produce secure code.
This is the Manager in the Middle Attack. Note that it applies in the event that no manager is present (thanks, Dan Kaminsky!)
— Dan Kaminsky (@dakami) November 10, 2015
Because I never like to point out a problem without proposing a solution:
Managers have to actively manage their developers into changing their behaviours. Some performance goals will help, along with the support (financial and moral) to make them achievable.
Here are a few sample goals:
That's quite a bunch of security-related goals for developers, which managers can implement. All of them can be measured, and I'm not so crass as to suggest that I know which numbers will be appropriate to your appetite for risk, or the size of hole out of which you have to dig yourself.
Every year, in October, we celebrate National Cyber Security Awareness Month.
Normally, I'm dismissive of anything with the word "Cyber" in it. This is no exception – the adjective "cyber" is a manufactured word, without root, without meaning, and with only a tenuous association to the world it endeavours to describe.
But that's not the point.
And I do it from a very basic level.
This is not the place for me to assume you've all been reading and understanding security for years – this is where I appeal to readers with only a vague understanding that there's a "security" thing out there that needs addressing.
This first week is all about Information Security – Cyber Security, as the government and military put it – as our shared responsibility.
I'm a security professional, in a security team, and my first responsibility is to remind the thousands of other employees that I can't secure the company, our customers, our managers, and our continued joint success, without everyone pitching in just a little bit.
I'm also a customer, with private data of my own, and I have a responsibility to take reasonable measures to protect that data, and by extension, my identity and its association with me. But I also need others to take up their responsibility in protecting me.
This year, I've had my various identifying factors – name, address, phone number, Social Security Number (if you're not from the US, that's a government identity number that's rather inappropriately used as proof of identity in too many parts of life) – misappropriated by others, and used in an attempt to buy a car, and to file taxes in my name. So, I've filed reports of identity theft with a number of agencies and organisations.
Just today, another breach report arrives, from a company I do business with, letting me know that more data has been lost – this time from one of the organisations charged with actually protecting my identity and protecting my credit.
While companies can – and should – do much more to protect customers (and putative customers), and their data, it's also incumbent on the customers to protect themselves.
Every day, thousands of new credit and debit cards get issued to eager recipients, many of them teenagers and young adults.
Excited as they are, many of these youths share pictures of their new cards on Twitter or Facebook. Occasionally with both sides. There's really not much your bank can do if you're going to react in such a thoughtless way, with a casual disregard for the safety of your data.
Sure, you're only liable for the first $50 of any use of your credit card, and perhaps of your debit card, but it's actually much better to not have to trace down unwanted charges and dispute them in the first place.
So, I'm going to buy into the first message of National Cyber Security Awareness Month – and I'm going to suggest you do the same:
This is really the base part of all security – before doing a thing, stop a moment. Think about whether it's a good thing to do, or has negative consequences you hadn't considered. Connect with other people to find out what they think.
I'll finish tonight with some examples where stopping a moment to think, and connecting with others to pool knowledge, will improve your safety and security online. More tomorrow.
The most common password is "12345678", or "password". This means that many people are using that simple a password. Many more people are using more secure passwords, but they still make mistakes that could be prevented with a little thought.
Passwords leak – either from their owners, or from the systems that use those passwords to recognise the owners.
When they do, those passwords – and data associated with them – can then be used to log on to other sites those same owners have visited. Either because their passwords are the same, or because they are easily predicted. If my password at Adobe is "This is my Adobe password", well, that's strong(ish), but it also gives a hint as to what my Amazon password is – and when you crack the Adobe password leak (that's already available), you might be able to log on to my Amazon account.
Creating unique passwords – and yes, writing them down (or better still, storing them in a password manager), and keeping them safe – allows you to ensure that leaks of your passwords don't spread to your other accounts.
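As a rough illustration of why a password manager helps, here's a sketch of generating an independent random password per site. The alphabet, length, and site names are my own arbitrary choices, not recommendations from any particular standard:

```python
import secrets
import string

# Arbitrary but reasonable character set and length for this sketch.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One independent password per site: a leak at one site tells an
# attacker nothing about your password at any other site.
vault = {site: generate_password() for site in ("adobe.example", "amazon.example")}
```

The point is that the two entries in the vault share nothing: cracking one leak gives no hint about the other, unlike the "This is my Adobe password" pattern above.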
There are exciting events which happen to us every day, and which we want to share with others.
That's great, and it's what Twitter and Facebook are there FOR. All kinds of social media are available for you to share information with your friends.
Unfortunately, it's also where a whole lot of bad people hang out – and some of those bad people are, unfortunately, your friends and family.
Be careful what you share, and if you're sharing about others, get their permission too.
If you're sharing about children, contemplate that there are predators out there looking for the information you may be giving out. There's one living just up the road, I can assure you. They're almost certainly safely withdrawn, and you're protected from them by natural barriers and instincts. But you have none of those instincts on Facebook unless you stop, think and connect.
So don't post addresses, locations, your child's phone number, and really limit things like names of children, friends, pets, teachers, etc – imagine that someone will use that as "proof" to your child of their safety. "It's OK, I was sent by Aunt Josie, who's waiting for you to come and see Dobbie the cat."
Bob's going off on vacation for a month.
Just in case, while he's gone, he's left you his password, so that you can log on and access various files.
Two months later, and the office gets raided by the police. They've traced a child porn network to your company. To Bob.
Well, actually, to Bob and to you, because the system can't tell the difference between Bob and you.
Don't share accounts. Make Bob learn (with the IT department's help) how to share portions of his networked files appropriately. It's really not all that hard.
I develop software. The first thing I write is always a basic proof of concept.
The second thing I write – well, who's got time for a second thing?
Make notes in comments every time you skip a security decision, and make those notes in such a way that you can revisit them and address them – or at least, count them – prior to release, so that you know how deep in the mess you are.
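One lightweight way to make those skipped decisions countable is a marker convention plus a small script. The "SECURITY-TODO" marker and the scan below are purely my own illustration, not an established tool:

```python
import re
from pathlib import Path

# Hypothetical convention: every skipped security decision is marked
# with a "SECURITY-TODO:" comment. This scan counts them before release.
MARKER = re.compile(r"SECURITY-TODO:\s*(.+)")

def count_security_debt(root: str) -> list[str]:
    """Return every SECURITY-TODO note found in Python files under `root`."""
    notes = []
    for path in Path(root).rglob("*.py"):
        for line in path.read_text(encoding="utf-8", errors="ignore").splitlines():
            match = MARKER.search(line)
            if match:
                notes.append(f"{path}: {match.group(1).strip()}")
    return notes
```

Run it in your release checklist: a rising count is a concrete, trackable number for "how deep in the mess you are".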
The recent hack of Ashley Madison, and the subsequent discussion, reminded me of something I've been meaning to talk about for some time.
This is usually expressed, as my title suggests, by a user asking the web site who hosted that user's account (and usually directly as a result of a data breach) why that web site still had the user's data.
This can be because the user deliberately deleted their account, or simply because they haven't used the service in a long time, and only remembered that they did by virtue of a breach notification letter (or a web site such as Troy Hunt's haveibeenpwned.com).
Web sites do not see it as a good idea to have a "delete" feature for their user accounts – after all, what you're asking is for a site to devote developer resources to a feature that specifically curtails the ability of that web site to continue to make money from the user.
To an accountant's eye (or a shareholder's), that's money out the door with the prospect of reducing money coming in.
To a user's eye, it's a matter of security and trust. If the developer deliberately misses a known part of the user's lifecycle (sunset and deprecation are both terms developers should be familiar with), it's fairly clear that there are other things likely to be missing or skimped on. If a site allows users to disconnect themselves, to close their accounts, there's a paradox that says more users will choose to continue their service, because they don't feel trapped.
So, let's assume there is a "delete" or "close my account" feature – and that it's easy to use and functional.
In the aftermath of the Ashley Madison hack, I'm sure there's going to be a few partners who are going to engage in retributive behaviours. Those behaviours could include connecting to any accounts that the partners have shared, and causing them to be closed, deleted and destroyed as much as possible. It's the digital equivalent of cutting the sleeves off the cheating partner's suit jackets. Probably.
Assuming you've finally settled down and broken/made up, you'll want those accounts back under your control.
So there might need to be a feature to allow for "remorse" over the deletion of an account. Maybe not for the jealous partner reason, even, but perhaps just because you forgot about a service you were making use of by that account, and which you want to resurrect.
OK, so many sites have a "resurrect" function, or a "cool-down" period before actually terminating an account.
Facebook, for instance, will not delete your account until you've been inactive for 30 days.
Let's say you're a terrorist. Or a violent criminal, or a drug baron, or simply someone who needs to be sued for slanderous / libelous statements made online.
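A minimal sketch of such a cool-down flow, assuming a 30-day remorse window like the Facebook example (the `Account` shape and field names are invented for illustration):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative 30-day remorse window before data is actually purged.
REMORSE_PERIOD = timedelta(days=30)

@dataclass
class Account:
    username: str
    closed_at: Optional[datetime] = None

    def close(self, now: datetime) -> None:
        self.closed_at = now  # deactivate; don't destroy anything yet

    def reopen(self, now: datetime) -> bool:
        """Resurrect the account if we're still inside the remorse period."""
        if self.closed_at is not None and now - self.closed_at < REMORSE_PERIOD:
            self.closed_at = None
            return True
        return False  # too late: the account is queued for purging

    def purgeable(self, now: datetime) -> bool:
        """True once the remorse period has elapsed and deletion may proceed."""
        return self.closed_at is not None and now - self.closed_at >= REMORSE_PERIOD
```

The design choice here is that "close" is a state change, not a deletion: the destructive step happens later, in a purge job that only touches accounts where `purgeable()` is true.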
OK, in this case, you don't WANT the server to keep your history – but to satisfy warrants of this sort, a lawyer is likely to tell the server's operators that they have to keep history for a specific period of time before discarding it. This allows for court orders and the like to be executed against the server to enforce the rule of law.
So your server probably has to hold onto that data for more than the 30 day inactive period. Local laws are going to put some kind of statute on how long a service provider has to hold onto your data.
As an example, a retention notice served under the UK's rather steep RIPA law could say the service provider has to hold on to some types of data for as much as 12 months after the data is created.
If you've paid for the service being provided, those transaction details have to be held for possible accounting audits for the next several years (in the US, between 3 and 7 years, depending on the nature of the business, last time I checked).
Obviously, you're not going to expect an audit to go into complete investigations of all your individual service requests – unless you're billed to that level. Still, this record is going to consist of personal details of every user in the system, amounts paid, service levels given, a la carte services charged for, and some kind of demonstration that service was indeed provided.
So, even if Ashley Madison, or whoever, provided a "full delete" service, there's a record that they have to keep somewhere that says you paid them for a service at some time in the past.
I don't think eternal data retention is appropriate or desirable. It's important for developers to know data retention periods ahead of time, and to build them into the tools and services they provide.
Hackers fetch data from online services. Offline services – truly offline services – are fundamentally impossible to steal over the network. An attacker would have to find the facility where they're stored, or the truck the tapes/drives are traveling in, and steal the data physically.
Not that that's impossible, but it's a different proposition from guessing someone's password and logging into their servers to steal data.
Once data is no longer required for online use, and can be stored, move it into a queue for offline archiving. Developers should make sure their archivist has a data destruction policy in place as well, to get rid of data that's just too old to be of use. Occasionally (once a year, perhaps), they should practice a data recovery, just to make sure that they can do so when the auditors turn up. But they should also make sure that they have safeguards in place to prevent/limit illicit viewing / use of personal data while checking these backups.
Different classifications of data have different retention periods, something I alluded to above. Financial records are at the top end with seven years or so, and the minutiae of day-to-day conversations can probably be deleted remarkably quickly. Some services actually hype that as a value of the service itself, promising the messages will vanish in a snap, or like a ghost.
When developing a service, you should consider how you're going to classify data so that you know what to keep and what to delete, and under what circumstances. You may need a lawyer to help with that.
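A sketch of what that classification might look like in code. The classes and periods below are placeholders, not legal advice: the real numbers come from your lawyers and local law, as noted above (e.g. RIPA's 12 months, or 3 to 7 years for US financial records):

```python
from datetime import date, timedelta

# Illustrative retention schedule: classification -> how long to keep.
# These periods are examples only; get the real ones from counsel.
RETENTION = {
    "financial": timedelta(days=7 * 365),
    "account_profile": timedelta(days=365),
    "chat_message": timedelta(days=30),
}

def may_delete(classification: str, created: date, today: date) -> bool:
    """True once a record has outlived its retention period."""
    period = RETENTION.get(classification)
    if period is None:
        # Unclassified data is the dangerous case: fail loudly, don't guess.
        raise ValueError(f"unclassified data: {classification!r}")
    return today - created >= period
```

Note the failure mode: data with no classification raises an error rather than being kept (or deleted) by default, which forces the lifecycle question to be answered at design time.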
If you lay the frameworks in place when developing a service, so that data is classified and has a documented lifecycle, your service naturally becomes more loosely coupled. This makes it smoother to implement, easier to change, and more compartmentalised. This helps speed future development.
Users who know they can quit are more likely to remain loyal (Apple aside). If a user feels hemmed in and locked in place, all that's required is for someone to offer them a reason to change, and they'll do so. Often your own staff will provide the reason to change, because if you're working hard to keep customers by locking them in, it demonstrates that you don't feel like your customers like your service enough to stay on their own.
Yeah, I know, "to whom you give data", thanks, grammar pedants.
Remember some basic rules here:
Yeah, and Richard Stallman's windows want to be broken.
Data doesn't want anything, but the appearance is that it does, because when data is disseminated, it essentially cannot be returned. Just like if you go to RMS's house and break all his windows, you can't then put the glass fragments back into the frames.
Developers want to possess and collect data – it's an innate passion, it seems. So if you give data to a developer (or the developer's proxy, any application they've developed), you can't actually get it back – in the sense that you can't tell if the developer no longer has it.
Occasionally developers will collect and keep data that they know they shouldn't. Sometimes they'll go and see which famous celebrities used their service recently, or their ex-partners, or their "friends" and acquaintances.
EU data protection laws start from the basic assumption that factual data describing a person is essentially the non-transferrable property of the person it describes. It can be held for that person by a data custodian, a business with whom the person has a business relationship, or which has a legal right or requirement to that data. But because the data belongs to the person, that person can ask what data is held about them, and can insist on corrections to factual mistakes.
The US, and many other countries, start from the premise that whoever has collected data about a person actually owns that data, or at least that copy of the data. As a result, there's less emphasis on openness about what data is held about you, and less access to information about yourself.
Ideally, when the revolution comes and we have a socialist government (or something in that direction), the US will pick up this idea and make it clear that service providers are providing a service and acting only as a custodian of data about their customers.
Until then, remember that US citizens have no right to discover who's holding their data, how wrong it might be, or to ask for it to be corrected.
Developers should also think about this – you can't leak data you don't hold. Similarly, if a user doesn't give data, or gives incorrect or value-less data, then whatever leaks is fundamentally worthless.
The fallout from the Ashley Madison leak is probably reduced significantly by the number of pseudonyms and fake names in use. Probably.
Hey, if you used your real name on a cheating web site, that's hardly smart. But then, as I said earlier today, sometimes security is about protecting bad people from bad things happening to them.
You might use the same nickname at several places; you might provide information that's similar to your real information; you might link multiple pseudonymous accounts together. If your data leaks, can you afford to "burn" the identity attached to the pseudonym?
If you have a long message history, you have almost certainly identified yourself pretty firmly in your pseudonymous posts, by spelling patterns, word usages, etc.
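A toy demonstration of how crude that identification can be and still work: compare word-frequency profiles of two texts with cosine similarity. Real stylometry uses far richer features (punctuation habits, function-word ratios, character n-grams); this is only a sketch of the principle:

```python
from collections import Counter
import math

def profile(text: str) -> Counter:
    """Crude stylistic fingerprint: a bag of lowercased words."""
    return Counter(text.lower().split())

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two word-frequency profiles (0.0 to 1.0)."""
    pa, pb = profile(a), profile(b)
    dot = sum(pa[w] * pb[w] for w in pa)
    norm_a = math.sqrt(sum(v * v for v in pa.values()))
    norm_b = math.sqrt(sum(v * v for v in pb.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0
```

Given a long enough message history, even a measure this crude starts clustering a pseudonym's posts with the author's eponymous writing.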
Leaks of pseudonymous data are less problematic than leaks of eponymous data, but they still have their problems. Unless you're really good at OpSec.
Finally, I was disappointed earlier tonight to see that Troy had already covered some aspects of this topic in his weekly series at Windows IT Pro, but I think you'll see that his thoughts are from a different direction than mine.
Not much has been released about exactly how Premera got attacked, and certainly nothing from anyone with recognised insider knowledge.
Disclaimer: I worked at Premera in the Information Security team, but it's so long ago that any of my internal knowledge is likely incorrect – so I'll only talk about those things that I have seen published.
I am, above all, a customer of Premera's, from 2004 until just a few weeks ago. But I'm a customer with a strong background in Information Security.
Almost everything boils down rather simply to one article as the source of what we know.
February 4, 2015: News stories break about Anthemâs breach (formerly Wellpoint).
January 29, 2015: The date given by Premera as the date when they were first made aware that they'd been attacked.
I don't think that it's a coincidence that these dates are so close together. In my opinion, these dates imply that Anthem / Wellpoint found their own issues, notified the network of other health insurance companies, and then published to the news outlets.
As a result of this, Premera recognised the same attack patterns in their own systems.
This suggests that any other health insurance companies attacked by the same group (alleged to be âDeep Pandaâ) will discover and disclose it shortly.
I've kind of driven in the idea that Anthem used to be called Wellpoint, and the reason I'm bringing this out is that a part of the attack documented by ThreatConnect was to create a site called "we11point.com" – that's "wellpoint.com", but with the two letter "els" replaced with two "one" digits.
That's relevant because the ThreatConnect article also called out that there was a web site called "prennera.com" created by the same group.
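To make the trick concrete, here's a sketch that generates lookalike spellings of a brand name using just the substitutions seen in those two domains (l to 1, m to rn or nn, o to 0). A real homograph check would also have to cover Unicode confusables, so treat the substitution table as illustrative:

```python
from itertools import product

# Tiny, illustrative substitution table based on we11point.com and
# prennera.com. Real attackers have a much larger repertoire.
SUBSTITUTIONS = {
    "l": ["l", "1"],
    "m": ["m", "rn", "nn"],
    "o": ["o", "0"],
    "i": ["i", "1"],
}

def lookalikes(name: str) -> set[str]:
    """All spellings of `name` under the substitution table (including itself)."""
    choices = [SUBSTITUTIONS.get(ch, [ch]) for ch in name]
    return {"".join(combo) for combo in product(*choices)}

def is_lookalike(candidate: str, brand: str) -> bool:
    """True if `candidate` is a confusable variant of `brand` (but not `brand` itself)."""
    return candidate != brand and candidate in lookalikes(brand)
```

A defender could run newly registered domains through a check like this against their own brand names, which is roughly what the reputation management companies mentioned later sell as a service.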
So, given a domain name similar to that of a site you wish to attack, how would you get full access to the company behind that site?
Here's just one way you might mount that attack. There are other ways to do this, but this is the most obvious, given the limited information above.
If you're concerned that I'm telling attackers how to do this, remember that this is obvious stuff. This is already a well known attack strategy, "homograph attacks". This is what a penetration tester will do if you hire one to test your susceptibility to social engineering.
There's no vulnerability involved, there's no particularly obvious technical failing here, it's just the age-old tactic of giving someone a screen that looks like their logon page, and telling them they've failed to logon. I saw this basic form of attack in the eighties, it's that old.
If you've been reading my posts to date, you'll know that I'm aware that security offence is sexy and exciting, but security defence is really where the clever stuff belongs.
I have a few simple recommendations that I think apply in this case:
Another tack that's taken by companies is to engage a reputation management company, to register domain names that are homoglyphs to your own (those that look the same in a browser address bar). Or, to file lawsuits that take down such domains when they appear. Whichever is cheaper. My perspective on this is that it costs money, and is doomed to fail whenever a new TLD arises, or your company creates a new brand.
[Not that reputation management companies can't help you with your domain names, mind you – they can prevent you, for instance, from releasing a product with a name that's already associated with a domain name owned by another company.]
These three steps are somewhat interdependent, and they may cause a certain degree of inconvenience, but they will prevent exactly the kind of attacks I've described. [Yes, there are other potential attacks, but none introduced by the suggested changes]