Security Awareness – Tales from the Crypto


One Simple Thing

I’ve been a little absent from this blog for a while, mostly because I’ve been settling into a new job, where I’ve briefly shifted my focus almost completely from application security to software development.

The blog absence is going to change now, and I’d like to start that with a renewed effort to write something every week. In addition to whatever grabs my attention from the security news feeds I still suck up, I want to get across some of the knowledge and approaches I’ve used while working as an application security guy. I’ll likely be an application security guy in my next job, whenever that is, so it’ll stand me in good stead to write down what I think.

“One Simple Thing”

The phrase “One Simple Thing” underscores what I try to return to repeatedly in my work – that if you can get to the heart of what you’re working on, everything else flows easily and smoothly.

This does not mean that there’s only one thing to think about with regard to security, but that when you start asking clarifying questions about the “one simple thing” that drives – or stops – a project in the moment, it’s a great way to make tremendous progress.

The Simplest Security Thing

I’ll start by discussing the One Simple Thing I pick up by default whenever I’m given a security challenge.

What are we protecting?

This is the first question I ask on joining a new security team – often as early as the first interviews. Everyone has a different answer, and it’s a great way to find out what approaches you’re likely to encounter. The question also has several cling-on questions that it demands be asked and answered at the same time:

Why are we protecting it?

Who are we protecting it from?

Why do they want it?

Why shouldn’t they get it?

What are our resources?

These come very quickly out of the One Simple Thing of “what are we protecting?”.

Here are some typical answers:

  • Our systems need to continue running and be accessible at all times
  • We need to keep making money
  • We need to stop [the risk of] losing money
  • Our customers trust us, and we need that to continue
  • We operate in a regulatory compliance climate, and need to keep our lawsuits down to close to zero
  • We don’t even have a product yet, and haven’t a clue what we need to secure
  • We don’t want a repeat of last week’s / month’s / year’s very public security / privacy debacle
  • We want to balance the amount we spend on security staff against the benefit it has to our bottom line – and demonstrate results

You can see from the selection of answers that not everyone has anything like the same approach, and that they don’t all line up exactly under the typical buckets of Confidentiality, Integrity and Availability.

Do you think someone can solve your security issues or set up a security team without first finding out what it is you’re protecting?

Do you think you can engage with a team on security issues without understanding what they think they’re supposed to be protecting?

Answers vary

You’ve seen from my [short] list above that there are many answers to be had between different organisations and companies.

I’d expect there to be different answers within an organisation, within a team, within a meeting room, and even depending on the time I ask the question.

“What are we protecting” on the day of the Equifax leak quickly becomes a conversation on personal data, and the damaging effect of a leak to “customers”. [I prefer to call them “data subjects”, because they aren’t always your customers.]

On the day that Yahoo gets bought by Verizon for substantially less than initially offered, the answer becomes more about company value, and even perhaps executive stability.

Next time you’re confused by a security problem, step back and ask yourself – and others – “What are we protecting?” and see how much it clarifies your understanding.

Lessons from my first official bounty

Sometimes, it’s just my job to find vulnerabilities, and while that’s kind of fun, it’s also a little unexciting compared to the thrill of finding bugs in other people’s software and getting an actual “thank you”, whether monetarily or just a brief word.

About a year ago, I found a minor Cross-Site Scripting (XSS) flaw in a major company’s web page, and while it wasn’t a huge issue, I decided to report it, as I had a few years back with a similar issue in the same web site. I was pleased to find that the company was offering a bounty programme, and simply emailing them would submit my issue.

Lesson 1: The WAF – it does nothing!

The first thing to notice, as with most XSS issues, is that there were protections in place that had to be worked around. In this case, some special characters and sequences were being blocked – but not all of them. It’s telling that many websites still haven’t implemented thorough input validation / output encoding as their XSS protection. The WAF slowed me down even though I knew the flaw existed, but it only added about 20 minutes to the exploit time – my example simply had to use “confirm()” instead of “alert()” or “prompt()”. Really, though, if I were an attacker, my exploit wouldn’t use any of those functions, and would probably include an encoded script that the WAF wouldn’t detect either. WAFs are great for blocking specific attack strings, but they aren’t a strong protection against an adversary with a little intelligence and understanding.
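To make the “output encoding” point concrete, here’s a minimal sketch in Python of the alternative to a WAF blocklist: encode untrusted input at the point of output, so it no longer matters which characters the WAF did or didn’t block. The render_comment function and its “comment” field are hypothetical, purely for illustration.

```python
import html

def render_comment(user_supplied: str) -> str:
    # Encode on output: <, >, & and quotes can no longer break out of the
    # surrounding HTML, whatever a WAF let through on the way in.
    return '<p class="comment">{}</p>'.format(html.escape(user_supplied, quote=True))

print(render_comment('<img src=x onerror=confirm(1)>'))
# -> <p class="comment">&lt;img src=x onerror=confirm(1)&gt;</p>
```

Encoding has to match the output context (HTML body, attribute, JavaScript, URL), which is why templating engines that encode by default are a far better bet than bolting a WAF on afterwards.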

Lesson 2: A quick response is always welcome

My email resulted in an answer that same day, less than an hour after my initial report. A simple “thank you”, and “we’re forwarding this to our developers” goes a long way to keeping a security researcher from idly playing with the thought of publishing their findings and moving on to the next game.

Lesson 3: The WAF – it still does nothing!

In under a week, I found that the original demo exploit was being blocked by the WAF – but if I replaced “onclick” with “oclick”, “onmouseover” with “omouseover”, and “confirm” with “cofirm”, I found the blocking didn’t get in the way. Granted, since those aren’t real event handlers or JavaScript functions, I can’t use those in a real exploit, but it does mean that once again, all the WAF does is block the original example of the attack, and it took only a few minutes again to come up with another exploit string.

Lesson 4: Keep communicating

If they’d told me “hey, we’re putting in a WAF rule while we work on fixing the actual bug”, I wouldn’t have been so eager to grump back at them, saying that they hadn’t fixed the issue by applying a WAF rule – and, by the way, here’s another URL that exploits it. But they did at least respond to my grump, and reassure me that, yes, they were still going to fix the application.

Lesson 5: Keep communicating

I heard nothing after that, until in February of this year, over six months later, I replied to the original thread and asked if the report qualified for a bounty, since I noticed that they had actually fixed the vulnerability.

Lesson 6: Keep communicating

No response. I started thinking of writing this up as an example of how security researchers still get shafted by businesses – bear in mind that my approach is not to seek out bounties for reward, but I really do think it’s common courtesy to thank researchers for reporting to you, rather than pwning your website and/or your customers.

Lesson 7: Use a broker

About a month later, while looking into other things, I found that the company exists on HackerOne, where they run a bug bounty. This renewed my interest in seeing this fixed. So I reported the email exchange from earlier, noted that the bug was fixed, and asked if it constituted a rewardable finding. Again, a simple “thanks for the report, but this doesn’t really rise to the level of a bounty” is something I’ve been comfortable with from many companies (though it is nice when you do get something, even if it’s just a keychain or a t-shirt, or a bag full of stickers).

Lesson 8: Keep communicating

3/14: I got a reply the next day, indicating that “we are investigating”.

3/28: Then nothing for two weeks, so I posted another response asking where things were going.

4/3: Then a week later, a response. “We’re looking into this and will be in touch soon with an update.”

4/18: Me: Ping?

5/7: Me: Hey, how are we doing?

5/16: Anything happening?

Lesson 9: Deliver

5/18: Finally, over two months after my report to the company through HackerOne, and ten months after my original email to the first bug bounty address, it’s addressed.

5/19: The severity of the bug report is lowered (quite rightly – the questionnaire they used pushed me towards a priority of “high”, which was by no means warranted). A very welcome bounty, plus a bonus for my patience – unexpected, but welcome – is issued.

Lesson 10: Learn

The cheapest way to learn things is from someone else’s mistakes. So I decided to share with my readers the things I picked up from this experience.

Here are a few other lessons I’ve picked up from bug bounties I’ve observed:

Lesson 11: Don’t start until you’re ready

If you start a bug bounty, consider how ready you might be. Are you already fixing all the security bugs you can find for yourself? Are you at least fixing those bugs faster than you can find more? Do your developers actually know how to fix a security bug, or how to verify a vulnerability report? Do you know how to expand on an exploit, and find occurrences of the same class of bug? [If you don’t, someone will milk your bounty programme by continually filing variations on the same basic flaw]

Lesson 12: Bug Bounty programmes may be the most expensive way to find vulnerabilities

How many security vulnerabilities do you think you have? Multiply that by an order of magnitude or two. Now multiply that by the average bounty you expect to offer. Add the cost of the personnel who are going to handle incoming bugs, and the cost of the projects they could otherwise be engaged in. Add the cost of the developers, whose work will be interrupted to fix security bugs, and add the cost of the features that didn’t get shipped on time before they were fixed. Sure, some of that is just a normal cost of doing business, when a security report could come at you out of the blue and interrupt development until it’s fixed, but starting a bug bounty paints a huge target on you.

Hiring a penetration tester, or renting a tool to scan for programming flaws, has a fixed cost – you can simply tell them how much you’re willing to pay, and they’ll work for that long. A bug bounty may result in multiple orders of magnitude more findings than you expected. Are you going to pay them all? What happens when your bounty programme runs out of money?

Finding bugs internally using bug bashes, software scanning tools or dedicated development staff has a fixed cost, which is probably still smaller than the amount of money you’re considering putting into that bounty programme.

12.1: They may also be the least expensive way to find vulnerabilities

That’s not to say bug bounties are always going to be uneconomical. At some point, in theory at least, your development staff will be sufficiently good at resolving and preventing security vulnerabilities that are discovered internally, that they will be running short of security bugs to fix. They still exist, of course, but they’re more complex and harder to find. This is where it becomes economical to lure a bunch of suckers – excuse me, security researchers – to pound against your brick walls until one of them, either stronger or smarter than the others, finds the open window nobody saw, and reports it to you. And you give them a few hundred bucks – or a few thousand, if it’s a really good find – for the time that they and their friends spent hammering away in futility until that one successful exploit.

At that point, your bug bounty programme is actually the least expensive tool in your arsenal.

Security Questions are Bullshit

 

I’m pretty much unhappy with the use of “Security Questions” – things like “what’s your mother’s maiden name”, or “what was your first pet”. These questions are sometimes used to strengthen an existing authentication control (e.g. “you’ve entered your password on a device that wasn’t recognised, from a country you normally don’t visit – please answer a security question”), but far more often they are used as a means to recover an account after the password has been lost, stolen or changed.

 

I’ve been asked a few times, given that these are pretty widely used, to explain objectively why I have so little regard for them as a security measure. Here’s the Too Long; Didn’t Read summary:

  1. Most questions are equivalent to a low-entropy password
  2. Answers to many of these questions can be found online in public documents
  3. Answers can be squeezed out of you, by fair means or foul
  4. The same questions – and answers – are shared at multiple sites
  5. Questions (and answers) don’t get any kind of regulatory protection
  6. Because the best answers are factual and unchanging, you cannot change them if they are exposed

Let’s take them one by one:

Questions are like a low-entropy password

What’s your favourite colour? Blue, or Green. At the outside, red, yellow, orange or purple. That covers most people’s choices, in less than 3 bits of entropy.

What’s your favourite NBA team? There’s 29 of those – 30, if you count the 76ers. That’s still less than 5 bits of entropy.

Obviously, there are questions that broaden this, but the answers are still relatively easy to guess in a small number of tries – particularly when you can use the next fact about Security Questions.
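As a rough back-of-the-envelope illustration of the entropy claim, here’s a tiny sketch. The candidate counts are the guesses from the examples above, not survey data:

```python
import math

def bits_of_entropy(plausible_answers: int) -> float:
    # A question with N roughly equally likely answers is worth ~log2(N) bits.
    return math.log2(plausible_answers)

print(round(bits_of_entropy(6), 1))         # favourite colour: ~2.6 bits
print(round(bits_of_entropy(30), 1))        # favourite NBA team: ~4.9 bits
print(round(bits_of_entropy(62 ** 10), 1))  # 10-char random alphanumeric password: ~59.5 bits
```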

The Answers are available online

What’s your mother’s maiden name? It’s a matter of public record.

What school did you go to? If we know where you grew up, it’s easy to guess this, since there were probably only a handful of schools you could possibly have attended.

Who was your first boyfriend/girlfriend? Many people go on about this at length in Facebook posts, I’m told. Or there’s this fact:

You’ll tell people the answers

What’s your porn name? What’s your Star Wars name? What’s your Harry Potter name?

All these stupid quizzes, and they get you to identify something about yourself – the street you grew up on, the first initial of your secret crush, how old you were when you first heard saxophones.

And, of course, because of the next fact, all I really have to do is convince you that you want a free account at my site.

Answers are shared everywhere

Every site that you visit asks you variants of the same security questions – which means that you’ll have told multiple sites the same answers.

You’ve been told over and over not to share your password across multiple sites – but here you are, sharing the security answers that will reset your password, and doing so across multiple sites that should not be connected.

And do you think those answers (and the questions they refer back to) are kept securely by these various sites? No, because:

Questions & Answers are not protected like passwords

There’s regulatory protection, under regimes such as PCI, etc, telling providers how to protect your passwords.

There is no such advice for protecting security questions (which are usually public) and the answers to them, which are at least presumed to be stored in a back-end database, but are occasionally sent to the client for comparison against the user’s input! That’s a truly bad security measure, because of course you’re handing the attacker the answers.

Even assuming the security answers are stored in a database, they’re generally stored in plain text, so that they can be accessed by phone support staff to verify your answers when you call up crying that you’ve forgotten your password. [Awesome pen-testing trick]
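If you do have to store security answers, they can at least be treated like passwords. Here’s a minimal sketch, assuming a PBKDF2 work factor chosen purely for illustration – the point is that phone support can still verify what a caller says without ever being able to read the answer back:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative cost factor, not a recommendation

def normalise(answer: str) -> bytes:
    # Lower-case and collapse whitespace so "Fluffy  the Cat" and
    # "fluffy the cat" verify as the same answer.
    return " ".join(answer.lower().split()).encode("utf-8")

def hash_answer(answer: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", normalise(answer), salt, ITERATIONS)
    return salt, digest

def verify_answer(answer: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", normalise(answer), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_answer("Fluffy  the Cat")
print(verify_answer("fluffy the cat", salt, stored))  # True
```

That fixes the storage, but not the fundamental problem that the answers themselves are low-entropy, public and shared – which is the rest of this post.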

And because the answers are shared everywhere, all it takes is a breach at one provider to make the security questions and answers they hold have no security value at all any more.

If they’re exposed, you can’t change them

There’s an old joke in security circles, “my password got hacked, and now I have to rename my dog”. It’s really funny, because there are so many of these security answers which are matters of historical fact – while you can choose different questions, you can’t generally choose a different answer to the same question.

Well, obviously, you can, but then you’ve lost the point of a security question and answer – because now you have to remember what random lie you used to answer that particular question on that particular site.

And for all you clever guys out there


Yes, I know you can lie, you can put in random letters or phrases, and the system may take them (“Your place of birth cannot contain spaces” – so, Las Vegas, New York, Lake Windermere are all unusable). But then you’ve just created another password to remember – and the point of these security questions is to let you log on once you’ve forgotten your password.

So, you’ve forgotten your password, but to get it back, you have to remember a different password, one that you never used. There’s not much point there.

In summary

Security questions and answers, when used for password recovery / reset, are complete rubbish.

Security questions are low-entropy, predictable and discoverable password substitutes that are shared across multiple sites, are under- or un-protected, and (like fingerprints) really can’t be changed if they become exposed. This makes them totally unsuited to being used as password equivalents in account recovery / password reset schemes.

If you have to implement an account recovery scheme, find something better to use. In an enterprise, as I’ve said before, your best bet is to use something that the enterprise does well – the management hierarchy. Every time you forget your password, you have to get your manager, or someone at the next level up from them, to reset your password for you, or to vouch for you to tech support. That way, someone who knows you, and can affect your behaviour in a positive way, will know that you keep forgetting your password and could do with some assistance. In a social network, the equivalent is to require a few of the user’s established contacts to vouch for them.

Password hints are bullshit, too

Also, password hints are bullshit. Many of the Adobe breach’s “password hints” were actually just the password in plain text. And, because Adobe stored passwords without salting (encrypted rather than hashed, so identical passwords produced identical ciphertext), you could group the leaked passwords together, and pick whichever of the associated password hints was either the password itself, or an easy clue for it. So, even if you didn’t use the password hint yourself, or chose a really cryptic clue, some other idiot came up with the same password, and gave a “Daily Express Quick Crossword” quality clue.

The Automatic Gainsaying of Anything the Other Person Says

Sometimes I think that title is the job of the Security Engineer – as a Subject Matter Expert, we’re supposed to meet with teams and tell them how their dreams are going to come crashing down around their ears because of something they hadn’t thought of, but which is obvious to us.

This can make us just a little bit unpopular.

But being argumentative and sceptical isn’t entirely a bad trait to have.

Sometimes it comes in handy when other security guys spread their various statements of doom and gloom – or joy and excitement.

Examples in a single line

“Rename your administrator account so it’s more secure” – or lengthen the password and achieve the exact same effect without breaking scripts or requiring extra documentation so people know what the administrator is called this week.

“Encrypt your data at rest by using automatic database encryption” – which means any app that authenticates to the database can read that data back out, voiding the protection that was the point of encrypting at rest. If fields need encrypting, maybe they need field-level access control, too.

“Complex passwords, one lower case, one upper case, one number, one symbol, no repeated letters” – or else, measure strength in more interesting ways, and display to users how strong their password is, so that a longish phrase, used by a competent typist, becomes an acceptable password.
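As a sketch of what “measure strength in more interesting ways” might look like, here’s a naive length-biased estimate. In practice you’d reach for an established strength estimator (something like zxcvbn), but even this crude upper bound rewards a longish phrase over a mangled ten-character password; the thresholds and examples are mine, not a recommendation:

```python
import math

def estimate_bits(password: str) -> float:
    # Naive upper bound: assume each character is drawn independently from
    # the union of the character classes actually present in the password.
    pools = [
        (any(c.islower() for c in password), 26),
        (any(c.isupper() for c in password), 26),
        (any(c.isdigit() for c in password), 10),
        (any(not c.isalnum() for c in password), 33),
    ]
    alphabet = sum(size for present, size in pools if present)
    return len(password) * math.log2(alphabet) if alphabet else 0.0

for pw in ("p@55w0rd73", "a long phrase typed by a competent typist"):
    print(f"{pw!r}: ~{estimate_bits(pw):.0f} bits (naive upper bound)")
```

A real estimator would also penalise dictionary words and keyboard patterns, which is exactly why “p@55w0rd73” scores far worse in practice than this upper bound suggests.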

Today’s big example – password expiration

Now I’m going to commit absolute heresy, as I’m going against the biggest recent shock news in security advice.


I understand the arguments, and I know I’m frequently irritated with the unnecessary requirements to change my password after sixty days, and even more so, I know that the reasons behind password expiration settings are entirely arbitrary.

But


There’s a good side to password expiry.

[Screenshot: the usual list of ways passwords get discovered]

These aren’t the only ways in which passwords are discovered.

The method that frequently gets overlooked is when they are deliberately shared.

Sharing means scaring

“Bob’s out this week, because his mother died, and he has to arrange details in another state. He didn’t have time to set up access control changes before he left, but he gave me a sticky-note with his password on it, so that we don’t need to bother him for anything”

“Everyone on this team has to monitor and interact with so many shared service accounts, we just print off a list of all the service account passwords. You can photocopy my laminated card with the list, if you like.”

Yes, those are real situations I’ve dealt with, and they have some pretty obvious replacement solutions:

Bob

Bob (or Bob’s manager, if Bob is too distraught to talk to anyone, which isn’t at all surprising) should notify a system administrator who can then respond to requests to open up ACLs as needed, rather than someone using Bob’s password. But he didn’t.

When Bob comes back, is he going to change his password?

No, because he trusts his assistant, Dave, with his communications.

But, of course, Dave handed out his password to the sales VP, because it was easier for him than fetching up the document she wanted. And sales VPs just can’t be trusted. Now the entire sales team knows Bob’s password. And then one of them gets fired, or hired on at a new competitor. The temptation to log on to Bob’s account – just once – is immense, because that list of customers is just so enticing. And really, who would ever know? And if they did know, everyone has Bob’s password, so it’s not like they could prosecute you, because they couldn’t prove it was you.

What’s going to save Bob is if he is required to change his password when he returns.

Laminated Password List

Yes, this also happened. Because we found the photocopy of the laminated sheet folded up on the floor of a hallway outside the lavatory door.

There was some disciplining involved. Up to, and possibly including, the termination of employment, as policy allows.

Then the bad stuff happened.

The team who shared all these passwords pointed out that, as well as these being admin-level accounts, they had other special privileges, including the avoidance of any requirement to change passwords.

These passwords hadn’t changed in six years.

And the team had no idea what would break if they changed the passwords.

Maybe one of those passwords is hard-coded into a script somewhere, and vital business processes would grind to a halt if the password was changed.

When I left six months later, they were still (successfully) arguing that it would be too dangerous to try changing the passwords.

To change a behaviour, you must first accept it

I’m not familiar with any company whose policy acknowledges that users share passwords, or sets out the expected behaviour when they do [log that you shared it, and who you shared it with, then change it as soon as possible after it no longer needs to be shared].

Once you accept that passwords are shared for valid reasons, even if you don’t enumerate what those reasons are, you can come up with processes and tools to make that sharing more secure.

If there were a process for Bob to share his password with Dave – maybe outlining the creation of a temporary password, reading Dave in on when he can pass the password along (probably never) and how he’s expected to behave, and making him co-responsible for anything bad done in Bob’s account – suddenly there’s a better chance Dave’s not going to share it. “I can’t give you Bob’s password, but I can get you that document you’re after.”

If there was a tool in which the team managing shared service accounts could find and unlock access to passwords, that tool could also be configured to distribute changed passwords to the affected systems after work had been performed.

Without acceptance, it is expiry which saves you

If you don’t have these processes or tools, the only protection you have against password sharing (apart from the obviously failing advice to “just don’t do it”) is regular password expiry.

Password expiry as a Business Continuity Practice

I’m also fond of talking about password expiration as being a means to train your users.

Do you even know how to change your passwords?

Certificates expire once a year, and as a result, programmers write code as if it’s never going to happen. After all, there’s plenty of time between now and next year to write the “renew certificate” logic, and by the time it’s needed, I’ll be working on another project anyway.

If passwords don’t expire – or don’t expire often enough – users will not have changed their password anything like recently enough to remember how to do so if they have to in an emergency.

So, when a keylogger has been discovered to be caching all the logons in the conference room, or the password hashes have been posted on Pastebin, most of your users – even the ones approving the company-wide email request for action – will fight against a password change request, because they just don’t know how to do it, or what might break when they do.

Unless they’ve been through it before, and it was no great thing. In which case, they’ll briefly sigh, and then change their password.

So, password expiry – universally good?

This is where I equivocate and say, yes, I broadly think the advice to reduce or remove password expiration is appropriate. My arguments above are mainly about things we’ve forgotten to remember that are reasons why we might have included password expiry to begin with.

Here, in closing, are some ways in which password expiry is bad, just to balance things out:

  • Some people guard their passwords closely, and generate secure passwords, or use hardware token devices to store passwords. These people should be rewarded for their security skills, not punished under the same password expiry regimen as the guy who adds 1 to his password, and is now up to “p@55w0rd73”.
  • Expiring passwords too frequently drives inexorably to “p@55w0rd73”, and to password reuse across sites, as it takes effort to think up complex passwords, and you’re not going to waste that effort thinking up unique ones every couple of months.
  • Password expiration often means that users spend significant time interrupted in their normal process, by the requirement to think up a password and commit it to the multiple systems they use.

Where do we stand?

Right now, this is where we stand:

  • Many big security organisations – NIST, CESG, etc – talk about how password expiry is an outdated and unnecessary concept
  • I think they’re missing some good reasons why password expiry is in place (and some ways in which that could be fixed)
  • Existing security standards that businesses are expected to comply with have NOT changed their stance.

As a result, especially of this last item, I don’t think businesses can currently afford to remove password expiry from their accounts.

But any fool can see which way the wind is blowing – at some point, you will be able to excuse your company from password expiry, but just in case your compliance standard requires it, you should have a very clear and strong story about how you have addressed the risks that were previously resolved by expiring passwords as frequently as once a quarter.

Why am I so cross?

There are many reasons why Information Security hasn’t had as big an impact as it deserves. Some are external – lack of funding, lack of concern, poor management, distractions from valuable tasks, etc, etc.

But the ones we inflict on ourselves are probably the most irritating. They make me really cross.

Why cross?

OK, “cross” is an English term for “angry”, or “irate”, but as with many other English words, it’s got a few other meanings as well.

It can mean to wrong someone, or go against them – “I can’t believe you crossed Fingers MacGee”.

It can mean to make the sign of a cross – “Did you just cross your fingers?”

It can mean a pair of items, intersecting one another – “I’m drinking at the sign of the Skull and Cross-bones”.

It can mean to breed two different subspecies into a third – “What do you get if you cross a mountaineer with a mosquito? Nothing, you can’t cross a scaler and a vector.”

Or it can mean to traverse something – “I don’t care what Darth Vader says, I always cross the road here”.


It’s this last sense that InfoSec people seem obsessed about, to the extent that every other attack seems to require it as its first word.

Such a cross-patch

OWASP alone documents a whole list of attacks that begin with the word “Cross”.

Yesterday I had a meeting to discuss how to address three bugs found in a scan, and I swear I spent more than half the meeting trying to ensure that the PM and the Developer in the room were both discussing the same bug. [And here, I paraphrase]

“How long will it take you to fix the Cross-Frame Scripting bug?”

“We just told you, it’s going to take a couple of days.”

“No, that was for the Cross-Site Scripting bug. I’m talking about the Cross-Frame Scripting issue.”

“Oh, that should only take a couple of days, because all we need to do is encode the contents of the field.”

“No, again, that’s the Cross-Site Scripting bug. We already discussed that.”

“I wish you’d make it clear what you’re talking about.”

Yeah, me too.

A modest proposal

The whole point of the word “Cross” as used in the descriptions of these bugs is to indicate that someone is doing something they shouldn’t – and in that respect, it’s pretty much a completely irrelevant word, because we’re already discussing attack types.

In many of these cases, the words “Cross-Site” bring absolutely nothing to the discussion, and just make things confusing. Am I crossing a site from one page to another, or am I saying this attack occurs between sites? What if there’s no other site involved, is that still a cross-site scripting attack? [Yes, but that’s an irrelevant question, and by asking it, or thinking about asking/answering it, you’ve reduced your mental processing abilities to handle the actual issue.]

Check yourself when you utter “cross” as the first word in the description of an attack, and ask if you’re communicating something of use, or just “sounding like a proper InfoSec tool”. Consider whether there’s a better term to use.

I’ve previously argued that “Cross-Site Scripting” is really a poor term for the conflation of HTML Injection and JavaScript Injection.

Cross-Frame Scripting is really Click-Jacking (and yes, that doesn’t exclude clickjacking activities done by a keyboard or other non-mouse source).

Cross-Site Request Forgery is more of a Forced Action – an attacker can guess what URL would cause an action without further user input, and can cause a user to visit that URL in a hidden manner.

Cross-Site History Manipulation is more of a browser failure to protect SOP – I’m not an expert in that field, so I’ll leave it to them to figure out a non-confusing name.

Cross-Site Tracing is just getting silly – it’s Cross-Site Scripting (excuse me, HTML Injection) using the TRACE verb instead of the GET verb. If you allow TRACE, you’ve got bigger problems than XSS.

Cross-User Defacement crosses all the way into crosstalk, requiring as it does that two users be sharing the same TCP connection with no adequate delineation between them. This isn’t really common enough to need a name that gets capitalised. It’s HTTP Response-Splitting over a shared proxy with shitty user segregation.

Even more modestly


I don’t remotely anticipate that I’ll change the names people give to these vulnerabilities in scanning tools or in pen-test reports.

But I do hope you’ll be able to use these to stop confusion in its tracks, as I did:

“Never mind cross-whatever, let’s talk about how long it’s going to take you to address the clickjacking issue.”

In Summary

Here’s the TL;DR version of the web post:

Prevent or interrupt confusion by referring to bugs using the following non-confusing terms:

| Confusing | Not Confusing (Much, Probably) |
| --- | --- |
| Cross-Frame Scripting | Clickjacking |
| Cross-Site History Manipulation | [Not common enough to name] |
| Cross-Site Tracing | TRACE is enabled |
| Cross-Site Request Forgery | Forced User Action |
| Cross-Site Scripting | HTML Injection / JavaScript Injection |
| Cross-User Defacement | Crappy proxy server |

Artisan or Labourer?

Back when I started developing code, and that was a fairly long time ago, the vast majority of developers I interacted with had taken that job because they were excited to be working with technology, and enjoyed instructing and controlling computers to an extent that was perhaps verging on the creepy.

Much of what I read about application security strongly reflects this even today, where developers are exhorted to remember that security is an aspect of the overall quality of your work as a developer.

This is great – for those developers who care about the quality of their work. The artisans, if you like.

But who else is there?

For every artisan I meet when talking to developers, there’s about another two or three who are more like labourers.

They turn up on time, they do their daily grind, and they leave on time. Even if the time expected / demanded of them is longer than the usual eight hours a day.

By itself, this isn’t a bad thing. When you need another pair of “OK” and “Cancel” buttons, you want someone to hammer them out, not hand-craft them in bronze. When you need an API to a back-end database, you want it thin and functional, not baroque and beautiful.

Many – perhaps most – of your developers are there to do a job for pay, not because they love code.

And that’s what you interviewed them for, hired them for, and promoted them for.

It’s important to note that these guys mostly do what they are told. They are clever, and can be told to do complex things, but they are not single-mindedly interested in the software they are building, except in as much as you will reward them for delivering it.

What do you tell these guys?

If these developers will build only the software they’re told to build, what are you telling them to build?

At any stage, are you actively telling your developers that they have to adhere to security policies, that they have to build in some kind of “security best practice”, or even that they should “think like an attacker” (much as I hate that phrase)? I’d rather you told them to “think about all the ways every part of your code can fail, and work to prevent them” – in other words, “think like a defender”.

Some of your developers will interject their own ideas of quality.

– But –

Most of your developers will only do as they have been instructed, and as their examples tell them.

How does this affect AppSec?

The first thing to note is that you won’t reach these developers just with optional training, and you might not even reach them just with mandatory training. They will turn up to mandatory training, because it is required of them, and they may turn up to optional training because they get a day’s pay for it. But all the appeals to them to take on board the information you’re giving them will fall upon deaf ears, if they return to their desks and don’t get follow-up from their managers.

Training requires management support, management enforcement, and management follow-through.

When your AppSec program makes training happen, your developers’ managers must make it clear to their developers that they are expected to take part, they are expected to learn something, and they are expected to come back and start using and demonstrating what they have learned.

Curiously enough, that’s also helpful for the artisans.

Second, don’t despair about these developers. They are useful and necessary, and as with all binary distinctions, the lines are not black and white, they are a spectrum of colours. There are developers at all stages between the “I turn up at 10, I work until 6 (as far as you know), and I do exactly what I’m told” end and the “I love this software as if it were my own child, and I want to mould it into a perfect shining paragon of perfection” end.

Don’t despair, but be realistic about who you have hired, and who you will hire as a result of your interview techniques.

Work with the developers you have, not the ones you wish you had.

Third, if you want more artisans and fewer labourers, the only way to do that is to change your hiring and promotion techniques.

Screen for quality-biased developers during the interview process. Ask them “what’s wrong with the code”, and reward them for saying “it’s not very easy to understand, the comments are awful, it uses too many complex constructs for the job it’s doing, etc”.

Reward quality where you find it. “We had feedback from one of the other developers on the team that you spent longer on this project than expected, but produced code that works flawlessly and is easy to maintain – you exceed expectations.”

Security is a subset of quality – encourage quality, and you encourage security.

Labourers as opposed to artisans have no internal “quality itch” to scratch, which means quality bars must be externally imposed, measured, and enforced.

What are you doing to reward developers for securing their development?

The Manager in the Middle Attack

The first problem any security project has is to get executive support. The second problem is to find a way to make use of and direct that executive support.


So, that was the original tweet that seems to have been a little popular (not fantastically popular, but then I only have a handful of followers).

I’m sure a lot of people thought it was just an amusing pun, but it’s actually a realisation on my part that there’s a real thing that needs naming here.

Executives support security

By and large, the companies I’ve worked for and/or with in the last few years have experienced a glacial but certain shift in perspective.

Where once the security team seemed to be perceived as a necessary nuisance by the executive layers, it seems clear now that there have been sufficient occurrences of bad news (and CEOs being forced to resign) that executives come TO the security team for reassurance that they won’t become the next… well, the next whatever the last big incident was.

TalkTalk had three security incidents in the last year

Obviously, those executives still have purse strings to manage, and most security professionals like to get paid, because that’s largely what distinguishes them from security amateurs. So security can’t get ALL the dollars, but it’s generally easier to get the money and the firepower for security than it ever was in the past.

So executives support security. Some of them even ask what more they can do – and they seem sincere.

Developers support security

Well, some of them do, but that’s a topic for another post.

There are sufficient numbers of developers who care about quality and security these days that there’s less need to push the security message at developers quite the way we used to.

We’ve mostly reached those developers who are already on our side.

How developers communicate

And those developers can mentor other developers who aren’t so keen on security.

The security-motivated developers want to learn more from us, they’re aware that security is an issue, and for the most part, they’re capable of finding and even distinguishing good security solutions to use.

Why is security still so crap, then?

Pentester cat wins.

If the guys at the top, and the guys at the bottom (sorry devs, but the way the org structure goes, you don’t manage anyone, so ipso facto you are at the bottom, along with the cleaners, the lawyers, and the guy who makes sure the building doesn’t get stolen in the middle of the night) care about security, why are we still seeing sites get attacked successfully? Why are apps still being successfully exploited?

Why is it that I can exploit a web site with SQL injection, an attack that has been around for as long as many of the developers at your company have been aware of computers?

Someone is getting in the way.

So, who doesn’t support security?

Ask anyone in your organisation if they think security is important, and you’ll get varying answers, most of which acknowledge that the software being developed needs security – so it’s clear that you can’t actually poll people that way for the right answer.

Ask who’s in the way, instead


Often it’s the security team – because it’s really hard to fill out a security team, and to stretch out around the organisation.

But that’s not the whole answer.

Ask the security-conscious developers what’s preventing them from becoming a security expert to their team, and they’ll make it clear – they’re rewarded and/or punished at their annual review times by the code they produce that delivers features.

There is no reward for security

And because managers are driving behaviour through performance reviews, it actually doesn’t matter what the manager tells their underlings, even if they give their devs a weekly spiel about how important security is. Even if you have an executive show up at their meetings and tell them security is “Job #1”. Even if he means it.

Those developers will return to their desks, and they’ll look at the goals against which they’ll be reviewed come performance review time.

The Manager in the Middle Attack

If managers don’t specifically reward good security behaviour, most developers will not produce secure code.

 

This is the Manager in the Middle Attack. Note that it applies in the event that no manager is present (thanks, Dan Kaminsky!)

Managers have to manage

Because I never like to point out a problem without proposing a solution:

Managers have to actively manage their developers into changing their behaviours. Some performance goals will help, along with the support (financial and moral) to make them achievable.

Here are a few sample goals:

  • Implement a security bug-scanning solution in your build/deploy process
  • Track the creation / destruction of bugs just like you track your feature burn-down rate.
    • It’ll be a burn-up rate to begin with, but you can’t incentivise a goal you can’t track
  • Prevent the insertion of new security bugs.
    • No, don’t just turn the graph from “trending up” to “trending less up” – actually ban checkins that add vulnerabilities detected by your scanning tools.
  • Reduce the number of security bugs in your existing code
    • Prioritise which ones to work on first.
      • Use OWASP, or whatever “top N list” floats your boat – until you’ve exhausted those.
      • Read the news, and see which flaws have been the cause of significant problems.
      • My list: Code injection (because if an attacker can run code on my site, it’s not my site); SQL Injection / data access flaws (because the attacker can steal, delete, or modify my data); other injection (including XSS, because it’s a sign you’re a freaking amateur web developer)
    • Don’t be afraid to game the system – if you can modularise your database access and remove all SQL injection bugs with a small change, do so. And call it as big of a win as it truly is!
  • Refactor to make bugs less likely
    • Find a widespread potentially unsecure behaviour and make it into a class or function, so it’s only unsecure in one place.
    • Then secure that place. And comment the heck out of why it’s done that way.
    • Ban checkins that use the old way of doing things.
    • Delete old, unused code, if you didn’t already do that earlier.
  • Share knowledge of security improvements
    • With your colleagues on your dev team
    • With other developers across the enterprise
    • Outside of the company
    • Become a sought-after expert (inside or outside the team or organisation) on a security practice – from a dev perspective
    • Mentor another more junior developer who wants to become a security hot-shot like yourself.

That’s quite a bunch of security-related goals for developers, which managers can implement. All of them can be measured, and I’m not so crass as to suggest that I know which numbers will be appropriate to your appetite for risk, or the size of hole out of which you have to dig yourself.
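As one concrete illustration of the “ban checkins that add vulnerabilities” goal above, here’s a minimal sketch of a build gate: compare the scanner’s finding count on the branch against the baseline on main, and fail the build if it went up. The report format and file names are hypothetical – adapt it to whatever your scanning tool actually emits:

```python
import json
import sys

def count_findings(report_path: str) -> int:
    # Assumes the scanner writes a JSON report with a top-level "findings" list.
    with open(report_path) as fh:
        return len(json.load(fh)["findings"])

def main() -> int:
    baseline = count_findings("baseline-scan.json")  # scan of the main branch
    current = count_findings("branch-scan.json")     # scan including this checkin
    if current > baseline:
        print(f"FAIL: {current - baseline} new security finding(s) introduced")
        return 1
    print(f"OK: {current} finding(s), no increase over baseline ({baseline})")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The same counts feed the “track creation / destruction of bugs” goal: graph them per build, and the burn-up or burn-down trend falls out for free.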

NCSAM post 1: That time again?

Every year, in October, we celebrate National Cyber Security Awareness Month.

Normally, I’m dismissive of anything with the word “Cyber” in it. This is no exception – the adjective “cyber” is a manufactured word, without root, without meaning, and with only a tenuous association to the world it endeavours to describe.

But that’s not the point.

In October, I teach my blog readers about security

And I do it from a very basic level.

This is not the place for me to assume you’ve all been reading and understanding security for years – this is where I appeal to readers with only a vague understanding that there’s a “security” thing out there that needs addressing.

Information Security as a shared responsibility

This first week is all about Information Security – Cyber Security, as the government and military put it – as our shared responsibility.

I’m a security professional, in a security team, and my first responsibility is to remind the thousands of other employees that I can’t secure the company, our customers, our managers, and our continued joint success, without everyone pitching in just a little bit.

I’m also a customer, with private data of my own, and I have a responsibility to take reasonable measures to protect that data, and by extension, my identity and its association with me. But I also need others to take up their responsibility in protecting me.

When we fail in that responsibility


This year, I’ve had my various identifying factors – name, address, phone number, Social Security Number (if you’re not from the US, that’s a government identity number that’s rather inappropriately used as proof of identity in too many parts of life) – misappropriated by others, and used in an attempt to buy a car, and to file taxes in my name. So, I’ve filed reports of identity theft with a number of agencies and organisations.

I have spent DAYS of time working on preventing further abuse of my identity, and that of my family.

Just today, another breach report arrives, from a company I do business with, letting me know that more data has been lost – this time from one of the organisations charged with actually protecting my identity and protecting my credit.

And it’s not just the companies that are at fault

While companies can – and should – do much more to protect customers (and putative customers), and their data, it’s also incumbent on the customers to protect themselves.

Every day, thousands of new credit and debit cards get issued to eager recipients, many of them teenagers and young adults.

Excited as they are, many of these youths share pictures of their new cards on Twitter or Facebook. Occasionally with both sides. There’s really not much your bank can do if you’re going to react in such a thoughtless way, with a casual disregard for the safety of your data.

Sure, you’re only liable for the first $50 of any use of your credit card, and perhaps of your debit card, but it’s actually much better to not have to trace down unwanted charges and dispute them in the first place.

So, I’m going to buy into the first message of National Cyber Security Awareness Month – and I’m going to suggest you do the same:

Stop. Think. Connect.

This is really the base part of all security – before doing a thing, stop a moment. Think about whether it’s a good thing to do, or has negative consequences you hadn’t considered. Connect with other people to find out what they think.

I’ll finish tonight with some examples where stopping a moment to think, and connecting with others to pool knowledge, will improve your safety and security online. More tomorrow.

Example: passwords

The most common password is “12345678”, or “password”. This means that many people are using that simple a password. Many more people are using more secure passwords, but they still make mistakes that could be prevented with a little thought.

Passwords leak – either from their owners, or from the systems that use those passwords to recognise the owners.

When they do, those passwords – and data associated with them – can then be used to log on to other sites those same owners have visited. Either because their passwords are the same, or because they are easily predicted. If my password at Adobe is “This is my Adobe password”, well, that’s strong(ish), but it also gives a hint as to what my Amazon password is – and when you crack the Adobe password leak (that’s already available), you might be able to log on to my Amazon account.

Creating unique passwords – and yes, writing them down (or better still, storing them in a password manager), and keeping them safe – allows you to ensure that leaks of your passwords don’t spread to your other accounts.
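For the curious, “unique passwords” really does mean random per site, not a guessable pattern. A minimal sketch, with hypothetical site names – the output belongs in a password manager, not a print statement:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length: int = 20) -> str:
    # secrets (not random) gives cryptographically strong choices.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

vault = {site: new_password() for site in ("adobe.example", "amazon.example")}
print(vault)  # store these in a real password manager, not a print statement
```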

Example: Twitter and Facebook

There are exciting events which happen to us every day, and which we want to share with others.

That’s great, and it’s what Twitter and Facebook are there FOR. All kinds of social media available for you to share information with your friends.

Unfortunately, it’s also where a whole lot of bad people hang out – and some of those bad people are, unfortunately, your friends and family.

Be careful what you share, and if you’re sharing about others, get their permission too.

If you’re sharing about children, contemplate that there are predators out there looking for the information you may be giving out. There’s one living just up the road, I can assure you. They’re almost certainly safely withdrawn, and you’re protected from them by natural barriers and instincts. But you have none of those instincts on Facebook unless you stop, think and connect.

So don’t post addresses, locations, your child’s phone number, and really limit things like names of children, friends, pets, teachers, etc – imagine that someone will use that as ‘proof’ to your child of their safety. “It’s OK, I was sent by Aunt Josie, who’s waiting for you to come and see Dobbie the cat”

Example: shared accounts

Bob’s going off on vacation for a month.

Lucky Bob.

Just in case, while he’s gone, he’s left you his password, so that you can log on and access various files.

Two months later, and the office gets raided by the police. They’ve traced a child porn network to your company. To Bob.

Well, actually, to Bob and to you, because the system can’t tell the difference between Bob and you.

Don’t share accounts. Make Bob learn (with the IT department’s help) how to share portions of his networked files appropriately. It’s really not all that hard.

Example: software development

I develop software. The first thing I write is always a basic proof of concept.

The second thing I write – well, who’s got time for a second thing?

Make notes in comments every time you skip a security decision, and make those notes in such a way that you can revisit them and address them – or at least count them – prior to release, so that you know how deep in the mess you are.
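One way to make those notes countable is to pick a single greppable marker. Here’s a minimal sketch – the “TODO-SEC” convention is just an assumption of mine, not a standard:

```python
import pathlib
import re

MARKER = re.compile(r"TODO-SEC\b")  # hypothetical marker for skipped security decisions

def count_security_debts(root: str = ".") -> int:
    total = 0
    for path in pathlib.Path(root).rglob("*.py"):
        total += sum(1 for line in path.read_text(errors="ignore").splitlines()
                     if MARKER.search(line))
    return total

print(f"{count_security_debts()} security decisions still skipped")
```

Run it as part of the release checklist: the number doesn’t have to be zero, but someone should have to look at it and sign off.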

Why didn’t you delete my data?

The recent hack of Ashley Madison, and the subsequent discussion, reminded me of something I’ve been meaning to talk about for some time.

Can a web site ever truly delete your data?

This is usually expressed, as my title suggests, by a user asking the web site who hosted that user’s account (and usually directly as a result of a data breach) why that web site still had the user’s data.

This can be because the user deliberately deleted their account, or simply because they haven’t used the service in a long time, and only remembered that they did by virtue of a breach notification letter (or a web site such as Troy Hunt’s haveibeenpwned.com).

1. Is there a ‘delete’ feature?

Web sites do not see it as a good idea to have a ‘delete’ feature for their user accounts – after all, what you’re asking is for a site to devote developer resources to a feature that specifically curtails the ability of that web site to continue to make money from the user.

To an accountant’s eye (or a shareholder’s), that’s money out the door with the prospect of reducing money coming in.

To a user’s eye, it’s a matter of security and trust. If the developer deliberately misses a known part of the user’s lifecycle (sunset and deprecation are both terms developers should be familiar with), it’s fairly clear that there are other things likely to be missing or skimped on. If a site allows users to disconnect themselves, to close their accounts, there’s a paradox that says more users will choose to continue their service, because they don’t feel trapped.

So, let’s assume there is a “delete” or “close my account” feature – and that it’s easy to use and functional.

2. Is there a ‘whoops’ feature for the delete?

In the aftermath of the Ashley Madison hack, I’m sure there’s going to be a few partners who are going to engage in retributive behaviours. Those behaviours could include connecting to any accounts that the partners have shared, and cause them to be closed, deleted and destroyed as much as possible. It’s the digital equivalent of cutting the sleeves off the cheating partner’s suit jackets. Probably.

Assuming you’ve finally settled down and broken/made up, you’ll want those accounts back under your control.

So there might need to be a feature to allow for ‘remorse’ over the deletion of an account. Maybe not for the jealous partner reason, even, but perhaps just because you forgot about a service you were making use of by that account, and which you want to resurrect.

OK, so many sites have a ‘resurrect’ function, or a ‘cool-down’ period before actually terminating an account.

Facebook, for instance, will not delete your account until you’ve been inactive for 30 days.

3. Warrants to search your history

Let’s say you’re a terrorist. Or a violent criminal, or a drug baron, or simply someone who needs to be sued for slanderous / libelous statements made online.

OK, in this case, you don’t WANT the server to keep your history – but to satisfy warrants of this sort, a lawyer is likely to tell the server’s operators that they have to keep history for a specific period of time before discarding it. This allows for court orders and the like to be executed against the server to enforce the rule of law.

So your server probably has to hold onto that data for more than the 30 day inactive period. Local laws are going to put some kind of statute on how long a service provider has to hold onto your data.

As an example, a retention notice served under the UK’s rather steep RIPA law could say the service provider has to hold on to some types of data for as much as 12 months after the data is created.

4. Financial and Business records

If you’ve paid for the service being provided, those transaction details have to be held for possible accounting audits for the next several years (in the US, between 3 and 7 years, depending on the nature of the business, last time I checked).

Obviously, you’re not going to expect an audit to go into complete investigations of all your individual service requests – unless you’re billed to that level. Still, this record is going to consist of personal details of every user in the system, amounts paid, service levels given, a la carte services charged for, and some kind of demonstration that service was indeed provided.

So, even if Ashley Madison, or whoever, provided a “full delete” service, there’s a record that they have to keep somewhere that says you paid them for a service at some time in the past.

Eternal data retention – is it inevitable?

I don’t think eternal data retention is appropriate or desirable. It’s important for developers to know data retention periods ahead of time, and to build them into the tools and services they provide.

Data retention shouldn’t be online

Hackers fetch data from online services. Offline services – truly offline services – are fundamentally impossible to steal over the network. An attacker would have to find the facility where they’re stored, or the truck the tapes/drives are traveling in, and steal the data physically.

Not that that’s impossible, but it’s a different proposition from guessing someone’s password and logging into their servers to steal data.

Once data is no longer required for online use, move it into a queue for offline archiving. Developers should make sure their archivist has a data destruction policy in place as well, to get rid of data that’s simply too old to be of use. Occasionally (once a year, perhaps), they should practise a data recovery, just to make sure they can do so when the auditors turn up. They should also make sure safeguards are in place to prevent or limit illicit viewing and use of personal data while checking those backups.
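
As a rough illustration of that sweep – the names, the 90-day online window and the dictionary-shaped records are all assumptions for the sketch, not a prescription:

    from datetime import datetime, timedelta, timezone

    ONLINE_WINDOW = timedelta(days=90)   # assumed period of online usefulness
    archive_queue = []                   # stand-in for whatever feeds your offline media

    def sweep_to_archive(online_records, destroy_after: timedelta) -> None:
        # Move stale records out of the online store and tag them with the
        # destruction date the archivist's policy dictates.
        now = datetime.now(timezone.utc)
        for record in list(online_records):
            if now - record["last_needed_online"] > ONLINE_WINDOW:
                record["destroy_after"] = now + destroy_after
                archive_queue.append(record)
                online_records.remove(record)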

Not everything has to be retained

Different classifications of data have different retention periods, something I alluded to above. Financial records are at the top end with seven years or so, and the minutiae of day-to-day conversations can probably be deleted remarkably quickly. Some services actually hype that as a value of the service itself, promising the messages will vanish in a snap, or like a ghost.

When developing a service, you should consider how you’re going to classify data so that you know what to keep and what to delete, and under what circumstances. You may need a lawyer to help with that.
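
A hedged sketch of what that classification might look like in code – the classes and periods below are invented for illustration; the real numbers come from your lawyers and regulators, not from a blog post:

    from datetime import timedelta

    # Illustrative retention classes and periods only.
    RETENTION_POLICY = {
        "financial_transaction": timedelta(days=7 * 365),  # accounting audits, top end
        "account_profile":       timedelta(days=30),       # cool-down after account closure
        "chat_message":          timedelta(days=1),        # ephemeral by design
        "access_log":            timedelta(days=365),      # e.g. a legally mandated retention notice
    }

    def retention_for(data_class: str) -> timedelta:
        # Fail loudly on unclassified data, rather than silently keeping it forever
        # or silently deleting it.
        try:
            return RETENTION_POLICY[data_class]
        except KeyError:
            raise ValueError(f"Unclassified data class {data_class!r} - classify it before storing it")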

Managing your data makes service easier

If you put the framework in place when developing a service, so that data is classified and has a documented lifecycle, your service naturally becomes more loosely coupled. That makes it smoother to implement, easier to change and more compartmentalised, which helps speed future development.

Providing user lifecycle engenders trust and loyalty

Users who know they can quit are more likely to remain loyal (Apple aside). If a user feels hemmed in and locked in place, all it takes is for someone to offer them a reason to change, and they’ll take it. Often your own staff will provide that reason, because working hard to keep customers by locking them in demonstrates that you don’t believe your customers like your service enough to stay of their own accord.

So, be careful who you give data to

Yeah, I know, “to whom you give data”, thanks, grammar pedants.

Remember some basic rules here:

1. Data wants to be free

Yeah, and Richard Stallman’s windows want to be broken.

Data doesn’t want anything, but the appearance is that it does, because when data is disseminated, it essentially cannot be returned. Just like if you go to RMS’s house and break all his windows, you can’t then put the glass fragments back into the frames.

Developers want to possess and collect data – it’s an innate passion, it seems. So if you give data to a developer (or the developer’s proxy, any application they’ve developed), you can’t actually get it back – in the sense that you can’t tell if the developer no longer has it.

2. Sometimes developers are evil – or just naughty

Occasionally developers will collect and keep data that they know they shouldn’t. Sometimes they’ll go and see which famous celebrities used their service recently, or their ex-partners, or their ‘friends’ and acquaintances.

3. Outside of the EU, your data doesn’t belong to you

EU data protection laws start from the basic assumption that factual data describing a person is essentially the non-transferrable property of the person it describes. It can be held for that person by a data custodian, a business with whom the person has a business relationship, or which has a legal right or requirement to that data. But because the data belongs to the person, that person can ask what data is held about them, and can insist on corrections to factual mistakes.

The US, and many other countries, start from the premise that whoever has collected data about a person actually owns that data, or at least that copy of the data. As a result, there’s less emphasis on openness about what data is held about you, and less access to information about yourself.

Ideally, when the revolution comes and we have a socialist government (or something in that direction), the US will pick up this idea and make it clear that service providers are providing a service and acting only as a custodian of data about their customers.

Until then, remember that US citizens have no right to discover who’s holding their data, how wrong it might be, or to ask for it to be corrected.

4. No one can leak data that you don’t give them

Developers should think about this too – you can’t leak data you don’t hold. Similarly, if a user doesn’t give you data, or gives incorrect or value-less data, then even if it leaks, that data is fundamentally worthless.

The fallout from the Ashley Madison leak is probably reduced significantly by the number of pseudonyms and fake names in use. Probably.

Hey, if you used your real name on a cheating web site, that’s hardly smart. But then, as I said earlier today, sometimes security is about protecting bad people from bad things happening to them.
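
One small, hedged illustration of “you can’t leak what you don’t hold”: if the only thing your service ever needs to do with a value is recognise it again, a keyed hash can stand in for the raw data. The key handling and names here are assumptions for the example, not any particular product’s design.

    import hashlib
    import hmac
    import os

    # Assumed secret key; in practice this lives in a secrets store, not an env default.
    PEPPER = os.environ.get("FINGERPRINT_KEY", "change-me").encode()

    def fingerprint(value: str) -> str:
        # Store this keyed hash instead of the raw value when all you need to answer
        # is "have we seen this before?" - the raw value never hits storage.
        return hmac.new(PEPPER, value.strip().lower().encode(), hashlib.sha256).hexdigest()

A leak of fingerprints is far less damaging than a leak of the raw values, and data you never collected at all cannot leak, full stop.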

5. Even pseudonyms have value

You might use the same nickname at several places; you might provide information that’s similar to your real information; you might link multiple pseudonymous accounts together. If your data leaks, can you afford to ‘burn’ the identity attached to the pseudonym?

If you have a long message history, you have almost certainly identified yourself pretty firmly in your pseudonymous posts, by spelling patterns, word usages, etc.

Leaks of pseudonymous data are less problematic than leaks of eponymous data, but they still have their problems. Unless you’re really good at OpSec.

Finally

Finally, I was disappointed earlier tonight to see that Troy had already covered some aspects of this topic in his weekly series at Windows IT Pro, but I think you’ll see that his thoughts are from a different direction than mine.

Lessons to learn already from Premera – 2. Prevention

Not much has been released about exactly how Premera got attacked, and certainly nothing from anyone with recognised insider knowledge.

Disclaimer: I worked at Premera in the Information Security team, but it was so long ago that any internal knowledge I had is no longer accurate – so I’ll only talk about those things that I have seen published.

I am, above all, a customer of Premera’s, from 2004 until just a few weeks ago. But I’m a customer with a strong background in Information Security.

What have we read?

Almost everything boils down rather simply to one article as the source of what we know.

http://www.threatconnect.com/news/the-anthem-hack-all-roads-lead-to-china/

Some pertinent dates

February 4, 2015: News stories break about the breach at Anthem (formerly known as Wellpoint).

January 29, 2015: The date given by Premera as the date when they were first made aware that they’d been attacked.

I don’t think that it’s a coincidence that these dates are so close together. In my opinion, these dates imply that Anthem / Wellpoint found their own issues, notified the network of other health insurance companies, and then published to the news outlets.

As a result of this, Premera recognised the same attack patterns in their own systems.

This suggests that any other health insurance companies attacked by the same group (alleged to be “Deep Panda”) will discover and disclose it shortly.

Why I keep mentioning Wellpoint

I’ve rather driven home the idea that Anthem used to be called Wellpoint, and the reason I keep bringing it up is that part of the attack documented by ThreatConnect was to create a site called “we11point.com” – that’s “wellpoint.com”, but with the two letter “l”s replaced by the digit “1”.

That’s relevant because the ThreatConnect article also called out that there was a web site called “prennera.com” created by the same group.

That’s sufficient for an attack

So, given a domain name similar to that of a site you wish to attack, how would you get full access to the company behind that site?

Here’s just one way you might mount that attack. There are other ways to do this, but this is the most obvious, given the limited information above.

  1. Create a domain, prennera.com
  2. Visit the premera.com sites used by employees from the external Internet (these would be VPN sites, Outlook Web Access and similar, HR sites – which often need to be accessed by ex-employees)
  3. Capture data related to the logon process – focus only on the path that represents a failed logon (because without passwords, you don’t know what a successful logon does)
  4. Replicate that onto your prennera.com network. Test it to make sure it looks good
  5. Now send email to employees, advising them to check on their email, their paystubs, whatever sites you’ve verified, but linking them to the prennera.com version
  6. Sit back and wait for people to send you their usernames and passwords – when they do, tell them they’ve got it wrong, and redirect them to the proper premera.com site
  7. Log on to the target network using the credentials you’ve got

If you’re concerned that I’m telling attackers how to do this, remember that this is obvious stuff. It’s already a well-known attack strategy – the “homograph attack” – and it’s exactly what a penetration tester will do if you hire one to test your susceptibility to social engineering.

There’s no vulnerability involved and no particularly obvious technical failing here; it’s just the age-old tactic of giving someone a screen that looks like their logon page and telling them they’ve failed to log on. I saw this basic form of attack in the eighties – it’s that old.

How to defend against this?

If you’ve been reading my posts to date, you’ll know that I’m aware that security offence is sexy and exciting, but security defence is really where the clever stuff belongs.

I have a few simple recommendations that I think apply in this case:

  1. Separate employee and customer account databases. Maybe your employees are also customers, but their accounts should be separately managed and controlled. Maybe the ID is the same in both cases, but the accounts being referenced must be conceptually and architecturally in different account pools.
  2. Separate employee and customer web domains. This is just a generally good idea, because it means that a session ID or other security context from the customer site cannot be used on the employee site. The browser’s same-origin restrictions then help prevent credentials and access from being shared between the two domains. Separation of environments is one of the tasks a firewall can achieve relatively successfully, and with different domains, that job is assisted by your customers’ and employees’ browsers.
  3. External-to-internal access must be gated by a secondary authentication factor (2FA / MFA – two-factor or multi-factor authentication). That way, an attacker who phishes your employees will not get a credential they can use from the outside. (A minimal sketch of the verification step follows this list.)
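
Here’s a minimal, standard-library-only sketch of the verification step behind “enter the six-digit code from your authenticator app” (RFC 6238 TOTP). It’s an illustration of the mechanism, not a drop-in module – in production you’d use a maintained library and deal with enrolment, rate limiting and secret storage properly.

    import base64
    import hashlib
    import hmac
    import struct
    import time
    from typing import Optional

    def totp(secret_b32: str, interval: int = 30, digits: int = 6,
             at: Optional[float] = None) -> str:
        # RFC 6238 time-based one-time password.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((at if at is not None else time.time()) // interval)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify_second_factor(secret_b32: str, submitted: str) -> bool:
        # Accept the current and the immediately previous 30-second step, to allow
        # for clock drift and slow typing.
        now = time.time()
        return any(hmac.compare_digest(totp(secret_b32, at=now - drift), submitted)
                   for drift in (0, 30))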

Another tack taken by some companies is to engage a reputation-management company to register domain names that are homoglyphs of your own (names that look the same in a browser address bar), or to file lawsuits that take down such domains when they appear – whichever is cheaper. My perspective is that this costs money, and is doomed to fail whenever a new TLD arises, or your company creates a new brand.

[Not that reputation management companies can’t help you with your domain names, mind you – they can prevent you, for instance, from releasing a product with a name that’s already associated with a domain name owned by another company.]
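
To show what “homoglyph” means in practice, here’s a toy sketch that enumerates simple single-character look-alikes of a domain – the sort of list you might hand to a registrar or a monitoring service. The substitution table is deliberately tiny; real confusable sets are much larger and include Unicode look-alikes.

    # Deliberately tiny substitution table, for illustration only.
    LOOKALIKES = {"l": ["1", "i"], "i": ["1", "l"], "o": ["0"], "e": ["3"], "m": ["rn", "nn"]}

    def homoglyph_variants(domain: str) -> set:
        # Generate single-substitution look-alikes of the part before the TLD.
        name, dot, tld = domain.partition(".")
        variants = set()
        for pos, ch in enumerate(name):
            for sub in LOOKALIKES.get(ch, []):
                variants.add(name[:pos] + sub + name[pos + 1:] + dot + tld)
        return variants

    # homoglyph_variants("premera.com") includes "prennera.com" (the "m" -> "nn" swap);
    # "we11point.com" needs two substitutions, so a real tool would apply the table repeatedly.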

These three steps are somewhat interdependent, and they may cause a certain degree of inconvenience, but they will prevent exactly the kind of attacks I’ve described. [Yes, there are other potential attacks, but none introduced by the suggested changes]
