Security Awareness

The Manager in the Middle Attack

The first problem any security project has is to get executive support. The second problem is to find a way to make use of and direct that executive support.

So, that was the original tweet that seems to have been a little popular (not fantastically popular, but then I only have a handful of followers).

I’m sure a lot of people thought it was just an amusing pun, but it’s actually a realisation on my part that there’s a real thing that needs naming here.

Executives support security

By and large, the companies I’ve worked for and/or with in the last few years have experienced a glacial but certain shift in perspective.

Where once the security team seemed to be perceived as a necessary nuisance to the executive layers, it seems clear now that there have been sufficient occurrences of bad news (and CEOs being forced to resign) that executives come TO the security team for reassurance that they won’t become the next … well, the next whatever the last big incident was.

TalkTalk had three security incidents in the last year

Obviously, those executives still have purse strings to manage, and most security professionals like to get paid, because that’s largely what distinguishes them from security amateurs. So security can’t get ALL the dollars, but it’s generally easier to get the money and the firepower for security than it ever was in the past.

So executives support security. Some of them even ask what more they can do – and they seem sincere.

Developers support security

Well, some of them do, but that’s a topic for another post.

There are sufficient numbers of developers who care about quality and security these days that there’s less need to push the security message at developers quite the way we used to.

We’ve mostly reached those developers who are already on our side.

How developers communicate

And those developers can mentor other developers who aren’t so keen on security.

The security-motivated developers want to learn more from us, they’re aware that security is an issue, and for the most part, they’re capable of finding and even distinguishing good security solutions to use.

Why is security still so crap, then?

Pentester cat wins.

If the guys at the top, and the guys at the bottom (sorry devs, but the way the org structure goes, you don’t manage anyone, so ipso facto you are at the bottom, along with the cleaners, the lawyers, and the guy who makes sure the building doesn’t get stolen in the middle of the night) care about security, why are we still seeing sites get attacked successfully? Why are apps still being successfully exploited?

Why is it that I can exploit a web site with SQL injection, an attack that has been around for as long as many of the developers at your company have been aware of computers?

Someone is getting in the way.

So, who doesn’t support security?

Ask anyone in your organisation if they think security is important, and you’ll get varying answers, most of which acknowledge that the software being developed needs security – so it’s clear that you can’t actually poll people that way for the right answer.

Ask who’s in the way, instead…

Often it’s the security team – because it’s really hard to fill out a security team, and to stretch out around the organisation.

But that’s not the whole answer.

Ask the security-conscious developers what’s preventing them from becoming a security expert to their team, and they’ll make it clear – they’re rewarded and/or punished at their annual review times by the code they produce that delivers features.

There is no reward for security

And because managers are driving behaviour through performance reviews, it actually doesn’t matter what the manager tells their underlings, even if they give their devs a weekly spiel about how important security is. Even if you have an executive show up at their meetings and tell them security is “Job #1”. Even if they mean it.

Those developers will return to their desks, and they’ll look at the goals against which they’ll be reviewed come performance review time.

The Manager in the Middle Attack

If managers don’t specifically reward good security behaviour, most developers will not produce secure code.


This is the Manager in the Middle Attack. Note that it applies in the event that no manager is present (thanks, Dan Kaminsky!)

Managers have to manage

Because I never like to point out a problem without proposing a solution:

Managers have to actively manage their developers into changing their behaviours. Some performance goals will help, along with the support (financial and moral) to make them achievable.

Here are a few sample goals:

  • Implement a security bug-scanning solution in your build/deploy process
  • Track the creation / destruction of bugs just like you track your feature burn-down rate.
    • It’ll be a burn-up rate to begin with, but you can’t incentivise a goal you can’t track
  • Prevent the insertion of new security bugs.
    • No, don’t just turn the graph from “trending up” to “trending less up” – actually ban checkins that add vulnerabilities detected by your scanning tools (there’s a sketch of such a gate after this list).
  • Reduce the number of security bugs in your existing code
    • Prioritise which ones to work on first.
      • Use OWASP, or whatever “top N list” floats your boat – until you’ve exhausted those.
      • Read the news, and see which flaws have been the cause of significant problems.
      • My list: Code injection (because if an attacker can run code on my site, it’s not my site); SQL Injection / data access flaws (because the attacker can steal, delete, or modify my data); other injection (including XSS, because it’s a sign you’re a freaking amateur web developer)
    • Don’t be afraid to game the system – if you can modularise your database access and remove all SQL injection bugs with a small change, do so, and call it as big a win as it truly is! (There’s a sketch of this refactor below.)
  • Refactor to make bugs less likely
    • Find a widespread, potentially insecure behaviour and make it into a class or function, so it’s only insecure in one place.
    • Then secure that place. And comment the heck out of why it’s done that way.
    • Ban checkins that use the old way of doing things.
    • Delete old, unused code, if you didn’t already do that earlier.
  • Share knowledge of security improvements
    • With your colleagues on your dev team
    • With other developers across the enterprise
    • Outside of the company
    • Become a sought-after expert (inside or outside the team or organisation) on a security practice – from a dev perspective
    • Mentor another more junior developer who wants to become a security hot-shot like yourself.
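As promised above, here’s what a checkin gate might look like. This is a minimal sketch, assuming your scanner can emit findings as a JSON list – the file names and field names here are hypothetical stand-ins, not any particular tool’s format:

```python
#!/usr/bin/env python3
"""Fail the build when a scan introduces findings absent from the baseline.

Sketch only: assumes the scanner emits a JSON list of finding objects;
"security-baseline.json", "scan-results.json" and the field names are
hypothetical stand-ins for whatever your tooling produces.
"""
import json
import sys


def load_findings(path):
    """Return a set of stable fingerprints for the findings in a report."""
    with open(path) as f:
        report = json.load(f)
    # Fingerprint on rule + file + snippet rather than line number, so
    # unrelated edits that shift lines don't raise false alarms.
    return {(item["rule_id"], item["file"], item["snippet"]) for item in report}


def main():
    baseline = load_findings("security-baseline.json")  # accepted, tracked debt
    current = load_findings("scan-results.json")        # this build's scan

    new_findings = current - baseline
    print(f"{len(baseline - current)} fixed, {len(new_findings)} new finding(s).")

    for rule, path, snippet in sorted(new_findings):
        print(f"NEW: {rule} in {path}: {snippet!r}")
    if new_findings:
        sys.exit(1)  # non-zero exit blocks the checkin/build


if __name__ == "__main__":
    main()
```

The baseline file doubles as your burn-down tracker: its size per sprint is exactly the graph you want trending down.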

That’s quite a bunch of security-related goals for developers, which managers can implement. All of them can be measured, and I’m not so crass as to suggest that I know which numbers will be appropriate to your appetite for risk, or the size of hole out of which you have to dig yourself.
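To make the database-access goal concrete, here’s a minimal sketch of that refactor, assuming a Python codebase and the standard library’s sqlite3 driver – the table and function names are illustrative:

```python
"""db.py – the only module in the project allowed to talk to the database.

A sketch, assuming Python and the standard-library sqlite3 driver; the
table and function names are illustrative.
"""
import sqlite3

_conn = sqlite3.connect("app.db")


def find_users_by_name(name: str) -> list:
    # Parameterized query: the driver treats `name` strictly as data,
    # so "'; DROP TABLE users;--" is just an odd name, not an attack.
    return _conn.execute(
        "SELECT id, name, email FROM users WHERE name = ?", (name,)
    ).fetchall()


# The old, injectable pattern this function replaces:
#   _conn.execute("SELECT ... WHERE name = '" + name + "'")
# Once all access goes through this module, ban checkins elsewhere that
# import the SQL driver or build queries by string concatenation.
```

One small module to review, one place for SQL injection to hide – and a genuinely large win to claim at review time.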

NCSAM post 1: That time again?

Every year, in October, we celebrate National Cyber Security Awareness Month.

Normally, I’m dismissive of anything with the word “Cyber” in it. This is no exception – the adjective “cyber” is a manufactured word, without root, without meaning, and with only a tenuous association to the world it endeavours to describe.

But that’s not the point.

In October, I teach my blog readers about security

And I do it from a very basic level.

This is not the place for me to assume you’ve all been reading and understanding security for years – this is where I appeal to readers with only a vague understanding that there’s a “security” thing out there that needs addressing.

Information Security as a shared responsibility

This first week is all about Information Security – Cyber Security, as the government and military put it – as our shared responsibility.

I’m a security professional, in a security team, and my first responsibility is to remind the thousands of other employees that I can’t secure the company, our customers, our managers, and our continued joint success, without everyone pitching in just a little bit.

I’m also a customer, with private data of my own, and I have a responsibility to take reasonable measures to protect that data, and by extension, my identity and its association with me. But I also need others to take up their responsibility in protecting me.

When we fail in that responsibility…

This year, I’ve had my various identifying factors – name, address, phone number, Social Security Number (if you’re not from the US, that’s a government identity number that’s rather inappropriately used as proof of identity in too many parts of life) – misappropriated by others, and used in an attempt to buy a car, and to file taxes in my name. So, I’ve filed reports of identity theft with a number of agencies and organisations.

I have spent DAYS of time working on preventing further abuse of my identity, and that of my family

Just today, another breach report arrives, from a company I do business with, letting me know that more data has been lost – this time from one of the organisations charged with actually protecting my identity and protecting my credit.

And it’s not just the companies that are at fault

While companies can – and should – do much more to protect customers (and putative customers), and their data, it’s also incumbent on the customers to protect themselves.

Every day, thousands of new credit and debit cards get issued to eager recipients, many of them teenagers and young adults.

Excited as they are, many of these youths share pictures of their new cards on Twitter or Facebook. Occasionally showing both sides of the card. There’s really not much your bank can do if you’re going to react in such a thoughtless way, with a casual disregard for the safety of your data.

Sure, you’re only liable for the first $50 of any use of your credit card, and perhaps of your debit card, but it’s actually much better to not have to trace down unwanted charges and dispute them in the first place.

So, I’m going to buy into the first message of National Cyber Security Awareness Month – and I’m going to suggest you do the same:

Stop. Think. Connect.

This is really the base part of all security – before doing a thing, stop a moment. Think about whether it’s a good thing to do, or has negative consequences you hadn’t considered. Connect with other people to find out what they think.

I’ll finish tonight with some examples where stopping a moment to think, and connecting with others to pool knowledge, will improve your safety and security online. More tomorrow.

Example: passwords

The most common password is “12345678”, or “password”. This means that many people are using that simple a password. Many more people are using more secure passwords, but they still make mistakes that could be prevented with a little thought.

Passwords leak – either from their owners, or from the systems that use those passwords to recognise the owners.

When they do, those passwords – and data associated with them – can then be used to log on to other sites those same owners have visited. Either because their passwords are the same, or because they are easily predicted. If my password at Adobe is “This is my Adobe password”, well, that’s strong(ish), but it also gives a hint as to what my Amazon password is – and when you crack the Adobe password leak (that’s already available), you might be able to log on to my Amazon account.

Creating unique passwords – and yes, writing them down (or better still, storing them in a password manager), and keeping them safe – allows you to ensure that leaks of your passwords don’t spread to your other accounts.
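To put a picture on “unique”, here’s a minimal sketch of the generation half of the job – the safe storage half is exactly what a password manager provides:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation


def new_password(length: int = 20) -> str:
    """Build a password from a cryptographically strong random source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


# One call per site: no reuse, and no pattern linking your accounts.
adobe_password = new_password()
amazon_password = new_password()
```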

Example: Twitter and Facebook

There are exciting events which happen to us every day, and which we want to share with others.

That’s great, and it’s what Twitter and Facebook are there FOR. All kinds of social media are available for you to share information with your friends.

Unfortunately, it’s also where a whole lot of bad people hang out – and some of those bad people are, unfortunately, your friends and family.

Be careful what you share, and if you’re sharing about others, get their permission too.

If you’re sharing about children, contemplate that there are predators out there looking for the information you may be giving out. There’s one living just up the road, I can assure you. They’re almost certainly safely withdrawn, and you’re protected from them by natural barriers and instincts. But you have none of those instincts on Facebook unless you stop, think and connect.

So don’t post addresses, locations, your child’s phone number, and really limit things like names of children, friends, pets, teachers, etc – imagine that someone will use that as ‘proof’ to your child of their safety. “It’s OK, I was sent by Aunt Josie, who’s waiting for you to come and see Dobbie the cat”

Example: shared accounts

Bob’s going off on vacation for a month.

Lucky Bob.

Just in case, while he’s gone, he’s left you his password, so that you can log on and access various files.

Two months later, and the office gets raided by the police. They’ve traced a child porn network to your company. To Bob.

Well, actually, to Bob and to you, because the system can’t tell the difference between Bob and you.

Don’t share accounts. Make Bob learn (with the IT department’s help) how to share portions of his networked files appropriately. It’s really not all that hard.

Example: software development

I develop software. The first thing I write is always a basic proof of concept.

The second thing I write – well, who’s got time for a second thing?

Make notes in comments every time you skip a security decision, and make those notes in such a way that you can revisit them and address them – or at least, count them – prior to release, so that you know how deep in the mess you are.
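One way to make those notes countable is to settle on a fixed comment tag and tally it before release. A sketch, assuming a Python codebase and a made-up “SECDEBT:” convention:

```python
"""Count deferred security decisions before release.

Sketch only: assumes the team tags each skipped decision with a
"SECDEBT:" comment – the tag is a made-up convention.
"""
import pathlib

TAG = "SECDEBT:"


def count_debt(root: str = ".") -> int:
    total = 0
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if TAG in line:
                total += 1
                print(f"{path}:{lineno}: {line.strip()}")
    return total


if __name__ == "__main__":
    print(f"{count_debt()} security decision(s) still deferred.")
```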

Why didn’t you delete my data?

The recent hack of Ashley Madison, and the subsequent discussion, reminded me of something I’ve been meaning to talk about for some time.

Can a web site ever truly delete your data?

This is usually expressed, as my title suggests, by a user asking the web site that hosted that user’s account (and usually directly as a result of a data breach) why that web site still had the user’s data.

This can be because the user deliberately deleted their account, or simply because they haven’t used the service in a long time, and only remembered that they did by virtue of a breach notification letter (or a web site such as Troy Hunt’s Have I Been Pwned).

1. Is there a ‘delete’ feature?

Web sites do not see it as a good idea to have a ‘delete’ feature for their user accounts – after all, what you’re asking is for a site to devote developer resources to a feature that specifically curtails the ability of that web site to continue to make money from the user.

To an accountant’s eye (or a shareholder’s), that’s money out the door with the prospect of reducing money coming in.

To a user’s eye, it’s a matter of security and trust. If the developer deliberately misses a known part of the user’s lifecycle (sunset and deprecation are both terms developers should be familiar with), it’s fairly clear that there are other things likely to be missing or skimped on. If a site allows users to disconnect themselves, to close their accounts, there’s a paradox that says more users will choose to continue their service, because they don’t feel trapped.

So, let’s assume there is a “delete” or “close my account” feature – and that it’s easy to use and functional.

2. Is there a ‘whoops’ feature for the delete?

In the aftermath of the Ashley Madison hack, I’m sure there’s going to be a few partners who are going to engage in retributive behaviours. Those behaviours could include connecting to any accounts that the partners have shared, and cause them to be closed, deleted and destroyed as much as possible. It’s the digital equivalent of cutting the sleeves off the cheating partner’s suit jackets. Probably.

Assuming you’ve finally settled down and broken/made up, you’ll want those accounts back under your control.

So there might need to be a feature to allow for ‘remorse’ over the deletion of an account. Maybe not for the jealous partner reason, even, but perhaps just because you forgot about a service you were making use of by that account, and which you want to resurrect.

OK, so many sites have a ‘resurrect’ function, or a ‘cool-down’ period before actually terminating an account.

Facebook, for instance, will not delete your account until you’ve been inactive for 30 days.

3. Warrants to search your history

Let’s say you’re a terrorist. Or a violent criminal, or a drug baron, or simply someone who needs to be sued for slanderous / libelous statements made online.

OK, in this case, you don’t WANT the server to keep your history – but to satisfy warrants of this sort, a lawyer is likely to tell the server’s operators that they have to keep history for a specific period of time before discarding it. This allows for court orders and the like to be executed against the server to enforce the rule of law.

So your server probably has to hold onto that data for more than the 30 day inactive period. Local laws are going to put some kind of statutory requirement on how long a service provider has to hold onto your data.

As an example, a retention notice served under the UK’s rather steep RIPA law could say the service provider has to hold on to some types of data for as much as 12 months after the data is created.

4. Financial and Business records

If you’ve paid for the service being provided, those transaction details have to be held for possible accounting audits for the next several years (in the US, between 3 and 7 years, depending on the nature of the business, last time I checked).

Obviously, you’re not going to expect an audit to go into complete investigations of all your individual service requests – unless you’re billed to that level. Still, this record is going to consist of personal details of every user in the system, amounts paid, service levels given, a la carte services charged for, and some kind of demonstration that service was indeed provided.

So, even if Ashley Madison, or whoever, provided a “full delete” service, there’s a record that they have to keep somewhere that says you paid them for a service at some time in the past.

Eternal data retention – is it inevitable?

I don’t think eternal data retention is appropriate or desirable. It’s important for developers to know data retention periods ahead of time, and to build them into the tools and services they provide.

Data retention shouldn’t be online

Hackers fetch data from online services. Data held by truly offline services is fundamentally impossible to steal over the network. An attacker would have to find the facility where it’s stored, or the truck the tapes/drives are travelling in, and steal the data physically.

Not that that’s impossible, but it’s a different proposition from guessing someone’s password and logging into their servers to steal data.

Once data is no longer required for online use, and can be stored, move it into a queue for offline archiving. Developers should make sure their archivist has a data destruction policy in place as well, to get rid of data that’s just too old to be of use. Occasionally (once a year, perhaps), they should practice a data recovery, just to make sure that they can do so when the auditors turn up. But they should also make sure that they have safeguards in place to prevent/limit illicit viewing / use of personal data while checking these backups.

Not everything has to be retained

Different classifications of data have different retention periods, something I alluded to above. Financial records are at the top end with seven years or so, and the minutiae of day-to-day conversations can probably be deleted remarkably quickly. Some services actually hype that as a value of the service itself, promising the messages will vanish in a snap, or like a ghost.

When developing a service, you should consider how you’re going to classify data so that you know what to keep and what to delete, and under what circumstances. You may need a lawyer to help with that.
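What that classification might look like in code – a sketch where the classes and retention periods are placeholders for whatever your lawyer signs off on:

```python
"""Decide what to keep and what to delete, by data class.

The classes and retention periods below are illustrative placeholders –
the real numbers depend on jurisdiction and need legal sign-off.
"""
from datetime import datetime, timedelta, timezone

RETENTION = {
    "financial_record": timedelta(days=7 * 365),  # audit trail: the top end
    "auth_log": timedelta(days=365),              # e.g. a RIPA-style notice
    "chat_message": timedelta(days=30),           # day-to-day minutiae
}


def is_expired(record_class: str, created_at: datetime) -> bool:
    """True once a record outlives the retention period for its class."""
    return datetime.now(timezone.utc) - created_at > RETENTION[record_class]


# Tag every record with a class at creation time; the archiving job then
# knows what to move offline and what to destroy, with no ad-hoc decisions.
```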

Managing your data makes service easier

If you lay the frameworks in place when developing a service, so that data is classified and has a documented lifecycle, your service naturally becomes more loosely coupled. This makes it smoother to implement, easier to change, and more compartmentalised. This helps speed future development.

Providing user lifecycle engenders trust and loyalty

Users who know they can quit are more likely to remain loyal (Apple aside). If a user feels hemmed in and locked in place, all that’s required is for someone to offer them a reason to change, and they’ll do so. Often your own staff will provide the reason to change, because if you’re working hard to keep customers by locking them in, it demonstrates that you don’t feel like your customers like your service enough to stay on their own.

So, be careful who you give data to

Yeah, I know, “to whom you give data”, thanks, grammar pedants.

Remember some basic rules here:

1. Data wants to be free

Yeah, and Richard Stallman’s windows want to be broken.

Data doesn’t want anything, but the appearance is that it does, because when data is disseminated, it essentially cannot be returned. Just like if you go to RMS’s house and break all his windows, you can’t then put the glass fragments back into the frames.

Developers want to possess and collect data – it’s an innate passion, it seems. So if you give data to a developer (or the developer’s proxy, any application they’ve developed), you can’t actually get it back – in the sense that you can’t tell if the developer no longer has it.

2. Sometimes developers are evil – or just naughty

Occasionally developers will collect and keep data that they know they shouldn’t. Sometimes they’ll go and see which famous celebrities used their service recently, or their ex-partners, or their ‘friends’ and acquaintances.

3. Outside of the EU, your data doesn’t belong to you

EU data protection laws start from the basic assumption that factual data describing a person is essentially the non-transferrable property of the person it describes. It can be held for that person by a data custodian, a business with whom the person has a business relationship, or which has a legal right or requirement to that data. But because the data belongs to the person, that person can ask what data is held about them, and can insist on corrections to factual mistakes.

The US, and many other countries, start from the premise that whoever has collected data about a person actually owns that data, or at least that copy of the data. As a result, there’s less emphasis on openness about what data is held about you, and less access to information about yourself.

Ideally, when the revolution comes and we have a socialist government (or something in that direction), the US will pick up this idea and make it clear that service providers are providing a service and acting only as a custodian of data about their customers.

Until then, remember that US citizens have no right to discover who’s holding their data, how wrong it might be, or to ask for it to be corrected.

4. No one can leak data that you don’t give them

Developers should also think about this – you can’t leak data you don’t hold. Similarly, if a user doesn’t give you data, or gives you incorrect or value-less data, then when it leaks, that data is fundamentally worthless.

The fallout from the Ashley Madison leak is probably reduced significantly by the number of pseudonyms and fake names in use. Probably.

Hey, if you used your real name on a cheating web site, that’s hardly smart. But then, as I said earlier today, sometimes security is about protecting bad people from bad things happening to them.

5. Even pseudonyms have value

You might use the same nickname at several places; you might provide information that’s similar to your real information; you might link multiple pseudonymous accounts together. If your data leaks, can you afford to ‘burn’ the identity attached to the pseudonym?

If you have a long message history, you have almost certainly identified yourself pretty firmly in your pseudonymous posts, by spelling patterns, word usages, etc.

Leaks of pseudonymous data are less problematic than leaks of eponymous data, but they still have their problems. Unless you’re really good at OpSec.


Finally, I was disappointed earlier tonight to see that Troy had already covered some aspects of this topic in his weekly series at Windows IT Pro, but I think you’ll see that his thoughts are from a different direction than mine.

Lessons to learn already from Premera – 2. Prevention

Not much has been released about exactly how Premera got attacked, and certainly nothing from anyone with recognised insider knowledge.

Disclaimer: I worked at Premera in the Information Security team, but it’s so long ago that any internal knowledge I had is no longer current – so I’ll only talk about those things that I have seen published.

I am, above all, a customer of Premera’s, from 2004 until just a few weeks ago. But I’m a customer with a strong background in Information Security.

What have we read?

Almost everything boils down rather simply to one article as the source of what we know.

Some pertinent dates

February 4, 2015: News stories break about the breach at Anthem (formerly known as Wellpoint).

January 29, 2015: The date given by Premera as the date when they were first made aware that they’d been attacked.

I don’t think that it’s a coincidence that these dates are so close together. In my opinion, these dates imply that Anthem / Wellpoint found their own issues, notified the network of other health insurance companies, and then published to the news outlets.

As a result of this, Premera recognised the same attack patterns in their own systems.

This suggests that any other health insurance companies attacked by the same group (alleged to be “Deep Panda”) will discover and disclose it shortly.

Why I keep mentioning Wellpoint

I’ve kind of driven in the idea that Anthem used to be called Wellpoint, and the reason I’m bringing this out is that a part of the attack documented by ThreatConnect was to create a site called “we11point.com” – that’s “wellpoint.com”, but with the two letter “els” replaced with two “one” digits.

That’s relevant because the ThreatConnect article also called out that there was a web site called “prennera.com” created by the same group – that’s “premera.com”, with the letter “m” replaced by two “ens”.

That’s sufficient for an attack

So, given a domain name similar to that of a site you wish to attack, how would you get full access to the company behind that site?

Here’s just one way you might mount that attack. There are other ways to do this, but this is the most obvious, given the limited information above.

  1. Create a domain, prennera.com
  2. Visit the sites used by employees from the external Internet (these would be VPN sites, Outlook Web Access and similar, HR sites – which often need to be accessed by ex-employees)
  3. Capture data related to the logon process – focus only on the path that represents a failed logon (because without passwords, you don’t know what a successful logon does)
  4. Replicate that onto your network. Test it to make sure it looks good
  5. Now send email to employees, advising them to check on their email, their paystubs, whatever sites you’ve verified, but linking them to the prennera.com version
  6. Sit back and wait for people to send you their usernames and passwords – when they do, tell them they’ve got it wrong, and redirect them to the proper site
  7. Log on to the target network using the credentials you’ve got

If you’re concerned that I’m telling attackers how to do this, remember that this is obvious stuff. This is already a well-known attack strategy – the “homograph attack”. This is what a penetration tester will do if you hire one to test your susceptibility to social engineering.

There’s no vulnerability involved, there’s no particularly obvious technical failing here, it’s just the age-old tactic of giving someone a screen that looks like their logon page, and telling them they’ve failed to logon. I saw this basic form of attack in the eighties, it’s that old.

How to defend against this?

If you’ve been reading my posts to date, you’ll know that I’m aware that security offence is sexy and exciting, but security defence is really where the clever stuff belongs.

I have a few simple recommendations that I think apply in this case:

  1. Separate employee and customer account databases. Maybe your employees are also customers, but their accounts should be separately managed and controlled. Maybe the ID is the same in both cases, but the accounts being referenced must be conceptually and architecturally in different account pools.
  2. Separate employee and customer web domains. This is just a generally good idea, because it means that a session ID or other security context from the customer site cannot be used on the employee site. Cross-Origin Security Policies apply to allow the browser to prevent sharing of credentials and access between the two domains. Separation of environments is one of the tasks a firewall can achieve relatively successfully, and with different domains, that job is assisted by your customers’ and employees’ browsers.
  3. External-to-internal access must be gated by a secondary authentication factor (2FA, MFA – standing for two/multi factor authentication). That way, an attacker who phishes your employees will not get a credential they can use from the outside.
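To show why item 3 blunts the attack described above, here’s a minimal sketch of the standard TOTP algorithm (RFC 6238, HMAC-SHA1 variant) that most authenticator apps implement. A phished password alone no longer logs anyone in, and any captured code expires with its 30-second window:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Time-based one-time password, per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period             # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# The password harvested in step 6 of the attack is useless by itself:
# the attacker would also need this constantly-changing code.
print(totp("JBSWY3DPEHPK3PXP"))  # example base32 secret
```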

Another tack that’s taken by companies is to engage a reputation management company, to register domain names that are homoglyphs to your own (those that look the same in a browser address bar).  Or, to file lawsuits that take down such domains when they appear. Whichever is cheaper. My perspective on this is that it costs money, and is doomed to fail whenever a new TLD arises, or your company creates a new brand.

[Not that reputation management companies can’t help you with your domain names, mind you – they can prevent you, for instance, from releasing a product with a name that’s already associated with a domain name owned by another company.]

These three steps are somewhat interdependent, and they may cause a certain degree of inconvenience, but they will prevent exactly the kind of attacks I’ve described. [Yes, there are other potential attacks, but none introduced by the suggested changes]
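For a sense of what those domain-monitoring registrations try to cover, here’s a small sketch that generates common look-alike variants of a domain label. The substitution table is a tiny illustrative subset of the confusable tables real services maintain:

```python
"""Generate common look-alike (homoglyph) variants of a domain label.

Sketch only: the substitution table is a small illustrative subset of
the confusable-character tables real monitoring services maintain.
"""
SUBSTITUTIONS = {
    "l": ["1"],         # the trick behind "we11point"
    "i": ["1", "l"],
    "o": ["0"],
    "m": ["rn", "nn"],  # the trick behind "prennera"
    "w": ["vv"],
}


def lookalikes(label: str) -> set:
    variants = set()
    for ch, subs in SUBSTITUTIONS.items():
        if ch in label:
            for sub in subs:
                variants.add(label.replace(ch, sub))     # every occurrence
                variants.add(label.replace(ch, sub, 1))  # first occurrence only
    return variants


print(sorted(lookalikes("wellpoint")))  # includes 'we11point'
print(sorted(lookalikes("premera")))    # includes 'prennera'
```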

Lessons to learn already from Premera – 1. Notification

Last weekend, along with countless employees and ex-employees of Microsoft, Amazon, Expedia, and Premera itself, I received a breach notification signed by Premera’s President & CEO, Jeffrey Roe.

Here are a few things I think can already be learned from this letter and the available public information:

Don’t claim “sophisticated”

Whenever I see the phrase “sophisticated cyberattack”, not only am I turned off by the meaningless prefix “cyber”, which seems to serve only to “baffle them with bullshit”, but I’m also convinced that the author is trying to convince me that, hey, this attack was just too amazing and science-fictiony to do anything to stop.

All that does is push me in the other direction – to assume that the attack was relatively simple, and should have been prevented and/or noticed.

Granted, my experience is in Information Security, and so I’m always fairly convinced that it’ll be the simple attacks, not the complex and difficult ones, that will be the most successful against any site I’m trying to protect. It’s a little pessimistic, but it’s been proven right time and again.

So, never say that an attack is “sophisticated” unless you really mean that the attack was way beyond what could have been reasonably imagined. You don’t have to say the attackers used simple methods to get in because your team are idiots, because that’s unlikely to be entirely true, either. Just don’t make it sound like it’s not your fault. And don’t make that your opening gambit, either – this was the very first sentence in Premera’s notification.

Don’t say “may” / “might”, say “was”

“some of your personal information may have been accessed”

Again, this phrasing simply makes me think “these guys have no idea what was accessed”, which really doesn’t inspire confidence.

Instead, you should say “the attackers had access to all our information, including your personal and medical data”. Then acknowledge that you don’t have tracking on what information was exported, so you have to act as if it all was.

Say “sorry we allowed your data to be lost”

The worst apologies on record all contain some variation of “I’m sorry you’re upset”, or “I’m sorry you took offence”.

Premera’s version of this is “We … regret the concern it may cause”. So, not even “sorry”. And to the extent that it’s an apology at all, it is that we, current and past customers, were “concerned”.

Parenthetical abbreviations mean this was written by a lawyer

Premera Blue Cross (“Premera”) …

… Information Technology (IT) systems

As if the lack of apology didn’t already tip us off that this document was prepared by a lawyer, the parenthetical creation of abbreviations to be used later on makes it completely clear.

If the letter had sounded more human, it would have been easier to receive as something other than a legal arse-covering exercise.

Speed is important

The letter acknowledges that the issue was discovered on January 29, 2015, and the letter is dated March 17, 2015. That’s nearly two months. And nearly a year since the attackers got in. That’s assuming that you’ve truly figured out the extent of the “sophisticated cyberattack”.

Actually, that’s pretty fast for security breach disclosure, but it still gives the impression to customers that you aren’t letting them know in enough time to protect themselves.

The reason given for this delay is that Premera wanted to ensure that their systems were safe before letting other attackers know about the issue – but it’s generally a fallacy to assume that attackers don’t know about your vulnerabilities. Premera, and the health insurance industry, do a great job of sharing security information with other health insurance providers – but the attackers do an even better job of sharing information about vulnerable systems and tools.

Which leads us to…

Preparation is key

If your company doesn’t have a prepared breach disclosure letter, approved by public relations, the security team and your lawyers, it’s going to take you at least a week, probably two, to put one together. And you’ll have missed something, because you’re preparing it in a rush, in a panic, and in a haze while you’re angry and scared about having been attacked.

Your prepared letter won’t be complete, and won’t be entirely applicable to whatever breach finally comes along and bites you, but it’ll put you that much closer to being ready to handle and communicate that breach. You’ll still need to review it and argue between Security, Legal and PR teams.

Have a plan for this review process, and know the triggers that will start it. Maybe even test the process once in a while.

If you believe that breaches could require a credit monitoring or identity theft protection offering, negotiate this ahead of time, so that it will not slow down your announcement. Or write your notification letter with the intent of providing this information at a later time.

Finally, because your notification letter will miss something, make sure it includes the ability to update your customers – link to an FAQ online that can be updated, and provide a call-in number for people to ask questions of an informed team of responders.

More to come

There’s always more information coming out about this breach, and I plan to blog a little more about it later.

Let me know in particular if there’s something you’d like me to cover on this topic.

Thoughts on a New Year

It’s about this time of year that I think…

  • Why do reporters talk so much about NSA spying and Advanced Persistent Threats, when half the websites in existence will cough up cookies if you search for "-alert(document.cookie)-" ?
  • How can we expect people to write secure code when:
    • they don’t know what it is?
    • they can’t recognise insecure code?
    • it’s easier (more clicks, more thinks, etc) to write insecure code?
  • What does it take for a developer to get:
    • fired?
    • a bad performance review?
    • just mildly discomforted?
  • What is it about developers that makes us all believe that nobody else has written this piece of code before? (or that we can write it better)
  • Every time a new fad comes along, whether it’s XML, PHP, Ruby, etc, why do we spend so much time recognising that it has the same issues as the old ones – but without the fixes?
  • Can we have an article on “the death of passwords” which will explain what the replacement is – and without that replacement turning out to be “a layer in front of a big password”?
  • Should you let your application out (publish it, make it available on the Internet, etc) if it is so fragile that:
    • you can’t patch it?
    • you can’t update the framework or libraries on which it depends (aka patch them)?
    • you don’t want a security penetration test to be performed on it?
  • Is it right to hire developers on the basis that they can:
    • steer a whiteboard to a small function which looks like it might work?
    • understand an obfuscated sample that demonstrates an obscure feature of your favourite framework?
    • tell you how to weigh twelve coins, one of which might be a fake?
    • bamboozle the interviewer with tales of technological wonders the likes of which he/she cannot fathom?
    • sing the old school song?

Ah, who am I kidding, I think those kinds of things all the time.

Why don’t we do that?

Reading a story on the consequences of the theft of Adobe’s source code by hackers, I come across this startling phrase:

The hackers seem to be targeting vulnerabilities they find within the stolen code. The prediction is that they’re sifting through the code, attempting to find widespread weaknesses, intending to exploit them with maximum effect by using zero-day attacks.

What I’d love to know is why we aren’t seeing a flood of developers crying out to be educated in how they, too, can learn to sift through their own code, attempt to find widespread weaknesses, so they can shore them up and prevent their code from being exploited.

An example of the sort of comments we are seeing can be found here, and they are fairly predictable – “does this mean Open Source is flawed, if having access to the source code is a security risk”, schadenfreude at Adobe’s misfortune, all manner of assertions that Adobe weren’t a very secure company anyway, etc.

Something that’s missing is an acknowledgement that we are all subject to the same pool of developers.

And attackers.

So, if you’re in the business of developing software – whether to sell, license, give away, or simply to use in your own endeavours – you’re essentially in the same boat as Adobe prior to the hackers breaching their defences. Possibly the same boat as Adobe after the breach, but prior to the discovery.

Unless you are doing something different to what Adobe did, you are setting yourself up to be the next Adobe.

Obviously, Adobe isn’t giving us entire details of their own security program, and what’s gone right or wrong with it, but previous stories (as early as mid-2009) indicated that they were working closely with Microsoft to create an SDL (Security Development Lifecycle) for Adobe’s development.

So, instead of being all kinds of smug that Adobe got hacked, and you didn’t, maybe you should spend your time wondering if you can improve your processes to even reach the level Adobe was at when they got hacked.

And, to bring the topic back to what started the discussion – are you even doing to your software what these unidentified attackers are doing to Adobe’s code?

Are you poring over your own source code to find flaws?

How long are you spending to do that, and what tools are you using to do so?
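If you want somewhere to start, here’s a toy sketch of sifting through code mechanically, using Python’s own ast module to flag a few classically risky calls – real static analysis tools do this far more thoroughly:

```python
"""Flag a few classically risky calls in Python source files.

A toy sketch of what real static analysis tools do far more thoroughly.
Usage: python riskscan.py file1.py file2.py ...
"""
import ast
import sys

RISKY = {"eval", "exec", "pickle.loads", "os.system"}


def call_name(node: ast.Call) -> str:
    """Render the called name as 'func' or 'module.func' where possible."""
    f = node.func
    if isinstance(f, ast.Name):
        return f.id
    if isinstance(f, ast.Attribute) and isinstance(f.value, ast.Name):
        return f"{f.value.id}.{f.attr}"
    return ""


for path in sys.argv[1:]:
    with open(path) as src:
        tree = ast.parse(src.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and call_name(node) in RISKY:
            print(f"{path}:{node.lineno}: risky call to {call_name(node)}")
```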

Government Shuts Down for Cyber Security

In a classic move, clearly designed to introduce National Cyber Security Awareness Month with quite a bang, the US Government has shut down, making it questionable as to whether National Cyber Security Awareness Month will actually happen.

In case the DHS isn’t able to make things happen without funding, here’s what they originally had planned:


I’m sure you’ll find me and a few others keen to engage you on Information Security this month in the absence of any functioning legislators.

Maybe without the government in charge, we can stop using the “C” word to describe it.


The “C” word I’m referring to is, of course, “Cyber”. Bad word. Doesn’t mean anything remotely like what people using it think it means.


The main page of the DHS.GOV web site actually does carry a small banner indicating that there’s no activity happening at the web site today.


So, there may be many NCSAM events, but DHS will not be a part of them.

Training developers to write secure code

I’ve done a fair amount of developer training recently, and it seems like there are a number of different kinds of responses to my security message.

[You can safely assume that there’s also something that’s wrong with the message and the messenger, but I want to learn about the thing I likely can’t control or change – the supply of developers]

Here are some unfairly broad descriptions of stereotypes I’ve encountered along the way. The truth, as ever, is more nuanced, but I think if I can reach each of these target personas, I should have just about everyone covered.

Is there anyone I’ve missed?

The previous victim

I’m always happy to have one or more of these people in the room – the sort of developer who has some experience, and has been on a project that was attacked successfully at some point or another.

This kind of developer has likely quickly learned the lesson that even his own code is subject to attack, vulnerable and weak to the persistent probes of attackers. Perhaps his experience has also included examples of his own failures in more ordinary ways – mere bugs, with no particular security implications.

Usually, this will be an older developer, because experience is required – and his tales of terror, unrehearsed and true, can sometimes provide the “scared straight” lesson I try to deliver to my students.

The previous attacker

This guy is usually a smart, younger individual. He may have had some previous nefarious activity, or simply researched security issues by attacking systems he owns.

But for my purposes, this guy can be too clever, because he distracts from my talk of ‘least privilege’ and ‘defence in depth’ with questions about race conditions, side-channel attacks, sub-millisecond time deltas across multi-second latency routes, and the like. IF those were the worst problems we see in this industry, I’d focus on them – but sadly, sites are still vulnerable to simple attacks, like my favourite – Reflected XSS in the Search field. [Simple exercise – watch a commercial break, and see how many of the sites advertised there have this vulnerability in them.]

But I like this guy for other reasons – he’s a possible future hire for my team, and a probable future assistant in finding, reporting and addressing vulnerabilities. Keeping this guy interested and engaged is key to making sure that he tells me about his findings, rather than sharing them with friends on the outside, or exploiting them himself.

“I did a security class at college”

Unbelievably to me, there are people who have “done a project on it”, and therefore know all they want to about security. If what I was about to tell them was important, they’d have been told it by their professor at college, because their professor knew everything of any importance.

I personally wonder if this is going to be the kind of SDE who will join us for a short while, and not progress – because the impression they give to me is that they’ve finished learning right before their last final exam.


The Salaryman

Related to the previous category is the developer who only does what it takes to get paid and to receive a good performance review.

I think this is the developer I should work the hardest to try and reach, because this attitude lies at the heart of every developer on their worst days at their desk. When the passion wanes, or the task is uninteresting, the desire to keep your job, continue to get paid, and progress through your career while satisfying your boss is the grinding cog that keeps you moving forward like a wind-up toy.

This is why it is important to keep searching to find ways of measuring code quality, and rewarding people who exhibit it – larger rewards for consistent prolonged improvement, smaller but more frequent rewards to keep the attention of the developer who makes a quick improvement to even a small piece of code.

Sadly, this guy is in my class because his boss told him he ought to attend. So I tell him at the end of my class that he needs to report back to his boss the security lesson that he learned – that all of his development-related goals should have the adverb “securely” appended to them. So “develop feature X” becomes “develop feature X securely”. If that is the one change I can make to this developer’s goals, I believe it will make a difference.


The fanboy

I’ve been doing this for long enough that I see the same faces in the crowd over and over again. I know I used to be a fanboy myself, and so I’m aware that sometimes this is because these folks learn something new each time. That’s why I like to deliver a different talk each time, even if it’s on the same subject as a previous lesson.

Or maybe they just didn’t get it all last time, and need to hear it again to get a deeper understanding. Either way, repeat visitors are definitely welcome – but I won’t get anywhere if that’s all I get in my audience.


The calling

Some developers do the development thing because they can’t NOT write code. If they were independently wealthy and could do whatever they want, they’d be behind a screen coding up some fun little app.

I like the ones with a calling to this job, because I believe I can give them enough passion in security to make it a part of their calling as well. [Yes, I feel I have a calling to do security – I want to save the world from bad code, and would do it if I was independently wealthy.]

Stereotypical / The Surgeon

Sadly, the hardest person to reach – harder even than the Salaryman – is the developer who matches the stereotypical perception of the developer mindset.

Convinced of his own superiority and cleverness, even if he doesn’t express it directly in such conceited terms, this person will see every suggested approach as beneath him, and every example of poor code as yet more proof of his own superiority.

“Sure, you’ve had problems with other developers making stupid security mistakes,” he’ll think to himself, “But I’m not that dumb. I’ve never written code that bad.”

I certainly hope you won’t ever write code as bad as the examples I give in my classes – those are errant samples of code written in haste, and which I wouldn’t include in my class if they didn’t clearly illustrate my point. But my point is that your colleagues – everyone around you – are going to write this bad a piece of code one day, and it is your job to find it. It is also their job to find it in the code you write, so either you had better be truly as good as you think you are, or you had better apply good security practices so they don’t find you at your worst coding moment.

Playing with security blogs

I’ve found a new weekend hobby – it takes only a few minutes, is easily interruptible, and reminds me that the state of web security is such that I will never be out of a job.

I open my favourite search engine (I’m partial to Bing, partly because I get points, but mostly because I’ve met the guys who built it), search for “security blog”, and then pick one at random.

Once I’m at the security blog site – often one I’ve never heard of, despite it being high up in the search results – I find the search box and throw a simple reflected XSS attack at it.

If that doesn’t work, I view the source code for the results page I got back, and use the information I see there to figure out what reflected XSS attack will work. Then I try that.

[Note: I use reflected XSS, because I know I can only hurt myself. I don’t play stored XSS or SQL injection games, which can easily cause actual damage at the server end, unless I have permission and I’m being paid.]

Finally, I try to find who I should contact about the exploitability of the site.

It’s interesting just how many of these sites are exploitable – some of them falling to the simplest of XSS attacks – and even more interesting to see how many sites don’t have a good, responsive contact address (or prefer simply not to engage with vuln discoverers).

So, what do you find?

I clearly wouldn’t dream of disclosing any of the vulnerabilities I’ve found until well after they’re fixed. Of course, after they’re fixed, I’m happy to see a mention that I’ve helped move the world forward a notch on some security scale. [Not sure why I’m not called out on the other version of that changelog.] I might allude to them on my twitter account, but not in any great detail.

From clicking the link to having a working exploit takes either under ten minutes or it doesn’t happen at all – and reporting generally takes another ten minutes or so, most of which is hunting for the right address. The longer portion of the game is helping some of these guys figure out what action needs to be taken to fix things.

Try using a WAF – NOT!

You can try using a WAF to solve your XSS problem, but then you’ve got two problems – a vulnerable web site, and that you have to manage your WAF settings. If you have a lot of spare time, you can use a WAF to shore up known-vulnerable fields and trap known attack strings. But it really doesn’t ever fix the problem.

Don’t echo my search query

If you can, don’t echo back to me what I sent you, because that’s how these attacks usually start. Don’t even include it in comments, because a good attack will just terminate the comment and start injecting HTML or script.

Remove my strange characters

Unless you’re running a source code site, you probably don’t need me to search for angle brackets, or a number of other characters. So take them out of my search – or simply reject the search if I include them.

Encode everything

OK, so you don’t have to encode the basics – what are the basics? I tend to start with alphabetic and numeric characters, maybe also a space. Encode everything else.

Which encoding?

Yeah, that’s always the hard part. Encode it using the right encoding. That’s the short version. The long version is that you figure out what’s going to decode it, and make sure you encode for every layer that will decode. If you’re putting my text into a web page as a part of the page’s content, HTML encode it. If it’s in an attribute string, quote the characters using HTML attribute encoding – and make sure you quote the entire attribute value! If it’s an attribute string that will be used as a URL, you should URL encode it. Then you can HTML encode it, just to be sure.

[Then, of course, check that your encoding hasn’t killed the basic function of the search box!]
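Putting those last three sections together in one minimal sketch – allow-list the input, then encode for each layer on the way out (Python’s standard library has both encoders; the allow-list here is deliberately conservative):

```python
import html
import re
import urllib.parse


def clean_query(q: str) -> str:
    """Strip everything outside a conservative allow-list (tune per site)."""
    return re.sub(r"[^A-Za-z0-9 ]", "", q)


def render_results(query: str) -> str:
    q = clean_query(query)
    # HTML-encode for page content; URL-encode for the href value – and
    # keep the attribute fully quoted.
    return (
        f"<p>Results for {html.escape(q)}</p>"
        f'<a href="/search?q={urllib.parse.quote(q)}">permalink</a>'
    )


print(render_results("<script>alert(document.cookie)</script>"))
# The angle brackets never reach the page: stripped on the way in, and
# anything that survived would be entity-encoded on the way out.
```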

Respond to security reports

You should definitely respond to security reports. I understand that not everyone can have a 24/7 response team watching their blog (I certainly don’t), but you should try to respond within a couple of days, and anything under a week is probably going to be alright. Some vuln discoverers are upset if they don’t get a response much sooner, and see that as cause to publish their findings.

Me, I send a message first to ask if I’ve found the right place to send a security vulnerability report to, and only when I receive a positive acknowledgement do I send on the actual details of the exploit.

Be like Billy – Mind your XSS Manners!

I’ve said before that I wish programmers would respond to reports of XSS as if I’d told them I caught them writing a bubble sort implementation in Cobol. Full of embarrassment at being such a beginner.