Last weekend, along with countless employees and ex-employees of Microsoft, Amazon, Expedia, and Premera itself, I received a breach notification signed by Premera's President & CEO, Jeffrey Roe.
Here are a few things I think can already be learned from this letter and the available public information:
Whenever I see the phrase "sophisticated cyberattack", not only am I turned off by the meaningless prefix "cyber", which seems to serve only to "baffle them with bullshit", but I'm also convinced that the author is trying to persuade me that, hey, this attack was just too amazing and science-fictiony for anyone to have stopped it.
All that does is push me in the other direction – to assume that the attack was relatively simple, and should have been prevented and/or noticed.
Granted, my experience is in Information Security, and so I'm always fairly convinced that it'll be the simple attacks, not the complex and difficult ones, that will be the most successful against any site I'm trying to protect. It's a little pessimistic, but it's been proven right time and again.
So, never say that an attack is "sophisticated" unless you really mean that the attack was way beyond what could have been reasonably imagined. You don't have to say the attackers used simple methods to get in because your team are idiots, because that's unlikely to be entirely true, either. Just don't make it sound like it's not your fault. And don't make that your opening gambit, either – this was the very first sentence in Premera's notification.
"some of your personal information may have been accessed"
Again, this phrasing simply makes me think "these guys have no idea what was accessed", which really doesn't inspire confidence.
Instead, you should say "the attackers had access to all our information, including your personal and medical data". Then acknowledge that you don't have tracking on what information was exported, so you have to act as if it all was.
The worst apologies on record all contain some variation of "I'm sorry you're upset", or "I'm sorry you took offence".
Premera's version of this is "We … regret the concern it may cause". So, not even "sorry". And to the extent that it's an apology at all, it is only that we, current and past customers, were "concerned".
Premera Blue Cross ("Premera") …
… Information Technology (IT) systems
As if the lack of apology didn't already tip us off that this document was prepared by a lawyer, the parenthetical creation of abbreviations to be used later on makes it completely clear.
If the letter had sounded more human, it would have been easier to receive as something other than a legal arse-covering exercise.
The letter acknowledges that the issue was discovered on January 29, 2015, and the letter is dated March 17, 2015. That's nearly two months. And nearly a year since the attackers got in. That's assuming that you've truly figured out the extent of the "sophisticated cyberattack".
Actually, that's pretty fast for security breach disclosure, but it still gives customers the impression that you aren't letting them know in enough time to protect themselves.
The reason given for this delay is that Premera wanted to ensure that their systems were safe before letting other attackers know about the issue – but it's generally a fallacy to assume that attackers don't already know about your vulnerabilities. Premera, and the health insurance industry, do a great job of sharing security information with other health insurance providers – but the attackers do an even better job of sharing information about vulnerable systems and tools.
Which leads us to…
If your company doesn't have a prepared breach disclosure letter, approved by public relations, the security team and your lawyers, it's going to take you at least a week, probably two, to put one together. And you'll have missed something, because you're preparing it in a rush, in a panic, and in a haze while you're angry and scared about having been attacked.
Your prepared letter won't be complete, and won't be entirely applicable to whatever breach finally comes along and bites you, but it'll put you that much closer to being ready to handle and communicate that breach. You'll still need to review it, and argue it out between the Security, Legal and PR teams.
Have a plan for this review process, and know the triggers that will start it. Maybe even test the process once in a while.
If you believe that a breach could require you to offer credit monitoring or ID theft protection, negotiate this ahead of time, so that it will not slow down your announcement. Or write your notification letter with the intent of providing this information at a later time.
Finally, because your notification letter will miss something, make sure it includes the ability to update your customers – link to an FAQ online that can be updated, and provide a call-in number for people to ask questions of an informed team of responders.
There's always more information coming out about this vulnerability, and I plan to blog a little more about it later.
Let me know in particular if there's something you'd like me to cover on this topic.
It's about this time of year that I think…
"-alert(document.cookie)-"
? Ah, who am I kidding, I think those kinds of things all the time.
Reading a story on the consequences of the theft of Adobe's source code by hackers, I come across this startling phrase:
The hackers seem to be targeting vulnerabilities they find within the stolen code. The prediction is that they're sifting through the code, attempting to find widespread weaknesses, intending to exploit them with maximum effect by using zero-day attacks.
What I'd love to know is why we aren't seeing a flood of developers crying out to be educated in how they, too, can learn to sift through their own code, attempt to find widespread weaknesses, so they can shore them up and prevent their code from being exploited.
An example of the sort of comments we are seeing can be found here, and they are fairly predictable – "does this mean Open Source is flawed, if having access to the source code is a security risk", schadenfreude at Adobe's misfortune, all manner of assertions that Adobe weren't a very secure company anyway, etc.
And attackers.
So, if you're in the business of developing software – whether to sell, licence, give away, or simply to use in your own endeavours – you're essentially in the same boat as Adobe prior to the hackers breaching their defences. Possibly the same boat as Adobe after the breach, but prior to the discovery.
Unless you are doing something different to what Adobe did, you are setting yourself up to be the next Adobe.
Obviously, Adobe isn't giving us entire details of their own security program, and what's gone right or wrong with it, but previous stories (as early as mid-2009) indicated that they were working closely with Microsoft to create an SDL (Security Development Lifecycle) for Adobe's development.
So, instead of being all kinds of smug that Adobe got hacked, and you didn't, maybe you should spend your time wondering if you can improve your processes to even reach the level Adobe was at when they got hacked.
And, to bring the topic back to what started the discussion – are you even doing to your software what these unidentified attackers are doing to Adobe's code?
How long are you spending to do that, and what tools are you using to do so?
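As one illustration of what that "sifting" could look like, here's a minimal sketch in Python – the directory and the list of suspect calls are made up for the example, and a real review would lean on proper static analysis tooling rather than twenty lines of script:

    # A crude first pass at "sifting through your own code": walk a source tree
    # and flag calls that deserve a closer security review. Illustrative only.
    import ast
    import pathlib

    SUSPECT_CALLS = {"eval", "exec", "system", "popen", "loads"}  # e.g. os.system, pickle.loads

    def flag_suspect_calls(root: str) -> None:
        for path in pathlib.Path(root).rglob("*.py"):
            try:
                tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
            except (SyntaxError, UnicodeDecodeError):
                continue  # skip files that don't parse cleanly
            for node in ast.walk(tree):
                if isinstance(node, ast.Call):
                    func = node.func
                    name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
                    if name in SUSPECT_CALLS:
                        print(f"{path}:{node.lineno}: call to {name}() – worth a second look")

    if __name__ == "__main__":
        flag_suspect_calls(".")  # point this at your own source tree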
In a classic move, clearly designed to introduce National Cyber Security Awareness Month with quite a bang, the US Government has shut down, making it questionable as to whether National Cyber Security Awareness Month will actually happen.
In case the DHS isn't able to make things happen without funding, here's what they originally had planned:
I'm sure you'll find me, and a few others, keen to engage you on Information Security this month in the absence of any functioning legislators.
Maybe without the government in charge, we can stop using the "C" word to describe it.
The "C" word I'm referring to is, of course, "Cyber". Bad word. It doesn't mean anything remotely like what the people using it think it means.
The main page of the DHS.GOV web site actually does carry a small banner indicating that there's no activity happening at the web site today.
So, there may be many NCSAM events, but DHS will not be a part of them.
I've done a fair amount of developer training recently, and it seems like there are a number of different kinds of responses to my security message.
[You can safely assume that there's also something that's wrong with the message and the messenger, but I want to learn about the thing I likely can't control or change – the supply of developers.]
Here are some unfairly broad descriptions of stereotypes I've encountered along the way. The truth, as ever, is more nuanced, but I think if I can reach each of these target personas, I should have just about everyone covered.
Is there anyone I've missed?
I'm always happy to have one or more of these people in the room – the sort of developer who has some experience, and has been on a project that was attacked successfully at some point or another.
This kind of developer has likely quickly learned the lesson that even his own code is subject to attack, vulnerable and weak to the persistent probes of attackers. Perhaps his experience has also included examples of his own failures in more ordinary ways – mere bugs, with no particular security implications.
Usually, this will be an older developer, because experience is required – and his tales of terror, unrehearsed and true, can sometimes provide the "scared straight" lesson I try to deliver to my students.
This guy is usually a smart, younger individual. He may have had some previous nefarious activity, or simply researched security issues by attacking systems he owns.
But for my purposes, this guy can be too clever, because he distracts from my talk of "least privilege" and "defence in depth" with questions about race conditions, side-channel attacks, sub-millisecond time deltas across multi-second latency routes, and the like. If those were the worst problems we see in this industry, I'd focus on them – but sadly, sites are still vulnerable to simple attacks, like my favourite – reflected XSS in the search field. [Simple exercise – watch a commercial break, and see how many of the sites advertised there have this vulnerability in them.]
But I like this guy for other reasons – he's a possible future hire for my team, and a probable future assistant in finding, reporting and addressing vulnerabilities. Keeping this guy interested and engaged is key to making sure that he tells me about his findings, rather than sharing them with friends on the outside, or exploiting them himself.
Unbelievably to me, there are people who "done a project on it", and therefore know all they want to about security. If what I was about to tell them was important, they'd have been told it by their professor at college, because their professor knew everything of any importance.
I personally wonder if this is going to be the kind of SDE who will join us for a short while, and not progress – because the impression they give me is that they finished learning right before their last final exam.
Related to the previous category is the developer who only does what it takes to get paid and to receive a good performance review.
I think this is the developer I should work the hardest to try and reach, because this attitude lies at the heart of every developer on their worst days at their desk. When the passion wanes, or the task is uninteresting, the desire to keep your job, continue to get paid, and progress through your career while satisfying your boss is the grinding cog that keeps you moving forward like a wind-up toy.
This is why it is important to keep searching to find ways of measuring code quality, and rewarding people who exhibit it – larger rewards for consistent prolonged improvement, smaller but more frequent rewards to keep the attention of the developer who makes a quick improvement to even a small piece of code.
Sadly, this guy is in my class because his boss told him he ought to attend. So I tell him at the end of my class that he needs to report back to his boss the security lesson that he learned – that all of his development-related goals should have the adverb "securely" appended to them. So "develop feature X" becomes "develop feature X securely". If that is the one change I can make to this developer's goals, I believe it will make a difference.
I've been doing this for long enough that I see the same faces in the crowd over and over again. I know I used to be a fanboy myself, and so I'm aware that sometimes this is because these folks learn something new each time. That's why I like to deliver a different talk each time, even if it's on the same subject as a previous lesson.
Or maybe they just didn't get it all last time, and need to hear it again to get a deeper understanding. Either way, repeat visitors are definitely welcome – but I won't get anywhere if that's all I get in my audience.
Some developers do the development thing because they can't NOT write code. If they were independently wealthy and could do whatever they wanted, they'd be behind a screen coding up some fun little app.
I like the ones with a calling to this job, because I believe I can give them enough passion in security to make it a part of their calling as well. [Yes, I feel I have a calling to do security – I want to save the world from bad code, and would do it even if I were independently wealthy.]
Sadly, the hardest person to reach – harder even than the Salaryman – is the developer who matches the stereotypical perception of the developer mindset.
Convinced of his own superiority and cleverness, even if he doesn't express it directly in such conceited terms, this person will see every suggested approach as beneath him, and every example of poor code as yet more proof of his own superiority.
"Sure, you've had problems with other developers making stupid security mistakes," he'll think to himself, "but I'm not that dumb. I've never written code that bad."
I certainly hope you won't ever write code as bad as the examples I give in my classes – those are errant samples of code written in haste, and I wouldn't include them in my class if they didn't clearly illustrate my point. But my point is that your colleagues – everyone around you – are going to write a piece of code that bad one day, and it is your job to find it. It is also their job to find it in the code you write, so either you had better be truly as good as you think you are, or you had better apply good security practices so they don't find you at your worst coding moment.
I've found a new weekend hobby – it takes only a few minutes, is easily interruptible, and reminds me that the state of web security is such that I will never be out of a job.
I open my favourite search engine (I'm partial to Bing, partly because I get points, but mostly because I've met the guys who built it), search for "security blog", and then pick one at random.
Once I'm at the security blog site – often one I've never heard of, despite it being high up in the search results – I find the search box and throw a simple reflected XSS attack at it.
If that doesn't work, I view the source code for the results page I got back, and use the information I see there to figure out what reflected XSS attack will work. Then I try that.
[Note: I use reflected XSS, because I know I can only hurt myself. I don't play stored XSS or SQL injection games, which can easily cause actual damage at the server end, unless I have permission and I'm being paid.]
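For illustration, here's a minimal sketch of that first probe in Python – the URL and the "q" parameter name are placeholders (example.test isn't a real site), the marker string is deliberately harmless, and you should only point it at a site you're entitled to poke at:

    # Send a harmless marker to a search page and see whether it comes back unencoded.
    import requests  # assumes the 'requests' package is installed

    MARKER = '"<xss-probe>'

    def probe_search(url: str) -> None:
        resp = requests.get(url, params={"q": MARKER}, timeout=10)
        if MARKER in resp.text:
            print("Input is reflected without encoding – reflected XSS is likely.")
        elif "&lt;xss-probe&gt;" in resp.text:
            print("Input is reflected, but HTML-encoded – a good sign.")
        else:
            print("Input doesn't appear in the response at all.")

    probe_search("https://example.test/search")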
Finally, I try to find who I should contact about the exploitability of the site.
It's interesting just how many of these sites are exploitable – some of them falling to the simplest of XSS attacks – and even more interesting to see how many sites don't have a good, responsive contact address (or prefer simply not to engage with vuln discoverers).
I clearly wouldn't dream of disclosing any of the vulnerabilities I've found until well after they're fixed. Of course, after they're fixed, I'm happy to see a mention that I've helped move the world forward a notch on some security scale. [Not sure why I'm not called out on the other version of that changelog.] I might allude to them on my twitter account, but not in any great detail.
From clicking the link to having an exploit takes either under ten minutes or it doesn't happen at all – and reporting generally takes another ten minutes or so, most of which is spent hunting for the right address. The longer portion of the game is helping some of these guys figure out what action needs to be taken to fix things.
You can try using a WAF to solve your XSS problem, but then you've got two problems – a vulnerable web site, and a WAF whose settings you now have to manage. If you have a lot of spare time, you can use a WAF to shore up known-vulnerable fields and trap known attack strings. But it doesn't ever really fix the problem.
If you can, don't echo back to me what I sent you, because that's how these attacks usually start. Don't even include it in comments, because a good attack will just terminate the comment and start injecting HTML or script.
Unless you're running a source code site, you probably don't need me to search for angle brackets, or a number of other characters. So strip them out of my search – or plainly reject the search if I include them.
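A minimal sketch of that idea, assuming an allow-list of letters, digits and spaces (pick whatever set your users genuinely need to search for):

    # Strip everything outside the allow-list from a search term, or reject it outright.
    import re

    NOT_ALLOWED = re.compile(r"[^A-Za-z0-9 ]")  # anything that isn't a letter, digit or space

    def clean_search_term(term: str, reject: bool = False) -> str:
        if reject and NOT_ALLOWED.search(term):
            raise ValueError("search term contains characters we don't accept")
        return NOT_ALLOWED.sub("", term)

    print(clean_search_term('"-alert(document.cookie)-"'))  # -> alertdocumentcookie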
OK, so you don't have to encode the basics – but what are the basics? I tend to start with alphabetic and numeric characters, maybe also a space. Encode everything else.
Yeah, that's always the hard part. Encode it using the right encoding – that's the short version. The long version is that you figure out what's going to decode it, and make sure you encode for every layer that will decode it. If you're putting my text into a web page as part of the page's content, HTML encode it. If it's in an attribute string, quote the characters using HTML attribute encoding – and make sure you quote the entire attribute value! If it's an attribute string that will be used as a URL, you should URL encode it first. Then you can HTML encode it, just to be sure.
[Then, of course, check that your encoding hasn't killed the basic function of the search box!]
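For illustration, here's a minimal sketch of those layers using Python's standard library – the page fragments and the example.test URL are made up, and the point is simply to encode once for every layer that will decode the text:

    # Encode a search term for each context it will land in: page content,
    # an HTML attribute, and a URL that itself sits inside an attribute.
    import html
    from urllib.parse import quote

    q = '"-alert(document.cookie)-"'  # attacker-supplied search term

    # 1. Page content: HTML encode it.
    as_text = html.escape(q)

    # 2. An HTML attribute: encode quotes too, and always quote the whole attribute value.
    as_attr = '<input type="text" name="q" value="{}">'.format(html.escape(q, quote=True))

    # 3. An attribute that is a URL: URL encode the value first, then HTML encode
    #    the result before it goes into the attribute.
    url = "https://example.test/search?q=" + quote(q, safe="")
    as_link = '<a href="{}">repeat this search</a>'.format(html.escape(url, quote=True))

    print(as_text)
    print(as_attr)
    print(as_link)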
You should definitely respond to security reports. I understand that not everyone can have a 24/7 response team watching their blog (I certainly don't), but you should try to respond within a couple of days, and anything under a week is probably going to be alright. Some vuln discoverers are upset if they don't get a response much sooner, and see that as cause to publish their findings.
Me, I send a message first to ask if I've found the right place to send a security vulnerability report to, and only when I receive a positive acknowledgement do I send on the actual details of the exploit.
I've said before that I wish programmers would respond to reports of XSS as if I'd told them I caught them writing a bubble sort implementation in COBOL. Full of embarrassment at being such a beginner.
I'm putting this post in the "Programmer Hubris" section, but it's really not the programmers this time, it's the managers. And the lawyers, apparently.
Well, yeah, it always does, and this time what set me off is an NPR article by Tom Gjelten in a series they're currently doing on "cybersecurity".
This article probably had a bunch of men talking back to NPR with expressions such as "hell, yeah!" and "it's about time!", or even the more balanced "well, the best defence is a good offence".
Absolute rubbish. Pure codswallop.
Kind of, and no.
We're certainly not being "attacked" in the manner described by analogy in the article.
"If you're just standing up taking blows, the adversary will ultimately hit you hard enough that you fall to the ground and lose the match. You need to hit back." [says Dmitri Alperovitch, CrowdStrike's co-founder.]
Yeah, except we're not taking blows, and this isn't boxing, and they're not hitting us hard.
"What we need to do is get rid of the attackers and take away their tools and learn where their hideouts are and flush them out," [says Greg Hoglund, co-founder of HBGary, another firm known for being hacked by a bunch of anonymous nerds that he bragged about being all over]
That's far closer to reality, but the people whose job it is to do that are the duly appointed law enforcement operatives who are able to enforce the law.
"It's [like] the government sees a missile heading for your company's headquarters, and the government just yells, 'Incoming!'" Alperovitch says. "It's doing nothing to prevent it, nothing to stop it [and] nothing to retaliate against the adversary." [says Alperovitch again]
No, it's not really like that at all.
There is no missile. There is no boxer. There's a guy sending you postcards.
Yep, pretty much exactly that.
Every packet that comes at you from the Internet is much like a postcard. It's got a from address (of sorts) and a to address, and all the information inside the packet is readable. [That's why encryption is applied to all your important transactions.]
There are a number of ways. You might be receiving far more postcards than you can legitimately handle, making it really difficult to assess which are the good postcards and which are the bad ones. So, you contact the postman and let him know this, and he tracks down (with the aid of the postal inspectors) who's sending them, and stops carrying those postcards to you. In the meantime, you learn how to spot the obviously crappy postcards and throw them away – and when you use a machine to do this, it's a lot less of a problem. That's a denial of service attack.
Then there's an attack against your web site. Pretty much, that equates to the postcard sender learning that there's someone reading the postcards whose job it is to do pretty much what the postcards tell them to do. So he sends postcards that say "punch the nearest person to you really hard in the face". Obviously, a few successes of this sort lead you to fire the idiot who's punching his co-workers, and instead to train the next guy in what jobs he's supposed to do on behalf of the postcard senders.
I'm sure that my smart readers can think up their own postcard-based analogies for other attacks that go on, now that you've seen these two examples.
Sure, send postcards, but unless you want the postman to start discarding all your outgoing mail, or the law enforcement types to turn up at your doorstep, those postcards had better not be harassing or inappropriate.
Even if you think you're limiting your behaviour to what the postman won't notice as abusive, there's the other issue with postcards. There's no guarantee that they were sent from the address stated, and even if they were sent from there, there is no reason to believe that they were official communications.
All it takes is for some hacker to launch an attack from a hospital's network space, and you're now responsible for attacking an innocent target where lives could actually be at risk. [Sure, if that were the case, the hospital has shocking security issues of its own, but can you live with that rationalisation if your response to someone attacking your site winds up killing someone?]
I don't think that counterattack on the Internet is ethical or appropriate.
Saw this update in my Windows Update list recently:
http://support.microsoft.com/kb/2574819
As it stands right now, this is what it says (in part):
OK, so I started off feeling good about this – what's not to like about the idea that DTLS, a security layer for UDP that works roughly akin to TLS / SSL for TCP, can now be made a part of Windows?
Sure, you could say "what about downstream versions", but then again, there's a point where a developer should say "upgrading has its privileges". I don't support Windows 3.1 any more, and I don't feel bad about that.
No, the part I dislike is this one:
Note DTLS provides TLS functionalities that are based on the User Datagram Protocol (UDP) protocol. Because TLS is based on the Transmission Control Protocol (TCP) protocol, DTLS performs better than TLS.
Wow.
That's just plain wrong. Actually, I'm not even sure it qualifies as wrong, and it's quite frankly the sort of mis-statement and outright guff that made me start responding to networking posts in the first place, and which propelled me in the direction of eventually becoming an MVP.
Yes, I was the nerdy guy complaining that there were already too many awful networking applications, and that promulgating stupid myths like "UDP performs better than TCP" or "the Nagle algorithm is slowing your app down, just disable it" causes there to be more of the same.
But I think that's really the point – you actually do want nerds of that calibre writing your network applications, because network programming is not easy – it's actually hard. As I have put it on a number of occasions, when you're writing a program that works over a network, you're only writing one half of the application (if that). The other half is written by someone else – and that person may have read a different RFC (or a different version of the protocol design), may have had a different interpretation of ambiguous (or even completely clear) sections, or could even be out to destroy your program, your data, your company, and anyone who ever trusted your application.
Surviving in those circumstances requires an understanding of the purity of good network code.
Bicycle messengers are faster than the postal service, too. Fast isn't always what you're looking for. In the case of comparing UDP and TCP, if it were just a matter of "UDP is faster than TCP", all the world's web sites would be running on some protocol other than HTTP, because HTTP is rooted in TCP. Why don't they?
Because UDP repeats packets, loses packets, repeats packets, and first of all, re-orders packets. And when your web-delivery-over-UDP protocol retransmits those lost packets, correctly orders packets, drops repeated packets, and thereby gives you the full web experience without glitches, it has re-written large chunks of the TCP stack over UDP – and done so with worse performance.
Don't get me wrong – UDP is useful in and of itself, just not for the same tasks TCP is useful for. UDP is great for streaming audio and video, because you'd rather drop frames or snippets of sound than wait for them to arrive later (as they would do with TCP requesting retransmission, say). If you can afford to lose a few packets here and there in the interest of timely delivery of those packets that do get through, your application protocol is ideally suited to UDP. If it's more important to occasionally wait a little in order to get the whole stream, TCP will outperform UDP every time.
Never choose UDP over TCP because you heard it goes faster.
Choose UDP over TCP because you'd rather have packets dropped at random by the network layer than have them arrive any later than the absolute fastest they can get there.
Choose TCP over UDP because you'd rather have all the packets that were sent, in the order that they were sent, than get most / many / some of them earlier.
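For what it's worth, here's a minimal loopback sketch in Python of what that choice looks like at the socket level – on loopback everything tends to arrive, but only the TCP side actually promises it on a real network:

    import socket

    # TCP: one connected, ordered, reliable byte stream – the stack retransmits
    # and re-orders for you.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()
    cli.sendall(b"one two three")
    print("TCP:", conn.recv(1024))  # the stream arrives intact and in order
    conn.close(); cli.close(); srv.close()

    # UDP: independent datagrams – on a real network any of these may be dropped,
    # duplicated or delivered out of order, and nothing underneath will fix that.
    usrv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    usrv.bind(("127.0.0.1", 0))
    ucli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for word in (b"one", b"two", b"three"):
        ucli.sendto(word, usrv.getsockname())
    for _ in range(3):
        print("UDP:", usrv.recvfrom(64)[0])  # delivery and ordering are best-effort
    ucli.close(); usrv.close()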
And whether you use TCP or UDP, you can now add TLS-style security protection.
I await the arrival of encrypted UDP traffic with some interest.
I've been consistently amazed by human behaviours for many years, and through many employers.
One of the behaviours that always astonishes me is when I let someone know that they're violating security policy, or simply behaving in an insecure manner, and rather than changing their behaviour or defending their own actions per se, they respond with some variation of "sure, but such-and-such team/person is already doing that, and far worse".
Maybe it's my grammar school upbringing, in which it was clear that the response of "but Sir, Jenkins minor was also chewing gum" was not only going to get Jenkins into trouble, but also get me into more trouble (if only when Jenkins found out who snitched). I really can't see that there's any appropriate response to such statements other than to say "well, thank you for drawing my attention to that other infraction, which I will decide to address at my convenience – now, back to your case…" – or, perhaps less usefully, "that may be, but it's you that I caught."
I readily acknowledge that institutional behaviours are learned as much from the actions of one's peers as from any policy, which is why it is important to curb widespread, culturally-ingrained wrongness.
But I don't see what people who use this argument expect will happen – is there really a circumstance in their past in which someone said "really? Oh, well then, that's alright. Carry on."?
So, I’ve submitted my information for re-awarding as an MVP – we’ll see whether I’ve done enough this year to warrant being admitted again into the MVP ranks.
Next week is the MVP Summit, where I visit Microsoft in Bellevue and Redmond for a week of brainwashing and meet-n-greet. I joke about this being a bit of a junket, but in reality, I get more information out of this than from most of the other conferences I’ve attended – perhaps mostly because the content is so tightly targeted.
That’s not always the case, of course – sometimes you’re scheduled to hear a talk that you’ve already heard three different times this year, but for those occasions, my advice would be to find another one that’s going on at the same time that you do want to hear. Talk to other MVPs not in your speciality, and find out what they’re attending. If you feel like you really want to get approval, ask your MVP lead if it’s OK to switch to the other session.
Very rarely a talk will be so strictly NDA-related that you will be blocked from entering, but not often.
Oh, and trade swag with other MVPs. Very frequently your fellow MVPs will be willing to trade swag that they got for their speciality for yours – or across regions. Make friends and talk to people – and don’t assume that the ‘industry luminaries’ aren’t willing to talk to you.
Also this week, comes news that I’ve been recognised for authoring the TechNet Wiki article of the Week, for my post on Microsoft’s excellent Elevation of Privilege Threat Modeling card game. Since that post was made two years ago, I’ve used the deck in a number of environments and with a few different game styles, but the goal each time has remained the same, and been successfully met – to make developers think about the threats that their application designs are subject to, without having to have those developers be security experts or have any significant experience of security issues.