Monthly Archives: February 2010

Samples that suck: IfModifiedSince

I’ve been trying to improve my IFetch application’s overall performance, and it’s clear that the best immediate improvement is to cache the information returned from the BBC Radio web site, so that next time around, the application doesn’t have to reload it all from the web, slow as that often is.

My first thought was quite simple – store the RDF (Resource Description Framework) XML files in a cache directory, purge them if they haven’t been accessed in, say, a month, and only fetch them again from the web if the web page has been updated since the file was last modified.
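
In code, the plan looks something like this – a minimal sketch of my own, not IFetch’s actual code (the monthly purge of stale files is left out, and the RefreshCache name is an invention for illustration):

using System;
using System.IO;
using System.Net;

static void RefreshCache(string rdfUrl, string cacheFile)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(rdfUrl);
    if (File.Exists(cacheFile))
        request.IfModifiedSince = File.GetLastWriteTimeUtc(cacheFile);
    try
    {
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream body = response.GetResponseStream())
        using (FileStream cache = File.Create(cacheFile))
        {
            // Either we had no cached copy, or the page is newer: refresh the cache.
            byte[] buffer = new byte[4096];
            int count;
            while ((count = body.Read(buffer, 0, buffer.Length)) > 0)
                cache.Write(buffer, 0, count);
        }
    }
    catch (WebException we)
    {
        HttpWebResponse response = we.Response as HttpWebResponse;
        if (response == null || response.StatusCode != HttpStatusCode.NotModified)
            throw; // a real failure, not a "not modified" answer
        // 304 Not Modified: the cached file is still current, so use it as-is.
    }
}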

Sadly, I wrote the code before I discovered that the BBC Radio web site acts as if all these RDF files were modified this last second.

Happily, I wrote the code before I went and looked at the documentation for the HttpWebRequest.IfModifiedSince Property.

Here’s the sample code (with line numbers, which I’ll refer to below):

01    // Create a new ‘Uri’ object with the mentioned string.
02    Uri myUri =new Uri("http://www.contoso.com");           
03    // Create a new ‘HttpWebRequest’ object with the above ‘Uri’ object.
04    HttpWebRequest myHttpWebRequest= (HttpWebRequest)WebRequest.Create(myUri);
05    // Create a new ‘DateTime’ object.
06    DateTime today= DateTime.Now;
07    if (DateTime.Compare(today,myHttpWebRequest.IfModifiedSince)==0)
08    {
09        // Assign the response object of ‘HttpWebRequest’ to a ‘HttpWebResponse’ variable.
10        HttpWebResponse myHttpWebResponse=(HttpWebResponse)myHttpWebRequest.GetResponse();
11        Console.WriteLine("Response headers \n{0}\n",myHttpWebResponse.Headers);
12        Stream streamResponse=myHttpWebResponse.GetResponseStream();
13        StreamReader streamRead = new StreamReader( streamResponse );
14        Char[] readBuff = new Char[256];
15        int count = streamRead.Read( readBuff, 0, 256 );
16        Console.WriteLine("\nThe contents of Html Page are :  \n");   
17        while (count > 0)
18        {
19            String outputData = new String(readBuff, 0, count);
20            Console.Write(outputData);
21            count = streamRead.Read(readBuff, 0, 256);
22        }
23        // Close the Stream object.
24        streamResponse.Close();
25        streamRead.Close();
26        // Release the HttpWebResponse Resource.
27        myHttpWebResponse.Close();
28        Console.WriteLine("\nPress ‘Enter’ key to continue……………..");   
29        Console.Read();
30    }
31    else
32    {
33        Console.WriteLine("\nThe page has been modified since "+today);
34    }

So, what’s the problem?

First, there’s the one that should be fairly obvious – by comparing (in line 7) for equality to “DateTime.Now” (assigned in line 6), the programmer has essentially said that this sample is designed to differentiate pages modified after the run from pages modified before the run. This will have one of two effects – on a site where the If-Modified-Since request header works properly, all results will demonstrate that the page has not been modified; on a site where If-Modified-Since always returns that the page has been modified, it will of course always state that the page has been modified. That alone makes this not a very useful sample, even if the rest of the code were correct.

But the greater error is that the IfModifiedSince value is a request header, and yet it is being compared against a target date, as if it already contains the value of the page’s last modification. How would it get that value (at line 7), when the web site isn’t actually contacted until the call to GetResponse() in line 10?
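
To make the ordering concrete (lastCachedTime being whatever date we want the server to test against – a variable of my invention, not the sample’s):

// The client fills in the request header first; it's our question, not the server's answer…
myHttpWebRequest.IfModifiedSince = lastCachedTime;
// …and only this call actually contacts the server, which can then compare dates and respond.
HttpWebResponse response = (HttpWebResponse)myHttpWebRequest.GetResponse();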

Also irritating – and typical of the .NET Framework’s overuse of exceptions – is that a NotModified response isn’t treated as a simple category of response: it’s delivered as an exception.

How should this code be changed?

My suggestion is as follows – let me know if I’ve screwed anything else up:

// Create a new 'Uri' object with the mentioned string.
Uri myUri = new Uri("http://www.google.com/intl/en/privacy.html");
// Create a new 'HttpWebRequest' object with the above 'Uri' object.
HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create(myUri);
// Create a new 'DateTime' object.
DateTime today = DateTime.Now;
today = today.AddDays(-21.0); // Test for pages modified in the last three weeks.
myHttpWebRequest.IfModifiedSince = today;
try
{
    // Assign the response object of 'HttpWebRequest' to a 'HttpWebResponse' variable.
    HttpWebResponse myHttpWebResponse = (HttpWebResponse)myHttpWebRequest.GetResponse();
    Console.WriteLine("Page modified recently\nResponse headers \n{0}\n", myHttpWebResponse.Headers);
    Stream streamResponse = myHttpWebResponse.GetResponseStream();
    StreamReader streamRead = new StreamReader(streamResponse);
    Char[] readBuff = new Char[256];
    int count = streamRead.Read(readBuff, 0, 256);
    Console.WriteLine("\nThe contents of Html Page are :  \n");
    while (count > 0)
    {
        String outputData = new String(readBuff, 0, count);
        Console.Write(outputData);
        count = streamRead.Read(readBuff, 0, 256);
    }
    // Close the Stream object.
    streamResponse.Close();
    streamRead.Close();
    Console.WriteLine("\nPress 'Enter' key to continue...");
    Console.Read();
    // Release the HttpWebResponse Resource.
    myHttpWebResponse.Close();
}
catch (System.Net.WebException e)
{
    if (e.Response != null)
    {
        if (((HttpWebResponse)e.Response).StatusCode == HttpStatusCode.NotModified)
            Console.WriteLine("\nThe page has not been modified since " + today);
        else
            Console.WriteLine("\nUnexpected status code " + ((HttpWebResponse)e.Response).StatusCode);
    }
    else
    {
        Console.WriteLine("\nUnexpected Web Exception " + e.Message);
    }
}

So, why are sucky samples relevant to a blog about security?

The original sample doesn’t cause any obvious security problems, apart from its complete lack of exception handling.

But other samples I have seen do contain significant security flaws, or at least behaviours that tend to lead to security flaws – not checking the size of buffers, concatenating strings and passing them to SQL, etc. It’s worth pointing out that while writing this blog post, I couldn’t find any such samples at Microsoft’s MSDN site, so they’ve obviously done at least a moderately good job at fixing up samples.
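
For instance, the SQL concatenation mistake and its fix look like this – a sketch, with an invented Customers table and an already-open connection and name variable assumed:

using System.Data.SqlClient;

// Sucky: user input is pasted straight into the query text – one stray
// apostrophe in "name" and an attacker is writing your SQL for you.
SqlCommand bad = new SqlCommand(
    "SELECT * FROM Customers WHERE Name = '" + name + "'", connection);

// Better: the value travels as a parameter, never as query text.
SqlCommand good = new SqlCommand(
    "SELECT * FROM Customers WHERE Name = @name", connection);
good.Parameters.AddWithValue("@name", name);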

This sample simply doesn’t work, though, which implies that the tech writer responsible didn’t actually test it in the two most obvious cases (page modified, page not modified) to see that it worked.

Bad Names: Windows Phone Mobile Compact Edition Seven Series Pocket PC

OK, admittedly, the name isn’t really that long, but even though I’m spending this week on Microsoft’s home turf, I can’t say that I’ve met two people for whom the proper name of the new version of Windows Mobile trips off the tongue:

Windows Phone Seven Series

Seriously? Every single word there is a generic term, and will have large numbers of inappropriate matches when you go searching for them.

Right now, while the hype is high, a search for those terms brings back mostly matches for the Windows Phone, but in a few weeks, it’s anyone’s guess what you’ll find.

Search for iPhone, or iPad, by comparison, and although you’ll find a pile of parody sites, at least those parodies are parodies of the products in question. Every search result is relevant to the iPhone.

Why can’t Microsoft come up with a simple, single, searchable brand name for their products? We see this all the time, with Bookshelf, Access, Excel, Word, Windows, Bob, etc.

What would be so difficult about picking up on the idea that this is, essentially, a Zune phone? Call it a “Zhone”, give it an interesting pronunciation (think “Zh is to Sh as Z is to S” – like the French “J” sound), and you’ve made for immediate cool, cemented the link with the Zune (hmm… could depend on how people like the Zune – personally, I’m so impressed by the Zune HD that I wish I could justify one to the wife), and made the product immediately searchable and identifiable. (Or if that name’s taken, Zuphone, Phozune, Phune, etc)

But no, seriously dorky names are en vogue at Microsoft, always have been and probably always will be. Of course, why should you listen to me, a security guy who dabbles in development and has no marketing ability, when instead you’ve got all those highly paid marketers who tell you that “Windows Phone Seven Series from Kyocera [or Dell, Samsung, etc]” will sell?

The bottom line

Notice, however, that the only thing I have to diss this phone on is its name. Having briefly played with a Zune HD, if it follows the promise of being the same kind of device with phone capabilities added on, this will be a trouser-changing experience. [I’m told the expression to use is “game-changing experience”, but the Zune HD combined with phone would simply be that good.]

Malware blue-screens when patched

The Microsoft update MS10-015 recently demonstrated rather dramatically that unauthorised patches make your operating system significantly unstable and unreliable.

In this case, the unauthorised patch is a rootkit called, among other things, “Alureon”, which alters some low-level drivers supplied with Windows.

Those of us who have been in this industry for a while may remember how unreliable updates used to be – but now, we find that patches are far easier to trust. The recent blue-screen of death (BSoD) errors associated with MS10-015 caused people to reassess that idea, as always happens when a patch is associated with crashes or malfunctions.

It’s very nice to see that those doubts are unfounded in this case, that MS10-015 received as good a round of testing as any of the other patches issued by Microsoft, and that these BSoD errors are the result of a third-party developer failing to anticipate the prospect that Microsoft might make changes while patching for other issues.

Of course, in this case, it’s a malware writer, and we can be forgiven for thinking that this is to be expected because malware writers are sloppy. The truth is that some malware writers are not – it’s how they remain undetected, it’s how they continue to extract value from the systems they have breached, and it’s how they manage to keep spreading. There has even been some speculation that some attackers will patch and fix the systems they infiltrate in order to keep their malware running. Obviously, though, it’s not a sound business strategy to let someone breach your systems in the hope that they’ll keep those systems running reliably.

No, the message here is that the operating system on a Windows computer belongs to Microsoft, and they document well those places where you are expected to modify it. Step outside those boundaries of safe patching, and you run a good risk that a patch will trigger significant adverse behaviour. I believe I said something along those lines back when the antivirus vendors were complaining about PatchGuard, the technology in 64-bit Windows designed to prevent unauthorised patching of the Windows kernel.

If you experience problems as a result of applying a Microsoft security patch, or if you are experiencing what appears to be a security flaw in a Microsoft product, don’t forget that you can get free phone support in most countries. The US / Canada number to call for security issues is 1-866-PC SAFETY (1-866-727-2338).

MVP Summit Next Week

Today, I’ve been reminding many people at work that I’ll be out next week for the MVP Summit.

In previous years, the questions I’ve received in response have been mainly about “what’s that?”, “does that mean you work for Microsoft?”, “what are you going to be learning about?” etc.

This year, the questions have moved on to “what kind of stuff do you get from that?”, “are they going to give you a Zune?”, “do you all get a new Windows Phone?” and so on.

While that would certainly be a really cool thing, I think it is worth pointing out that Microsoft’s MVP programme is suffering from the credit crunch just as much as anyone else. When I first joined, I can remember that the hotel room I stayed in for the MVP Summit was huge – there was a phone in the bathroom, which was necessary because you had to call for a taxi to get to the bed. Now, we’re expected to double up on room occupancy. Previous summits have been held in Seattle at the conference centre; this summit is in Bellevue. As has been revealed in numerous places, there’s no longer any concept of “MVP Bucks” that we get to spend each year at the company store. Many of the programme group dinners are now held in Microsoft cafeterias, nice though they are, rather than in restaurants and bars around the Redmond area.

So, no, I don’t anticipate getting a Zune or a Tablet PC (but wouldn’t it be funny if Steve Jobs were to offer us all iPads?) – though we might hear something about the much rumoured Zune Phone, if it really exists at all, but then we probably would be told to keep it a secret.

What I do anticipate is getting a look into some of the attitudes that are being brought to the design of Windows 8, IE 9, IIS 8, ADFS, etc. With luck, I’ll learn something I can bring back not only to work, but also to readers of my blog, and to the newsgroups I still hang out in. [Occasionally I'll hit the web forums, but they're still too painfully slow and cumbersome to read and respond to on a regular basis.]

And that’s well worth the price of admission.

Did I say there’s a price to being an MVP? Yes, there is, and it is that you help the community of Microsoft customers. Because it’s a retrospective award, and the criterion is something like “conspicuously more than others in the field”, it’s not really something you can evaluate ahead of time – and true to that, most of the MVPs would “pay that price” even in the absence of an MVP programme. It’s just that with membership of the programme, it’s a little easier to give the right advice.

Are you rugged?

[Image: He’s Brawny, not Rugged.]

As a developer, I’ve heard a number of adjectives applied to those who practice my craft.

“Rugged” isn’t one I expect to hear very often.

Granted, there are a few who alternate their brief stints of coding with explorations of the far-flung hinterland, but even these never quite seem to fill the “rugged” ideal.

Until today.

Today, I and many of my fellow developers can finally declare ourselves to be “rugged” under the new “Rugged Software Manifesto”.

I’m not sure how attractive that adjective will be to the target audience of developers, but I can do nothing but applaud the goals of the project.

Plain and simple, the project aims to turn “feature-eager” developers into writers of robust and secure code.

That just can’t be a bad thing.

I’m always whining about developers who have been taught how to add features to their programs, and who get heaped with praise for doing so, when the end result is a program with piles of unexpected features added in, because the developer never took the trouble to ensure that the software can’t be exploited.

The Rugged Software Manifesto seems to be about reminding developers that they have a duty to declare that a feature isn’t finished until it not only does what its designer expects it to do, but also does not do what it is not expected to do.

The Manifesto emphasises that the attackers may be cleverer and more persistent than the developer – no surprise, because a developer has to be “finished” with his software at some point, but the attacker never has to be finished attacking it, until everyone has stopped using it.

I’m not sure that I agree with comments in news interviews that the Rugged Software Manifesto will naturally butt heads with the Agile Manifesto – the really good Agile adherents recognise that security is a feature that needs to be developed like any other before shipping a final product, and ideally needs to be developed at each sprint, for each feature.

Only those people who are desperate to cling to the latest fad are Agile to the point of being Fragile. And since Rugged may now be the latest fad, the Rugged Software folks will only be too pleased to welcome those bandwagon-riders on board. Maybe they’ll learn a thing or two about writing secure code.

That Manifesto in Full

For those of you that don’t click through to the links, here’s the full text of the Rugged Software Manifesto. Hand on heart, straight face on front of head, repeat after me:

The Rugged Software Manifesto

TLS Renegotiation attack – Microsoft workaround/patch

Hidden by the smoke and noise of thirteen (13! count them!) security bulletins, with updates for 26 vulnerabilities and a further 4 third-party ActiveX Killbits (software that other companies have asked Microsoft to kill because of security flaws), we find the following, a mere security advisory:

Microsoft Security Advisory (977377): Vulnerability in TLS/SSL Could Allow Spoofing

It’s been a long time coming, this workaround – which disables TLS / SSL renegotiation in Windows, not just IIS.

Disabling renegotiation in IIS is pretty easy – you simply disable client certificates or mutual authentication on the web server. This patch gives you the ability to disable renegotiation system-wide, even in the case where the renegotiation you’re disabling is on the client side. I can’t imagine for the moment why you might need that, but when deploying fixes for symmetrical behaviour, it’s best to control it using switches that work in either direction.

The long-term fix is yet to arrive – and that’s the creation and implementation of a new renegotiation method that takes into account the traffic that has gone on before.

To my mind, even this is a bit of a concession to bad design of HTTPS, in that HTTPS causes a “TOC/TOU” (Time-of-check/Time-of-use) vulnerability, by not recognising that correct use of TLS/SSL requires authentication and then resource request, rather than the other way around. But that’s a debate that has enough clever adherents on both sides to render any argument futile.

Suffice it to say that this can be fixed most easily by tightening up renegotiation at the TLS layer, and so that’s where it will be fixed.

Should I apply this patch to my servers?

I’ll fall back to my standard answer to all questions: it depends.

If your servers do not use client auth / mutual auth, you don’t need this patch. Your server simply isn’t going to accept a renegotiation request.

If your servers do use client authentication / mutual authentication, you can either apply this patch, or you can set the earlier available SSLAlwaysNegoClientCert setting to require client authentication to occur on initial connection to the web server.

One or other of these methods – the patch, or the SSLAlwaysNegoClientCert setting – will work for your application, unless your application strictly requires renegotiation in order to perform client auth. In that case, go change your application – and point its developers to documentation of the attack, so that they can see the extent of the problem.

Be sure to read the accompanying KB article to find out not only how to turn on or off the feature to disable renegotiation, but also to see which apps are, or may be, affected adversely by this change – to date, DirectAccess, Exchange ActiveSync, IIS and IE.
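
If you’d rather script the change than click through the registry editor, the workaround boils down to two SCHANNEL registry values. The value names below are from my reading of KB 977377, so treat them as assumptions and verify them against the article before relying on this sketch:

using Microsoft.Win32;

// Sketch: switch off TLS/SSL renegotiation system-wide once the 977377
// update is installed. The value names are my reading of KB 977377 –
// verify them against the article before running this.
const string schannelKey =
    @"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL";
using (RegistryKey schannel = Registry.LocalMachine.OpenSubKey(schannelKey, true))
{
    schannel.SetValue("DisableRenegoOnServer", 1, RegistryValueKind.DWord);
    schannel.SetValue("DisableRenegoOnClient", 1, RegistryValueKind.DWord);
}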

How is Microsoft’s response?

Speed

I would have to say that on the speed front, I would have liked to see Microsoft make this change far quicker. Disabling TLS/SSL renegotiation should not be a huge amount of code, and while it has some repercussions, and will impact some applications, there may be some institutions who – out of a heightened sense of fear – would have wanted to disable renegotiation lock, stock and barrel in a hurry, as long as the change didn’t cause instability.

I’m usually the first to defend Microsoft’s perceived slowness to patch, on the basis that they do a really good job of testing the fixes, but for this, I have to wonder if Microsoft wasn’t a little over-cautious.

Accuracy

While I have no quibbles with the bulletin, there are a couple of statements in the MSRC blog entry that I would have to disagree with:

IIS 6, IIS 7, IIS 7.5 not affected in default configuration

Customers using Internet Information Services (IIS) 6, 7 or 7.5 are not affected in their default configuration. These versions of IIS do not support client-initiated renegotiation, and will also not perform a server-initiated renegotiation. If there is no renegotiation, the vulnerability does not exist. The only situation in which these versions of the IIS web server are affected is when the server is configured for certificate-based mutual authentication, which is not a common setting.

Well, of course – in the default setting on most Windows systems, IIS is not installed, so it’s not vulnerable.

That’s clearly not what they meant.

Did they mean “the default configuration with IIS installed and turned on, with a certificate installed”?

Clearly, but that’s hardly “the default configuration”. It may not even be the most commonly used configuration for IIS, as many sites escape without needing to use certificates.

Sadly, if I add “and mutual authentication enabled”, we’re only one checkbox away from the “default configuration” to which this article refers, and we’re suddenly into vulnerable territory.

In other words, if you require client / mutual authentication, then the default configuration of IIS that will achieve that is vulnerable, and you have to make a decided change to non-default configuration (the SSLAlwaysNegoClientCert setting), in order to remain non-vulnerable without the 977377 patch.

The other concern I have is over the language in the section “Likelihood of the vulnerability being exploited in general case”, which discusses only the original CSRF-like behaviour exploited under the initial reports of this problem.

There are other ways to exploit this, some of which require a little asinine behaviour on the part of the administrator, and others of which are quite surprisingly efficient. I was particularly struck by the ability to redirect a client, and make it appear that the server is the one doing the redirection.

I think that Eric and Maarten understate the likelihood of exploit – and they do not sufficiently emphasise that the chief reason this won’t be exploited is that it requires a MITM (Man-in-the-middle) attack to have already successfully taken place without being noticed. That’s not trivial or common – although there are numerous viruses and bots that achieve it in a number of ways.

Clarity

It’s a little unclear on first reading the advisory whether this affects just IIS or all TLS/SSL users on the affected system. I’ve asked if this can be addressed, and I’m hoping to see the advisory change in the coming days.

Summary

I’ve rambled on for long enough – the point here is that if you’re worried about SSL / TLS client certificate renegotiation issues that I’ve reported about in posts 1, 2 and 3 of my series, by all means download and try this patch.

Be warned that it may kill behaviour your application relies upon – if that is the case, then sorry, you’ll have to wait until TLS is fixed, and then drag your server and your clients up to date with that fix.

The release of this advisory is by no means the end of the story for this vulnerability – there will eventually be a supported and tested protocol fix, which will probably also be a mere advisory, followed by updates and eventually a gradual move to switch to the new TLS versions that will support this change.

This isn’t a world-busting change, but it should demonstrate adequately that changes to encryption protocols are not something that can happen overnight – or even in a few short months.

A golden rule of performance improvement

The Rule: Performance optimizations are not worth making for anything less than 10% improvement in speed.

Corollary: Performance optimizations must be measured before and after, and changes reverted if they do not cause significant performance improvement.

Converse: If you are pushing back on implementing a feature “because it will make the app unbearably slow”, particularly if that feature is deemed a security requirement, you had better be able to demonstrate that loss in performance, and it had better be significant.
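
Measuring doesn’t have to be fancy, either – a Stopwatch around the code path in question, run before and after the change, is enough to apply the corollary (DoWork here is a stand-in name for whatever you’re optimizing):

using System;
using System.Diagnostics;

// Time enough iterations to drown out scheduler noise; run this once
// before the change and once after, on the same machine and data.
Stopwatch timer = Stopwatch.StartNew();
for (int i = 0; i < 1000; i++)
    DoWork(); // stand-in for the code path being "optimized"
timer.Stop();
Console.WriteLine("1000 iterations took {0} ms", timer.ElapsedMilliseconds);

If the before and after numbers differ by less than that 10% threshold, the corollary says the change comes back out.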

I came by this insight through years of work as a developer, in which I’ve seen far too many mistakes introduced by people either being “clever” or taking shortcuts – and the chief reason given for both of these behaviours is that the developer was “trying to make the program faster” (or leaner, or smaller).

I have also seen developers disable security features, or insist that they shouldn’t implement security measures (SSL is a common example) because they “will make the application slower”.

I have yet to see a project complete a proper implementation of SSL for security that significantly slowed the application. In many cases, the performance testing that was done to ensure that SSL had no significant effect demonstrated that the bottleneck was already somewhere else.