
Things I learned at Microsoft


Microsoft’s (new!) SDL Threat Modeling Tool 2014

Amid almost no fanfare whatsoever, Microsoft yesterday released a tool I’ve been begging them for over the last five or six years.

[It’s not unusual for me to be so persistently demanding, as I’ve found it’s often the only way to get what I want.]

As you’ve guessed from the title, this tool is the “SDL Threat Modeling Tool 2014”. Sexy name, indeed.

Don’t they already have one of those?

Well, yeah, kind of. There’s the TAM Threat Analysis & Modeling Tool, which is looking quite creaky with age now, and which I never found to be particularly usable (though some people have had success with it, so I’m not completely dismissive of it). Then there are the previous versions of the SDL Threat Modeling Tool.

These have had their uses – and certainly it’s noticeable that when I work with a team of developers, one of whom has worked at Microsoft, it’s encouraging to ask “show me your threat model” and have them turn around with something useful to dissect.

So what’s wrong with the current crop of TM tools?

In a word, Cost.

Threat modeling tools from vendors other than Microsoft are pretty pricey. If you’re a government or military contractor, they’re probably great and wonderful. Otherwise, you’ll probably draw your DFDs in PowerPoint (yes, that’s one of the easier DFD tools available to most of you!), and write your threat models in Word.

Unless, of course, you download and use the Microsoft SDL Threat Modeling Tool, which has always been free.

So where’s the cost?

The SDL TM tool itself was free, but it had a rather significant dependency.

Visio.

Visio is not cheap.

As a result, those of us who championed threat modeling at all in our enterprises found it remarkably difficult to get approval to use a free tool that depended on an expensive tool that nobody was going to use.

What’s changed today?

With the release of Microsoft SDL Threat Modeling Tool 2014, Microsoft has finally delivered a tool that allows for the creation of moderately complex DFDs (you don’t want more complex DFDs than that, anyway!), and a threat library-based analysis of those DFDs, without making it depend on anything more expensive or niche than Windows and .NET. [So, essentially, just Windows.]

Yes, that means no Visio required.

Is there anything else good about this new tool?

A quick bullet list of some of the features you’ll like, besides the lack of Visio requirement:

  • Imports from the previous SDL Threat Modeling Tool (version 3), so you don’t have to re-work your existing models
  • Multiple diagrams per model, for different levels of DFD
  • Analysis is per-interaction, rather than per-object [scary, but functionally equivalent to per-object]
  • The file format is XML, and is reasonably resilient to modification
  • Objects and data flows can represent multiple types, defined in an XML KnowledgeBase
  • These types can have customised data elements, also defined in XML
  • The rules about what threats to generate are also defined in XML
  • [These together mean an enterprise can create a library of threats for their commonly-used components]
  • Trust boundaries can be lines, or boxes (demonstrating that trust boundaries surround regions of objects)
  • Currently supported by a development team who are responsive to feature requests

Call to Action?

Yes, every good blog post has to have one of these, doesn’t it? What am I asking you to do with this information?

Download the tool. Try it out on a relatively simple project, and see how easy it is to generate a few threats.

Once you’re familiar with the tool, visit the KnowledgeBase directory in the tool’s installation folder, and read the XML files that were used to create your threats.

Add an object type.

Add a data flow type.

Add custom properties that describe your custom types.

Use those custom properties in a rule you create to generate one of the common threats in your environment.

Work with others in your security and development teams to generate a good threat library, and embody it in XML rules that you can distribute to other users of the threat modeling tool in your enterprise.

Document and mitigate threats. Measure how successful you are, at predicting threats, at reducing risk, and at impacting security earlier in your development cycle.

Then do a better job on each project.

Multiple CA0053 errors with Visual Studio 11 Beta

I hate it when the Internet doesn’t know the answer – and doesn’t even have the question – to a problem I’m experiencing.

Because it was released during the MVP Summit, I was able to download the Visual Studio 11 Beta and run it on a VS2010 project.

There’s no “conversion wizard”, which bodes well, because it suggests that I will be able to use this project in either environment (Visual Studio 2010 or the new VS11 beta) without any problems. And certainly, the project I selected to try worked just fine in Visual Studio 11 and when I switched back to Visual Studio 2010.

Unfortunately, one of the things that I noticed when building my project is that the code analysis phase crapped out with fourteen instances of the CA0053 error:

[Screenshot: the Error List showing the CA0053 failures to load rule assemblies.]

As you can see, this is all about being unable to load rule assemblies from the previous version of Visual Studio – and is more than likely related to my installing the x64 version of Visual Studio 11 Beta, which therefore can’t load the 32-bit (x86) DLLs from Visual Studio 2010.

Curiously this problem only exists on one of the projects in my multi-project solution, and of course I couldn’t find anywhere in the user interface to reset this path.

I thought for a moment I had hit on something when I checked the project’s options and found the Code Analysis tab, but no matter what I did to change the rule set, there was no place to select the path to that rule set.

Then I decided to go searching for the path in the source tree.

There it was, in the project’s “.csproj” file – two entries in the XML file, CodeAnalysisRuleSetDirectories and CodeAnalysisRuleDirectories. These consisted of the simple text:

<CodeAnalysisRuleSetDirectories>;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets</CodeAnalysisRuleSetDirectories>

<CodeAnalysisRuleDirectories>;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules</CodeAnalysisRuleDirectories>

As you can imagine, I wouldn’t normally suggest hand-editing files that the interface usually takes care of for you, but it’s clear that in this case, the interface wasn’t helping.

So, I just closed all currently open copies of Visual Studio (all versions), and edited the file in notepad. I kept the entries themselves, but deleted the paths:

<CodeAnalysisRuleSetDirectories></CodeAnalysisRuleSetDirectories>

<CodeAnalysisRuleDirectories></CodeAnalysisRuleDirectories>

Errors gone; problem solved.

You’re welcome, Internet.

MVP news

My MVP award expires on March 31

So, I’ve submitted my information for re-awarding as an MVP – we’ll see whether I’ve done enough this year to warrant being admitted again into the MVP ranks.

MVP Summit

Next week is the MVP Summit, where I visit Microsoft in Bellevue and Redmond for a week of brainwashing and meet-n-greet. I joke about this being a bit of a junket, but in reality, I get more information out of this than from most of the other conferences I’ve attended – perhaps mostly because the content is so tightly targeted.

That’s not always the case, of course – sometimes you’re scheduled to hear a talk that you’ve already heard three different times this year, but on those occasions, my advice would be to find another session in the same time slot that you do want to hear. Talk to other MVPs not in your speciality, and find out what they’re attending. If you feel like you really want to get approval, ask your MVP lead if it’s OK to switch to the other session.

Only very rarely will a talk be so strictly NDA-related that you are blocked from entering.

Oh, and trade swag with other MVPs. Very frequently your fellow MVPs will be willing to trade swag that they got for their speciality for yours – or across regions. Make friends and talk to people – and don’t assume that the ‘industry luminaries’ aren’t willing to talk to you.

Featured TechNet Wiki article

Also this week, comes news that I’ve been recognised for authoring the TechNet Wiki article of the Week, for my post on Microsoft’s excellent Elevation of Privilege Threat Modeling card game. Since that post was made two years ago, I’ve used the deck in a number of environments and with a few different game styles, but the goal each time has remained the same, and been successfully met – to make developers think about the threats that their application designs are subject to, without having to have those developers be security experts or have any significant experience of security issues.

On Full Disclosure

I’ve written before on “Full Disclosure”:

Recent events have me thinking once again about “full disclosure”, its many meanings, and how it makes me feel when bugs are disclosed publicly without allowing for the vendor or developer to address the bug for themselves.

The post that reminded me to write on this topic was Tavis Ormandy’s revelation of the Help Control Protocol vulnerability, but it could have been anyone that triggered me to write this.

How you disclose implies your motivation

Securing the users

If your motivation is to help secure users and their systems, then I think your disclosure pattern should roughly be:

  1. Find the world-renowned experts in the code where the vulnerability lies (usually including the software’s developers).
  2. Discuss the extent of the flaw, and methods to fix and/or work around it.
  3. Get consensus.
  4. Test workarounds and fixes, so as to ensure that your fix is sufficient, as well as that it does not kill more important functionality.
  5. Publicise only as much demonstration as is required to show that the problem exists, and that it is serious.
  6. Release patches and workarounds, and work with affected users to assist them in deploying these.
  7. After a reasonable amount of time, publicise the exploit in full detail, so as to encourage developers not to make similar mistakes, and to ensure that slow users are given good reason to upgrade their systems.
  8. Only if the vendor refuses to work with you at all do you publish without their involvement.

[Obviously, some of the timing moves up if and when the exploit appears in the wild, but the order is essentially the same.]

Disadvantages:

  • The bad guys may already have the vulnerability.
    • This only makes sense with relatively obvious vulnerabilities, and even then, working with the vendor allows you and the vendor to quantify its extent beyond what you know on your own, and beyond what the bad guys currently know, so that the bug can be fixed properly. Believe it or not, enterprises get really pissed when you release a “bug fix”, and then release another fix for the same bug, and then another fix for the same bug. Every time you revise the bug fix, you decrease the number of users applying the fix.
  • Someone else may publish ahead of you.
    • That’s okay, you’re smart and you’ll get the next one – besides, most vendors you’re working with will say in their bug report that you reported it to them, rather than the guy who publishes half-cocked.
    • Your bug report, collaborating with the vendor/developer, will be correct, whereas the other guy’s report will be full of its own holes, which you and the vendor can happily point out.

Personal publicity

It’s fairly clear that there are some people in the security research industry whose main goal is that of self-publicity. These are the show-offs, whether they are publicising their company or their services or just themselves.

For these people the disclosure pattern would be:

  1. Demonstrate how clever I am by detailing the depth of the exploit with full examples.
  2. Watch while everything else happens.
  3. Occasionally interject that others don’t understand how important this vulnerability is.

Disadvantages:

  • This really makes the vendor hate you – which is great if you don’t ever need their assistance.
  • Occasionally, you’ll report something stupid – something that demonstrates that not only are you clueless about the software, but you’re loudly clueless.
  • It’s obvious that you’re in this for the publicity, rather than to help the user community get secure; as a result, users don’t come to you as much for help in securing their systems. Which is a shame if that’s the job you’re trying to get publicity for.

Just for the money

When all you’re in it for is the money, the answer is clear – you shop around, describing your bugs to Tipping Point and the like, then selling your bug to the highest bidder.

Disadvantages:

  • You may not necessarily get the publicity that brings future contracts and job interest.
  • There’s a chance that the person / group buying your bug doesn’t share your motives.
  • You get no further control over the progress of your bug.

Sometimes this isn’t so bad – you get the money, and many of the vulnerability buyers will work with vendors to address the bug – all the while, protecting their subset of users with their security tool.

To punish the vendor

What a noble goal – you’re trying to make it clear to users that they have chosen the wrong vendor.

Here, the disclosure pattern is simple:

  1. Release full details of the vulnerability, with a wormable exploit that requires as little user interaction as possible.
  2. Decry the security of a vendor that would be so stupid as to produce such an obvious bug and not find it before release.
  3. Wait and watch as your posse takes up the call and similarly disses your chosen target.

Disadvantages:

  • Again, you can look like an idiot if your research isn’t quite up to snuff.
  • Actually, you can look like an idiot anyway with this approach, especially when you pick on vendors whose security has improved significantly.
  • Vendors have their own posse:
    • People who work at the vendor
    • People who admire the vendor
    • People who share the vendor’s position, and don’t want people like you being shitty to them either.
  • You have to ask yourself – what am I looking for in a vendor before I determine that they are no longer subject to punishment?
    • Or are all vendors equally complicit in evil?
    • [Or only those who are fallible enough to let a bug slip through their testing?]

Here’s the lesson

You may agree or disagree with a lot of what I’ve written above – but if you’re going to publish vulnerability research, you have to deal with the prospect that people will be watching what you post, when you post it, how you post it – and they will infer from that (even if you think you haven’t implied anything of the sort) a motive and a personality. What are your posts and your published research going to say about your motives? Is that what you want them to say? Are you going to have to spend your time explaining that this is not really what you intended?

As Tavis is discovering, you can also find it difficult to separate your private vulnerability research from your employer – the line is perhaps harder to draw in Tavis’ case, since he is apparently employed in the capacity of vulnerability researcher. If your employer is understanding and you have an agreement as to what is personal work and what is work work, that’s not a big problem – but it can be a significant headache if that has not been addressed ahead of time.

Samples that suck: IfModifiedSince

I’ve been trying to improve my IFetch application’s overall performance, and it’s clear that the best thing that could be done to improve it immediately is to cache the information being returned from the BBC Radio web site, so that next time around, the application doesn’t have to reload all the information from the web, slow as it often is.

My first thought was quite simple – store the RDF (Resource Description Format) XML files in a cache directory – purge them if they haven’t been accessed in, say, a month, and only fetch them again from the web if the web page has been updated since the file was last modified.
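
To make that concrete, here’s a minimal sketch of the plan – the URL, cache location and file name are placeholders of my own, not IFetch’s actual code, and it assumes .NET 4’s Stream.CopyTo:

using System;
using System.IO;
using System.Net;

class RdfCache
{
    static void Main()
    {
        // Placeholder address and cache path - substitute the real RDF URL and cache folder.
        Uri rdfUri = new Uri("http://www.bbc.co.uk/programmes/example.rdf");
        string cacheFile = Path.Combine(Path.GetTempPath(), "example.rdf");

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(rdfUri);
        if (File.Exists(cacheFile))
        {
            // Ask the server to send the page only if it changed since we cached it.
            request.IfModifiedSince = File.GetLastWriteTimeUtc(cacheFile);
        }

        try
        {
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            using (Stream body = response.GetResponseStream())
            using (FileStream cached = File.Create(cacheFile))
            {
                body.CopyTo(cached); // page changed (or no cache yet) - refresh the local copy
            }
        }
        catch (WebException ex)
        {
            HttpWebResponse response = ex.Response as HttpWebResponse;
            if (response != null && response.StatusCode == HttpStatusCode.NotModified)
            {
                // 304 Not Modified - the cached file is still current, so read from cacheFile.
            }
            else
            {
                throw; // a genuine failure, not the "not modified" signal
            }
        }
    }
}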

Sadly, I wrote the code before I discovered that the BBC Radio web site acts as if all these RDF files were modified this last second.

Happily, I wrote the code before I went and looked at the documentation for the HttpWebRequest.IfModifiedSince Property.

Here’s the sample code (with source colouring and line numbers):

01    // Create a new ‘Uri’ object with the mentioned string.
02    Uri myUri =new Uri("http://www.contoso.com");           
03    // Create a new ‘HttpWebRequest’ object with the above ‘Uri’ object.
04    HttpWebRequest myHttpWebRequest= (HttpWebRequest)WebRequest.Create(myUri);
05    // Create a new ‘DateTime’ object.
06    DateTime today= DateTime.Now;
07    if (DateTime.Compare(today,myHttpWebRequest.IfModifiedSince)==0)
08    {
09        // Assign the response object of ‘HttpWebRequest’ to a ‘HttpWebResponse’ variable.
10        HttpWebResponse myHttpWebResponse=(HttpWebResponse)myHttpWebRequest.GetResponse();
11        Console.WriteLine("Response headers \n{0}\n",myHttpWebResponse.Headers);
12        Stream streamResponse=myHttpWebResponse.GetResponseStream();
13        StreamReader streamRead = new StreamReader( streamResponse );
14        Char[] readBuff = new Char[256];
15        int count = streamRead.Read( readBuff, 0, 256 );
16        Console.WriteLine("\nThe contents of Html Page are :  \n");   
17        while (count > 0)
18        {
19            String outputData = new String(readBuff, 0, count);
20            Console.Write(outputData);
21            count = streamRead.Read(readBuff, 0, 256);
22        }
23        // Close the Stream object.
24        streamResponse.Close();
25        streamRead.Close();
26        // Release the HttpWebResponse Resource.
27        myHttpWebResponse.Close();
28        Console.WriteLine("\nPress ‘Enter’ key to continue……………..");   
29        Console.Read();
30    }
31    else
32    {
33        Console.WriteLine("\nThe page has been modified since "+today);
34    }

So, what’s the problem?

First, there’s the one that should be fairly obvious – by comparing (in line 7) for equality to “DateTime.Now” (assigned in line 6), the programmer has essentially said that this sample is designed to differentiate pages modified after the run from pages modified before the run. Now this will have one of two effects – on a site where the If-Modified-Since request header works properly, all results will demonstrate that the page has not been modified; on a site where If-Modified-Since always returns that the page has been modified, it will of course always state that the page has been modified. That alone makes this not a very useful sample, even if the rest of the code was correct.

But the greater error is that the IfModifiedSince value is a request header, and yet it is being compared against a target date, as if it already contains the value of the page’s last modification. How would it get that value (at line 7), when the web site isn’t actually contacted until the call to GetResponse() in line 10?

Also irritating – because the .NET Framework makes far too much use of exceptions – is that the NotModified response, instead of being a simple category of response, is surfaced as an exception.

How should this code be changed?

My suggestion is as follows – let me know if I’ve screwed anything else up:

01 // Create a new ‘Uri’ object with the mentioned string.
02 Uri myUri = new Uri("http://www.google.com/intl/en/privacy.html");
03 // Create a new ‘HttpWebRequest’ object with the above ‘Uri’ object.
04 HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create(myUri);
05 // Create a new ‘DateTime’ object.
06 DateTime today = DateTime.Now;
07 today=today.AddDays(-21.0); // Test for pages modified in the last three weeks.
08 myHttpWebRequest.IfModifiedSince = today;
09 try
10 {
11     // Assign the response object of ‘HttpWebRequest’ to a ‘HttpWebResponse’ variable.
12     HttpWebResponse myHttpWebResponse = (HttpWebResponse)myHttpWebRequest.GetResponse();
13     Console.WriteLine("Page modified recently\nResponse headers \n{0}\n", myHttpWebResponse.Headers);
14     Stream streamResponse = myHttpWebResponse.GetResponseStream();
15     StreamReader streamRead = new StreamReader(streamResponse);
16     Char[] readBuff = new Char[256];
17     int count = streamRead.Read(readBuff, 0, 256);
18     Console.WriteLine("\nThe contents of Html Page are :  \n");
19     while (count > 0)
20     {
21         String outputData = new String(readBuff, 0, count);
22         Console.Write(outputData);
23         count = streamRead.Read(readBuff, 0, 256);
24     }
25     // Close the Stream object.
26     streamResponse.Close();
27     streamRead.Close();
28     Console.WriteLine("\nPress ‘Enter’ key to continue……………..");
29     Console.Read();
30     // Release the HttpWebResponse Resource.
31     myHttpWebResponse.Close();
32 }
33 catch (System.Net.WebException e)
34 {
35     if (e.Response != null)
36     {
37         if (((HttpWebResponse)e.Response).StatusCode == HttpStatusCode.NotModified)
38             Console.WriteLine("\nThe page has not been modified since " + today);
39         else
40             Console.WriteLine("\nUnexpected status code " + ((HttpWebResponse)e.Response).StatusCode);
41     }
42     else
43     {
44         Console.WriteLine("\nUnexpected Web Exception " + e.Message);
45     }
46 } 

So, why are sucky samples relevant to a blog about security?

The original sample doesn’t cause any obvious security problems, except for the obvious lack of exception handling.

But other samples I have seen do contain significant security flaws, or at least behaviours that tend to lead to security flaws – not checking the size of buffers, concatenating strings and passing them to SQL, etc. It’s worth pointing out that while writing this blog post, I couldn’t find any such samples at Microsoft’s MSDN site, so they’ve obviously done at least a moderately good job at fixing up samples.

This sample simply doesn’t work, though, and that implies that the tech writer responsible didn’t actually test it in the two most obvious cases (page modified, page not modified) to see that it works.

MVP Summit Next Week

Today, I’ve been reminding many people at work that I’ll be out next week for the MVP Summit.

In previous years, the questions I’ve received in response have been mainly about “what’s that?”, “does that mean you work for Microsoft?”, “what are you going to be learning about?” etc.

This year, the questions have moved on to “what kind of stuff do you get from that?”, “are they going to give you a Zune?”, “do you all get a new Windows Phone?” and so on.

While that would certainly be a really cool thing, I think it is worth pointing out that Microsoft’s MVP programme is suffering from the credit crunch just as much as anyone. When I first joined, I can remember the hotel room I stayed in for the MVP Summit was huge – there was a phone in the bathroom, which was necessary because you had to call for a taxi to get to the bed. Now, we’re expected to double up on room occupancy. Previous summits have been in Seattle at the conference centre; this summit is in Bellevue. As has been revealed in numerous places, there’s no concept of “MVP Bucks” that we get to spend each year at the company store any more. Many of the program group dinners are now held in Microsoft cafeterias, nice though they are, rather than in restaurants and bars around the Redmond area.

So, no, I don’t anticipate getting a Zune or a Tablet PC (but wouldn’t it be funny if Steve Jobs were to offer us all iPads?) – though we might hear something about the much rumoured Zune Phone, if it really exists at all, but then we probably would be told to keep it a secret.

What I do anticipate is getting a look into some of the attitudes that are being brought to the design of Windows 8, IE 9, IIS 8, ADFS, etc. With luck, I’ll learn something I can bring back not only to work, but also to readers of my blog, and to the newsgroups I still hang out in. [Occasionally I’ll hit the web forums, but they’re still too painfully slow and cumbersome to read and respond to on a regular basis.]

And that’s well worth the price of admission.

Did I say there’s a price to being an MVP? Yes, there is, and it is that you help the community of Microsoft customers. Because it’s a retrospective award, and the criteria are based on something like “conspicuously more than others in the field”, it’s not really something you can evaluate ahead of time – and true to that, most of the MVPs would “pay that price” even in the absence of an MVP programme. It’s just that with membership of the programme, it’s a little easier to give the right advice.

TLS Renegotiation attack – Microsoft workaround/patch

Hidden by the smoke and noise of thirteen (13! count them!) security bulletins, with updates for 26 vulnerabilities and a further 4 third-party ActiveX Killbits (software that other companies have asked Microsoft to kill because of security flaws), we find the following, a mere security advisory:

Microsoft Security Advisory (977377): Vulnerability in TLS/SSL Could Allow Spoofing

It’s been a long time coming, this workaround – which disables TLS / SSL renegotiation in Windows, not just IIS.

Disabling renegotiation in IIS is pretty easy – you simply disable client certificates or mutual authentication on the web server. This patch gives you the ability to disable renegotiation system-wide, even in the case where the renegotiation you’re disabling is on the client side. I can’t imagine for the moment why you might need that, but when deploying fixes for symmetrical behaviour, it’s best to control it using switches that work in either direction.

The long-term fix is yet to arrive – and that’s the creation and implementation of a new renegotiation method that takes into account the traffic that has gone on before.

To my mind, even this is a bit of a concession to bad design of HTTPS, in that HTTPS causes a “TOC/TOU” (Time-of-check/Time-of-use) vulnerability, by not recognising that correct use of TLS/SSL requires authentication and then resource request, rather than the other way around. But that’s a debate that has enough clever adherents on both sides to render any argument futile.

Suffice it to say that this can be fixed most easily by tightening up renegotiation at the TLS layer, and so that’s where it will be fixed.

Should I apply this patch to my servers?

I’ll fall back to my standard answer to all questions: it depends.

If your servers do not use client auth / mutual auth, you don’t need this patch. Your server simply isn’t going to accept a renegotiation request.

If your servers do use client authentication / mutual authentication, you can either apply this patch, or you can set the earlier available SSLAlwaysNegoClientCert setting to require client authentication to occur on initial connection to the web server.

One or other of these methods – the patch, or the SSLAlwaysNegoClientCert setting – will work for your application, unless your application strictly requires renegotiation in order to perform client auth. In that case, get your application changed, and point its developers to documentation of the attack, so that they can see the extent of the problem.

Be sure to read the accompanying KB article to find out not only how to turn on or off the feature to disable renegotiation, but also to see which apps are, or may be, affected adversely by this change – to date, DirectAccess, Exchange ActiveSync, IIS and IE.

How is Microsoft’s response?

Speed

I would have to say that on the speed front, I would have liked to see Microsoft make this change far quicker. Disabling TLS/SSL renegotiation should not be a huge amount of code, and while it has some repercussions, and will impact some applications, as long as the change did not cause instability, there may be some institutions who would want to disable renegotiation lock, stock and barrel in a hurry out of a heightened sense of fear.

I’m usually the first to defend Microsoft’s perceived slowness to patch, on the basis that they do a really good job of testing the fixes, but for this, I have to wonder if Microsoft wasn’t a little over-cautious.

Accuracy

While I have no quibbles with the bulletin, there are a couple of statements in the MSRC blog entry that I would have to disagree with:

IIS 6, IIS 7, IIS 7.5 not affected in default configuration

Customers using Internet Information Services (IIS) 6, 7 or 7.5 are not affected in their default configuration. These versions of IIS do not support client-initiated renegotiation, and will also not perform a server-initiated renegotiation. If there is no renegotiation, the vulnerability does not exist. The only situation in which these versions of the IIS web server are affected is when the server is configured for certificate-based mutual authentication, which is not a common setting.

Well, of course – in the default setting on most Windows systems, IIS is not installed, so it’s not vulnerable.

That’s clearly not what they meant.

Did they mean “the default configuration with IIS installed and turned on, with a certificate installed”?

Clearly, but that’s hardly “the default configuration”. It may not even be the most commonly used configuration for IIS, as many sites escape without needing to use certificates.

Sadly, if I add “and mutual authentication enabled”, we’re only one checkbox away from the “default configuration” to which this article refers, and we’re suddenly into vulnerable territory.

In other words, if you require client / mutual authentication, then the default configuration of IIS that will achieve that is vulnerable, and you have to make a decided change to non-default configuration (the SSLAlwaysNegoClientCert setting), in order to remain non-vulnerable without the 977377 patch.

The other concern I have is over the language in the section “Likelihood of the vulnerability being exploited in general case”, which discusses only the original CSRF-like behaviour exploited under the initial reports of this problem.

There are other ways to exploit this, some of which require a little asinine behaviour on the part of the administrator, and others of which are quite surprisingly efficient. I was particularly struck by the ability to redirect a client, and make it appear that the server is the one doing the redirection.

I think that Eric and Maarten understate the likelihood of exploit – and they do not sufficiently emphasise that the chief reason this won’t be exploited is that it requires a MITM (Man-in-the-middle) attack to have already successfully taken place without being noticed. That’s not trivial or common – although there are numerous viruses and bots that achieve it in a number of ways.

Clarity

It’s a little unclear on first reading the advisory whether this affects just IIS or all TLS/SSL users on the affected system. I’ve asked if this can be addressed, and I’m hoping to see the advisory change in the coming days.

Summary

I’ve rambled on for long enough – the point here is that if you’re worried about SSL / TLS client certificate renegotiation issues that I’ve reported about in posts 1, 2 and 3 of my series, by all means download and try this patch.

Be warned that it may kill behaviour your application relies upon – if that is the case, then sorry, you’ll have to wait until TLS is fixed, and then drag your server and your clients up to date with that fix.

The release of this advisory is by no means the end of the story for this vulnerability – there will eventually be a supported and tested protocol fix, which will probably also be a mere advisory, followed by updates and eventually a gradual move to switch to the new TLS versions that will support this change.

This isn’t a world-busting change, but it should demonstrate adequately that changes to encryption protocols are not something that can happen overnight – or even in a few short months.

A golden rule of performance improvement

The Rule: Performance optimizations are not worth making for anything less than 10% improvement in speed.

Corollary: Performance optimizations must be measured before and after, and changes reverted if they do not cause significant performance improvement.

Converse: If you are pushing back on implementing a feature “because it will make the app unbearably slow”, particularly if that feature is deemed a security requirement, you had better be able to demonstrate that loss in performance, and it had better be significant.
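
For what it’s worth, the measuring doesn’t have to be elaborate. Here’s a minimal sketch of the kind of before-and-after comparison the corollary calls for (RunOldPath and RunNewPath are placeholders for whatever two code paths you’re comparing):

using System;
using System.Diagnostics;

class PerfCheck
{
    // Placeholders for the two implementations being compared.
    static void RunOldPath() { /* existing implementation */ }
    static void RunNewPath() { /* proposed "optimization" */ }

    static TimeSpan Time(Action work, int iterations)
    {
        work(); // warm-up pass, so JIT compilation and cold caches don't skew the numbers
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            work();
        sw.Stop();
        return sw.Elapsed;
    }

    static void Main()
    {
        const int iterations = 1000;
        TimeSpan before = Time(RunOldPath, iterations);
        TimeSpan after = Time(RunNewPath, iterations);

        double improvement = (before.TotalMilliseconds - after.TotalMilliseconds)
                             / before.TotalMilliseconds * 100.0;
        Console.WriteLine("Before: {0:F1} ms  After: {1:F1} ms  Improvement: {2:F1}%",
            before.TotalMilliseconds, after.TotalMilliseconds, improvement);
        // Per the rule above: anything under a 10% improvement gets reverted.
    }
}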

I come about this insight through years of work as a developer, in which I’ve seen far too many mistakes introduced through people either being “clever” or taking shortcuts – and the chief reason given for both of these behaviours is that the developer was “trying to make the program faster” (or leaner, or smaller).

I have also seen developers disable security features, or insist that they shouldn’t implement security measures (SSL is a common example) because they “will make the application slower”.

I have yet to see a project complete a proper implementation of SSL for security that significantly slowed the application. In many cases, the performance testing that was done to ensure that SSL had no significant effect demonstrated that the bottleneck was already somewhere else.

Microsoft TechFest


Last week, I went to Microsoft’s TechFest as part of their “Public Day”. This is the first time MVPs as a group have been invited to this event, and although it’s clear we missed some of the demonstrations that are not public-ready, this is something that I hope can be extended to us in future, even if only to Washington-state MVPs.

For general news links on MS TechFest 2009, you can search news.google.com for “TechFest”. Here’s a couple of samples:

http://www.king5.com/video/index.html?nvid=335707 – I didn’t see these guys there.

http://www.guardian.co.uk/technology/blog/2009/feb/25/microsoft-software – I bumped into this guy.

I also saw Chris Pirillo there from LockerGnome and Chris.Pirillo, but he hasn’t written anything yet. I only mention him because it’s about time that I thanked him for being one of the earliest online writers (they were called “e-Zines” back then, apparently) to mention WFTPD in his column. Sadly, I don’t have a copy to remember what it is that he said :(

Apologies to anyone who expected to reach me by email that day – the usual computers spread around the Microsoft Conference Centre for email and web browsing were missing, possibly because the Press were there, and they’ll steal anything that isn’t nailed down, before coming back with crowbars.

So, here’s some description of the things I saw, ranging from the exciting and relevant to the “why is Microsoft spending money on that?” [Note that this is not meant to be disrespectful of ‘pure research’ – often, today’s “useless meanderings” become tomorrow’s products – WFTPD itself started from a momentary “how hard can it really be?” lapse in my own judgement, followed by a little research and a lot of effort.]

Specification Inference for Security
To improve focus on potential security faults in static analysis tools, this is a toolset whose approach is to divide functions into Sources, Sinks and Sanitizers (although that alliteration is liable to lead to confusion) – Sources generate untrustworthy data from input, Sinks consume data that they trust will fit their expectations, and Sanitizers transform the data along the way, ideally making sure that it goes from untrustworthy to trusted. Thinking in terms of a SQL injection, the Source would be a web server receiving input from a user containing a SQL command, the Sink would be the SQL server, and the Sanitizer would be whatever code packages the input and determines whether to pass it to the SQL server, and what changes to make (such as requiring proper quoting, or using a stored proc or parameterized query). Once these categorizations have been made, the static analysis tool can check that Sanitizers actually do sanitize – rather than having to try and analyse every function for possible sanitization. http://research.microsoft.com/merlin
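
To make the Source / Sanitizer / Sink split concrete – this is plain ADO.NET of my own, not anything from the MSR tool, and the table and column names are invented for the example – here is the sort of code such a tool would want to verify:

using System;
using System.Data.SqlClient;

class CustomerLookup
{
    // Source: 'name' arrives from somewhere untrusted (query string, form field, ...).
    // Sink: the SQL Server instance that executes the command.
    // Sanitizer: the parameterized query - the untrusted value never becomes part of
    // the SQL text, so it cannot change the shape of the statement.
    public static void PrintOrders(SqlConnection conn, string name)
    {
        using (SqlCommand cmd = new SqlCommand(
            "SELECT OrderId, Total FROM Orders WHERE CustomerName = @name", conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader[0], reader[1]);
            }
        }
        // The vulnerable version - "... WHERE CustomerName = '" + name + "'" - is exactly
        // what the analysis wants to flag: Source reaches Sink with no Sanitizer in between.
    }
}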
Concurrency Analysis Platform and Tools
Enhances your test tool set by allowing tests to run with multiple permutations of concurrency. Race conditions are usually caught by users, or in production environments, because the environments cause different threads or processes to run at different speeds – with this toolkit, you get to try out multiple combinations of execution sequence, so that you are more likely to trigger the race condition. Of course, you still have to write tests that consider the prospect of doing more than one thing at a time, and because there are a large number of concurrency permutations, it’s not a turn-key solution, but it does allow you to debug concurrency issues more methodically, and catch those that appear more frequently. http://research.microsoft.com/chess – and this one’s available for download as an add-on to Visual Studio!
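
Purely as an illustration – this is plain .NET threading, not the CHESS API itself – here is the kind of interleaving-dependent bug such a tool is designed to flush out:

using System;
using System.Threading;

class RaceDemo
{
    static int counter;

    static void Increment()
    {
        // counter++ is a read-modify-write, so two threads can lose updates - but
        // only under particular interleavings, which is why this sort of bug often
        // passes in test and shows up in production.
        for (int i = 0; i < 100000; i++)
            counter++;
    }

    static void Main()
    {
        Thread a = new Thread(Increment);
        Thread b = new Thread(Increment);
        a.Start(); b.Start();
        a.Join(); b.Join();

        // Frequently prints something less than the expected 200000.
        Console.WriteLine("counter = {0} (expected 200000)", counter);
        // The fix is Interlocked.Increment(ref counter); a tool that systematically
        // explores thread schedules helps you hit the losing interleavings on demand.
    }
}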
Lightweight Software Transactions for Games
Not just for games, the ORCS platform (Object-based Runtime for Concurrent Systems) makes coding multi-threaded applications easier and more problem-free. http://research.microsoft.com/orcs
Closed-Loop Control Systems for the Data Center
Power consumption monitoring and control allows for servers to be brought online or offline as computing demands change, so that as usage ramps up, more servers are turned on, and as usage declines, servers are turned off. I don’t think this is entirely original.
Algorithms and Cryptography
Cryptographic solutions with leakage. Unfortunately, the lady who came up with this wasn’t on hand to discuss her work, and her husband standing in for her didn’t seem to understand much about it either. The poster claimed an algorithm whereby you could leak some of your key to an attacker without reducing the strength of the key. I’m not sure how this works, or where it differs from having redundant information in the keys, or something like M of N crypto, but maybe it’ll be something that will affect our field in the years to come.
Opinion Search
Full of marketing jargon and too dense for me to penetrate, this is something that we could potentially use in the business side of Expedia, making use of customer opinions to allow search results to match the user’s opinion against the opinions of others with whom they have consistently agreed in the past, and can be expected to do so in the future.
Low-Power Processors in the Data Center
Using Netbook processors for data processing in a parallel environment allows for significant power savings.
Audio Spatialisation and AEC for Teleconferencing
Relying on the rise of computer-phone integration, and the fact that most computers have stereo speakers, this is a system for teleconferencing where different parties are given a different spot in the stereo spatialisation. Makes it much easier to tell who’s talking.
SecondLight
Surface computing taken to another level, literally. The surface on which images are projected is usually a light diffuser, so that the image effectively “stays” on the surface. In this implementation, the surface is rapidly switched between diffuse and transparent, so that you can use a secondary diffuser surface on top, which shows a different image. You have to see a demonstration to understand it – mms://wm.microsoft.com/ms/research/projects/secondlight-cambridge/secondlight.wmv – it’s a little flickery, in real-life too, but the team assured me that it can be made less so.
Commute UX – Dialog System for In-Car Infotainment
Will this stop executives requesting shorter passwords for unlocking their phone while driving? Probably not.
Back-of-Device Touch Input
Anyone using an iPhone or similar touch-based device will be familiar with the issue that your fingers are covering the image you’re trying to manipulate. By putting a sensor panel on the back of the device, you can reduce the size of the display without making it impossible to read while you select.
Augmented Reality
Combining GPS location with stock footage of the place you’re in, this is all about placing extra information into a view (such as a cell-phone with a video camera, or maybe eventually a heads-up display in glasses / goggles) of the world around you, by recognising where you are. Can be used for games, directions, advertising, city guides, or post-it notes without the paper.
Recognizing characters written in the Air
Entertaining just to watch people dragging an apple around to make letters on a screen in front of them. Probably more useful in the mode where the lid of an OHP pen is the “bright spot of strong solid colour” being tracked in mid-air.
Colour-structured Image Search
Draw a rough colour picture of the image you want to see, and get a page of search results from around the web. The demonstrations consisted of drawing pictures of flowers, or flags, or a sunset. I foresee widespread abuse once deployed, although it will mean that people who usually draw on bathroom walls will be moving their talents online.

MVP Summit 2009 is here!


I snapped this picture last week at Microsoft Research’s TechFest event.

Microsoft always makes the visiting MVPs feel welcome at Global Summit time, when all MVP awardees are invited to visit Microsoft’s campus, and engage in face-to-face conversations with various Microsoft Product Groups about the feedback they’re seeing from the users they talk to in their various forums, whether that’s Usenet newsgroups, web forums, user groups, or book and magazine readers.

This year, in large part thanks to the efforts of one of the other Security MVPs, Dana Epps, we have a fantastic schedule of in-depth sessions on identity frameworks, threat modeling, Microsoft’s internal security, and a number of other topics that I should perhaps keep quiet about.

The other benefit to me, as an MVP, from these sessions is that I get to network with other MVPs – all of whom are intelligent, driven individuals with expertise in a wide variety of fields, not just my own area of Enterprise Security.

Already I’ve spoken to a number of people in conversations that I intend to continue long after the Summit is over. I’ve made some new friends, met plenty of old friends, and expanded and strengthened existing social connections.

It’s a little sad that the worsening economic climate has caused a number of MVPs from outside the US to not attend this year’s Summit, and even some from inside the country. But it does appear that the MVP programme is still strong, as around 1500 MVPs from around the world are in attendance.

For those wondering about the swag bag, we got a cloth bag, stickers, a pen, and a water bottle. The shirts will be arriving on Wednesday (thank you, US Customs!). The benefit is more in the programme of technical sessions than the bag, unlike some technical conferences, where your $2500 entrance fee gets you a rather spectacular bag of ‘freebies’ and a number of sessions scheduled such that all the ones you want to see are in the same time slot.

I have to say, I love the stickers. Being a part of the MVP programme is a really nice thing that Microsoft does to say ‘thank you’ to people who are assisting Microsoft’s customers in newsgroups, user groups, etc, and who would continue to do so anyway, even if Microsoft ended the MVP programme. As such, I think it’s an excellent recognition, and I’m proud of the fact that I was awarded – so I like to show it off, mainly by plastering stickers on my various technology items like laptops and PDAs.
