Windows Server 2008 – Tales from the Crypto


UDP and DTLS not a performance improvement.

Saw this update in my Windows Update list recently:

As it stands right now, this is what it says (in part):


OK, so I started off feeling good about this – what’s not to like about the idea that DTLS, a security layer for UDP that works roughly akin to TLS / SSL for TCP, now can be made a part of Windows?

Sure, you could say “what about downstream versions”, but then again, there’s a point where a developer should say “upgrading has its privileges”. I don’t support Windows 3.1 any more, and I don’t feel bad about that.

No, the part I dislike is this one:

Note DTLS provides TLS functionalities that are based on the User Datagram Protocol (UDP) protocol. Because TLS is based on the Transmission Control Protocol (TCP) protocol, DTLS performs better than TLS.


That’s just plain wrong. Actually, I’m not even sure it qualifies as wrong, and it’s quite frankly the sort of mis-statement and outright guff that made me start responding to networking posts in the first place, and which propelled me in the direction of eventually becoming an MVP.


Yes, I was the nerdy guy complaining that there were already too many awful networking applications, and that promulgating stupid myths like “UDP performs better than TCP” or “the Nagle algorithm is slowing your app down, just disable it” causes there to be more of the same.

But I think that’s really the point – you actually do want nerds of that calibre writing your network applications, because network programming is not easy – it’s actually hard. As I have put it on a number of occasions, when you’re writing a program that works over a network, you’re only writing one half of the application (if that). The other half is written by someone else – and that person may have read a different RFC (or a different version of the protocol design), may have had a different interpretation of ambiguous (or even completely clear) sections, or could even be out to destroy your program, your data, your company, and anyone who ever trusted your application.

Surviving in those circumstances requires an understanding of the purity of good network code.

But surely UDP is faster?

Bicycle messengers are faster than the postal service, too. Fast isn’t always what you’re looking for. In the case comparing UDP and TCP, if it was just a matter of “UDP is faster than TCP”, all the world’s web sites would be running on some protocol other than HTTP, because HTTP is rooted in TCP. Why don’t they?

Because UDP loses packets, duplicates packets and, worst of all, re-orders packets. And when your web-delivery-over-UDP protocol retransmits those lost packets, correctly orders packets, drops repeated packets, and thereby gives you the full web experience without glitches, it has re-written large chunks of the TCP stack over UDP – and done so with worse performance.

Don’t get me wrong – UDP is useful in and of itself, just not for the same tasks TCP is useful for. UDP is great for streaming audio and video, because you’d rather drop frames or snippets of sound than wait for them to arrive later (as they would do with TCP requesting retransmission, say). If you can afford to lose a few packets here and there in the interest of timely delivery of those packets that do get through, your application protocol is ideally suited to UDP. If it’s more important to occasionally wait a little in order to get the whole stream, TCP will outperform UDP every time.
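To make concrete just how much bookkeeping TCP does on your behalf, here is a toy sketch (in Python, purely for illustration – the function and packet format are hypothetical) of the receiver-side reassembly alone, assuming each packet carries a sequence number. A real reliable layer would also need acknowledgements, retransmission timers and flow control, which is exactly the "re-writing TCP over UDP" trap described above.

```python
import random

def reassemble(packets):
    """Deliver payloads in order, dropping duplicates, from a list of
    (sequence_number, payload) tuples that may arrive reordered or repeated.
    This is only a sliver of what TCP does for you."""
    delivered = []
    buffered = {}          # packets that arrived ahead of their turn
    expected = 0           # next sequence number we can deliver
    for seq, payload in packets:
        if seq < expected or seq in buffered:
            continue       # duplicate: drop it
        buffered[seq] = payload
        while expected in buffered:   # deliver any run that is now contiguous
            delivered.append(buffered.pop(expected))
            expected += 1
    return delivered

# Simulate the network duplicating and reordering four packets.
wire = [(0, b"GET "), (1, b"/ HT"), (2, b"TP/1"), (3, b".1\r\n")]
wire = wire + [wire[2]]            # duplicate one packet in transit
random.shuffle(wire)               # and reorder the lot
assert b"".join(reassemble(wire)) == b"GET / HTTP/1.1\r\n"
```

Even this much code ignores the hard parts – deciding when a packet is lost, and asking for it again – and those are where the performance cost of reimplementing TCP really lives.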

In summary…

Never choose UDP over TCP because you heard it goes faster.

Choose UDP over TCP because you’d rather have packets dropped at random by the network layer than have them arrive any later than the absolute fastest they can get there.

Choose TCP over UDP because you’d rather have all the packets that were sent, in the order that they were sent, than get most / many / some of them earlier.

And whether you use TCP or UDP, you can now add TLS-style security protection.

I await the arrival of encrypted UDP traffic with some interest.

2ndAuth released for Windows 7, Windows Server 2008 R2

I’ve given some hints at what we’ve been working on lately, by my choice of article topics.

Credential Providers have been my headache for a couple of months now, not least of which is because Microsoft haven’t quite provided all the working code they ought to have done for Windows Vista. Windows 7, now that works just fine. So that’s what we’re supporting – Windows 7 and Windows Server 2008 R2 (essentially Windows 7 Server) – with our new release of 2ndAuth.

[We’re still supporting 2ndAuth for Windows Server 2003 / Windows XP / Windows 2000, and will be releasing patches, new features and updates as necessary]

To whet your appetite, here’s a screen-shot of 2ndAuth at work on a Windows 7 system:


Notice that when 2ndAuth detects that you’ve selected to log on to a shared user (by a confusing coincidence, this one has a first name of “Shared”, and a last name of “User”), it prompts you for a second authentication (hence the name), which requires that the actual user enter another set of credentials (these should be their own credentials, and shared users cannot vouch for other shared users). This is then written to the Windows Event Log so that you can check who has been accessing which shared accounts and when.

Unauthenticated / failed attempts are also logged, but it’s difficult to say how useful it is to read that, since the failure could be with an invalid user name as much as an invalid password.

Terminal Services / Remote Desktop Connections are supported, too, as well as locking and unlocking the workstation (e.g. handing off to another user part way through a procedure).

The goal here is to acknowledge that sometimes you can’t help using a shared account, and the best thing to do is to provide a mechanism whereby you can discover who is responsible for the use of that account.

I’ll be adding a download link to our products page for 2ndAuth in a little while, but in the meantime, please feel free to ask me any questions about this service – either in the blog comments here, or by email to

Command Line MD5 hash

A colleague asked me the other day what the command-line tool was for calculating MD5 hashes in Windows.

In a moment of sanity, I told him that the usual tool was FCIV, the Microsoft File Checksum Integrity Verifier, but that you had to download it.

Then when he started making fun, and saying that Linux had a command-line tool built in, I went more towards insanity, and suggested the following for him:

[BitConverter]::ToString((new-object Security.Cryptography.MD5CryptoServiceProvider).ComputeHash((new-object IO.FileInfo("c:\windows\explorer.exe")).OpenRead())).Replace("-","").ToLower()

Sure, it’s PowerShell, but that’s been a part of Windows for some while now.

[If you really want to use the example, note that it calculates the hash for the file c:\windows\explorer.exe – change the string to change the file.]

More useful is to create a function:

function MD5 ($a) {[BitConverter]::ToString((new-object Security.Cryptography.MD5CryptoServiceProvider).ComputeHash((new-object IO.FileInfo($a)).OpenRead())).Replace("-","").ToLower();}

Then you can call this with MD5(“c:\windows\calc.exe”) to get a hash of the Calculator.
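As an aside, for readers who happen to have Python around rather than PowerShell, the equivalent computation with the standard hashlib module looks like this (a sketch, reading the file in chunks so large files need not fit in memory – much as ComputeHash streams from the open file above):

```python
import hashlib

def md5_file(path, chunk_size=65536):
    """Return the lower-case hex MD5 digest of the file at 'path',
    streamed through the hash in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The point stands either way: the tool is a few lines in whatever scripting environment the operating system gives you.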

The meta-lesson

But this does draw out a distinction between operating systems – Linux has an MD5 hash calculator because you are expected to calculate MD5 hashes of files manually on a regular basis. Windows doesn’t have an MD5 hash calculator, because that’s generally done for you. Windows Update will check hashes on files it downloads before it applies them, for instance.

You can learn a lot about an operating system by looking at what is in its default deployment, and what is absent – and why it’s absent (which you can deduce from finding out what you’re supposed to do instead).

The power of stupidity

I just spent a couple of days trying to figure out why logon-related code that worked in Windows XP failed in Windows Vista and Windows 7.

hToken = NULL;
if ( LogonUser( g_sUser, bIsUPN ? NULL : g_sDomain, g_sPass, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, &hToken ) )
{
    // Re-populate the g_sUser and g_sDomain values from the token!
    TOKEN_USER tUser;
    DWORD nLength;
    // Get the user / domain information from the token.
    if (GetTokenInformation(hToken, TokenUser, &tUser, sizeof tUser, &nLength))
    {
        SID_NAME_USE eUse;
        DWORD dwUserSize = _countof(g_sUser);
        DWORD dwDomainSize = _countof(g_sDomain);
        // Convert the SID in the token back to canonical user & domain names.
        LookupAccountSid(NULL, tUser.User.Sid,
            g_sUser, &dwUserSize,
            g_sDomain, &dwDomainSize,
            &eUse);
    }
}

[Note that some error handling has been removed for clarity and brevity.]

So what was going wrong? This totally used to work – it’s designed to validate the username and password, as well as to provide me with the canonical form of the user name.

I had a look through the APIs, and sure enough, there was a more up-to-date version of one of them – LogonUser has a colleague, LogonUserEx, and that function returns the Logon SID as well as verifying the logon works. Cool, I thought, I can get rid of GetTokenInformation, which seems to be failing anyway, and use LogonUserEx.

No dice.

LogonUserEx claimed to be working, and yet LookupAccountSid returned an error, signifying ERROR_NONE_MAPPED (1332 decimal, 0x534 in hex, “No mapping between account names and security IDs was done.”)

A little searching on ERROR_NONE_MAPPED led to a blog post by David LeBlanc, indicating that logon SIDs will cause this error, because they are entirely ephemeral SIDs, used mainly to protect securable items that should not be available to processes running outside of this logon session (and, by the same measure, allowing access to be provided, where appropriate, across different processes in the same logon session).

And then I realised, after a few hours of experimentation, that the answer was staring me in the face, in the documentation – LogonUserEx returns the Logon SID in ppLogonSid, and a Logon SID has no account name associated with it. There is only the SID.

So, that explained the failure of LookupAccountSid with LogonUserEx, and I returned to using LogonUser – which left me with the conundrum of what was failing there.

It often turns out to be the simplest of things.

TOKEN_USER is a structure containing a pointer to the SID. As such, GetTokenInformation has to put the SID somewhere. Cleverly, it asks you to build your TOKEN_USER structure a little bit long, and places the SID at the end of the structure, before setting the pointer in the structure to point to the SID. So, sizeof(TOKEN_USER) is not big enough to pass to a GetTokenInformation call requesting a TokenUser.

The big question is, not why this failed, but why it worked ever at all! I’m not too fussed in finding that answer, because I’ve now changed my code to do it properly, and everything still works – on Windows Vista and XP. But I do feel stupid that debugging this code took me part of Sunday and a little of Monday evening.

Now the code looks like this (again, some error handling has been removed for brevity – don’t skimp in your production code!)

hToken = NULL;
if ( LogonUser( g_sUser, bIsUPN ? NULL : g_sDomain, g_sPass, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, &hToken ) )
{
    SecureZeroMemory(g_sPass, sizeof g_sPass);
    TOKEN_USER *ptUser;
    DWORD nLength;
    // First call: ask how big a buffer the TOKEN_USER (plus trailing SID) needs.
    GetTokenInformation(hToken, TokenUser, NULL, 0, &nLength);
    ptUser = (TOKEN_USER*)new char[nLength];
    // Second call: get the user / domain information from the token.
    if (GetTokenInformation(hToken, TokenUser, ptUser, nLength, &nLength))
    {
        SID_NAME_USE eUse;
        DWORD dwUserSize = _countof(g_sUser);
        DWORD dwDomainSize = _countof(g_sDomain);
        // Convert the SID in the token back to canonical user & domain names.
        LookupAccountSid(NULL, ptUser->User.Sid,
            g_sUser, &dwUserSize,
            g_sDomain, &dwDomainSize,
            &eUse);
    }
    delete [] (char *)ptUser;
}

Note that the fix is to request the length of the TOKEN_USER structure with an initial call to GetTokenInformation, followed by a second call to fill it in.

World IPv6 Day–some likely effects

Are you ignoring IPv6 for the moment, knowing it’s not going to affect you any time soon? I have news for you – you will be significantly affected in the next two months.

It seems that a large fraction of the world is really rather dismissive about the coming of IPv6, which is, after all, the best IPv.

But there are people who are intent on providing a move to the new world, and they’ve geared up to provide a “World IPv6 Day” on which they will be enabling IPv6 on their main sites. (There is an ever-increasing list of participants)

So what is going to happen when some web sites – some big web sites – turn on IPv6 for a day this June? And what will happen when IPv6 is turned on permanently at those sites?

For individuals – “Consumers”

Individual users are probably thinking “someone will make it all work for me” – and some of that is likely true, if you have someone managing your network for you. Your Internet Service Provider will eventually do what they can to provide IPv6 service to your home, and your employer’s IT department is probably thinking in some terms about what to do when they feel like it’s time to deploy IPv6 to the company. But most home routers are not currently able to provide native IPv6 service.

If your cable modem, DSL router, or other entry device is rented to you by the ISP, then you probably have nothing to worry about – it will eventually be replaced with one that supports IPv6, when the ISP is ready to offer IPv6 service.

If you have bought your own entry device, or other routers (such as a wireless router), you will have to replace it to support IPv6. Don’t run out and get a new router yet – there are no home routers on the market that currently support IPv6 fully – or even enough to consider upgrading for that functionality. Those of us using IPv6 at home are generally using custom software that we have installed, not something the average consumer wants to do.

This means you are stuck on IPv4 for the foreseeable future, although your computer is most likely capable of using IPv6 when connected to a network that supports it. But you will still be affected – see the section below, “For Everyone” for more.
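If you want to check whether your own machine's stack is at least IPv6-capable, here is a quick probe (a Python sketch, offered only as an illustration). Note the important caveat in the comment: this tests whether the operating system can create an IPv6 socket, not whether IPv6 traffic can actually reach the Internet from your network.

```python
import socket

def ipv6_stack_present():
    """True if the OS will let us create an IPv6 socket.
    This says nothing about whether you have working IPv6
    connectivity to the outside world - your router is still
    the likely weak link."""
    if not socket.has_ipv6:
        return False
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.close()
        return True
    except OSError:
        return False
```

On most current desktops this returns True, which is exactly the point of the paragraph above: the computer is ready, even where the network in front of it is not.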

For businesses – management

If you’re not already engaged in some form of IPv6 project, you really should be.

If your IT department are telling you that IPv4 addresses have not run out, ask them “then why are we flailing around behind a NAT, and having to write or purchase software that specifically knows how to make its way out and back through a NAT?”

The fact is, IPv4 addresses ran out years ago, and we’re really only in this last couple of years in a situation where we can deploy IPv6 to fix the problem. Operating Systems for desktops and laptops now support IPv6, and usually have it enabled by default; business-class routers and switches are available with IPv6 support built in, and firmware for some not-so-new devices in that class is available to provide IPv6 support.

More than that, though, you have to make sure that you have staff on-hand who are trained to understand IPv6, because training your staff may be the investment that takes the longest to get right. When you read the section “For Everyone” below, consider what the impact will be to your support centres and to your customers when something breaks. Will you have to explain away broken links, images or even broken pages? [Any site that has previously seen broken pages because of inability to download ads should know how this comes about]

For businesses – IT

If you’re an IT department, you probably have some people on staff who are into new technology – the more they can get, the better. Quite frankly, everyone in an IT department should have something of that feel, or they’re in the wrong team. So, when you get management approval to start down the IPv6 road, it should be a simple matter of asking “who wants it?” and letting people sign up to work on learning the new technology and finding the solutions. Ideally, when your management asks “who’s the IPv6 guy”, you’ll be able to point him out right away.

You should obviously consider a staged roll-out of IPv6 technology, starting with internal networking, to make sure you have an infrastructure that supports it, and only later considering allowing incoming IPv6 to connect to your web site, or to other externally-facing systems.

As a part of enabling routing, make sure that you match, in your IPv6 environment, the protections you already have for IPv4. Do not try to match feature-for-feature: NAPT, for instance, provides some accidental / incidental security protection, but is essentially unavailable in IPv6. Match protection-for-protection instead – an IPv4 NAT’s security protection is that it acts as a firewall with no holes punched in it, so its IPv6 equivalent is a default-deny firewall.

Consider grouping servers into subnets or address ranges based on their use, so that you can configure your firewall using contiguous ranges, rather than individual address assignments. This will make your IPv6 firewall fast – perhaps faster than when operating on its IPv4 rules – and simple.
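To illustrate the grouping idea (the prefixes below are from the 2001:db8::/32 documentation range, and the layout is purely hypothetical), Python's standard ipaddress module can confirm that a set of servers collapses into a single contiguous firewall rule:

```python
import ipaddress

# Hypothetical web-tier servers, deliberately assigned out of one small
# range so the firewall needs a single rule rather than one per host.
servers = [ipaddress.ip_address("2001:db8:0:1::%x" % i) for i in range(1, 9)]

# The single contiguous range a firewall rule would cover.
web_tier = ipaddress.ip_network("2001:db8:0:1::/120")
assert all(s in web_tier for s in servers)

# Approaching from the other direction: collapse the individual /128s
# and see how few contiguous blocks they merge into.
nets = [ipaddress.ip_network(str(s) + "/128") for s in servers]
collapsed = list(ipaddress.collapse_addresses(nets))
assert len(collapsed) < len(servers)
```

One range check per packet against a contiguous block is cheap; a ruleset of scattered individual addresses is neither fast nor simple, which is the argument being made above.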

For everyone

When external sites turn on IPv6, and start resolving their site names to IPv6 addresses as well as IPv4, there will be some users who have poorly-configured IPv6 installations. Their DNS name servers will say “here’s an IPv6 address”, their operating system and web browser will say “I understand IPv6, so let’s connect to that address!”, and some portion of their network will say “huh? What is this, the future or something? I’m still wearing shoulder-pads and leg-warmers and watching Dynasty, because it’s been the 1980s for the last several decades!”

What that user will see is that the IPv6-capable web site just dropped off the Internet. At best, it may simply cause a long delay (several seconds) in reaching the site, as the browser tries – and fails – to connect to IPv6, and then switches to IPv4. At worst, it will cause the big red X to appear, and sites to fail to load completely, as the browser (or other client software) gives up.
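The fallback behaviour can be modelled without touching a real network. The sketch below is hypothetical Python, not how any particular browser is written (modern clients use smarter "happy eyeballs" logic that races the two address families), but it shows why a broken IPv6 path turns into user-visible delay: every failed attempt in front of the working address costs one full connection timeout.

```python
def connect_with_fallback(addresses, try_connect):
    """Try each (record_type, address) in DNS-returned order; return the
    first address that connects, plus how many attempts (and therefore
    how many timeouts) the user sat through first."""
    attempts = 0
    for record_type, addr in addresses:
        attempts += 1
        if try_connect(record_type, addr):
            return addr, attempts
    return None, attempts

# DNS returned an IPv6 (AAAA) address first, then IPv4 (A) --
# but this user's IPv6 path is broken, so only IPv4 succeeds.
addresses = [("AAAA", "2001:db8::80"), ("A", "192.0.2.80")]
broken_v6 = lambda record_type, addr: record_type == "A"

addr, attempts = connect_with_fallback(addresses, broken_v6)
assert addr == "192.0.2.80" and attempts == 2   # one full timeout wasted on IPv6
```

If the client gives up instead of falling back – or if every sub-resource on the page repeats the same stall – you get the big red X rather than a slow page.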

You can’t quit the game, either

Fine, so maybe all this means is that those sites who take part in World IPv6 Day will drop off the Internet for a day, to some of their users, and then the next day all will be just perfect.

Not quite.

You see, with “Web 2.0”, everything’s mashed up and interconnected. Google’s everywhere. So are some of the other participants in World IPv6 Day. Each one of those sites being unreachable could affect your favourite mashups, whether you are consumer or service provider. And what is an advertising-laden website if not a mashup of its advertising and its content?

Businesses – what if your adverts fail to load? What about that mapping site you use? Is your technical support ready for an estimated 0.1% of your customers calling in with failures on your site?

Consumers – are you ready to take these errors as a sign that you need to fix your network, or to bug your ISP, or are you going to insist, wrongly, that the problem is with the web sites participating in World IPv6 Day? At least, will you accept that these errors are a necessary part of learning how to move to IPv6?

ISPs – even if you have no plans for IPv6, are you ready for the technical support requests from people who have errors connecting to an IPv6-supporting site?

Quitting, or refusing to take part in the move to IPv6, is not an option. IPv6 will roll out. World IPv6 Day is only the FIRST of many shake-outs that will happen, as sites increasingly add support for IPv6 to their existing IPv4 lineup.

For a preview of what will happen to your machine, try connecting to a system that supports IPv6 and IPv4. The usual example is – it displays a picture of a turtle. The turtle dances for IPv6 users, and sits there doing nothing for IPv4 users (although your browser may choose to display the IPv4 version as its default even if you support IPv6).

If you are one of those rare individuals in an IPv6-capable network island that is unreachable by the IPv6 Internet, you will see an error.

Sadly, with new organisations joining World IPv6 Day every day, you can’t really predict what exactly will break – but you can predict how some of it will break, and train your staff to handle this, whether it is by deploying changes, or simply handling support calls.

I’d love to know what effects you’ve anticipated will come on World IPv6 Day, and what work you’ve done to mitigate these issues.

Starting to build your own Credential Provider

If you’re starting to work on a Credential Provider (CredProv or CP, for short) for Windows Vista, Windows Server 2008, Windows Server 2008 R2 or Windows 7, there are a few steps I would strongly recommend you take, because it will make life easier for you.

0. Read Dan Griffin’s article in MSDN Magazine.

The article, "Create Custom Login Experiences With Credential Providers For Windows Vista" by Dan Griffin in January 2007’s MSDN Magazine on Credential Providers is a truly excellent source of information, gleaned largely by the same exhaustive trial and error effort that you will be engaging in with your own CP.

0.1 Read it again.

0.2 And again, and again and again.

As you work on your CP, you will keep running into questions and new insights as to what it is that Dan was telling you in that article.

Keep a printed copy next to you when developing your CP, so that you can keep looking back to it.

If you have met Dan and asked his permission, keep him on speed-dial.

1. Test your Credential Provider in a Virtual PC environment.

You will screw something up, and when you do, the logon screen will most likely cycle over and over and over (what, Microsoft couldn’t provide a “this Credential Provider has failed eighteen times in a row and will be temporarily disabled” feature?), preventing you from logging back on to change out your broken CP. At this point, you really want to revert back to a previous working session.

To my mind, the easiest way to do this is to create one Virtual PC environment with a base Windows 7 system, patched up to current levels, and with a few test users installed. You can burn an MSDN licence up on this test installation, if you like, but quite frankly, I’m likely to want to refresh it from scratch every so often anyway, so the activation timeout is no big deal.

Once you have created this base image, create another virtual machine, based off the virtual hard disk (VHD) of the base image, and be sure to enable undo disks. This way, when things go wrong, you can shut down this second virtual machine, telling Virtual PC to discard the Undo Disk data, and you will be able to restart the machine immediately and continue to work on it.

2. Enable the kernel debugger against your VM.

This is a little tricky.

2.1 First, edit the settings on your VM.

Enable COM1 to point to a Named Pipe, such as “\\.\pipe\credprov”:


2.2 Now, enable kernel debugging on the VM itself

Log on to the VM, and use the bcdedit tool, from an Administrator Command Prompt to change the debugging option in the boot database. You can go the long way around, reading Microsoft’s instructions on how to do this, or you can simply use the following two commands:

bcdedit /dbgsettings serial debugport:1

bcdedit /debug {current} on


Notice that Microsoft suggests creating a separate environment for debugging on and off, but I don’t see that as being terribly useful. I will always be debugging this test environment, and it really doesn’t slow me down that much. You can always use “bcdedit /debug {current} off” to turn debugging off later.

This setting will take effect at the next reboot of the VM, but don’t reboot yet.

2.3 Enable the Debug Output Filter so OutputDebugString works.

Windows Vista and later don’t output debug messages to the kernel debugger by default. Those messages are filtered. You can spend a lot of time trying to figure out why you are staring at a blank screen when you have filled your code with OutputDebugString and/or TRACE calls. Or you can change the registry entry that controls the Output Debug Filter:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Debug Print Filter\DEFAULT

Create the “Debug Print Filter” key, if it isn’t there, and then under it create the DEFAULT value as a DWORD, set to 8.


2.4 Save these settings

Since you’ll want these settings to come back after a restart, you’ll want to commit them to the VHD. Easily done, but takes some time. Shut down the VM, and when you are prompted what you want to do, select that you wish to commit changes to the virtual hard disk.


Expect this to take several long minutes. While you do that, go read Dan’s article again.

2.5 Create a shortcut to the debugger

I use WinDBG (is that pronounced “Windbag”?), and the shortcut I use points to:

"C:\Program Files\Debugging Tools for Windows (x64)\windbg.exe" -k com:port=\\.\pipe\credprov,baud=115200,pipe,reconnect,resets=10

Remember to start the VM before starting the WinDBG shortcut, so that the VM has a pipe for WinDbg to connect to.

3. Start from the CredProv samples

Play around with the credential provider sample, or samples, that are closest to your eventual design goal, and add features to move them towards your desired end-state, rather than building your own from scratch.

Don’t just play with the one sample – looking at, or testing, the other samples may give you a little more insight that you didn’t get from the sample you’re working with.

3.1 Build often, and test frequently

Random errors and occasional misunderstandings (“gee, I didn’t realise you can’t call SetFieldString from GetStringValue”) will cause you to crash often. A crash in your CP means an infinite loop, and some inventive use of Anglo-Saxon.

Building often, testing frequently, and backing out disastrous changes (use version control if you have it!) will lead to a better process.

3.2 Later, build your own CP

Once you have a good understanding of the Credential Provider and its mysterious ways, you may decide to throw out Microsoft’s code and build your own from scratch. Keep comparing against your modified sample to see why it isn’t working.

3.3 Before deployment, change the GUID!

The GUIDs used by the sample code are well-known, and will tie in some systems to other, more shoddy, developers’ versions of those samples. If you forget to change the GUID on your code, you will have a CP-fight.

4. Go back to Dan’s article every time you reach a bottleneck

Occasionally a twist of phrase, or a reinterpretation of a paragraph is all it takes to wring some more useful knowledge out of this article. Don’t forget to use the online help Microsoft provides, as well as searching the MSDN, but remember that this is not a very frequently-trod path. It may be that you are doing something the credential provider architects didn’t consider. In fact, it’s highly likely.

5. Stop mailing

Nobody monitors that email address any more, and there seems to be something of a black hole associated with questions related to Credential Providers in general. It’s as if nobody really truly understands them. A few of the MVPs (particularly Dan Griffin, Dana Epp, and perhaps myself) have a good understanding, so read their blogs, and perhaps post to the Microsoft Forums, if you can manage to do so.

6. Enumerate, and test, the scenarios your customers might run into

  • domain-joined and non-domain
  • administrator, non-administrator, guest
  • with and without user names being supplied (Secpol.msc –> Local Policies –> Security Options –> Interactive Logon: Do not display last user name)
  • default domain, other domain, local accounts
  • logon, switch user, unlock workstation, access from Remote Desktop Connection / MSTSC (as we old-timers call it)
  • change password
  • If you’re of a mind, test the credential user interface mode, too.

Comcast aims for the future

I’m visiting the in-laws in Texas this weekend, and I use the SSTP VPN in Windows Server 2008 R2 to connect home (my client is Windows 7, but it works just as well with Vista). Never had many problems with it up until this weekend.

Apparently, on Friday, we had a power cut back at the house, and our network connectivity is still not happening. I’ve asked the house-sitter to restart the servers and routers where possible, but it’s still not there.

So I went online to Comcast, to track down whether they were aware of any local outage. Sadly not, so we’ll just have to wait until I get home to troubleshoot this issue.

What I did see at Comcast, though, got me really excited:

Comcast is looking for users to test IPv6 connectivity!

Anyone who talks to me about networking knows I can’t wait for the world to move to IPv6, for a number of reasons, among which are the following:

  • Larger address space – from 2^32 to 2^128. Ridiculously large space.
  • Home assignment of 64 bits to give a ridiculously large address space to each service recipient.
  • Multicast support by default. Also, IPsec.
  • Everyone’s a first-class Internet citizen – no more NAT.
  • FTP works properly over IPv6 without requiring an ALG.
  • Free access to all kinds of IPv6-only resources.
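The first two bullet points are worth making concrete – the numbers are so large that they stop feeling real. A couple of lines of Python arithmetic (using the standard ipaddress module, with a documentation-range prefix as the example):

```python
import ipaddress

# The whole IPv4 Internet versus the whole IPv6 Internet.
v4_total = 2 ** 32
v6_total = 2 ** 128
assert v6_total // v4_total == 2 ** 96   # 2^96 IPv6 addresses per IPv4 address

# A single 64-bit customer assignment:
home = ipaddress.ip_network("2001:db8:1234:5678::/64")
assert home.num_addresses == 2 ** 64     # the entire IPv4 Internet, squared
```

That last comment is exact: one home /64 holds (2^32)^2 addresses – as many as you would get by giving every IPv4 address its own copy of the whole IPv4 Internet.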

So I can’t but be excited that my local ISP, Comcast, is looking to test IPv6 support. I only hope that it’ll work well with the router we have (and the router we plan to buy, to get into the Wireless-N range). Last time I was testing IPv6 connectivity, it turned out that our router was not forwarding the GRE tunneling protocol that was used by the 6-in-4 protocol used by Hurricane Electric’s Tunnel Broker.

Who knows what other connectivity issues we’re likely to see with whatever protocol(s) Comcast is going to expect our routers and servers to support? I can’t wait to find out

TLS Renegotiation attack – Microsoft workaround/patch

Hidden by the smoke and noise of thirteen (13! count them!) security bulletins, with updates for 26 vulnerabilities and a further 4 third-party ActiveX Killbits (software that other companies have asked Microsoft to kill because of security flaws), we find the following, a mere security advisory:

Microsoft Security Advisory (977377): Vulnerability in TLS/SSL Could Allow Spoofing

It’s been a long time coming, this workaround – which disables TLS / SSL renegotiation in Windows, not just IIS.

Disabling renegotiation in IIS is pretty easy – you simply disable client certificates or mutual authentication on the web server. This patch gives you the ability to disable renegotiation system-wide, even in the case where the renegotiation you’re disabling is on the client side. I can’t imagine for the moment why you might need that, but when deploying fixes for symmetrical behaviour, it’s best to control it using switches that work in either direction.

The long-term fix is yet to arrive – and that’s the creation and implementation of a new renegotiation method that takes into account the traffic that has gone on before.

To my mind, even this is a bit of a concession to bad design in HTTPS, which creates a “TOC/TOU” (time-of-check / time-of-use) vulnerability by not recognising that correct use of TLS/SSL requires authentication first and resource request second, rather than the other way around. But that’s a debate that has enough clever adherents on both sides to render any argument futile.
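The shape of the attack itself is easy to model. The following is a toy simulation in Python, not real TLS: it captures only the core flaw, which is that the server treats plaintext received before and after renegotiation as one continuous application stream, so an attacker’s prefix and the victim’s authenticated request get spliced together.

```python
def server_sees(attacker_prefix, victim_request):
    """Model of the flaw: bytes from the attacker's original session and
    from the victim's renegotiated session arrive as one HTTP stream."""
    return attacker_prefix + victim_request

# The attacker sends the start of a request, ending mid-header with no
# newline, so the victim's own request line is swallowed as a header
# value -- while the victim's authenticated cookie still applies.
prefix = b"GET /account/transfer?to=attacker HTTP/1.1\r\nX-Ignore-This: "
victim = b"GET /account/balance HTTP/1.1\r\nCookie: session=secret\r\n\r\n"

stream = server_sees(prefix, victim)
first_line = stream.split(b"\r\n", 1)[0]
assert first_line == b"GET /account/transfer?to=attacker HTTP/1.1"
assert b"Cookie: session=secret" in stream   # attacker's request, victim's credentials
```

The long-term fix mentioned above works precisely by binding the renegotiated session to the traffic that preceded it, so the two halves of the stream can no longer be spliced by a third party.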

Suffice it to say that this can be fixed most easily by tightening up renegotiation at the TLS layer, and so that’s where it will be fixed.

Should I apply this patch to my servers?

I’ll fall back to my standard answer to all questions: it depends.

If your servers do not use client auth / mutual auth, you don’t need this patch. Your server simply isn’t going to accept a renegotiation request.

If your servers do use client authentication / mutual authentication, you can either apply this patch, or you can set the earlier available SSLAlwaysNegoClientCert setting to require client authentication to occur on initial connection to the web server.

One or other of these methods – the patch, or the SSLAlwaysNegoClientCert setting – will work for your application, unless your application strictly requires renegotiation in order to perform client auth. In that case, go change your application, and point its developers to documentation of the attack, so that they can see the extent of the problem.

Be sure to read the accompanying KB article to find out not only how to turn on or off the feature to disable renegotiation, but also to see which apps are, or may be, affected adversely by this change – to date, DirectAccess, Exchange ActiveSync, IIS and IE.

How was Microsoft’s response?


I would have to say that, on the speed front, I would have liked to see Microsoft make this change far more quickly. Disabling TLS/SSL renegotiation should not be a huge amount of code, and while it has some repercussions and will impact some applications, as long as the change did not cause instability, there may be some institutions who would want to disable renegotiation lock, stock and barrel in a hurry, out of a heightened sense of fear.

I’m usually the first to defend Microsoft’s perceived slowness to patch, on the basis that they do a really good job of testing the fixes, but for this, I have to wonder if Microsoft wasn’t a little over-cautious.


While I have no quibbles with the bulletin, there are a couple of statements in the MSRC blog entry that I would have to disagree with:

IIS 6, IIS 7, IIS 7.5 not affected in default configuration

Customers using Internet Information Services (IIS) 6, 7 or 7.5 are not affected in their default configuration. These versions of IIS do not support client-initiated renegotiation, and will also not perform a server-initiated renegotiation. If there is no renegotiation, the vulnerability does not exist. The only situation in which these versions of the IIS web server are affected is when the server is configured for certificate-based mutual authentication, which is not a common setting.

Well, of course – in the default setting on most Windows systems, IIS is not installed, so it’s not vulnerable.

That’s clearly not what they meant.

Did they mean “the default configuration with IIS installed and turned on, with a certificate installed”?

Clearly, but that’s hardly “the default configuration”. It may not even be the most commonly used configuration for IIS, as many sites escape without needing to use certificates.

Sadly, if I add “and mutual authentication enabled”, we’re only one checkbox away from the “default configuration” to which this article refers, and we’re suddenly into vulnerable territory.

In other words, if you require client / mutual authentication, then the default configuration of IIS that will achieve that is vulnerable, and you have to make a decided change to non-default configuration (the SSLAlwaysNegoClientCert setting), in order to remain non-vulnerable without the 977377 patch.

The other concern I have is over the language in the section “Likelihood of the vulnerability being exploited in general case”, which discusses only the original CSRF-like behaviour exploited under the initial reports of this problem.

There are other ways to exploit this, some of which require a little asinine behaviour on the part of the administrator, and others of which are quite surprisingly efficient. I was particularly struck by the ability to redirect a client, and make it appear that the server is the one doing the redirection.

I think that Eric and Maarten understate the likelihood of exploit – and they do not sufficiently emphasise that the chief reason this won’t be exploited is that it requires a MITM (Man-in-the-middle) attack to have already successfully taken place without being noticed. That’s not trivial or common – although there are numerous viruses and bots that achieve it in a number of ways.


It’s a little unclear on first reading the advisory whether this affects just IIS or all TLS/SSL users on the affected system. I’ve asked if this can be addressed, and I’m hoping to see the advisory change in the coming days.


I’ve rambled on for long enough – the point here is that if you’re worried about SSL / TLS client certificate renegotiation issues that I’ve reported about in posts 1, 2 and 3 of my series, by all means download and try this patch.

Be warned that it may kill behaviour your application relies upon – if that is the case, then sorry, you’ll have to wait until TLS is fixed, and then drag your server and your clients up to date with that fix.

The release of this advisory is by no means the end of the story for this vulnerability – there will eventually be a supported and tested protocol fix, which will probably also be a mere advisory, followed by updates and eventually a gradual move to switch to the new TLS versions that will support this change.

This isn’t a world-busting change, but it should demonstrate adequately that changes to encryption protocols are not something that can happen overnight – or even in a few short months.

When “All” isn’t everything you need – Terminal Services Gateway certificates.

I was setting up Terminal Services Gateway on Windows Server 2008 the other day.

It’s an excellent technology, and one I’ve been waiting for for some time – after all, it’s fairly logical to want to have one “bounce point” into which you connect, and have your connection request forwarded to the terminal server of your choice. Before this, if you were tied to Terminal Services, you had to deal with the fact that your terminal connection was taking up far more traffic than it should, and that the connection optimisation settings couldn’t reliably tell that your incoming connection was at WAN speeds, rather than LAN speeds.

But to get TS Gateway working properly, it needs a valid server certificate that matches the name you provide for the gateway, and that certificate needs to be trusted by the client. Not usually a problem, even for a small business operating on the cheap – if you can’t afford a third-party trusted certificate, there are numerous ways to deploy a self-signed certificate so that your client computers will trust it.

I have a handily-created certificate that’s just right for the job.

I ran into a slight problem when I tried to install the certificate, however.


The certificate isn’t there! On this machine, it isn’t even possible for me to “Browse Certificates” to find the certificate I’m looking for. On another machine, the option is present:


That’s promising, but my certificate doesn’t appear in the list of certificates available for browsing:


I checked in the Local Computer’s Personal Certificates store, which is where this certificate should be, and sure enough, on both machines, it’s right there, ready to be used by TSG.


So, why isn’t TSG offering this certificate to me to select? The clue is in the title.

The certificate that doesn’t show up is the one with “Intended purposes: <All>” – the cert that shows up has only “Server Authentication” enabled. Opening the certificate’s properties, I see this:


I simply select the radio button “Enable only the following purposes”, and click “OK”:


And now, back over in the TSG properties, when I Browse Certificates, the Install Certificate dialog shows me exactly the certificates I expected to see:


This isn’t a solution I would have expected, and if that one certificate hadn’t shown up there, I wouldn’t have had the one clue that let me solve this issue.

Hopefully my little story will help someone solve this issue on their system.

Debugging SSTP error -2147023660

Setting up an SSTP (Secure Socket Tunneling Protocol) connection earlier, I encountered a vaguely reminiscent problem. [SSTP allows virtual private network – VPN – connections between clients running Vista Service Pack 1 and later and servers running Windows Server 2008 and later, using HTTP over SSL, usually on port 443. Port 443 is the usual HTTPS port, and creating a VPN over just that port and no other allows it to operate over most firewalls.]

The connection just didn’t seem to want to take, even though I had already followed the step-by-step instructions for setting up the SSTP server. I thought I had resolved the issue originally by ensuring that I installed the certificate (it was self-signed) in the Trusted Roots certificate store. [If the certificate was not self-signed, I would have ensured that the root certificate itself was installed in Trusted Roots]

The first thing I did was to check the event viewer on the client, where I found numerous entries.

I found error -2147023660 in the Application event log from RasClient. This translates to 0x800704D4, ERROR_CONNECTION_ABORTED. That was pretty much the same information I already had, that the connection was being prevented from completing. So I visited the server to see if there was more information there.
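That translation is mechanical, if you want to check it yourself: RasClient logs the HRESULT as a signed 32-bit decimal. Reinterpreting it as unsigned gives the familiar hex form, and the low 16 bits carry the underlying Win32 error code:

```shell
# Reinterpret the signed 32-bit value from the event log as an unsigned HRESULT:
printf '0x%08X\n' $(( -2147023660 & 0xFFFFFFFF ))   # 0x800704D4

# The low 16 bits of a Win32-derived HRESULT are the Win32 error code:
printf '%d\n' $(( -2147023660 & 0xFFFF ))           # 1236 = ERROR_CONNECTION_ABORTED
```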

On the server, I couldn’t find any entries from the time around when I was trying to connect. Not too good, because of course that’s where you’re going to look. In some cases, particularly errors that Microsoft thinks are going to happen too frequently, the conditions are checked at boot-time, and an error reported then, rather than every time the service is called on to perform an action.

Fortunately, it hadn’t been that long since I last booted (and I had a hint or two from the RRAS team at Microsoft), so my eyes were quickly drawn to an Event with ID 24 in the System Log, sourced at Microsoft-Windows-RasSstp. The text said:

The certificates bound to the HTTPS listener for IPv4 and IPv6 do not match. For SSTP connections, certificates should be configured for for IPv4, and [::]:Port for IPv6. The port is the listener port configured to be used with SSTP.

Note that this happens even if your RRAS server isn’t configured to offer IPv6 addresses to clients.
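Checking and repairing the bindings is a netsh job; a sketch, with a placeholder thumbprint and appid (substitute your certificate’s actual hash and a GUID of your own):

```shell
:: Look for mismatched Certificate Hash values between and [::]:443:
netsh http show sslcert

:: Bind the same certificate to both listeners. The certhash and appid below
:: are placeholders -- substitute your certificate's thumbprint and your GUID.
:: (If a binding already exists, remove it first with "netsh http delete sslcert".)
netsh http add sslcert ipport= certhash=0123456789abcdef0123456789abcdef01234567 appid={00112233-4455-6677-8899-aabbccddeeff}
netsh http add sslcert ipport=[::]:443 certhash=0123456789abcdef0123456789abcdef01234567 appid={00112233-4455-6677-8899-aabbccddeeff}
```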

So, here’s some documentation on event ID 24:

This is one of those nasty areas where there is no user interface other than the command-line. Don’t get me wrong, I love being able to do things using the command line, because it’s easy to script, simple to email to people who need to implement it, and it works well with design-approve-implement processes, where a designer puts a plan together that is approved by someone else and finally implemented by a third party. With command-line or other scripts, you can be sure that if the script didn’t change on its way through the system, then what was designed is what was approved, and is also what was implemented.

But it’s also easy to get things wrong in a script, whereas a selection in a UI is generally much more intuitive. It’s particularly easy to get long strings of hexadecimal digits wrong, as you will see when you try and follow the instructions above. Make sure to use copy-and-paste when assembling your script, and read the output for any possible errors.