
Why is PKI so hard?

Can’t I trust the Postal Service? Part 2 – the certificate.

In part 1 of this mini-series, I talked about how the US Postal Service had deployed only part of the certificate that they had bought, and that this resulted in either an irritating dialog (in IE 6 and other browsers) or a page that warned you not to go any farther (in IE 7).

I’d like to reiterate my advice that when you see a certificate problem, you should not continue to the web site. Again, the certificate problem warnings indicate that the site has failed to prove to you that they are who they claim to be. At that point, you say “I cannot trust the web site – I must use the brick-and-mortar store, or the phone”, and you don’t carry on into the web site.

[I asked the same question of an Internet Explorer presenter at Tech-Ed (Markellos Diorinos), and he gave the same answer – unless you are the owner of the web site, or a security researcher, don’t try and debug certificate errors, just assume you cannot trust the site and walk away. Remember, it’s not about trusting or not trusting the Postal Service, it’s about how you deal with the site to which you’ve connected, which has claimed that it can identify itself as the Postal Service, and then singularly failed to do so.]

Now we go to the next step – looking at the certificate that is in use.

I was surprised to see the following item appear in the certificate’s details:

[Screenshot: the certificate is issued to “*.usps.gov”.]

So, this isn’t a certificate for just the web site in question – this is a certificate for any web site in the usps.gov domain.
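To make that scope concrete, here’s a toy sketch – my own illustration in Python, not anything a browser actually runs – of the usual wildcard-matching rule, where the “*” stands for exactly one name label:

def wildcard_match(pattern, hostname):
    # "*" matches exactly one DNS label, so "*.usps.gov" covers
    # www.usps.gov and holdmail.usps.gov, but not usps.gov itself,
    # and not deeper names like a.b.usps.gov.
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

assert wildcard_match("*.usps.gov", "holdmail.usps.gov")
assert not wildcard_match("*.usps.gov", "usps.gov")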

Okay, this is a technically valid certificate – but is it good security?

I’m not sure that I can go quite as far as to say “no”, but it’s certainly something I would shy away from.

Why?

  • Purchase cost
    It costs a lot more to get a wildcard certificate than it does to get a single host certificate.
    Not quite as much as a certificate authority certificate, but significantly more – enough that it only makes financial sense if there’s something that you absolutely cannot do without using a wildcard certificate.
  • Deployment cost
    When you use a wildcard certificate across several sites within your domain, you have to give that wildcard certificate to all site administrators, or install it for them on all sites within your domain. This means that the administrator of one of your secure sites is a huge step closer to being able to spoof any of your secure sites.
  • Increased attack surface
    Several of your sites are now sharing the same private key; if someone attacks one site successfully, they can now pretend to be any of your other sites.
  • Revocation cost
    Say the worst happens, and you discover that the private key has been exposed to unauthorised parties – not necessarily through an external attack, but perhaps because the administrator of one of these sites has left your employment. Now you want to revoke the certificate – so, once again, you have to re-deploy a new certificate to all of your web sites and administrators.
  • Third-party hosting
    Large companies like the Postal Service often outsource the development and hosting of web sites. When you give a third party a certificate for the site they are hosting, you really don’t want them to be able to spoof your other sites. That’s part of the point of certificates.

There are doubtless other good reasons why wildcard certificates might be bad. Why would you use them, then?

  • Purchase cost
    While the cost is more than that of a single certificate, there is a number of sites (depending on your CA) beyond which it is cheaper to buy one wildcard certificate than multiple individual certificates.
  • Small business
    If you’re a small business, where you are the sole administrator for a dozen websites under your domain, the cost of deployment is the same for a wildcard certificate as for a single certificate.
  • Server co-hosting
    Again, possibly more for small businesses, if you are running several web sites on the same IP address and port combination, you can only give out one certificate when people connect. This may require a wildcard certificate, although this is generally a suggestion that you either separate these sites out to their own IP addresses, or treat them as a single host with multiple applications. Wildcard certificates don’t help with cross-domain server co-hosting.
  • Certificate management
    It’s easier to maintain a backup copy of one key than a dozen.

Quite frankly, I don’t think any of these arguments really outweigh the risks. Maybe you do, or maybe there are some reasons that I haven’t given – what’s your take on wildcard certificates? Is there something I’ve missed, either for or against?

Can’t I trust the Postal Service? Part 1 – the crypto.

The Security MVPs have a private mailing list on which we gather to share expertise and interesting findings – the following was raised by an MVP, and interested me on a number of levels:

The US Postal Service has a web service (as well as a physical method) for signing up to have your mail held if you’re not going to be able to check on it for a while.

This service represents a number of lessons in privacy and security.

Can I trust this web site?

First, let’s look at the web site itself, at least as it was this morning (the certificate error has been fixed since then):

[Screenshot of the IE 7 blocking page: “There is a problem with this website’s security certificate.”]

Okay, I don’t know about you, but that’s not what I expect to see when I go to the post office.

[For those of you using a search engine to reach this page, the text reads:

There is a problem with this website’s security certificate.
The security certificate presented by this website was not issued by a trusted certificate authority.

Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server.
We recommend that you close this webpage and do not continue to this website.
Click here to close this webpage.
Continue to this website (not recommended).]

If you’re anything other than someone trying to research this issue and see its cause, then I suggest you follow the browser’s recommendation right now and close the webpage. This page is broken, just as if it was displaying any other error. It is not an appropriate use of Internet security technology to “Continue to this website (not recommended)”. Really.

Do not press that button. Do not… ah, what the heck.

So, let’s show you what we get when we do continue to the website:

[Screenshot: the USPS web site, with a red “Certificate Error” warning in the address bar.]

The web page certainly looks like the rest of the USPS web site – same colours, same graphics, etc. But all that can be easily faked. What about the part that isn’t so easy to fake, the certificate that tells me that this really is a usps.gov web page? No, that’s a problem, as you can clearly see from the “Certificate Error” message.

Note that if this had been a bad page pretending to be the USPS, we would possibly have just started running evil code, designed to take over our system, steal our information, and generally mess with our lives. This is why I said earlier that you should just leave it alone, think of it as a broken page, and not deal with it any more. Unless you’re trying to debug the problem in the web server.

So, what’s broken?

Let’s see what the details are of the Certificate Error (Microsoft, take note – it’s a bad thing that you have to visit the page in order to view its certificate from within IE!):

[Screenshot: the error details read “The security certificate presented by this website was not issued by a trusted certificate authority.”]

Okay, that’s pretty much what the other page said – the certificate this web site gave us was not issued by someone we trust.

Let’s view the certificate and find out more about it:

[Screenshot: the certificate dialog warns “This certificate cannot be verified up to a trusted certification authority.”]

“This certificate cannot be verified up to a trusted certification authority”, huh?

[An interesting note here is that the certificate is issued to “*.usps.gov” – this is a wildcard certificate, and is generally expensive to get, and requires some understanding of certificates and how to safely manage them, because a wildcard certificate is open to abuse if it escapes. Bear that in mind as you read on.]

Why can’t we verify the certificate chain?

What is a certificate chain?

If you’ve only ever bought a Verisign certificate, you’ve probably got a certificate that’s signed directly by a trusted root certification authority (CA).

That means that the “Issuer” identified in the certificate was already present in everyone’s “Trusted Roots” certificate store when the operating system was first installed. But it doesn’t always have to be that way.

Certificates can, in fact, be issued by subordinate Certificate Authorities (CAs) – and those subordinate CAs have their own certificates that are issued by other CAs, and so on up to an eventual root CA, whose certificate is installed in your clients’ systems.

So what’s the problem here? This certificate says it’s issued by “Comodo Class 3 Security Services CA” – isn’t that good enough?

Not for the browser – remember, it’s not a human, and doesn’t know how to get hold of “Comodo Class 3 Security Services CA” to tell whether it’s a valid issuer, and part of a chain up to a trusted root. It needs to be given that certificate, or told how to find it.
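For the curious, here’s a minimal sketch of what one link of that chain-checking looks like, using Python’s cryptography package (my choice of illustration – the browser does all of this natively); the file names are hypothetical:

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

leaf = x509.load_pem_x509_certificate(open("leaf.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("intermediate.pem", "rb").read())

# The chain links up only if the leaf's Issuer names the candidate's
# Subject, and the candidate's public key verifies the leaf's signature.
assert leaf.issuer == issuer.subject
issuer.public_key().verify(
    leaf.signature,
    leaf.tbs_certificate_bytes,
    padding.PKCS1v15(),               # assumes an RSA-signed certificate
    leaf.signature_hash_algorithm,
)
# Repeat upwards until you reach a certificate in the trusted root store.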

Distribution Points to the rescue!

Let’s look at the Extensions in the certificate, and see what might be of use there:

[Screenshot: the Extensions list for this certificate – missing something…]

Okay, so a CRL Distribution Point is a list of places where a Certificate Revocation List (CRL) might be published – and a CRL will tell the browser whether or not this certificate has been declared unsafe. Oh, if only there was such a Distribution Point for the issuer’s certificate!

There is, of course, or I wouldn’t be leading you there.

Let’s take a look at the extensions in the certificate on one of Microsoft’s secure sites:

[Screenshot: Microsoft’s certificate extensions include an AIA.]

There’s something new there, under the CRL Distribution Points – it’s an “Authority Information Access” extension. This extension lists how your browser can fetch the certificate for Microsoft’s Intermediate CA – in this case, from http://www.microsoft.com/pki/mscorp/Microsoft Secure Server Authority(3).crt.
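If you’d like to pull that extension out of a certificate yourself, here’s a minimal sketch using Python’s cryptography package (again, my choice of tool, not anything IE uses); “server.pem” is a hypothetical file name:

from cryptography import x509
from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

cert = x509.load_pem_x509_certificate(open("server.pem", "rb").read())
aia = cert.extensions.get_extension_for_oid(
    ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value

# The caIssuers entries say where the issuer's certificate can be fetched.
for desc in aia:
    if desc.access_method == AuthorityInformationAccessOID.CA_ISSUERS:
        print("fetch issuer certificate from:", desc.access_location.value)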

So, the Post Office should ask for their money back, right?

No, because that’s actually not the only way to provide an Intermediate CA’s certificate. It’s the only way that guarantees the certificate chain can be checked no matter how the certificate is delivered, but SSL / TLS gives you an extra opportunity to fix this.

In the SSL handshake, where the server gives its certificate to the connecting client, there’s the possibility for the server to give several certificates – a chain, in fact, from its own certificate all the way up to its root, if necessary.

The Post Office can – and did – ensure that the server gives out its own certificate and the rest of its chain.

I’m not much of a web administrator – in fact, I know next to nothing about web servers. However, I have been told that in Windows, all you need to do is import the intermediate CA’s certificate, and the server will automatically give it out to the client.
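For comparison, here’s a minimal sketch of the same fix on a non-Windows stack – my example, not the Postal Service’s configuration – assuming a hypothetical “fullchain.pem” that holds the server’s certificate followed by the intermediate CA’s certificate:

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Because the file holds leaf + intermediate, the handshake sends the
# whole chain, and clients can verify it without chasing any AIA URL.
ctx.load_cert_chain(certfile="fullchain.pem", keyfile="server.key")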

That was way too long an article – what’s the lesson again?

The basic lesson here is that you need to test your certificates on a system that isn’t in your domain and doesn’t have any of the imported certificates that you might already have fetched from your dealings with your CA.

You need to discover if your CA put a valid AIA in your certificate when it issued it, and if it didn’t, then at least import the intermediate CA’s certificate into your server, so that it’s ready to go.
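One way to approximate that clean-machine test without a spare machine – a sketch of mine, with a hypothetical host name – is a client that trusts only its root store. Python’s ssl module happens not to chase AIA URLs at all, so it behaves like the strictest freshly built client:

import socket
import ssl

HOST = "holdmail.usps.gov"            # hypothetical host name
ctx = ssl.create_default_context()    # trusts only the platform's roots

with socket.create_connection((HOST, 443)) as sock:
    # Fails with SSLCertVerificationError if the server doesn't send
    # its intermediate certificates, just like a freshly built client.
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("chain verified; peer:", tls.getpeercert()["subject"])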

But this page made me think about more than how to fix the lack of Intermediate CA… more next time.

EFS in a domain expires after three years

I enjoyed the research for writing my article on EFS, for the Technet Security Newsletter, but there’s always something experience will teach you.

Here’s an issue I experienced just last week, with EFS. It shouldn’t have been a surprise, given what I already know, and if I put the two facts together, you’ll probably spot it straight away:

  • EFS certificates are automatically issued, and expire after three years if you use the default EFS template.
  • When you create a domain, the administrator account on the first domain controller is automatically given an EFS certificate, so he can become the domain’s default DRA (data recovery agent).

You’ve spotted it already (and the title helped you, right?) – after three years, the administrator’s EFS certificate expires.

His certificate may get renewed, so he can encrypt more documents, and of course his old private key still allows him to read files that were encrypted while the certificate was still valid.

That assumes, though, that the administrator’s account is an actively used one.

Whether it’s used or not, though, the fact remains that the DRA certificate does not get updated in the default Group Policy Object – and as a result, even if the administrator renews his EFS certificate, EFS will be effectively disabled throughout the domain.

Here’s the dialog you get:

[Screenshot: the “Error Applying Attributes” dialog.]

For those of you using search engines, that dialog says “Error Applying Attributes”, “An error occurred applying attributes to the file:”, and “Recovery policy configured for this system contains invalid recovery certificate.”

Pretty much your only good choice here is “Cancel”, until you can generate a new certificate and add it to the default domain policy, being sure to remove the old expired cert.

DON’T DELETE THE OLD EXPIRED CERT’S PFX FILE AND PRIVATE KEY THAT I’M SURE YOU MADE A BACKUP OF!!!

[That old private key can be used to recover anything that was encrypted using EFS before the key expired. Always hold on to the PFX files of keys that can be used to decrypt information – always, always, always.]

There’s no easy way to put the new certificate into the default domain policy, so you have to do it by hand. You might as well also generate the certificate by hand, and make sure that it’s not associated with a particular user account (why should it be? it’s just a key with a purpose, and that purpose is not associated with a user.)

How do you do this best?

A simple command line is easiest, in my opinion:

C:\>cipher /R:EFS_DRA_20070324
Please type in the password to protect your .PFX file:
Please type in the password to protect your .PFX file:

Your .CER file was created successfully.
Your .PFX file was created successfully.

That generates two files – EFS_DRA_20070324.PFX, and EFS_DRA_20070324.CER. As hinted at in the output, the PFX file is protected by a password (as they all should be) – move this immediately to a floppy disk and lock it in a cabinet, along with documentation of the password you used (or segregate the two, whatever your certificate handling policies dictate). Or maybe you expect to have frequent requests to recover EFS-encrypted files, so you want the Service Desk to own the PFX file.

Then, go through whatever change management nightmares you have to do in order to edit the default domain policy, delete the old expired certificate, and import the one you just created.

Now, encrypt away, knowing that your encrypted files can be recovered using the PFX file you just created.

Finding your private keys

For the most part, Windows users and administrators don’t ever have to worry about how or where their private keys are stored.

After all, your private key is yours, and it’s private. You request it to be generated, and then you don’t need to touch it, it’s already in your store – somewhere.

But every now and again, there’s a reason to do so – the classic example being when you want to run a service under its own account (because you don’t want to use “SYSTEM”, or worse, the user account of a real person). When you need to do this – whether it’s an ADAM instance, or an FTP server that works over SSL / TLS – you will need to import the key into the machine store, and then make it readable by the service account.

Previously, I would have recommended using the WinHttpCertCfg tool from Microsoft’s download site – despite its rather particular-sounding name, the basic point of this tool is to (import and) assign access rights on a certificate’s private key for a particular user. Exactly what you need to do.

Lately, though, I’ve come across another tool that has a big advantage over WinHttpCertCfg. You see, as a developer, when I see a tool that does something I can’t figure out for myself, I ask “how did they do that?” Whenever I see a KB article that says “Application A can’t do this, but Application B can”, I ask “and how does it do that? How can I do that?”

WinHttpCertCfg is like magic powder – you sprinkle it on, and it does what it’s supposed to do. But you’re none the wiser as to what it’s doing. Wouldn’t it be better if there was a tool with source code?

Now, there is.

It’s a very tiny part of the Windows Communication Foundation and Windows CardSpace Samples download, and it’s called FindPrivateKey. It’s a simple executable, based on a simple C# source, with something approaching five lines of actual heavy lifting. Reading the C# source will tell even a relatively average programmer what’s going on here, and could come in handy with any future projects where you may need to trace your private keys.
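To give you the rough flavour of what it digs up – this is not the sample’s code, and I’m assuming the pre-Vista key storage layout – the key containers are simply opaque files on disk:

import os

# Machine keys, pre-Vista layout; user keys live under the profile in
# %APPDATA%\Microsoft\Crypto\RSA\<SID> instead.
machine_keys = os.path.expandvars(
    r"%ALLUSERSPROFILE%\Application Data"
    r"\Microsoft\Crypto\RSA\MachineKeys")

# FindPrivateKey maps a certificate's key container name to one of
# these opaque files; once you know the file, you can set ACLs on it.
for name in os.listdir(machine_keys):
    print(name)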

Uh… except when it comes to Vista, because the keys have moved. Ah, but you’re all smart little security geeks, and know that in Vista, you can assign ACLs directly from the Certificate Manager:

[Screenshot: assigning permissions on a private key from Certificate Manager in Vista.]

You did already know that, didn’t you? Honestly, that’s such a cool feature, it makes me want Vista at my work place NOW.

Certificate Manager does not require administrator access.

When you manage your personal certificates in Windows, the tool to use is Certificate Manager – you can access it either by running “certmgr.msc” to access your own personal certificate store, or by running MMC, the Microsoft Management Console, and choosing File | Add / Remove Snap-in to add the Certificates snap-in. You’ll then need to choose whether you’re going to access your personal certificate store, or the local computer store, or the store for a service. As you can see from that description, running “certmgr.msc” is the easiest way to get to your personal certificate store.


In Windows Vista, things are pretty much the same – there is still no direct “user interface” way to open your certificate store (that I am aware of – let me know if you’ve found one).


One thing that is different is that everywhere the Windows Help and Support Center mentions the Certificate Manager, it takes pains to assure you that you can’t do this unless you log on as an administrator.


As you can imagine, since every user is allowed to have his or her very own personal certificate store, entirely at his or her whim to control, Certificate Manager must be able to do everything from a restricted user account – the only thing that cannot be done from a restricted user account is to access certificate stores belonging to other user accounts.


Windows Vista is new – some of its help is clearly going to be expanded on and expounded later – for right now, if you can, it’s worth enabling the “Online Help” to pick up changes to topics as soon as they get made.

ChangePassword versus SetPassword

Writing a piece of code last night, I was struck by the thought that many developers I’ve worked with would not know why I use a ChangePassword function, instead of a SetPassword function.


The difference in use is simple – SetPassword requires one password (the new one), whereas ChangePassword requires two passwords (the old one, and the new one).


It would seem as if the obvious easy function to use is SetPassword, because you don’t need to prompt the user for the old password on the account.


But I avoid SetPassword unless there’s absolutely no alternative – why is that?


Because of all the secrets that a user may own – private keys for EFS encryption, for email identification, for server identification, etc.


Any secret like this is stored in the DPAPI store, which is encrypted using a key derived from the user’s current password.


If you use SetPassword, all the information is still in the DPAPI store, but the user no longer has access to it(*).


[The information is still there for a simple reason – if your use of SetPassword was to gain access to an account while its owner is away from work, you’ll want to regain that data when the user comes back. You can do this by having him change his password from the new password back to his old password.]


This means that the user loses access to their encrypted files, loses the ability to identify themselves in email (and to decrypt messages sent to them), or if this is a server account, they lose the ability to start up an SSL-based server.


Using ChangePassword, by comparison, because it uses the existing password as a starting point, re-encrypts the DPAPI store with a key derived from the new password.
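Here’s a toy model of that difference – nothing like the real DPAPI implementation, just the shape of the idea – with the master key wrapped under a key derived from the password:

import hashlib
import os

def derive(password):
    # stand-in for DPAPI's real key derivation
    return hashlib.pbkdf2_hmac("sha1", password.encode(), b"salt", 4000,
                               dklen=32)

def xor(data, key):
    # toy wrapping cipher only
    return bytes(d ^ k for d, k in zip(data, key))

master_key = os.urandom(32)
wrapped = xor(master_key, derive("old password"))

# ChangePassword: the old password unwraps, the new password re-wraps.
wrapped = xor(xor(wrapped, derive("old password")), derive("new password"))
assert xor(wrapped, derive("new password")) == master_key

# SetPassword: the logon password changes, but the wrapped blob doesn't,
# so nothing derived from the new password recovers the master key.
assert xor(wrapped, derive("reset password")) != master_key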


The other big advantage of a ChangePassword function is that it can be used by anyone, without administrative rights being required (subject to rights and policies, depending on the tool you’re using and the OS you’re on).


That sounds like a security violation, but isn’t – after all, if you know the user’s old password, you can log on as that user, and as long as the user is able to change his own password, there’s no functional difference between you logging on then changing your password, and changing your password from some other account by providing your old and new passwords.


Depending on the interface you’re using, these functions may not be called ChangePassword and SetPassword – for instance, in the Win32 API, the functions to use would be NetUserChangePassword and NetUserSetInfo.
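If you want to reach the Win32 flavour from a script, here’s a minimal sketch (Windows only; the wrapper name is mine, and NetUserSetInfo, which needs a USER_INFO_1003 structure, I’ll leave to the SDK documentation):

import ctypes
from ctypes import wintypes

netapi32 = ctypes.WinDLL("netapi32")
netapi32.NetUserChangePassword.argtypes = [wintypes.LPCWSTR] * 4
netapi32.NetUserChangePassword.restype = wintypes.DWORD

def change_password(domain, user, old_pw, new_pw):
    # The ChangePassword style: it requires the old password, so the
    # DPAPI store gets re-encrypted and the user's secrets survive.
    # Returns 0 (NERR_Success) on success.
    return netapi32.NetUserChangePassword(domain, user, old_pw, new_pw)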


(*) It’s not quite as simple as this – on a domain at Windows 2000 or later, you will find that a second copy of the DPAPI Master Key is stored on the domain controllers, encrypted using the DC’s private key. In the event of a SetPassword operation, the DPAPI Master Key is decrypted and re-encrypted with the new password, so you don’t lose any data. The same is sometimes true on workstations, depending on the version. Details are in KB article 309408.

Defence in death

“Defence in depth” (or “defense in depth”, if you’re American) is a frequently misunderstood term in security.


It refers to designing your software with the assumption that layers above you that were supposed to protect you have failed to do so – in whatever manner is most inconvenient to your application.


As Steve Riley points out, it’s not the same as simply applying the same measure at a couple of different places – it’s about assuming that the measure above you failed.


An example is “my firewall restricts external traffic from reaching me” – that’s a first layer of defence. The second layer of defence might be “my application requires a user-name and password”. It’s defence in depth, because even if an attacker can fake traffic through your firewall, he’ll have to come up with a password that works.


I’m starting to think about laptop encryption as being “defence in death”.


It’s long been a statement in computer security that “if the attacker has physical access, it’s ‘game over’”.


That’s true – if you’re talking about a system that provides a service – as usual, you have to talk about what you are securing.


Your server rooms are generally susceptible to a guy with a chainsaw – physical access means loss of service; ergo, security problem. You fix this problem with strong physical security.


Your servers, if they can be stolen, are susceptible to being cracked open by hackers who want to pull the data from them; ergo, security problem. You fix this with strong physical security (plus an appropriate hardware retirement procedure that includes degaussing the disks, shredding them, and lightly sprinkling them with thermite).


Your laptops can be stolen even more easily, and can be similarly opened up to hackers who want to read their data. Again, this is a security problem.


You can’t solve it with physical security.


In fact, with security designs for laptops, you pretty much have to start with the assumption that physical security is impossible – and what can software security do for you, if the hacker can simply prevent your software from running?


This is where “defence in death” comes about – by making the system usable only while it is alive and running, by encrypting it with a key that is not stored locally, you make it functionally impossible to use or read the system until you have brought it to life.


And while the system is alive, it can actively protect itself.


Encryption is a lovely thing. Be careful to understand how you use it.

Where did Private Folders go?

Wow – yesterday, you could download “Microsoft Private Folders” (if you were attested as Genuine) from Microsoft’s downloads site.


Today, it’s gone.


There’s a brief synopsis of the story at the Seattle P-I’s site here – as usual, I’m patient enough to wait while you go and read it.


As a security engineer at a company that cares to manage its domain environment, I’m very comfortable with the argument that it’s not something our users should be installing – but it’s a service, and our users are not local admins, so they can’t install a new service.


What bothers me, though, is the argument that this is dangerous because “It also didn’t offer a way to retrieve a forgotten password, raising the possibility of effectively losing access to files if people forgot the phrase they chose.”


People, this is encryption.


That’s what it’s supposed to do.


You encrypt data that you would rather lose than leak.


You want to lose the data if it falls into the hands of people who don’t know the password, even if that means you.


If you can’t handle that, then encryption is not what you want – you want “protection”, or “concealment”, where there’s a back-door for people with powerful tools, a little training and some time.

New ActiveSync – still not going to upgrade to it.

Microsoft just released a new version of ActiveSync – version 4.2.


It has some Outlook improvements, proxy improvements, partnership improvements, and VPN connectivity improvements.


So why am I still not going to bother installing this?


Because it still doesn’t support syncing via wireless.


I’m sticking with ActiveSync 3.8, which allows syncing via wireless and/or VPN.


Isn’t that insecure? Yes, but it works, and I need it to work. I generally don’t carry my sync cable with me, and I generally don’t plug in except to charge – and then, I just want to charge, and may be away from my sync partner.


How difficult is it for Microsoft to write an ActiveSync tool that exchanges a huge shared key (or certificate, whatever they feel is most appropriate), to securely identify a sync partner, so that wireless and VPN network synchronisation can be securely supported? I must be missing something, because it doesn’t seem that hard to me.

PGP / TrueCrypt brouhaha

There’s a fascinating debate going on at present. Two ‘researchers’, called Abed and Adonis, are trumpeting their mad sk177z at cryptography.


They have a few basic claims:



  • They can bypass authentication on PGP self-decrypting archives.
  • They can decrypt PGP-encrypted drives without knowing the passphrase.

It’s an interesting read, and full of the sort of lack of comprehension, poor language and loose terminology that is typical of some of the worst kind of vulnerability reporting. I’ve read a couple of dozen vulnerability reports, and while a couple of them were clear, concise and well-researched, the majority were barely understandable, and showed a staggering lack of comprehension of the software and algorithms being discussed.


So, here’s a little description of what goes on in most modern file or disk encryption (EFS, BitLocker, PGP, TrueCrypt, etc):



  1. A random key is generated.
  2. The data is encrypted using the random key.
  3. The random key is encrypted using the user’s selected pass-phrase, or some other identifying credential (their public key, for instance). This encrypted key is stored with the file.
  4. The random key is also encrypted using a recovery token (either randomly created and stored away in a key recovery file, or it’s an existing private key of a designated third-party recovery agent). This encrypted key is also stored with the file.
  5. For any number of other users of this file, the random key can also be encrypted with their pass-phrases, or their public keys.

When you want to decrypt a file, here’s what happens (a toy sketch of both flows follows this list):



  1. Your credential – pass-phrase or private key – is used to decrypt the key-blob associated with you (or every key-blob in turn until you get a decrypted key-blob that matches its checksum).
  2. The decrypted key is used to decrypt the data.
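Here’s that toy sketch – my own illustration of the shape of it, nothing like any real product’s file format, with XOR standing in for a real cipher:

import hashlib
import os

def kdf(passphrase):
    # stand-in for a real key-derivation function
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), b"salt", 10000)

def xor(data, key):
    # toy cipher only
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

def encrypt(plaintext, passphrases):
    data_key = os.urandom(32)                       # step 1: random key
    return {
        "ciphertext": xor(plaintext, data_key),     # step 2: encrypt the data
        # steps 3-5: one wrapped copy of the key per credential
        "key_blobs": [xor(data_key, kdf(p)) for p in passphrases],
        # lets decryption tell a right pass-phrase from a wrong one
        "checksum": hashlib.sha256(data_key).digest(),
    }

def decrypt(archive, passphrase):
    for blob in archive["key_blobs"]:               # try each key-blob in turn
        key = xor(blob, kdf(passphrase))
        if hashlib.sha256(key).digest() == archive["checksum"]:
            return xor(archive["ciphertext"], key)
    raise ValueError("no key-blob matched that pass-phrase")

archive = encrypt(b"hold my mail", ["user pass", "recovery token"])
assert decrypt(archive, "recovery token") == b"hold my mail"

Keep an eye on that stored checksum – it’s about to become important.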

What Adonis and Abed have managed to do with their fancy debugging on the SDA (self-decrypting archive) is to break into the point where the code checks that the random key has been successfully decrypted, and change the stored checksum to match the checksum of the key they’ve decrypted using the wrong pass-phrase. So, they’ve got a key that doesn’t decrypt the file to the correct data, and they’ve managed to persuade the program to tell them that this is acceptable.


<sarcasm>Clever attackers – they’ve managed to get the system to tell them that they’ve successfully decrypted the file, while at the same time getting back a key that ‘decrypts’ the file to random garbage.</sarcasm> They even acknowledge it in one of their Flash animations. [Oh yeah, and I want to view a Flash animation less than a month after a remote code execution vulnerability in Flash.]


Their binary patching of the PGP encrypted disk is slightly more interesting.


What they have demonstrated is that a change of pass-phrase does not change the random key, it just decrypts it using the old pass-phrase, and then re-encrypts it with the new pass-phrase, obliterating the stored copy with the new one. [This is actually a good and necessary thing, because if you’re on the road, and you change your pass-phrase to something that you then forget, you want the recovery token that the help-desk provides you to still work!]


This can be turned into some measure of an attack – encrypt the drive with a pass-phrase you know, save a copy of the encrypted key-blob, and you can later come in and replace the encrypted key-blob on the machine with yours – this effectively resets the pass-phrase to what it was when you saved your copy of the encrypted key-blob.
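In the toy model from earlier (re-using its kdf, xor, encrypt and decrypt definitions), the whole ‘attack’ takes only a few lines:

archive = encrypt(b"hold my mail", ["original pass"])
saved_blob = archive["key_blobs"][0]        # attacker keeps a copy

# Pass-phrase change: unwrap with the old pass, re-wrap with the new.
# The data key itself, and hence the ciphertext, never changes.
data_key = xor(saved_blob, kdf("original pass"))
archive["key_blobs"][0] = xor(data_key, kdf("new pass"))

archive["key_blobs"][0] = saved_blob        # attacker puts the old blob back
assert decrypt(archive, "original pass") == b"hold my mail"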


But that’s not something that the encryption is designed to protect against – and really, it’s not something that the encryption should try to protect against. If you receive an encrypted device from someone you don’t trust (or later decide not to trust someone who has encrypted a device you use), you should decrypt it and re-encrypt it with a new random key. This makes good sense anyway, because you want a new recovery token on the device, and you want that token to be under your name, not the previous user’s name.


As with so many presumed attacks on cryptographic solutions, this one’s a real yawner if you understand the cryptography at hand, because it’s really an attack on the policy behind the system. In this case, the policy says that if you receive a device (disk, encrypted file, whatever) from someone who possessed the means to decrypt it, that device can continue to be decrypted until such time as you encrypt it with a new encryption key – not just a new pass-phrase.