Why is PKI so hard?

Explaining Blockchains – without understanding them.

I tweeted this the other day, after reading about Microsoft’s Project Bletchley:

I’ve been asked how I can tweet something as specific as this, when in a subsequent tweet, I noted:

[I readily admit I didn’t understand the announcement, or what it’s /supposed/ to be for, but that didn’t stop me thinking about it]

— Alun Jones (@ftp_alun) June 17, 2016

Despite having a reasonably strong background in the use of crypto, and a little dabbling in the analysis of crypto, I don’t really follow the whole “blockchain” thing.

So, here’s my attempt to explain what little I understand of blockchains and their potential uses, with an open invitation to come and correct me.

What’s a blockchain used for?

The most widely-known use of blockchains is that of Bit Coin and other “digital currencies”.

Bit Coins are essentially numbers with special properties that make them progressively harder to find as time goes on. Because they are scarce and getting scarcer, it becomes possible for people of a certain mindset to ascribe a “value” to them, much as we assign value to precious metals or gemstones beyond their mere attractiveness. [Bit Coins have no intrinsic attractiveness as far as I can tell]

That there is no actual intrinsic value leads me to refer to Bit Coin as a kind of shared madness: everyone who believes there is value to the Bit Coin shares this delusion with many others, and can use that shared delusion as a basis for trading other valued objects. Of course, the same kind of shared madness is what makes regular financial markets and country-issued money work, too.

Because of this value, people will trade them for other things of value, whether that’s shiny rocks, or other forms of currency, digital or otherwise. It’s a great way to turn traceable goods into far less-traceable digital commodities, so its use for money laundering is obvious. Its use for online transactions should also be obvious, as it’s an irrevocable and verifiable transfer of value – unlike a credit card, which, as many vendors will tell you from their own experience, can be stolen, with transactions revoked as a result, whether or not you’ve shipped valuable goods.

What makes this an irrevocable and verifiable transfer is the principle of a “blockchain”, which is reported in a distributed ledger. Anyone can, at any time, at least in theory, download the entire history of ownership of a particular Bit Coin, and verify that the person who’s selling you theirs is truly the current and correct owner of it.

How does a blockchain work?

I’m going to assume you understand how digital signatures work at this point, because that’s a whole ‘nother explanation.

Remember that a Bit Coin starts as a number. It could be any kind of data, because all data can be represented as a number. That’s important, later.

The first owner of that number signs it, and then distributes the number and signature out to the world. This is the “distributed ledger”. For Bit Coins, the “world” in this case is everyone else who signs up to the Bit Coin madness.

When someone wants to buy that Bit Coin (presumably another item of mutually agreed similar value changes hands, to buy the Bit Coin), the seller signs the buyer’s signature of the Bit Coin, acknowledging transfer of ownership, and then the buyer distributes that signature out to the distributed ledger. You can now use the distributed ledger at any time to verify that the Bit Coin has a story from creation and initial signature, unbroken, all the way up to current ownership.

I’m a little flakey on what, other than a search in the distributed ledger for previous sales of this Bit Coin, prevents a seller from signing the same Bit Coin over simultaneously to two other buyers. Maybe that’s enough – after all, if the distributed ledger contains a demonstration that you were unreliable once, your other signed Bit Coins will presumably have zero value.

So, in this perspective, a blockchain is simply an unbroken record of ownership or provenance of a piece of data from creation to current owner, and one that can be extended onwards.
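To make that concrete, here’s a toy sketch of such a provenance chain in Python. It’s my own illustration, not how Bit Coin actually works: real blockchains use asymmetric digital signatures (and, for Bit Coin, proof-of-work mining), whereas here an HMAC with a per-owner secret key stands in for a signature, and all the names and keys are invented.

```python
import hashlib
import hmac
import json

def record_hash(record):
    # Canonical hash of a ledger record.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def transfer(chain, seller_key, buyer):
    # The seller signs the coin over: the new record embeds the hash of the
    # previous record, so history can't be rewritten without breaking the chain.
    prev = record_hash(chain[-1])
    body = {"prev": prev, "owner": buyer}
    signature = hmac.new(seller_key, json.dumps(body, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    chain.append({**body, "sig": signature})

def verify(chain):
    # Anyone holding the ledger can walk it from creation to current owner.
    return all(chain[i]["prev"] == record_hash(chain[i - 1])
               for i in range(1, len(chain)))

# A coin is created, then sold twice:
ledger = [{"owner": "alice"}]
transfer(ledger, b"alice-key", "bob")
transfer(ledger, b"bob-key", "carol")
print(verify(ledger))           # True
ledger[0]["owner"] = "mallory"  # tamper with history...
print(verify(ledger))           # ...and the chain no longer verifies: False
```

The point is the linkage: each record carries the hash of the record before it, so nobody can quietly rewrite an earlier sale without invalidating everything that came after.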

In the world of financial use, of course, there are some disadvantages – the most obvious being that if I can make you sign a Bit Coin against your will, it’s irrevocably mine. There is no overarching authority that can say “no, let’s back up on that transaction, and say it never happened”. This is also pitched as an advantage, although many Bit Coin owners have been quite upset to find that their hugely-valuable piles of Bit Coins are now in someone else’s ownership.

Now, explain your tweet.

With the above perspective in the back of my head, I read the Project Bletchley report.

I even looked at the pictures.

I still didn’t really understand it, but something went “ping” in my head.

Maybe this is how C-level executives feel.

Here’s my thought:

Businesses get data from customers, users, partners, competitors, outright theft and shenanigans.

Maybe in environments where privacy is respected, like the EU, blockchains could be an avenue by which regulators enforce companies describing and PROVING where their data comes from, and that it was not acquired or used in an inappropriate manner?

When I give you my data, I sign it as coming from me, and sign that it’s now legitimately possessed by you (I won’t say “owned”, because I feel that personal data is irrevocably “owned” by the person it describes). Unlike Bit Coin, I can do this several times with the same packet of data, or different packets of data containing various other information. That information might also contain details of what I’m approving you to do with that information.

This is the start of a blockchain.

When information is transferred to a new party, that transfer will be signed, and the blockchain can be verified at that point. Further usage restrictions can be added.

Finally, when an information commissioner wants to check whether a company is handling data appropriately, they can ask for the blockchains associated with data that has been used in various ways. That then allows the commissioner to verify whether reported use or abuse has been legitimately approved or not.

And before this sounds like too much regulatory intervention, it also allows businesses to verify the provenance of the data they have, and to determine where sensitive data resides in their systems, because if it always travels with its blockchain, it’s always possible to find and trace it.

[Of course, if it travels without its blockchain, then it just looks like you either have old outdated software which doesn’t understand the blockchain and needs to be retired, or you’re doing something underhanded and inappropriate with customers’ data.]

It even allows the revocation of a set of data to be performed – when a customer moves to another provider, for instance.

Yes, there’s the downside of hugely increased storage requirements. Oh well.

Oh, and that revocation request on behalf of the customer, that would then be signed by the business to acknowledge it had been received, and would be passed on to partners – another blockchain.

So, maybe I’ve misunderstood, and this isn’t how it’s going to be used, but I think it’s an intriguing thought, and would love to hear your comments.

Untrusting the Blue Coat Intermediate CA from Windows

So, there was this tweet that got passed around the security community pretty quickly:

Kind of confusing and scary if you’re not quite sure what this all means – perhaps clear and scary if you do.

Blue Coat manufactures “man in the middle” devices – sometimes used by enterprises to scan and inspect / block outbound traffic across their network, and apparently also used by governments to scan and inspect traffic across the network.

The first use is somewhat acceptable (enterprises can prevent their users from distributing viruses or engaging in illicit behaviour from work computers, which the enterprises quite rightly believe they own and should control), but the second use is generally not acceptable, depending on how much you trust your local government.

Filippo helpfully gives instructions on blocking this from OSX, and a few people in the Twitter conversation have asked how to do this on Windows.

Disclaimer!

Don’t do this on a machine you don’t own or manage – you may very well be interfering with legitimate interference in your network traffic. If you’re at work, your employer owns your computer, and may intercept, read and modify your network traffic, subject to local laws, because it’s their network and their computer. If your government has ruled that they have the same rights to intercept Internet traffic throughout your country, you may want to consider whether your government shouldn’t be busy doing other things like picking up litter and contributing to world peace.

The simple Windows way

As with most things on Windows, there are multiple ways to do this. Here’s one, which can be followed either by regular users or administrators. It’s several steps, but it’s a logical progression, and will work for everyone.

Step 1. Download the certificate. Really, literally, follow the link to the certificate and click “Open”. It’ll pop up as follows:

[screenshot]

Step 2. Install the certificate. Really, literally, click the button that says “Install Certificate…”. You’ll see this prompt asking you where to save it:

[screenshot]

Step 3. If you’re a non-administrator, and just want to untrust this certificate for yourself, leave the Store Location set to “Current User”. If you want to set this for the machine as a whole, and you’re an administrator, select Local Machine, like this:

[screenshot]

Step 4: Click Next, to be asked where you’re putting the certificate:

[screenshot]

Step 5: Select “Place all certificates in the following store”:

[screenshot]

Step 6: Click the “Browse…” button to be given choices of where to place this certificate:

[screenshot]

Step 7: Don’t select “Personal”, because that will explicitly trust the certificate. Scroll down and you’ll see “Untrusted Certificates”. Select that and hit OK:

[screenshot]

Step 8: You’re shown the store you plan to install into:

[screenshot]

Step 9: Click “Next” – and you’ll get a final confirmation option. Read the screen and make sure you really want to do what’s being offered – it’s reversible, but check that you didn’t accidentally install the certificate somewhere wrong. The only place this certificate should go to become untrusted is in the Untrusted Certificates store:

[screenshot]

Step 10: Once you’re sure you have it right, click “Finish”. You’ll be congratulated with this prompt:

[screenshot]

Step 11: Verification. Hit OK on the “import was successful” box. If you still have the Certificate open, close it. Now reopen it, from the link or from the certificate store, or if you downloaded the certificate, from there. It’ll look like this:

[screenshot]

The certificate hasn’t actually been revoked, and if you run into any difficulties, you can open the Untrusted Certificates store and remove this certificate so that it’s trusted again.

Other methods

There are other methods to do this – if you’re a regular admin user on Windows, I’ll tell you the quicker way is to open MMC.EXE, add the Certificates Snap-in, select to manage either the Local Computer or Current User, navigate to the Untrusted Certificates store and Import the certificate there. For wide scale deployment, there are group policy ways to do this, too.

OK, OK, because you asked, here’s a picture of how to do it by GPO:

[screenshot]

How do you encrypt a password?

I hate when people ask me this question, because I inevitably respond with a half-dozen questions of my own, which makes me seem like a bit of an arse.

To reduce that feeling, because the questions don’t seem to be going away any time soon, I thought I’d write some thoughts out.

Put enough locks on a thing, it's secure. Or it collapses under the weight of the locks.

Do you want those passwords in the first place?

Passwords are important objects – and because people naturally share IDs and passwords across multiple services, your holding on to a customer’s / user’s password means you are a necessary part of that user’s web of credential storage.

It will be a monumental news story when your password database gets disclosed or leaked, and even more of a story if you’ve chosen a bad way of protecting that data. You will lose customers and you will lose business; you may even lose your whole business.

Take a long hard look at what you’re doing, and whether you actually need to be in charge of that kind of risk.

Do you need those passwords to verify a user or to impersonate them?

If you are going to verify a user, you don’t need encrypted passwords, you need hashed passwords. And those hashes must be salted. And the salt must be large and random. I’ll explain why some other time, but you should be able to find much documentation on this topic on the Internet. Specifically, you don’t need to be able to decrypt the password from storage, you need to be able to recognise it when you are given it again. Better still, use an acknowledged good password hashing mechanism like PBKDF2. (Note, from the “2” that it may be necessary to update this if my advice is more than a few months old)
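As a concrete sketch of the verify-don’t-decrypt approach, here’s PBKDF2 with a large random salt using only Python’s standard library. The iteration count is illustrative – check current guidance for an appropriate value, for the same reason noted above.

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=600_000):
    # Large random salt, generated fresh per password and stored
    # alongside the hash and iteration count; never reused.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def check_password(password, salt, iterations, digest):
    # Recompute the hash from the stored parameters and compare.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison, so an attacker can't time how much matched.
    return hmac.compare_digest(candidate, digest)

salt, rounds, stored = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, rounds, stored))  # True
print(check_password("Tr0ub4dor&3", salt, rounds, stored))                   # False
```

Note that nothing here can ever recover the password from storage – you can only recognise it when it’s presented again, which is exactly the property you want.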

Now, do not read the rest of this section – skip to the next question.

Seriously, what are you doing reading this bit? Go to the heading with the next question. You don’t need to read the next bit.

<sigh/>

OK, if you are determined that you will have to impersonate a user (or a service account), you might actually need to store the password in a decryptable form.

First make sure you absolutely need to do this, because there are many other ways to impersonate an incoming user, using delegation, etc, which don’t require you to store the password.

Explore delegation first.

Finally, if you really have to store the password in an encrypted form, you have to do it incredibly securely. Make sure the key is stored separately from the encrypted passwords, and don’t let your encryption be brute-forcible. A BAD way to encrypt would be to simply encrypt the password using your public key – sure, this means only you can decrypt it, but it means anyone can brute-force an encryption and compare it against the ciphertext.

A GOOD way to encrypt the password is to add some entropy and padding to it (so I can’t tell how long the password was, and I can’t tell if two users have the same password), and then encrypt it.

Password storage mechanisms such as keychains or password vaults will do this for you.

If you don’t have keychains or password vaults, you can encrypt using a function like Windows’ CryptProtectData, or its .NET equivalent, System.Security.Cryptography.ProtectedData.

[Caveat: CryptProtectData and ProtectedData use DPAPI, which requires careful management if you want it to work across multiple hosts. Read the API and test before deploying.]

[Keychains and password vaults often have the same sort of issue with moving the encrypted password from one machine to another.]

For .NET documentation on password vaults in Windows 8 and beyond, see: Windows.Security.Credentials.PasswordVault

For non-.NET on Windows from XP and later, see: CredWrite

For Apple, see documentation on Keychains

Can you choose how strong those passwords must be?

If you’re protecting data in a business, you can probably tell users how strong their passwords must be. Look for measures that correlate strongly with entropy – how long is the password, does it use characters from a wide range (or is it just the letter ‘a’ repeated over and over?), is it similar to any of the most common passwords, does it contain information that is obvious, such as the user’s ID, or the name of this site?

Maybe you can reward customers for longer passwords – even something as simple as a “strong account award” sticker on their profile page can induce good behaviour.

Length is mathematically more important to password entropy than the range of characters. An eight character password chosen from 64 characters (less than three hundred trillion combinations – a number with 4 commas) is weaker than a 64 character password chosen from eight characters (a number of combinations with 19 commas in it).

An 8-character password taken from 64 possible characters is actually as strong as a password only twice as long and chosen from 8 characters – this means something like a complex password at 8 characters in length is as strong as the names of the notes in a couple of bars of your favourite tune.
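The arithmetic is easy to check, since a uniformly random password has length × log₂(alphabet size) bits of entropy – a quick sketch:

```python
import math

def entropy_bits(alphabet_size, length):
    # A uniformly random password of `length` symbols drawn from an
    # alphabet of `alphabet_size` has length * log2(alphabet_size) bits.
    return length * math.log2(alphabet_size)

print(entropy_bits(64, 8))    # 48.0 bits: 64**8, about 2.8e14 combinations
print(entropy_bits(8, 64))    # 192.0 bits: 8**64, about 6.3e57 combinations
print(entropy_bits(8, 16))    # 48.0 bits: 16 symbols from 8 match 8 from 64
```
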

Allowing users to use password safes of their own makes it easier for them to use longer and more complex passwords. This means allowing copy and paste into password fields, and where possible, integrating with any OS-standard password management schemes.

What happens when a user forgets their password?

Everything seems to default to sending a password reset email. This means your users’ email address is equivalent to their credential. Is that strength of association truly warranted?

In the process to change my email address, you should ask me for my password first, or similarly strongly identify me.

What happens when I stop paying my ISP, and they give my email address to a new user? Will they have my account on your site now, too?

Every so often, maybe you should renew the relationship between account and email address – baselining – to ensure that the address still exists and still belongs to the right user.

Do you allow password hints or secret questions?

Password hints push you dangerously close to actually storing passwords. People use hints such as “The password is ‘Oompaloompah’” – so, if you store password hints, you must encrypt them as strongly as if they were the passwords themselves. Because, much of the time, they are. And see the previous rule, which says you want to avoid doing that if at all possible.

Other questions that I’m not answering today…

How do you enforce occasional password changes, and why?

What happens when a user changes their password?

What happens when your password database is leaked?

What happens when you need to change hash algorithm?

Apple’s “goto fail” SSL issue – how do you avoid it?

Context – Apple releases security fix; everyone sees what they fixed


Last week, Apple released a security update for iOS, indicating that the vulnerability being fixed is one that allows SSL / TLS connections to continue even though the server should not be authenticated. This is how they described it:

Impact: An attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS

Description: Secure Transport failed to validate the authenticity of the connection. This issue was addressed by restoring missing validation steps.

Secure Transport is their library for handling SSL / TLS, meaning that the bulk of applications written for these platforms would not adequately validate the authenticity of servers to which they are connected.

Ignore “An attacker with a privileged network position” – this is the very definition of a Man-in-the-Middle (MITM) attacker, and whereas we used to be more blasé about this in the past, when networking was done with wires, now that much of our use is wireless (possibly ALL in the case of iOS), the MITM attacker can easily insert themselves in the privileged position on the network.

The other reason to ignore that terminology is that SSL / TLS takes as its core assumption that it is protecting against exactly such a MITM. By using SSL / TLS in your service, you are noting that there is a significant risk that an attacker has assumed just such a privileged network position.

Also note that “failed to validate the authenticity of the connection” means “allowed the attacker to attack you through an encrypted channel which you believed to be secure”. If the attacker can force your authentication to incorrectly succeed, you believe you are talking to the right server, and you open an encrypted channel to the attacker. That attacker can then open an encrypted channel to the server to which you meant to connect, and echo your information straight on to the server, so you get the same behaviour you expect, but the attacker can see everything that goes on between you and your server, and modify whatever parts of that communication they choose.

So this lack of authentication is essentially a complete failure of your secure connection.

As always happens when a patch is released, within hours (minutes?) of the release, the patch has been reverse engineered, and others are offering their description of the changes made, and how they might have come about.

In this case, the reverse engineering was made easier by the availability of open source copies of the source code in use. Note that this is not an intimation that open source is, in this case, any less secure than closed source, because the patches can be reverse engineered quickly – but it does give us a better insight into exactly the code as it’s seen by Apple’s developers.

Here’s the code:

    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

Yes, that’s a second “goto fail”, which executes unconditionally: the last “if” – the check on SSLHashSHA1.final – is never reached, and the jump to the ‘fail’ label always happens. Worse, because all the checks before it succeeded, ‘err’ is still 0 at that point, so the function reports success without ever completing the verification.

Initial reaction – lots of haha, and suggestions of finger pointing

So, of course, the Internet being what it is, the first reaction is to laugh at the clowns who made such a simple mistake, that looks so obvious.

T-shirts are printed with “goto fail; goto fail;” on them. Nearly 200 have been sold already (not for me – I don’t generally wear black t-shirts).

But really, these are smart guys – “be smarter” is not the answer

This is SSL code. You don’t get let loose on SSL code unless you’re pretty smart to begin with. You don’t get to work as a developer at Apple on SSL code unless you’re very smart.

Clearly “be smart” is already in evidence.

There is a possibility that this is too much in evidence – that the arrogance of those with experience and a track record may have led these guys to avoid some standard protective measures. The evidence certainly fits that view, but then many developers start with that perspective anyway, so in the spirit of working with the developers you have, rather than the ones you theorise might be possible, let’s see how to address this issue long term:

Here’s my suggested answers – what are yours?

Enforce indentation in your IDE / check-in process

OK, so it’s considered macho to not rely on an IDE. I’ve never understood that. It’s rather like saying how much you prefer pounding nails in with your bare fists, because it demonstrates how much more of a man you are than the guy with a hammer. It doesn’t make sense when you compare how fast the job gets done, or the silly and obvious errors that turn up clearly when the IDE handles your indenting, colouring, and style for you.

Yes, colouring. I know, colour-blind people exist – and those people should adjust the colours in the IDE so that they make sense. Even a colour-blind person can get shade information to help them. I know syntax colouring often helps me spot when an XSS injection is just about ready to work, when I would otherwise have missed it in all the surrounding garbage of HTML code. The same is true when building code, you can spot when keywords are being interpreted as values, when string delimiters are accidentally unescaped, etc.

The same is true for indentation. Indentation, when it’s caused by your IDE based on parsing your code, rather than by yourself pounding the space bar, is a valuable indication of program flow. If your indentation doesn’t match control flow, it’s because you aren’t enforcing indentation with an automated tool.

What the heck, enforce all kinds of style

Your IDE and your check-in process are a great place to enforce style standards to ensure that code is not confusing to the other developers on your team – or to yourself.

A little secret – one of the reasons I’m in this country in the first place is that I sent an eight-page fax to my bosses in the US, criticising their programming style and blaming (rightly) a number of bugs on the use of poor and inconsistent coding standards. This was true two decades ago using Fortran, and it’s true today in any number of different languages.

The style that was missed in this case – put braces around all your conditionally-executed statements.

I have other style recommendations that have worked for me in the past – meaningful variable names, enforced indenting, maximum level of indenting, comment guidelines, constant-on-the-left of comparisons, don’t include comparisons and assignments in the same line, one line does one thing, etc, etc.

Make sure you back the style requirements with statements as to what you are trying to do with the style recommendation. “Make the code look the same across the team” is a good enough reason, but “prevent incorrect flow” is better.

Make sure your compiler warns on unreachable code

gcc has the option “-Wunreachable-code”.

gcc disabled the option in 2010.

gcc silently disabled the option, because they didn’t want anyone’s build to fail.

This is not (IMHO) a smart choice. If someone has a warning enabled, and has enabled the setting to produce a fatal error on warnings, they WANT their build to fail if that warning is triggered, and they WANT to know when that warning can no longer be relied upon.

So, without a warning on unreachable code, you’re basically screwed when it comes to control flow going where you don’t want it to.

Compile with warnings set to fatal errors

And of course there’s the trouble that’s caused when you have dozens and dozens of warnings, so warnings are ignored. Don’t get into this state – every warning is a place where the compiler is confused enough by your code that it doesn’t know whether you intended to do that bad thing.

Let me stress – if you have a warning, you have confused the compiler.

This is a bad thing.

You can individually silence warnings (with plenty of comments in your code, please!) if you are truly in need of a confusing operation, but for the most part, it’s a great saving on your code cleanliness and clarity if you address the warnings in a smart and simple fashion.

Don’t over-optimise or over-clean your code

The compiler has an optimiser.

It’s really good at its job.

It’s better than you are at optimising code, unless you’re going to get more than a 10-20% improvement in speed.

Making code shorter in its source form does not make it run faster. It may make it harder to read. For instance, this is a perfectly workable form of strstr:

    const char * strstr(const char *s1, const char *s2)
    {
      return (!s1||!s2||!*s2)?s1:((!*s1)?0:((*s1==*s2&&s1==strstr(s1+1,s2+1)-1)?s1:strstr(s1+1,s2)));
    }

Can you tell me if it has any bugs in it?

What’s its memory usage? Processor usage? How would you change it to make it work on case-insensitive comparisons? Does it overflow buffers?

Better still: does it compile to smaller or more performant code, if you rewrite it so that an entry-level developer can understand how it works?

Now go and read the implementation from your CRT. It’s much clearer, isn’t it?

Release / announce patches when your customers can patch

Releasing the patch on Friday for iOS and on Tuesday for OS X may have actually been the correct move – but it brings home the point that you should release patches when you maximise the payoff between having your customers patch the issue and having your attackers reverse engineer it and build attacks.

Make your security announcements findable

Where is the security announcement at Apple? I go to apple.com and search for “iOS 7.0.6 security update”, and I get nothing. It’d be really nice to find the bulletin right there. If it’s easier to find your documentation from outside your web site than from inside, you have a bad search engine.

Finally, a personal note

People who know me may have the impression that I hate Apple. It’s a little more nuanced than that.

I accept that other people love their Apple devices. In many ways, I can understand why.

I have previously owned Apple devices – and I have tried desperately to love them, and to find why other people are so devoted to them. I have failed. My attempts at devotion are unrequited, and the device stubbornly avoids helping me do anything useful.

Instead of a MacBook Pro, I now use a ThinkPad. Instead of an iPad (remember, I won one for free!), I now use a Surface 2.

I feel like Steve Jobs turned to me and quoted Dr Frank N Furter: “I didn’t make him for you.”

So, no, I don’t like Apple products FOR ME. I’m fine if other people want to use them.

This article is simply about a really quick and easy example of how simple faults cause major errors, and what you can do, even as an experienced developer, to prevent them from happening to you.

DigiNotar – why is Google special?

So, you’ve probably heard about the recent flap concerning a Dutch Certificate Authority, DigiNotar, who was apparently hacked into, allowing for the hackers to issue certificates for sites such as Yahoo, Mozilla and Tor.

I’ve been reading a few comments on this topic, and one thing just seems to stick out like a sore thumb.

DigiNotar’s servers issued over 200 fraudulent certificates. These certificates were revoked – but, as with all certificate revocations, you can’t really get a list of the names related to those revoked certificates to go back and see which sites you visited recently that you might want to reconsider. [You can only check to see if a certificate you’re offered matches one that was revoked.]

What behaviour would you reconsider on recently visited sites? Well, I’d start by changing my passwords at those sites, at the very least, perhaps even checking to make sure nobody had used my account in my stead.

But that’s not the thing that sticks out

What does stick out is that DigiNotar’s own certificate was removed from, well, just about everyone’s list of trusted root Certificate Authorities, once it was discovered that a fraudulent certificate in the name of *.google.com had been issued, and had not yet been revoked.

Yeah, given the title of my blog posting, I’m sure you could guess that this was the thing that I was concerned about.

So, why is Google so special?

I’m not sure I buy that DigiNotar’s removal from the trusted certificate list was simply because they failed to find one fraudulently issued certificate. It seems like, if that fraudulent certificate was for Joe Schmoe Electrical Repair, it would just have been revoked like all of the other certificates.

Removing a CA from the trusted list, after all, is pretty much going to kill that CA – every certificate ever issued by them will suddenly fail. All of their customers will have to install a new certificate, and what’s the chance those customers will go back to the CA that caused them a sudden outage to their secure web site?

So it’s not something that companies like Google, Microsoft, Mozilla, etc would do at the drop of a hat.

It certainly seems like Google is special.

The argument I would buy

There is an argument I would buy, but no one is making it. It goes something like this:

“Back in July, when we first discovered the fraudulent certificates, we had no evidence that anyone was using them in the wild, and the CRL publication schedule allowed us to quietly and easily render unusable the certificates we had discovered. Anyone visiting a fraudulent web site would simply have seen the usual certificate error.

“Then, when we discovered in August that there was still one undiscovered certificate, and that it was being used in the wild, revoking it was not enough, because the CRL publishing schedule wasn’t going to bring the revocation to people’s desktops in time. So, we had to look for other ways to prevent people from being abused by this certificate.

“We could have trusted to OCSP, but it’s unlikely that the fraudulent certificate pointed to a valid OCSP server. Besides, the use of a fraudulent certificate pretty much requires that you are a man-in-the-middle and can redirect your target to any site you like.

“We could have added this certificate to the ‘untrusted’ certificate list, but only Microsoft has a way to quickly publish that – the other browser and app vendors have to release new versions of their software, because they have a hard coded untrusted certificates list.

“And maybe there’s another certificate – or pile of certificates – that we missed.

“So, in the interests of securing the Internet, and at the risk of adversely affecting valid customers, we chose to remove this one certificate authority from everyone’s list of trusted roots.”

I’ve indented that as if it’s a quote, but as I said, this is an argument that no one is making. So it’s just a fantasy quote.

Is there another possible argument that I might be missing, but would be willing to accept?

Woot got my Zune, Zune can’t get my woot!

Quite some time ago, my wife was very sneaky. Oh, she’s sneaky again and again, but this is the piece of sneakiness that is appropriate for this post.

I logged on to woot.com one day, as I often do, and saw that there was a 30GB Zune for sale – refurbished, and quite a bit cheaper than most places had it for sale, but still more than I could plonk down without blinking.

I told my wife about it, and she told me that no, I was right, we couldn’t really afford it even at that price.

Then, months later, I found that my birthday present was a 30GB Zune – the very one from woot that she said we couldn’t afford.

Ever since then, I’ve been a strong fan of Zune and woot alike.

The other day, though, it dawned on me that I could use my Zune (now I have a Zune HD 32GB) to keep up with woot’s occasional “woot-off” events, where they proceed throughout the day to offer several deals. Unfortunately, I can’t actually buy anything from woot on the Zune.

I couldn’t figure this out for a while, and assumed that it was simply a lack of Flash support.

Sidebar: Why the Zune and iPhone Don’t Have Flash Support

It’s not immediately obvious that there’s a difference between the Zune having no Flash support, and the iPhone having no Flash support.

But there is – and it’s a little subtle.

The Zune doesn’t have Flash support because Adobe haven’t built it.

The iPhone doesn’t have Flash support because Apple won’t let Adobe build it.

Back to the main story – why my Zune can’t woot!

I did a little experimenting, and it’s not that woot requires Flash.

I tried to logon directly to the account page at https://sslwww.woot.com/Member/YourAccount.aspx (peculiar that, the URL says “Your Account”, but it’s my account, not yours, that I see there. That’s why you shouldn’t use personal pronouns in folder names).

That failed with a cryptic error – “Can’t load the page you requested. OK”

No, it’s not actually OK that you can’t load the page, but thanks for telling me what the problem was.

Oh, that’s right, you didn’t, you just told me “failed”. Takes me right back to the days of “Error 4/10”.

The best I can reckon is that, since the Zune can visit other SSL sites, and other browsers have no problem with this SSL site, the Zune simply doesn’t have trust in the certificate chain.

That should be easy to fix, all I have to do on my PC, or on any number of web browsers, is to add the site’s root certificate from its certificate chain to my Trusted Root store.

Sadly, I can find no way to do this for my Zune. So, no woot.
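On a full PC, this is a one-liner in most TLS stacks. For instance, in Python’s ssl module it might look like this (the root file name is hypothetical, so that call is commented out):

```python
import ssl

# Start from the platform's default trusted roots...
ctx = ssl.create_default_context()

# ...and add one more root of your choosing, e.g. the root of the
# site's certificate chain. ("site_root.pem" is a hypothetical file.)
# ctx.load_verify_locations(cafile="site_root.pem")

# Certificate verification stays on for every connection made with ctx.
assert ctx.verify_mode == ssl.CERT_REQUIRED
```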

Would this be a feature other people would want?

I think this would – for a start, it would mean that users could add web sites that were previously unavailable to them – including test web sites that they might be working on, which are supported by self-signed test certificates.

But more than that, adding a new root certificate to the trusted root certificate store on the Zune is a prerequisite for another feature that people have been begging for: without the ability to add a root certificate, it is often impossible to support WPA2 Enterprise wireless mode. So, the “add certificate to my Zune’s Trusted Root store” feature would be a step toward providing WPA2 Enterprise support.

How would that interface look on the Zune?

I’m not sure that the interface would have to be on the Zune itself – perhaps the Zune could save up failed certificate matches to pass to the Zune software, and then ask the operator of the Zune software at the next Sync, “do you want to trust these certificates to enable browsing to these sites?”

Similarly, for WPA2 Enterprise mode, it could ask the Zune software user “do you want to connect to this WPA2 Enterprise network in future?”

TLS Renegotiation attack – Microsoft workaround/patch

Hidden by the smoke and noise of thirteen (13! count them!) security bulletins, with updates for 26 vulnerabilities and a further 4 third-party ActiveX Killbits (software that other companies have asked Microsoft to kill because of security flaws), we find the following, a mere security advisory:

Microsoft Security Advisory (977377): Vulnerability in TLS/SSL Could Allow Spoofing

It’s been a long time coming, this workaround – which disables TLS / SSL renegotiation in Windows, not just IIS.

Disabling renegotiation in IIS is pretty easy – you simply disable client certificates or mutual authentication on the web server. This patch gives you the ability to disable renegotiation system-wide, even in the case where the renegotiation you’re disabling is on the client side. I can’t imagine for the moment why you might need that, but when deploying fixes for symmetrical behaviour, it’s best to control it using switches that work in either direction.

The long-term fix is yet to arrive – and that’s the creation and implementation of a new renegotiation method that takes into account the traffic that has gone on before.

To my mind, even this is a bit of a concession to bad design of HTTPS, in that HTTPS causes a “TOC/TOU” (Time-of-check/Time-of-use) vulnerability, by not recognising that correct use of TLS/SSL requires authentication and then resource request, rather than the other way around. But that’s a debate that has enough clever adherents on both sides to render any argument futile.

Suffice it to say that this can be fixed most easily by tightening up renegotiation at the TLS layer, and so that’s where it will be fixed.

Should I apply this patch to my servers?

I’ll fall back to my standard answer to all questions: it depends.

If your servers do not use client auth / mutual auth, you don’t need this patch. Your server simply isn’t going to accept a renegotiation request.

If your servers do use client authentication / mutual authentication, you can either apply this patch, or you can set the earlier available SSLAlwaysNegoClientCert setting to require client authentication to occur on initial connection to the web server.

One or the other of these methods – the patch, or the SSLAlwaysNegoClientCert setting – will work for your application, unless your application strictly requires renegotiation in order to perform client auth. In that case, go change your application, and point its developers to documentation of the attack, so that they can see the extent of the problem.
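That guidance condenses into a small decision table. Here’s a sketch of it as Python (the function and its inputs are my own naming, not anything from the advisory):

```python
def renegotiation_advice(uses_client_auth: bool,
                         app_requires_renegotiation: bool) -> str:
    """Condense the patch/setting guidance above into one lookup."""
    if not uses_client_auth:
        # The server won't accept a renegotiation request anyway.
        return "no action needed: server won't accept renegotiation"
    if app_requires_renegotiation:
        # Neither the patch nor the setting can help without breakage.
        return "fix the application first; the patch would break it"
    return "apply the 977377 patch or set SSLAlwaysNegoClientCert"
```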

Be sure to read the accompanying KB article to find out not only how to turn on or off the feature to disable renegotiation, but also to see which apps are, or may be, affected adversely by this change – to date, DirectAccess, Exchange ActiveSync, IIS and IE.

How is Microsoft’s response?

Speed

I would have to say that on the speed front, I would have liked to see Microsoft make this change far quicker. Disabling TLS/SSL renegotiation should not be a huge amount of code, and while it has some repercussions, and will impact some applications, as long as the change did not cause instability, there may be some institutions who would want to disable renegotiation lock, stock and barrel in a hurry out of a heightened sense of fear.

I’m usually the first to defend Microsoft’s perceived slowness to patch, on the basis that they do a really good job of testing the fixes, but for this, I have to wonder if Microsoft wasn’t a little over-cautious.

Accuracy

While I have no quibbles with the bulletin, there are a couple of statements in the MSRC blog entry that I would have to disagree with:

IIS 6, IIS 7, IIS 7.5 not affected in default configuration

Customers using Internet Information Services (IIS) 6, 7 or 7.5 are not affected in their default configuration. These versions of IIS do not support client-initiated renegotiation, and will also not perform a server-initiated renegotiation. If there is no renegotiation, the vulnerability does not exist. The only situation in which these versions of the IIS web server are affected is when the server is configured for certificate-based mutual authentication, which is not a common setting.

Well, of course – in the default setting on most Windows systems, IIS is not installed, so it’s not vulnerable.

That’s clearly not what they meant.

Did they mean “the default configuration with IIS installed and turned on, with a certificate installed”?

Clearly, but that’s hardly “the default configuration”. It may not even be the most commonly used configuration for IIS, as many sites escape without needing to use certificates.

Sadly, if I add “and mutual authentication enabled”, we’re only one checkbox away from the “default configuration” to which this article refers, and we’re suddenly into vulnerable territory.

In other words, if you require client / mutual authentication, then the default configuration of IIS that will achieve that is vulnerable, and you have to make a decided change to non-default configuration (the SSLAlwaysNegoClientCert setting), in order to remain non-vulnerable without the 977377 patch.

The other concern I have is over the language in the section “Likelihood of the vulnerability being exploited in general case”, which discusses only the original CSRF-like behaviour exploited under the initial reports of this problem.

There are other ways to exploit this, some of which require a little asinine behaviour on the part of the administrator, and others of which are quite surprisingly efficient. I was particularly struck by the ability to redirect a client, and make it appear that the server is the one doing the redirection.

I think that Eric and Maarten understate the likelihood of exploit – and they do not sufficiently emphasise that the chief reason this won’t be exploited is that it requires a MITM (Man-in-the-middle) attack to have already successfully taken place without being noticed. That’s not trivial or common – although there are numerous viruses and bots that achieve it in a number of ways.

Clarity

It’s a little unclear on first reading the advisory whether this affects just IIS or all TLS/SSL users on the affected system. I’ve asked if this can be addressed, and I’m hoping to see the advisory change in the coming days.

Summary

I’ve rambled on for long enough – the point here is that if you’re worried about SSL / TLS client certificate renegotiation issues that I’ve reported about in posts 1, 2 and 3 of my series, by all means download and try this patch.

Be warned that it may kill behaviour your application relies upon – if that is the case, then sorry, you’ll have to wait until TLS is fixed, and then drag your server and your clients up to date with that fix.

The release of this advisory is by no means the end of the story for this vulnerability – there will eventually be a supported and tested protocol fix, which will probably also be a mere advisory, followed by updates and eventually a gradual move to switch to the new TLS versions that will support this change.

This isn’t a world-busting change, but it should demonstrate adequately that changes to encryption protocols are not something that can happen overnight – or even in a few short months.

Ten key truths

In the spirit of "ten unavoidable security truths", and numerous other top-ten lists, here’s a list of ten key truths that apply to public / private key pairs:

  1. Your private key has to be private to you. It must not be created by anyone else.
  2. Anyone who has your private key is you, for the purposes to which that key is applied.
  3. If you have a private key that was generated by your employer, then that key identifies you as a part of the employer. It cannot be used to uniquely identify you, because the key was generated under your employer’s control.
  4. Keys associated with expired or revoked certificates are not always useless – you can use them to decrypt a file that they encrypted a long time ago; you can also verify the time-stamped signature of a document, if the certificate was valid at the time of the signature.
  5. A key is a number – it cannot expire, it is the associated certificate that expires. Similarly, the certificate, not the key, is what is revoked after exposure.
  6. A key is not a certificate. A certificate is a statement (usually of ownership) about a key pair, and contains the public key.
  7. Too short a key is no key at all, and an exposed private key is no key at all.
  8. Protecting a password or key by encrypting it may do nothing more than extend the problem by a level – now you have to protect the key used to encrypt the password / key.
  9. Revocation of a certificate applies only to trusting parties who actually bother to check for revocation.
  10. A key derived from a password has the same strength as the password, no matter how long the key.
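Truth 10 is worth a quick demonstration. A KDF like PBKDF2 will happily stretch a password into a 256-bit key, but the key’s real strength is still bounded by how guessable the password is (the password and salt here are purely illustrative):

```python
import hashlib

password = b"correct horse"            # the only secret input
salt = b"fixed-salt-for-illustration"  # public, per-user value

# Derive 32 bytes (256 bits) of key material from the password.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)

# The output is 256 bits long -- but an attacker who can guess the
# password can rerun this same public derivation, so the effective
# strength is the password's entropy, not 256 bits.
assert len(key) == 32
```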

Note that this list describes what happens when cryptography is working perfectly. There are other key facts that apply to broken cryptography and broken process:

  1. You cannot increase entropy by repeating bits, or by padding with constant or predictable values.
  2. If someone creates a private key for you, they can become you using that key.
  3. Protecting anything by using a password or key to feed a yes/no gate requires the attacker to only fool the yes/no gate. Encrypted USB drive manufacturers found this out recently to their embarrassment.
  4. Inventing your own crypto – or the processes around it – is always a bad idea. Research others’ ideas and use them, particularly published standards.
  5. Even the longest strongest key in the world can be defeated by a big enough bribe to the key-holder.
    1. If the price of a card number on the black market is $1, then access to a million card numbers is equivalent to a million dollars. Who do you pay well enough that they can ignore that value?
  6. A key or password that is never updated as it passes through many hands is vulnerable to abuse by each of those hands.
  7. Broken cryptographic algorithms cannot be made better by increasing the length of the key or by applying the algorithm more times.
  8. All cryptographic algorithms become broken in time. The trick is to choose one that lasts longer than your need to protect your data.
  9. When you lose the private key through insufficient key management, you have lost the data it protected.
  10. Even a good cryptographic algorithm can be destroyed by a poor key-generation algorithm.

Note that these lists are rather arbitrarily scoped to ten – there may be more important truths I’ve forgotten, or items I’ve included that aren’t really so important.
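The first item on the broken-crypto list is easy to see with a toy example: padding a short secret out by repetition makes a longer string, but the attacker’s search space doesn’t move an inch (the secret here is illustrative):

```python
import math

secret = "hunt"                # 4 lowercase letters
padded = (secret * 8)[:32]     # 32 characters, fully determined by `secret`

# The attacker still only has to search the space of 4-letter secrets;
# the padded form adds length, not entropy.
guesses = 26 ** 4
entropy_bits = math.log2(guesses)   # roughly 18.8 bits, padded or not
```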

Corrections to Thierry Zoller’s Whitepaper

Thanks to Thierry Zoller for mentioning me in the FTP section of his whitepaper summary of the TLS renegotiation attacks on various protocols. I’m glad he also spells my name right – you’d be surprised how many people get that wrong, although I’m sure Thierry gets his own share of people unable to spell his name.

The whitepaper itself contains some really nice and simple documentation of the SSL MITM renegotiation attack, and how it works. It’s well worth reading if you’re looking for some insight into how this works.

First, though, a couple of corrections to Thierry’s summary – while he’s working on revising his whitepaper, I’ll post them here:

  • The name of my FTP server is “WFTPD Server”, or “WFTPD Pro Server”, as you will see from http://www.wftpd.com. Of the two, only the WFTPD Pro Server has FTP over TLS capabilities.
  • SFTP is not “FTP over SSH”. SFTP is a completely different protocol – a sub-protocol of SSH. For example, where FTP uses commands of three or four letters, SFTP uses single-byte binary instructions. There is no one-to-one mapping between SFTP commands and FTP commands, and there are many other differences that aren’t really worth going into.
  • I don’t see the use of CCC as a reason to “use SFTP over FTPS” – I see it as a reason to not use FTPS clients that require, or FTPS servers that support, the CCC command. There are better ways to surmount the NAT problem in FTPS – the best is IPv6, but even in IPv4, the use of block-mode FTP over the default data channel removes NATs from being a problem for FTPS.
  • The language in the “Client Certificate Authentication” FTPS section appears to be an incomplete sentence. What I think he is trying to say is that when authentication is performed in FTPS, it resets the command state, so that commands entered prior to authentication, if they are executed at all, are executed in the context that existed at that time, rather than the newly-authenticated context.

I think that where FTPS has problems that are thrown into sharp relief with the SSL MITM renegotiation attacks I’ve been discussing for a while now, it has had those problems before. If an attacker can monitor and modify the FTP control channel (because the client requested CCC and the server allowed it), the attacker can easily upload whatever data they like in place of the client’s bona fide upload.

The renegotiation attack simply makes it easier for the attacker to hide the attack. It’s the use of CCC which facilitates the MITM attack, far more than the renegotiation does.

To address one further comment I’ve heard with regard to SSL MITM attacks, I hear “yeah, but getting to be a man-in-the-middle is so difficult anyway, that even a really simple attack is unlikely”. That’s a true comment – for the most part, there is little chance of a man-in-the-middle attack occurring on the general Internet in a bulk situation. The ‘last mile’ of home wireless, coffee bars and other public wireless hangouts, or the possibility of DNS hijacking, HOSTS file editing, broadband router hacking, or just plain viruses and worms, are the place where most man-in-the-middle entry points exist.

However, if you’re going to assert that it’s truly unlikely that an attacker can insert himself into your network stream, you basically have no reason whatever to use SSL / TLS – without a potential for that interception and modification of your traffic, there’s really no need to authenticate it, encrypt it, or monitor its integrity along the path.

The fact that a protocol or application uses SSL / TLS means that it tacitly assumes the existence of a man in the middle. If SSL / TLS allows a man-in-the-middle attack at all, it fails in its basic raison d’être.

Next post, I promise something other than SSL renegotiation attacks.

My take on the SSL MITM Attacks – part 3 – the FTPS attacks

[Note – for previous parts in this series, see Part 1 and Part 2.]

FTP, and FTP over SSL, are my specialist subject, having written one of the first FTP servers for Windows to support FTP over SSL (and the first standalone FTP server for Windows!)

Rescorla and others have concentrated on the SSL MITM attacks and their effects on HTTPS, declining to discuss other protocols about which they know relatively little. OK, time to step up and assume the mantle of expert, so that someone with more imagination can shoot me down.

FTPS is not vulnerable to this attack.

No, that’s plainly rubbish. If you start thinking along those lines in the security world, you’ve lost it. You might as well throw in the security towel and go into a job where you can assume everybody loves you and will do nothing to harm you. Be a developer of web-based applications, say. 🙂

FTPS has a number of possible vulnerabilities

And they are all dependent on the features, design and implementation of your individual FTPS server and/or client. That’s why I say “possible”.

Attack 1 – renegotiation with client certificates

The obvious attack – renegotiation for client certificates – is likely to fail, because FTPS starts its TLS sessions in a different way from HTTPS.

In HTTPS, you open an unauthenticated SSL session, request a protected resource, and the server prompts for your client certificate.

In FTPS, when you connect to the control channel, you provide your credentials at the first SSL negotiation or not at all. There’s no need to renegotiate, and certainly there’s no language in the FTPS standard that allows the server to query for more credentials part way into the transaction. The best the server can do is refuse a request and say you need different or better credentials.
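That order of operations is visible in Python’s standard ftplib, for example (the host and credentials below are placeholders, so the network calls are commented out):

```python
from ftplib import FTP_TLS

ftps = FTP_TLS()                  # nothing on the wire yet
# ftps.connect("ftp.example.com", 21)
# ftps.auth()                     # AUTH TLS: one negotiation, up front
# ftps.login("user", "password")  # USER/PASS go over that same session
# ftps.prot_p()                   # protect the data channel as well

# There is no later step where the server can demand a renegotiation
# for more credentials, unlike an HTTPS server prompting for a client
# certificate partway through a session.
```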

Attack 2 – unsolicited renegotiation without credentials

A renegotiation attack on the control channel that doesn’t rely on making the server ask for client credentials is similarly unlikely to succeed – when the TLS session is started with an AUTH TLS command, the server puts the connection into the ‘reinitialised’ state, waiting for a USER and PASS command to supply credentials. Request splitting across the renegotiation boundary might get the user name, but the password wouldn’t end up anywhere the attacker could get to it.

Attack 3 – renegotiating the data connection

At first sight, the data connection, too, is difficult or impossible to attack – an attacker would have to guess which transaction was an upload in order to be able to prepend his own content to the upload.

But that’s betting without the effect that NATs had on the FTP protocol.

Because the PORT and PASV commands involve sending an IP address across the control channel, and because NAT devices have to modify these commands and their responses, in many implementations of FTPS, after credentials have been negotiated on the control channel, the client issues a “CCC” command, to drop the control channel back into clear-text mode.

Yes, that’s right: after negotiating SSL with the server, the client may throw away the protection on the control channel. The MitM attacker can then easily see what files are going to be accessed, over which ports and IP addresses – and if the server supports SSL renegotiation, the attacker can put his data in at the start of the upload before renegotiating to hand off to the legitimate client. Because the client thinks everything is fine, and the server just assumes a renegotiation is fine, there’s no reason for either one to doubt the quality of the file that’s been uploaded.

How could this be abused? Imagine that you are uploading an EXE file, and the hacker prepends it with his own code. That’s how I wrote code for a ‘dongle’ check in a program I worked on over twenty years ago, and the same trick could still work easily today. Instant Trojan.

There are many formats of file that would allow abuse by prepending data. CSV files, most exploitable buffer overflow graphic formats, etc.
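A prepending attack needs no modification of the client’s bytes at all. A CSV makes the effect obvious (the data here is entirely made up):

```python
import csv
import io

legit_upload = "alice,100\nbob,250\n"           # what the client sends
injected = "mallory,999999\n" + legit_upload    # what the server receives

rows = list(csv.reader(io.StringIO(injected)))
# The attacker's record now leads the file; the client's rows follow,
# byte-for-byte intact, so nothing looks wrong to either end.
```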

Attack 3.5 – truncation attacks

While I’m on FTP over SSL implementations and the data connection, there’s also the issue that most clients don’t properly terminate the SSL connection in FTPS data transfers.

As a result, the server can’t afford to report an error when a MitM closes the TCP connection underneath it with an unexpected TCP FIN.

That’s bad – but combine it with FTP’s ability to resume a transfer from part-way into a file, and you realize that an MitM could actually stuff data into the middle of a file by allowing the upload to start, interrupting it after a few segments, and then when the client resumed, interjecting the data using the renegotiation attack.

The attacker wouldn’t even need to be able to insert the FIN at exactly the byte mark he wanted – after all, the client will be sending the REST command in clear-text thanks to the CCC command. That means the attacker can modify it, to pick where his data is going to sit.
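The effect of a tampered REST offset can be modelled with a plain seek-and-write (the values here are illustrative):

```python
# 20 bytes the client has already uploaded
data = bytearray(b"A" * 20)

# After CCC, the REST command travels in clear text, so the attacker
# can rewrite the offset to whatever he likes...
offset = 8

# ...and the "resumed" transfer then lands wherever he chose.
data[offset:offset + 4] = b"EVIL"
```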

Not as earth-shattering as the HTTPS attacks, but worth considering if you rely on FTPS for data security.

How does WFTPD Pro get around these attacks?

1. I never bothered implementing SSL / TLS renegotiation – didn’t see it as necessary; never had the feature requested. Implementing unnecessary complexity is often cause for a security failure.

2. I didn’t like the CCC command, and so I didn’t implement that, either. I prefer to push people towards using Block instead of Stream mode to get around NAT restrictions.

I know, it’s merely fortunate that I made those decisions, rather than that I had any particular foresight, but it’s nice to be able to say that my software is not vulnerable to the obvious attacks.

I’ve yet to run this by other SSL and FTP experts to see whether I’m still vulnerable to something I haven’t thought of, but my thinking so far makes me happy – and makes me wonder what other FTPS developers have done.

I wanted to contact one or two to see if they’ve thought of attacks that I haven’t considered, or that I haven’t covered. So far, however, I’ve either received no response, or I’ve discovered that they are no longer working on their FTPS software.

Let me know if you have any input of your own on this issue.