Monthly Archives: May 2008

Searching for Weak Debian / Ubuntu SSL Certificates

I’ve seen a number of people promote packages shipped for Debian and Ubuntu that let users scan their collected keys – OpenSSH, OpenSSL or OpenVPN – to discover whether they’re too weak to be of any functional use. [See my earlier story on Debian and the OpenSSL PRNG]

These tools all have one problem.

They run on the Linux systems in question, and they scan the certificates in place.

Given that the keys in question could be as old as two years, it seems likely that many of them have migrated off the Linux platforms on which they were generated, and onto web sites hosted outside the Linux platform.

Or, there may simply be a requirement for a Windows-centric security team to be able to scan existing sites for those Linux systems that have been running for a couple of years without receiving maintenance (don’t nod like that’s a good thing).

So, I’ve updated my SSLScan program. I’m attaching a copy of the tool to this blog post (along with a copy of the Ubuntu OpenSSL blacklists for 1024-bit and 2048-bit keys, if I can get approval), though of course I would suggest keeping up with your own copies of these blacklists. It took a little research to find out how to calculate the quantity Debian uses for the fingerprint, but I figure that it’s best to go with the most authoritative source to begin with.

Please let me know if there are other, non-authoritative blacklists that you’d like to see the code work with – for now, the tool will simply search for "blacklist.RSA-1024" and "blacklist.RSA-2048" in the current directory to build a list of weak key fingerprints.
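For the curious, here’s a sketch (in C++, using OpenSSL’s own SHA-1) of how I understand the fingerprint calculation to work. The assumption – mine, so verify it against the openssl-blacklist / openssl-vulnkey packages before relying on it – is that each blacklist entry is the last 20 hex characters of the SHA-1 of the "Modulus=<HEX>" line, newline included, exactly as "openssl x509 -noout -modulus" prints it:

    // Sketch: compute a Debian-style fingerprint for an RSA public key and
    // look it up in a blacklist file. Function names are mine; the format
    // details are my reading of the Ubuntu blacklist packages.
    #include <openssl/sha.h>
    #include <cstdio>
    #include <fstream>
    #include <set>
    #include <string>

    // hexModulus: the RSA modulus in upper-case hex, no leading "0x" -
    // i.e. exactly what openssl prints after "Modulus=".
    std::string DebianFingerprint(const std::string &hexModulus)
    {
        std::string line = "Modulus=" + hexModulus + "\n";
        unsigned char digest[SHA_DIGEST_LENGTH];
        SHA1(reinterpret_cast<const unsigned char *>(line.data()), line.size(), digest);

        char hex[2 * SHA_DIGEST_LENGTH + 1];
        for (int i = 0; i < SHA_DIGEST_LENGTH; ++i)
            std::sprintf(hex + 2 * i, "%02x", digest[i]);

        // The blacklist files appear to store only the trailing 20 hex characters.
        return std::string(hex).substr(2 * SHA_DIGEST_LENGTH - 20);
    }

    bool IsWeakDebianKey(const std::string &hexModulus, const char *blacklistFile)
    {
        std::ifstream in(blacklistFile);
        std::set<std::string> weak;
        std::string entry;
        while (std::getline(in, entry))
            if (!entry.empty() && entry[0] != '#')   // skip comment lines
                weak.insert(entry);
        return weak.count(DebianFingerprint(hexModulus)) != 0;
    }

If the fingerprint for a key turns up in blacklist.RSA-1024 or blacklist.RSA-2048, the key was generated by the broken build and needs replacing, not just re-signing.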

I’ve found a number of surprising certificates that haven’t been reissued yet, and I’ll let you know about them after the site owners have been informed.

[Sadly, I didn’t find https://whitehouse.gov before it was changed – its certificate is shared with, of all places, https://www.gov.cn – yes, the White House, home of the President of the United States, is hosted from the same server as the Chinese government. The certificate was changed yesterday, 2008/5/21. https://www.cacert.org’s certificate was issued two days ago, 2008/5/20 – coincidence?]

My examples are from the web, but the tool will work on any TCP service that responds immediately with an attempt to set up an SSL connection – so LDAP over SSL will work, but FTP over SSL will not. It won’t work with SSH, because that apparently uses a different key format.

Simply run SSLScan, and enter the name of a web site you’d like to test, such as www.example.com – don’t enter "http://" at the beginning, but remember that you can test a host at a non-standard port (which you will need to do for LDAP over SSL!) by including the port in the usual manner, such as www.example.com:636.

If you’re scanning a larger number of sites, simply put the list of addresses into a file, and supply the file’s name as the argument to SSLScan.

Let me know if you think of any useful additions to the tool.

Here is some slightly modified output from a sample run of the tool (the names have been changed to protect the innocent):

[screenshot: sample output from a run of SSLScan]

The text to look for here is ">>>This Key Is A Weak Debian Key<<<".

Debian and the OpenSSL PRNG

[PRNG is an abbreviation for "Pseudo-Random Number Generator", a core component of key generation in any cryptographic library.]

A few people have already commented on the issue itself – Debian issued, in 2006, a version of their Linux build that contained a modified version of OpenSSL. The modification has been found to drastically reduce the randomness of the keys generated by OpenSSL on Debian Linux and any Linux derived from that build (such as Ubuntu, Edubuntu, Xubuntu, and any number of other buntus). Instead of being able to generate 1024-bit RSA keys that have a 1-in-2^1024 chance of being the same, the Debian build generated 1024-bit RSA keys that have a 1-in-2^15 chance of being the same (that’s 1 in 32,768).

Needless to say, that makes life really easy on a hacker who wants to pretend to be a server or a user who is identified as the owner of one of these keys.

The fun comes when you go to http://metasploit.com/users/hdm/tools/debian-openssl/ and see what the change actually was that caused this. Debian fetched the source for OpenSSL, and found that Purify flagged a line as accessing uninitialised memory in the random number generator’s pre-seeding code.

So. They. Removed. The. Line.

I thought I’d state that slowly for dramatic effect.

If they’d bothered researching Purify and OpenSSL, they’d have found this:

http://rt.openssl.org/Ticket/Display.html?id=521&user=guest&pass=guest

Which states (in 2003, three years before Debian applied teh suck patch) “No, it’s fine – the problem is Purify and Valgrind assume all use of uninitialised data is inherently bad, whereas a PRNG implementation has nothing but positive (or more correctly, non-negative) things to say about the idea.”

So, Debian removed a source of random information used to generate the key. Silly Debian.

But there’s a further wrinkle to this.

If I understand HD Moore’s assertions correctly, this means that the sole sources of entropy (essentially, “randomness”) for the random numbers used to generate keys in Debian are:

  1. The Process ID (from 1 to 32,767)
  2. The contents of an uninitialised area in the process’ memory
  3. uh… that’s it.

[Okay, so that’s not strictly true in all cases – there are other ways to initialise randomness, but these two are the fallback position – the minimum entropy that can be used to create a key. In the absence of a random number source, these are the two things that will be used to create randomness.]
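To make the effect concrete, here’s a toy analogy – emphatically not OpenSSL’s actual code, just a little pool of mine that’s meant to be stirred with whatever seed material it’s handed. Comment out the single marked line, the moral equivalent of what Debian did, and the only thing left distinguishing one process’s "random" output from another’s is the process ID:

    // Toy illustration only - a stand-in for a seeded pool, not OpenSSL.
    #include <cstdio>
    #include <unistd.h>   // getpid()

    struct ToyPool
    {
        unsigned long long state = 0x6a09e667f3bcc908ULL;

        void Add(const void *buf, size_t len)        // cf. RAND_add()
        {
            const unsigned char *p = static_cast<const unsigned char *>(buf);
            for (size_t i = 0; i < len; ++i)
                state = state * 6364136223846793005ULL + p[i];   // <-- "the line"
        }

        unsigned long long Bytes()                   // cf. RAND_bytes()
        {
            state ^= static_cast<unsigned long long>(getpid());  // the PID still gets mixed in
            state ^= state >> 33;
            state *= 0xff51afd7ed558ccdULL;
            return state ^ (state >> 29);
        }
    };

    int main()
    {
        unsigned char entropy[32] = {};   // pretend this came from /dev/urandom or a seed file
        if (FILE *f = std::fopen("/dev/urandom", "rb"))
        {
            if (std::fread(entropy, 1, sizeof entropy, f) != sizeof entropy)
                std::fprintf(stderr, "short read - carrying on with what we got\n");
            std::fclose(f);
        }

        ToyPool pool;
        pool.Add(entropy, sizeof entropy);           // neutered by removing "the line"
        std::printf("%016llx\n", pool.Bytes());
        return 0;
    }

Run it a few times with the marked line commented out, and the output changes only as much as the PID does – which is the whole problem.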

If you compile C++ code using Microsoft’s Visual C++ compiler in DEBUG mode, or with the /GZ, /RTC1, or /RTCs flags, you are asking the compiler to automatically initialise local (stack) variables to a recognisable fill pattern – 0xCC. I’m sure there’s some similar behaviour available from Linux compilers, because this aids with debugging accidental uses of uninitialised memory.

But what if you don’t set those flags?

What does “uninitialised memory” contain?

It would be bad if “uninitialised memory” contained memory from other processes – previous processes that had owned memory but were now defunct – because that would potentially mean that your new process had access to secrets that it shouldn’t.

So, “uninitialised memory” has to be initialised to something, at least the first time it is accessed.

Is it really going to be initialised to random values? That would be such a huge waste of processor time – and anyway, we’re looking at this from the point of view of a cryptographic process, which needs to have strongly random numbers.

No, random would be bad. Perhaps in some situations, the memory will be filled with copies of ‘public’ data – environment variables, say. But most likely, because it’s a fast easy thing to do, uninitialised memory will be filled with zeroes.

Of course, after a few functions are called, and returned from, and after a few variables are created and go out of scope, the stack will contain values indicative of the course that the program has taken so far – it may look randomish, but it will probably vary very little, if any, from one execution of the program to another.
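If you want to see this for yourself, here’s a deliberately naughty little program – reading uninitialised memory is undefined behaviour, which is rather the point, so treat the result as an observation, not a guarantee. Built as a DEBUG build (or with /RTCs), the bytes tend to come back as the 0xCC fill pattern; built in release mode, you get whatever the start-up code happened to leave on the stack:

    #include <cstdio>

    // Deliberately reads an uninitialised local buffer. Undefined behaviour,
    // so a sufficiently clever optimiser could print anything at all - but in
    // practice you'll see either the debug fill pattern or stack leftovers.
    void PrintUninitialised()
    {
        unsigned char buf[16];                     // never initialised
        for (unsigned i = 0; i < sizeof buf; ++i)
            std::printf("%02x ", static_cast<unsigned>(buf[i]));
        std::printf("\n");
    }

    int main()
    {
        PrintUninitialised();
        return 0;
    }

If the release-mode output looks much the same run after run, that’s exactly the sort of "randomness" you don’t want feeding a key generator.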

In the absence of a random number seed file, or a random number generator providing /dev/urandom or /dev/random, then, an OpenSSL key is going to have a 1 in 32,768 chance of being the same as a key created on a similar build of OpenSSL – higher, if you consider that most PIDs fall in a smaller range.

So, here’s some lessons to learn about compiling other people’s cryptographic code:

  1. Don’t ever compile cryptographic code in release mode, because you will optimize away lines that clear secrets from memory (there’s a sketch of this after the list).
  2. Don’t ever compile cryptographic code in debug mode, because you will initialize memory that is expected to be uninitialised and random.
  3. Don’t ever modify cryptographic code, even if it throws up warnings. You don’t understand what you’re doing.
  4. Don’t ever compile cryptographic code, because you don’t know what you are doing.
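Here’s what lesson 1 looks like in practice – a minimal sketch of mine, not anybody’s production code. In a release build the compiler is entitled to throw away the memset(), because the buffer is never read again, so the secret lingers on the stack; SecureZeroMemory() exists precisely so that the wipe can’t be optimised away:

    #include <windows.h>
    #include <cstring>

    void UseSecret()
    {
        char password[64] = "hunter2";

        // ... hand the password to some authentication call here ...

        // Dead store: a release-mode compiler may remove this line entirely.
        std::memset(password, 0, sizeof password);

        // SecureZeroMemory is documented not to be optimised out.
        SecureZeroMemory(password, sizeof password);
    }

    int main()
    {
        UseSecret();
        return 0;
    }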

Why I use CryptoAPI

This is one reason why I prefer to use Microsoft’s CryptoAPI, rather than libraries such as OpenSSL. There are others:

  1. It’s not my fault if something goes wrong with the crypto.
  2. The users will apply patches to the crypto, and I don’t have to go persuading my users to apply the patches.
  3. There’s a central place where administrators will expect to find crypto keys, and it’s well-protected.
  4. The documentation for CryptoAPI is far better than the documentation for OpenSSL, which is at best confusing, and at worst, non-existent.

In fairness, there are reasons not to use CryptoAPI:

  1. New algorithms are made available for new versions of Windows, and not backported readily to older versions. With a library you ship, you get to decide which version customers can run – unless someone else comes and installs another version.
  2. Microsoft’s documentation is better, but it’s still not perfect. Once in a while, it’s not even correct. At least if you have the source code, and are insanely motivated, you can find out what the truth of a matter is.
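Either way, here’s a flavour of what I mean by letting the platform do the work – a minimal sketch (mine, nothing official) that asks CryptoAPI for random bytes instead of seeding and nursing a PRNG of your own:

    #include <windows.h>
    #include <wincrypt.h>
    #include <cstdio>

    #pragma comment(lib, "advapi32.lib")

    int main()
    {
        HCRYPTPROV hProv = 0;
        if (!CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
        {
            std::printf("CryptAcquireContext failed: 0x%08lx\n", GetLastError());
            return 1;
        }

        BYTE buffer[16];
        if (CryptGenRandom(hProv, static_cast<DWORD>(sizeof buffer), buffer))
        {
            for (unsigned i = 0; i < sizeof buffer; ++i)
                std::printf("%02x", buffer[i]);
            std::printf("\n");
        }

        CryptReleaseContext(hProv, 0);
        return 0;
    }

The seeding, the entropy gathering and the patching of all of the above stay Microsoft’s problem, not mine.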

We’ll still be learning lessons for a while…

The lessons to learn from this episode are almost certainly not yet over. I expect someone to find in the next few weeks that OpenSSL with no extra source of entropy on some operating system or family of systems generates easily guessed keys, even using the “uninitialised memory” as entropy. I wait with ‘bated breath.

Change the Administrator account name?

Boxers Religious debates are rarely clean or pretty.

The same is true in all spheres, whether debating Christianity against Islam, Linux against Windows, or Cagney vs Lacey.

In security, there are a few divisive issues that are always going to crop up.

Is your datacentre network trustworthy enough to pump secret data around it at any speed?

Are virtual machines on the same host PC “separated” for segregation of duties purposes?

Is SHA-1 completely broken yet?

There’s nothing more infuriating than arguing your position on one side of such a debate, only to see those infuriating people on the other side sit smugly in their assertion that what you state has no bearing on their view, which is still more correct than yours, nyaah nyaah.

I hope it doesn’t get that way with a debate between two people I like to claim as friends – Jesper Johansson and Roger Grimes – who are currently waging their war of words in TechNet, in what I hope will become a regular series.

The current article is on the big debate between those who think it’s a great security idea to rename the Administrator account to something else, and those who perceive little or no benefit in the practice – so little that it’s not worth doing.

For those of you too lazy to follow the link and read the article, Jesper (backed by Microsoft insider Steve Riley) is on the "don’t bother renaming Administrator" side, while Roger (with his own insider, Aaron Margosis) is on the side that says renaming the Administrator account is a security win.

I really can’t dispute the mathematics, which says that if you have a 10-character password, you have a 1-in-umpteen-thousand chance of someone guessing it and logging in as Administrator; if you have a 10-character password and a renamed Administrator account, however, the chance drops to 1-in-umpty-thousand. A couple of orders of magnitude of benefit, yes?

Sure – but there’s a couple of points I’d make here:

  1. There’s not much difference between zero and zero, and the two numbers representing the probability of a random guess succeeding are as close to zero as makes no realistic difference. At that level of difference between near-zeroes, you’re as likely to find your password is weakened by poor choice of random number generator as you are to find that renaming the account protected you while the password did not. In essence, you’re saying “we’re already protected against the sort of guy with enough luck to win the lottery a million times in a row, but just in case, we want to protect ourselves against the guy with luck enough that he could win a million and one times.”
  2. You could get the same increase in probabilistic protection by lengthening the password. Even if all you did was to add into the password the name that you were going to give the Administrator account, you’ve provided yourself with just as much mathematical protection against random guessing as you would have by changing the Administrator account name. (There’s some back-of-the-envelope arithmetic after this list.)
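Here’s that back-of-the-envelope arithmetic, with my own illustrative assumptions baked in – a 94-character printable set, a 10-character password, and an attacker choosing from, say, 10,000 plausible account names:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Illustrative assumptions only: 94 printable characters, 10-character
        // password, 10,000 plausible names for a renamed Administrator account.
        double passwordOnly = std::pow(94.0, 10);          // default "Administrator" name
        double nameChoices  = 10000.0;                     // "which name did they pick?"
        double renamed      = passwordOnly * nameChoices;  // rename the account
        double appended     = passwordOnly * nameChoices;  // or append that same name to the password
        double twoMoreChars = std::pow(94.0, 12);          // or just add two random characters

        std::printf("10-char password, default name:    about 2^%.0f guesses\n", std::log2(passwordOnly));
        std::printf("10-char password, renamed account:  about 2^%.0f guesses\n", std::log2(renamed));
        std::printf("name appended to the password:      about 2^%.0f guesses\n", std::log2(appended));
        std::printf("12-char password, default name:     about 2^%.0f guesses\n", std::log2(twoMoreChars));
        return 0;
    }

The factor is the same whether the name guards the account field or sits on the end of the password – and a couple of extra random characters buy you roughly as much again, which is rather the point.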

Okay, so maybe you’re not really getting orders of magnitude better protection – but surely it can’t hurt security, and it feels enough like security that several people in the field recommend it.

To me, that’s old-style security thinking, where the goal was to disable, disable, disable – when the web sites and applications were so full of holes that any time you saw something that looked like a hole, you immediately knew that the right thing was to plug it up.

Modern information security, though, should be more about enabling – enabling business and customers alike, to conduct business without unnecessary inconvenience. Without wishing to sound like Yoda, inconvenience leads to confusion; confusion leads to mistakes, which lead inexorably to insecurity.

If you rename the administrator account, you’re asking for its name to be a part of the secret that secures its access. You won’t get any cooperation in that, however, as the operating system and all of your applications are designed around the principle that the username is not a secret. You’re also asking your system administrators – the people who are going to be using the Administrator account – to remember that it’s been renamed, to remember what it’s been renamed to, and to remember to not let anyone else know that.

So, yeah, I’m on the side that says “renaming the administrator account doesn’t add any significant security benefit”.

The one benefit I do see is that the “random noise” of random attacks on any account named Administrator can be separated from the log entries indicating that someone is attacking your Administrator account. I think this is a bit of a false saving, though – you really shouldn’t be allowing any external access to the Administrator account. If your staff wants to access the Administrator account remotely, they should VPN in under their own account, and then use RDP, or some other protocol to connect to the machine they wish to administer.

I’m hoping to entice some of the Security MVPs to contribute to this debate – maybe even Roger and Jesper. There are two sides here, and I doubt that I’ll actually end up converting anyone to my side who wasn’t already there to begin with.

In Defence of the Self-Signed Certificate

Recently I discussed using EFS as a simple, yet reliable, form of file encryption. Among the doubts raised was the following from an article by fellow MVP Deb Shinder on EFS:

EFS generates a self-signed certificate. However, there are problems inherent in using self-signed certificates:

  • Unlike a certificate issued by a trusted third party (CA), a self-signed certificate signifies only self-trust. It’s sort of like relying on an ID card created by its bearer, rather than a government-issued card. Since encrypted files aren’t shared with anyone else, this isn’t really as much of a problem as it might at first appear, but it’s not the only problem.
  • If the self-signed certificate’s key becomes corrupted or gets deleted, the files that have been encrypted with it can’t be decrypted. The user can’t request a new certificate as he could do with a CA.

Well, she’s right, but that only gives part of the picture, and it verges on out-and-out declaring that self-signed certificates are completely untrustworthy. Certainly that’s how self-signed certificates are often viewed.

Let’s take the second item first, shall we?

“Request a new certificate” isn’t quite as simple as all that. If the user has deleted, or corrupted, the private key, and didn’t save a copy, then requesting a new certificate will merely allow the user to encrypt new files, and won’t let them recover old files. [The exception is, of course, if you use something called “Key Recovery” at your certificate authority (CA) – but that’s effectively an automated “save a copy”.]

Even renewing a certificate changes its thumbprint, so to decrypt your old EFS-encrypted files, you should keep your old EFS certificates and private keys around, or use CIPHER to re-encrypt with current certificates.

So, the second point depends on whether the CA has set up Key Recovery – and it isn’t a problem at all if you make a copy of your certificate and private key onto removable storage, and keep that copy very carefully stored away.

As to the first point – you (or rather, your computer) already trust dozens of self-signed certificates. Without them, Windows Update would not work, nor would many of the secured web sites that you use on a regular basis.

Whuh?

[screenshot: certmgr, showing that all Trusted Root Certificates are self-signed]

Hey, look – they’ve all got the same thing in “Issued To” as they have in “Issued By”!

Yes, that’s right – every single “Trusted Root” certificate is self-signed!

If you’re new to PKI and cryptography, that’s going to seem weird – but a moment’s thought should set you at rest.

Every certificate must be signed. There must be a “first certificate” in any chain of signed certificates, and if that “first certificate” is signed by anyone other than itself, then it’s not the first certificate. QED.

The reason we trust any non-root certificate is that we trust the issuer to choose to sign only those certificates whose identity can be validated according to their policy.

So, if we can’t trust these trusted roots because of who they’re signed by, why should we trust them?

The reason we trust self-signed certificates is that we have a reason to trust them – and that reason is outside of the certificate and its signature. The majority (perhaps all) of the certificates in your Trusted Root Certificate Store come from Microsoft – they didn’t originate there, but they were distributed by Microsoft along with the operating system, and updates to the operating system.

You trusted the operating system’s original install disks implicitly, and that trust is where the trust for the Trusted Root certificates is rooted. That’s a trust outside of the certificate chains themselves.

So, based on that logic, you can trust the self-signed certificates that EFS issues in the absence of a CA only if there is something outside of the certificate itself that you trust.

What could that be?

For me, it’s simple – I trust the operating system to generate the certificate, and I trust my operational processes that keep the private key associated with the EFS certificate secure.

There are other reasons to be concerned about using the self-signed EFS certificates that are generated in the absence of a CA, though, and I’ll address those in the next post on this topic.

Apple Changes Update Policies – Still No Biscuit

As I have mentioned in other posts (Retro-bundling – another suck of the Apple, MacBook Air debuts; iTunes Pesters Me Again, Removing Apple Mobile Device Support, I didn’t want iTunes – now I’ve got iPod, too?, etc, etc), this has long since stopped being an issue for me, because I’ve removed all the Apple software from my machine as a bit of a protest against Apple’s inability or unwillingness to provide me the means to manage my own systems.

Now, I understand that Apple has finally heard some of the complaints from various blogs around the world, and has done something about it.

They have separated the updates from the new software. The new dialog looks like this:

But it still marks the new software for installation by default.

This is the behaviour that is wrong – okay, so the difference between an update and new software is now clear, but the key point, again, is that Apple is marking new software for installation from within an update tool.

An update tool should be a piece of software to which most users can say "yes, do whatever", confident that saying so won’t then make significant additions to the software on their machine. By automatically checking new software for installation, Apple is eroding the trust that users will have in the update tool.

Again, I don’t mind that they’re encouraging users to install Safari – I don’t even mind them spending time persuading their existing install base to use it. What perplexes me is that Apple feels they have to slide it in under the door, rather than sell it to users on its own merits.

And, yes, I’m quite well aware that you could also say the same of any browser that ships with an operating system – except, really, you’ve got to have a browser shipping in your operating system these days. Yeah, the guys who ship the operating system have an advantage – and they worked hard to build that advantage in the first place. They have a certain momentum behind anything they offer, and even if the system is as open and transparent to all application vendors as it is to the OS vendor, the default installed applications will generally have a larger market share than the ‘after-market’ tools, just because of users’ inertia.

[Note that the paragraph above applies to Apple / Mac / Safari, just as well as it does to Microsoft / Windows / Internet Explorer]

However, I don’t think that users’ inertia is a cause for sleight-of-hand tactics like retro-bundling.

Think like a bad guy? It’s a start.

Cool new site (and blog) from Microsoft – http://securedeveloper.com – and it has a tag line I’ve heard many times before:

[image: the site’s tag line]

Like that old maxim that “you need to stop fighting fires long enough to tell the architects to stop building things out of wood”, thinking like a bad guy is just the first step to developer security.

It’s a necessary step, but it’s not the final goal.

It’s a start – in fact, it’s a great start, and I think every developer needs to go through that phase. Many have yet to do so – particularly, it seems, those fresh out of college or programming school.

But I think it’s really a catch-phrase for the beginning of becoming a secure developer. It’s what you have to tell yourself when you’re used to writing code for the sole purpose of implementing features, so that you can get over that mind-set and into the sort of thinking that accepts that your code can be attacked.

But the bad guy has it easy.

He only has to find one way in. He can afford to become an expert on one part of your software, and zero in on it.

Thinking like a bad guy will widen your awareness to the point that you know that incursions can and will happen, and you’ll occasionally take better care in your coding. That’s a good thing.

But what if you start thinking like someone building a defensive structure?

The defence builder has to find (and limit) all the ways in, and just in case he missed one, he has to find all the ways you can get further in once you’re in – he has to become an expert on all parts of the software, as well as something of an expert on the external dependencies – libraries, network equipment, database components, etc.

[After all, we’ve seen this past week how many sites can get exploited through SQL Injection attacks – and the primary cause for those seems to be web developers who don’t know SQL, yet who send SQL statements to be executed at the database.]

You could start thinking like a defender – what alarms should signal the presence, or possibility, of an intruder? What information could an active defender use to verify the intent of a potential intruder? How could you slow down a possible attacker to the point where it’s feasible for a human responder to outpace a mechanical attacker?

Maybe you could start thinking like an investigator – once you believe someone has got in, what clues would you like to be left, showing you where the holes were? How can you tell what defences have been useful and what defences were useless? Where was the attacker actively assisted or resisted by your system and software?

Perhaps you could even think like a defence component builder – how can you ensure that you learn lessons from tried and true defences in order to build those lessons in to the next system, or to teach the next set of builders?

Think like the architect of a mediaeval castle – we’ve gotten used to the idea that mediaeval castles were places of defence, that they sought to be impenetrable bastions behind which the local king, thane, lord or whatever could take refuge and survive. Yet they were also places of business, places of government, places with a function. We need to design programs like mediaeval castles – capable of functioning for business as well as for defence.

SecureDeveloper.com hasn’t really gone beyond the first stage of its launch yet, so it will be a while before these advanced topics will be discussed – and I am eager to see that happen.

Can You Write Good Code for an OS you Despise?

No, this isn’t another of my anti-Mac frothing rants.

This is one of my “here’s what I hate about many of the open-source projects I deal with” rants.

I’m trying to find an SFTP client for Windows that works the way I want it to.

All I seem to be able to find are SFTP clients for Unix shoe-horned into Windows.

[Perhaps the Unix guys feel the same way about playing Halo under Wine.]

What do I mean?

Here’s an example – Windows has a certificate store. It’s well-protected, in that there haven’t been any disclosures of significant vulnerabilities that allow you to read the private keys it holds without first having got the credentials that would allow you to do so.

So, I want an SFTP client that lets me store my private keys in the Windows certificate store. Or at least, that uses DPAPI to protect its data.

Can’t find one.

Can’t find ONE. And I’m known for being good at finding stuff.

PuTTY is recommended to me. It, too, requires that the private key be stored in a file, not in the certificate store. Its alternative is to use its own key store, called Pageant (it’s an authentication "Age-Ant" for PuTTY, get it?). Maybe I could do something with that – write a variant of Pageant that directly accesses certificates stored in the certificate store.

But no, there’s no protocol definition or API, or service contract that I can see in the documentation, that would allow me to rejigger this. I could edit the source code, but that’s an awful lot of effort compared to building a clean implementation of only those parts of the API that I’d need.

What I do find in the documentation for Pageant are comments such as these:

  • Windows unfortunately provides no way to protect pieces of memory from being written to the system swap file. So if Pageant is holding your private keys for a long period of time, it’s possible that decrypted private key data may be written to the system swap file, and an attacker who gained access to your hard disk later on might be able to recover that data. (However, if you stored an unencrypted key in a disk file they would certainly be able to recover it.)
  • Although, like most modern operating systems, Windows prevents programs from accidentally accessing one another’s memory space, it does allow programs to access one another’s memory space deliberately, for special purposes such as debugging. This means that if you allow a virus, trojan, or other malicious program on to your Windows system while Pageant is running, it could access the memory of the Pageant process, extract your decrypted authentication keys, and send them back to its master.

I’ll address the second comment first – it’s a strange way of noting that Windows, like other modern operating systems, assumes that every process run by the user has the same access as the user. Typically, this is addressed by simply minimising the amount of time that a secret is held in memory in its decrypted form, and using something like DPAPI to store the secret encrypted.
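For what it’s worth, here’s the sort of thing I mean – a minimal DPAPI sketch of mine (nothing to do with PuTTY’s code) that protects a key blob under the user’s logon credentials, so it never needs to sit on disk in the clear. The round trip back through CryptUnprotectData, and all the error handling, is left as the obvious exercise:

    #include <windows.h>
    #include <wincrypt.h>
    #include <dpapi.h>
    #include <cstdio>

    #pragma comment(lib, "crypt32.lib")

    int main()
    {
        BYTE secret[] = "-----BEGIN PRETEND PRIVATE KEY-----";
        DATA_BLOB in;
        in.cbData = static_cast<DWORD>(sizeof secret);
        in.pbData = secret;
        DATA_BLOB out = { 0, NULL };

        if (!CryptProtectData(&in, L"SFTP client key", NULL, NULL, NULL,
                              CRYPTPROTECT_UI_FORBIDDEN, &out))
        {
            std::printf("CryptProtectData failed: 0x%08lx\n", GetLastError());
            return 1;
        }

        // out.pbData/out.cbData is what you'd write to the client's key store;
        // only the same user (or a domain recovery agent) can unprotect it.
        std::printf("protected blob is %lu bytes\n", out.cbData);

        LocalFree(out.pbData);
        return 0;
    }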

The first comment, though, indicates a lack of experience with programming for Windows, and an inability to search. Five minutes at http://msdn.microsoft.com gets you a reference to VirtualLock, which allows you to lock memory, a page (4kB) at a time, into physical RAM so that it doesn’t get written to the pagefile. Of course, there are other options – encrypting the pagefile using EFS also helps protect against this kind of attack, and the aforementioned trick of holding the secret decrypted in memory for as short a time as possible also reduces the risk of having it exposed.
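And here’s the five-minute VirtualLock version – again a minimal sketch of mine, and note that locking a page into the working set is a strong hint rather than an iron-clad promise (hibernation files, for one, live by different rules):

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        SIZE_T size = 4096;   // VirtualLock works a page at a time
        BYTE *key = static_cast<BYTE *>(
            VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE));
        if (key == NULL || !VirtualLock(key, size))
        {
            std::printf("couldn't allocate or lock the page: 0x%08lx\n", GetLastError());
            return 1;
        }

        // ... decrypt the private key into 'key' and use it here ...

        SecureZeroMemory(key, size);      // wipe before the page can be unlocked
        VirtualUnlock(key, size);
        VirtualFree(key, 0, MEM_RELEASE);
        return 0;
    }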

Now I’m really stretching to assert that this single author despises Windows and that’s why he’s completely unaware of some of its obvious security features and common modes of use. But it does seem to be a trend prevalent in some of the more religious of open source developers – “Windows sucks because it can’t do X, Y and Z” – without actually learning for certain whether that’s true. Often, X and Y can be done, and Z is only necessary on other operating systems due to quirks of their design.

Back when I first started writing Windows server software, the same religious folks would tell me “don’t bother writing servers for Windows – it’s not stable enough”. True enough, Windows 3.1 wasn’t exactly blessed with great uptime. But instead of saying “you can’t build a server on Windows”, I realised that there was a coming market in Windows NT, which was supposed to be server class. So I wrote for Windows NT, I assumed it was capable of server functionality, and any time I felt like I’d hit a “Windows can’t do this”, I bugged Microsoft until they fixed it.

Had I simply walked away and gone to a different platform, I’d be in a different place – but my point is that if you believe that your target OS is incapable, you will find it to be so. If you believe it should be capable, you will find it to be so.

Security Koan #3

The security guard phoned his boss in a panic.

“There’s been a break-in to the site, sir. The intruders aren’t anywhere to be seen, but they’ve got away with a bunch of equipment.”

“Understood – go and look at the perimeter fence, find out where they broke in, and keep watch. I’ll be there shortly.”

The boss arrived at the site, to find the guard pacing up and down in front of the fence.

“Did you find the hole yet?” asked the boss.

“Not yet, sir.”

“Never mind, I’ll help you look.”

For the next half-hour, they went up and down, searching for a hole in the fence.

Then the boss spoke up:

“Are you sure this is where they got in?”

“No, they got in on the other side of the site.”

“Then why are you looking over on this side?”

“Because the light’s better here, so we can see more.”

Question: Are you monitoring the places most suited for attack, or simply the places easiest to monitor?