General Security – Page 31 – Tales from the Crypto

General Security

Programmer Hubris Part 2: I’ll get you, and your little dog, too.

Apple’s QuickTime (for Mac & Windows) vulnerable to flawed images.

Great – hot on the heels of a WMF vulnerability (“why does Microsoft keep having buffer overflows when the rest of the industry doesn’t?”), we get a TGA/TIFF/QTIF/GIF/media-file overflow vulnerability in QuickTime – the warning seems almost designed to get lost in the noise surrounding Microsoft’s regular updates – but that would be a cynical view.

When I visited the page referenced above, which is at Apple’s own site, I could not find a link to the patch, or to download the current version of QuickTime for Windows.  I’ve been doing this “computer thing” for a couple of decades now, and so has my cube-neighbour, who went looking for it as well, without success.  [Hopefully Apple will read this, and edit the page so that by the time you read this, the link is prominent and obvious, but if you can’t find it, read on…]

You can find the current version of QuickTime for Windows at

There are a number of disadvantages to this link, though:

  1. This is a full replacement, not a patch.
  2. The site does not say whether you are downloading the fixed 7.0.4 version, or an earlier version with the flaws still in it.
  3. The download includes iTunes, and while I can imagine QT to be necessary to view, say, presentations from vendors, iTunes is definitely not necessary for our corporate use. Nor do I want it for my personal use.
  4. The download file is called ‘iTunesSetup.exe’, and its version information declares it to be the setup program for iTunes – no mention of QuickTime is made here.
  5. Even after downloading the setup executable, you cannot tell what version you have downloaded without running it first. The version number on the setup file ‘iTunesSetup.exe’ is
  6. The setup program goes through a few unpacking steps before aborting if you are not an administrator, so a restricted user cannot tell if this is the current 7.0.4 version of QuickTime.
  7. If you only want QuickTime, you have to install iTunes and QuickTime and then remove iTunes. The installation itself doesn’t require a reboot – but removing iTunes does. So, effectively, if you want to install QuickTime, you must reboot, or you must accept iTunes.
  8. At no point in the installation are you told what version of QuickTime is being installed.

Finally, yes, the version of QuickTime at the Apple download link is 7.0.4, which is supposed to include the patches against remote exploit through image vulnerabilities.

The main thrust of this rant has been that this is really not so useful in terms of a security update – but there’s a subtle theme throughout – in order to get a tool that I want, I have to install and then remove a tool that I don’t want.  Bundling is a fine tradition – and if Apple was to bundle QuickTime and iTunes such that iTunes was required, I’d simply refuse to watch .mov files.  But this method of bundling – requiring it be installed, but allowing uninstallation afterwards – seems to be more like punishing people who want to view QuickTime format movies.

Not quite "SUS on a disk", but…

I’ve been asking Microsoft for some time to release a “SUS on a disk” – an ISO image format, and maybe an updater tool, that would allow an admin to create a DVD-R that they could then drag along to a machine that is either disconnected or poorly connected, or not allowed to connect out to the Internet.  Such a disk would be really useful for those of us called to upgrade machines of our friends and family, too.

Well, today on MS Downloads, I noticed the following:

January 2006 Security and Critical Releases ISO Image

If this isn’t new, I haven’t seen it before – and while it’s not quite SUS on a disk, it’s pretty damn close.

Thanks for listening, Microsoft!

Now, because nothing is ever perfect, some suggestions for MS:

  1. This is only Windows Update, not Microsoft Update.  Particularly, it doesn’t include MS06-003 fixes, because that’s Exchange and Outlook.  An MU-on-a-disk would be great, too.
  2. A baseline disk image of security/critical patches to date would be helpful, too – I appreciate that it would be huge.  Perhaps pick a date, make a baseline image, and provide a means to download mere updates to the image, rather than the whole image afresh, for people who like to have the “most complete” set of patches.
  3. Is there a tool to create our own WSUS-on-a-disk?  I’d love to have that tool, so that I can take a disk with me for systems that don’t get network access even for patches. Or for mailing to my parents.

IIS – Advanced Digest Authentication is MD5-SESS

Reviewing the security for another application today, I find that it relies on Digest Authentication, which is a horrible thing to do to a secure system.

Why is that? Because it requires that you enable the check-box labeled “Store Passwords Using Reversible Encryption” (and once you’ve done so, any users who want to use Digest Authentication have to change their password, so that their password can be stored decryptably).

This is such a horrible thing to do that Microsoft frequently refers to this as storing passwords “in plaintext”. There’s really not much difference – anyone who can get access to the encrypted store will be able to decrypt the passwords.

Fortunately in IIS 6, along comes Advanced Digest Authentication. Now, this is not exactly described very clearly, and in some cases, the description says some really bad things – one description I found implies that this method hashes the user name, domain, and password, and then waits for the browser to send exactly that same hash in order to identify itself.

Fortunately, that’s not the case – the people in IIS are not idiots. What appears to be the case, and it’s almost impossible to find documentation backing this up (probably because of the “Not Invented Here” syndrome), is that what Microsoft terms “Advanced Digest Authentication” is nothing more complicated than the MD5-SESS Digest Authentication described in RFC 2617.

That does hash the username, password and domain name (or realm, if you want the proper term), and stores that at the server. But it’s not what it looks for from the client. The client takes that hash, appends a nonce provided by the server, and one provided by the client, and hashes that string.

This process of “take a hash, add something random to it, and hash it again” is a fairly common procedure in security protocols, and is designed to avoid replay attacks while simultaneously avoiding the use of stored passwords.
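That MD5-sess computation from RFC 2617 can be sketched in a few lines of Python. The function names are my own; the hashing steps follow the RFC (the stored secret is MD5 of username:realm:password, the session key folds in the two nonces, and the response also covers the request method and URI):

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# What the server stores at password-set time -- the "Advanced Digest" secret.
def ha1_stored(username: str, realm: str, password: str) -> str:
    return md5_hex(f"{username}:{realm}:{password}")

# RFC 2617 MD5-sess: append the server nonce and client nonce to the stored
# hash and hash again, so the stored secret itself never crosses the wire.
def ha1_sess(stored_hash: str, nonce: str, cnonce: str) -> str:
    return md5_hex(f"{stored_hash}:{nonce}:{cnonce}")

# The response the client actually sends also binds the method and URI.
def response(ha1: str, nonce: str, nc: str, cnonce: str,
             qop: str, method: str, uri: str) -> str:
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")
```

Note that changing the realm changes `ha1_stored`, which is exactly why a realm change forces every user to re-enter their password.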

This is great… with one caveat. Now, the hash of the username, password and realm are essentially the password to that realm. If you were the sort of nasty person that could get a hold of the reversibly encrypted password, and decrypt it, you could just as easily get a hold of the hash – and that’s all you need to generate the Advanced Digest Authentication message.

All is not lost, though – this only allows an administrator of the realm to get access to his own realm as if he were one of his own users. The “rogue administrator” problem is one that doesn’t have a good solution (except for “trust your administrators not to go rogue”), and is rightly treated as a problem not worth investigating for most systems.

What was allowed under the old Digest Authentication is that the administrator could fetch the clear-text version of the user’s password, which is almost certainly the same as that user’s password on another system. Now this is a problem worth tackling, and the Advanced Digest Authentication method adequately prevents this from occurring. The administrator can only fetch a hash, and that hash is no use outside of his domain.

Oh, and those hashes are still only generated when the password is created, set, or changed, so as a result, if you change the realm, all users have to change their password again. I’m not quite sure if you have to do this when enabling the Advanced Digest Authentication feature.

There’s no folk without some ire

[I was going to title this “PATRIOT – Piddling Around The Real Issues Of Terrorism”, but I figured that’d be a little too inflammatory.]

The other day, I was listening to good-old-fashioned talk radio, and something the host said surprised me. He was blathering about how Democrats wanted to make friends with terrorists.

It sounds really stupid when you put it in those terms, but yes – that’s essentially the approach that has to happen. Like a pyramid scheme, the terrorists at the top feed hatred down, and get power back up the chain. While that feed of hatred is accepted by their “down-line”, the feed of power up the line continues. You don’t stop terrorism by making friends with the guys at the top, you stop terrorism by making nice to the guys at the bottom; you remove the power-base by making it difficult for people to hate you.

So, how does that remotely connect to the usual topic of this blog, computer security? Like this:

Vendors [think Microsoft, but it also applies to small vendors like me] face this sort of behaviour, on a smaller level, when it comes to vulnerability reports. Rightly or not, there’s a whole pile of hatred built up among some security researchers against vendors, initially because over the years vendors have ignored and dismissed vulnerability reports on a regular basis. As a result, those researchers believe that the only way they can cause vendors to fix their vulnerabilities is to publicly shame the vendors by posting vulnerability announcements in public without first contacting the vendor.

I’m really not trying to suggest that vulnerability researchers are akin to terrorists. They’re akin to an oppressed and misunderstood minority, some members of which have been known to engage in acts which are inadvertently destructive.

Microsoft and others have been reaching out of late to vulnerability researchers, introducing them to the processes that a vendor must take when receiving a vulnerability report, and before a patch or official bulletin can be released. Some researchers are still adamant that immediate public disclosure is the only acceptable way; others have been brought over to what I think is the correct way of thinking – that it helps protect the users if the first evidence that exists in public is a bulletin and a patch.

The security industry gets regularly excited by the idea of a “zero-day exploit” – a piece of malware that exploits a vulnerability from the moment that the vulnerability is first reported. I think it’s about time we got excited about every release of a “zero-day patch”.

How many kinds of secret are there?

Trick question:
How many different classifications of document should you have?
The answer: two.

Documents should be “public” or “private”.

Public documents are not necessarily published documents, but they contain information that is not important to keep from the public. In fact, any document that has been published is already public, no matter what you’d like it to be.

Private documents should be attached to an explicit or implicit list of people who are entitled to view them, and there should be policies, procedures, practices and phreakin’ ACLs in place to make sure that their privacy is not broken.

Can you think of a document secrecy category that isn’t covered by this?

What is a fingerprint?

Okay, so we should all be well aware of what a fingerprint is – it’s the pattern of ridges on most people’s fingers that gets left in smudges on glass doors.

What can it be used for?

The question arises as I look at my Microsoft Fingerprint Reader, and try to explain why a fingerprint reader is purposely disabled from authenticating an account to a domain.

Let’s first get into what is needed to log on to a system.  In computer science terms, you need a claim of identity, and you need one or more pieces of evidence, that together will suffice as proof of identity.

Think of the bank ATM as an example – your debit card is the claim of identity (because it contains your account number), and it’s also a piece of evidence (because you cannot use the ATM without the card).  Your PIN is a second form of evidence; with the card and your PIN, you claim and prove your identity for the purposes of the ATM’s operations.

Logging on to a domain is similar – you provide a username, which is a claim of identity, and you provide a password, which is the evidence used as proof of identity.

What differentiates a claim of identity from a proof of identity?  That’s a little subtle.

A claim of identity is any information that uniquely identifies a person, or a role, or an identity, such that it can be used by the computer to look up that identity.  Your ATM card is a claim of identity, because it contains the account number(s) to which you are allowed access, in a form that the ATM can use to supply as your identifier to your bank.

A proof of identity is made up of one or more pieces of evidence that can be relied on to demonstrate that the claimed identity is matched by the person or process presenting themselves for identification.  It’s “something you are, something you have, or something you know.”  The evidence should consist of items which, in conjunction with one another, can only be presented by the authorised user(s) whose identity is being claimed.
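That split between claim and evidence can be sketched in code. In this toy model (the account store, the PBKDF2 parameters, and all the names are my own invention, not any particular system’s), the username is the claim and the password is the evidence – and note that the store keeps only a salted hash, never the evidence itself:

```python
import hashlib
import hmac
import os

# In-memory account store: username -> (salt, password hash).
accounts: dict[str, tuple[bytes, bytes]] = {}

def hash_pw(password: str, salt: bytes) -> bytes:
    # Deliberately slow, salted hash so the stored value is not reversible.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def register(username: str, password: str) -> None:
    salt = os.urandom(16)
    accounts[username] = (salt, hash_pw(password, salt))

def log_on(claimed_identity: str, password: str) -> bool:
    # Step 1: use the claim of identity to look up the account...
    record = accounts.get(claimed_identity)
    if record is None:
        return False
    salt, stored = record
    # Step 2: ...then check the evidence against it, in constant time.
    return hmac.compare_digest(stored, hash_pw(password, salt))
```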

So, what is a fingerprint?

Is it a proof of identity?

Not as far as the Microsoft Fingerprint Reader (or any other low-resolution fingerprint reader) is concerned.  Give me a couple of warm gummy bears, a freezer, five minutes, and the use of your finger, and I can produce a replica “finger” that will authenticate to the reader.  What’s more, if someone can give me a glass door you’ve pushed open, or a cup or glass that you’ve held, within a couple of hours I can make as many gummy fingers as I need, that will all authenticate as you on any low-resolution reader.  [I won’t go into the process here].  In more grisly methods, I don’t even have to go to all that effort.

Higher-quality fingerprint readers will look for a finger’s warmth (yeah, a warm gummy bear will beat you there), or pulse, translucency, capillary patterns, or other features that are supposedly only going to be present in a real finger attached to a live human, but those are expensive.

So, because this fingerprint reader is a basic one, to it, a fingerprint alone is not evidence sufficient for a proof of identity – combined with a guard manning the station, trained to check for gummy bears and severed fingers, and who can deny suspicious attempts, it may be enough, but that’s not its designed method of operation.

Is a fingerprint, then, a claim of identity?

Not in general, no.  The fingerprint can be matched against stored fingerprints to see how closely it matches, but the fingerprint alone is not capable of generating the user ID, which is what you’d want.  The fingerprint has to be almost exhaustively matched – this is why cops on TV seem to spend days getting a fingerprint match.  It is very quick to say “here are two fingerprints, do they match” (which would be evidence of identity), but extremely slow to say “here’s a fingerprint, whose is it?”
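That asymmetry can be illustrated with a toy matcher. The similarity measure and threshold below are invented purely for illustration (a real system compares minutiae, not raw bytes); the point is that verification is one comparison against the claimed identity’s template, while identification is an exhaustive scan of the whole database:

```python
def similarity(a: bytes, b: bytes) -> float:
    # Hypothetical score: fraction of positions where the templates agree.
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

THRESHOLD = 0.9  # invented acceptance threshold

# Verification ("do these two match?"): a single comparison.
def verify(claimed_template: bytes, presented: bytes) -> bool:
    return similarity(claimed_template, presented) >= THRESHOLD

# Identification ("whose is this?"): O(N) comparisons over every record.
def identify(database: dict[str, bytes], presented: bytes) -> list[str]:
    return [uid for uid, template in database.items()
            if similarity(template, presented) >= THRESHOLD]
```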

Then there’s the issue of uniqueness.

I’ve searched and I’ve searched, and I’m surprised to find that there are as many as zero good scientific reviews of large fingerprint databases to check for uniqueness.  So, when a “fingerprint expert” testifies that the fingerprint found at a crime scene matches the defendant, and the defendant only, they’re relying on a guess that hasn’t been reliably tested, and which has been proven false (or at least, badly collected and analysed) on some celebrated occasions:

[Note that these are culled from a very quick search of only one news agency’s recent output.]

Obviously, a fingerprint can be used to refute identity, in much the same way as “the suspect had red hair” will refute the identity of a suspect who does not have red hair, but there’s still significant doubt in my mind as to whether it can be relied upon in any way to prove identity – not without extra layers of evidence to increase the reliability.

Use other, more reliable, measurable, and provable means to protect your networks.  Passwords – strong passwords – will serve you far better than a low-resolution fingerprint reader.

Top ten lists and low-hanging fruit.

I wrote this in response to a question that asked what would be the best firewall to install on a Windows 98 machine.


I like to advise people that they should look at security measures and ask “is this on my top ten list?”, and not do anything that isn’t on the list.  Obviously, as you work through the list and discard items, something that wasn’t on the top ten list before may come back onto the list and deserve to be done.


When you’re on Windows 98, I think that your top ten list starts with:

1. Unplug the network cable.

2. Upgrade to Windows XP.

3. Install Service Pack 2.

4. Convert your hard drive from FAT to NTFS.

5. Upgrade your applications.

6. As much as possible, stop running as an administrator, run as a “restricted user”.

7. Check that the Windows XP Firewall is enabled.

8. Plug the network cable back in.

9. Upgrade from Windows Update to Microsoft Update (look in the bottom right for the link).

10. Download and install patches for everything.


As you can imagine, several of the top-ten list items are “once only”, and others are “every month” or similarly require regular re-visiting.


The key here is to build your list on the basis of what the low-hanging fruit is.


Obviously the original question was posed by someone who was looking for the low-hanging fruit, but was labouring under the misconception that the low-hanging fruit in this case was that part of his system that he could most easily address.  That’s not a good approach, because you end up spending a lot of time making easy fixes, while the attackers are going to come in and get you through the gaping hole that you’ve labeled “difficult to fix”.


You have to address the low-hanging fruit as seen by your attackers.  What’s the easiest way to get into your system?  Address that, no matter how hard it is, because that’s the way that you will be breached.

"New Nigerian law would jail spammers" – MSNBC story.

I don’t know how I missed this story when it first appeared, but apparently the country of Nigeria is so upset with its well-earned reputation as the source of an unfeasibly large number of fraudulent spams, that they are now trying to enact a law that would cause spammers, phishers, fraudsters, child pornographers, and terrorists to spend six months to five years in jail, and pay the equivalent of $77 – $7700 in fines.  Oh, and the government could seize any profits made from the schemes in question.

Having seen how badly our own (USA) attempts to “curtail spam” with laws that do nothing of the sort have gone, I wish the Nigerians the best of luck.

DRM – safe for work, but please not at home.

Here’s a theme you’ll have heard from me a dozen times if you’ve been following my Usenet traffic:

“When I buy software, or music, or videos, I want to buy the content, not just the plastic it comes on.”

What do I mean by this?

Simply that I don’t want to find myself restricted as to what I can do with the software, music, videos, etc.  If I buy a DVD, I want to be able to watch it on my choice of device, in my choice of country, and (if necessary) in my choice of format.

With the recent news of Sony’s unpleasant intrusion into home computers (or this link for an American version), it’s a reminder for me to say this again – my computer is my computer, and I’ll thank you – any of you – to leave me to decide and actively accept what software to install on it.

Yes, Sony may include a licence on their CDs – but who reads them?  Who even expects that an audio CD (not a software title) will install software on their machine?

The key point to my mind is that I, the system administrator on my home computer, cannot hope to maintain the security and reliability of my system if I cannot know when software is installed, and be able to remove what software I choose to no longer be there.  If Mark Russinovich, a hugely capable developer, cannot remove the software from his system without losing access to parts of his system, what hope do the rest of us have?

Digital Rights Management, or DRM, is frequently put forward by music companies as the best thing since sliced bread.  It’s not, and it’s not even remotely appropriate for home use, or for preventing piracy.

DRM works in exactly one scenario: when the owner of the rights also controls the behaviour of those subject to DRM.  That almost always means “work”, where the rights owner can discipline, and eventually terminate, those that refuse to respect the DRM restrictions on content.  To attempt to apply it to home use, where there is no such control, is to ignore that basic limitation of DRM.

And, quite frankly, it’s insulting.  I don’t feel like pulling out the “innocent until proven guilty” argument in its entirety, but as a legal and honest purchaser of all manner of electronic content, I feel insulted that I am then limited as to my use – not merely limited as to illegal copying and distribution, but limited as to what should be legal – copying for my own use in different devices.

I believe in this so strongly that I have made sure that the software I sell is controlled by those who pay for it.  You can move our software from one machine to another, and we ask only that you use no more copies than you have paid for.  We assume that we can trust our legitimate customers.  We put a few limits into the freely-distributed version, only because if we don’t, nobody buys (trust us, we’ve tried).  Even the honest need a few reminders sometimes.

"FTPS" document finally makes it to RFC status.

News I’ve been waiting for for years – the document formerly known as draft-murray-auth-ftp-ssl-16.txt has finally been released by the RFC editor as RFC 4217 – “Securing FTP with TLS”.

What exactly does this mean?  Technically, not very much – FTPS has been implemented by several FTP clients, servers and wrappers for several years.  I added FTPS support to WFTPD Pro back in 2001, after first expressing interest in doing so in 1997, but being held back by the lack of crypto support in Windows.

I nearly had it ready in 2000, but spent some time trying to debug an issue that turned out to be caused by a corrupted certificate issued by the Windows 2000 Server CA that I was testing against.  Let that be a lesson to you crypto developers – sometimes the code is right, and it’s the certs that are wrong!

A few minor things have changed since then in the document that is now RFC 4217, but almost nothing significant to the compatibility of FTPS offerings.

I will end with a brief FAQ for you – please let me know if there are any other questions you’d like to see answered:

1. What’s TLS, and what is its relation to SSL?

TLS is Transport Layer Security, and is the name of the protocol that grew from Netscape’s SSL and Microsoft’s PCT.  Most people still use the term “SSL”, but TLS is where all ongoing work is carried out by the IETF.

2. Is FTPS the official term?

No – the RFC is “Securing FTP with TLS”, and perhaps the official term should be “AUTH TLS”.  However, with the general public already familiar with the concept of “https” being the secured equivalent of “http”, the term “ftps” has sprung up in general use to describe an FTP transfer, or session, encrypted and/or authenticated with SSL or TLS.

3. How different is FTPS from HTTPS?

Quite significantly – HTTPS uses a separate port for incoming SSL connections (usually port 443), compared to the port for unprotected HTTP connections (usually port 80).  Because FTP is (and has always been) a session-based protocol, it allows the client to “negotiate up” to SSL or TLS security through the use of the AUTH command described in RFC 2228.

Note also that FTP uses two channels – a control channel and a data channel, and that these channels can be secured – or left unsecured – almost independently.  HTTPS is secured from the moment you connect to the HTTPS port, until you close down the connection.  FTP is secured on the control channel from the moment you send an “AUTH TLS” or “AUTH SSL” command, until you log out; the data channel is not necessarily secured by default, and security on the data channel can be turned on or off using the PROT command, with parameters “C” for “Clear” or “P” for “Private”.

FTPS always authenticates the server through its certificate, and can be configured to authenticate the client by certificate, or by USER / PASS commands supplying username and password.  HTTP and HTTPS have several other methods of authentication (none of which bear much examination at the moment) – NTLM, CHAP, Basic, Digest, etc.
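Python’s standard library supports exactly this explicit negotiation in `ftplib.FTP_TLS`. A minimal sketch (the host and credentials below are placeholders, not a real server):

```python
from ftplib import FTP_TLS

def list_directory(host: str, user: str, password: str) -> list[str]:
    """Fetch a directory listing over explicit FTPS ("AUTH TLS")."""
    ftps = FTP_TLS(host)
    # login() issues AUTH TLS first, securing the control channel
    # before the USER / PASS commands are sent.
    ftps.login(user, password)
    # PBSZ 0 + PROT P: switch the data channel from Clear to Private.
    ftps.prot_p()
    lines: list[str] = []
    ftps.retrlines("LIST", lines.append)
    ftps.quit()
    return lines
```

If you skip the `prot_p()` call, the control channel is encrypted but the directory listing and file transfers travel in the clear, which is the “almost independently” secured behaviour described above.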

4. What about SFTP?  What’s that?

I get to answer this question a lot.  With all these acronyms getting thrown around, it’s easy to get confused.  Many people automatically assume that any acronym including the letters “FTP” refers to a protocol based on FTP.  Obviously, that’s why “FTPS” was chosen as an informal description of “Securing FTP with TLS”.  Unfortunately, others may create confusing acronyms by including the FTP letters, either by accident or on purpose.  One such confusion was always “TFTP – Trivial File Transfer Protocol”.  This is about as far from FTP as you can get, and still be associated with transferring files from one machine to another.

The same is true of “SFTP” – it’s a file transfer extension to “SSH”.  As that sentence implies, to do an SFTP file transfer, you need to have an SSH connection in place.  This isn’t always practical.