Monthly Archives: November 2005

What is a fingerprint?

Okay, so we should all be well aware of what a fingerprint is – it’s the pattern of ridges on most people’s fingers that gets left in smudges on glass doors.


What can it be used for?


The question arises as I look at my Microsoft Fingerprint Reader, and try to explain why a fingerprint reader is purposely disabled from authenticating an account to a domain.


Let’s first get into what is needed to log on to a system.  In computer science terms, you need a claim of identity, and you need one or more pieces of evidence that together will suffice as proof of identity.


Think of the bank ATM as an example – your debit card is the claim of identity (because it contains your account number), and it’s also a piece of evidence (because you cannot use the ATM without the card).  Your PIN is a second form of evidence; with the card and your PIN, you claim and prove your identity for the purposes of the ATM’s operations.


Logging on to a domain is similar – you provide a username, which is a claim of identity, and you provide a password, which is the evidence used as proof of identity.


What differentiates a claim of identity from a proof of identity?  That’s a little subtle.


A claim of identity is any information that uniquely identifies a person, or a role, or an identity, such that it can be used by the computer to look up that identity.  Your ATM card is a claim of identity, because it contains the account number(s) to which you are allowed access, in a form that the ATM can supply as your identifier to your bank.


A proof of identity is made up of one or more pieces of evidence that can be relied on to demonstrate that the claimed identity is matched by the person or process presenting themselves for identification.  It’s “something you are, something you have, or something you know.”  The evidence should consist of items which, in conjunction with one another, can only be presented by the authorised user(s) whose identity is being claimed.
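
As a concrete illustration (a minimal Python sketch, with a made-up user table – not any real system’s logon code), here’s how the two parts work together: the username is the claim, used only to look up a record, and the password is the evidence, verified against that record.

    import hashlib, hmac, os

    def make_record(password):
        # Store a salted hash of the password, never the password itself.
        salt = os.urandom(16)
        return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)

    # Made-up user table: the username is only a lookup key.
    USERS = {"alice": make_record("MyS3cret!")}

    def log_on(claimed_name, password):
        record = USERS.get(claimed_name)     # the claim: selects an identity
        if record is None:
            return False
        salt, stored = record
        supplied = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
        return hmac.compare_digest(supplied, stored)   # the proof: evidence matches

    print(log_on("alice", "guess"))      # False
    print(log_on("alice", "MyS3cret!"))  # True

Note that the lookup and the verification are separate steps – which is exactly the distinction between claiming an identity and proving it.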


So, what is a fingerprint?


Is it a proof of identity?


Not as far as the Microsoft Fingerprint Reader (or any other low-resolution fingerprint reader) is concerned.  Give me a couple of warm gummy bears, a freezer, five minutes, and the use of your finger, and I can produce a replica “finger” that will authenticate to the reader.  What’s more, if someone can give me a glass door you’ve pushed open, or a cup or glass that you’ve held, within a couple of hours I can make as many gummy fingers as I need, all of which will authenticate as you on any low-resolution reader.  [I won’t go into the process here].  With grislier methods, I don’t even have to go to that much effort.


Higher-quality fingerprint readers will look for a finger’s warmth (yeah, a warm gummy bear will beat you there), or pulse, translucency, capillary patterns, or other features that are supposedly only going to be present in a real finger attached to a live human, but those readers are expensive.


So, because this fingerprint reader is a basic one, a fingerprint alone is not, to it, evidence sufficient for a proof of identity.  Combined with a guard manning the station – one trained to check for gummy bears and severed fingers, and empowered to deny suspicious attempts – it might be enough, but that’s not its designed method of operation.


Is a fingerprint, then, a claim of identity?


Not in general, no.  The fingerprint can be compared against stored fingerprints to see how closely it matches each of them, but the fingerprint alone is not capable of generating the user ID, which is what you’d want.  The fingerprint has to be almost exhaustively matched – this is why cops on TV seem to spend days getting a fingerprint match.  It is very quick to say “here are two fingerprints, do they match?” (which would be evidence of identity), but extremely slow to say “here’s a fingerprint, whose is it?”
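
A toy sketch of that asymmetry (Python, with a deliberately crude stand-in for a real minutiae matcher – the scoring function here is illustrative only): verification is one comparison, identification is a comparison against every record on file.

    THRESHOLD = 0.9

    def similarity(a, b):
        # Crude stand-in: treat each print as a set of features and score
        # the overlap.  Real matchers compare minutiae points, but the
        # cost argument below is the same either way.
        return len(a & b) / max(len(a | b), 1)

    def verify(candidate, stored):
        # 1:1 -- "here are two fingerprints, do they match?"  One comparison.
        return similarity(candidate, stored) >= THRESHOLD

    def identify(candidate, database):
        # 1:N -- "here's a fingerprint, whose is it?"  Every record scored.
        scores = {name: similarity(candidate, t) for name, t in database.items()}
        best = max(scores, key=scores.get) if scores else None
        return best if best is not None and scores[best] >= THRESHOLD else None

With millions of records on file, identify() is millions of times the work of verify() – before you even start worrying about false matches.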


Then there’s the issue of uniqueness.


I’ve searched and I’ve searched, and I’m surprised to find that there are as many as zero good scientific reviews of large fingerprint databases to check for uniqueness.  So, when a “fingerprint expert” testifies that the fingerprint found at a crime scene matches the defendant, and the defendant only, they’re relying on a guess that hasn’t been reliably tested, and which has been proven false (or at least, badly collected and analysed) on some celebrated occasions.





Obviously, a fingerprint can be used to refute identity, in much the same way as “the suspect had red hair” will refute the identity of a suspect who does not have red hair, but there’s still significant doubt in my mind as to whether it can be relied upon in any way to prove identity – not without extra layers of evidence to increase the reliability.


Use other, more reliable, measurable, and provable means to protect your networks.  Passwords – strong passwords – will serve you far better than a low-resolution fingerprint reader.

Top ten lists and low-hanging fruit.

I wrote this in response to a question that asked what would be the best firewall to install on a Windows 98 machine.

 

I like to advise people to look at each proposed security measure and ask “is this on my top ten list?”, and not to do anything that isn’t on the list.  Obviously, as you work through the list and cross items off, something that wasn’t on the top ten list before may rise onto it and deserve to be done.

 

When you’re on Windows 98, I think that your top ten list starts with:

1. Unplug the network cable.

2. Upgrade to Windows XP.

3. Install Service Pack 2.

4. Convert your hard drive from FAT to NTFS.

5. Upgrade your applications.

6. As much as possible, stop running as an administrator, run as a “restricted user”.

7. Check that the Windows XP Firewall is enabled.

8. Plug the network cable back in.

9. Upgrade (at http://windowsupdate.microsoft.com) from Windows Update to Microsoft Update (look in the bottom right for the link).

10. Download and install patches for everything.

 

As you can imagine, several of the top-ten list items are “once only”, while others are “every month”, or otherwise require regular revisiting.

 

The key here is to build your list on the basis of what the low-hanging fruit is.

 

Obviously the original question was posed by someone who was looking for the low-hanging fruit, but was labouring under the misconception that the low-hanging fruit in this case was the part of his system that he could most easily address.  That’s not a good approach, because you end up spending a lot of time making easy fixes, while the attackers are going to come in and get you through the gaping hole that you’ve labelled “difficult to fix”.

 

You have to address the low-hanging fruit as seen by your attackers.  What’s the easiest way to get into your system?  Address that, no matter how hard it is, because that’s the way that you will be breached.

"New Nigerian law would jail spammers" – MSNBC story.

I don’t know how I missed this story when it first appeared, but apparently Nigeria is so upset with its well-earned reputation as the source of an unfeasibly large number of fraudulent spams that it is now trying to enact a law that would see spammers, phishers, fraudsters, child pornographers, and terrorists spend six months to five years in jail, and pay the equivalent of $77 – $7,700 in fines.  Oh, and the government could seize any profits made from the schemes in question.


http://www.msnbc.msn.com/id/9768247/


Having seen how badly our own (USA) attempts to “curtail spam” with laws that do nothing of the sort have gone, I wish the Nigerians the best of luck.

DRM – safe for work, but please not at home.

Here’s a theme you’ll have heard from me a dozen times if you’ve been following my Usenet traffic:


“When I buy software, or music, or videos, I want to buy the content, not just the plastic it comes on.”


What do I mean by this?


Simply that I don’t want to find myself restricted as to what I can do with the software, music, videos, etc.  If I buy a DVD, I want to be able to watch it on my choice of device, in my choice of country, and (if necessary) in my choice of format.


The recent news of Sony’s unpleasant intrusion into home computers is a reminder for me to say this again – my computer is my computer, and I’ll thank you – any of you – to leave me to decide, and actively accept, what software to install on it.


Yes, Sony may include a licence on their CDs – but who reads them?  Who even expects that an audio CD (not a software title) will install software on their machine?


The key point, to my mind, is that I, the system administrator on my home computer, cannot hope to maintain the security and reliability of my system if I cannot know when software is installed, and cannot remove whatever software I choose to no longer have there.  If Mark Russinovich, a hugely capable developer, cannot remove the software from his system without losing access to parts of it, what hope do the rest of us have?


Digital Rights Management, or DRM, is frequently put forward by music companies as the best thing since sliced bread.  It’s not, and it’s not even remotely appropriate for home use, or for preventing piracy.


DRM works in exactly one scenario: when the owner of the rights also controls the behaviour of those subject to DRM.  That almost always means “work”, where the rights owner can discipline, and eventually terminate, those that refuse to respect the DRM restrictions on content.  To attempt to apply it to home use, where there is no such control, is to ignore that basic limitation of DRM.


And, quite frankly, it’s insulting.  I don’t feel like pulling out the “innocent until proven guilty” argument in its entirety, but as a legal and honest purchaser of all manner of electronic content, I feel insulted that I am then limited as to my use – not merely limited as to illegal copying and distribution, but limited as to what should be legal – copying for my own use in different devices.


I believe in this so strongly that I have made sure that the software I sell is controlled by those who pay for it.  You can move our software from one machine to another, and we ask only that you use no more copies than you have paid for.  We assume that we can trust our legitimate customers.  We put a few limits into the freely-distributed version, only because if we don’t, nobody buys (trust us, we’ve tried).  Even the honest need a few reminders sometimes.

This month, three years cancer-free.

Three years ago, just before Thanksgiving, I went in for a relatively routine (if rather uncomfortable) surgery.


While I was under anaesthesia, the doctor found, and excised, a “stage I seminoma”.  For those of you unfamiliar with Lance Armstrong, that’s early testicular cancer.


Since that time, I’ve had radiation therapy (curiously, at the same time that the movie of “The Incredible Hulk” was being advertised on TV), a couple more surgeries, and several more doctor visits, blood draws, and CT scans.  The end result is very much worth it – I’m cancer-free, and have been for three years.


The peculiar aspect is the most frequent response I get from others:


“You don’t look old enough for cancer.”


That’s flattering, to be sure, but testicular cancer is usually found in men between the ages of 25 and 35.  As such, I was at the upper end of the age range, and I was lucky that my tumour was found before it had spread.  Testicular cancer is particularly fast-spreading, but if caught early, it can be treated with a minimum of radiation.  In the vast majority of cases, this (and monitoring) kills the cancer with no recurrence.


During my radiation treatment, I initially lost weight, then gained it (and a little more) as I kept snacking to fend off the mild nausea.  I lost hair from the affected area – a rectangle roughly from my belly-button up to the base of my rib cage (and a matching rectangle on the back – X-rays go right through you!)  And… that’s it.


Yes, that’s the limit of the uncomfortable aspect of the treatment.


For those of you worried about asking the awkward and embarrassing question, let me assure you that you can “fly with one engine” just as well as with two.  [Testicular cancer travels “up” rather than “across”.]


I like to tell people you can check as often as you like, and as fast as you like, but you need to make sure you check yourself.


Sure, the treatment may be embarrassing, and I know there are parts of it that still irritate me.  But nobody ever died of embarrassment.

SSL Tutorial part 0.

So you want to protect your TCP application’s traffic?


You’ve been writing network code for a while, using TCP, and you’ve faced the bugbears of reliability and performance, but now you’re looking for a real challenge.


You want to secure your network traffic; you want to securely authenticate the server and maybe even the client.


Or perhaps your users are simply screaming for the protection of SSL – even if they don’t know what that means – because “everyone else has it”.


There are obviously several reasons you might have to use SSL to protect your network traffic – and over the next few blog entries, I’m going to advise you on how you might add SSL to your client or server, and what benefits you’ll get from doing so.


I’m going to start with a brief run-down of what SSL can provide, in its most common configuration.  There are some pedants who will tell you all about using Diffie-Hellman (DH) key exchange, so that no one needs a certificate, or a NULL encryption cipher, so that you can read the SSL-wrapped communication, but neither of those applies in the general case that we’re going to talk about.  When you have finished reading this set of columns, you’ll be able to take an HTTP client or server and turn it into HTTPS, or take an FTP client or server and make it support FTPS.


So, to begin, here’s a list of what SSL gives you over and above what you already have with your TCP application.


  • Server Authentication: SSL requires that the server send a certificate to the client, identifying itself.
  • Client Authentication: SSL allows the server to ask the client for a certificate, which identifies the client in return.
  • Communication privacy: Apart from the first few bytes of the exchange, all traffic is encrypted with a symmetric cipher.
  • Communication integrity: A special checksum, called an HMAC, is used to ensure that bits within the ciphered text have not been altered, extra text has not been added, and that the communications stream has not been closed early by a hacker (or by network faults).
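
To make the first three of those concrete, here’s a minimal client sketch using Python’s standard ssl module (one example API among many – your platform’s SSL library will have close equivalents); the single wrap_socket() call buys you the certificate check, the cipher negotiation, and the HMAC protection:

    import socket, ssl

    # The default context verifies the server's certificate chain and
    # hostname (server authentication), and the handshake negotiates a
    # symmetric cipher and MAC (privacy and integrity).
    context = ssl.create_default_context()

    with socket.create_connection(("www.example.com", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="www.example.com") as tls:
            # By the time wrap_socket() returns, the certificate exchange
            # is done; everything from here on is encrypted and
            # integrity-checked.
            tls.sendall(b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
            print(tls.recv(4096).decode("latin-1"))

Client authentication, the second bullet, is just one more call on the same context – load_cert_chain() with the client’s certificate and key, before connecting.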

Now, here’s a list of some interesting changes that SSL makes to your TCP traffic:


  • Session initialisation requires a significant amount of traffic (certificate exchange) before the first byte of your data can flow.
  • TCP is a stream-based protocol, with no suggestion of message boundaries; SSL encrypts your data stream as a series of discrete messages within the TCP stream, and a message must be fully received before being decrypted (otherwise it is not protected by the HMAC).
  • You have to think carefully about closure issues – what does a TCP RST mean, or a TCP FIN?  You thought you understood those terms already, but they may have a different interpretation when you’re trying to secure a communication.
  • In a client, in addition to resolving the server’s name to an IP address, you also have to check that the server’s certificate matches the name of the server you thought you were trying to reach.
  • Your carefully-calculated performance-enhancing measures are all going to go up the spout; the overhead of encryption, plus the requirement to work within SSL’s message size, is going to seriously impact performance.
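
The closure point deserves a concrete illustration.  Here’s a minimal server-side sketch, again with Python’s ssl module and hypothetical certificate file names, showing where the handshake happens and how a polite closure differs from the connection simply dropping:

    import socket, ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("server.pem", "server.key")   # hypothetical paths

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with context.wrap_socket(listener, server_side=True) as secure:
            conn, addr = secure.accept()   # the SSL handshake runs here
            try:
                data = conn.recv(4096)     # blocks until a whole SSL record
                                           # has arrived and its HMAC verifies
                conn.sendall(b"echo: " + data)
                conn.unwrap()              # sends close_notify, so the peer
                                           # can tell a clean end from a
                                           # truncated (attacked) stream
            finally:
                conn.close()

A bare TCP FIN or RST without that close_notify is exactly the ambiguous closure mentioned above – your code has to decide whether it was a network fault or an attacker cutting the stream short.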

Until next time, happy coding!

"FTPS" document finally makes it to RFC status.

News I’ve been waiting for for years – the document formerly known as draft-murray-auth-ftp-ssl-16.txt has finally been released by the RFC Editor as RFC 4217 – “Securing FTP with TLS”.


What exactly does this mean?  Technically, not very much – FTPS has been implemented by several FTP clients, servers and wrappers for several years.  I added FTPS support to WFTPD Pro back in 2001, after first expressing interest in doing so in 1997, but being held back by the lack of crypto support in Windows.


I nearly had it ready in 2000, but spent some time trying to debug an issue that turned out to be caused by a corrupted certificate issued by the Windows 2000 Server CA that I was testing against.  Let that be a lesson to you crypto developers – sometimes the code is right, and it’s the certs that are wrong!


A few minor things have changed since then in the document that is now RFC 4217, but almost nothing significant to the compatibility of FTPS offerings.


I will end with a brief FAQ for you – please let me know if there are any other questions you’d like to see answered:


1. What’s TLS, and what is its relation to SSL?


TLS is Transport Layer Security, and is the name of the protocol that grew from Netscape’s SSL and Microsoft’s PCT.  Most people still use the term “SSL”, but TLS is where all ongoing work is carried out by the IETF.


2. Is FTPS the official term?


No – the RFC is “Securing FTP with TLS”, and perhaps the official term should be “AUTH TLS”.  However, with the general public already familiar with the concept of “https” being the secured equivalent of “http”, the term “ftps” has sprung up in general use to describe an FTP transfer, or session, encrypted and/or authenticated with SSL or TLS.


3. How different is FTPS from HTTPS?


Quite significantly – HTTPS uses a separate port for incoming SSL connections (usually port 443), compared to the port for unprotected HTTP connections (usually port 80).  Because FTP is (and has always been) a session-based protocol, it allows the client to “negotiate up” to SSL or TLS security through the use of the AUTH command described in RFC 2228.


Note also that FTP uses two channels – a control channel and a data channel, and that these channels can be secured – or left unsecured – almost independently.  HTTPS is secured from the moment you connect to the HTTPS port, until you close down the connection.  FTP is secured on the control channel from the moment you send an “AUTH TLS” or “AUTH SSL” command, until you log out; the data channel is not necessarily secured by default, and security on the data channel can be turned on or off using the PROT command, with parameters “C” for “Clear” or “P” for “Private”.
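
As an illustration of that command sequence, here’s how a client might drive it – a sketch using Python’s standard ftplib.FTP_TLS class, against a hypothetical server name and credentials:

    from ftplib import FTP_TLS

    ftps = FTP_TLS("ftp.example.com")  # plain TCP connection to port 21
    ftps.auth()                        # sends AUTH TLS; the control channel
                                       # is encrypted from here on
    ftps.login("user", "password")     # USER / PASS travel over the secured
                                       # control channel
    ftps.prot_p()                      # sends PROT P; data channels will be
                                       # encrypted as well
    ftps.retrlines("LIST")             # directory listing over a protected
                                       # data channel
    ftps.quit()

Swap prot_p() for prot_c() and the directory listing travels in the clear while the control channel stays encrypted – that’s the near-independence of the two channels described above.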


FTPS always authenticates the server through its certificate, and can be configured to authenticate the client by certificate, or by USER / PASS commands supplying username and password.  HTTP and HTTPS have several other methods of authentication (none of which bear much examination at the moment) – NTLM, CHAP, Basic, Digest, etc., etc.


4. What about SFTP?  What’s that?


I get to answer this question a lot.  With all these acronyms getting thrown around, it’s easy to get confused.  Many people automatically assume that any acronym including the letters “FTP” refers to a protocol based on FTP.  Obviously, that’s why “FTPS” was chosen as an informal description of “Securing FTP with TLS”.  Unfortunately, others may create confusing acronyms by including the FTP letters, either by accident or on purpose.  One such confusion was always “TFTP – Trivial File Transfer Protocol”.  This is about as far from FTP as you can get and still be associated with transferring files from one machine to another.


The same is true of “SFTP” – it’s a file transfer extension to “SSH”.  As that sentence implies, to do an SFTP file transfer, you need to have an SSH connection in place.  This isn’t always practical.