Monthly Archives: May 2006

PGP / Truecrypt brouhaha

There’s a fascinating debate going on at present. Two ‘researchers’, called Abed and Adonis, are trumpeting their mad sk177z at cryptography.


They have a few basic claims:


  • They can bypass authentication on PGP self-decrypting archives.
  • They can decrypt PGP-encrypted drives without knowing the passphrase.

It’s an interesting read, full of the lack of comprehension, poor language and loose terminology that typify the worst kind of vulnerability reporting. I’ve read a couple of dozen vulnerability reports, and while a couple of them were clear, concise and well-researched, the majority were barely understandable, and showed a staggering lack of comprehension of the software and algorithms being discussed.


So, here’s a little description of what goes on in most modern file or disk encryption (EFS, BitLocker, PGP, TrueCrypt, etc):


  1. A random key is generated.
  2. The data is encrypted using the random key.
  3. The random key is encrypted using the user’s selected pass-phrase, or some other identifying credential (their public key, for instance). This encrypted key is stored with the file.
  4. The random key is also encrypted using a recovery token (either randomly created and stored away in a key recovery file, or it’s an existing private key of a designated third-party recovery agent). This encrypted key is also stored with the file.
  5. For any number of other users of this file, the random key can also be encrypted with their pass-phrases, or their public keys.

When you want to decrypt a file, here’s what happens (a code sketch of both halves follows this list):


  1. Your credential – pass-phrase or private key – is used to decrypt the key-blob associated with you (or every key-blob in turn until you get a decrypted key-blob that matches its checksum).
  2. The decrypted key is used to decrypt the data.
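To make that flow concrete, here’s a minimal sketch of the wrap-and-unwrap halves, in the same C# I use later in this archive. Be clear that this is my own illustration, not any vendor’s code: the class and method names, the KDF parameters and the blob layout are all invented for the example, and real products store salts, IVs, iteration counts and key checksums in their own formats.

using System;
using System.Security.Cryptography;

static class KeyWrapSketch
{
    // Encryption step 1: generate the random data-encryption key.
    public static byte[] NewRandomKey()
    {
        byte[] key = new byte[32];
        new RNGCryptoServiceProvider().GetBytes(key);
        return key;
    }

    // Encryption step 3: wrap (encrypt) the random key under a
    // pass-phrase. The salt is stored alongside the wrapped key-blob.
    public static byte[] WrapKey(byte[] randomKey, string passPhrase, byte[] salt)
    {
        Rfc2898DeriveBytes kdf = new Rfc2898DeriveBytes(passPhrase, salt, 10000);
        RijndaelManaged aes = new RijndaelManaged();
        aes.Key = kdf.GetBytes(32);
        aes.IV = kdf.GetBytes(16);
        return aes.CreateEncryptor().TransformFinalBlock(randomKey, 0, randomKey.Length);
    }

    // Decryption step 1: unwrap the key-blob with the credential. A real
    // product then verifies a stored checksum of the result - that check
    // is the very thing patched out in the SDA attack described below.
    public static byte[] UnwrapKey(byte[] blob, string passPhrase, byte[] salt)
    {
        Rfc2898DeriveBytes kdf = new Rfc2898DeriveBytes(passPhrase, salt, 10000);
        RijndaelManaged aes = new RijndaelManaged();
        aes.Key = kdf.GetBytes(32);
        aes.IV = kdf.GetBytes(16);
        return aes.CreateDecryptor().TransformFinalBlock(blob, 0, blob.Length);
    }

    static void Main()
    {
        byte[] salt = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(salt);
        byte[] dataKey = NewRandomKey();
        byte[] blob = WrapKey(dataKey, "my pass-phrase", salt);
        // The right pass-phrase recovers the original data key; a wrong
        // one yields garbage (here, a padding exception usually stands
        // in for the checksum mismatch).
        byte[] recovered = UnwrapKey(blob, "my pass-phrase", salt);
        Console.WriteLine(Convert.ToBase64String(recovered));
    }
}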

What Adonis and Abed have managed to do with their fancy debugging on the SDA (self-decrypting archive) is to break into the point where the code checks that the random key has been successfully decrypted, and change the stored checksum to match the checksum of the key they’ve decrypted using the wrong pass-phrase. So, they’ve got a key that doesn’t decrypt the file to the correct data, and they’ve managed to persuade the program to tell them that this is acceptable.


<sarcasm>Clever attackers – they’ve managed to get the system to tell them that they’ve successfully decrypted the file, while at the same time getting back a key that ‘decrypts’ the file to random garbage.</sarcasm> They even acknowledge it in one of their Flash animations. [Oh yeah, and I want to view a Flash animation less than a month after a remote code execution vulnerability in Flash.]


Their binary patching of the PGP encrypted disk is slightly more interesting.


What they have demonstrated is that a change of pass-phrase does not change the random key, it just decrypts it using the old pass-phrase, and then re-encrypts it with the new pass-phrase, obliterating the stored copy with the new one. [This is actually a good and necessary thing, because if you're on the road, and you change your pass-phrase to something that you then forget, you want the recovery token that the help-desk provides you to still work!]
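In terms of the sketch above, a pass-phrase change is just an unwrap followed by a re-wrap – again, my own illustration (the method would sit in the same made-up KeyWrapSketch class), not PGP’s actual code:

    // Changing the pass-phrase re-wraps the same data key; the bulk
    // data on disk is never re-encrypted.
    public static byte[] ChangePassPhrase(byte[] oldBlob, string oldPass,
                                          string newPass, byte[] salt)
    {
        // Decrypt the data key with the old credential...
        byte[] dataKey = UnwrapKey(oldBlob, oldPass, salt);
        // ...and re-encrypt it with the new one; the caller overwrites
        // the stored key-blob with the result. Anyone who kept a copy
        // of oldBlob can still unwrap dataKey with oldPass - which is
        // exactly the replay described next.
        return WrapKey(dataKey, newPass, salt);
    }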


This can be used to mount an attack of sorts – encrypt the drive with a pass-phrase you know, save a copy of the encrypted key-blob, and you can later come in and replace the encrypted key-blob on the machine with yours – this effectively resets the pass-phrase to what it was when you saved your copy of the encrypted key-blob.


But that’s not something that the encryption is designed to protect against – and really, it’s not something that the encryption should try to protect against. If you receive an encrypted device from someone you don’t trust (or later decide not to trust someone who has encrypted a device you use), you should decrypt it and re-encrypt it with a new random key. This makes good sense anyway, because you want a new recovery token on the device, and you want that token to be under your name, not the previous user’s name.


As with so many presumed attacks on cryptographic solutions, this one’s a real yawner if you understand the cryptography at hand, because it’s really an attack on the policy behind the system. In this case, the policy says that if you receive a device (disk, encrypted file, whatever) from someone who possessed the means to decrypt it, that device can continue to be decrypted until such time as you encrypt it with a new encryption key – not just a new pass-phrase.

Forget that I asked you to ignore what I said about SAL.

Okay, so for the foreseeable future at least, SAL (and other code analysis goodness) is indeed available to all and sundry, like pie and chips, for free, on the Windows Vista Beta 2 SDK.


Michael Howard and I had a very pleasant exchange (as always) over email, where neither of us quite seemed to grasp exactly what the other was saying (as always), and where we mostly agreed (as always), and which resolved out as follows:


  • cl /analyze works if you run the cl.exe from the Windows Vista Beta 2 SDK (see the example invocation after this list).
  • The current plan is to leave this in the Platform SDK as Vista moves to release status from its current beta version.
  • Plans may change.
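By way of example, an invocation looks like this, run from a command prompt whose environment points at the Beta 2 SDK’s compiler rather than the one Visual Studio 2005 installed (the file name is, of course, illustrative):

cl /analyze /c widget.cpp

The /c flag says “compile without linking” – /analyze does its work during compilation, emitting its C6xxx-series warnings alongside the normal ones.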

So, for right now, I will be using code analysis for my existing code, on my existing platform, by using the C++ compiler that shipped with the Windows Vista Beta 2 SDK.  I hope to post some articles here shortly on the following:


  1. How to download and install the Windows Vista Beta 2 SDK, with emphasis on tying it into Visual Studio 2005.
  2. Adding SAL comments to your code (and how to find the important arguments that really need commenting).
  3. Deciding whether it is better to complexificate the SAL comments, or to simplify your source code.

But I promise to resurrect my rant if it looks like cl /analyze is going to be withdrawn from the released version of the Platform SDK.

Full Disclosure – how full is full?

Bruce Schneier says “full disclosure is the best tool we have to improve security”.


Whoa, that’s rather like saying “wheeled vehicles are the best tool for ground transport of passengers”. There are many different kinds of wheeled vehicles, and there are many different kinds of “full disclosure”.


Most often, “full disclosure” means “complete and immediate public disclosure”. Such disclosure essentially acts like the starter’s pistol in a race between the malware authors and the software developer. I’d rather see the software developer get a head-start in that race.


Public disclosure was initially used as a reaction to vendors’ irresponsibility – Microsoft is the vendor most people think about, but these days a certain database company comes more instantly to mind as a company who, even when given the extra time before the starter’s pistol goes off, tends to hang around behind the stands, smoking a cigarette, and denying that there’s a race coming up.


Public disclosure is great as a punishment for current and ongoing lack of action – but as a punishment for past misdeeds, it’s very much cutting off your nose to spite your face. A vendor who is trying their best to be responsible now – even one who has always been responsible – has to deal with the fact that they have to start the race at the same time as the malware authors.


Worse, with so many different disclosure mailing lists, newsgroups, chat servers, web forums, etc, a vendor has to try and figure out a way to respond to the starter’s pistol at every athletic venue that might be hosting a race.


In such an environment, where it’s impossible for most vendors to spend enough time discovering the “exploit mailing list du jour”, this immediate public disclosure stops well short of “full disclosure”, because it informs a large group of people that generally does not include the one group that can most widely spread a fix – the vendor.


In summary, it’s not full disclosure until you’ve disclosed – directly – to the vendor. It might be fun to think you’re taking the high-ground by punishing the vendor, but it’s more likely that you’re punishing the users, by presuming that the vendor – their best hope for a fix – would have wasted time even if you had notified them.


The true high-ground comes when you notify the vendor, so that if they’re worthy of punishment, you can tell the public that even after you notified the vendor directly and gave them every reasonable assistance, they still failed to act in the users’ best interest. Or, the vendor acts on your notice, and you still get to claim the high-ground, because your actions directly helped the users whose security was under threat.


So, yeah, I agree with Bruce – full disclosure is our best hope. But full disclosure doesn’t begin until you’ve disclosed to the developers / vendors, and I think it’s disingenuous of Bruce not to discuss what full disclosure means to him, versus what it means to others.

Security questions considered dangerous

Keith Brown expresses concern over the security questions people ask themselves for password reset, and suggests that the user not be allowed to write the question, so that sufficiently secure questions can be asked.


Congratulations – you’ve addressed half the problem.


The server can now ensure that the user is asked a suitably complex question.


Because the correct answer is determined entirely by the user, though, the answer can be unnervingly simple.


  • What’s your mother’s maiden name?
    • 1111
  • What’s the last four digits of your SSN?
    • 1111

I bet you can guess the last four digits of my driver’s licence, and the city in which I was born, too. :-)


So, this clearly hasn’t started to solve the problem – the only complexity you’ve enforced is in the public portion of the exchange.


Sadly, many of these complex questions raise a further concern – who else knows the answers?


My mother knows her maiden name, and the city in which I was born. My wife knows that, and also has access to documentation for the other keys to the castle. Suppose one day she becomes my ex-wife, and wants to have access to my online banking, my business, my health information – those questions are now the simple key to allowing her in.


Other elements of concern:


  • Privacy
    • I’ve just told my bank what my SSN is, who my mother was, what my driver’s licence is, where I was born, etc – do they need any of that information to do business with me? No. Then they don’t get that information.
  • Accessibility
    • I express it often with biometrics – how does your iris scanner work on a person with aniridia? How does your fingerprint scanner handle a person with no fingerprints? How does your “What is your driver’s licence number?” question cope with a person who has been banned from driving, or is sufficiently disabled that they cannot drive?

At work, we’re required to create the same sort of “three questions” to reset our password.


I’m tempted to enter the following:


  • What is your name?
  • What is your quest?
  • What is your favourite colour?

What I do instead is to enter:


  • Why don’t you just walk over to the security office, show them your photo identity, and get them to reset your password?


Why would someone hack my site?

Sandi Hardmeier often has something to say that I want to listen to, even if she approaches things from a different perspective.


Today, she posed the question “Why would somebody want to hack into my network?”


My first thought is to note that the “PC in Herndon, VA” may not necessarily be even as harmless as it sounds, given that Herndon is a “bedroom community” for certain intelligence-gathering organisations.


Okay, so there’s still a little of the dreamer living in fantasy-land in me; but I think my second thought is something you can give to your boss.


Who is to decide the value of your network?


You have bought your network, and all of the computers it comprises, for a specific purpose. To keep it serving that purpose, you’ve bought space, you’ve hired personnel, and you have a lot invested in making sure it stays available for your use. There is value in that, and such value is reasonably easily evaluated.


Where else is there value? Who else sees your network as valuable?


If you’re not using all your processing and communications resources – or even if you are – someone out there believes that he has a better use for those resources. If someone can pay a few hundred dollars to steal your systems – even if they “steal” them while the machines remain in your building – then what’s to stop them from exchanging a few hundred dollars for a hack into your systems?


Then there’s the worth to consider – what is it worth to you, every day, for your systems to remain under your control? How much would you spend per day to retain that value?

When is a virus not a virus?

When it doesn’t spread.


There’s been a lot of press devoted of late to this “Word zero-day vulnerability”, some of it even referring to this as a virus.


While it seems that the exploit in use could be built upon to make this into a virus, the attack in question is so far a very targeted one, aimed at a small set of victims.


So, don’t panic into thinking that you need immediate and urgent protection right now. There are very, very few cases of this that have been discovered even by those that are actively looking for it with full knowledge of what they are looking for.


What’s my take on this?


It’s a great opportunity to remind your users that they still need to pay attention to the usual methods of incursion – peer-to-peer “file sharing” (or, if you prefer, “theft”); attachments to emails; active content on web-sites of dubious provenance; the latest “gotta see this animation” or “gotta play this Flash game”; etc. Note that many of these infection vectors are contingent on you being so excited about, as Jesper and Steve put it, “seeing the naked dancing pigs”, that you will approve any elevation of privilege required to do so. It’s simply a rich irony that schemes designed to make you want something this bad, and using your friends and co-workers to egg you on, are called “viral marketing”.


The more you are being persuaded by peer pressure, the more you want to ask yourself “have I assessed the risks of this?”


Your mother always warned you, after all, “if everyone else wanted to jump off a cliff, would you?”


I’m the lone lemming, thinking to myself “I don’t really know if it’s a good idea to go cliff-diving right now – can I even swim?”

Okay, scratch what I said about SAL

Despite what Michael Howard says about how wonderful SAL is, and my own post from earlier today, I really shouldn’t be telling you about it.


Is that because it’s under NDA?  Is it because it’s a skill I learned at Microsoft, but can’t use outside because of a non-compete clause?


No.


It’s because most developers won’t be getting to use it (me included, for much of my work).


I think this is a thoroughly inappropriate decision on Microsoft’s part.


Restrict detailed profiling to the Enterprise versions all you like, maybe even restrict code testing, or the version control suite, or the team functions – but what on earth is the point of restricting code analysis tools that are designed to secure Windows applications?


I don’t understand this at all.


Windows gets most of its bad reputation for insecure code because of the applications that run on it – and frequently, the third-party applications, many of which refuse to run unless the user is a full-blown administrator, despite being chock-full of exploitable buffer overruns.


SAL could help fix that problem, were it to be made available to the multitude of developers.


But no, it’s a “premium feature”, restricted to “the Enterprise” (not a space-ship, just big businesses).


Sometimes, Microsoft does something that I just cannot understand. This is one of those times, and I’m really irritated at them for it.

SAL – pipped at the post by Michael Howard.

I’ve been spending some time this week in the evenings thinking on how I should introduce SAL – the Standard Annotation Language – to you all. Then Michael Howard managed to do it before I could get there.


It’s been in use at Microsoft for some time now, albeit frequently rather grudgingly. I was introduced to it as a Longhorn Quality Gate – something that had to be done in order to get a piece of software approved for checking in to the Longhorn Source Control tree.


Some teams choose to have everyone add SAL annotations to their own code; others do as our team did, and assign one or two people to learn as much SAL as they can before applying annotations to every piece of code under that team’s ownership.


I very quickly discovered that, while SAL is indeed a pain in the neck to add to your code, it forcibly removes a lot of bad coding from your source tree, because some things just cannot be SAL-annotated correctly – and those constructs are the ones that cause the most buffer overflows.


An example of such a construct? Sure – strcpy(). It’s been the single biggest source of buffer overflows, either reading past the end when the source string is not null-terminated, or writing past the end when the destination buffer is shorter than the source. You just can’t properly SAL-annotate strcpy, because its signature carries the size of neither buffer, so you’re forced to re-write it appropriately (and you get something like strcpy_s, as sketched below).
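Here’s a hedged sketch of the difference, using the declspec-style SAL macros of the era; treat the exact annotation spellings as illustrative, and check the SDK’s own headers for the real declarations:

/* strcpy: nothing in the signature says how big dest is, so there is
   no buffer size for SAL (or cl /analyze) to check against. */
char *strcpy(char *dest, const char *src);

/* strcpy_s: the destination size is an explicit parameter, so the
   annotation can be tied to it. */
errno_t strcpy_s(
    __out_ecount(destSize) char *dest,   /* must hold destSize chars */
    __in size_t destSize,
    __in __nullterminated const char *src);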


So can I add anything to Mike’s descriptions of SAL?


Yes – mainly just this: if you cannot find a way to specify the type in SAL, you probably should think about defining your types using typedefs, rather than inlining the whole type definition. SAL annotations carry through a typedef – for examples, look in WinNT.h, where you’ll find definitions like PZPCWSTR: “typedef __nullterminated PCWSTR *PZPCWSTR;”
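A quick illustration – the typedef is the real one from WinNT.h, but the PrintStrings declaration is a made-up example:

// The annotation travels with the typedef...
typedef __nullterminated PCWSTR *PZPCWSTR;  // from WinNT.h

// ...so every declaration using the typedef inherits it: 'strings' is
// understood as 'count' pointers to null-terminated wide strings,
// without the annotation being repeated.
void PrintStrings(__in_ecount(count) PZPCWSTR strings, size_t count);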


Finally, make sure that you use the same SAL annotation in the header file as you do in the C/CPP file.  Otherwise, you’ll get counter-intuitive results from the /analyze switch.
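For instance (a made-up pair of files, to show the shape of the mistake):

/* widget.h - the contract that callers are checked against. */
void CopyData(__out_ecount(len) char *buf, __in size_t len);

/* widget.c - annotation accidentally omitted, so /analyze checks the
   function body against a weaker contract than callers see, and the
   resulting warnings can look like nonsense. */
void CopyData(char *buf, size_t len)
{
    /* ... */
}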


If you have any questions about SAL, please post them as comments below, or email me using the “Contact” link in the top right of this blog page.

How to scan SSL/TLS sites.

The other day, I hit a conundrum.

We couldn’t make LDAPS connections to a couple of domain controllers. A quick Terminal Services (“TS”) session over to the systems in question indicated that we had a correct certificate in place, and that it was valid, but when we connected using “LDP” over port 636, we would be told that the certificate exchange wasn’t allowed to finish.

A look at the event viewer showed us that the certificate had expired. That’s ludicrous, though, because the certificates show in the certificate manager as valid.

The only thing we could figure out is that we must have been running into the problem specified in KB 321051 – “How to enable LDAP over SSL…” – or at least, the last issue on “Pre-SP3 SSL certificate caching issue”. Sure, we’re on SP4, but that’s the only thing we could figure out.

To test it further, we had to view the certificate, and the brutal method we chose was to open up Internet Explorer and open https://192.168.0.1:636 – where “192.168.0.1” was the address of the DC whose certificate we wanted to view.

An error comes up on screen when you do this, because the IP address doesn’t match the name. You want this error, because it allows you to press “View Certificate”.

Sure enough, the certificate that came up was old, and expired. And, just as the article assured us we shouldn’t have to do on a Windows 2000 SP4 box, a re-boot fixed the server to using the right certificate.

If you’re thinking like we were, your next thought will be “gee, I wonder how many of our other servers have this problem?” Obviously, we would be hard-pressed to scan all of our servers using an Internet Explorer error dialog, so it was time to write a program to do it for us.

Less than an hour later, I had the program I like to call “SSLScan”.

Sure, it’s in C# .NET 2.0, but it’s probably the shortest piece of SSL code I’ve written that wasn’t densely obfuscated.

using System;
using System.Net;
using System.Net.Sockets;
using System.Net.Security;
using System.Security.Authentication;
using System.Collections;
using System.Security.Cryptography.X509Certificates;
using System.IO;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;

namespace SSLScan
{
    class Program
    {
        static int port = 443; // We choose to default to HTTPS.
        // Other examples - LDAPS is 636.

        static void Main(string[] args)
        {
            //
            // SSLScan - scan a list of SSL servers, connecting to each
            // in turn, and asking for their certificate.
            // Then display certificate information.
            //
            // We assume that each server responds to an SSL ClientHello
            // immediately upon connection - this will not work with some
            // SSL servers, for instance FTPS servers, that require some
            // sort of command(s) to be issued prior to sending the SSL
            // negotiation.
            //
            // For now, the only argument is the name of the file used to list
            // the hosts - or you can feed it as stdin.
            //
            switch (args.Length)
            {
                case 0:
                    Usage();
                    Console.Error.WriteLine("Reading from keyboard");
                    TestConnectionsFromStream(new System.IO.StreamReader(
                        System.Console.OpenStandardInput()));
                    break;
                case 1:
                    Console.Error.WriteLine("Reading from file {0}", args[0]);
                    TestConnectionsFromStream(new System.IO.StreamReader(args[0]));
                    break;
                default:
                    Usage();
                    break;
            }
        }

        private static void TestConnectionsFromStream(
            System.IO.StreamReader streamReader)
        {
            string server;
            bool blank = false;
            bool interactive = !streamReader.BaseStream.CanSeek;
            char[] delimiters = { ':' };
            string[] elements;
            do
            {
                if (blank)
                    Console.WriteLine();
                if (interactive)
                    Console.Error.Write("\nServerName[:PortNumber] > ");
                server = streamReader.ReadLine();
                if (server == null ||                    // end of file - no more.
                    (interactive && server.Length == 0)) // Interactive - blank line ends.
                    break;
                if (server.StartsWith("//"))
                    continue; // ignore comments - go to next server.
                if (server.Contains(":"))
                {
                    elements = server.Split(delimiters);
                    server = elements[0];
                    port = Convert.ToInt32(elements[1]);
                }
                try
                {
                    TestConnectionToSite(server, port);
                    Console.WriteLine();
                }
                catch (Exception e)
                {
                    Console.Error.WriteLine("Exception on site {0}", server);
                    Console.Error.WriteLine(e.Message);
                }
                blank = true;
            } while (true); // Forever - or until "break" hit.
        }

        public static bool ValidateServerCertificate(
            object sender,
            X509Certificate certificate,
            X509Chain chain,
            SslPolicyErrors sslPolicyErrors)
        {
            Console.WriteLine("Subject: {0}", certificate.Subject);
            Console.WriteLine("Issuer : {0}", certificate.Issuer);
            Console.WriteLine("Serial : {0}", certificate.GetSerialNumberString());
            Console.WriteLine("Expires: {0}", certificate.GetExpirationDateString());
            X509Certificate2 cert2 = new X509Certificate2(certificate);
            if (cert2.NotAfter < DateTime.Now)
                Console.WriteLine("*** Certificate has expired!!!");
            else if (cert2.NotAfter.AddDays(-30.0) < DateTime.Now)
                Console.WriteLine("*** Certificate expires in thirty days or less!!!");
            if (sslPolicyErrors == SslPolicyErrors.None)
                return true;
            Console.WriteLine("Certificate error: {0}", sslPolicyErrors);
            // Reject the SSL connection
            return false;
        }

        private static void TestConnectionToSite(string serverName, int port)
        {
            Console.Error.WriteLine("Connecting to server: " + serverName +
                ":" + port.ToString());
            TcpClient client = new TcpClient(serverName, port);
            Console.Error.WriteLine("Client connected.");
            // Create an SSL stream that will enclose the client's stream.
            SslStream sslStream = new SslStream(
                client.GetStream(),
                false,
                new RemoteCertificateValidationCallback(ValidateServerCertificate),
                null
                );
            // The server name must match the name on the server certificate.
            try
            {
                sslStream.AuthenticateAsClient(serverName);
            }
            catch (AuthenticationException e)
            {
                // Uncomment this lot of code if you want to see the exceptions.
                // Console.WriteLine("Exception: {0}", e.Message);
                // if (e.InnerException != null)
                // {
                //     Console.WriteLine("Inner exception: {0}",
                //         e.InnerException.Message);
                // }
                // Console.WriteLine("Authentication failed - closing the connection.");
                client.Close();
                return;
            }
            client.Close();
        }

        static void Usage()
        {
            Console.Error.WriteLine("Usage:");
            Console.Error.WriteLine(
                "SSLScan [filename] - if filename is blank, reads from stdin.");
            Console.Error.WriteLine(
                "Scans multiple systems with an SSL connection," +
                " listing the certificates.");
            Console.Error.WriteLine(
                "Sites are specified as host:port - the ':port' part" +
                " is optional, and if not");
            Console.Error.WriteLine(
                "specified, will default to the previous port value," +
                " or the default of " + port.ToString());
        }
    }
}
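By way of usage (host names here are placeholders): run “SSLScan hosts.txt”, where hosts.txt contains one site per line, like this:

// Comment lines starting with // are skipped.
www.example.com
dc1.example.com:636
dc2.example.com

Note that the “:636” sticks – as the Usage text says, a site without an explicit port is scanned on the most recently specified port, so dc2.example.com above is also checked on 636, not 443.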

Today’s bulletins.

Bulletin MS06-018:

Vulnerability in Microsoft Distributed Transaction Coordinator Could Allow Denial of Service (913580)


Okay, that’s special – a denial of service in MSDTC, and the workaround is to … disable MSDTC. The workaround causes exactly the denial of service that the bulletin is trying to protect you against, so if you have any applications that rely on MSDTC, you will want to install this patch.


Bulletin MS06-019:


Vulnerability in Microsoft Exchange Could Allow Remote Code Execution (916803)


This one’s nasty – someone can send your users a mail message containing a meeting request or appointment, and run code on your Exchange Server.  If you use Exchange Server, this one’s really necessary – you could just block calendar attachments, but really, do you want your users stood outside your office with torches and pitchforks?


Bulletin MS06-020:


Vulnerabilities in Macromedia Flash Player from Adobe Could Allow Remote Code Execution (913433)


A remote code execution flaw in Flash? Now there’s a novelty. This flaw was mentioned back in March, in a Security Advisory. Hopefully you upgraded then; if not, update now.