Heartbleed–musings while it’s still (nearly) topical

Hopefully, you’ll all know by now what Heartbleed is about. It’s not a virus; it’s a bug in a new feature that was added to a version of OpenSSL, wasn’t very well checked before it became part of the standard build, and which has meant, for the last couple of years, that any vulnerable system can have its running memory leeched by an attacker who can connect to it. I have a number of points to make about this that I haven’t seen elsewhere:

Behavioural Changes to Prevent “the next” Heartbleed

You know me, I’m all about the “defence against the dark arts” side of information security – it’s fun to attack systems, but it’s more interesting to be able to find ways to defend.

Here are some of my suggestions about programming practices that would help:

  1. Don’t adopt new features into established protocols without a clear need to do so. Why was enabling the Heartbeat extension a necessary thing to foist on everyone? Was it a MUST in the RFC? Heartbeat, “keep-alive” and similar measures are a waste of time on most of the Internet’s traffic, either because the application layer already keeps up a constant communication, or because it’s easy to recover and restart. Think very carefully before making a new feature mandatory in a security-related protocol.
  2. This was not a security coding bug; it was a secure coding bug that happened to sit in a critical piece of security code. Secure coding practices in general should be a part of all developers’ training and process, much like hand-washing is important for doctors whether they’re performing surgery or checking your throat for swelling. In this case, the submitted code, with its paucity of comments and self-explanation, should have tripped a “this looks odd” sense in the reviewer (but then, OpenSSL is by and large that way anyway).
  3. Check the lengths of buffers. When data is structured, or wrapped in layers, like SSL records are, transform it back into structures and verify at each layer; there’s a sketch of what I mean just after this list. I’m actually trying to say: write object-oriented code to represent objects – whether the language is object-oriented or not. [It’s a matter of some pride for me that I reviewed some of the Fortran code I wrote as a student back in the mid eighties, and I can see object-orientedness trying to squeeze its way out.]
  4. Pay someone to review code. Make it their job, pay them well to cover the boredom, and hold them responsible for reviewing it properly. If it matters enough to be widely used, it matters enough to be supported.
  5. Stop using magic numbers. No “1 + 2 + 16” – use sizeof, even when it’s bleedin’ obvious.
  6. Unit tests. And then tests written by a QA guy, who’s trying to make it fail.
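To make items 3 and 5 concrete, here’s a minimal sketch in C of the shape I’m suggesting. It is not OpenSSL’s actual code, and the names (heartbeat_msg, parse_heartbeat) are my own invention; the point is that the record gets turned into a structure, the claimed length is checked against what actually arrived, and the header size comes from sizeof rather than a magic number.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical structured view of a heartbeat-style record. */
    struct heartbeat_msg {
        uint8_t  type;            /* request or response */
        uint16_t payload_length;  /* the length the sender *claims* */
        const uint8_t *payload;   /* points into the received record */
    };

    /* Parse a received record into the structure above, verifying at this
       layer that the claimed payload length fits inside what was actually
       received.  Returns 0 on success, -1 on a malformed record.  (The real
       protocol also carries padding; it's left out here for brevity.) */
    int parse_heartbeat(const uint8_t *rec, size_t rec_len,
                        struct heartbeat_msg *out)
    {
        /* Header = 1-byte type + 2-byte length: sizeof, not "1 + 2". */
        const size_t header_len = sizeof out->type + sizeof out->payload_length;

        if (rec_len < header_len)
            return -1;                    /* too short to even hold a header */

        out->type = rec[0];
        out->payload_length = (uint16_t)((rec[1] << 8) | rec[2]);

        if ((size_t)out->payload_length > rec_len - header_len)
            return -1;                    /* claims more than it delivered */

        out->payload = rec + header_len;
        return 0;
    }

Once the record is an object with a verified length, the layers above it never get the chance to trust the attacker’s number – and that holds whatever language you’re writing in.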

Those are just a few ideas off the top of my head. It’s true that this was a HARD bug to find in automated code review, or even with manual code review (though item 2 above tells you that I think the code looked perverse enough for a reviewer to demand better, cleaner code that could at least be read).
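As a sketch of the kind of test item 6 has in mind (hypothetical again, and assuming the parse_heartbeat above is built alongside it): hand the parser a record that claims six hundred bytes of payload while actually carrying six, and insist that it be rejected.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Mirrors the declarations from the parser sketch above (hypothetical,
       not OpenSSL's API); in practice these would come from a header. */
    struct heartbeat_msg {
        uint8_t type;
        uint16_t payload_length;
        const uint8_t *payload;
    };
    int parse_heartbeat(const uint8_t *rec, size_t rec_len,
                        struct heartbeat_msg *out);

    int main(void)
    {
        struct heartbeat_msg msg;

        /* Type 0x01, claimed length 0x0258 (600), actual payload "birds!" (6 bytes). */
        const uint8_t evil[] = { 0x01, 0x02, 0x58, 'b', 'i', 'r', 'd', 's', '!' };

        /* A correct parser must refuse to believe the claimed length. */
        assert(parse_heartbeat(evil, sizeof evil, &msg) == -1);
        return 0;
    }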

National Security reaction – inappropriate!

Clearly, from the number of sites (in all countries) negatively affected by this flaw, from the massive hysteria that has resulted, and from the significant thefts disclosed to date, this bug was a National Security issue.

So, how does the US government respond to the allegations going around that they had knowledge of this bug for a long time?

By denying the allegations? By asserting they have a mandate to protect?

No, by reminding us that they’ll protect US (and world) industries UNLESS there’s a benefit to spying in withholding and exploiting the bug.

There was even a quote in the New York Times saying:

“You are not going to see the Chinese give up on ‘zero days’ just because we do.”

No, you’re going to see “the Chinese” [we always have to have an identifiable bogeyman] give up on zero days when our response to finding them is to PATCH THEM, not hold them in reserve to exploit at our leisure.

Specifically, if we patch zero days when we find them, those weapons disappear from our adversaries’ arsenals.

If we hold on to zero days when we find them, those weapons are a part of our adversaries’ arsenals (because the bad guys share better than the good guys).

National Security officials should recognise that in cyberwar – which consists essentially of people sending postcards saying “please hit yourself” to one another, and then expressing satisfaction when the recipient does so – you win by defending far more than by attacking.

“Many eyeballs” review is surprisingly incomplete

It’s often been stated that “many eyeballs” review open source code, and that, as a result, the reviews are implicitly of better quality than those of closed source code.

Clearly, OpenSSL is an important and widely used piece of security software, and yet this change was, by all accounts, reviewed by three people before being published and widely deployed. Only one of those people works full time for OpenSSL, and another was the author of the feature in question.

There are not “many” eyeballs working on this review. Closed source will often substitute paid eyeballs for quantity of eyeballs, and as a result will often achieve better reviews.

Remember, it’s the quality of the review that counts, and not whether the source is closed or open.

Closed source that is thoroughly reviewed by experts is better than open source that’s barely reviewed at all.

Finally, in case you’re not yet tired of Heartbleed analogies

Yes, XKCD delivered perhaps the most widely used analogy.

But here’s the one I use to describe it to family members.

Imagine you’re manning a reception desk.

Calls come in, you write down messages, and you send them off.

At some point, you realise that this is a waste of paper, so you start writing your messages on a whiteboard.

Wiping the whole whiteboard for each message is a waste of effort, so you only wipe out enough space to write each incoming message.

Some messages are long, some are short.

One day, you are asked to read a message back to the person who left it, just after you wrote it down.

And to make it easy, they tell you how long their message is.

If someone gave you a six-letter message, and asked you to read all six hundred letters of it back to them, you’d be upset, because that’s not how many letters they gave you.

Computers aren’t so smart; they are just really fast idiots.

The computer doesn’t get surprised when you send six characters and ask for six hundred back, so it reads off the entire whiteboard, containing bits and pieces of every message that has passed through you.

And because most messages are small, and only some are large, there’s almost an entire message in each response.
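For readers who’d rather see it in code than on a whiteboard, the pattern the analogy describes is roughly this (a deliberately simplified sketch of the flaw, not OpenSSL’s actual source): the reply is built using the length the peer claimed, and the length that actually arrived is never consulted.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Simplified sketch of the vulnerable pattern.  'request' is the record
       the peer sent, 'request_len' is how many bytes actually arrived, and
       'reply' is a buffer large enough for the echoed payload. */
    void build_reply(const uint8_t *request, size_t request_len, uint8_t *reply)
    {
        /* The length the peer *claims* its payload has. */
        uint16_t claimed = (uint16_t)((request[1] << 8) | request[2]);

        /* Copies 'claimed' bytes even if the request only carried a few;
           everything past the real payload is whatever happens to be on the
           "whiteboard" -- the process's memory -- next to the request. */
        memcpy(reply, request + 3, claimed);

        (void)request_len;   /* the actual length is never checked: the bug */
    }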
