Monthly Archives: September 2011

Too nerdy?

Am I too nerdy for wishing that Doctor Who Confidential had made more of a mention that the most recent episode, “Closing Time”, featured the third appearance of Lynda Baron in a Doctor Who story?

If .NET is so good, why can’t I…?

OK, so don’t get me wrong – there are lots of things I like about .NET:

  • I like that there are five main programming languages to choose from (C#, Visual Basic, F#, PowerShell and C++)
  • I like that it’s so quick and easy to write nice-looking programs
  • I like the Code Access Security model (although most places aren’t disciplined enough to use it)
  • I like the lack of remote code execution through buffer overflow

And up until recently, when all I was really doing was reviewing other people’s .NET code, my complaints were relatively few:

  • Everything’s a bloody exception – even normal occurrences.
    • Trying to open a file and finding it not there is hardly an exceptional circumstance, for instance.
    • Similarly, trying to convert a text string to a number – that frequently fails, particularly in my line of work. (There are non-exception alternatives for some of these cases; see the sketch after this list.)
    • Exception-oriented programming in general gets taken to extremes, and this can lead to poor performance and/or unstable code, as programmers are inclined either to catch everything, or accept code that throws an exception at the slightest nod.
  • It’s pretty much only available on the one platform (yes, Mono, but really – are you going to pitch your project as running under Mono on Linux?)
  • You can’t search for .NET specific answers in a generic way.
    • The word “Java” rarely occurs in a programming context other than the Java platform / language, so you can search for “string endswith java” and be pretty sure that you’re looking at Java’s String.endsWith member, rather than anything else.
    • I’ve taken to searching for “C#” along with my search queries, because the hash is less likely to be discarded by search engines than the dot in “.NET”, but it’s still not all that handy.
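
To illustrate the exception complaint, here’s a rough sketch contrasting the exception-first style with the alternatives that do exist – TryParse for number conversion, and File.Exists as an imperfect, race-prone test before opening a file. Nothing exotic, just standard framework calls; the class name is mine.

    using System;
    using System.IO;

    class NotSoExceptional
    {
        static void Main()
        {
            // int.Parse throws FormatException on bad input - something that
            // happens all the time with user-supplied text. TryParse is the
            // non-exception alternative.
            int value;
            if (!int.TryParse("not a number", out value))
                Console.WriteLine("Not a number - no exception needed.");

            // There's no TryOpen for files: either catch FileNotFoundException,
            // or test first and live with the race between the test and the open.
            string path = "maybe-missing.txt";
            if (File.Exists(path))
                Console.WriteLine(File.ReadAllText(path));
            else
                Console.WriteLine("File isn't there - again, hardly exceptional.");
        }
    }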

But now that I’ve started trying to write .NET code of my own, I’m noticing that there are some really large, and really irritating, gaps.

  • Shell properties in Windows – file titles, description, comments, thumbnails, etc.
    • While there are a couple of additional helpers, such as the Windows API Code Pack, they still cause major headaches, and require the inclusion of another framework that is not maintained by the usual update procedures (see the sketch after this list).
    • Even with the Windows API Code Pack, there’s no access to the System.Thumbnail or System.ThumbnailStream properties (and presumably others)
  • Handling audio files – I was hoping to do some work on a sound file analyser, to determine if two allegedly-similar half-hour recordings are truly of the same event, and maybe to figure out which one is likely to be the better of the two. I didn’t find any good libraries for FFT – maybe because you just can’t search for .NET-specific FFT libraries. (A hand-rolled fallback is sketched after this list.)
  • Marshalling pointers in structs.
    • So many structures consist of lists of pointers to memory contents that it would be really nice to create this sort of structure in a simple byte array, with offsets instead of pointers, and have the marshalling functions convert the offsets into pointers (and back again when necessary) – see the sketch after this list.
    • Better still, of course, would be to have these structures use offsets or counts instead of pointers, but hey, you have to support the existing versions.
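
For what it’s worth, here’s roughly what the Code Pack route looks like for shell properties, assuming you’ve downloaded the Windows API Code Pack and referenced its Shell assembly (which is exactly the extra-framework dependency I’m complaining about). If I remember the Code Pack API correctly, it goes something like this; the class name is mine.

    using System;
    using Microsoft.WindowsAPICodePack.Shell;

    class ShellPropertyDump
    {
        static void Main(string[] args)
        {
            // Needs a reference to the Code Pack's Shell assembly - a separate
            // download, not serviced by the usual framework update mechanisms.
            ShellFile file = ShellFile.FromFilePath(args[0]);

            // Title and Comment come through...
            Console.WriteLine("Title:   " + file.Properties.System.Title.Value);
            Console.WriteLine("Comment: " + file.Properties.System.Comment.Value);

            // ...but there's nothing here corresponding to System.Thumbnail or
            // System.ThumbnailStream, which is the gap I keep running into.
        }
    }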
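As for the FFT: absent a library, the fallback is to hand-roll a transform. The naive O(n²) DFT below is standard textbook material, nothing .NET-specific, and it’s exactly the kind of thing I’d rather be getting, fast and already debugged, from a library.

    using System;
    using System.Numerics;   // Complex - .NET 4 and later

    static class NaiveTransform
    {
        // Straightforward O(n^2) discrete Fourier transform - fine for a quick
        // experiment, far too slow for half-hour recordings.
        public static Complex[] Dft(double[] samples)
        {
            int n = samples.Length;
            var spectrum = new Complex[n];
            for (int k = 0; k < n; k++)
            {
                Complex sum = Complex.Zero;
                for (int t = 0; t < n; t++)
                {
                    double angle = -2.0 * Math.PI * k * t / n;
                    sum += samples[t] * new Complex(Math.Cos(angle), Math.Sin(angle));
                }
                spectrum[k] = sum;
            }
            return spectrum;
        }
    }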
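And to be concrete about the marshalling wish: below is a sketch of the sort of helper I’d like the framework to provide. The NativeList structure and the BlobToNative helper are my own invention for illustration – the point is that the blob stores an offset where the native structure expects a pointer, and today somebody has to do that fix-up by hand.

    using System;
    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential)]
    struct NativeList
    {
        public int Count;
        public IntPtr Items;   // native code expects a real pointer here
    }

    static class OffsetMarshaller
    {
        // Copy a managed blob into unmanaged memory, then patch the pointer
        // slot: the blob stores an offset from the start of the blob, and we
        // rewrite it as an absolute pointer into the copied block.
        // (Assumes the blob reserves IntPtr.Size bytes for the slot.)
        public static IntPtr BlobToNative(byte[] blob, int pointerSlot)
        {
            IntPtr native = Marshal.AllocHGlobal(blob.Length);
            Marshal.Copy(blob, 0, native, blob.Length);
            long offset = Marshal.ReadIntPtr(native, pointerSlot).ToInt64();
            Marshal.WriteIntPtr(native, pointerSlot, new IntPtr(native.ToInt64() + offset));
            return native;   // caller frees with Marshal.FreeHGlobal
        }
    }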

So now I have to decide whether to write my applications in .NET and miss out on some of the power that I may want later in the app’s development, or to carry on with native C++, take the potential security hit, but know what my code is doing from one moment to the next.

What other irritating gaps have you seen – or have you found great ways to fill those gaps?

DigiNotar – why is Google special?

So, you’ve probably heard about the recent flap concerning a Dutch Certificate Authority, DigiNotar, which was apparently hacked into, allowing the attackers to issue certificates for sites such as Yahoo, Mozilla and Tor.

I’ve been reading a few comments on this topic, and one thing just seems to stick out like a sore thumb.

DigiNotar’s servers issued over 200 fraudulent certificates. These certificates were revoked – but, as with all certificate revocations, you can’t really get a list of the names related to those revoked certificates to go back and see which sites you visited recently that you might want to reconsider. [You can only check to see if a certificate you’re offered matches one that was revoked.]
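
To put that in concrete terms: the framework will happily tell you whether one particular certificate chains up cleanly and hasn’t been revoked, but that’s all you get. A sketch, using nothing more than the standard System.Security.Cryptography.X509Certificates types (the class name is mine):

    using System;
    using System.Security.Cryptography.X509Certificates;

    class RevocationCheck
    {
        static void Main(string[] args)
        {
            // A certificate you've been offered (say, exported from your browser).
            var cert = new X509Certificate2(args[0]);

            // Build its chain with online revocation checking - this consults the
            // CRL (and OCSP where available), but it only answers "is THIS
            // certificate revoked?"; it won't list the other revoked names.
            var chain = new X509Chain();
            chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
            bool ok = chain.Build(cert);

            Console.WriteLine(ok ? "Chain builds cleanly." : "Chain has problems:");
            foreach (var status in chain.ChainStatus)
                Console.WriteLine("  " + status.Status + ": " + status.StatusInformation);
        }
    }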

What behaviour would you reconsider on recently visited sites? Well, I’d start by changing my passwords at those sites, at the very least, perhaps even checking to make sure nobody had used my account in my stead.

But that’s not the thing that sticks out

What does stick out is that DigiNotar’s own certificate was removed from, well, just about everyone’s list of trusted root Certificate Authorities, once it was discovered that a fraudulent certificate in the name of *.google.com had been issued, and had not yet been revoked.

Yeah, given the title of my blog posting, I’m sure you could guess that this was the thing that I was concerned about.

So, why is Google so special?

I’m not sure I buy that DigiNotar’s removal from the trusted certificate list was simply because they failed to find one fraudulently issued certificate. It seems like, if that fraudulent certificate was for Joe Schmoe Electrical Repair, it would just have been revoked like all of the other certificates.

Removing a CA from the trusted list, after all, is pretty much going to kill that CA – every certificate ever issued by them will suddenly fail. All of their customers will have to install a new certificate, and what’s the chance those customers will go back to the CA that caused them a sudden outage to their secure web site?

So it’s not something that companies like Google, Microsoft, Mozilla, etc would do at the drop of a hat.

It certainly seems like Google is special.

The argument I would buy

There is an argument I would buy, but no one is making it. It goes something like this:

    “Back in July, when we first discovered the fraudulent certificates, we had no evidence that anyone was using them in the wild, and the CRL publication schedule allowed us to quietly and easily render unusable the certificates we had discovered. Anyone visiting a fraudulent web site would simply have seen the usual certificate error.

    “Then, when we discovered in August that there was still one undiscovered certificate, and it was being used in the wild, it was not appropriate to revoke the certificate, because the CRL publishing schedule wasn’t going to bring the revocation to people’s desks in time to prevent them from being abused. So, we had to look for other ways to prevent people from being abused by this certificate.

    “We could have trusted to OCSP, but it’s unlikely that the fraudulent certificate pointed to a valid OCSP server. Besides, the use of a fraudulent certificate pretty much requires that you are a man-in-the-middle and can redirect your target to any site you like.

    “We could have added this certificate to the ‘untrusted’ certificate list, but only Microsoft has a way to quickly publish that – the other browser and app vendors have to release new versions of their software, because they have a hard-coded untrusted certificates list.

    “And maybe there’s another certificate – or pile of certificates – that we missed.

    “So, in the interests of securing the Internet, and at the risk of adversely affecting valid customers, we chose to remove this one certificate authority from everyone’s list of trusted roots.”

I’ve indented that as if it’s a quote, but as I said, this is an argument that no one is making. So it’s just a fantasy quote.

Is there another possible argument that I might be missing, but would be willing to accept?