More proof that crypto is harder than it needs to be.

I went looking today for a definitive statement on what purposes a certificate needs when it is created for an SMTP server that uses STARTTLS (I’m still looking, but I’m pretty certain I know what it needs).  I came across this gem of a piece from the Mac OS X guide to SSL:



The CSR and key are generated in the current directory, in a file called newreq.pem. When you enter:

cat newreq.pem

the system displays the file, which looks something like this:

-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,21F13B37A796482C

XIY0c7gnv0BpVKkOqXIiqpyONx8xqW67wghzDlKyoOZt9NDcl9wF9jnddODwv9ZU
A1UECxMPT25saW5lIFNlcnZpY2VzMRowGAYDVQQDExF3d3cuZm9yd2FyZC5jby56 
YTBaMA0GCSqGSIb3DQEBAQUAA0kAMEYCQQDT5oxxeBWu5WLHD/G4BJ+PobiC9d7S 
6pDvAjuyC+dPAnL0d91tXdm2j190D1kgDoSp5ZyGSgwJh2V7diuuPlHDAgEDoAAw 
DQYJKoZIhvcNAQEEBQADQQBf8ZHIu4H8ik2vZQngXh8v+iGnAXD1AvUjuDPCWzFu 
QxS2zwfKG1u+YqS1c2v5ecBgqW78DQLvxMkpYU8+xge7vDeoYKE14w==
-----END RSA PRIVATE KEY-----

-----BEGIN CERTIFICATE REQUEST-----
MIIBPTCB6AIBADCBhDELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2Fw 
ZTESMBAGA1UEBxMJQ2FwZSBUb3duMRQwEgYDVQQKEwtPcHBvcnR1bml0aTEYMBYG 
A1UECxMPT25saW5lIFNlcnZpY2VzMRowGAYDVQQDExF3d3cuZm9yd2FyZC5jby56 
YTBaMA0GCSqGSIb3DQEBAQUAA0kAMEYCQQDT5oxxeBWu5WLHD/G4BJ+PobiC9d7S 
6pDvAjuyC+dPAnL0d91tXdm2j190D1kgDoSp5ZyGSgwJh2V7diuuPlHDAgEDoAAw 
DQYJKoZIhvcNAQEEBQADQQBf8ZHIu4H8ik2vZQngXh8v+iGnAXD1AvUjuDPCWzFu 
pRUR8Z0wiJBeaqiuvTDnTFMz6oCq6htdH7/tvKhh
-----END CERTIFICATE REQUEST-----

Now, you can take this CSR to a Certificate Authority (CA) such as Thawte and Verisign. Using the CSR, you can purchase an SSL certificate from one of these CAs, and then use it to authenticate your email server.



Okay, so they just said that you should take the PEM file above to your CA, as the CSR?


That’s bad.


Why’s it bad?


Because that’s not just the CSR – it also holds the private key.  Think of private keys as being like your internal organs – nobody gets to have them while you’re still alive.


Sure, it’s encrypted, but did you really want to take the chance that the guys with all that cryptographic experience at the CA can’t break the encryption and recover your key?


The Certificate Request, for the record, is exactly that portion between “-----BEGIN CERTIFICATE REQUEST-----” and “-----END CERTIFICATE REQUEST-----” – that’s the part (along with those two markers) that you should send to the CA.
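
If you’d rather not trust yourself with a text editor, here’s a minimal sketch in C# (the file names follow the quoted guide; “newreq.csr” is my own invention) that peels the request block out of the combined file and leaves the private key where it belongs:

using System;
using System.IO;

class ExtractCsr
{
    static void Main()
    {
        // Read the combined key-plus-request file that the guide generates.
        string pem = File.ReadAllText("newreq.pem");

        const string begin = "-----BEGIN CERTIFICATE REQUEST-----";
        const string end = "-----END CERTIFICATE REQUEST-----";

        int start = pem.IndexOf(begin);
        int stop = pem.IndexOf(end);
        if (start < 0 || stop < 0)
            throw new InvalidDataException("No certificate request block found.");

        // Keep both markers (the CA expects them), and nothing else.
        File.WriteAllText("newreq.csr",
            pem.Substring(start, stop - start + end.Length));

        // The private key stays in newreq.pem - and newreq.pem stays with you.
    }
}

Send newreq.csr to the CA; guard newreq.pem the way you’d guard your kidneys.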


This is pretty entertaining, but it doesn’t beat the training guides for Microsoft’s Windows 2000 Official Curriculum course, which read “Alice encrypts a message using Bob’s private key”.  If Alice has access to Bob’s private key, they should be able to share secret messages at the breakfast table without encryption.
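
For the avoidance of doubt, the correct direction is the other way around.  Here’s a hedged sketch using the RSA class in modern .NET (the key size, padding choice and message are mine, not the course’s): Alice encrypts with Bob’s public key, and only Bob’s private key can decrypt.

using System;
using System.Security.Cryptography;
using System.Text;

class AliceAndBob
{
    static void Main()
    {
        // Bob generates a key pair and keeps the private half to himself.
        using (RSA bob = RSA.Create(2048))
        using (RSA alice = RSA.Create())
        {
            // Alice receives only Bob's PUBLIC key (false = no private parameters).
            alice.ImportParameters(bob.ExportParameters(false));

            // Alice encrypts using Bob's public key...
            byte[] cipherText = alice.Encrypt(
                Encoding.UTF8.GetBytes("See you at breakfast"),
                RSAEncryptionPadding.OaepSHA256);

            // ...and only Bob, holding the private key, can decrypt it.
            byte[] plainText = bob.Decrypt(cipherText, RSAEncryptionPadding.OaepSHA256);
            Console.WriteLine(Encoding.UTF8.GetString(plainText));
        }
    }
}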

Vista – new accounts will not be administrators!

I say “yay” to this post from David Cross:



“We have subsequently made the decision that in [Windows Vista] Beta 2, secondary user accounts will be standard users by default.”

As he says, it kind of gives the wrong message when you’re asked to enter the names of half a dozen people to be created on your system, and they all get to be administrators.  I installed Windows XP SP2 recently on my new laptop, and was more than a little dismayed that I couldn’t actually complete the installation without entering the name of a user who then became an instant administrator.


Quite frankly, I’d be happy if, when installing my OS, I got one account created, called “Administrator”, with a blank or hard-to-guess password (blank for home, where physical access is an appropriate security limiter; hard-to-guess for work, where anyone and everyone can walk in).


My next step after installation is usually to join my domain – and I don’t want any local (non-domain) users beyond the administrator.  Otherwise, I have to go through and delete their profiles … which you do through the User Profiles tool, not by “RD /S /Q C:\Documents and Settings\username”.


[Yes, I could join the domain as part of my installation, but I prefer to leave network cables unplugged until after installation is complete - these days, that's an unnecessary superstition, but it used to be that the firewall wasn't configured on by default.]

New hardening guides arrive early for April Fools’ Day.

Microsoft released a downloadable document today that discusses how to harden your Windows 98 and NT 4.0 systems.


It seems a little early for April Fools’ Day, so I opened it up and took a look.


It’s a 109-page document full of honest and useful advice for those of you in the untenable position of having to secure a network with components that date from the last century.


It could do with a little proof-reading (“The Security Configuration Manager is available from the Microsoft FTP server at http://microsoft.com/ntserver/techresources/security/securconfig.asp” – how is “http://” the start of an FTP server location?), but a quick skim through suggests that this is a good starting document for anyone who has to work with these older systems.


My only gripe on this initial read is that the suggestion to readers that their first step should be to try all means possible to upgrade these systems should have been in huge type, bold, and ideally a fetching shade of red to draw people’s attention to it.


The danger of publishing guides like these is that people will assume that their presence means that these systems can continue to be used, and are sufficiently secure for corporate use.


The danger of not publishing guides like these, however, is that people will assume that their absence means that these systems are already sufficiently secure for corporate use.


Just as abstinence programmes do little-to-nothing to counter teen pregnancy and sexually-transmitted diseases, so too a security program should be willing to say “Windows 98 and NT 4.0 are no longer supported by their vendor, and are not secure for today’s corporate environment,” but follow this up with “Here’s what you can do to make them more secure, if you find your enterprise in bed with these systems, and you cannot prevent what naturally occurs.”


Usual disclaimers apply: read the list of hardening methods, and their reasons for being, and assess the risks and benefits of each before choosing to apply or discard them.


Remember RFC 1925:


“With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead.”

More on the ActiveX behaviour change

Driving into work this morning (yes, I usually take the bus, but when there’s no space at the Park & Ride, it becomes a No Park & No Ride), I had a realisation about the ActiveX behaviour change that’s coming up.


Maybe it’s been brought about as a result of a patent lawsuit, but think on this… maybe it has a beneficial effect on security!


If it takes one more click to actually interact with an ActiveX popup or an advert than it does to close it with the big red X in the top right-hand corner, users will gravitate to that easier option.

Just test the thing and get on with it.

Microsoft’s Mike Nash has just stated that, although April 11’s patch to IE will include the updated behaviour change made necessary by the Eolas lawsuit, there will be an option to disable the change, because people outside Microsoft are concerned that they haven’t had a chance to test it enough against their LOB (line-of-business) applications.


Message to corporate America:


Would you just test the thing and get on with it, please?


The behaviour change is simple, and has relatively minor effects.  It’s irritating chiefly in that it is a) so unnecessary, and b) more awkward and counter-intuitive than the behaviour Eolas claims is “non-obvious” and therefore patentable (leaving aside the issue of prior art).


And for those looking for explanations of why Microsoft is choosing to code around the feature that Eolas’ patent covers, I’d point to recent patent cases such as the BlackBerry / NTP settlement.  In that case (I’m paraphrasing, and I apologise if I get the legal terminology wrong), even after the USPTO said that NTP’s patents were invalid and would be revoked, the judge was still prepared to uphold them, leaving RIM (the makers of the BlackBerry) to either settle on NTP’s terms, or turn off their service.


So, even if Microsoft believes they’ll win in the Eolas patent suit, they have to do what they can to lessen the intervening damage.

Microsoft’s new password collector.

Sorry, did I say that out loud?


No, it’s not really a password collector.


Probably.


What I’m talking about is a new tool from Microsoft that aims to tell you when a password is “Weak”, “Medium”, “Strong” or “Best”.


Try it for yourself – see that “This is my password.” is “BEST”, and “Cz!r4Tz” is “Weak”.


From that comparison, it’s obvious that this tool is only a guideline, and probably that’s all it can be – but you might want to try it on your users.  At the very least, many weak passwords will be shown to them as being weak.
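
To see why the sentence wins, here’s a naive strength heuristic in C# – purely illustrative, and certainly not the algorithm Microsoft’s checker actually uses – that estimates entropy as length times the bits-per-character of the alphabet in play:

using System;
using System.Linq;

class PasswordMeter
{
    // Naive estimate: length x log2(size of the character set in use).
    static double EntropyBits(string password)
    {
        int alphabet = 0;
        if (password.Any(char.IsLower)) alphabet += 26;
        if (password.Any(char.IsUpper)) alphabet += 26;
        if (password.Any(char.IsDigit)) alphabet += 10;
        if (password.Any(c => !char.IsLetterOrDigit(c))) alphabet += 33; // symbols and spaces
        return password.Length * Math.Log(alphabet, 2);
    }

    static void Main()
    {
        Console.WriteLine(EntropyBits("Cz!r4Tz"));              // ~46 bits
        Console.WriteLine(EntropyBits("This is my password.")); // ~128 bits
    }
}

Length dominates: twenty characters from a modest alphabet comfortably beat seven characters drawn from the whole keyboard.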

Security koan #2

[Apologies if anyone finds the stereotype of the 'wily Irishman' to be offensive. This story exists in many different forms, in many different cultures.]


Paddy works at a construction site.  Every night, he leaves work wheeling a wheelbarrow, covered with a tarpaulin (a canvas cloth), out past the security guard.


One night the security guard comes to him and says “Paddy, every night I see you wheel that wheelbarrow out of here, and I’m sure you must be stealing something.  Every so often, I stop you, I lift up the tarpaulin, and I see nothing’s in the wheelbarrow.  Somehow you’ve figured out which nights I’m going to search you for stolen gear.


“Now, it’s my last night on the job, I’m retiring tomorrow, and I don’t feel I owe my old bosses anything, so I’m not going to tell on you – but for my own sanity, I’ve got to know!


“What are you stealing from the job-site?”


Paddy motions the guard close, and whispers in his ear.


“Wheelbarrows, and tarpaulins.”

Flatten and pave; or don’t get infected.

E-Bitz’s article on whether “System Restore” should be used or destroyed when cleaning an infected machine reminds me of the other side of the debate – whether to clean at all.

Bitzie puts up a link to Jesper’s article describing the more academic side of this debate, which says the only thing you can do with an infected machine is to flatten and pave (i.e. delete everything, wipe the drive, and reinstall a new operating system, applications, etc.).

This is great advice in theory, and it certainly is the only way to 99.99% guarantee that you have a machine free from infection.

Woah, wait, did you just say “99.99%”? Shouldn’t that be 100%?

No, because there’s always the possibility of a BIOS-infecting virus – what, you think that only Award and Phoenix get to write BIOS code? There’s PGP and Microsoft, who wouldn’t be able to get efficient and reliable whole-disk encryption going without it.

Then there’s the whole idea of the Virtual Rootkit, which lifts your installed OS into a virtual machine that the rootkit controls, so that even reinstalling from scratch won’t wipe it out, because you’re installing into a hosted environment, not onto the real PC.

So, maybe that’s 99.5% guaranteed if you do the flatten and pave approach to virus recovery.

Oh, but maybe you bought your machine from an OEM – I bought a laptop from Compaq (more gripes about this later) recently, and its only option for system recovery is a recovery partition.  Uh… who’s to say I can’t infect the recovery partition while I’m infecting the main body of the system?

Okay, let’s reduce that to, oh, 98% guaranteed safe.

Then, of course, my computer is of no use without my data. Let’s hope none of that is infected with macro viruses, buffer-overflow exploits or other data-borne infections. The more recent (and therefore more useful) my backup, the more likely that it’s going to contain litter created by the virus.

Better make that 95%.

Hmm… looks like we’re heading for the same sort of recovery rate as if we just install a tool and try to remove the virus and its detritus.

Jesper can’t be this wrong – I’ve met the man, and he’s smart. Almost painfully so – you actually have to think while talking to him.

So I go back and I re-read the column. Carefully. Out loud (remind me to tell you about the tech-support teddy bear some day).

Here’s the good part…


“You can’t trust any data copied from a compromised system. Once an attacker gets into a system, all the data on it may be modified. In the best-case scenario, copying data off a compromised system and putting it on a clean system will give you potentially untrustworthy data. In the worst-case scenario, you may actually have copied a back door hidden in the data.”

“You may not be able to trust your latest backup. How can you tell when the original attack took place? The event logs cannot be trusted to tell you. Without that knowledge, your latest backup is useless. It may be a backup that includes all the back doors currently on the system.”


Woah – Jesper’s saying that you can’t trust your backups… any backups, although he just mentions the latest one (as he says, “how can you tell when the original attack took place?”).

What this article is really saying is that there is no way to make sure an infected system is clean, short of deleting all your applications and all your data, and typing the data back in by hand. As he says, it may be quicker to simply update your resume and leave.

I don’t think Jesper hammered this point home clearly enough.

If you want a clean system, you have two choices:

  1. After discovering an infected machine, wipe it, losing all your applications, data and backups, and do the same to any machine with which the infected one communicated or shared data.
  2. Don’t get infected in the first place – patch when patches become available; protect against common routes of infection; run IPS (Intrusion Prevention Systems); be smart about what you do; run as administrator only when you absolutely can’t avoid it.

The reality (and if you are a business computer user, I hope you already spotted this) is that at some point, the risk of possibly being infected is outweighed by the cost of the data and productivity loss that’s caused by the flatten and pave.


So, the risk analysis view says that you do what you can to avoid getting infected, and when you detect an infection, you pick and choose very carefully what parts of your data you trust.

Error 0x80005000 and DirectoryEntry in .NET

So I’ve got a project that requires I write a web app that checks credentials against Active Directory (an ADAM instance, as it happens).


It doesn’t seem to work, for the longest time.


I’ve got my server’s address set up, I’ve remembered to use the “Distinguished Name” format of the user name, and I have the right password.  I’ve selected the right AuthenticationType, and I still get an exception:


“Unknown exception (0x80005000)”.


Here’s the code that failed:

const string adamServer = "ldap://servername:389/DC=example,DC=com";
const string adamSvcUser = "CN=userName,CN=Roles,DC=example,DC=com";
const string adamSvcPassword = "cwazqa";

protected void subClick(string sUserName, string sPassword)
{
    // Find User in ADAM
    DirectoryEntry root = new DirectoryEntry(adamServer,
        adamSvcUser, adamSvcPassword, AuthenticationTypes.None);
    // … (this is as far as it ever got)
}


I just couldn’t see anything wrong.

I’ll come back and edit this post later with the answer…


EDIT…


Okay, so nobody else saw the answer either – that makes me feel better.


The answer is simply that I put “ldap://”, in lower case, at the start of the adamServer string.  The protocol specifier is case-sensitive.


Who thought that one up?  Is “ldap” really different from “LDAP”?  How?  To what protocol does “LDAP” refer, if not to “ldap”?


So there’s your answer – the string should have been “LDAP://servername:389/DC=example,DC=com” – elements in the string other than “LDAP” are all case-insensitive.
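
For completeness, here’s the corrected fragment (same placeholder server, user and password as before):

// The working version - note the upper-case protocol specifier.
const string adamServer = "LDAP://servername:389/DC=example,DC=com";
const string adamSvcUser = "CN=userName,CN=Roles,DC=example,DC=com";
const string adamSvcPassword = "cwazqa";

protected void subClick(string sUserName, string sPassword)
{
    // Find User in ADAM - this time without the 0x80005000 exception.
    DirectoryEntry root = new DirectoryEntry(adamServer,
        adamSvcUser, adamSvcPassword, AuthenticationTypes.None);
}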

Security Through Obscurity

It’s long been held that “Security Through Obscurity” is no security at all.


Okay, so that’s not exactly true, because of course your password only works because it’s secret – obscured from others; your private key only works because it’s secret; etc., etc.


But these are all “exceptions that prove the rule” in a real sense – they are strings that you make up, or numbers that you choose randomly – and the knowledge that one password or one private key is little better than another is a part of the public review of the algorithms in question.


There are, unfortunately, many other examples of Security Through Obscurity that people don’t realise they are using.


“My operating system / application has far fewer bug reports than yours, so therefore it’s more secure,” is the example that keeps popping into my mind.


Your OS / app may very well be more secure than my choice, but if the only argument you have can be explained away by the observation that “more hackers attack my platform than any other target”, that’s not a winning argument.


Recent Apple vulnerabilities showed this – one vulnerability appeared, got some news coverage, and suddenly it was (a very brief) open season on Apple.


If you want to convince me that I have a long-term chance of security success in your environment, you have to tell me why it’s:


  • technically superior – there are proven useful roadblocks in your environment, that my environment doesn’t have (and that I may need).
  • procedurally superior – that the developers and providers of your environment have documented and enacted processes that turn flaws around faster and better than the developers and providers of my environment.
  • culturally superior – your environment is habitually operated and written for in a manner that mine is not (e.g. “Unix users never run as root, Windows users always do”)

These items are ranked in order of value – a technical superiority will last and last, a procedural superiority is likely to be detected before it changes for the worse, and cultural superiority relies on groups of people continuing to behave the way they do – if we could rely on that, Wayne Newton would still be on top of the hit parade.


Then you also have to persuade me that I can get a better job done, with any re-tooling and re-training costs subtracted from the benefit you allege to provide.