It’s now officially 2006, at least in UTC – so I’m going to ask the following simple question:
What did you do with your extra leap second this year?
In related news, it’s worth noting that the US government is pushing for the leap second to be abolished.
I haven’t looked into it, but this seems to be really stupid. Most devices I’ve used fall into one of three categories:
Inaccurate: These devices (or their owners / administrators) can’t keep time to anything like a second over a year, so they aren’t going to notice a change of a second, whether it’s added at the end of the year or phased in throughout the year by making seconds longer.
Remotely set: I got one of these this year – a watch that receives a signal from NIST giving the current date and time. Owners of these devices don’t have to do anything, because NIST keeps the signal updated with leap seconds.
Highly accurate: There are some devices – atomic clocks and the like – that need to maintain accurate time because they control or monitor astronomically important things, such as telescopes. Those machines need to know the time in relation to the earth’s progression through the solar system, so presumably they make good use of leap seconds to ensure that the time they display is always close to the visible local time.
So, where’s the fuss? What could it possibly benefit to kill the leap second? Is the leap second really causing anyone any confusion? I’d love to know.
I just noticed a new pair of functions in the Win32 API for Windows Server 2003 – FindFirstStreamW and FindNextStreamW. These are interesting from a security perspective, if for no other reason than that alternate data streams are useful places to hide data.
Before these functions existed, a programmer had to open the file with “backup semantics”, and work his way laboriously through the structure of the streams within the file in order to find out what streams it contains. Here’s the sort of code you had to write:
int EnumStreams(const WCHAR *file, StreamEnumProc *func, void *funcarg)
{
    WCHAR wszStreamName[sizeof(WIN32_STREAM_ID)/sizeof(WCHAR)+MAX_PATH];
    WIN32_STREAM_ID &wsId=*((WIN32_STREAM_ID *)wszStreamName);
    DWORD dwRead, dwStreamHeaderSize, dw1, dw2;
    BOOL bResult;
    LPVOID lpContext=NULL;
    HANDLE hFile=CreateFile(file, GENERIC_READ, FILE_SHARE_READ,
        NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, 0);
    if (hFile==INVALID_HANDLE_VALUE) return -1;
    // The fixed header runs up to, but not including, cStreamName.
    dwStreamHeaderSize=(DWORD)((LPBYTE)wsId.cStreamName-(LPBYTE)&wsId);
    for (;;)
    {   // Read the next stream header, then the stream's name.
        bResult=BackupRead(hFile, (LPBYTE)&wsId, dwStreamHeaderSize,
            &dwRead, FALSE, TRUE, &lpContext);
        if (!bResult || dwRead!=dwStreamHeaderSize) break;
        bResult=BackupRead(hFile, (LPBYTE)wsId.cStreamName,
            wsId.dwStreamNameSize, &dwRead, FALSE, TRUE, &lpContext);
        if (!bResult || dwRead!=wsId.dwStreamNameSize) break;
        func(file, &wsId, funcarg);
        if (wsId.Size.LowPart || wsId.Size.HighPart)  // skip the stream's data
            BackupSeek(hFile, ULONG_MAX, LONG_MAX, &dw1, &dw2, &lpContext);
    }
    BackupRead(hFile, NULL, 0, &dwRead, TRUE, FALSE, &lpContext);  // free context
    CloseHandle(hFile);
    return 0;
}
Since the FindFirstStreamW/FindNextStreamW functions are only in Windows Server 2003, you’ll still have to do something like that mess on Windows XP and earlier systems.
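By contrast, on Windows Server 2003 the whole enumeration collapses into a simple find-loop. Here’s a minimal sketch of how I’d expect the new functions to be used – in Python via ctypes for brevity, with the structure layout and the FindStreamInfoStandard constant taken from the Platform SDK documentation; the function name list_streams is my own, and this obviously only runs on Windows:

```python
# Sketch: enumerate a file's streams via FindFirstStreamW/FindNextStreamW.
# Windows-only (Server 2003 or later); struct layout per the Platform SDK.
import ctypes
import sys

MAX_PATH = 260

class WIN32_FIND_STREAM_DATA(ctypes.Structure):
    _fields_ = [("StreamSize", ctypes.c_longlong),
                ("cStreamName", ctypes.c_wchar * (MAX_PATH + 36))]

def list_streams(path):
    """Return a list of (stream_name, size_in_bytes) pairs for `path`."""
    if sys.platform != "win32":
        raise OSError("FindFirstStreamW exists only on Windows")
    k32 = ctypes.windll.kernel32
    k32.FindFirstStreamW.restype = ctypes.c_void_p
    data = WIN32_FIND_STREAM_DATA()
    # 0 is FindStreamInfoStandard; the final argument is reserved.
    handle = k32.FindFirstStreamW(path, 0, ctypes.byref(data), 0)
    if handle == ctypes.c_void_p(-1).value:  # INVALID_HANDLE_VALUE
        return []
    streams = []
    try:
        while True:
            streams.append((data.cStreamName, data.StreamSize))
            if not k32.FindNextStreamW(ctypes.c_void_p(handle),
                                       ctypes.byref(data)):
                break
    finally:
        k32.FindClose(ctypes.c_void_p(handle))
    return streams
```

Every file shows at least its unnamed data stream, reported as “::$DATA”; any extra entries are alternate data streams.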
There are still no tools, however, in the base operating system that allow an IT Professional to search for alternate data streams that might be attached to files on his workstation. So, a while back, I created “sdir”, a program that allows you to list alternate data streams on a file or a directory, or recursively through a directory tree. You can find it at http://www.wftpd.com/downloads.htm
Several people have suggested that alternate data streams (or ADS, as they are often called) are ideal for infecting a system in a way that the virus scanner will not find. That’d be true, except for a couple of things. First, the virus still has to arrive on the system through a channel that lets virus scanners look for signatures – a download, which doesn’t carry a hidden stream, or an email, which doesn’t carry a hidden stream, and so on. Second, virus scanners have already added ADS scanning to their repertoire.
Reviewing the security for another application today, I find that it relies on Digest Authentication, which is a horrible thing to do to a secure system.
Why is that? Because it requires that you enable the check-box labeled “Store Passwords Using Reversible Encryption” (and once you’ve done so, any users who want to use Digest Authentication have to change their passwords, so that the new passwords can be stored in decryptable form).
This is such a horrible thing to do that Microsoft frequently refers to this as storing passwords “in plaintext”. There’s really not much difference – anyone who can get access to the encrypted store will be able to decrypt the passwords.
Fortunately in IIS 6, along comes Advanced Digest Authentication. Now, this is not exactly described very clearly, and in some cases, the description says some really bad things – one description I found implies that this method hashes the user name, domain, and password, and then waits for the browser to send exactly that same hash in order to identify itself.
Fortunately, that’s not the case – the people in IIS are not idiots. What appears to be the case, and it’s almost impossible to find documentation backing this up (probably because of the “Not Invented Here” syndrome), is that what Microsoft terms “Advanced Digest Authentication” is nothing more complicated than the MD5-SESS Digest Authentication described in RFC 2617.
That does hash the username, password and domain name (or realm, to use the proper term), and stores that hash at the server. But that hash is not what the server looks for from the client. The client takes the hash, appends a nonce provided by the server and one provided by the client, and hashes that string.
This process of “take a hash, add something random to it, and hash it again” is a fairly common procedure in security protocols, and is designed to avoid replay attacks while simultaneously avoiding the use of stored passwords.
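The MD5-sess calculation is short enough to sketch out. The concatenation order below follows RFC 2617 (the nc, qop, and HA2 pieces come from the full request digest); the helper names are my own:

```python
# Sketch of the RFC 2617 MD5-sess digest calculation (helper names are mine).
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def stored_hash(username, realm, password):
    # This is all the server needs to keep - MD5(username:realm:password) -
    # so the reversibly-encrypted password store goes away.
    return md5_hex(f"{username}:{realm}:{password}")

def md5_sess_response(h_user, nonce, cnonce, nc, qop, method, uri):
    # Fold the server nonce and client nonce into a session key, then
    # hash again with the request details, per RFC 2617 section 3.2.2.
    ha1 = md5_hex(f"{h_user}:{nonce}:{cnonce}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")
```

Note that md5_sess_response takes the stored hash rather than the password – the server never needs the password itself after the hash is first computed.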
This is great… with one caveat: the hash of the username, password and realm is essentially the password to that realm. If you were the sort of nasty person who could get hold of the reversibly encrypted password and decrypt it, you could just as easily get hold of the hash – and that’s all you need to generate the Advanced Digest Authentication message.
All is not lost, though – this only allows an administrator of the realm to get access to his own realm as if he were one of his own users. The “rogue administrator” problem is one that doesn’t have a good solution (except for “trust your administrators not to go rogue”), and is rightly treated as a problem not worth investigating for most systems.
What was allowed under the old Digest Authentication is that the administrator could fetch the clear-text version of the user’s password, which is almost certainly the same as that user’s password on another system. Now this is a problem worth tackling, and the Advanced Digest Authentication method adequately prevents this from occurring. The administrator can only fetch a hash, and that hash is no use outside of his domain.
Oh, and those hashes are still only generated when the password is created, set, or changed – so if you change the realm, all users have to change their passwords again. I’m not quite sure whether you have to do this when enabling the Advanced Digest Authentication feature.
Programmers are, by nature, a very arrogant bunch. We know this – and it comes from the nature of what we do. In our own little world inside the computer, we are a god.
For this reason (and perhaps a few others), it becomes very easy for us to forget to think outside of our little world, and remember that we are also acting as servant to the people that own the box we’re writing our software for.
This is particularly true of developers of off-the-shelf software, who spend next to no time actually dealing with the people that use, or will use, their programs.
So, I’m going to start a topic on the arrogance of developers.
My first example covers multiple offenders – Real Networks, Apple QuickTime, and several other programs insist on placing their icons in the system tray, down in the bottom right-hand corner with the clock.
Now, if these were just icons, that would not be such a bad thing – after all, your system is littered with icons that represent shortcuts, data files, executables, and so on.
Unfortunately, the icons in the system tray are special. Each one represents a running program. Each one is placed there by a programmer who believes that his or her program is so important to all users that it should remain permanently running.
Me, I play a QuickTime movie, or a Real Audio file, about once every couple of months. It can be a month or more before I notice that the icon is on my system tray, taking up memory and processor time, communicating who knows what, and exposing goodness knows how many application-related flaws to the Internet.
So, a plea to developers – unless your software positively needs to run all the time in every possible installation mode, make it go away when I’m done using it.
No offence, but I’m just not that into your program.
[I was going to title this “PATRIOT – Piddling Around The Real Issues Of Terrorism”, but I figured that’d be a little too inflammatory.]
The other day, I was listening to good-old-fashioned talk radio, and something the host said surprised me. He was blathering about how Democrats wanted to make friends with terrorists.
It sounds really stupid when you put it in those terms, but yes – that’s essentially the approach that has to happen. Like a pyramid scheme, the terrorists at the top feed hatred down, and get power back up the chain. While that feed of hatred is accepted by their “down-line”, the feed of power up the line continues. You don’t stop terrorism by making friends with the guys at the top, you stop terrorism by making nice to the guys at the bottom; you remove the power-base by making it difficult for people to hate you.
So, how does that remotely connect to the usual topic of this blog, computer security? Like this:
Vendors [think Microsoft, but it also applies to small vendors like me] face this sort of behaviour, on a smaller level, when it comes to vulnerability reports. Rightly or not, there’s a whole pile of hatred built up among some security researchers against vendors, initially because over the years vendors have ignored and dismissed vulnerability reports on a regular basis. As a result, those researchers believe that the only way they can cause vendors to fix their vulnerabilities is to publicly shame the vendors by posting vulnerability announcements in public without first contacting the vendor.
I’m really not trying to suggest that vulnerability researchers are akin to terrorists. They’re akin to an oppressed and misunderstood minority, some members of which have been known to engage in acts which are inadvertently destructive.
Microsoft and others have been reaching out of late to vulnerability researchers, introducing them to the processes that a vendor must take when receiving a vulnerability report, and before a patch or official bulletin can be released. Some researchers are still adamant that immediate public disclosure is the only acceptable way; others have been brought over to what I think is the correct way of thinking – that it helps protect the users if the first evidence that exists in public is a bulletin and a patch.
The security industry gets regularly excited by the idea of a “zero-day exploit” – a piece of malware that exploits a vulnerability from the moment that the vulnerability is first reported. I think it’s about time we got excited about every release of a “zero-day patch”.
How many different classifications of document should you have?
The answer: two.
Documents should be “public” or “private”.
Public documents need not necessarily be published, but they contain information that is not important to keep from the public. In fact, any document that has been published is already public, no matter what you’d like it to be.
Private documents should be attached to an explicit or implicit list of people who are entitled to view them, and there should be policies, procedures, practices and phreakin’ ACLs in place to make sure that their privacy is not broken.
Can you think of a document secrecy category that isn’t covered by this?
So, I started my new job last week.
I spent much of the first week trying to stop the “message waiting” light from flashing. I knew what I had to do – call the voice-mail system, listen to all the old messages and dump them.
So, I press the button for voice-mail and get an alternating tone. What does that mean? Does it mean I’m in the voice mail system? Does it mean “enter your password”? I have no idea, so I enter my password, and it makes a different beep, so maybe that means “no, wrong password”.
I go to the “self-help” page, and the “phone training” pages. They disagree as to which is the default password. Great.
Now I have to do the thing I hate – I have to call the help-desk. So I call, and I let them know what the problem is. I give them my email account and all the other information that they need.
Finally, I come into work after the weekend, and I think I’ve figured it out. I leave the voice-mail button alone, and dial the voice-mail extension by hand. This time, it says something like “welcome to the voice-mail system, please enter your password”.
Seventeen messages later, fifteen of which are from before I started at the company, I reach the cracker. A message from the help-desk, telling me that maybe my voice-mail button isn’t programmed yet, and detailing the default password. They end by telling me “if you are still unable to access your voice-mail, please call the help-desk”.
I call the help-desk in return, and suggest that when people are having trouble with the phone system, that the phone system is not necessarily the best method of contacting them.