Part 3 – and I promise that’s the lot for now, because it’s starting to look like I’m obsessed or something.
Over the past week or so, you’ve read me talking about vulnerabilities in Firefox’s protocol handlers, and how my perception is that Internet Explorer is not the source of the flaw. A few others have weighed in on the issue in various directions, some at their own blogs, and others shuffling from blog to blog leaving comments.
Now, I think it’s time to look at Internet Explorer.
Some of my readers have suggested that I have been blinkered to what they see as Internet Explorer’s failings in this conversation, and in a sense, they’re right – I’ve been looking primarily at identifying where the actual security vulnerability lies, and deliberately not broadened my inspections to look at where related non-standard behaviour lies.
I’ve quoted RFC 3986 a couple of times, and in this article, it’s worth pointing out that although I believe it is correct that Internet Explorer should not percent-encode the URIs that it passes on to protocol handlers, I also believe that Internet Explorer should not be percent-decoding the URIs passed to protocol handlers.
Sadly, this is not the case – Internet Explorer decodes percent-encoded values, and also has a habit of percent-encoding some URIs on re-display. For instance, the URI “whatnot:hi there” becomes “whatnot:hi%20there” in the address display, even as it’s passed unfiltered to the protocol handler.
Just as Internet Explorer has no way to know what encoding the URI’s author intended, it has no way to know what decoding is intended, and it should feed the URI unvarnished to the protocol handler, for the protocol handler to deal with as it will. At least the developer documentation for writing a protocol handler does state that Internet Explorer decodes percent-encoded values before handing them to the protocol handler – and this is likely a result of noting how few protocol handlers were written by people who read RFC 3986.
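A small illustration of the consequence – sketched in Python with the standard urllib helpers, since the behaviour is easier to show than to describe. Once the browser pre-decodes the URI, the handler can no longer tell encoded data from literal delimiters:

```python
from urllib.parse import unquote

# What the page author wrote into the link:
link = 'whatnot:hi%20there%20%22quoted%22'

# What the handler receives if the browser decodes first
# (the behaviour described above):
handed_to_handler = unquote(link)
assert handed_to_handler == 'whatnot:hi there "quoted"'

# The handler now cannot distinguish quotes the author encoded as
# data from quotes the author intended as delimiters.
```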
What isn’t documented is that in addition to “%1”, there are other substitutions that can be made in the command line. %d and %l (that’s a letter ‘ell’) both appear to be the same as %1, as does %0, confusingly enough. %i gives some kind of identifier, in the form :M:N, where N is the PID of the process in which Internet Explorer is running – I have yet to figure out what M is. %s gives 1, and %h gives 0 – perhaps these indicate whether the handler is to be shown or hidden? Again, these are just guesses, and I have asked Microsoft if they can document these parameters.
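This kind of empirical poking is easy to reproduce: register a do-nothing handler whose only job is to log whatever arrives. A hypothetical sketch (the script path, log filename and the exact set of parameters passed are mine, not documented anywhere):

```python
# Register a command line such as
#   python.exe C:\handlers\log_handler.py "%1" "%i" "%s" "%h" "%d" "%l" "%0"
# under HKEY_CLASSES_ROOT\whatnot\shell\open\command (path and scheme
# name are hypothetical), then visit a whatnot: link and read the log.
import sys

def format_args(args: list[str]) -> str:
    """Render each received argument, bracketed so stray spaces show up."""
    return '\n'.join(f'arg {i}: [{a}]' for i, a in enumerate(args, 1))

def main() -> None:
    with open('handler.log', 'a') as log:
        log.write(format_args(sys.argv[1:]) + '\n')

if __name__ == '__main__':
    main()
```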
So, now we’ve discussed that Internet Explorer decodes percent-encoded values on their way to the protocol handler, and encodes them on their way to the address bar, and I’ve stated my opinion that I think this is wrong. We’ve discussed that it’s documented behaviour, but that Internet Explorer exhibits other behaviour that ought to be documented.
Others have discussed that Internet Explorer does not encode values on their way to the protocol handler, and that they think this is wrong.
First, let me re-iterate that while that is definitely opinion on their part and mine, and I can’t call one definitively right or wrong, I am still going to say that Window Snyder is wrong to assess this behaviour as a critical vulnerability in Internet Explorer.
If Internet Explorer changes its behaviour, it will be as a convenience to the protocol handler developers, with the side effect of possibly protecting users from a small class of bugs in protocol handlers written by people with poor security skills (but not protecting against any number of wider classes of bugs that those developers might make). The critical vulnerability is still in any such exploitable protocol handler.
Many have pointed to Firefox’s quick fixes to the encoding of its handler invocations as an example that Microsoft should make these changes themselves, and that Microsoft’s lack of change indicates a reluctance to address security.
We’ve seen a number of occasions where Microsoft has been quick to address security – and in a situation like this, you can bet that Microsoft staff have been asking the question “should we change this behaviour?”
If security were the only consideration, then making the change is still an unlikely decision – the flaw is not with Internet Explorer, and if you’re going to argue that “defence in depth” suggests that Internet Explorer should accommodate flawed protocol handlers, then you’re going to have to answer the question of whether you’re going to patch this in the TCP/IP stack, the Ethernet drivers, the Linksys/Cisco routers… All of these feature significantly in the path under consideration, and to all of these, the possibility of a malformed URI triggering a vulnerability in a protocol handler is but a minuscule fraction of the work they do.
Every time you change code, you run the risk of introducing related – and unrelated – damage. For Firefox, that risk is relatively small – the code-base is known for being changed at the drop of a hat, and vendors and users aren’t surprised to see weekly patches, some of which will kill functionality that they use. For Internet Explorer, there is a far greater expectation of stability. There is a far larger pool of documentation on its functionality, and if that documented functionality disappears or changes, users and developers call on Microsoft expecting assistance.
Don’t forget, also, that every Windows Firefox user is also an Internet Explorer user, and as Jesper found when delving into the bowels of the most recent Firefox bugs, Firefox on Windows is an Internet Explorer user.
As a result of all these things, Internet Explorer is going to always balance security against compatibility and usability – and where the security problem is external to Internet Explorer, there’s going to have to be a pretty powerful argument in place that Internet Explorer can best address the problem before changes will be made there.
To date, these exploits have centered around one vendor’s code. Should Internet Explorer be pushing a disruptive change to everyone just because that vendor calls IE by rude names in reaction to its own flaws? Should a change be forthcoming as a result of seeing how incompatible the URI behaviour is with RFC 3986?
I don’t think so. Maybe the next time significant disruptive change comes along – IE 8, perhaps. This time, why don’t you all test your apps while it’s still in beta, mm-kay?
I’ve been encouraged to collect together some comments that I’ve made over on other people’s blogs about the firefoxurl: vulnerability.
First, I do have to note with a little embarrassing schadenfreude that Mozilla’s Window Snyder, Chief Security Something-or-other, has acknowledged that Firefox exhibits the same simple parameter passing mechanism as Internet Explorer (curiously without mentioning Jesper’s discovery of it), and that they are researching what to do about this behaviour, which they view as a bug. Note that this is a stark departure from her previous statement (the emphasis below is hers, not mine):
This patch for Firefox prevents Firefox from accepting bad data from Internet Explorer. It does not fix the critical vulnerability in Internet Explorer. Microsoft needs to patch Internet Explorer, but at last check, they were not planning to…
Mozilla recommends using Firefox to browse the web to prevent attackers from taking advantage of this vulnerability in Internet Explorer.
Embarrassing to have to recognise that “using Firefox to browse the web” does not “prevent attackers from taking advantage of this vulnerability”, but that’s something you learn – the more you poke holes in other people’s software, the more you need to be looking for similar holes in your own – you should, at least, check to see that you don’t have exactly the same hole. [Though, as I’ll explain below, I don’t think it’s a hole.]
Now, to poke holes in some arguments:
Claim: URLs need to be percent-encoded before they’re passed along to a protocol handler.

Rebuttal: No, they don’t.
At least, not from an intermediary. RFC 3986 specifically states that encoding should be performed once, by the party creating the URI, and decoding should be performed once, by the party processing the URI. This makes a lot of sense, as doubly-encoding and then singly-decoding, or vice-versa, is liable to lead to all manner of confusing results. Expect encoding at the URI creation point (but allow reasonable non-encoded URIs to pass through), and decode only when you’re about to process the parts of the URIs – and don’t mess with the URI’s content if you don’t natively host the protocol.
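To see why decoding more than once is dangerous, consider data that legitimately contains a percent sign (a Python sketch using the standard urllib helpers):

```python
from urllib.parse import unquote

data = 'a%20b'        # the literal data the author wants to send,
                      # which happens to contain a percent sign
encoded = 'a%2520b'   # correctly percent-encoded exactly once

# Decode once, as RFC 3986 expects: the original data comes back.
assert unquote(encoded) == 'a%20b'

# Decode twice, and the '%' in the data is misread as the start of
# an escape sequence – the data is silently corrupted.
assert unquote(unquote(encoded)) == 'a b'
```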
Claim: Firefox warns the user the first time a protocol handler is launched, so the user is protected.

Rebuttal: Firefox prompts you the first time you activate a protocol handler, and offers you the opportunity to dismiss the dialog. So, the attacker can simply shoot off “firefoxurl:happy_place”, in the anticipation that you’ll approve the dialog, and then follow it up with “firefoxurl:nasty_stuff” because that will automatically be approved. The dialog tests only for the binary measure of “do I implicitly trust everything this protocol handler will do for any page I visit?” It’s a convenience feature, not a security measure.
Claim: Parsing the command line into separate arguments – double quotes and all – is something every operating system does for the program.

Rebuttal: No, they don’t. Not even all Unixes do this.
Here’s how I put it in one or two comments:
If you choose C or C++ as your language of choice, for instance, the command line string as a whole is parsed in the CRT library routine parse_cmdline(), which (if you have Visual Studio installed) is in %ProgramFiles%\Microsoft Visual Studio 8\VC\crt\src\stdargv.c
If you use assembler, or write your program for Win32 (with a start function of WinMain instead of main), you’ll be given the command line as a single string from first character entered to the final character provided by your caller.
Double-quote processing is a feature of C and C++, NOT of the Windows executable calling mechanism.
To put it another way, double quotes are only special to Firefox because Firefox’s programmers chose to treat them specially. As such, it’s their responsibility to ensure that they are handled correctly when faced with data provided by untrusted third parties.
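To make that concrete, here’s a simplified sketch (in Python, deliberately ignoring the CRT’s backslash subtleties) of the kind of quote-splitting that parse_cmdline() performs – and why an attacker-controlled “%1” can smuggle extra arguments into a handler that trusts its argv:

```python
def split_cmdline(cmdline: str) -> list[str]:
    """Naive model of CRT-style splitting: quotes toggle a 'verbatim'
    mode and then vanish; spaces outside quotes separate arguments."""
    args, current, in_quotes = [], [], False
    for ch in cmdline:
        if ch == '"':
            in_quotes = not in_quotes
        elif ch == ' ' and not in_quotes:
            if current:
                args.append(''.join(current))
                current = []
        else:
            current.append(ch)
    if current:
        args.append(''.join(current))
    return args

# If attacker data containing quotes lands inside "%1", the runtime's
# splitting manufactures extra arguments the handler never expected:
assert split_cmdline('firefox -url "good" -dangerous "stuff"') == \
    ['firefox', '-url', 'good', '-dangerous', 'stuff']
```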
Claim: Internet Explorer shouldn’t be handing the whole URL over to the protocol handler.

Rebuttal: This is actually the way the protocol handler mechanism is documented. The entire URL is passed to the protocol handler as a replacement for the %1 parameter.
Claim: The assumption with commandline parameters is that they come from the user, and are thus fully trusted.
Rebuttal: This is not merely a command line – it is a declared and documented handoff of untrusted data coming from a remote and untrusted third party, not the OS, and not the user, but a potential hacker.
When Firefox registers firefox -url "%1" as a protocol handler, its programmers have declared that they are aware that anything coming through in “%1” is untrusted and unfiltered data, potentially from a hacker. If they choose to fully trust that, then they are either asleep at the switch, or not aware of security concerns.
It’s vital for the protocol handler to see the “-url” argument as indication that everything following it is suspect. The first double-quote should not be taken as a sign that “%1” is over – the last double-quote before the end of the command line is that indicator.
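A sketch of that rule (hypothetical function name, Python for brevity): everything between the quote after “-url” and the last quote on the command line is one untrusted blob, no matter how many quotes appear inside it:

```python
def extract_url(cmdline: str) -> str:
    """Treat everything from the quote after -url up to the LAST
    double-quote on the command line as a single untrusted argument."""
    marker = '-url "'
    start = cmdline.index(marker) + len(marker)
    end = cmdline.rindex('"')   # the last quote, not the next one
    return cmdline[start:end]

# Embedded quotes stay inside the data instead of ending it early:
cmd = 'firefox -url "foo" -bar "baz"'
assert extract_url(cmd) == 'foo" -bar "baz'
```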
Claim: This is a bug in Internet Explorer and Firefox, and at least Mozilla is producing a fix for it.
Rebuttal: Okay, let’s take a look at Mozilla’s fix. They’ve added code to percent-encode double quotes, and after a little discussion, to also percent-encode spaces. Not a security-savvy way of addressing this problem – if you’re going to start messing with that URL, then use the “known good” principle. [That means that, if you know that alphanumerics and a few symbols are ‘good’, and don’t need to be encoded, then everything else should be encoded.]
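Here’s roughly what the “known good” principle looks like in practice – a Python sketch, where the character set shown is RFC 3986’s unreserved plus reserved characters, and both the function name and the exact set are illustrative, not Mozilla’s code:

```python
# Allowlist: characters known to be safe pass through; EVERYTHING
# else is percent-encoded, rather than encoding a handful of
# characters known to be bad.
KNOWN_GOOD = set(
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
    "-._~:/?#[]@!$&'()*+,;="
)

def encode_unknown(uri: str) -> str:
    return ''.join(c if c in KNOWN_GOOD else '%%%02X' % ord(c)
                   for c in uri)

assert encode_unknown('whatnot:hi there"') == 'whatnot:hi%20there%22'
```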
It’s also not, as I pointed out above, an RFC-compliant fix, because now the URI is doubly-encoded, and then singly-decoded. [Perhaps this is why the Mozilla team fixing this bug chose to only encode a couple of characters? To forestall possible backward compatibility issues?]
It’s not minimally invasive, and it still doesn’t address the issue that a protocol handler might be written by someone who forgot to think about the security implications of accepting third-party input. I’ve thought up dozens of ways in which you could write a dorky protocol handler that was open to attack, and none of them can be, or should be, addressed by the browser that calls them.
Claim: You can’t comment in an unbiased fashion, because you’re a Microsoft MVP.
Rebuttal: Can you not do any better than that? Seriously, attack the message, not the messenger. The MVP award is a retrospective award, given for the previous year’s contributions to educating and assisting the community of Microsoft users. Since there are no published criteria, and the criteria that are known change from year to year, it’s useless to “try” to be an MVP, so I don’t behave in any special way to continue my MVP-ness. I certainly don’t kow-tow to Microsoft.
In summary, this isn’t a bug – it works as designed and as documented, and the design does not carry with it any ability for bad behaviours that you shouldn’t already be trying to handle anyway. Mozilla’s previous statements have backed them into a corner where they feel they have to “fix” the bug. The mature thing to do would be to analyse whether they really have to change behaviour, or whether that change in behaviour is more likely to generate problems than solutions.
That’s all I have time and energy for tonight – maybe there will be more soon.
Heard about the firefoxurl vulnerability?
It turns out that you can exploit Firefox by having Internet Explorer visit a link to a URL that starts with “firefoxurl:” (and a bunch of other code). [Assuming you have Firefox on your computer along with Internet Explorer]
This is because Internet Explorer blindly accepts and passes the entire contents of the URL to the handler for the firefoxurl URL type – that handler, as the URL scheme name implies, is Firefox. It’s also because Firefox can be exploited by command-line parameters, because Firefox’s protocol is handled by interpreting a command-line, and because Firefox interprets the command-line provided to it as if it is always well-formed.
There’s been a lot of discussion about whose problem this is, and where it needs fixing. Jesper’s a friend of mine, and I’m a fan of his, so I’d like to point to his posts on the discussion so far, here and here.
A number of people have made references to RFC 1738, and its description of which characters must, and may, be encoded in a URL. That’s all very interesting, if you’re engaged in academic discussion of how to create a URL, as the originator, or how to process it, as a consumer.
In this case, the discussion as to whether IE has a flaw should be centered on how much work an intermediary party should do when given something that is alleged to be a URL, before it hands it off to a third party for actual handling.
This makes this intermediary (Internet Explorer in the original exploit, but … are there others?) behave like a proxy for such protocol handlers, rather than a consumer or provider of the URL as a whole.
I’m sure we’d have heard a different tale if Microsoft’s Internet Explorer team had chosen to limit the set of characters that can be passed through to an underlying handler; instead we’d hear “why does my protocol handler have to interpret encoded character sequences? They weren’t encoded in the link, and there’s no reason for IE to encode them!”
As Markellos Diorinos, IE Product Manager, points out in the IEBlog, it’s not just the presence of uncomfortable quote characters that the protocol handler will have to cope with, it’s buffer overflows, invalid representations, and out-of-spec protocol portions of varying kinds. IE can’t possibly know all the things that your application might find uncomfortable, versus all the things that your protocol may need, so it doesn’t try to guess, or limit the possible behaviours of the protocol handler.
In short, IE does what any interface between transport layers does – it strips off the header (“firefoxurl:”), and passes the rest uninterpreted to the next layer. It is IE’s job, in this case, only to identify (from the scheme specifier) which protocol handler to fire up, and to pass its parameters to it.
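That hand-off really is that small – a sketch (Python, hypothetical function name) of the only parsing the intermediary performs: split off the scheme at the first colon, use it to look up the handler, and pass the rest through untouched:

```python
def split_scheme(uri: str) -> tuple[str, str]:
    """Split a URI into (scheme, everything-after-the-colon).
    Scheme matching is case-insensitive; the remainder is opaque."""
    scheme, _, rest = uri.partition(':')
    return scheme.lower(), rest

# Nothing after the colon is interpreted, decoded, or filtered:
assert split_scheme('firefoxurl:anything "goes" %22here%22') == \
    ('firefoxurl', 'anything "goes" %22here%22')
```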
Perhaps you think that’s not defence in depth – but then, defence in depth is not about enforcing the same defence at several layers, it’s about using knowledge specific to each layer to protect against attacks within each layer. Sometimes those protections are redundant, but unless there is different knowledge in that redundancy allowing the layers to do different defence work, there is little value to redundancy for redundancy’s sake.
Yes, the IE team could have decided that they’d enforce URL standards that were not being followed by the upstream provider (in this case, the creator of the link), and enforce them on the portion passed to the downstream, but such approaches tend to limit the flexibility of the protocol.
IE’s responsibility is to ensure that any URL that comes to it does not trigger a vulnerability in IE, that any URL that comes from it conforms to RFCs, and that any information that is supposed to pass unmolested through it actually passes unmolested.
It’s just a matter of some amusement that when Mozilla’s Window Snyder, Chief Security Something-or-other, called out this lack of extra preprocessing as a specific vulnerability in Internet Explorer, she did not think to confirm first that Firefox itself did not exhibit the same behaviour. I will be interested to see how they address this – whether they will ‘fix’ the behaviour, and if they do, what the resulting impact will be on compatibility with existing protocol handlers whose programmers assumed that their data would arrive unmolested, as documented, and who have already taken appropriate security measures to cope with this (such as treating everything past the beginning of the user data as nothing other than untrustworthy user data).
Finally, as a nod to my own past as a nit-picker of RFCs, here’s what RFC 3986, which obsoletes the generic URL specification portions of RFC 1738, has to say about intermediaries in the URI handling stream:
The URI syntax is organized hierarchically, with components listed in order of decreasing significance from left to right. For some URI schemes, the visible hierarchy is limited to the scheme itself: everything after the scheme component delimiter (“:”) is considered opaque to URI processing. Other URI schemes make the hierarchy explicit and visible to generic parsing algorithms.
That suggests that a generic URI processor (such as a forwarding proxy) should see the URI after the scheme component as “opaque to URI processing” – in other words, that the processor should assume it can understand nothing about, and therefore should not inspect, the part after the colon.
Further down in the document:
Under normal circumstances, the only time when octets within a URI are percent-encoded is during the process of producing the URI from its component parts. This is when an implementation determines which of the reserved characters are to be used as subcomponent delimiters and which can be safely used as data. Once produced, a URI is always in its percent-encoded form.
When a URI is dereferenced, the components and subcomponents significant to the scheme-specific dereferencing process (if any) must be parsed and separated before the percent-encoded octets within those components can be safely decoded, as otherwise the data may be mistaken for component delimiters.
…Implementations must not percent-encode or decode the same string more than once, as decoding an already decoded string might lead to misinterpreting a percent data octet as the beginning of a percent-encoding, or vice versa in the case of percent-encoding an already percent-encoded string.
Clearly, if Internet Explorer (or any other web browser that supports this kind of protocol pass-through technique) were to encode characters that are not supposed to be in a URL, it would fall afoul of this definition in the usual case, by encoding “the same string more than once”, once at preparation by a conformant URI provider, and once again as it passed through IE.
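The arithmetic of that failure is easy to demonstrate (a Python sketch with the standard urllib helpers): if the intermediary re-encodes a URI that a conformant author already encoded, the handler’s single, correct decode no longer yields the author’s data:

```python
from urllib.parse import quote, unquote

# A conformant author has already encoded the space, exactly once:
from_author = 'hi%20there'

# If the browser percent-encoded it again in passing...
re_encoded = quote(from_author, safe='')
assert re_encoded == 'hi%2520there'

# ...then the handler's single RFC-compliant decode gets back the
# encoded form, not the data:
assert unquote(re_encoded) == 'hi%20there'   # not 'hi there'
```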
IE’s best bet for compatibility and future extensibility (as well as compliance with current RFCs) is to not inspect or modify the scheme-specific component of any URI unless it is handling that URI itself.
“I’d also advise he configure his router to stop broadcasting the SSID altogether.”
That’s an excellent addendum. The SSID is network-name data your router transmits at an interval. Disabling it … is indeed a good idea, once you’ve configured the clients you wish to allow access.
No, no, no, no, no.
Don’t disable SSID broadcasting – at least, not for security reasons.
Think about it. Let’s say you’re setting up a blind date between two people. One of them is frail and small and is in serious danger of being a victim; the other is strong and beefy, and capable of serious self-protection.
Which one are you going to suggest should walk into the bar and yell out an identifying name, waiting for the other person to recognise that name and start talking?
Okay, so it’s a crude analogy, but bear with me…
Because if you configure your wireless access point or router to hide its SSID, then you’re going to have to configure all of your wireless clients – desktops and laptops, printers, etc – to broadcast the SSID whenever they need to create a connection. [And your roaming laptops are now going to give away your SSID not only to people in your neighbourhood, but people anywhere you take your laptop to. Hardly a security measure!]
The answer, of course, is to have an exchange of authenticating information (are you really Bobby or a serial killer pretending to be him?) – in the blind date example, you’d want both sides to see drivers’ licences to verify their identities. In the wireless example, you insist at the very least that your router and your devices use a pre-shared key for encrypted data exchange (that’s WEP, or the rather stronger WPA-PSK), or that they use 802.1X with certificate-based cryptography (that would be WPA Enterprise).
Only use WEP if you can’t use WPA – if you have some devices that haven’t been upgraded to support WPA – and only use WPA if you can’t use WPA2. WEP can be cracked in minutes by a determined attacker with sophisticated tools, but the truth is that you don’t meet many of those.
Immunity, which buys but does not disclose zero-day bugs, keeps tabs on how long the bugs it buys last before they are made public or patched. While the average bug has a lifespan of 348 days, the shortest-lived bugs are made public in 99 days. Those with the longest lifespan remain undetected for 1,080 days, or nearly three years, Aitel said.
“Bugs die when they go public, and they die when they get patched,” she said.
So, by “buying but not disclosing” these bugs, they’re preventing bugs from dying, by that logic – the only avenue left is for the bugs to get patched. Does that mean Immunity is deliberately keeping bugs alive?
Fortunately, no, because although Immunity’s business model is based largely around keeping the rest of the world in the dark about new vulnerabilities as they notify their customers how to protect against them, it does seem from Microsoft’s Security Bulletins as though they report these vulnerabilities to Microsoft.
I hope Microsoft didn’t have to become a customer in order to get that information, though, because that would mean that it’s Immunity’s policy to keep bugs alive.
What happens if they find a bug in my software? Will I be told?
I’m playing with BitLocker a little, and I need a small temporary partition to encrypt and decrypt on a frequent basis.
No problem, right? I can just open up Computer Management, select Storage, Disk Management, and then shrink a volume that has lots of space. [I can do the same with “diskpart” from the command line, if I choose to]
Oh, now, that’s just perfect – I can’t shrink my partition, and even if I do, I’ll end up wiping out the existing partition?
Okay, so I realise that it’s not likely to be quite that severe, but there’s a little work that needs to be put into the disk partition shrink mechanism in Windows Vista.
First, obviously, edge cases like the one above need to be handled properly.
Second, there needs to be an option that informs the administrator as specifically as possible what limited the shrink operation – which immovable file is sitting on the boundary of the maximum shrink area. That way, I can decide what the problem is – it’s not the hibernation file (because I’ve deleted that), and it’s not the pagefile (again, deleted); it’s not even volume shadow copies, because I’ve disabled System Restore.
Steve Riley posts on a topic he discussed at Tech-Ed – protecting the data, because everything else is just plumbing.
He has a point – after all, the thing most needing securing on your system is your data – the hardware, OS and tools can all be replaced at nominal cost (generally by buying a new machine, and installing from the original disks if you have them, or buying replacements), compared to the cost of replacing the data (and dealing with lost customer confidence, regulatory action, etc).
However, technological security boundaries are generally designed to prevent users from knobbling the system and its applications, rather than preventing the users from knobbling their own files. This does provide some protection between users, so user A can’t kill user B’s files unless they previously agreed to share.
But it doesn’t protect against user A killing user A’s files.
Perfect backups – maintaining every bit that had ever been on the system – are a little extreme, even if you could achieve that state physically, with huge storage requirements.
“Previous Version” (aka Shadow Copy) support goes a long way to providing for a functional “time warp” file system, where users can recover their own data from corruption – this functionality is in Windows XP, Server 2003 and Windows Vista (presumably also in Windows Server 2008, but no plans are solidified until the OS ships).
So, that’s recovery – but what about prevention? Other than today’s outdated-by-the-time-you-download-them antivirus programs, what good security measures do we have to protect the user from the unexpected consequences of processes running in their own security context?
Education, awareness and training count highly in that area – by convincing your users that they should be aware that the data they work with is a valuable commodity, and should be handled with some caution.
But there’s really little you can do from a technological standpoint to distinguish between a user’s request to modify or delete a file, and a virus acting on that user’s behalf.
Business rules are great for enforcing ‘common sense’ on your data at work, but who wants to set “business rules” up for home use? What about all those ‘unsupported applications’ – the Excel spreadsheets replete with macros, the Access databases built by the sales team, any scripts put together for a specific purpose, and then used year after year without any thought to modification for reliability?
Consider reliability and the application of common sense to be a part of security, and remind your users that it is within their skill to do the same. Even when every piece of software deployed in your business is controlled by Group Policy, and every technological measure has been applied, you still need to give your users the tools, the education, and the reasons, to keep your systems’ data secure.