UAC – The Emperor’s New Clothes

I heard a complaint the other day about UAC – User Account Control – that was new to me.

Let’s face it, as a Security MVP, I hear a lot of complaints about UAC – not least from my wife, who isn’t happy with the idea that she can be logged on as an administrator, but she isn’t really an administrator until she specifically asks to be an administrator, and then specifically approves her request to become an administrator.

My wife is the kind of user that UAC was not written for. She’s a capable administrator (our home domain has redundant DCs, DHCP servers with non-overlapping scopes, and I could go on and on), and she doesn’t make the sort of mistakes that UAC is supposed to protect users from.

My wife also does not appreciate the sense that Microsoft is using its users as a fulcrum, applying leverage to push developers toward writing code for non-admin users. She doesn’t believe the vendors will change as a result of this – the only effect will be that users get annoyed.

But not me.

I like UAC – I think it’s great that developers are finally being forced to think about how their software should work in the world of least privilege.

So, as you can imagine, I thought I’d heard just about every last complaint there is about UAC. But then a new one arrived in my inbox from a friend I’ll call Chris.

“Why should I pretend to be different people to use my own PC?”

I must admit, the question stunned me.

Obviously, what Chris is talking about is the idea that you are strongly “encouraged” (or “strong-armed”, if you prefer) by UAC to work in (at least) two different security contexts – the first, your regular user context, and the second, your administrator context.

Chris has a point – you’re one person, you shouldn’t have to pretend to be two. And it’s your computer, it should do what you tell it to. Those two are axiomatic, and I’m not about to argue with them – but it sounds like I should do, if I’m going to answer his question while still loving UAC.

No, I’m going to argue with his basic premise that user accounts correspond to individual people. They correspond more accurately – particularly in UAC – to clothing.

Windows before NT – or, more accurately, the versions not based on the NT line – had no separation between user contexts or accounts. Even the logon was a joke: it prompted for a user name and password, but if you hit Escape instead, you’d be logged on anyway. Windows 9x and ME, then, were the equivalent of being naked.

In Windows NT, and the versions derived from it, user contexts are separated from one another by a software wall, a “Security Boundary”. There are a few different levels of user access, the most common distinction being between a Standard (or “Restricted”) User, a Power User, and an Administrator.

Most people want to be the Administrator. That’s the account with all the power, after all. And if they don’t want to be the Administrator, they’d like to be at least an administrator. There’s not really much difference between the two, but there’s a lot of difference between them and a Standard User.

Standard Users can’t set the clock back, they can’t clear logs out, they can’t do any number of things that might erase their tracks. Standard Users can’t install software for everyone on the system, they can’t update the operating system or its global settings, and they can’t run the Thomas the Tank Engine Print Studio. [One of those is a problem that needs fixing.]

So, really, a Standard User is much like the driver of a car, and an administrator is rather like the mechanic. I’ve often appealed to a different meme, and suggested that the administrator privilege should be called “janitor”, so as to make it less appealing – it really is all about being given the keys to the boiler room and the trash compactor.

It’s about wearing dungarees rather than your business suit.

You wear dungarees when working on the engine of your car, partly because you don’t want oil drops on your white shirt, but also partly so your tie doesn’t get wrapped around the spinning transmission and throttle you. You don’t wear the dungarees to work, partly because of the respect you’d lose for the way you look, but also because you don’t want to spread that oil and grease around the office.

It’s not about pretending to be different people, it’s about wearing clothes suited to the task. An administrator account gives you carte blanche to mess with the system, and should only be used when you’re messing with the system (and under the assumption that you know what you’re doing!); a Standard User account prevents you from doing a lot of things, but the things you’re prevented from doing are basically those things that most users don’t actually have any need to do.

You’re not pretending to be a different person, you’re pretending to be a system administrator, rather than a user. Just like when I pretend to be a mechanic or a gardener, I put on my scungy jeans and stained and torn shirts, and when I pretend to be an employee, I dress a little smarter than that.

When you’re acting as a user, you should have user privileges, and when you’re acting as an administrator, you should have administrative privileges. We’ve gotten so used to wearing our dungarees to the board-room that we think they’re a business suit.

So while UAC prompts to provide an administrator account aren’t right for my wife (she’s in ‘dungarees mode’ when it comes to computers), for most users they’re a way to remind you that you’re about to enter the janitor’s secret domain.

Silently fixing security bugs – how dare they!

Over in “Random Things from Dark Places”, Hellnbak posts about reducing vulnerability counts by applying the SDL (Security Development Lifecycle), and makes the very reasonable point that vulnerabilities found prior to release by a scan that is part of the SDL process cannot be counted as failures of the SDL process. What’s more, those vulnerabilities can be silently fixed by the vendor before shipping / deploying the product being reviewed. [Obviously, not fixing them would be a really bad idea]

What intrigued me, though, was this line:

But, as Ryan [Naraine] said — issues found in public code that are fixed silently are a real issue.  While I have picked on Microsoft specifically for this practice the sad reality (that I quickly learned after publicly picking on MS) is that pretty much all vendors do this.

So, let’s see now… this is talking about a patch, hotfix, or service pack, that removes a security vulnerability from a product, but where the vulnerability (and its fix) does not get announced publicly.

There are two reasons not to announce a security vulnerability, in my view:

  1. You don’t want to.
  2. You can’t.

Let’s subdivide reason 1, “You don’t want to”:

    1. You feel it would adversely affect public opinion, stock price, user retention…
      Well, that’s kind of bogus, isn’t it? Given some of the vulnerability announcements that have appeared, what on earth could be worse than remote execution, elevation of privilege, and complete control over your system? The only way to make this accusation is to assert that the vendor randomly picks vulnerabilities to announce or not announce, to somehow reduce the overall numbers – and then manages to do so in such a way that no one else notices the vulnerability that was fixed.
      That’s not security, and any vendor who did that would soon find its security staff revolting against the practice. There isn’t such a glut of security workers that a vendor can assume it will hire more to replace the disgusted ones who quit.
    2. You’re tired of going through the process of documenting the bug, its workarounds and/or mitigations, and would rather be doing something else, like, oh, I don’t know, fixing more vulnerabilities.
      That’s not good security – create a more streamlined and automated process for creating the announcements, and do both – find and fix more vulnerabilities and make announcements for the ones you find. If you’re too busy to announce all the vulnerabilities in your product, you’re too busy to fix them all.
    3. You found the vulnerability internally, and would like to prevent it from being exploited, by releasing the patch silently – bundled with an announced fix – and hoping people install it.
      That’s not terribly reliable as a patching policy. It makes some small sense for related fixes, but then why wouldn’t you announce that as a related fix in the related announcement? Perhaps it makes sense for architectural fixes, where the only good fix is to go to the next level of service pack, but then wouldn’t you want to publicise workarounds for those who can’t apply the next service pack for one reason or another?
      But the biggest reason not to do this is that when you release a patch, people will reverse-engineer it, to figure out how to exploit the unpatched version – and they’ll find the change you didn’t mention as well as the one you did, and will exploit both of them. But your users will only be aware of one problem that needs patching, and may have decided that they can mitigate that without patching.
      So, pretty much bad security on that approach, too.

So, “You don’t want to” comes out as bad security, and it’s the sort of bad security that you would have to fix to employ – and continue to employ – a halfway decent security team.

What about “You can’t” – how could that come about?

    1. You have a legal judgement or contract requirement forbidding you from disclosing vulnerabilities. Hey, Microsoft has some of the best and most expensive lawyers on the planet, but even they get stuck with tough legal decisions that they have to abide by, and can’t do anything about. If a security vulnerability was considered to be a “threat to national security”, the current administration (and possibly many others) would be only too quick to deem it so secret that no-one could reveal its presence. And once you accept that possibility, it isn’t hard to think of all too many circumstances where a company might be forced to keep a vulnerability quiet.
    2. You know enough to fix the code, but not enough to classify the vulnerability or explain its workarounds or mitigations.
      Yeah, that’s pretty much the truth for all the announced vulnerabilities, too – how many times have you seen a vulnerability announcement that says “this cannot be exploited remotely”, followed by one a few days later with updated information that reveals that, oh yes it can. This doesn’t appear to be a good reason not to announce a vulnerability.
    3. You don’t know the vulnerability is there, or you don’t realise that you fixed a vulnerability.

Okay, that last one’s the topper, isn’t it? How can you announce a fix for a vulnerability that you don’t know about?

Clearly, you can’t.

Just as clearly, perhaps you’re thinking, you can’t fix a vulnerability that you don’t know about, right?

Wrong. You can very easily fix a vulnerability about which you know nothing. Here are a couple of hypothetical examples:

After we moved into our new house, we changed all the locks on the doors. Why? Because the new locks were prettier. In doing so, we fixed a vulnerability (the former owner could have kept the keys, and exploited us through the old locks) – but we didn’t intend to fix the vulnerability, we just wanted prettier locks.

Years ago, I needed a piece of functionality that wasn’t provided by the Win16 API, so I wrote my own routine to do file path parsing. A couple of years back, I dropped support for Windows 3.1, and in a recent code review, I spotted that the file path parsing routine was superfluous. So I removed it. In removing it, I didn’t spend a lot of time looking at the code – there was a vulnerability in there, but who does a code review of a function they’re removing? So now, I’ve fixed a vulnerability that I didn’t know existed.
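
The sort of latent bug I have in mind might look something like this sketch – the function name and the specific flaw are invented for illustration, not taken from my actual routine:

```c
#include <assert.h>
#include <string.h>

/* Purely hypothetical sketch of a home-grown, Win16-era path-parsing
 * routine of the kind described above (the name and details are
 * invented). It copies the final component of a DOS-style path into a
 * caller-supplied fixed-size buffer. The latent vulnerability: strcpy
 * performs no length check, so a path component longer than 63
 * characters overflows the caller's buffer. */
static void base_name(const char *path, char out[64])
{
    const char *sep = strrchr(path, '\\');    /* last backslash, if any   */
    const char *name = sep ? sep + 1 : path;  /* component after it       */
    strcpy(out, name);  /* BUG: no bounds check -- overflow waiting here  */
}
```

Deleting a routine like this because it has become superfluous removes the overflow along with it, whether or not anyone ever noticed the missing bounds check.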

Too many times, we assert evil intent for those actions that we disagree with. Ignorance is a far better explanation, along with incompetence, expediency, and just plain lack of choice. Note that ignorance is no bad thing – as in my hypothetical case, a genuine attempt to improve quality leads to a security improvement of which the developer was wholly ignorant.

Whether vendors don’t want to disclose all of their vulnerabilities when patching, or simply can’t, because they didn’t realise the scope of a fix, it’s important to stay current with patches wherever that would not interfere with your production applications. Because one day a patch will fix a flaw that your company will be attacked through. If you didn’t apply that patch, you will be owned.