I recently got around to converting an old MFC project from WinHelp format to HTML Help. Mostly this was to satisfy customers who are using Windows Vista or Windows Server 2008, but who don’t want to install WinHlp32 from Microsoft. (If you do want to install WinHlp32, you can find it for Windows Vista or Windows Server 2008 at Microsoft’s download site.)
Here’s a quick run-through of how I did it:
1. Convert the help file – yeah, this is the hard part, but there are plenty of tools, including Microsoft’s HTML Help Editor, that will do the job for you. Editing the help file in HTML format can be a little bit of a challenge, too, but many times your favourite HTML editor can be made to do the job for you.
2. Call EnableHtmlHelp() from the CWinApp-derived class’s constructor.
3. Remove the line ON_COMMAND(ID_HELP_USING, CWinApp::OnHelpUsing), if you have it – there is no HELP_HELPONHELP equivalent in HTML Help.
4. Add the following function:
void CWftpdApp::HelpKeyWord(LPCSTR sKeyword)
{
    HH_AKLINK akLink = {0};                  // zero the members we don't set
    akLink.cbStruct = sizeof(HH_AKLINK);
    akLink.pszKeywords = sKeyword;
    CString sMsg = CString("Failed to find information in the help file on ") + sKeyword;
    akLink.pszMsgText = sMsg;                // the CString must outlive the call
    akLink.pszMsgTitle = "HTML Help Error";
    HtmlHelp((DWORD_PTR)&akLink, HH_KEYWORD_LOOKUP);
}
5. Change your keyword help calls to call this new function:
((CWftpdApp *)AfxGetApp())->HelpKeyWord("Registering");
6. If you want to trace calls to the WinHelp function to watch what contexts are being created, trap WinHelpInternal:
void CWftpdApp::WinHelpInternal(DWORD_PTR dwData, UINT nCmd)
{
    TRACE("Executing WinHelp with Cmd=%u, dwData=%Iu (%Ix)\r\n", nCmd, dwData, dwData);
    CWinApp::WinHelpInternal(dwData, nCmd);  // then carry on with the normal behaviour
}
This trace comes in really, really (and I mean REALLY) handy when you are trying to debug “Failed to load help” errors. It will tell you what numeric ID is being used, and you can compare that to your ALIAS file.
7. If your code gives a dialog box that reads:
HTML Help Author Message
HH_HELP_CONTEXT called without a [MAP] section.
What it means is that the HTML Help API could not find the [MAP] or the [ALIAS] section – the message will still appear if you have a [MAP] section but no [ALIAS] section.
8. Don’t edit the ALIAS or MAP sections of your help file in HTML Help Editor – Microsoft has a long-standing bug here that makes it crash (losing much of your unsaved work, of course) unpredictably when editing these sections. Edit the HHP file by hand to work on these sections.
9. Most of your MAP section entries are automatically generated by the build, as .HM files, which hold the mappings for the controls in each dialog. Simply include the right HM file in the [MAP] section, and all you will need to do is create the right ALIAS mappings.
10. The MFC calls to HtmlHelp discard error returns from the function, so there’s really no good troubleshooting information to go on when debugging access to help file entries.
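For reference, the hand-edited sections in the .hhp file end up looking something like this – the header file name, topic IDs and topic file names below are made up for illustration, so substitute your own:

```
[MAP]
#include HTMLDefines.h

[ALIAS]
HIDD_REGISTER=Registering.htm
HIDD_SERVER_OPTIONS=ServerOptions.htm
```

The [MAP] section pulls in the numeric IDs (from the generated .HM or header files), and the [ALIAS] section maps each ID to the HTML topic that should be displayed for it.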
Let me know if any of these helpful hints prove to be of use to you, or if you need any further clarification.
First rule of demonstrative writing – lead off with an undeniable example of the point you’re trying to make.
“Last year I was meeting with the CEO of a PC company who offered to give me a demo of his company’s gorgeous new top-of-the-line notebook, a machine that cost several thousand dollars and came loaded with Windows Vista, the latest version of Microsoft’s operating system. He flipped open the laptop, pressed the power button, and … nothing. We waited. And waited. It was excruciating. He tried control-alt-delete. He tried holding down the power button. Finally he removed the battery and snapped it back into place. The machine started up – slowly – while the CEO sat there fuming.”
Um, yeah, OK, that sounds bad and all, but seriously, if you’re pressing the power button on a turned-off machine and nothing’s happening, that’s hardware. And if you blame hardware faults on the operating system, well, that’s just a CEO trying to ignore the fact that the fault lies with his own hardware and the developers who built it.
So, let’s carry on reading. What else is a problem with Vista?
“It was sluggish. It had trouble going to sleep and waking up. It wouldn’t work with some printers and accessories.”
I didn’t see “sluggish”, but then again, I bought a higher spec machine than my three-year-old laptop in order to run Vista, because it’s a significant update to the OS. Many of its major features expect there to be lots of memory and a fast 3D video card.
The “trouble going to sleep and waking up” part I definitely had some experience with – but then, I have those problems in XP, too: over 1MB in my machine, and XP decided it was going to turn my laptop bag into a pizza oven – to judge from the popularity of my blog post on the issue, I’m far from alone in this. Laptop manufacturers really haven’t had the best of luck in XP or Vista persuading individual devices – let alone the whole system – that it’s nighty-night time, or that it’s time to wake up when you punch the “wake-up” key. Recent updates from Lenovo made my life a little easier, but the machine will still sometimes go to sleep never to wake up again. Really irritating when I’m in the middle of working as the bus arrives at its destination and I have to press the sleep button, praying that the machine will make it through the nap. And I can guarantee to hang the system if I press the sleep button and then close the lid.
And, as for printers and accessories, it’s clear that any number of device drivers weren’t actually used for any significant length of time in the Vista environment, or they’d have shown their incompatible designs. My HP printer, for instance, pops up this ugly dialog whenever I print from Internet Explorer:
Now, I don’t know much about drivers, but I suspect that this could be fixed by signing the driver. My other HP printer continually offers up a new version of its drivers on Windows Update, and then the installation refuses to start, because the printer isn’t plugged in to my machine. Well, of course not, it’s a network printer.
As has been pointed out by numerous other writers, XP had this same sort of flak when it released (although I don’t remember it going on for quite this long), and then as now, most of the problems were to do with software and hardware developers who weren’t paying even limited attention to the statements Microsoft put out as to features that were deprecated (i.e. made obsolete, going away, or otherwise disappearing).
Of course, my wife hates Vista, and at some point I’ll be able to point you to her ideas on the topic, because she has some actually valid arguments as to why Vista sucks. And none of those arguments are represented in Dan Lyons’ Newsweek article.
Thanks to the excellent http://www.woot.com, I upgraded to a new MP3 player – this one, the Sansa e250 from SanDisk, has a little screen and shows video at an almost completely unacceptably small resolution. But I don’t mind that, I didn’t really buy it for the video. I don’t mind the big fat “REFURB” label stuck on the back, nor do I really mind all that much that it’s already lost a screw from the back.
What I do mind is that the developers of the software accompanying this player haven’t figured out that I might want to use it as a consumer device, rather than an Information Technology Administration Tool. Quite honestly, I can’t see how a media player – even if you count its ability to do video the size of my thumb – can be used to administer my system, but clearly that’s the intent of the designers, because the software all insists on running as administrator.
The software at fault is at least the following:
It almost makes me want to wipe the firmware in the device and replace it with the Open Source software “Rockbox”. Maybe then I can use ordinary tools to move my media onto the device, as an ordinary user.
We developers clearly have a loooong way to go before we grasp this concept that “administrator means the guy who makes changes to the configuration of the operating system”, and “standard user means the guy who spends his life actually using the operating system”.
I would love to be able to sort this out with technical support, but they insist on not talking to me in email, but requiring me to log on to a third party “eBox” from “customernation.com” – which sends out exhortations to visit your eBox as soon as Sansa’s support has put a message in it. These invites come with your user name and password – over unencrypted email. Nice.
I’d tell you what’s in my eBox, and what Sansa’s support said, but I haven’t been able to keep a connection up long enough for the painfully slow customernation.com web site to actually display anything. This is not a pleasant customer experience.
Okay, so the talk’s official title was “Dan Kaminsky’s DNS Discovery: The Massive, Multi-Vendor Issue and the Massive, Multi-Vendor Fix”.
Arcane details of TCP are something of a hobby of mine, so I attended the webcast to see what Dan had to say.
A little history first – six months ago, Dan Kaminsky found something so horrifying in the bowels of DNS that he actually kept quiet about it. He contacted DNS vendors – OS manufacturers, router developers, BIND authors, and the like – and brought them all together in a soundproofed room on the Microsoft campus to tell them all about what he’d discovered.
Everyone was sworn to secrecy, and consensus was reached that the best way to fix the problem would be to give vendors six months to release a coordinated set of patches, and then Dan Kaminsky would tell us all at BlackHat what he’d found.
Until then, he asked the security community, don’t guess in public, and don’t release the information if you know it.
Fast forward a few months, and we have a patch. I don’t think the patch was reverse-engineered, but there was enough public guessing going on that someone accidentally slipped and leaked the information – now the whole world knows.
Kaminsky confirmed this in today’s webcast, detailing how the attack works, to forge the address of www.example.com:
Note that this is a simple description of the new behavior that Kaminsky found – step 3 allows the DNS server’s cache to be poisoned with a mapping of www.example.com to an attacker-chosen address, even if that name was already cached from a previously successful lookup.
If that was all that Kaminsky could do, even on an unpatched server, he’d have a 1 in 65536 chance of guessing the transaction ID to make his forgery succeed. However, old known behaviours simply make it easier for the attacker to make the forgery work:
Kaminsky’s tests indicate that a DNS server’s cache can be poisoned in this way in under ten seconds. There are metasploit plugins that ‘demonstrate’ this (or, as with all things metasploit, can be used to exploit systems).
The patch, by randomizing the source port of the DNS resolver, raises the difficulty of this attack by a few orders of magnitude.
The long-term fix, Kaminsky said, is to push for the implementation of DNSSEC, a cryptographically-signed DNS system, wherein you refuse to pass on or accept information that isn’t signed by the authoritative host.
One novel wrinkle that Kaminsky hadn’t anticipated is that even after application of the patch to DNS servers, some NATs apparently remove the randomness in the source port that was added to make the attack harder. To quote Kaminsky “whoops, sorry Linksys” (although Cisco was one of the companies he notified of the DNS flaw, and they now own Linksys). Such de-randomising NATs essentially remove the usefulness of the patch.
Patching is not completely without its flaws, however – Kaminsky didn’t mention some of the issues that have been occurring because of these patches:
Metrics and statistics:
The overall message of the webcast is this:
This attack is real, and traditional defences of using a high TTL will not protect you. Patching is the way to go. If you can’t patch, configure those unpatched DNS servers to forward to a local new (patched) DNS server, or an external patched server like OpenDNS. Scan your site for unexpected DNS servers.
Picture the scene at Security Blogs R Us:
“We’re so freakin’ clever, we’ve figured out Dan Kaminsky’s DNS vulnerability”
“Yeah, but what if someone else figures it out – won’t we look stupid if we post second to them?”
“You’re right – but we gave Dan our word we wouldn’t publish.”
“So we won’t publish, but we’ll have a blog article ready to go if someone else spills the beans, so that we can prove that we knew all about it anyway.”
“Yeah, but we’d better be careful not to publish it accidentally.”
>>WHOOP, WHOOP, WHOOP<<
“What was that?”
“The blog alert – someone else is beating us to the punch as we speak.”
“Publish or perish! Damn the torpedoes – false beard ahead!”
“What? Are you downloading those dodgy foreign-dubbed pirated anime series off BitTorrent through the company network again?”
“Yes – I found a way around your filters.”
It’s true (okay, except for all of the made-up dialog above), a blog at one of the security vulnerability research crews (ahem, Matasano) did the unthinkable and rushed a blog entry out on the basis that they thought someone else (ahem, Halvar Flake) was beating them to it. And now we all know. The genie is out of the bag, the cat has been spilled, and the beans are out of the bottle.
Now we all know how to spoof DNS.
Okay, so Matasano pulled the blog pretty quickly, but by then it had already been copied to server upon server, and some of those copies are held by people who don’t want to take the information off the Internet.
Clearly, Information Wants To Be Free.
There’s an expression I never quite got the hang of – “Information Wants To Be Free”, cry the free software guys (who believe that software is information, rather than expression, which is a different argument entirely) – and the sole argument they have for this is that once information is freed, it’s impossible to unfree it. A secret once told is no longer a secret.
There’s an allusion to the way in which liquid ‘wants to be at its lowest level’ (unless it’s liquid helium, which tends to climb up the sides of the beaker when you’re not looking), in that if you can’t easily put something back to where it used to be, then where it used to be is not where it wants to be.
So, information wants to be free, and Richard Stallman’s bicycle tyre wants to have a puncture.
But back to the DNS issue.
I can immediately think of only one extra piece of advice I’d have given to the teams patching this on top of what I said in my previous blog, and that’s something that, in testing, I find the Windows Server 2003 DNS server was doing anyway.
So, that’s alright then.
Well, not entirely – I do have some minor misgivings that I hope I’ve raised to the right people.
But in answer to something that was asked on the newsgroups, no I don’t think you should hold off patching – the patch has some manual elements to it, in that you have to make sure the DNS server doesn’t impinge on your existing UDP services (and most of you won’t have that many), but patching is really a whole lot better than the situation you could find yourself in if you don’t patch.
And Dan, if you’re reading this – hi – great job in getting the big players to all work together, and quite frankly, the secrecy lasted longer than I expected it to. Good job, and thanks for trying to let us all get ourselves patched before your moment of glory at BlackHat.
After applying the patch for MS08-037 – KB 953230 (the multi-OS DNS flaw found by Dan Kaminsky), you may notice your Windows Server 2003 machine gets a little greedy. At least, mine sucks up 2500 – yes, that’s two thousand five hundred – UDP sockets sitting there apparently waiting for incoming packets.
This is, apparently, one of those behaviours sure to be listed in the knowledge base as “this behavior is by design” – a description that graces some of the more entertaining elements of the Microsoft KB.
Why does this happen? I can only guess. But here’s my best guess.
The fix to DNS, implemented across multiple platforms, was to decrease the chance of an attacker faking a DNS response, by increasing the randomness in the DNS requests that has to be copied back in a response.
I don’t know how this was implemented on other platforms, but I do know that it’s already been reported that BIND’s implementation is slower than it used to be (hardly a surprise, making random numbers is always slower than simply counting up) – and maybe that’s what Microsoft tried to forestall in the way that they create the random sockets.
Instead of creating a socket and binding it to a random source port at the time of the request, Microsoft’s patched DNS creates 2500 sockets, each bound to a random source port, at the time that the DNS service is started up. This way, perhaps they’re avoiding the performance hit that BIND has been criticised for.
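If my guess is right, the start-up logic amounts to something like the following – sketched here with BSD-style sockets purely for illustration, since I can’t see Microsoft’s code. The count of 2500 is from my own netstat observation; the ephemeral port range is an assumption, and error handling is trimmed:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdlib>
#include <vector>

// Pre-bind a pool of UDP sockets to random ports at service start-up,
// so that no per-query bind (and its performance cost) is needed later.
std::vector<int> prebind_pool(size_t count) {
    std::vector<int> pool;
    while (pool.size() < count) {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) break;                       // out of descriptors - give up
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(49152 + std::rand() % 16384); // ephemeral range
        if (bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0)
            pool.push_back(s);                  // this port is now ours, for good
        else
            close(s);                           // port already taken - try another
    }
    return pool;
}
```

The up-front cost is paid once, at service start, and each query can then grab an already-bound socket from the pool – but every one of those ports stays occupied for as long as the service runs.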
There are, of course, other services that also use a UDP port. ActiveSync’s connection to Exchange, IPsec, IAS, etc, etc. Are they affected?
Randomly, and without warning or predictability. Because hey, the DNS server is picking ports randomly and unpredictably.
[Workaround: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\ReservedPorts is a registry setting that lists multiple port ranges that will not be used when binding an ephemeral socket. The DNS server will obey these reservations, and not bind a socket to ports specified in this list. More explanation in the blog linked above, or at http://support.microsoft.com/kb/812873]
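For the record, ReservedPorts is a MULTI_SZ list of inclusive “low-high” ranges, with a single port written as, e.g., “4500-4500”. The check a port-picking service has to perform against such a list is simple enough to sketch – this is my own illustration, not Microsoft’s code:

```cpp
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Parse entries of the form "low-high" (inclusive), one string per entry,
// mirroring the ReservedPorts MULTI_SZ format.
std::vector<std::pair<int, int>> parse_reserved(const std::vector<std::string>& entries) {
    std::vector<std::pair<int, int>> ranges;
    for (const std::string& e : entries) {
        std::istringstream in(e);
        int lo = 0, hi = 0;
        char dash = 0;
        if (in >> lo >> dash >> hi && dash == '-')
            ranges.emplace_back(lo, hi);        // malformed entries are skipped
    }
    return ranges;
}

// True if 'port' falls inside any reserved range, so the caller must not bind it.
bool is_reserved(int port, const std::vector<std::pair<int, int>>& ranges) {
    for (const auto& r : ranges)
        if (port >= r.first && port <= r.second) return true;
    return false;
}
```

So if ActiveSync or IPsec needs a particular UDP port, reserving it this way keeps the DNS server’s random pre-binding away from it.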
DNS, you see, is a fundamental underpinning of TCP/IP services, and as such needs to start up before most other TCP/IP based services. So if it picks the port you want, it gets first pick, and it holds onto that port, preventing your application from binding to it.
This just doesn’t seem like a fix written by someone who ‘gets’ TCP/IP. Perhaps I’m missing something that explains why the DNS server in Windows Server 2003 works this way, but I would be inclined to take the performance hit of binding and rebinding in order to find an unused random port number, rather than binding before everyone else in an attempt to pre-empt other applications’ need for a port.
There are a couple of reasons I say this:
I’d love to know if I’m missing something here, but I really hope that Microsoft produces a new version of the DNS patch soon, that doesn’t fill your netstat -a output with so many bound and idle sockets, each of which takes up a small piece of nonpaged pool memory (that means real memory, not virtual memory).
I have a little time over the next couple of weeks to devote to developing WFTPD a little further.
This is a good thing, as it’s way past time that I brought it into Vista’s world.
I’ve been very proud that over the last several years, I have never had to re-write my code in order to make it work on a new version of Windows. Unlike other developers, when a new version of Windows comes along, I can run my software on that new version without changes, and get the same functionality.
The same is not true of developers who like to use undocumented features, because those are generally the features that die in new releases and service packs. After all, since they’re undocumented, nobody should be using them, right? No, seriously, you shouldn’t be using those undocumented features.
But that’s not enough. With each new version of Windows, there are better ways of doing things and new features to exploit. With Windows Vista and Windows Server 2008, there are also a few deprecated older behaviours that I can see are holding WFTPD and WFTPD Pro down.
I’m creating a plan to “Vistafy” these programs, so that they’ll continue to be relevant and current.
Here’s my list of significant changes to make over the next couple of weeks:
As I work on each of these items, I’ll be sure to document any interesting behaviours I find along the way. My first article will be on converting your WinHelp-using MFC project to using HTML Help, with minimal changes to your code, and in such a way that you can back-pedal if you have to.
Of course, I also have a couple of side projects – because I’ve been downloading a lot from BBC 7, I’ve been writing a program to store the program titles and descriptions with the MP3 files, so that they show up properly on the MP3 player. ID3Edit – an inspired name – allows me to add descriptions to these files.
Another side-project of mine is an EFS tool. I may use some time to work on that.
Totally unscientifically, I have carried out a poll of people who like UAC (okay, a few security geeks like myself), and those who hate UAC – mostly my wife.
Something struck me as both a surprising common factor, and also a rather obvious explanation of why the two opinions are so polarised.
[Note for the pedants – yes, I’m using the term “UAC” here to mean “Elevation” – there are other portions of UAC that I’m not discussing, such as Protected Mode in Internet Explorer, and so on.]
The UAC-lover seems to have ‘got least-privilege religion’ at least several years ago, and runs most of the time as a standard, restricted user. Most UAC-lovers do not seem to be “Administering the system all the time” types.
As a result, they use UAC as a means to elevate privilege on those occasions when they need to do something administrative, or when they need to run a program that has not yet been coded to run with least privilege.
When they’re doing something administrative, they’re comparing the UAC “Over-the-shoulder” (OTS) prompt against the methods that used to be available to them:
Given these as alternatives, it’s no wonder that UAC and OTS elevation prompts are considered better.
The UAC-hater is fundamentally uninterested in least-privilege, at least as it applies to users. Least-privilege is an obvious and good programming strategy – a program shouldn’t ask for more privileges than it needs – but to this user, that’s something that the programmers should care about.
This user wants to be instantly, and automatically, elevated whenever she calls on a feature that would require it. This is how she’s used to running the computer, because she’s always called on to do administrative tasks – and she’s careful and knowledgeable enough to have avoided causing damage through doing so.
To this user, UAC is an impediment to that process – now, instead of merely running the administrative tool she wants, she has to ask to be allowed to run it as administrator.
With UAC set to automatically elevate for administrators, however, she’s far happier. Still not perfectly happy, because there are still occasions when she has to ask specifically to run elevated – when the program is capable of running as non-administrator, for instance. Such programs run as non-administrator by default, and don’t elevate themselves. These programs are irritating to such a user.
Typically, such programs appear to break when run with UAC disabled (or set to automatically elevate) – they fail to run, sometimes with bizarre error messages, often just crashing through failure to execute some action that the developers expected would succeed.
Other causes of breakage could be when an application is registered to a user, and the licence information is written to a file in the Program Files folder – when you’re running under UAC’s protection, files in the Program Files folder may be virtualised (i.e. the program thinks it’s accessing the file in the Program Files folder, but it’s really accessing a file in the user’s home directory tree), and when you’re running elevated, those same file accesses are not virtualised.
So, voila, instant loss of licence information, saved settings, or any number of other files that the program expected to find in Program Files.
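To illustrate the mismatch, here’s a rough sketch of the redirection. The paths are simplified – the real virtualisation rules cover more locations, plus parts of HKLM in the registry – and the user name is made up:

```cpp
#include <string>

// Illustrative only: roughly where UAC file virtualisation redirects a
// write aimed at a protected location. A virtualised (non-elevated) process
// gets the per-user VirtualStore copy; an elevated one gets the real path.
std::string virtualised_path(const std::string& local_app_data,
                             const std::string& requested) {
    const std::string prefix = "C:\\Program Files\\";
    if (requested.compare(0, prefix.size(), prefix) == 0)
        return local_app_data + "\\VirtualStore\\Program Files\\" +
               requested.substr(prefix.size());
    return requested;   // not a protected path: no redirection
}
```

An elevated process reads and writes the real Program Files path, while a virtualised one is silently redirected to its VirtualStore copy – so each sees its own version of the licence file, and never the other’s.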
So, the message is clear – for installations with administrators who like the system to let them be administrators, don’t disable UAC, make UAC elevate silently for administrators instead.
This system works, too, for the restricted users. It allows them to operate as restricted users, except when they absolutely know they need to elevate. Over-the-shoulder elevation prompting is still available for them, should they need it.
What this option doesn’t do is cover what appears to be Microsoft’s reason for creating the elevation prompts in the first place. Without UAC prompting at random points, the administrators in control of a system have no clear sign that they’ve just fired up “Mary Kate and Ashley’s Dance Party of the Century” only to be forced to run it as an administrator.
Even supposing you figure out that there’s a program you’re using which doesn’t adequately run in restricted user mode, or which doesn’t elevate itself where necessary, where can you go to get assistance from the developers of the application?
Microsoft’s own support is an example of how off-putting such a process can be. Microsoft Money refused to update on one of our systems, and I eventually determined it was because the update needed to be elevated, but was expecting to find some files that were virtualised by UAC. It failed with a meaningless error message. To call support costs $25 for Microsoft to even pick up the phone – and if the support tech believes that this is an “advanced” issue, he may charge about ten times that much. Perhaps later, after they realise the problem is their own fault, Microsoft will refund you the money – but many small businesses and individual users don’t have that sort of money to loan to Microsoft, or other vendors.
So, is there any good way to persuade developers to quit their bone-headed “start with most privilege” behaviour? Maybe Visual Studio and compilation tools should refuse to run in an administrator session. Okay, so perhaps that’s not tenable, because there are development projects that do require you to be an administrator, because you’re developing something administrative – but what measure would make developers do the right thing for security (and for their users) naturally?
File and registry virtualisation appears to be a messy kludge on top of the sledge-hammer of UAC elevation, whose primary design goal appears to be to irritate end-users enough to persuade developers to stop doing the kind of things that require virtualisation as a workaround, and the kind of things that require administrator accounts in the first place.
Perhaps it’s time that, instead of kludging for these bad developers, Microsoft simply said “It stops. Now.” – if it’s not registered (at install time, or by manifest) as an administration tool, it doesn’t get administrative access – or virtualised access to HKLM or Program Files. Yes, that will mean admins will have two links to regedit, and similar tools – one to run in an administrator’s session, giving access to HKLM, another to run in their user’s session, giving access to HKCU.
I heard a complaint the other day about UAC – User Account Control – that was new to me.
Let’s face it, as a Security MVP, I hear a lot of complaints about UAC – not least from my wife, who isn’t happy with the idea that she can be logged on as an administrator, but she isn’t really an administrator until she specifically asks to be an administrator, and then specifically approves her request to become an administrator.
My wife is the kind of user that UAC was not written for. She’s a capable administrator (our home domain has redundant DCs, DHCP servers with non-overlapping scopes, and I could go on and on), and she doesn’t make the sort of mistakes that UAC is supposed to protect users from.
My wife also does not appreciate the sense that Microsoft is using the users as a fulcrum for providing leverage to change developers to writing code for non-admin users. She doesn’t believe that the vendors will change as a result of this, and the only effect will be that users get annoyed.
But not me.
I like UAC – I think it’s great that developers are finally being forced to think about how their software should work in the world of least privilege.
So, as you can imagine, I thought I’d heard just about every last complaint there is about UAC. But then a new one arrived in my inbox from a friend I’ll call Chris.
I must admit, the question stunned me.
Obviously, what Chris is talking about is the idea that you are strongly “encouraged” (or “strong-armed”, if you prefer) by UAC to work in (at least) two different security contexts – the first, your regular user context, and the second, your administrator context.
Chris has a point – you’re one person, you shouldn’t have to pretend to be two. And it’s your computer, it should do what you tell it to. Those two are axiomatic, and I’m not about to argue with them – but it sounds like I should do, if I’m going to answer his question while still loving UAC.
No, I’m going to argue with his basic premise that user accounts correspond to individual people. They correspond more accurately – particularly in UAC – to clothing.
Windows before NT, or more accurately, not based on the NT line, had no separation between user contexts / accounts. Even the logon was a joke – it prompted for user name and password, but if you hit Escape instead, you’d be logged on anyway. Windows 9x and ME, then, were the equivalent of being naked.
In Windows NT, and the versions derived from it, user contexts are separated from one another by a software wall, a “Security Boundary”. There were a couple of different levels of user access, the most common distinctions being between a Standard (or “Restricted”) User, a Power User, and an Administrator.
Most people want to be the Administrator. That’s the account with all the power, after all. And if they don’t want to be the Administrator, they’d like to be at least an administrator. There’s not really much difference between the two, but there’s a lot of difference between them and a Standard User.
Standard Users can’t set the clock back, they can’t clear logs out, they can’t do any number of things that might erase their tracks. Standard Users can’t install software for everyone on the system, they can’t update the operating system or its global settings, and they can’t run the Thomas the Tank Engine Print Studio. [One of those is a problem that needs fixing.]
So, really, a Standard User is much like the driver of a car, and an administrator is rather like the mechanic. I’ve often appealed to a different meme, and suggested that the administrator privilege should be called “janitor”, so as to make it less appealing – it really is all about being given the keys to the boiler room and the trash compactor.
You wear dungarees when working on the engine of your car, partly because you don’t want oil drops on your white shirt, but also partly so your tie doesn’t get wrapped around the spinning transmission and throttle you. You don’t wear the dungarees to work partly because you’d lose respect for the way you look, but also because you don’t want to spread that oil and grease around the office.
It’s not about pretending to be different people, it’s about wearing clothes suited to the task. An administrator account gives you carte blanche to mess with the system, and should only be used when you’re messing with the system (and under the assumption that you know what you’re doing!); a Standard User account prevents you from doing a lot of things, but the things you’re prevented from doing are basically those things that most users don’t actually have any need to do.
You’re not pretending to be a different person, you’re pretending to be a system administrator, rather than a user. Just like when I pretend to be a mechanic or a gardener, I put on my scungy jeans and stained and torn shirts, and when I pretend to be an employee, I dress a little smarter than that.
When you’re acting as a user, you should have user privileges, and when you’re acting as an administrator, you should have administrative privileges. We’ve gotten so used to wearing our dungarees to the board-room that we think they’re a business suit.
So while UAC prompts to provide a user account aren’t right for my wife (she’s in ‘dungarees-mode’ when it comes to computers), for most users, they’re a way to remind you that you’re about to enter the janitor’s secret domain.
I’ve been going back and forth trying to get CS-RCS Pro, a version control suite, to work on Windows Vista.
I like CS-RCS Pro for a number of reasons:
But that last point is the cause of a big problem.
Here’s the sequence I have to deal with:
I had originally intended to follow the appropriate installation practice for an enterprise application – that it should be installed by a recognised administrator, and that any post-install setup to customise for the end-user would then be carried out by that user themselves.
This didn’t work, as CS-RCS Pro configured the version control tree to be used by the administrative user, making it impossible for my restricted user to access the files.
I tried simply editing the ownership and ACLs – that didn’t work – and then additionally editing the configuration files where they mentioned the name of my administrative user. That worked for a short while, but I noticed that every time I used MSTSC – Remote Desktop Connection, also known as the Terminal Services Client – to access the system, the shell extension that CS-RCS Pro installs took up 100% CPU, and required that I restart Explorer. There are still a few applications that don’t work well when you kill Explorer from underneath them, so this was somewhat of an untenable position.
Besides, this was an awful lot of effort to go through in order to get version control going.
Finally, it hit me how I should do this properly. It’s not clean and it’s not clever, and ComponentSoftware, the folks behind CS-RCS Pro, should consider how to change their installer to avoid this issue.
The simple five-step process is as follows – let’s say Wayne, an administrator, wants to install the software for Sharon, a restricted user:
(*) Note that asterisk – that’s the troubling part. Actually, step 1 is troubling too, but only because Sharon may have other processes trying to log in with elevated rights, should they ever be granted.
Step 2 requires that Wayne allow his user, restricted though she is meant to be, to log on as an administrator – what if she quickly runs some tool that he doesn’t want her to run?
Okay, so you drag her away from the console immediately after she types her password – but what if she’s got startup items that add an administrative user on her behalf, or that simply stay in memory (as a service, say) and run with those elevated privileges, to allow an exploit later?
Alright, so what’s the safest way? The only good way I can think of is this:
Some of you are probably reading this and wondering why I bother – after all, in many environments, developers insist on running as administrator all the time, because their development tools don’t support anything else.
Well, it’s time your developers – and their tools – grew up. Yes, I can quote, just as any other developer can, a number of cases where administrative access is required – although many developers actually get this wrong. You can run Visual Studio 2005 as a non-administrator. You can debug your own code running in your own logon session as a non-administrator.
Developers are very often the only people to run some sections of the code that they build, until it reaches the hands of the users. As such, developers need to spend as much time as possible, when they run their code, working in the same kind of user context as their users will have.
In general, developers should follow the same principle as other administrators – their day-to-day tasks (e-mail, web browsing, and yes, development) should be done in restricted user accounts; administrative user accounts should be available, but their use should be restricted to those operations which absolutely require administrative access, and those operations should be reviewed often enough to ensure that they need administrative access. Tools and environments grow and change, and a tool which yesterday required administrative access may run tomorrow without. LogonUser, for instance, used to require complete system access – today it can be called by any user.
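One way to keep yourself – and your tools – honest about this is to check at startup whether you’re actually running with administrative rights, and complain if you are. Here’s a minimal cross-platform sketch of such a check; it’s my own illustration, not part of any CS-RCS or Windows tooling, and it uses only the standard library:

```python
import ctypes
import os
import sys

def running_as_admin() -> bool:
    """Best-effort check: are we running elevated (Windows) or as root (Unix)?"""
    if sys.platform == "win32":
        try:
            # shell32!IsUserAnAdmin is deprecated but still exported,
            # and good enough for a startup warning.
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except (AttributeError, OSError):
            return False
    # On Unix-like systems, an effective UID of 0 means root.
    return os.geteuid() == 0

if __name__ == "__main__":
    if running_as_admin():
        print("Warning: running with administrative rights - your users probably won't be.")
    else:
        print("Running as a standard user - the same context your users will have.")
```

A developer who drops a check like this at the top of a debug build gets an immediate nudge whenever a day-to-day task has quietly crept back into “dungarees mode”.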