Think of it as the “janitor” account.

While I was at Microsoft, every so often the question would arise “how can we do more to prevent users from running all the time as administrator?”

There’s something sexy and powerful about being “administrator”.  Suggest taking administrator access away from someone who has it now – say, a developer, or a small business’ financial officer (thanks, Quickbooks!), or a home user (thanks, Turbotax! – by the people who brought you Quickbooks) – and you’ll get thrown the look of an alcoholic who’s just realised that you’ve figured out where he’s stashed his hooch.

Okay, so undeniably, there is power in that account – and that’s the main reason why you should spend as little time with that power as possible.  “Power corrupts”, remember, and in this case, the thing most likely to get corrupted, by that power being constantly “on”, is the important data you use to run your business.

In Vista and Longhorn, this has been significantly addressed by use of UAP / UAC / LUA or whatever it’s called this morning.

For some reason, nobody ever took up my suggestion, which was brought on by the observation that my kid thinks the guy with power at his school is the janitor.  He has the keys to every classroom, he knows where the secret tunnels are, and how to open up the locked cabinets with the electricity in them.  To those of us beyond secondary education (high school), the janitor is somewhat less cool – without him, the school couldn’t function, but we wouldn’t like to do his job unless it was absolutely necessary that we do so.

So, I think that we should rename “administrator” to “janitor”, at least in our minds, if not in our systems.

This highlights that administrator access should only be used when you need to work on the ‘plumbing’ of the system.  It’s not really the power-house, and the secret areas to which it has the keys are only the boiler-rooms and fuse-boxes of your system.

Where’s the harm in being administrator all the time?  It’s like leaving all those locked cabinets open, for any old virus to abuse as it pretends to be you; it’s like spending time in the boiler room, where you could drop your bottle of cheap whisky and set off a fire that burns down the whole school.

Okay, enough with the analogy, here are some real reasons why.  If you run as administrator, a virus or trojan that you run (and you will run one, one day) will be allowed to destroy not just your immediate files, but the entire system on which you depend, or worse, install extra components that can be used to attack others, or to filch your private information.  If you run as administrator, you will accidentally type a command that deletes an important system setting or another user’s important files.

Do I run as administrator?  No.  In my job I run as a Restricted User.  Not even “Power User” (another bad term that equates to “administrator”).  I spend my day as a Security Engineer, and Developer, in Restricted User mode, because I don’t trust that I can detect every virus or trojan, or that I can control my actions sufficiently well not to do something disastrous.  At times, it sucks, because there are programs I can’t run (but there are usually alternatives), and features I can’t access (but I can often open them up with appropriate tools and settings).  I still can’t debug as easily in Visual Studio .NET 2003 (but the 2005 version fixes this).

There will always be “Elevation of Privilege” attacks, sure, but the answer is not to give up on separation of privilege completely.  It’s tricky to write code to use least privilege, because you constantly have to think “what access do I have to this object, and what access do I need?”  Again, that’s no excuse for doing the wrong thing.  Any time you see a company whose software insists on unnecessarily running as administrator, think to yourself “I’m running a tool that is written by people who haven’t learned anything new since at least 1995”.

Peer-to-peer server dies; expect more spyware.

“Raids close file-sharing server” says the BBC headline, on a story covering the closure of a major site in the eDonkey peer-to-peer “file sharing” network.  Okay, so we know that “file sharing” is generally a euphemism for “we want to watch movies or listen to music, but we don’t want to pay anyone for the privilege, but we’ll find ways to claim that it isn’t really theft”, and so obviously this is a “good thing” from the view of content providers.

[I’m a content provider – I develop software, I write documentation, even this blog is copyrighted text, by virtue of the fact that I wrote it.]

However, from the point of view of system administrators, I predict this may lead to an increase in the spyware load on your systems.

Seems bizarre, right, that I am suggesting that spyware goes up when a p2p site goes down?  You’d think that would interrupt the flow of spyware through infected files.  Here’s my reasoning:

The average user of eDonkey has been using it for some time, and has got to the point where he/she subliminally knows what is safe content, and he/she has a version of eDonkey that might not be current and up-to-date, but is ‘good enough’.

That’s a stable system – you’ve already managed any spyware that may have come with the distribution of eDonkey, and the user has essentially educated themselves to not introduce more into the system.

Now, the system is made unstable – the server that was being used for the p2p sharing is no longer accessible, and the user panics trying to find another server.  Maybe they can’t find one, or maybe the server they find won’t accept their old version of eDonkey.  The user may go and download a new p2p program, with new attached spyware, and new servers to download from.  In addition to what comes with whatever new p2p program they download, they’ll also find that the users of this new p2p program and new server behave in different ways – requiring that the user re-learn how to intuit spyware’s presence in the files they are downloading.

This isn’t an argument for leaving p2p file-servers up, it’s an argument that you need to expect a spike in spyware, plan for it, and protect yourselves.

Coincidentally, Microsoft released Beta 2 of their anti-spyware product, “Windows Defender”, just a few days ago.  Unlike with anti-virus programs, you generally need more than one anti-spyware product on your system, so I’ll also recommend Lavasoft’s Ad-Aware and Spybot Search & Destroy (be careful of programs with “Spy” in the name – many of them are spyware masquerading as spyware-removal – Spybot Search & Destroy is not such a rogue program, though).

Making more sense of service SDDL

Thanks to Dana Epp’s blog for drawing my attention to Microsoft’s rather easier-to-read explanation of SDDL as it applies to services in the KB article “Best practices and guidance for writers of service discretionary access control lists”.

Oh, and of course, thanks to Microsoft for explaining it all.  I’m sure I’m not the only service author or administrator that has been confused by the SDDL output from “sc sdshow”.  Now, if only we could get some tools that would allow us to surf through DACL-space…  I’m brainstorming for ideas, but haven’t yet had any that I can put into code.

The really scary part about DACLs, of course, is that anyone can create a new secured object, and define what the various bit-fields of the ACE mean… there’s no good way to enforce documentation of security flags, and (as we’ve seen here) few tools or documentation already existing that help you interpret even the system-enforced security object DACLs.

Sometimes it’s good to be a foreigner

Every now and again, I go looking to foreign versions of various web sites – mostly, because of my own background, I go to UK sites (because that’s where I grew up), or US sites (because that’s where I live now).

Here’s a UK gem that US developers aren’t aware of: MSDN Nuggets – a 10-15 minute exposition on a developer topic, for people who are too pressed for time to make it through an entire one-hour webcast (and I’ve met very few developers who can sit still for a whole hour).

Then there’s the sites where I don’t even read the language – but in the case of Microsoft’s Japanese Security pages, you don’t have to!  [Wouldn’t it be nice if the English Security notices included such a simple demonstration of how an attack can proceed?]

Of course, for the most part, the feeling is the other way around – the US sites get all the best offers, the new features, etc, etc, but occasionally local ingenuity allows you to get something useful from a site outside the US.

Broaden your horizons – there’s a whole world of information out there, and it’s all as close as your desk!

SDDL – easier to read, except when it’s not.

SDDL was introduced by Microsoft in Windows 2000, as a counter to the difficulty developers had in writing (and administrators had in reading) Security Descriptors, and specifically the Access Control Lists that come with them.

The recent advisory about service security settings (the title says “possible vulnerability” – as far as I’m concerned, it’s definite – I’ve exploited it on a couple of our own machines in XP SP1) led me to check on some other services, particularly the one that I make and sell.

My service turned out to be alright, and then a friend emailed me to ask about our favourite target: Quickbooks.  The new Quickbooks 2006 includes a system service.  I got Susan to list the SD on the service:

 C:\Documents and Settings\Administrator>sc sdshow QuickBooksDB


Wow – that’s confusing, isn’t it?  Okay, let’s deconstruct it – “D:” at the start indicates it’s a “Discretionary ACL” or “DACL” – this is a list of things that users / groups can / cannot do.  The “S:” towards the end is for a “SACL” – “System ACL”, which lists what gets logged.
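To make that top-level split concrete, here’s a rough sketch in Python.  The SDDL string below is invented for illustration (it is not the actual QuickBooks output), and `split_sddl` is just a naive helper of my own – real SDDL can also carry “O:” owner and “G:” group sections that this ignores:

```python
# Split a hypothetical SDDL string into its DACL ("D:") and SACL ("S:") parts.
# The string below is an invented example, not the actual QuickBooks descriptor.
def split_sddl(sddl):
    """Return (dacl, sacl) substrings of an SDDL security descriptor."""
    d_pos = sddl.find("D:")
    s_pos = sddl.find("S:", d_pos + 2)  # the SACL marker follows the DACL, if present
    if s_pos == -1:
        return sddl[d_pos:], ""
    return sddl[d_pos:s_pos], sddl[s_pos:]

sample = "D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCLCSWLOCRRC;;;IU)S:(AU;FA;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;WD)"
dacl, sacl = split_sddl(sample)
print(dacl)  # the discretionary ACEs - who can do what
print(sacl)  # the audit ACEs - what gets logged
```

Each parenthesised group inside those sections is one ACE, which is what we’ll pick apart next.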

Let’s look at a sample DACL Access Control Entry (ACE):


The “A” means “Allow” – this ACE lists what the user is allowed to do.  The “SY” means that the user being described is the local system.

The rights in the middle are made up of selections of pairs of letters:


So, that explains it, right?  Well, not exactly – what does it mean to “Create Child” on a service?  To “List Child” on a service?

After a lot of looking, I find that there really isn’t any sensible meaning to those.  The trick is to ignore those names.  Instead, think of the pairs of letters as representing numbers:

CC is listed as being equivalent to SDDL_CREATE_CHILD, or ADS_RIGHT_DS_CREATE_CHILD – and that last name has the value ‘1’ in the header file IADS.H.

Oh yes, you have to have the Platform SDK or other source of Windows Include Files to figure this out.

Then you go to the header file WinSvc.h, and find that SERVICE_QUERY_CONFIG is a right, and has the value 1.


To help you, I did the work and came up with:

CC – SERVICE_QUERY_CONFIG – ask the SCM for the service’s current configuration
LC – SERVICE_QUERY_STATUS – ask the SCM for the service’s current status
SW – SERVICE_ENUMERATE_DEPENDENTS – list dependent services
RP – SERVICE_START – start the service
WP – SERVICE_STOP – stop the service
DT – SERVICE_PAUSE_CONTINUE – pause / continue the service
LO – SERVICE_INTERROGATE – ask the service its current status
CR – SERVICE_USER_DEFINED_CONTROL – send a service control defined by the service’s authors
RC – READ_CONTROL – read the security descriptor on this service.
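That table is exactly the sort of thing a small script can apply for you.  Here’s a minimal sketch – the two-letter codes and right names are the ones listed above; the `decode_rights` helper itself is my own invention, and it assumes the rights field is a clean run of two-letter pairs:

```python
# Decode the two-letter rights codes from a service ACE's rights field
# into the service-specific right names tabulated above.
SERVICE_RIGHTS = {
    "CC": "SERVICE_QUERY_CONFIG",
    "LC": "SERVICE_QUERY_STATUS",
    "SW": "SERVICE_ENUMERATE_DEPENDENTS",
    "RP": "SERVICE_START",
    "WP": "SERVICE_STOP",
    "DT": "SERVICE_PAUSE_CONTINUE",
    "LO": "SERVICE_INTERROGATE",
    "CR": "SERVICE_USER_DEFINED_CONTROL",
    "RC": "READ_CONTROL",
}

def decode_rights(rights_field):
    """Split a rights string such as 'CCLCSWLOCRRC' into named rights."""
    pairs = [rights_field[i:i + 2] for i in range(0, len(rights_field), 2)]
    # Leave any unrecognised pair as-is rather than guessing at it.
    return [SERVICE_RIGHTS.get(p, p) for p in pairs]

print(decode_rights("CCLCSWLOCRRC"))
# ['SERVICE_QUERY_CONFIG', 'SERVICE_QUERY_STATUS', 'SERVICE_ENUMERATE_DEPENDENTS',
#  'SERVICE_INTERROGATE', 'SERVICE_USER_DEFINED_CONTROL', 'READ_CONTROL']
```

Feed it the middle field of any service ACE from “sc sdshow”, and you get something an administrator can actually read.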

SDDL turns out to be absolutely no use whatever in figuring any of this out, and I couldn’t find a tool on Microsoft’s site that adequately lists service rights in such a way that an admin might understand them.  Maybe I’m just not looking in the right place – if you know of any, please let me know!

Is it any wonder that there’s a difficulty with service writers and administrators incorrectly setting access rights?  How do you guys configure security descriptors on objects like services?

"Windows Access Control Demystified" – demystified.

There’s been a little fuss raised in security research circles about a paper by Sudhakar Govindavajhala and Andrew Appel, titled “Windows Access Control Demystified“.

From looking at what it does, the paper quoted is actually surprisingly simple – but there is brilliance in the decision to attack the problem (who would guess it was a tractable problem?) and make it so simple in the first place.

Sudhakar & Appel use Prolog – an old, old, old AI language – to document simple logical predicates and conclusions, such as “if group G can access a resource R, and user U is a member of G, then U can access R”, which is really easily expressed in Prolog.  The real goodies are conditions like “if U is compromised, and U can write to R, and user V can execute R, then V is compromised”.  Then they just let Prolog go off and do its thing.

Prolog’s “thing” is to take such a list of conditions, and a few starting axioms (the two obvious ones the authors use are “user Guest is compromised”, and “a Restricted User is compromised”), and infer conclusions such as “Local System account is compromised”, or “Administrator account is compromised”.
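That inference loop is simple enough to sketch in a few lines – here in Python rather than Prolog, with a single write/execute rule and a couple of invented facts that echo the SSDP example (the function name and the facts are mine, purely for illustration):

```python
# Toy forward-chaining version of the paper's approach: apply the rule
# "if U is compromised, and U can write to R, and V executes R, then V
# is compromised" until nothing new can be inferred.
def find_compromised(writes, executes, initially_compromised):
    """writes/executes are (principal, resource) pairs; returns the fixed point."""
    compromised = set(initially_compromised)
    changed = True
    while changed:
        changed = False
        for user, resource in writes:
            if user in compromised:
                for victim, res in executes:
                    if res == resource and victim not in compromised:
                        compromised.add(victim)
                        changed = True
    return compromised

# Invented facts: a restricted user can reconfigure what a system
# service runs, and Local System executes it.
writes = [("RestrictedUser", "service_binary")]
executes = [("LocalSystem", "service_binary")]
print(find_compromised(writes, executes, {"RestrictedUser"}))
```

Real Prolog does this (and keeps the proof tree) for free; the point is only that the reasoning itself is mechanical once the access facts are collected.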

Embarrassing fact: that’s nothing surprising – we’re all well aware that there are likely to be numerous routes to compromise hanging around in most systems.  These are called “privilege escalation attacks”, and they are why good administrators pay attention to what their users do even after they have restricted them as to what they may do.

The joy of Prolog is that it tells you how it got to its answer.  For instance, in the case of yesterday’s Microsoft Security Advisory 914457, it’ll tell you that if the Restricted User is compromised, the Local System is compromised because the SSDP service can be configured by any Restricted User to run any executable under any account.

Most of the task of creating the input to Prolog can be done automatically, by analysing the system’s current configuration and generating lists of access privileges to system objects.

I hope that this tool – or something like it – will initially be used by Microsoft to help lock down their systems, and later be used by application developers to help lock down their own application installs; eventually, it would be nice to find it in place in businesses, where system administrators would use it (among others) to keep track of their security exposure as they change configurations.

Sadly, I predict that it will be used at greater speed by hackers and crackers in order to find and exploit vulnerabilities.

Expect a rash of security advisories from Microsoft and third-party software developers, and a number of viruses that use these privilege-escalation exploits along with standard social engineering to attempt to gain access to your systems.

However, it is important to note that this tool does not discover points of entry – it assumes that you already have some access to the system, either as Guest, or as a Restricted User. 

The best way to keep these elevation of privilege attacks from being a bother to you is to not grant any privilege to people you can’t in some way trust. 

The information in this paper cannot be used to create a worm, because it does not document or examine the points of entry into a system.  As with all security tools, even if you fixed all the problems it reported, you couldn’t say that your system is secure, because the tool is only as good as the data it gets – and that data is going to be incomplete.  Tools alone cannot secure a system.

AOL, Yahoo introduce "pay to spam" service.

Okay, so that’s not exactly what they call it, but really, what else are we supposed to read into this announcement?

New Service Would Charge E-mail Senders

[Note, that’s Yahoo’s web site, so it might be a teensy-bit biased on the story.]

Here’s a quote:

“[Yahoo and AOL] plan to introduce a service that would charge senders a fee to route their e-mail directly to a user’s mailbox without first passing through junk mail filters”

Uh, yeah, that sounds like “sell access to their users’ inboxes to spammers”, in any other language.

How much is your spam-free inbox worth?

“The fees, which would range from 1/4 cent to 1 cent per e-mail” – kinda cheap, compared to junk snail mail.
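To put that in perspective, here’s the back-of-the-envelope arithmetic for a million-message mailing at the quoted rates (the message count is my own illustrative number; only the per-message fees come from the article):

```python
# Cost of a million guaranteed-delivery messages at the quoted fee range.
messages = 1_000_000
low_rate = 0.0025   # 1/4 cent, in dollars
high_rate = 0.01    # 1 cent, in dollars
print(f"${messages * low_rate:,.0f} to ${messages * high_rate:,.0f}")
# $2,500 to $10,000
```

A few thousand dollars to put a million messages past the filters is pocket change next to printing and posting a million pieces of junk snail mail.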

Maybe they could spin the service a little more for us?  Yes, the sentence continues: “are the latest attempts by the companies to weed out unsolicited ads, commonly called spam, and identity-theft scams. In exchange for paying, e-mail senders will be guaranteed their messages won’t be filtered and will bear a seal alerting recipients they’re legitimate.”

Let me get this straight… if I’m one of your customers, you’re going to sell access to my inbox, to anyone who has my address, allowing them to bypass the spam filters that would normally have kept them out of it, and this is to “weed out … spam”?

How exactly does that work?

Sounds crazy to me.

Thank goodness I control my own spam filters, and I’m not a customer of either Yahoo’s or AOL’s.

A final quote from the article:

“AOL and Yahoo would get a cut of the fees charged by Goodmail.”

So, that’s alright, then.

Update: The BBC’s news web site has an article that suggests that this will reduce spam by making it too expensive to spam.  If the filters currently in place aren’t already stopping spam, how will this change?  And if I’m a bona-fide company whose legitimate mail to willing recipients gets caught in Yahoo / AOL’s spam filters, do I have to pay to get my mail to my customers?  Where’s the incentive for AOL / Yahoo to make their spam filters accurate, since they are going to “get a cut of the fees”?

Sometimes it’s hard to be a developer…

I’ve spent a while adding new features to my FTP client, and you’ll get to see some of that in a month or two, when it all comes together as a release.

One of the latest is “drag and drop” – and I went searching for advice on how to do the drag and drop from “virtual folders” – i.e. when the files don’t exist locally until you actually do the drop.  So you can’t do the easy method of passing a file location to the drop target.  I found a great document that perfectly describes what I need to do:

“Handling Shell Data Transfers”

Great, clear, simple advice (simple, if you’re a developer) on how to do drag and drop – and it serves a second purpose, that of cut/copy and paste.

So, I get the drag and drop working, and then I go to look at the cut/copy and paste, and add the few lines of extra code needed.

It doesn’t work.

I realise quickly that the following lines are the problem:

7. When the paste is complete, the target calls the IDataObject::SetData method with the CFSTR_PASTESUCCEEDED format set to DROPEFFECT_MOVE.

8. When the source’s IDataObject::SetData method is called with the CFSTR_PASTESUCCEEDED format set to DROPEFFECT_MOVE, it must check to see if it also received the CFSTR_PERFORMEDDROPEFFECT format set to DROPEFFECT_MOVE. If both formats are sent by the target, the source will have to delete the data. If only the CFSTR_PASTESUCCEEDED format is received, the source can simply remove the data from its display. If the transfer fails, the source updates the display to its original appearance.

No matter how much I try, I can’t get CFSTR_PASTESUCCEEDED into my SetData method.  I get the CFSTR_PERFORMEDDROPEFFECT format, so I know I’m setting it up correctly, but I get no CFSTR_PASTESUCCEEDED.

Days of trying different code, different ways of triggering the paste, go by, and I finally realise that there is no CFSTR_PASTESUCCEEDED coming from Windows Explorer – and since this is the de facto drop target, that’s exactly the model that is going to be followed by every other drop target I’m going to meet.

I’d really love to tell Microsoft about this, and get the documentation fixed.  This used to be dead easy – I’d click on the “SDK Feedback” button, and send an email.

No “SDK Feedback” button any more.

Why not?  Where did it go?

What’s in its place?  Nothing terribly useful – a link to a web page that I can fill in.

Can I attach a file of source code to demonstrate this?

No.

Can I italicise words that are important?

No.

Does the web page offer any extra capabilities that can’t be done in an email?

No.

So, why did this happen?  Beats me.

I finally decided to give up and ignore CFSTR_PASTESUCCEEDED entirely.  It seems superfluous.  I do find it thoroughly irritating that my time is wasted once again, because I tried to follow specific documentation that turned out not just to be wrong, but to have unnecessary, and incorrect, extra detail.  This isn’t the first time (the last time was when I was following documentation on how to close SSL connections), and I doubt that it will be the last.

Some guidelines about fax.

Susan Bradley, SBS Diva and Security Guru Extraordinaire, posts about some guidelines for making your fax modems work reliably.

This reminds me that I am continually amazed by people who call our company asking for instructions on faxing us – not because they want to fax us, but because of why they want to fax us.

Their perception is that fax is more secure than email.

I’ll pause while you think on that.


Okay, so here we go…

  • Data Encryption: A fax cannot be encrypted, but an email can. Sure, there are some encrypting fax machines and/or software, but the ones I’ve seen all require that your peer has the same machine / software; email has standardised encryption methods – the most common of which is S/MIME.
  • Peer identification: An email can be diverted by hijacking the DNS settings in the sender’s DNS servers for the recipient’s domain – possible, but hardly trivial (and resolved by using encryption with a peer whose certificate you know). A fax can’t be so easily diverted, but when a company moves, its phone number gets assigned to someone else. That explains why we get a number of faxes every week for a rock quarry; some of these include ordering information.
  • Non-repudiation: again, S/MIME and others come into their own here, by providing the ability to sign an email. You can’t sign a fax, except with a hand signature that is so ludicrously easy to duplicate (cut it out of a previous fax, and paste it onto the next one).
I’m normally all over the idea that users should be using solutions with which they are comfortable, and whose failure modes and security mechanisms they are already familiar with – but it seems that too few people have ever considered these issues for facsimile machines, and they’ve all been told that email is insecure.

By now, we should all be comfortable sending signed and encrypted messages, using self-signed certificates.

Update 2006-02-07: Prudential’s customers have their data sent, by the thousands, to a herbal remedy store, because the two fax numbers differ by only a digit.  Private information should be sent through secure channels – fax is not a secure channel.