So, I was reading my copy of Information Security Magazine when I came across an article by Jay Heiser about “Military Madness”. Go read it yourself – I’ll wait.
The article amused me particularly because my boss and one or two of my co-workers came out of the military.
Sadly, I don’t think the business mind-set always helps all that much, either.
The most glaring examples are the “your data is now ours, and we can sell it to whomever” issues that have been plaguing various credit card processing companies for some time.
Where I work, in health insurance, I’ve found the safest position to occupy is to view the member’s data as belonging to the member, and held in trust by us solely for the purpose of doing business on behalf of that member.
I think that a lot can be achieved simply by re-coupling benefit and loss, or risk. Take credit cards, for instance, where the benefit accrues to the banks, and the loss accrues to the vendor and rarely to the customer.
Vendors can’t tell credit card companies that they’re going to go to an all-cash basis, and they won’t get far by telling customers that their credit cards are too risky to take. So, the banks cleverly lump all of the risk with the group least able to manage it – while at the same time making commercials that imply that identity theft is all the fault of those nasty vendors.
There’s no feedback within the credit card system that allows or encourages it to self-strengthen. Any strengthening measures are imposed from outside, such as the US law that limits a card-holder’s liability to $50. Even that is short-sighted, because it offers no protection to the merchant. On a fraudulent sale, the merchant pays the processing fee on the transaction coming in, the cost of any goods sent out as a result, a second charge on the refunding of the money to the card-holder, and often a $25 “chargeback fee” for having the temerity to accept the credit card in the first place – despite having no reliable way to verify that the card is in the possession of the card-holder.
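To put rough numbers on that, here’s a quick sketch of the arithmetic. The fee rate, margin, and chargeback fee are illustrative figures of my own, not any particular bank’s schedule:

```python
# Sketch: what a single fraudulent transaction costs the merchant.
# All rates and fees below are illustrative assumptions, not real tariffs.

def merchant_loss(sale_amount, fee_rate=0.03, cost_of_goods=None,
                  chargeback_fee=25.00):
    """Total out-of-pocket loss to the merchant when a sale is charged back."""
    if cost_of_goods is None:
        cost_of_goods = 0.60 * sale_amount   # assume a 40% gross margin
    fee_on_sale = fee_rate * sale_amount     # processing fee on the way in
    fee_on_refund = fee_rate * sale_amount   # second fee when the money goes back
    return cost_of_goods + fee_on_sale + fee_on_refund + chargeback_fee

# A $100 fraudulent sale: $60 of goods + $3 + $3 in fees + $25 penalty = $91.
print(f"${merchant_loss(100.00):.2f}")
```

So, under these assumed numbers, the merchant loses almost the whole face value of the sale – and the card-holder, whose card it was, loses at most $50.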
This is a fun paper. It talks about a rootkit that inserts itself as a virtualisation layer underneath the existing OS.
And yes, because it’s a Microsoft Research paper, they had to point out that they could achieve exactly the same result against a Linux box.
Obviously, the best way to detect such a rootkit is to note its rather unwieldy effect on performance (though would that necessarily always be so?), along with a detection routine that runs outside of the regular boot process – i.e. edit the CMOS boot settings to boot from removable media, and boot a read-only, known-clean OS with up-to-date detection tools. Sadly, that bootable read-only known-clean OS with detection tools is not something you can make from a Microsoft OS without skirting dangerously close to licence violations, or without paying through the nose for a licence for XP Embedded, which comes with Windows PE and is designed for OEMs to build an installation experience for Windows.
It would be nice if a licence for any Windows operating system also included a licence to create bootable DVD-Rs containing the user’s choice of recovery tools on a Windows PE subsystem.
The Virtual Rootkit has been covered at eWeak, where there’s a lovely TalkBack entry:
“Microsoft cannot seem to make its product impervious to malware attacks so the next best thing is to ensure that it nearest competitor, Linux/Open-source, is similarly vulnerable.”
Uh… Yeah. Microsoft did that. Right. 🙂
Sadly, the Computer Science answer is that any modern computer (quantum computers possibly aside) can be emulated by a Turing Machine, and it’s a piece of cake to implement a finite-tape Turing Machine on a modern computer. As such, any modern computer may emulate any other modern computer; the only limits are the accuracy of the emulation, and the performance of the emulated OS.
Quantum Computers may be an exception… possibly.
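The emulation point is easy to make concrete. Here’s a toy Turing-machine simulator – my own throwaway example, nothing to do with the paper – whose one machine increments a binary number in place:

```python
# A tiny Turing-machine simulator, just to show how little code "emulation"
# fundamentally requires. The 'increment' machine below is a contrived example.

def run_tm(tape, rules, state, head, halt="halt", max_steps=10_000):
    """Run a Turing machine; rules maps (state, symbol) -> (write, move, next)."""
    cells = dict(enumerate(tape))            # sparse tape; unset cells are blank
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, " ")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, " ") for i in range(lo, hi + 1)).strip()

increment = {
    ("inc", "1"): ("0", -1, "inc"),   # 1 plus carry is 0; carry moves left
    ("inc", "0"): ("1",  0, "halt"),  # 0 plus carry is 1; done
    ("inc", " "): ("1",  0, "halt"),  # ran off the left edge: new high bit
}

print(run_tm("1011", increment, state="inc", head=3))  # 1011 is 11; prints 1100
```

Twenty-odd lines, and the only thing stopping it emulating your PC is patience – which is exactly why “we could do this to Linux too” is a mathematical triviality rather than a boast.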
I was going to title this post “Microsoft Representative Says to Return Quickbooks for Refund”. Then I thought to myself:
“Oh, Jesper’s going to be so mad with me for that tagline.”
I’d probably also upset Steve Riley, who works a lot with Jesper, and who gets irritated when Microsoft reps are badly misquoted, or when writers distort an otherwise succinct message in order to demonise Microsoft.
Jesper didn’t actually say to return Quickbooks for a refund, and he wasn’t directly referring to Quickbooks when he said:
“Two related issues usually come up at about this point in the conversation. The first one is that some application requires at least Power User privilege. If that application is not an inherently administrative one it is broken. Period. Return it for a refund or a fixed version.”
But, as you can see from http://www.threatcode.com, despite years of prodding from Security MVPs and CPAs alike that Quickbooks shouldn’t be an admin-only product (what system administration task does it perform? NONE!), Quickbooks remains solidly in the “Administrator or Power User” camp. At one point, a tech support rep at the company “responsible” even claimed that this was a good thing, because it meant that only trusted people were doing your accounts.
As a developer myself, that excuse looks strikingly similar to “we didn’t want to do the hard work of figuring out how to share files across users without writing to files in the Program Files directory tree”.
Me, I trust my accountant to do my accounts; but I don’t trust my network admin to do my accounts, nor do I trust my accountant to administer my network.
So, my network admin has administrator privileges, and my accountant has the key to my filing cabinet. I’ll be upset if I find that they’re sharing them.
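For what it’s worth, the fix the vendors keep avoiding isn’t even hard: put shared, writable data in a machine-wide application-data directory instead of scribbling under Program Files. A sketch, with made-up directory and application names:

```python
# Sketch: choosing a machine-wide writable data directory, so the program
# never needs write access to Program Files. Names here are illustrative.

import os

def shared_data_dir(app_name, env=os.environ):
    """Pick a shared, writable data directory for app_name."""
    # On Windows, PROGRAMDATA (or the older ALLUSERSPROFILE) is the writable
    # shared-data root; elsewhere, fall back to a conventional /var/lib path.
    base = env.get("PROGRAMDATA") or env.get("ALLUSERSPROFILE") or "/var/lib"
    return os.path.join(base, app_name)

print(shared_data_dir("ExampleAccounts", {}))
```

A restricted user can read and write files there (given a sensible DACL on the directory), and nobody needs to be Power User just to balance the books.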
I was reading an article just the other day about attacks on the Microsoft Fingerprint Reader, which contained the important reminder that this isn’t a security device, it’s a convenience device, and that it should not be used as a credential for logging on to a corporate system.
I’ve maintained on several occasions that a fingerprint is a claim of identity, and falls far short of being a proof of identity. It also has the interesting property that you can’t revoke it and issue a new one if it has been exploited, making it of limited use as a credential.
So, what can you do with a fingerprint?
Well, there are some identity-related uses I can think of.
Suppose you are in a busy hospital, with a number of terminals spread around the place. Accessing these terminals for private information should require strong credentials. But what about public information?
Does, say, a nurse occasionally need to verify the usual dosage for Tylenol? Would a doctor find it convenient to search for phone numbers of specialists whose work he has previously approved of? I’d say that’s likely – and each person will have their own favoured subset of public information, and starting point(s) for looking at it.
For such public information, of course, it would be great to walk up to a terminal, press your finger against the print reader, and have your chosen view on that information be rapidly displayed.
What other uses can you think of, where a false match would not reveal sensitive or private information, or provide privileged access to systems, but where a relatively good rate of true matches makes a system easier and quicker to use?
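Here’s a sketch of the hospital-terminal idea, with toy bit-strings standing in for real fingerprint templates (the users, “templates”, and threshold are all invented for illustration – real matchers use proper feature vectors and scoring):

```python
# Sketch: fingerprint as a convenience key to *public* information only.
# A false match shows someone else's choice of public pages -- annoying,
# not dangerous -- which is why a cheap, fallible matcher is acceptable here.

def similarity(a, b):
    """Fraction of matching positions between two equal-length templates."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def choose_view(scan, enrolled, threshold=0.9, default="public-home"):
    """Return the best-matching user's start page, or a generic default."""
    best_view, best_score = default, threshold
    for user, (template, view) in enrolled.items():
        score = similarity(scan, template)
        if score >= best_score:
            best_view, best_score = view, score
    return best_view

enrolled = {
    "nurse_example":  ("1100101110", "dosage-reference"),
    "doctor_example": ("0011010001", "specialist-phonebook"),
}

print(choose_view("1100101111", enrolled))  # near-match: "dosage-reference"
print(choose_view("0000000000", enrolled))  # no good match: "public-home"
```

Note that there’s no “deny” path at all: the worst outcome is the generic public page, which is exactly the property that makes a fingerprint acceptable as a claim of identity here.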
According to Identity Woman, Senator Ron Wyden has drafted and submitted a “Net Neutrality Bill”, and as with so many other pieces of legislation, the words it’s phrased in make it sound really wonderful. Unfortunately, they can be read in a way that isn’t quite so lovely.
Let’s re-draft some of the points she refers to:
– Broadband providers will not be allowed to interfere with, block, degrade, alter, modify or change traffic on the Internet;
Becomes: Broadband providers will not be allowed to provide filters for spam, illegal or obscene content, nor will they be allowed to quench floods of denial-of-service attacks on your web site.
– Broadband providers will not be allowed to create a priority lane where content providers can buy quicker access to customers, while those who do not pay the fee are left in the slow lane;
Becomes: You will not be allowed to create a quicker path for people with whom you have a business relationship – your customers will have to wait behind the spammers, if the spammers got there first.
– Broadband providers must allow consumers to choose which devices they use to connect to the Internet while they are on the net;
Becomes: every time you call for technical support, you will be told that the problems are caused by the device you chose to install (even if it’s the provider’s suggested model from last month), and you will have to spend time proving that this is not the situation, and/or replacing the device with one of the broadband provider’s choosing while they ‘troubleshoot’ the problem.
– Consumers should have non-discriminatory access and service; and
Becomes: No matter how flagrantly you may abuse the system, you get to try again with the same broadband provider.
– The broadband world should have a transparent system in which consumers, Internet content, and applications companies have access to the rates, terms, and conditions for Internet service.
Okay, so I can’t find much wrong with that one.
Am I just being too much of a curmudgeon, or is this as dangerous as it sounds?
Time was when we used to say “the only secure computer is one that has been turned off and unplugged”.
According to New Scientist, that’s not even safe any more:
What next, we have one that works when it’s unplugged, too?
Microsoft’s latest security advisory has me tickled pink.
I can only guess that it is assumed by most people that any update to Internet Explorer must be a security fix, because this is an advisory to indicate that the patch to Internet Explorer is _not_ security related.
Long story short, Microsoft was found to be in breach of a patent – a patent for what sounds like a totally ridiculous and obvious process – and can no longer allow active content that executes automatically on page load. This update changes the behaviour of Internet Explorer such that you have to activate (in most cases, by clicking on its location) the active content.
My recommendation – Eolas won the lawsuit against Microsoft, not against you. Microsoft has to release this update, but you don’t have to install it. Don’t.
While I was at Microsoft, every so often the question would arise “how can we do more to prevent users from running all the time as administrator?”
There’s something sexy and powerful about being “administrator”. Suggest taking administrator access away from someone who has it now – say, a developer, or a small business’ financial officer (thanks, Quickbooks!), or a home user (thanks, Turbotax! – by the people who brought you Quickbooks) – and you’ll get thrown the look of an alcoholic who’s just realised that you’ve figured out where he’s stashed his hooch.
Okay, so undeniably, there is power in that account – and that’s the main reason why you should spend as little time with that power as possible. “Power corrupts”, remember, and in this case, the thing most likely to get corrupted, by that power being constantly “on”, is the important data you use to run your business.
In Vista and Longhorn, this has been significantly addressed by use of UAP / UAC / LUA or whatever it’s called this morning.
For some reason, nobody ever took up my suggestion, which was brought on by the observation that my kid thinks the guy with power at his school is the janitor. He has the keys to every classroom, he knows where the secret tunnels are, and how to open up the locked cabinets with the electricity in them. To those of us beyond secondary education (high school), the janitor is somewhat less cool – without him, the school couldn’t function, but we wouldn’t like to do his job unless it was absolutely necessary that we do so.
So, I think that we should rename “administrator” to “janitor”, at least in our minds, if not in our systems.
This highlights that administrator access should only be used when you need to work on the ‘plumbing’ of the system. It’s not really the power-house, and the secret areas to which it has the keys are only the boiler-rooms and fuse-boxes of your system.
Where’s the harm in being administrator all the time? It’s like leaving all those locked cabinets open, for any old virus to abuse as it pretends to be you; it’s like spending time in the boiler room, where you could drop your bottle of cheap whisky and set off a fire that burns down the whole school.
Okay, enough with the analogy; here are some real reasons why. If you run as administrator, a virus or trojan that you run (and you will run one, one day) will be allowed to destroy not just your immediate files, but the entire system on which you depend – or worse, to install extra components that can be used to attack others, or to filch your private information. If you run as administrator, one day you will accidentally type a command that deletes an important system setting, or another user’s important files.
Do I run as administrator? No. In my job I run as a Restricted User. Not even “Power User” (another bad term that equates to “administrator”). I spend my day as a Security Engineer, and Developer, in Restricted User mode, because I don’t trust that I can detect every virus or trojan, or that I can control my actions sufficiently well not to do something disastrous. At times, it sucks, because there are programs I can’t run (but there are usually alternatives), and features I can’t access (but I can often open them up with appropriate tools and settings). I still can’t debug as easily in Visual Studio .NET 2003 (but the 2005 version fixes this).
There will always be “Elevation of Privilege” attacks, sure, but the answer is not to give up on separation of privilege completely. It’s tricky to write code to use least privilege, because you constantly have to think “what access do I have to this object, and what access do I need?” Again, that’s no excuse for doing the wrong thing. Any time you see a company whose software insists on unnecessarily running as administrator, think to yourself “I’m running a tool that is written by people who haven’t learned anything new since at least 1995”.
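The habit itself is simple enough to show in code. A contrived sketch (not from any product) of asking “what do I need?” before asking “what have I got?”:

```python
# Sketch of the least-privilege habit: request only the access the task needs,
# and check what you actually have before acting. Contrived demo file below.

import os
import tempfile

def open_for_reading(path):
    """Need: read. Ask for: read. Never 'open read-write just in case'."""
    if not os.access(path, os.R_OK):
        raise PermissionError(f"no read access to {path}")
    return open(path, "r")            # minimal mode for the task at hand

# Demo: a read-only file, much like what a Restricted User sees all day.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    f.write("ledger data\n")
    demo_path = f.name
os.chmod(demo_path, 0o400)            # owner read-only

with open_for_reading(demo_path) as f:
    print(f.read().strip())           # reading works with read access alone
os.remove(demo_path)
```

A program written this way runs identically for the administrator and the restricted user – and fails loudly, at the right line, in the rare case where it genuinely lacks access.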
“Raids close file-sharing server” says the BBC headline, on a story covering the closure of a major site in the eDonkey peer-to-peer “file sharing” network. Okay, so we know that “file sharing” is generally a euphemism for “we want to watch movies or listen to music, but we don’t want to pay anyone for the privilege, and we’ll find ways to claim that it isn’t really theft”, so obviously this is a “good thing” from the view of content providers.
[I’m a content provider – I develop software, I write documentation, even this blog is copyrighted text, by virtue of the fact that I wrote it.]
However, from the point of view of system administrators, I predict this may lead to an increase in the spyware load on your systems.
It seems bizarre, right, that I’m suggesting spyware goes up when a p2p site goes down? You’d think that would interrupt the flow of spyware through infected files. Here’s my reasoning:
The average user of eDonkey has been using it for some time, and has got to the point where he/she subliminally knows what is safe content, and he/she has a version of eDonkey that might not be current and up-to-date, but is ‘good enough’.
That’s a stable system – you’ve already managed any spyware that may have come with the distribution of eDonkey, and the user has essentially educated themselves to not introduce more into the system.
Now, the system is made unstable – the server that was being used for the p2p sharing is no longer accessible, and the user panics trying to find another server. Maybe they can’t find one, or maybe the server they find won’t accept their old version of eDonkey. The user may go and download a new p2p program, with new attached spyware, and new servers to download from. In addition to what comes with whatever new p2p program they download, they’ll also find that the users of this new p2p program and new server behave in different ways – requiring that the user re-learn how to intuit spyware’s presence in the files they are downloading.
This isn’t an argument for leaving p2p file-servers up, it’s an argument that you need to expect a spike in spyware, plan for it, and protect yourselves.
Coincidentally, Microsoft released Beta 2 of its anti-spyware product, “Windows Defender”, just a few days ago. Unlike with anti-virus programs, you generally need more than one anti-spyware product on your system, so I’ll also recommend Lavasoft’s Ad-Aware and Spybot Search & Destroy. (Be careful of programs with “Spy” in the name – many of them are spyware masquerading as spyware-removal tools – but Spybot Search & Destroy is not such a rogue program.)
Thanks to Dana Epp’s blog for drawing my attention to Microsoft’s rather easier-to-read explanation of SDDL as it applies to services in the KB article “Best practices and guidance for writers of service discretionary access control lists”.
Oh, and of course, thanks to Microsoft for explaining it all. I’m sure I’m not the only service author or administrator that has been confused by the SDDL output from “sc sdshow”. Now, if only we could get some tools that would allow us to surf through DACL-space… I’m brainstorming for ideas, but haven’t yet had any that I can put into code.
The really scary part about DACLs, of course, is that anyone can create a new secured object and define what the various bit-fields of the ACE mean… there’s no good way to enforce documentation of security flags, and (as we’ve seen here) there are few existing tools or documents to help you interpret even the system-enforced security object DACLs.
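As a starting point for that brainstorming, here’s a sketch of a decoder for a single SDDL ACE of the sort “sc sdshow” prints for a service. The tables cover only the common codes (and the service-specific meanings of the generic rights, per the KB article above); a real tool would need the full SDDL grammar:

```python
# Sketch: decode one SDDL ACE string, e.g. "(A;;CCLCSWRPWPDTLOCRRC;;;SY)".
# Tables are deliberately partial -- common codes only, service-DACL meanings.

ACE_TYPES = {"A": "Allow", "D": "Deny"}
RIGHTS = {  # two-letter rights codes, as they apply to service objects
    "CC": "QueryConfig", "DC": "ChangeConfig", "LC": "QueryStatus",
    "SW": "EnumerateDependents", "RP": "Start", "WP": "Stop",
    "DT": "PauseContinue", "LO": "Interrogate", "CR": "UserDefinedControl",
    "SD": "Delete", "RC": "ReadControl", "WD": "WriteDac", "WO": "WriteOwner",
}
SIDS = {"SY": "LocalSystem", "BA": "Administrators",
        "AU": "AuthenticatedUsers", "IU": "InteractiveUsers"}

def parse_ace(ace):
    """Split '(type;flags;rights;obj;inherit_obj;sid)' into readable names."""
    parts = ace.strip("()").split(";")
    ace_type, rights, sid = parts[0], parts[2], parts[5]
    names = [RIGHTS.get(rights[i:i + 2], rights[i:i + 2])
             for i in range(0, len(rights), 2)]
    return (ACE_TYPES.get(ace_type, ace_type), SIDS.get(sid, sid), names)

print(parse_ace("(A;;CCLCSWRPWPDTLOCRRC;;;SY)"))
```

Even this much turns “CCLCSWRPWPDTLOCRRC;;;SY” into “LocalSystem may query, start, stop, pause, and interrogate the service” – which is the kind of readability I’d want from a DACL-surfing tool.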