Fake Anti-Malware is Apparently Microsoft’s Fault

Munir Kotadia, an IT journalist in Australia, has finally figured out how to blame Microsoft for the fake anti-malware epidemic. Apparently, “Microsoft could save the world from fake security applications by introducing a whitelist for apps from legitimate security firms” and, presumably, has neglected to do so out of sheer malice.


I’m clearly not a thinker at the same level as Munir; maybe that is why I don’t fully get this whitelist he proposes. Does he want one covering only security software? How would you identify security software? I can see only two ways. The first is to detect software that behaves like security software: if you scan files for viruses, hook certain APIs, quarantine things occasionally, and throw frequent incomprehensible warnings, you must be security software. The problem is, the fake ones do only the last of those four. If you use heuristic detection of security software, it would be absolutely trivial for the fake packages to avoid tripping the warnings; they just have to avoid behaving like security software. Of course, if they actually DID behave like security software, we would not have this problem, would we?


The second approach I can think of is to have all security software identify itself as such, both the fake and the real. They could set some bit in the application manifest, the file that describes the application. I propose that it should look like this:


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32"
                    name="RBU.FakeAntiMalware.MyCurrentVersion"
                    version="6.0.0.0"
                    processorArchitecture="x86"
                    publicKeyToken="0000000000000000"
                    securitySoftware="True"
  />
</assembly>


Note the flag in the manifest above that identifies this package as security software. Now Microsoft can just compare the name of the package against a list of known good software and, if it does not match, block it. This extremely simple mechanism works just as well as the “evil bit”: http://www.ietf.org/rfc/rfc3514.txt. In fact, if we simply change the manifest like this, we can avoid the whole whitelist altogether:


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32"
                    name="RBU.FakeAntiMalware.MyCurrentVersion"
                    version="6.0.0.0"
                    processorArchitecture="x86"
                    publicKeyToken="0000000000000000"
                    malicious="True"
  />
</assembly>


There you have it! Microsoft should make it part of the logo guidelines to require all malicious software to identify itself as malicious. Problem solved! You may go back to surfing the intarwebs now.
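
For the visual learners, here is roughly what the enforcement logic would look like. This is a tongue-in-cheek Python sketch: the manifest schema is real, but the loader hook, the whitelist contents, and the securitySoftware and malicious attributes are, of course, entirely made up.

import xml.etree.ElementTree as ET

NS = {"asm": "urn:schemas-microsoft-com:asm.v1"}

# The hypothetical Microsoft-maintained whitelist of known good security software.
KNOWN_GOOD = {"Contoso.RealAntiMalware"}

def may_run(manifest_xml: str) -> bool:
    """Decide whether the hypothetical loader should allow this package."""
    identity = ET.fromstring(manifest_xml).find("asm:assemblyIdentity", NS)
    # Rule 1: anything that declares itself malicious is blocked. Foolproof!
    if identity.get("malicious") == "True":
        return False
    # Rule 2: self-declared security software must be on the whitelist.
    if identity.get("securitySoftware") == "True":
        return identity.get("name") in KNOWN_GOOD
    # Everything else runs unexamined, which is rather the problem.
    return True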


The sharp-eyed security experts in the crowd may have spotted a minor flaw in this scheme, however. What if the malicious software refuses to identify itself? Curses to them! Maybe we need something better. Perhaps Munir’s whitelist is to be a whitelist of all software? That would be simpler, to be sure. In fact, using Software Restriction Policies (SRP), which have been built into Windows for years, we can already restrict which software can run. Now all we need is our whitelist. Of course, as Munir points out, it is Microsoft’s responsibility to produce that whitelist.
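
As an aside, flipping SRP to default-deny is just a handful of registry values. Here is a minimal Python sketch of what that configuration looks like, assuming the documented Safer\CodeIdentifiers policy location; verify the value names and numbers against your own documentation before trying this on a machine you care about, and note that it must run elevated, on Windows.

import winreg

# SRP policy lives under the Safer\CodeIdentifiers key.
SRP_KEY = r"SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers"

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, SRP_KEY) as key:
    # DefaultLevel 0x0 = Disallowed: nothing runs unless a rule allows it.
    winreg.SetValueEx(key, "DefaultLevel", 0, winreg.REG_DWORD, 0x00000)
    # Enforce for executables (2 would include DLLs as well).
    winreg.SetValueEx(key, "TransparentEnabled", 0, winreg.REG_DWORD, 1)
    # PolicyScope 0 = apply to all users, including administrators.
    winreg.SetValueEx(key, "PolicyScope", 0, winreg.REG_DWORD, 0)

# All that remains is the small matter of the whitelist itself:
# one allow rule for every known good application on the planet.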


Producing the whitelist would be conceptually simple. Microsoft would simply have to create a division that ingested all third-party software, tested it, and validated it as non-malicious. DOMUS (The Department of Made Up Statistics) estimates the number of third-party applications for Windows at somewhere between 5 and 10 million, including shareware, freeware, open source, commercial applications, in-house developed applications, line-of-business applications, and the kiosk applications that drive your ATM, your gas pump, your car, and probably a spacecraft or two. In order to avoid becoming an impediment to deployment, Microsoft would have to test all such software for malice with an SLA of 24-48 hours, yet guarantee that software does not turn malicious after several weeks or months. It would also need to ensure that any updates do not introduce malicious functionality. In other words, to meet these requirements, Microsoft would need to do just two things: (a) develop a method of time travel, and (b) hire and train all of China to analyze software for malicious action. I’m sure the Trustworthy Computing division is working on both problems.


I am not arguing that reputation scoring does not have some promise, which is what Symantec’s Rob Pregnall was actually talking about, and which Munir turned into an indictment of Microsoft. However, reputation systems are not only fallible but also relatively easy to manipulate. Without consumers actually understanding what the reputation score means, and learning to value it over the naked dancing pigs, it will never help. Again, it comes down to educating consumers on how to be safe online and why, instead of scaring them into buying more anti-malware software. I may be mistaken, but I was under the impression that the reason Freedom of the Press is a cherished human right is that the press is there to educate the public. Why is the press, along with government and the IT industry, not doing more to educate the public on how to tell real from fake?



How Delegation Privileges Are Represented In Active Directory

One of the last areas where more tool support is needed is in monitoring the various attributes in Active Directory (AD). Recently I got curious about the delegation flags, and, more to the point, how to tell which accounts have been trusted for delegation. This could be of great import if, for instance, you have to produce reports of privileged accounts.


KB 305144 gives a certain amount of detail about how delegation rights are presented in Active Directory. However, it is unclear from that article how to discover accounts trusted for full delegation, as opposed to those trusted only for constrained delegation; and the various flags with "DELEGATION" in them are not as clearly explained as I would like. Nor was I able to glean any insight into this from the various security guides and recommendations for Windows. I asked around, and got great answers from Ken Schaefer. By spinning up a Windows Server 2003 Domain Controller in Amazon EC2 and running a few tests, I was able to verify that Ken was indeed correct.


Delegation rights are represented in the userAccountControl attribute on the account object in AD, whether a user or a computer account. There are a couple of different flags involved, however. Here are the values set in various circumstances:


For a computer account, the default userAccountControl value is 0x1020, which is the combination of the WORKSTATION_TRUST_ACCOUNT (0x1000) and PASSWD_NOTREQD (0x20) flags. A user account is set to 0x200 (NORMAL_ACCOUNT) by default.


When you enable full delegation, 0x80000, or TRUSTED_FOR_DELEGATION, gets ORed into the userAccountControl value. This is irrespective of domain functional level. In other words, checking the "Trusted for delegation" box in a Windows 2000 compatible domain, and checking "Trust this computer for delegation to any service" with the "Kerberos only" setting at higher functional levels, both result in the same flag being set. The same flag is set on user accounts when you check the "Account is trusted for delegation" checkbox.
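
To make the bit arithmetic concrete, here is a small Python illustration. The constants are the documented userAccountControl bit values; the rest is plain arithmetic.

# Documented userAccountControl bit values (see KB 305144).
PASSWD_NOTREQD            = 0x0000020
NORMAL_ACCOUNT            = 0x0000200
WORKSTATION_TRUST_ACCOUNT = 0x0001000
TRUSTED_FOR_DELEGATION    = 0x0080000

# The computer account default, 0x1020, is the OR of two flags.
assert WORKSTATION_TRUST_ACCOUNT | PASSWD_NOTREQD == 0x1020

# Enabling full delegation ORs in TRUSTED_FOR_DELEGATION.
uac = 0x1020 | TRUSTED_FOR_DELEGATION
print(hex(uac))                            # 0x81020
print(bool(uac & TRUSTED_FOR_DELEGATION))  # True: trusted for delegation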


In a Windows Server 2003 or higher functional level domain, you gain the ability to trust an account for delegation only to specific services: constrained delegation. If you configure constrained delegation using Kerberos only, the userAccountControl value is not changed at all. The account simply gets a list of services it can delegate to in the msDS-AllowedToDelegateTo attribute.


However, if you configure constrained delegation using any protocol, the userAccountControl value gets ORed with 0x1000000, or TRUSTED_TO_AUTH_FOR_DELEGATION.


There is also a flag in userAccountControl called NOT_DELEGATED. This flag is set when you check the box "Account is sensitive and cannot be delegated."


This tie-back to the graphical user interface, as well as the explanation of the various flags, should help an auditor construct a query that lists all accounts trusted for delegation in an arbitrary domain. Obviously, any account with TRUSTED_FOR_DELEGATION set should be considered extremely sensitive; as sensitive as a Domain Controller or an Enterprise Admin account. An account with TRUSTED_TO_AUTH_FOR_DELEGATION set is probably less sensitive, depending on which specific services it can connect to, but still quite sensitive, as it can use protocols other than Kerberos. Finally, the least sensitive of the accounts trusted for some form of delegation are those that are only permitted to delegate to specific services using Kerberos.
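
As a starting point for such a query, the standard LDAP bitwise-AND matching rule (OID 1.2.840.113556.1.4.803) can express all three categories. The following Python sketch just builds and prints the filters; how you run them (ldapsearch, PowerShell, or your audit tool of choice) is up to you.

# Build LDAP filters that list accounts by delegation category.
# 1.2.840.113556.1.4.803 is the bitwise-AND matching rule; the flag
# values are the userAccountControl bits discussed above, in decimal.
BIT_AND = "1.2.840.113556.1.4.803"
TRUSTED_FOR_DELEGATION         = 0x0080000  # 524288
TRUSTED_TO_AUTH_FOR_DELEGATION = 0x1000000  # 16777216

def uac_bit(flag: int) -> str:
    return f"(userAccountControl:{BIT_AND}:={flag})"

# Most sensitive: full (unconstrained) delegation.
print(uac_bit(TRUSTED_FOR_DELEGATION))

# Next: constrained delegation with "use any protocol".
print(uac_bit(TRUSTED_TO_AUTH_FOR_DELEGATION))

# Least sensitive: constrained, Kerberos-only delegation. No flag is
# set, so look for a populated msDS-AllowedToDelegateTo instead.
print("(&(msDS-AllowedToDelegateTo=*)"
      f"(!{uac_bit(TRUSTED_TO_AUTH_FOR_DELEGATION)}))")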

Web Of Trust: RIP

It's official. I just received an e-mail from Thawte notifying me that, as of November 16, 2009, the most innovative and useful idea in PKI since its inception, the Web of Trust, will die.


Thawte was founded 14 years ago by Mark Shuttleworth. The primary purpose was to get around the then-current U.S. export restrictions on cryptography. Shuttleworth also had an idea that drew from PGP: rather than force everyone who wanted an e-mail certificate to get verified by some central entity – and pay for the privilege – why not have them verified by a distributed verification system, similar to the key-signing system used by PGP, but more controlled? This was the Web of Trust. Anyone could get a free e-mail certificate, but to get your name in it instead of the default "Thawte FreeMail User" you had to get "notarized" by at least two people (or one, if you managed to meet Shuttleworth himself or one of a few select others). The Web of Trust was a point-based system, and if you received 100 points (requiring at least three notary signatures) you became a notary yourself. The really cool idea was that it created a manageable system of trust based not so much on the six degrees of separation as on the fact that most of us are inherently trustworthy beings.


In 1999 Shuttleworth sold Thawte to Verisign for enough money to take a joyride into space, found the Ubuntu project, and live without worries about money for the rest of his own life and those of several of his descendants. Verisign, of course, is in the business of printing money, only in the form of digital certificates, and certainly not in the business of giving anything away for free. Not that there is anything inherently wrong with that, but it is certainly at odds with Thawte's free service, so it was really just a matter of time before the latter was disbanded. With it goes the Web of Trust.


Finally, on November 16, 2009, the Web of Trust will be removed as a free competitor to Verisign's paid service that does the same thing. It will be a sad day indeed.

Passwords are here to stay

At least for the short to medium term. That is the quite obvious conclusion drawn in a Newsweek article entitled "Building a Better Password." The article goes inside the CyLab at Carnegie Mellon University to understand how passwords may one day be replaced. It is interesting reading all around.


The article is not without some "really?" moments, though, such as this quote:


The idea of passphrases isn't new. But no one has ever told you about it, because over the years, complexity—mandating a mix of letters, numbers, and punctuation that AT&T researcher William Cheswick derides as "eye-of-newt, witches'-brew password fascism"—somehow became the sole determinant of password strength.


Actually, I do believe someone did tell you about it. Five years ago now, in fact.
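
Since the passphrase argument is ultimately arithmetic, here is the back-of-the-envelope version in Python. The numbers are illustrative assumptions, not anyone's official methodology: 95 is the count of printable ASCII characters, and 7,776 is the conventional Diceware word-list size.

import math

# A "complex" 8-character password drawn from all printable ASCII.
complex_bits = 8 * math.log2(95)       # ~52.6 bits

# A 5-word passphrase drawn uniformly from a 7,776-word list.
passphrase_bits = 5 * math.log2(7776)  # ~64.6 bits

print(f"8-character complex password: {complex_bits:.1f} bits")
print(f"5-word passphrase:            {passphrase_bits:.1f} bits")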