HTML Help in MFC

I recently got around to converting an old MFC project from WinHelp format to HTML Help. Mostly this was to satisfy customers who are using Windows Vista or Windows Server 2008, but who don’t want to install WinHlp32 from Microsoft. (If you do want to install WinHlp32, you can find it for Windows Vista or Windows Server 2008 at Microsoft’s download site.)

Here’s a quick run-through of how I did it:

1. Convert the help file – yeah, this is the hard part, but there are plenty of tools, including Microsoft’s HTML Help Workshop, that will do the job for you. Editing the help file in HTML format can be a bit of a challenge, too, but much of the time your favourite HTML editor can be made to do the job for you.

2. Call EnableHtmlHelp() from the CWinApp-derived class’ constructor.
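
For context, here’s a minimal sketch of step 2, using this article’s CWftpdApp class (substitute your own CWinApp-derived class):

CWftpdApp::CWftpdApp()
{
    // Tell MFC to drive Help through the HTML Help (.chm) engine
    // rather than the old WinHelp (.hlp) engine.
    EnableHtmlHelp();
}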

3. Remove the line ON_COMMAND(ID_HELP_USING, CWinApp::OnHelpUsing), if you have it – there is no HELP_HELPONHELP topic in HTML Help.

4. Add the following function:

void CWftpdApp::HelpKeyWord(LPCTSTR sKeyword)
{
    // Requires #include <htmlhelp.h> for HH_AKLINK and HH_KEYWORD_LOOKUP.
    HH_AKLINK akLink;
    CString sMsgText;
    switch (GetHelpMode())
    {
    case afxHTMLHelp:
        // Keep the message text in a CString that outlives the HtmlHelp call,
        // rather than pointing pszMsgText at a temporary.
        sMsgText = CString(_T("Failed to find information in the help file on ")) + sKeyword;
        akLink.cbStruct = sizeof(HH_AKLINK);
        akLink.fReserved = FALSE;
        akLink.fIndexOnFail = TRUE;
        akLink.pszKeywords = sKeyword;
        akLink.pszUrl = NULL;
        akLink.pszMsgText = sMsgText;
        akLink.pszMsgTitle = _T("HTML Help Error");
        akLink.pszWindow = NULL;
        AfxGetApp()->HtmlHelp((DWORD_PTR)&akLink, HH_KEYWORD_LOOKUP);
        break;
    case afxWinHelp:
        AfxGetApp()->WinHelp((DWORD_PTR)sKeyword, HELP_KEY);
        break;
    }
}

5. Change your keyword help calls to call this new function:

AfxGetApp()->WinHelp((DWORD_PTR)(char *)"Registering", HELP_KEY);

Becomes:

((CWftpdApp *)AfxGetApp())->HelpKeyWord(_T("Registering"));

6. If you want to trace calls to the WinHelp function to watch what contexts are being created, trap WinHelpInternal:

void CWftpdApp::WinHelpInternal(DWORD_PTR dwData, UINT nCmd)
{
    TRACE(_T("Executing WinHelp with Cmd=%u, dwData=%Iu (%Ix)\r\n"), nCmd, dwData, dwData);
    CWinApp::WinHelpInternal(dwData, nCmd);
}

This trace comes in really, really (and I mean REALLY) handy when you are trying to debug “Failed to load help” errors. It will tell you what numeric ID is being used, and you can compare that to your ALIAS file.

7. If your code gives a dialog box that reads:

—————————
HTML Help Author Message
—————————
HH_HELP_CONTEXT called without a [MAP] section.
—————————
OK
—————————


What it means is that the HTML Help API could not find the [MAP] or [ALIAS] section in your help project (.HHP) file. Note that the message still appears if you have a [MAP] section but no [ALIAS] section.

8. Don’t edit the [ALIAS] or [MAP] sections of your help project in HTML Help Workshop – it has a long-standing bug that makes it crash unpredictably (losing much of your unsaved work, of course) when editing these sections. Edit the .HHP file by hand to work on them.

9. Most of your [MAP] section entries are generated automatically by the build, as .HM files holding a help-context macro for each control and dialog (MFC projects run makehm to produce them). Include the right .HM file from the [MAP] section, and all you need to write by hand are the [ALIAS] mappings – an illustrative snippet follows.
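
To illustrate, a [MAP]/[ALIAS] pair in the .HHP file might look something like this (the .HM file name, symbols and topic names here are invented for the example):

[MAP]
#include <wftpd.hm>

[ALIAS]
HIDD_REGISTER_DIALOG=registering.htm
HIDC_USER_NAME=user_accounts.htm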

10. The MFC wrappers around HtmlHelp discard the function’s return value, so there’s not much to go on when you’re debugging failures to reach particular help topics. A sketch of calling the API directly, so you at least get something back, follows.
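
One option is to bypass the MFC wrapper and call the HTML Help API yourself, which at least returns a window handle you can test. A sketch only – the .chm name and context ID are placeholders, and TRACE assumes you’re inside the MFC project:

#include <htmlhelp.h>                  // HTML Help API
#pragma comment(lib, "htmlhelp.lib")   // link against the API library

void ShowContextHelp(HWND hwndCaller, DWORD dwContextId)
{
    // ::HtmlHelp() returns the help window handle, or NULL on failure -
    // unlike CWinApp::HtmlHelp(), which throws the result away.
    HWND hwndHelp = ::HtmlHelp(hwndCaller, _T("WFTPD.chm"),
                               HH_HELP_CONTEXT, (DWORD_PTR)dwContextId);
    if (hwndHelp == NULL)
        TRACE(_T("HtmlHelp failed for context id %u\r\n"), dwContextId);
}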

Let me know if any of these helpful hints prove to be of use to you, or if you need any further clarification.

Kaminsky Black-Hat Webcast: "By Any Other Name: DNS has doomed us all."

By any other name... Okay, so the talk’s official title was “Dan Kaminsky’s DNS Discovery: The Massive, Multi-Vendor Issue and the Massive, Multi-Vendor Fix”.

Arcane details of TCP are something of a hobby of mine, so I attended the webcast to see what Dan had to say.

The Past is Prologue

A little history first – six months ago, Dan Kaminsky found something so horrifying in the bowels of DNS that he actually kept quiet about it. He contacted DNS vendors – OS manufacturers, router developers, BIND authors, and the like – and brought them all together in a soundproofed room on the Microsoft campus to tell them all about what he’d discovered.

Everyone was sworn to secrecy, and consensus was reached that the best way to fix the problem would be to give vendors six months to release a coordinated set of patches, and then Dan Kaminsky would tell us all at BlackHat what he’d found.

Until then, he asked the security community, don’t guess in public, and don’t release the information if you know it.

Now is the winter of our DNS content (A records and the like)

Fast forward a few months, and we have a patch. I don’t think the patch was reverse-engineered, but there was enough public guessing going on that someone accidentally slipped and leaked the information – now the whole world knows.

Kaminsky confirmed this in today’s webcast, detailing how the attack works to forge the address of www.example.com:

  1. Attacker persuades victim to ask for 1.example.com
  2. Victim’s DNS server queries for an A record for 1.example.com
  3. Attacker forges a response that says “I don’t know 1.example.com, but the DNS server at www.example.com knows, and it’s at 1.2.3.4”
  4. Victim’s DNS server accepts this response, queries 1.2.3.4 for 1.example.com, and now has www.example.com cached at 1.2.3.4 – so the victim can be steered to a server the attacker controls, allowing the attacker to steal cookies, pose as a trusted web site, etc, etc.

Note that this is a simple description of the new behavior that Kaminsky found – step 3 allows the DNS server’s cache to be poisoned with a mapping for www.example.com to 1.2.3.4, even if it was already cached from a previously successful search.

If that was all that Kaminsky could do, even on an unpatched server, he’d have a 1 in 65,536 chance (the transaction ID is a 16-bit value) of guessing the transaction ID to make his forgery succeed. However, old known behaviours simply make it easier for the attacker to make the forgery work:

  1. Because the attacker tells the victim to search for a site, the attacker controls when the race with the authoritative DNS server starts.
  2. The attacker can tell the victim to search several times, and can forge several possible responses, using the birthday paradox to be more likely to guess the transaction ID (and source port), so that his forged response is accepted – a rough calculation of those odds follows this list.
  3. Because this attack overwrites cached entries, the attacker can try again and again (picture a site with a million 1-pixel images each causing a different DNS query) until he is successful. Stuffing the cache won’t protect you.
  4. The attacker can insert an obscenely huge TTL (time-to-live) on the faked entry, so that it remains in cache until the DNS service is flushed or restarted.
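
To put rough numbers on that (my own back-of-the-envelope arithmetic, not Kaminsky’s figures), here’s how the odds stack up against an unpatched server, assuming only the 16-bit transaction ID has to be guessed:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double ids = 65536.0;                      // 16-bit transaction ID space
    const int forgeries[] = { 1, 100, 10000, 260000 };

    // With many outstanding queries each met by many forged responses,
    // roughly: P(poisoned) ~= 1 - (1 - 1/ids)^(total forged responses)
    for (int i = 0; i < 4; ++i)
    {
        double p = 1.0 - pow(1.0 - 1.0 / ids, (double)forgeries[i]);
        printf("%6d forged responses -> %4.1f%% chance of poisoning\n",
               forgeries[i], p * 100.0);
    }
    return 0;
}

(The 260,000 figure isn’t arbitrary – it matches the metasploit run quoted below.)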

Kaminsky’s tests indicate that a DNS server’s cache can be poisoned in this way in under ten seconds. There are metasploit plugins that ‘demonstrate’ this  (or, as with all things metasploit, can be used to exploit systems).

The patch, by randomizing the source port of the DNS resolver, raises the difficulty of this attack by a few orders of magnitude.

The long-term fix, Kaminsky said, is to push for the implementation of DNSSEC, a cryptographically-signed DNS system, wherein you refuse to pass on or accept information that isn’t signed by the authoritative host.

A port, a port, my domain for a port

One novel wrinkle that Kaminsky hadn’t anticipated is that even after application of the patch to DNS servers, some NATs apparently remove the randomness in the source port that was added to make the attack harder. To quote Kaminsky “whoops, sorry Linksys” (although Cisco was one of the companies he notified of the DNS flaw, and they now own Linksys). Such de-randomising NATs essentially remove the usefulness of the patch.

Patching is not completely without its flaws, however – Kaminsky didn’t mention some of the issues that have been occurring because of these patches:

  1. ZoneAlarm decided that DNS queries from random source ports must be a sign of attack, and denied all such queries, essentially disconnecting the Internet from users of ZoneAlarm. I guess I can learn to live with that.
  2. BIND doesn’t check when binding to a random port to see if that port is already in use – as a result, when the named server sends out a DNS query, there’s a chance the response packet will come back to a service that isn’t expecting it. Because the outgoing query punches a return hole in most firewalls, this could mean that a service blocked by the firewall from receiving Internet traffic is now opened up to the Internet. The workaround is to set the avoid-v4-udp-ports configuration setting, listing any ports that named shouldn’t use – an example follows this list.
  3. Windows’ DNS service takes a different tack, binding to 2500 (the number is configurable) random ports on startup. As with BIND, these ports might conflict with other services; different from BIND, however, is the behavior – since the ports are already bound by the DNS server, those other services (starting later than DNS, because most IP components require it) are now unable to bind to that port. As with BIND, the workaround is to tell the DNS server which ports not to use. The registry entry ReservedPorts will do this.
  4. Users are being advised to point their DNS server entries to OpenDNS. Single point of failure, anyone?
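
For the BIND workaround in item 2, the configuration looks something like this in named.conf (the port numbers are only examples – list whichever UDP ports your other daemons own):

options {
    // keep named's randomly-chosen source ports away from ports
    // that other local services need
    avoid-v4-udp-ports { 500; 1701; 4500; };
};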

Metrics and statistics:

  1. When Kaminsky’s vulnerability detection tool was first made available at doxpara.com, 80+% of all checks indicated that the DNS server was vulnerable. This last week, 52% of all checks showed vulnerable servers. Patches are getting installed.
  2. The attack is noisy – output from the metasploit framework showed “poisoning successful after 13250 attempts” – that’s thirteen thousand DNS queries and 260,000 forged DNS responses. IDS and IPS tools should have signatures for this attack, and may be able to repel boarders.
  3. Metasploit exploits for this are at http://www.caughq.org/exploits/CAU-EX-2008-0003.txt if you want to research it further.

Tomorrow, and tomorrow, and tomorrow…

The overall message of the webcast is this:

This attack is real, and traditional defences such as a high TTL will not protect you. Patching is the way to go. If you can’t patch, configure those unpatched DNS servers to forward to a new, patched local DNS server, or to an external patched server like OpenDNS. Scan your site for unexpected DNS servers.

Whoops – Information Wanted to be Free Again.

Picture the scene at Security Blogs R Us:

“We’re so freakin’ clever, we’ve figured out Dan Kaminsky’s DNS vulnerability”

“Yeah, but what if someone else figures it out – won’t we look stupid if we post second to them?”

“You’re right – but we gave Dan our word we wouldn’t publish.”

“So we won’t publish, but we’ll have a blog article ready to go if someone else spills the beans, so that we can prove that we knew all about it anyway.”

“Yeah, but we’d better be careful not to publish it accidentally.”

>>WHOOP, WHOOP, WHOOP<<

“What was that?”

“The blog alert – someone else is beating us to the punch as we speak.”

“Publish or perish! Damn the torpedoes – false beard ahead!”

“What? Are you downloading those dodgy foreign-dubbed pirated anime series off BitTorrent through the company network again?”

“Yes – I found a way around your filters.”

“Good man.”


It’s true (okay, except for all of the made-up dialog above), a blog at one of the security vulnerability research crews (ahem, Matasano) did the unthinkable and rushed a blog entry out on the basis that they thought someone else (ahem, Halvar Flake) was beating them to it. And now we all know. The genie is out of the bag, the cat has been spilled, and the beans are out of the bottle.

Now we all know how to spoof DNS.

Okay, so Matasano pulled the blog pretty quickly, but by then it had already been copied to server upon server, and some of those copies are held by people who don’t want to take the information off the Internet.

Clearly, Information Wants To Be Free.


There’s an expression I never quite got the hang of – “Information Wants To Be Free”, cry the free software guys (who believe that software is information, rather than expression, which is a different argument entirely) – and the sole argument they have for this is that once information is freed, it’s impossible to unfree it. A secret once told is no longer a secret.

There’s an allusion to the way in which liquid ‘wants to be at its lowest level’ (unless it’s liquid helium, which tends to climb up the sides of the beaker when you’re not looking), in that if you can’t easily put something back to where it used to be, then where it used to be is not where it wants to be.

So, information wants to be free, and Richard Stallman’s bicycle tyre wants to have a puncture.


But back to the DNS issue.

I can immediately think of only one extra piece of advice I’d have given to the teams patching this, on top of what I said in my previous blog – and in testing, I found the Windows Server 2003 DNS server was doing that anyway.

So, that’s alright then.

Well, not entirely – I do have some minor misgivings that I hope I’ve raised to the right people.

But in answer to something that was asked on the newsgroups, no I don’t think you should hold off patching – the patch has some manual elements to it, in that you have to make sure the DNS server doesn’t impinge on your existing UDP services (and most of you won’t have that many), but patching is really a whole lot better than the situation you could find yourself in if you don’t patch.

And Dan, if you’re reading this – hi – great job in getting the big players to all work together, and quite frankly, the secrecy lasted longer than I expected it to. Good job, and thanks for trying to let us all get ourselves patched before your moment of glory at BlackHat.

DNS Server Reserves 2500 Ports.

After applying the patch for MS08-037 (KB 953230) – the multi-OS DNS flaw found by Dan Kaminsky – you may notice your Windows Server 2003 machine gets a little greedy. At least, mine sucks up 2500 – yes, that’s two thousand five hundred – UDP sockets, which sit there apparently waiting for incoming packets.

[Screenshot: output of the 'netstat -bona -p udp' command, showing ports bound by DNS.EXE]

This is, apparently, one of those behaviours sure to be listed in the knowledge base as “this behavior is by design” – a description that graces some of the more entertaining elements of the Microsoft KB.

Why does this happen? I can only guess. But here’s my best guess.

The fix to DNS, implemented across multiple platforms, was to decrease the chance of an attacker faking a DNS response, by increasing the randomness in each DNS request that has to be copied back in the response.

I don’t know how this was implemented on other platforms, but I do know that it’s already been reported that BIND’s implementation is slower than it used to be (hardly a surprise, making random numbers is always slower than simply counting up) – and maybe that’s what Microsoft tried to forestall in the way that they create the random sockets.

Instead of creating a socket and binding it to a random source port at the time of the request, Microsoft’s patched DNS creates 2500 sockets, each bound to a random source port, at the time that the DNS service is started up. This way, perhaps they’re avoiding the performance hit that BIND has been criticised for.

There are, of course, other services that also use a UDP port. ActiveSync’s connection to Exchange, IPsec, IAS, etc, etc. Are they affected?

Sometimes.

Randomly, and without warning or predictability. Because hey, the DNS server is picking ports randomly and unpredictably.

[Workaround: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\ReservedPorts is a registry setting that lists multiple port ranges that will not be used when binding an ephemeral socket. The DNS server will obey these reservations, and not bind a socket to ports specified in this list. More explanation in the blog linked above, or at http://support.microsoft.com/kb/812873]
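
As an illustration of that registry setting (the port ranges here are invented, each entry is written as a range even for a single port, and I believe a reboot is needed before the reservation takes effect):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v ReservedPorts /t REG_MULTI_SZ /s , /d "1701-1701,4500-4500" /f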

DNS, you see, is a fundamental underpinning of TCP/IP services, and as such needs to start up before most other TCP/IP based services. So if it picks the port you want, it gets first pick, and it holds onto that port, preventing your application from binding to it.

This just doesn’t seem like a fix written by someone who ‘gets’ TCP/IP. Perhaps I’m missing something that explains why the DNS server in Windows Server 2003 works this way, but I would be inclined to take the performance hit of binding and rebinding in order to find an unused random port number, rather than binding before everyone else in an attempt to pre-empt other applications’ need for a port.
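
To make that concrete, here’s the sort of bind-and-retry loop I have in mind – a sketch only, assuming WSAStartup has already been called, and not what Microsoft’s DNS service actually does:

#include <winsock2.h>
#include <stdlib.h>
#pragma comment(lib, "ws2_32.lib")

// Bind a UDP socket to a random ephemeral port just before sending a query,
// retrying if another service already owns the port we picked.
SOCKET BindRandomUdpPort(void)
{
    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s == INVALID_SOCKET)
        return INVALID_SOCKET;

    for (int attempt = 0; attempt < 16; ++attempt)
    {
        sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        // rand() is only for illustration - a real resolver would use a
        // cryptographically strong source of randomness here.
        addr.sin_port = htons((u_short)(49152 + rand() % 16384)); // 49152-65535
        if (bind(s, (sockaddr *)&addr, sizeof(addr)) == 0)
            return s;                 // the port was free - use it, then close it
        // WSAEADDRINUSE: someone else got there first; pick another port
    }
    closesocket(s);
    return INVALID_SOCKET;
}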

There are a couple of reasons I say this:

  1. Seriously, how many Windows Server 2003 users out there have such a high-capacity DNS server that they will notice the performance hit?
  2. Most Windows Server 2003-based DNS servers are small caching servers for businesses, rather than Internet infrastructure servers responsible for huge numbers of requests per second – even if you implement this port-stealing method, it shouldn’t be the default, because the majority of users just don’t need that performance.
  3. If you do need the performance, get another server to handle incoming requests. Because the cost of having your DNS server’s cache poisoned is considerably greater than the cost of increasing the number of servers in your pool, if you’re providing major DNS service to that many customers.
  4. A major DNS service provider will be running fewer services that would pre-empt a DNS server request to bind to a random port, whereas systems running several UDP-based services are going to need less performance on their outgoing DNS requests.

I’d love to know if I’m missing something here, but I really hope that Microsoft produces a new version of the DNS patch soon, that doesn’t fill your netstat -a output with so many bound and idle sockets, each of which takes up a small piece of nonpaged pool memory (that means real memory, not virtual memory).

Vistafy Me.

I have a little time over the next couple of weeks to devote to developing WFTPD a little further.

This is a good thing, as it’s way past time that I brought it into Vista’s world.

I’ve been very proud that over the last several years, I have never had to re-write my code to make it work on a new version of Windows. Unlike many developers, I can run my software on each new version of Windows without changes, and get the same functionality.

The same is not true of developers who like to use undocumented features, because those are generally the features that die in new releases and service packs. After all, since they’re undocumented, nobody should be using them, right? No, seriously, you shouldn’t be using those undocumented features.

So, WFTPD and WFTPD Pro run in Windows Vista and Windows Server 2008.

But that’s not enough. With each new version of Windows, there are better ways of doing things and new features to exploit. With Windows Vista and Windows Server 2008, there are also a few deprecated older behaviours that I can see are holding WFTPD and WFTPD Pro down.

I’m creating a plan to “Vistafy” these programs, so that they’ll continue to be relevant and current.

Here’s my list of significant changes to make over the next couple of weeks:

  1. Convert the Help file from WinHelp to HTML Help.
    • WinHelp is not supported in Vista – you can download the WinHelp viewer (WinHlp32), but it’s far better to support the one format of Help file that Windows now ships with. So, I’m converting from WinHelp to HTML Help.
  2. Changing the Control Panel Applet for WFTPD Pro.
    • CPL files still work in Windows Vista, but they’re considered ‘old’, and there’s an ugly user experience when it comes to making them elevate (run as administrator).
    • There are two or three ways to go here –
      1. one is to create an EXE wrapper that calls the old CPL file. That’s fairly cheap, and will probably be the first version.
      2. Another is to write an MMC plugin. That’s a fair amount of work, and requires some thought and design. That’s going to take more than a couple of weeks.
      3. A third option is to create some form of web-based interface. I don’t want to go that way, because I don’t want to require my users to install IIS or some other web server.
    • So, at first blush, the plan is to wrap the existing interface, and then to investigate what an MMC snap-in should look like.
  3. Support for IPv6.
    • I already have this implemented in a trial version, but have yet to fully wire it up to a user interface that I’m willing to unleash on the world. So that’s on the cards for the next release.
  4. Multiple languages
    • There are two elements to support for multiple languages in FTP:
      1. File names in non-Latin character sets
      2. Text messages in languages other than English
    • The first, file names in different character sets, will be achieved sooner than the second. If the second ever occurs, it will be because customers are sufficiently interested to ask me specifically to do it.
  5. SSL Client Certificate authentication
    • SSL Client Certificate Auth has been in place for years – it’s a secret feature. The IIS guys warned me off developing it, saying “that’s really hard, don’t try and do anything with client certs”.
    • I didn’t have the heart to tell them I had the feature working already (but without an interface), and that it simply required a little patience.
  6. Install under Local Service and Network Service accounts
  7. Build in Visual Studio 2008, to get maximum protection using new compiler features.
    • /analyze, Address Space Layout Randomisation, SAL – all designed to catch my occasional mistakes. A tiny example of the SAL side follows this list.
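
For the SAL part of that, here’s the kind of annotation I mean – the function and buffer are invented, and I’ve used the newer attribute spelling (the VS2008-era macros were the __out_ecount style):

#include <sal.h>
#include <string.h>

// The annotation tells /analyze that 'buffer' must have room for
// 'cchBuffer' characters, so callers passing undersized buffers get flagged.
void CopyGreeting(_Out_writes_z_(cchBuffer) char *buffer, size_t cchBuffer)
{
    strncpy_s(buffer, cchBuffer, "Hello from WFTPD", _TRUNCATE);
}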

As I work on each of these items, I’ll be sure to document any interesting behaviours I find along the way. My first article will be on converting your WinHelp-using MFC project to using HTML Help, with minimal changes to your code, and in such a way that you can back-pedal if you have to.

Of course, I also have a couple of side projects – because I’ve been downloading a lot from BBC 7, I’ve been writing a program to store the program titles and descriptions with the MP3 files, so that they show up properly on the MP3 player. ID3Edit – an inspired name – allows me to add descriptions to these files.

Another side-project of mine is an EFS tool. I may use some time to work on that.

UAC – The Emperor’s New Clothes

I heard a complaint the other day about UAC – User Account Control – that was new to me.

Let’s face it, as a Security MVP, I hear a lot of complaints about UAC – not least from my wife, who isn’t happy with the idea that she can be logged on as an administrator, but she isn’t really an administrator until she specifically asks to be an administrator, and then specifically approves her request to become an administrator.

My wife is the kind of user that UAC was not written for. She’s a capable administrator (our home domain has redundant DCs, DHCP servers with non-overlapping scopes, and I could go on and on), and she doesn’t make the sort of mistakes that UAC is supposed to protect users from.

My wife also does not appreciate the sense that Microsoft is using its users as a fulcrum, levering developers into writing code for non-admin users. She doesn’t believe the vendors will change as a result; the only effect, she says, will be that users get annoyed.

But not me.

I like UAC – I think it’s great that developers are finally being forced to think about how their software should work in the world of least privilege.

So, as you can imagine, I thought I’d heard just about every last complaint there is about UAC. But then a new one arrived in my inbox from a friend I’ll call Chris.

“Why should I pretend to be different people to use my own PC?”

I must admit, the question stunned me.

Obviously, what Chris is talking about is the idea that you are strongly “encouraged” (or “strong-armed”, if you prefer) by UAC to work in (at least) two different security contexts – the first, your regular user context, and the second, your administrator context.

Chris has a point – you’re one person, you shouldn’t have to pretend to be two. And it’s your computer, it should do what you tell it to. Those two are axiomatic, and I’m not about to argue with them – but it sounds like I should do, if I’m going to answer his question while still loving UAC.

No, I’m going to argue with his basic premise that user accounts correspond to individual people. They correspond more accurately – particularly in UAC – to clothing.

Windows before NT – or more accurately, Windows not based on the NT line – had no separation between user contexts / accounts. Even the logon was a joke: it prompted for a user name and password, but if you hit Escape instead, you’d be logged on anyway. Windows 9x and ME, then, were the equivalent of being naked.

In Windows NT, and the versions derived from it, user contexts are separated from one another by a software wall, a “Security Boundary”. There were a couple of different levels of user access, the most common distinctions being between a Standard (or “Restricted”) User, a Power User, and an Administrator.

Most people want to be the Administrator. That’s the account with all the power, after all. And if they don’t want to be the Administrator, they’d like to be at least an administrator. There’s not really much difference between the two, but there’s a lot of difference between them and a Standard User.

Standard Users can’t set the clock back, they can’t clear logs out, they can’t do any number of things that might erase their tracks. Standard Users can’t install software for everyone on the system, they can’t update the operating system or its global settings, and they can’t run the Thomas the Tank Engine Print Studio. [One of those is a problem that needs fixing.]

So, really, a Standard User is much like the driver of a car, and an administrator is rather like the mechanic. I’ve often appealed to a different meme, and suggested that the administrator privilege should be called “janitor”, so as to make it less appealing – it really is all about being given the keys to the boiler room and the trash compactor.

It’s about wearing dungarees rather than your business suit.

You wear dungarees when working on the engine of your car, partly because you don’t want oil drops on your white shirt, but also partly so your tie doesn’t get wrapped around the spinning transmission and throttle you. You don’t wear the dungarees to work partly because you’d lose respect for the way you look, but also because you don’t want to spread that oil and grease around the office.

It’s not about pretending to be different people, it’s about wearing clothes suited to the task. An administrator account gives you carte blanche to mess with the system, and should only be used when you’re messing with the system (and under the assumption that you know what you’re doing!); a Standard User account prevents you from doing a lot of things, but the things you’re prevented from doing are basically those things that most users don’t actually have any need to do.

You’re not pretending to be a different person, you’re pretending to be a system administrator, rather than a user. Just like when I pretend to be a mechanic or a gardener, I put on my scungy jeans and stained and torn shirts, and when I pretend to be an employee, I dress a little smarter than that.

When you’re acting as a user, you should have user privileges, and when you’re acting as an administrator, you should have administrative privileges. We’ve gotten so used to wearing our dungarees to the board-room that we think they’re a business suit.

So while UAC prompts to provide a user account aren’t right for my wife (she’s in ‘dungarees-mode’ when it comes to computers), for most users, they’re a way to remind you that you’re about to enter the janitor’s secret domain.

Why you don’t run as root

[… or administrator, or whatever]

I like Roger Grimes, he’s a nice guy, and he generally makes me think about what he has to say. That’s a good thing, because otherwise he’d either be part of the same choir as me, or he’d be the sort of guy whose ideas I dismiss with a wave of the paw and a barely audible “Pah.”

Today, though, I think he’s missing something fundamental – and perhaps you are too.

He writes in the InfoWorld Security Adviser column that “UAC will not work”, on the simple basis that malware can still do all the things it wants to do without having to execute under a privileged account.

That’s true, and it always will be – the day that a computer can see my attempt to “delete the Johnson account, and forward that instruction to the following addresses”, and determine whether it’s malicious or appropriate, is the day when the computer can do the whole job for me, by simply choosing all possible actions and seeing which are malicious and which are appropriate.

However, what I can rely on, if the malware has been held out of privileged accounts, is the integrity of the system, and (unless they were prone to activating the same malware) the other users on that system. [By system, I may mean one machine or several networked together to perform a function.]

So while it’s true that the old cross-platform virus “forward this message to everyone in your address book, then delete all your data” is still going to function if the user stays out of administrator roles, at least the operation of the system can be restored, as well as whatever data has been backed up.

You don’t run as a restricted user to prevent viruses from happening – you run as a restricted user to prevent viruses from happening to the people and systems with whom you work. You run as a restricted user, so that when some system falls over, you can say “it couldn’t possibly have been me”. You run as a restricted user because if there is a bug in the program you run, its effects will be limited to only that portion of the OS and its data to which you are restricted.

Sure, least privilege is somewhat of an artificial construct – but the alternative is that users get more privileges than they need. That quickly boils down to “everyone can do anything”.

I’ve been on that kind of a network before, and when we found one guy’s stash of truly offensive porn (this wasn’t the occasional Rubens painting) on the server, we had no way of finding out who it was, let alone punishing them by firing them. The company I worked for was fortunate that whoever found it didn’t sue for fostering the creation of a hostile workplace.

So, no, UAC won’t stop malware – but then that’s not its purpose. It’s purely a beneficial, incidental, and temporary side-effect that it will stop much of today’s malware.

Is a NAT a security device?

I’ve been working lately on a couple of IPv6-related projects. First, there’s a chapter for an upcoming book, and second, there’s the effort to make WFTPD and WFTPD Pro work on IPv6, since it’s enabled by default in Windows Vista and Windows Server 2008 [more on that in a future post].

A big argument to my mind, as an old-school Internet user, for enabling IPv6 is that every one of your hosts becomes a fully-fledged Internet participant, like it used to be with IPv4 back in the ’90s.

What do I mean by that?

I mean that every machine is reachable at its own address on every port that it chooses to open, rather than requiring someone to tinker with a NAT to open port mappings for specific applications.

IPv6 removes the need for a NAT at all.

Wow. To a security professional, that’s a shocking statement. It feels rather like saying that living in a tent removes the need for locks. How on earth do you protect your stuff without a NAT?

The answer is that a NAT was never intended to be a security device – it just happened, somewhat accidentally, that requiring address translation and port mapping to be statically configured created a security barrier.

Unfortunately, NATs also killed a lot of protocols that quote IP addresses in their traffic – H.323 for webcams, FTP for file transfers (particularly when secured), IPsec.

To some extent this was fixed with ALGs – Application Layer Gateways – but never very satisfactorily (particularly in the case of secured FTP). What would be far better is to have a device that had the blocking advantages of a NAT, but didn’t require IP addresses and ports to be altered in transit.

There’s a name for such a device:

A firewall.

[Only if the firewall is configured by default to list all ports as “closed”. An open-by-default firewall is not a firewall, it’s a router.]

And a firewall is a far simpler program than a NAT (even if it’s in hardware, it’s the program’s simplicity that matters most). If it matches incoming traffic to ports that are opened, it allows that traffic in. If outgoing traffic occurs on a port that was closed, the firewall usually opens that port for the reverse traffic, so that clients on the inside of the firewall can get a response.
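
The whole of that logic fits in a few lines – a toy sketch (type and member names invented) of the state a simple firewall keeps, nothing more:

#include <set>
#include <utility>

typedef std::pair<unsigned long, unsigned short> Endpoint; // remote address, remote port

struct SimpleFirewall
{
    std::set<unsigned short> openPorts;  // ports deliberately opened for listening
    std::set<Endpoint>       expected;   // remote endpoints we've already sent to

    void OnOutbound(const Endpoint &remote)
    {
        expected.insert(remote);         // outgoing traffic punches a hole for the reply
    }

    bool AllowInbound(const Endpoint &remote, unsigned short localPort) const
    {
        return openPorts.count(localPort) != 0   // traffic to a port opened on purpose
            || expected.count(remote) != 0;      // or a reply to our own traffic
    }
};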

So, when the time comes that your network is required to transition to IPv6, don’t beg for an IPv6 NAT. I actually hope such a device doesn’t exist, and that nobody’s stupid enough to develop one. What you should insist on is an IPv6 firewall.

“But what about the problem that the layout of my network inside of the firewall will be revealed?” you might ask.

It won’t, because IPv6 addresses are so sparsely allocated that scanning your address range to map out the hosts behind the firewall is impractical.

“How about machines that won’t ever need to be accessed by, or access out to, anything outside my company? What’s the IPv6 equivalent of an RFC 1918 address?”

No problem – there are standards for link-local and Unique Local Unicast (the successor to site-local) addressing, which will never be routed outside of your site.

Any other reasons you’re clinging to the idea that a NAT is a security device?