FTP – Untrustworthy? I Don’t Think So!

Lately, as if writers all draw from the same shrinking paddling-pool of ideas, I’ve noticed a batch of stories about how unsafe, insecure and untrustworthy FTP is.

SC Magazine says so.

First it was an article in the print version of SC Magazine, sadly not repeated online, titled “2 Minutes On… FTP integrity challenged”, by Jim Carr. I tried to reach Jim by email, but his bounce message tells me he doesn’t work for SC Magazine any more.

This article was full of interesting quotes.

“8,700 FTP server credentials were being used to access and infect more than 2,000 legitimate websites in the US”. The article goes on to quote Finjan’s director of security research, who says they were “most likely hijacked by malware” – since most malware can log keystrokes to capture passwords, there’s not much that can be done at the protocol level to protect against this. That isn’t really an indictment of FTP so much as an indication of FTP’s value and ubiquity.

Then we get to a solid criticism of FTP: “The problem with FTP is it transfers data, including authorization credentials, in plain text rather than in encrypted form, says Jeff Debrosse, senior research analyst at security vendor ESET”. Okay, that’s true – but it’s no more an indictment of FTP than of HTTP, which has exactly the same problem.

Towards the end of the article, we return to Finjan’s assertion that malware can steal credentials for FTP sites – and as I’ve mentioned before, malware can get pretty much any user secret, so again, that’s not a problem that a protocol such as FTP – or SFTP, HTTP, SSH, SCP, etc – can fix. There’s a password or a secret key, and once malware is inside the system, it can get those credentials.

Fortunately, the article closes with a quote from Trent Henry, who says “That means FTP is not the real issue as much as it is a server-protection issue.”

OK, but a ZDNet blogger says so, too.

Well, yeah, an article in a recent ZDNet blog entry – on storage, not networking or security (rather like getting security advice from Steve Gibson, a hard-drive expert) – rants on about how his web site got hacked into (through WordPress, not FTP), and as a result, he’s taken to heart a suggestion not to use FTP.

Such a non-sequitur just leaves me breathless. So here’s my take:

FTP Has Been Secure for Years

But some people have just been too busy, or too devoted to other solutions, to take notice.

FTP first gained secure authentication with the addition of support for SASL and S/Key. These are mechanisms for authenticating users without passing a password or password-equivalent (and by “password-equivalent”, I’m including schemes where a hash is passed as proof that you have the password – an attacker can simply copy the hash instead of the password). These additional authentication methods give FTP the ability to check identity without jeopardising the security of the identified party. [Of course, prior to this, there were IPsec and SOCKS solutions that work outside of the protocol.]
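As a throwaway illustration of what “password-equivalent” means, here is a minimal Python sketch of an invented, deliberately bad scheme (not any real FTP mechanism): if the hash itself is what gets sent as proof, then stealing the hash is every bit as good as stealing the password.

```python
import hashlib

stored_hash = hashlib.sha256(b"hunter2").hexdigest()  # what the server keeps on file

def naive_login(proof: str) -> bool:
    # Invented scheme: the client sends the stored hash as its "proof".
    # That makes the hash password-equivalent - capturing it is enough.
    return proof == stored_hash

sniffed = stored_hash             # grabbed off the wire or by a keylogger
assert naive_login(sniffed)       # replay succeeds; "hunter2" was never needed
```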

OK, you might say, but that only protects the authentication – what about the data?

FTP under GSSAPI was defined in RFC 2228, which was published in October 1997 (the earliest draft copy I can find is from March 1995), from a draft developed over the preceding couple of years. What’s GSSAPI? As far as anyone really needs to know, it’s Kerberos.

This inspired the development of FTP over SSL in 1996, which became FTP over TLS, and which finally became RFC 4217. From 1997 to 2003, those of us in the FTPExt Working Group were wondering why the standard wasn’t yet an RFC, as draft after draft was submitted with small changes and then apparently sat on by the RFC Editor – during this time, several interoperable FTP clients, servers and proxies were produced that supported FTP over TLS (and/or SSL).

Why so long from draft to publication?

One theory that was raised is that the IETF were trying to get SSH-based protocols such as SFTP out before FTP over TLS (which has become known as “FTPS”, for FTP over SSL).

SFTP was abandoned after draft 13, which was made available in July 2006; RFC 4217 was published in October 2005. So it seems a little unlikely that this is the case.

The more likely theory is simply that the RFC Editor was overworked – the former RFC Editor, Jon Postel, died in 1998, and it’s likely that it took some time for the new RFC Editor to sort all the competing drafts out, and give them his attention.

What did the FTPExt Working Group do while waiting?

While we were waiting for the RFC, we all built compatible implementations of the FTP over TLS standard.

One or two of us even tried to implement SFTP, but with the draft mutating rapidly, and internal discussion on the SFTP mailing list indicating that no-one yet knew quite what they wanted SFTP to be when it grew up, it was like nailing the proverbial jelly to a tree. Then the SFTP standardisation process ground to a halt, as everyone lost interest. This is why getting SFTP implementations to interoperate is sometimes so frustrating an experience.

FTPS, however – that was solidly defined, and remains a widely interoperable protocol with few relevant drawbacks. Sadly, even FTP under GSSAPI turned out to have some reliability issues (the data transfer and the control connection, though carried over separate asynchronous channels, share a single encryption context, so the receiver must synchronise the two channels exactly as the sender did, or face a lost connection) – but FTP over TLS remains strong and reliable.

So, why does no-one know about FTPS?

Actually, there are lots of people who do – and many clients and servers, proxies and tunnels exist as real-life implementations. Compatibility issues are few, and generally revolve around how strict servers are about observing the niceties of the secure transaction.
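For a sense of how routine FTPS support is, explicit FTPS (the RFC 4217 “AUTH TLS” mechanism) is baked into plenty of stock client libraries. Here is a minimal sketch using Python’s ftplib, with a placeholder host and credentials:

```python
from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")  # placeholder host
ftps.login("user", "password")     # AUTH TLS is negotiated before credentials are sent
ftps.prot_p()                      # PROT P: protect the data channel as well
print(ftps.nlst())                 # this directory listing travels over TLS
ftps.quit()
```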

Even a ZDNet blogger or two has come across FTPS, and recommends it, although of course he recommends the wrong server.

My recommendation?

WFTPD Pro. Unequivocally. Because I know who wrote it, and I know what went into it. It’s all good stuff.

Kaminsky Black-Hat Webcast: "By Any Other Name: DNS has doomed us all."

By any other name... Okay, so the talk’s official title was “Dan Kaminsky’s DNS Discovery: The Massive, Multi-Vendor Issue and the Massive, Multi-Vendor Fix”.


Arcane details of TCP are something of a hobby of mine, so I attended the webcast to see what Dan had to say.


The Past is Prologue


A little history first – six months ago, Dan Kaminsky found something so horrifying in the bowels of DNS that he actually kept quiet about it. He contacted DNS vendors – OS manufacturers, router developers, BIND authors, and the like – and brought them all together in a soundproofed room on the Microsoft campus to tell them all about what he’d discovered.


Everyone was sworn to secrecy, and consensus was reached that the best way to fix the problem would be to give vendors six months to release a coordinated set of patches, and then Dan Kaminsky would tell us all at BlackHat what he’d found.


Until then, he asked the security community, don’t guess in public, and don’t release the information if you know it.


Now is the winter of our DNS content (A records and the like)


Fast forward a few months, and we have a patch. I don’t think the patch was reverse-engineered, but there was enough public guessing going on that someone accidentally slipped and leaked the information – now the whole world knows.


Kaminsky confirmed this in today’s webcast, detailing how the attack works, to forge the address of www.example.com:


  1. Attacker persuades victim to ask for 1.example.com
  2. Victim’s DNS server queries for an A record for 1.example.com
  3. Attacker forges a response that says “I don’t know 1.example.com, but the DNS server at www.example.com knows, and it’s at 1.2.3.4”
  4. Victim’s DNS server accepts this response, queries 1.2.3.4 for 1.example.com, and now has the attacker’s address cached for www.example.com – so the victim can be directed to the attacker’s 1.2.3.4, allowing the attacker to steal cookies, impersonate a trusted web site, etc, etc.

Note that this is a simple description of the new behavior that Kaminsky found – step 3 allows the DNS server’s cache to be poisoned with a mapping for www.example.com to 1.2.3.4, even if it was already cached from a previously successful search.
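To make concrete what the attacker has to construct, and what he has to guess, here is a rough Python sketch of just the 12-byte header of the forged response in step 3 (a real forgery also carries the question section, the NS record pointing at www.example.com, and the “glue” A record giving 1.2.3.4):

```python
import struct

def forged_header(txid: int) -> bytes:
    """The 12-byte DNS header of the forged answer in step 3. The victim's
    resolver only accepts the packet if txid matches the ID of its own
    outstanding query (and, post-patch, if it arrives at the right port)."""
    flags = 0x8180             # QR=1 (response), RD=1, RA=1, RCODE=0 (no error)
    qdcount, ancount = 1, 0    # echoes the question, no direct answer
    nscount, arcount = 1, 1    # the NS referral plus the poisonous glue A record
    return struct.pack("!HHHHHH", txid, flags, qdcount, ancount, nscount, arcount)
```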


If that was all that Kaminsky could do, even on an unpatched server, he’d have a 1 in 65536 chance of guessing the transaction ID to make his forgery succeed. However, old, known behaviours simply make it easier for the attacker to make the forgery work:


  1. Because the attacker tells the victim to search for a site, the attacker controls when the race with the authoritative DNS server starts.
  2. The attacker can tell the victim to search several times, and can forge several possible responses, using the birthday paradox to be more likely to guess the transaction ID (and source port), so that his forged response is accepted.
  3. Because this attack overwrites cached entries, the attacker can try again and again (picture a site with a million 1-pixel images each causing a different DNS query) until he is successful. Stuffing the cache won’t protect you.
  4. The attacker can insert an obscenely huge TTL (time-to-live) on the faked entry, so that it remains in cache until the DNS service is flushed or restarted.

Kaminsky’s tests indicate that a DNS server’s cache can be poisoned in this way in under ten seconds. There are metasploit plugins that ‘demonstrate’ this (or, as with all things metasploit, can be used to exploit systems).


The patch, by randomizing the source port of the DNS resolver, raises the difficulty of this attack by a few orders of magnitude.
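To put rough numbers on “a few orders of magnitude”, here is a simplified model (it ignores the birthday-paradox boost from multiple outstanding queries): the attacker fires n forged responses at one outstanding query, and wins if any of them matches everything the resolver is checking.

```python
def success_prob(n_forgeries: int, guess_space: int) -> float:
    """Chance that at least one of n random guesses hits the right value."""
    return 1 - (1 - 1 / guess_space) ** n_forgeries

# Pre-patch: only the 16-bit transaction ID is unpredictable.
print(success_prob(100, 65536))            # ~0.0015 per poisoning attempt
# Post-patch: transaction ID plus, say, ~60,000 possible source ports.
print(success_prob(100, 65536 * 60_000))   # ~0.000000025 per attempt
```

Under those assumptions, the attacker needs somewhere around four to five orders of magnitude more forged traffic for the same odds.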


The long-term fix, Kaminsky said, is to push for the implementation of DNSSEC, a cryptographically-signed DNS system, wherein you refuse to pass on or accept information that isn’t signed by the authoritative host.


A port, a port, my domain for a port


One novel wrinkle that Kaminsky hadn’t anticipated is that even after application of the patch to DNS servers, some NATs apparently remove the randomness in the source port that was added to make the attack harder. To quote Kaminsky “whoops, sorry Linksys” (although Cisco was one of the companies he notified of the DNS flaw, and they now own Linksys). Such de-randomising NATs essentially remove the usefulness of the patch.


Patching is not completely without its flaws, however – Kaminsky didn’t mention some of the issues that have been occurring because of these patches:


  1. ZoneAlarm decided that DNS queries from random source ports must be a sign of attack, and denied all such queries, essentially disconnecting the Internet from users of ZoneAlarm. I guess I can learn to live with that.
  2. BIND doesn’t check, when binding to a random port, whether that port is already in use – as a result, when the named server sends out a DNS query, there’s a chance the response packet will come back to a service that isn’t expecting it. Because the outgoing query punches a return hole in most firewalls, this could mean that a service blocked by the firewall from receiving Internet traffic is now opened up to the Internet. The workaround is to set the avoid-v4-udp-ports configuration setting, listing any ports that named shouldn’t use.
  3. Windows’ DNS service takes a different tack, binding to 2500 (the number is configurable) random ports on startup. As with BIND, these ports might conflict with other services; different from BIND, however, is the behavior – since the ports are already bound by the DNS server, those other services (starting later than DNS, because most IP components require it) are now unable to bind to that port. As with BIND, the workaround is to tell the DNS server which ports not to use. The registry entry ReservedPorts will do this.
  4. Users are being advised to point their DNS server entries to OpenDNS. Single point of failure, anyone?

Metrics and statistics:


  1. When Kaminsky’s vulnerability detection tool was first made available at doxpara.com, 80+% of all checks indicated that the DNS server was vulnerable. This last week, 52% of all checks showed vulnerable servers. Patches are getting installed.
  2. The attack is noisy – output from the metasploit framework showed “poisoning successful after 13250 attempts” – that’s thirteen thousand DNS queries and 260,000 forged DNS responses. IDS and IPS tools should have signatures for this attack, and may be able to repel boarders.
  3. Metasploit exploits for this are at http://www.caughq.org/exploits/CAU-EX-2008-0003.txt if you want to research it further.

Tomorrow, and tomorrow, and tomorrow…


The overall message of the webcast is this:


This attack is real, and traditional defences such as using a high TTL will not protect you. Patching is the way to go. If you can’t patch, configure those unpatched DNS servers to forward to a new, patched local DNS server, or to an external patched service such as OpenDNS. Scan your site for unexpected DNS servers.

Whoops – Information Wanted to be Free Again.

Picture the scene at Security Blogs R Us:

“We’re so freakin’ clever, we’ve figured out Dan Kaminsky’s DNS vulnerability”

“Yeah, but what if someone else figures it out – won’t we look stupid if we post second to them?”

“You’re right – but we gave Dan our word we wouldn’t publish.”

“So we won’t publish, but we’ll have a blog article ready to go if someone else spills the beans, so that we can prove that we knew all about it anyway.”

“Yeah, but we’d better be careful not to publish it accidentally.”

>>WHOOP, WHOOP, WHOOP<<

“What was that?”

“The blog alert – someone else is beating us to the punch as we speak.”

“Publish or perish! Damn the torpedoes – false beard ahead!”

“What? Are you downloading those dodgy foreign-dubbed pirated anime series off BitTorrent through the company network again?”

“Yes – I found a way around your filters.”

“Good man.”


It’s true (okay, except for all of the made-up dialog above), a blog at one of the security vulnerability research crews (ahem, Matasano) did the unthinkable and rushed a blog entry out on the basis that they thought someone else (ahem, Halvar Flake) was beating them to it. And now we all know. The genie is out of the bag, the cat has been spilled, and the beans are out of the bottle.

Now we all know how to spoof DNS.

Okay, so Matasano pulled the blog pretty quickly, but by then it had already been copied to server upon server, and some of those copies are held by people who don’t want to take the information off the Internet.

Clearly, Information Wants To Be Free.


There’s an expression I never quite got the hang of – “Information Wants To Be Free”, cry the free software guys (who believe that software is information, rather than expression, which is a different argument entirely) – and the sole argument they have for this is that once information is freed, it’s impossible to unfree it. A secret once told is no longer a secret.

There’s an allusion to the way in which liquid ‘wants to be at its lowest level’ (unless it’s liquid helium, which tends to climb up the sides of the beaker when you’re not looking), in that if you can’t easily put something back to where it used to be, then where it used to be is not where it wants to be.

So, information wants to be free, and Richard Stallman’s bicycle tyre wants to have a puncture.


But back to the DNS issue.

I can immediately think of only one extra piece of advice I’d have given to the teams patching this on top of what I said in my previous blog, and that’s something that, in testing, I found the Windows Server 2003 DNS server was doing anyway.

So, that’s alright then.

Well, not entirely – I do have some minor misgivings that I hope I’ve raised to the right people.

But in answer to something that was asked on the newsgroups, no I don’t think you should hold off patching – the patch has some manual elements to it, in that you have to make sure the DNS server doesn’t impinge on your existing UDP services (and most of you won’t have that many), but patching is really a whole lot better than the situation you could find yourself in if you don’t patch.

And Dan, if you’re reading this – hi – great job in getting the big players to all work together, and quite frankly, the secrecy lasted longer than I expected it to. Good job, and thanks for trying to let us all get ourselves patched before your moment of glory at BlackHat.

DNS Server Reserves 2500 Ports.

After applying the patch for MS08-037 / KB 953230 (the multi-OS DNS flaw found by Dan Kaminsky), you may notice your Windows Server 2003 machine gets a little greedy. At least, mine sucks up 2500 – yes, that’s two thousand five hundred – UDP sockets, sitting there apparently waiting for incoming packets.


[Image: output of the 'netstat -bona -p udp' command, showing the ports bound by DNS.EXE]


This is, apparently, one of those behaviours sure to be listed in the knowledge base as “this behavior is by design” – a description that graces some of the more entertaining elements of the Microsoft KB.


Why does this happen? I can only guess. But here’s my best guess.


The fix to DNS, implemented across multiple platforms, was to decrease the chance of an attacker faking a DNS response, by increasing the amount of randomness in each DNS request that has to be copied back in the response.


I don’t know how this was implemented on other platforms, but I do know that it’s already been reported that BIND’s implementation is slower than it used to be (hardly a surprise – making random numbers is always slower than simply counting up), and maybe that’s the cost Microsoft tried to forestall with the way they create their random sockets.


Instead of creating a socket and binding it to a random source port at the time of the request, Microsoft’s patched DNS creates 2500 sockets, each bound to a random source port, at the time that the DNS service is started up. This way, perhaps they’re avoiding the performance hit that BIND has been criticised for.
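As a sketch of the trade-off being described (Python sockets standing in for what the real resolvers do natively; the strategy here is my reading of the behaviour, not Microsoft’s or ISC’s actual code):

```python
import random
import socket

def socket_per_query() -> socket.socket:
    """Bind-per-query: pick a fresh random source port for every outgoing
    lookup, retrying on collision. Costs a little time on each query."""
    while True:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.bind(("0.0.0.0", random.randint(1024, 65535)))
            return s
        except OSError:              # port already taken by another service
            s.close()

def prebound_pool(count: int = 2500) -> list:
    """Pre-bound pool: grab `count` random ports once, at service start-up,
    and hold them for the lifetime of the service - fast per query, but the
    ports stay occupied whether or not anyone else wanted them."""
    pool = []
    while len(pool) < count:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.bind(("0.0.0.0", random.randint(1024, 65535)))
            pool.append(s)
        except OSError:
            s.close()
    return pool
```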


There are, of course, other services that also use a UDP port. ActiveSync’s connection to Exchange, IPsec, IAS, etc, etc. Are they affected?


Sometimes.


Randomly, and without warning or predictability. Because hey, the DNS server is picking ports randomly and unpredictably.
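What that looks like from the other service’s point of view is a plain bind failure. A tiny sketch (the port number here is just an example):

```python
import socket

dns_stand_in = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dns_stand_in.bind(("0.0.0.0", 5004))     # DNS.EXE got here first at boot

your_service = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    your_service.bind(("0.0.0.0", 5004)) # your service, starting later
except OSError as e:
    print("bind failed:", e)             # WSAEADDRINUSE on Windows
```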


[Workaround: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\ReservedPorts is a registry setting that lists multiple port ranges that will not be used when binding an ephemeral socket. The DNS server will obey these reservations, and not bind a socket to ports specified in this list. More explanation in the blog linked above, or at http://support.microsoft.com/kb/812873]
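For what it’s worth, ReservedPorts is a REG_MULTI_SZ of “low-high” ranges. Here is a sketch of adding a reservation programmatically with Python’s winreg (run elevated; the range shown is just an example, and the change doesn’t take effect until the machine is restarted, per the KB article above):

```python
import winreg  # Windows-only; writing under HKLM requires elevation

TCPIP_PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
NEW_RANGE = "4500-4500"   # example only: a port your own service needs kept free

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, TCPIP_PARAMS, 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    try:
        ranges, _ = winreg.QueryValueEx(key, "ReservedPorts")
    except FileNotFoundError:
        ranges = []
    if NEW_RANGE not in ranges:
        ranges.append(NEW_RANGE)
        winreg.SetValueEx(key, "ReservedPorts", 0, winreg.REG_MULTI_SZ, ranges)
```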


DNS, you see, is a fundamental underpinning of TCP/IP services, and as such needs to start up before most other TCP/IP based services. So if it picks the port you want, it gets first pick, and it holds onto that port, preventing your application from binding to it.


This just doesn’t seem like a fix written by someone who ‘gets’ TCP/IP. Perhaps I’m missing something that explains why the DNS server in Windows Server 2003 works this way, but I would be inclined to take the performance hit of binding and rebinding in order to find an unused random port number, rather than binding before everyone else in an attempt to pre-empt other applications’ need for a port.


There are a few reasons I say this:


  1. Seriously, how many Windows Server 2003 users out there have such a high-capacity DNS server that they will notice the performance hit?
  2. Most Windows Server 2003-based DNS servers are small caching servers for businesses, rather than Internet infrastructure servers responsible for huge numbers of requests per second – even if you implement this port-stealing method, it shouldn’t be the default, because the majority of users just don’t need that performance.
  3. If you do need the performance, get another server to handle incoming requests. Because the cost of having your DNS server’s cache poisoned is considerably greater than the cost of increasing the number of servers in your pool, if you’re providing major DNS service to that many customers.
  4. A major DNS service provider will be running fewer services that would pre-empt a DNS server request to bind to a random port, whereas systems running several UDP-based services are going to need less performance on their outgoing DNS requests.

I’d love to know if I’m missing something here, but I really hope that Microsoft produces a new version of the DNS patch soon, that doesn’t fill your netstat -a output with so many bound and idle sockets, each of which takes up a small piece of nonpaged pool memory (that means real memory, not virtual memory).

Vistafy Me.

I have a little time over the next couple of weeks to devote to developing WFTPD a little further.

This is a good thing, as it’s way past time that I brought it into Vista’s world.

I’ve been very proud that over the last several years, I have never had to re-write my code in order to make it work on a new version of Windows. Unlike other developers, when a new version of Windows comes along, I can run my software on that new version without changes, and get the same functionality.

The same is not true of developers who like to use undocumented features, because those are generally the features that die in new releases and service packs. After all, since they’re undocumented, nobody should be using them, right? No, seriously, you shouldn’t be using those undocumented features.

So, WFTPD and WFTPD Pro run in Windows Vista and Windows Server 2008.

But that’s not enough. With each new version of Windows, there are better ways of doing things and new features to exploit. With Windows Vista and Windows Server 2008, there are also a few deprecated older behaviours that I can see are holding WFTPD and WFTPD Pro down.

I’m creating a plan to “Vistafy” these programs, so that they’ll continue to be relevant and current.

Here’s my list of significant changes to make over the next couple of weeks:

  1. Convert the Help file from WinHelp to HTML Help.
    • WinHelp is not supported in Vista – you can download a WinHelp viewer for Vista, but it’s far better to support the one Help format that Windows ships with. So, I’m converting from WinHelp to HTML Help.
  2. Changing the Control Panel Applet for WFTPD Pro.
    • CPL files still work in Windows Vista, but they’re considered ‘old’, and there’s an ugly user experience when it comes to making them elevate – run as administrator.
    • There are two or three ways to go here -
      1. one is to create an EXE wrapper that calls the old CPL file. That’s fairly cheap, and will probably be the first version.
      2. Another is to write an MMC plugin. That’s a fair amount of work, and requires some thought and design. That’s going to take more than a couple of weeks.
      3. A third option is to create some form of web-based interface. I don’t want to go that way, because I don’t want to require my users to install IIS or some other web server.
    • So, at first blush, it seems the first version will wrap the existing interface, and after that I’ll be investigating what an MMC plugin should look like.
  3. Support for IPv6.
    • I already have this implemented in a trial version, but have yet to fully wire it up to a user interface that I’m willing to unleash on the world. So that’s on the cards for the next release.
  4. Multiple languages
    • There are two elements to support for multiple languages in FTP:
      1. File names in non-Latin character sets
      2. Text messages in languages other than English
    • The first, file names in different character sets, will be achieved sooner than the second. If the second ever occurs, it will be because customers are sufficiently interested to ask me specifically to do it.
  5. SSL Client Certificate authentication
    • SSL Client Certificate Auth has been in place for years – it’s a secret feature. The IIS guys warned me off developing it, saying “that’s really hard, don’t try and do anything with client certs”.
    • I didn’t have the heart to tell them I had the feature working already (but without an interface), and that it simply required a little patience.
  6. Install under Local Service and Network Service accounts
  7. Build in Visual Studio 2008, to get maximum protection using new compiler features.
    • /analyze, Address Space Layout Randomisation, SAL – all designed to catch my occasional mistakes.

As I work on each of these items, I’ll be sure to document any interesting behaviours I find along the way. My first article will be on converting your WinHelp-using MFC project to using HTML Help, with minimal changes to your code, and in such a way that you can back-pedal if you have to.

Of course, I also have a couple of side projects – because I’ve been downloading a lot from BBC 7, I’ve been writing a program to store the program titles and descriptions with the MP3 files, so that they show up properly on the MP3 player. ID3Edit – an inspired name – allows me to add descriptions to these files.

Another side-project of mine is an EFS tool. I may use some time to work on that.