FTP

No more IPv4 /8s – Oodles of IPv6 /64s and /48s.

[Additional note: Bing and Juniper Networks just announced that they will also be joining in World IPv6 Day.]

IANA just held a ceremony (streamed live, with a press conference following at 10am EST) to hand out the last of the IPv4 /8 blocks to the Regional Internet Registries (RIRs).


It’s a quiet but historic moment, as it truly marks the time we can finally tell people “yes, I know nothing appeared to be happening, but now it has happened”. Preparing for IPv6 has to happen, because there just isn’t any stopping this particular juggernaut. IPv4 addresses will run out, and there will come a time when new web sites simply cannot get a public IPv4 address.

BEFORE that happens, something has to change to allow us to work together on an IPv6 Internet. I’m doing what I can.

As a client user, I live on the Hurricane Electric IPv6 Tunnel Broker, because Comcast have yet to extend their IPv6 trial to my neck of the woods, seeing as how I live in technology-deprived Seattle.

I’m still trying to persuade my web site’s ISP, 1&1, to put an IPv6 capability in place before World IPv6 Day on June 8, so I can host my web page there in IPv6, but I definitely have my FTP server software, WFTPD and WFTPD Pro, ready to support IPv6 fully.

What are you doing?

Bye bye, IPv4!

OK, so IPv4 is probably right to be acting like the old man in Monty Python and the Holy Grail, and screaming “I’m not dead yet!”, but we certainly shouldn’t hold out any hope that it’ll be getting any better. Clonk it on the head as soon as possible, because really, it’s been extremely poorly for many years now.

I’ve mentioned before that my biggest argument that IPv4 has already exhausted itself is the mere presence of aggregating NATs – Network Address Translators, whose sole purpose is to take multiple hosts inside a network, and expose them to the outside as if they were really only processes on one host with one IP address. If IPv4 were large enough, we wouldn’t have needed these at all, and at best, they were a stop-gap measure, and an inconvenient one at that.

Well, now we can’t really stop the gap any longer. We’ve hit the first of a set of dominoes that leads to us not even having enough IPv4 addresses to support the Internet with NATs in place.

No more slash-eights

That’s right, no more /8 networks are left in IANA’s pool to assign. OK, I know the counter says “5/256” are left, but that’s only because the IANA (Internet Assigned Numbers Authority) haven’t yet announced that they’ve given out those last five, and they have previously announced that when they get down to five, those five will automatically be distributed.

Yesterday, the counter said “7/256”, but earlier today APNIC – the Regional Internet Registry (RIR) for the Asia & Pacific region – picked up two /8 blocks to serve its ever-growing Internet market. That will trigger the IANA to distribute the remaining five blocks.

And no, Egypt’s IP blocks are not available for re-use.

Seems to be working fine for me, thanks

Yes, that’s right, this isn’t a “shut everything off and go home” moment – as I said before, this is merely the tipping of an early domino in a chain. Next, the last five /8s will be given to the five RIRs, and then they will use those to continue handing out addresses to their ISPs. At some point, the supply will dry up, and it will either become impossible, or expensive, to get new public-facing addresses. Existing addresses will still work even then, of course, and several of the new IPv4 address assignments will, ironically, be aggregating NATs that will allow the IPv6 Internet to access old IPv4 sites!

IPmageddon? IPocalypse? RagNATok? V4gate?

The only question now is what we call this momentous slide into IPv4 exhaustion. Certainly, RagNATok has a pleasant ring to it, as it invokes the idea of a twilight of the old order, a decay into darkness, but this time with a renewal phase, as the new Internet, based entirely on IPv6, rises, if not Phoenix-like from the ashes, then at least alongside, and eventually much larger than the old IPv4 Internet.

As you can tell from my tone, I don’t think it’s doom and gloom – I’m quite looking forward to having the Internet back the way I remember it – with every host a full-class node on the network. It’s going to mean some challenges, particularly in the world of online security, where there will be new devices to buy (bigger addresses mean larger rule-sets, and existing devices are already pretty much operating at capacity), new terminology to learn, and new reasons to insist on best practices (authentication by IP address was never reliable, and is particularly a bad idea when every host has multiple addresses by default, and by design will change its source address on a regular basis).

Perhaps the Mayans were right in deciding that 2012 is the year when everything changes (to borrow a line from Torchwood).

The rather unassuming name that has been chosen for this particular date – when the last assignment leaves the IANA – is “X-Day” – X as in “eXhaustion”.

World IPv6 Day

The next date for your calendars, then, is World IPv6 Day, two days after my birthday, or for those of you that don’t know me, June 8, 2011, which is when major Internet presences including Google, Yahoo and others, will be switching on full IPv6 service on their main sites, and seeing what breaks. Look forward to that, and in the meantime, test some known IPv6 sites, like http://ipv6.google.com to ensure that you’re getting good name resolution and connectivity.
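If you’d rather script that check than point a browser at it, here’s a minimal sketch in Python – ipv6.google.com is the test host mentioned above, and everything else is standard library:

    import socket

    # Resolve the AAAA record and try a TCP connection over IPv6 only.
    host = "ipv6.google.com"
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, 80, socket.AF_INET6, socket.SOCK_STREAM):
        print("AAAA lookup gave", addr[0])
        with socket.socket(family, socktype, proto) as s:
            s.settimeout(5)
            s.connect(addr)        # raises OSError if you have no IPv6 path
            print("connected over IPv6 to", addr[0])
        break

If the name resolution fails, your DNS isn’t returning AAAA records; if the connect fails, you have a routing or tunnelling problem to chase down before June 8.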

If you’re running an FTP server on Windows, I encourage you to contact me at support@wftpd.com if you would like to test WFTPD or WFTPD Pro for IPv6 connectivity. We are currently beta-testing a version with much greater IPv6 support than before.

Texas Imperial Software DefCon 18 challenge

I rarely write about my business on the blog here, and perhaps I should do so more often.


I mentioned in the post earlier today how I’d “hacked” my badge (“hacked” in the sense of “that’s not programming, that’s typing”) to display the Texas Imperial Software and WFTPD logos, and the wftpd.com domain that hosts our web site.


I also mentioned that I’ll be wearing my bright orange Texas Imperial Software t-shirt.


So, here’s the competition:


Take a photo of the Texas Imperial Software logo, either from my shirt or my badge. Post it to your blog (or other web site), along with a description of where you saw me and a link to Texas Imperial Software’s web site, http://www.wftpd.com. Then send me an email with a link to your site, and when I get back to the office, I’ll email you a free copy of WFTPD Pro – and as long as your page stays there for six months, you’ll get free updates the same as the rest of our customers.


What can you do with the free copy of WFTPD Pro? You can host your own secured FTP server, using the FTP over TLS protocol defined in RFC 4217 and also known as FTPS. Of course, what I’m guessing you’re going to do is hack on it – and that’s OK, provided that you notify me by email before(*) publishing your results. If you turn that hacking into a paper for a con, give me the opportunity to support your presentation, whether that’s with rebuttal, fixes, or mere apologies (sorry, can’t afford money).


The closest thing I have to a catch for this is that it has to be your own unique photo – I’ll be comparing all submissions for similarity, and the best way to avoid duplicates is to have someone else take the photo for you, and put yourself in the picture. And don’t forget, I don’t read your blog, so you have to email me a link to it.


Thanks for participating,


Alun.
~~~~


(*) I’d prefer the Google-recommended sixty days to fix stuff, but if you’re the kind of hacker who believes all vendors need public spanking, then by all means post immediately after emailing me. After all, it’s not like you couldn’t do that with the trial version anyway. But if you do that, I’ll be all grumpy about it, and won’t buy you a drink next time I see you.

FTP is Secure; Is Your FTP Server Secure?

Stupid spammer is stupid, spamming me his stupid spam.

As far as I can tell, I have had no interactions with either Biscom or Mark Eaton. And yet, he sends me email every couple of months or so, advertising his company’s product, BDS. I class that as spam, and usually I delete it.

Today, I choose instead to comment on it.

Here’s the text of his email:

Alun

Although widely used as a file transfer method, FTP may leave users non-compliant with some federal and state regulatory requirements. Susceptible to hacking and unauthorized access to private information, FTP is being replaced with more secure file transfer technologies. Companies seeking ways to prevent data breaches and keep confidential information private should consider these FTP risks:

» FTP passwords are sent in clear text
» Files transferred with FTP are not encrypted
» Unpatched FTP servers are vulnerable to malicious attacks

Biscom Delivery Server (BDS) is a secure file transfer solution that enables users to safely exchange and transfer large files while maintaining a complete transaction and audit trail. Because BDS balances an organization’s need for security – encrypting files both at rest and in transit – without requiring knowledge workers to change their accustomed business processes and workflows, workers can manage their own secure and large file delivery needs. See how BDS works.

I would request 15 minutes of your time to mutually explore on a conference call if BDS can meet your current and future file transfer requirements. To schedule a time with me, please view my calendar here or call my direct line at 978-367-3536. Thank you for the opportunity and I look forward to a brief call with you to discuss your requirements in more detail.

Best regards,
Mark

Better than most spammers, I suppose, in that he spelled my name correctly. That’s about the only correct statement in the entire email, however. It’s easy to read this and to assume that this salesman and his company are deliberately intending to deceive their customers, but I prefer to assume that he is merely misinformed, or chose his words poorly.

In that vein, here’s a couple of corrections I offer (I use “FTP” as shorthand for “the FTP protocol and its standard extensions”):

  • “FTP may leave users non-compliant”
    • I have yet to see a standard that states that FTP is banned outright. While FTP is specifically mentioned in the PCI DSS specification, it is always with language such as “An example of an insecure service, protocol, or port is FTP, which passes user credentials in clear-text.” FTP has not required user credentials to be passed in clear text for over a decade. An FTP server with a requirement that all credentials are protected (using encryption, one-time password, or any of the other methods of securing credentials in FTP) would be accepted by PCI auditors, and by PCI analysis tools.
  • “Susceptible to hacking and unauthorized access to private information”
    • Biscom’s suggested replacement, BDS, relies on the use of email to send a URL, and HTTPS to retrieve files.
    • email is eminently susceptible to hacking, as it is usually sent in the clear (to be fair, there are encryption technologies available for email, but they are seldom used in the kind of environments under discussion)
    • HTTPS is most definitely also susceptible to hacking and unauthorised access.
  • “FTP is being replaced with more secure file transfer technologies”
    • FTP may be replaced, that’s for certain, but I have not seen it replaced with more secure, reliable, or standard file transfer technologies.
      • Biscom essentially puts an operational framework around the downloading of files from a web server. It doesn’t add any security that FTP is lacking.
    • One more secure file transfer technology could be, of course, a more modern FTP server and client, which handles modern definitions of the FTP protocol and its extensions – for instance, the standard for FTP over TLS (and SSL), known as FTPS, which allows for encryption of credential information and file transfers, as well as the use of client and server certificates (as with HTTPS) to mutually authenticate client and server.
      • While the FTPS RFC was finalised only in 2005, it had not changed substantially for several years prior to that. WFTPD Pro included functional and interoperable implementations in versions dating back to 2001.
  • “FTP passwords are sent in clear text”
    • “People walk out into traffic without looking” – equally true, but equally open to misinterpretation. Most people don’t walk out into traffic without looking; most FTP servers are able to refuse clear text logons.
    • FTP passwords are sent in clear text in the most naïve implementations of FTP. This is true. But FTP servers have, for a decade and more, been able to use encryption and authentication technologies such as Kerberos, SSL and TLS, to prevent the transmission of passwords in clear text. If passwords are being sent in clear text by an FTP implementation, that is a configuration issue.
  • “Files transferred with FTP are not encrypted”
    • Again, for a decade and more, FTP servers have encrypted files using Kerberos, SSL or TLS.
    • If your FTP transmission does not encrypt files when it needs to, it is because of faulty configuration.
  • “Unpatched FTP servers are vulnerable to malicious attacks”
    • So are unpatched HTTP servers; so are unpatched email servers and clients; the very technology on which BDS depends.
    • Are unpatched BDS servers invulnerable? I thoroughly doubt that any company could make such a claim.
  • “BDS [operates] without requiring knowledge workers to change their accustomed business processes and workflows”
    • … unless your accustomed business process is to use your FTP server as a secure and appropriate place in which to exchange files.

Finally, some things that BDS can’t, or doesn’t appear to, do, but which are handled with ease by FTP servers. (All of these are based on the “How BDS works” page. As such, my understanding is limited too, but I am open about that, rather than claiming to be a renowned expert in their protocol. All I can do is go from their freely available material. FTP, by contrast, is a fully documented standard protocol.)

  • Accept uploads.
    • The description of “How BDS works” demonstrates only the manual preparation of a file and sending it to a recipient.
    • To allow two-way transfers, it appears that BDS would require each end host their own BDS server. FTP requires only one FTP server, for upload and/or download. Each party could maintain FTP servers, but it isn’t required.
  • Automated, or bulk transfers.
    • Again, the description of “How BDS works” shows emailing a recipient for each file. Are you going to want to do that for a dozen files? Sure, you could zip them up, but the option of downloading an entire directory, or selected files chosen by the recipient, seems to be one you shouldn’t ignore.
    • If your recipients need to transfer files that are continually being added to, do you want them to simply log on to an established account, and fetch those files (securely)? In the BDS model, that does not appear to be possible.
  • Transfer from regular folders.
    • BDS requires that you place the files to be fetched on a specialised server.
    • FTP servers access regular files from regular folders. If encryption is required, those folders and files can be encrypted using standard and well-known folder and file-level encryption, such as the Encrypting File System (EFS) supplied for free in Windows, or other solutions for other platforms.
  • Reduce transmission overhead
    • FTP transmissions are slightly smaller than their equivalent HTTP transmissions, and the same is true of FTPS compared to HTTPS.
    • When you add in the email roundtrip that’s involved, and the human overhead in preparing the file for transmission, that’s a lot of time and effort that could be spent in just transferring the file.
  • Integration with existing identity and authorisation management technology.
    • Where FTP relies on using operating system authentication and access control methods, you can use exactly the same methods to control access as you do for controlling access to regular files. It does not seem as though BDS is tied into the operating system in the same way.
    • [FTP also usually offers the ability to manage authentication and access control separately from the operating system, should you need to do so]
  • Shared files
    • If your files need to be shared between a hundred known recipients, BDS appears to require you to create one download for each file, for each recipient. That’s a lot of overhead.
    • FTP, by comparison, requires you to place the files into a single location that all these users can access. Then send an email to the users (you probably use a distribution list) telling them to go fetch the file.
    • Similarly, you can use your own FTP server to host a secure shared file folder for several of your customers. BDS does not offer that feature.

So, all told, I think that Biscom’s spam was not only unsolicited and unwanted, but also, at the very least, incorrect and uninformed. The whitepaper they host at http://www.biscomdeliveryserver.com/collateral/wp/BDS-wp-aberdeen-200809.pdf repeats many of these incorrect statements, attributing them to Carol Baroudi of “The Aberdeen Group”. What they don’t link to is a later paper from The Aberdeen Group’s Vice President, Derek Brink, which is tagged as relating to FTPS and FTP – hopefully this means that Derek Brink is a little better informed, possibly as an indirect result of having Ipswitch as one of the paper’s sponsors. I’d love to read the paper, but at $400, it’s a little steep for a mere blog post.

So, if you’ve been using FTP and want to move to a more secure file transfer method, don’t bother with the suggestions of a poorly-informed spammer. Simply update your FTP infrastructure, if necessary, to a more modern and secure version – then configure it to require SSL / TLS encryption (the FTP over Kerberos implementation documented in RFC 2228, while secure, can have reliability issues), and to require encrypted authentication.

You are then at a stage where you have good quality encrypted and protected file transfer services, often at little or no cost on top of your existing FTP infrastructure, and without having to learn and use a new protocol.
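If you want to sanity-check that the encryption really is being enforced, a quick client-side test is easy enough. Here’s a rough sketch using Python’s standard ftplib – the host name and credentials are placeholders, not a real server:

    import ssl
    from ftplib import FTP_TLS

    # Verify the server certificate and insist on a modern TLS version.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    ftps = FTP_TLS("ftp.example.com", context=ctx)
    ftps.login("user", "password")   # ftplib sends AUTH TLS before USER/PASS
    ftps.prot_p()                    # PBSZ 0 / PROT P: encrypt the data channel too
    ftps.retrlines("LIST")           # the directory listing travels over TLS
    ftps.quit()

If the login succeeds without the AUTH TLS step, or the server accepts a plain-text PASS, your configuration isn’t yet requiring the protection you think it is.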

Doubtless there are some features of BDS that make it a winning solution for some companies, but I don’t feel comfortable remaining silent while knowing that it’s being advertised by comparing it ineptly and incorrectly to my chosen favourite secure file transport mechanism.

Comcast aims for the future

I’m visiting the in-laws in Texas this weekend, and I use the SSTP VPN in Windows Server 2008 R2 to connect home (my client is Windows 7, but it works just as well with Vista). Never had many problems with it up until this weekend.

Apparently, on Friday, we had a power cut back at the house, and our network connectivity is still not happening. I’ve asked the house-sitter to restart the servers and routers where possible, but it’s still not there.

So I went online to Comcast, to track down whether they were aware of any local outage. Sadly not, so we’ll just have to wait until I get home to troubleshoot this issue.

What I did see at Comcast, though, got me really excited:

Comcast is looking for users to test IPv6 connectivity!

Anyone who talks to me about networking knows I can’t wait for the world to move to IPv6, for a number of reasons, among which are the following:

  • Larger address space – from 2^32 to 2^128. Ridiculously large space.
  • Home assignment of 64 bits to give a ridiculously large address space to each service recipient.
  • Multicast support by default. Also, IPsec.
  • Everyone’s a first-class Internet citizen – no more NAT.
  • FTP works properly over IPv6 without requiring an ALG.
  • Free access to all kinds of IPv6-only resources.

So I can’t help but be excited that my local ISP, Comcast, is looking to test IPv6 support. I only hope that it’ll work well with the router we have (and the router we plan to buy, to get into the Wireless-N range). Last time I was testing IPv6 connectivity, it turned out that our router was not forwarding protocol 41, the encapsulation used by the 6in4 tunnel to Hurricane Electric’s Tunnel Broker.

Who knows what other connectivity issues we’re likely to see with whatever protocol(s) Comcast is going to expect our routers and servers to support? I can’t wait to find out.

My take on the SSL MITM Attacks – part 3 – the FTPS attacks

[Note – for previous parts in this series, see Part 1 and Part 2.]


FTP, and FTP over SSL, are my specialist subject, having written one of the first FTP servers for Windows to support FTP over SSL (and the first standalone FTP server for Windows!)


Rescorla and others have concentrated on the SSL MITM attacks and their effects on HTTPS, declining to discuss other protocols about which they know far less. OK, time to step up and assume the mantle of expert, so that someone with more imagination can shoot me down.


FTPS is not vulnerable to this attack.


No, that’s plainly rubbish. If you start thinking along those lines in the security world, you’ve lost it. You might as well throw in the security towel and go into a job where you can assume everybody loves you and will do nothing to harm you. Be a developer of web-based applications, say. :-)


FTPS has a number of possible vulnerabilities


And they are all dependent on the features, design and implementation of your individual FTPS server and/or client. That’s why I say “possible”.



Attack 1 – renegotiation with client certificates


The obvious attack – renegotiation for client certificates – is likely to fail, because FTPS starts its TLS sessions in a different way from HTTPS.


In HTTPS, you open an unauthenticated SSL session, request a protected resource, and the server prompts for your client certificate.


In FTPS, when you connect to the control channel, you provide your credentials at the first SSL negotiation or not at all. There’s no need to renegotiate, and certainly there’s no language in the FTPS standard that allows the server to query for more credentials part way into the transaction. The best the server can do is refuse a request and say you need different or better credentials.
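To make that concrete, here’s a rough sketch of the start of an explicit FTPS (RFC 4217) session, using nothing but Python’s socket and ssl modules – the host name and credentials are placeholders. Notice that the credentials arrive immediately after the first TLS negotiation; there is no later point at which the server can come back and demand more.

    import socket, ssl

    HOST = "ftp.example.com"
    ctx = ssl.create_default_context()

    sock = socket.create_connection((HOST, 21))
    print(sock.recv(1024).decode())          # "220" greeting, still clear text

    sock.sendall(b"AUTH TLS\r\n")
    print(sock.recv(1024).decode())          # expect a "234" reply

    # Everything on the control channel is now inside TLS.
    tls = ctx.wrap_socket(sock, server_hostname=HOST)
    tls.sendall(b"USER demo\r\n")
    print(tls.recv(1024).decode())
    tls.sendall(b"PASS demo-password\r\n")
    print(tls.recv(1024).decode())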


Attack 2 – unsolicited renegotiation without credentials


A renegotiation attack on the control channel that doesn’t rely on making the server ask for client credentials is similarly unlikely to succeed – when the TLS session is started with an AUTH TLS command, the server puts the connection into the ‘reinitialised’ state, waiting for a USER and PASS command to supply credentials. Request splitting across the renegotiation boundary might get the user name, but the password wouldn’t be put into anywhere the attacker could get to.


Attack 3 – renegotiating the data connection


At first sight, the data connection, too, is difficult or impossible to attack – an attacker would have to guess which transaction was an upload in order to be able to prepend his own content to the upload.


But that’s reckoning without the effect that NATs have had on the FTP protocol.


Because the PORT and PASV commands involve sending an IP address across the control channel, and because NAT devices have to modify these commands and their responses, in many implementations of FTPS, after credentials have been negotiated on the control channel, the client issues a “CCC” command, to drop the control channel back into clear-text mode.


Yes, that’s right, after negotiating SSL with the server, the client may throw away the protection on the control channel, so the MitM attacker can easily see what files are going to be accessed over what ports and IP addresses, and if the server supports SSL renegotiation, the attacker can put his data in at the start of the upload before renegotiating to hand off to the legitimate client. Because the client thinks everything is fine, and the server just assumes a renegotiation is fine, there’s no reason for either one to doubt the quality of the file that’s been uploaded.
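From the client’s side, that downgrade is a single call. Here’s a hedged sketch using Python’s ftplib (again, the host and credentials are placeholders), just to show how casually the protection gets thrown away:

    import ssl
    from ftplib import FTP_TLS

    ftps = FTP_TLS("ftp.example.com", context=ssl.create_default_context())
    ftps.login("user", "password")   # USER/PASS travel inside TLS
    ftps.prot_p()                    # data connections remain encrypted

    ftps.ccc()                       # Clear Control Channel: control drops back to clear text
    # From here, a NAT's ALG can read and rewrite PORT/PASV again -- and so can
    # anyone else on the path, which is exactly the exposure described above.
    ftps.retrlines("LIST")
    ftps.quit()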


How could this be abused? Imagine that you are uploading an EXE file, and the hacker prepends it with his own code. That’s how I wrote code for a ‘dongle’ check in a program I worked on over twenty years ago, and the same trick could still work easily today. Instant Trojan.


There are many formats of file that would allow abuse by prepending data – CSV files, graphics formats with exploitable buffer-overflow parsers, and so on.


Attack 3.5 – truncation attacks


While I’m on FTP over SSL implementations and the data connection, there’s also the issue that most clients don’t properly terminate the SSL connection in FTPS data transfers.


As a result, the server can’t afford to report it as an error when a MitM closes the TCP connection underneath them with an unexpected TCP FIN.


That’s bad – but combine it with FTP’s ability to resume a transfer from part-way into a file, and you realise that a MitM could actually stuff data into the middle of a file by allowing the upload to start, interrupting it after a few segments, and then, when the client resumes, interjecting his data using the renegotiation attack.


The attacker wouldn’t even need to be able to insert the FIN at exactly the byte mark he wanted – after all, the client will be sending the REST command in clear-text thanks to the CCC command. That means the attacker can modify it, to pick where his data is going to sit.


Not as earth-shattering as the HTTPS attacks, but worth considering if you rely on FTPS for data security.


How does WFTPD Pro get around these attacks?


1. I never bothered implementing SSL / TLS renegotiation – didn’t see it as necessary; never had the feature requested. Implementing unnecessary complexity is often cause for a security failure.


2. I didn’t like the CCC command, and so I didn’t implement that, either. I prefer to push people towards using Block instead of Stream mode to get around NAT restrictions.


I know, it’s merely fortunate that I made those decisions, rather than that I had any particular foresight, but it’s nice to be able to say that my software is not vulnerable to the obvious attacks.


I’ve yet to run this by other SSL and FTP experts to see whether I’m still vulnerable to something I haven’t thought of, but my thinking so far makes me happy – and makes me wonder what other FTPS developers have done.


I wanted to contact one or two to see if they’ve thought of attacks that I haven’t considered, or that I haven’t covered. So far, however, I’ve either received no response, or I’ve discovered that they are no longer working on their FTPS software.


Let me know if you have any input of your own on this issue.

How FTP Data Connections Work Part 2 (OR: Fun With Port 20)

As we mentioned in the first part of this series, FTP is a more complex protocol than many, using one control connection and one data connection.

A recap of the first post…

In typical Stream Mode operation, a new data connection is opened and closed for each data transfer, whether that’s an upload, a download, or a directory listing. To avoid confusion between different data connections, and as a recognition of the fact that networks may have old packets shuttling around for some time, these connections need to be distinguishable from one another.

In the previous article, we noted that two network sockets are distinguished by the five elements of “Local Address”, “Local Port”, “Protocol”, “Remote Address”, and “Remote Port”. For a data connection associated with any particular request, the local and remote addresses are fixed, as the addresses of the client and server. The protocol is TCP, and only the two ports are variable.

For a PASV, or passive data connection, the client-side port is chosen randomly by the client, and the server-side port is similarly chosen randomly by the server. The client connects to the server.

For a PORT, or active data connection, the client-side port is chosen randomly by the client, and the server-side port is set to port 20. The server connects to the client.

All of these work through firewalls and NAT routers, because firewalls and NAT routers contain an Application Layer Gateway (ALG) that watches for PORT and PASV commands, modifies them (in the case of a NAT), and/or uses the values provided to open up a firewall hole.

Isn’t there a totally predictable data connection?

For the default data connection (what happens if no PORT or PASV command is sent before the first data transfer command), the client-side port is predictable (it’s the same as the source port the client used when connecting the control channel), and the server-side port is 20. Again, the server connects to the client.

Because firewalls and NATs open up a ‘reverse’ hole for TCP sockets, the default data port works with firewalls and NATs that aren’t running an ALG, or whose ALG cannot scan for PORT and PASV commands.

Why would an ALG stop scanning for PORT and PASV commands?

There are a couple of reasons – the first is that it doesn’t know that the service connected to is running the FTP protocol. This is common if the server is running on a port other than the usual port 21.

The second reason is that the FTP control connection doesn’t look like it contains FTP commands – usually because the connection is encrypted. This can happen because you’re tunneling the FTP control connection through an encrypted tunnel such as SSH (don’t laugh – it does happen!), or hopefully it’s because you’re running FTP over SSL, so that the control and data connections can be encrypted, and you can authenticate the identity of the FTP server.

So how do you get FTP over SSL to work through a firewall?

In the words of Deep Thought: “Hmm… tricky”.

There are a few classic solutions:

  1. Allow PASV data connections, select a wide range of ports, and open that range for incoming traffic from all external addresses in your firewall configuration; hope that your FTP server can be configured to use only that range of ports (WFTPD Pro can), and that it has protections against traffic stealing attacks (again, WFTPD Pro has). Still, this option seems really risky.
  2. Block all PASV connections, and make the clients responsible for opening up holes in their firewalls. If you’re convinced the risk is too great to do this on your server, how does it look to convince your users that they should accept that risk?
  3. After you’ve authenticated the server and provided your username and password in the encrypted control connection, issue the “CCC” (Clear Control Channel) command, to switch the control connection back into clear-text. I dislike this as a solution, because it requires the ALG pay attention to a lot of SSL traffic in the hope that there might be clear-text coming up, and because you may want the control channel to remain encrypted.

Awright, clever clogs, you solve the problem.

The astute reader can probably see where I’m going with this.

The default data port is predictable – if the client connects from port U to port L at the server (L is usually 21), then the default data port will be opened from port L-1 at the server to port U at the client.
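A tiny worked example, with documentation addresses standing in for real hosts:

    # Control connection: client port U --> server port L (usually 21)
    client = ("192.0.2.10", 51000)          # U = 51000
    server = ("203.0.113.5", 21)            # L = 21

    # Default data connection: server port L-1 --> client port U
    data_connection = ((server[0], server[1] - 1), client)
    print(data_connection)   # (('203.0.113.5', 20), ('192.0.2.10', 51000))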

The default data port doesn’t need the firewall to do anything other than allow reverse connections back along the port that initiated the connection. You don’t need to open huge ranges at the server’s firewall (in fact you should be able to simply open port 21 inbound to your server).

The default data port is required to be supported by FTP servers going back a long way – at least a couple of decades. Yes, really, that long.

If it’s that simple, why isn’t everyone doing it?

Good point, that, and a great sentence to use whenever you wish to halt innovation in its tracks.

Okay, it’s obvious that there are some drawbacks:

  • In stream mode, the data transfer is ended by closing the data connection – and since the default data connection always uses the same pair of ports, you can’t simply open it again straight away, so in practice you have to open a new control connection for the next transfer. Not good, given the number of round-trips you need for a logon, and the work needed to start an SSL connection.
  • Most FTP clients view the default data connection as, at best, a fail-over in case the PORT or PASV commands fail to work. Obviously, that means it’s not likely to be a well-tested or favoured solution on these clients.

Even with those drawbacks, there are still further solutions to apply – the first being to use Block-mode instead of Stream-mode. In Stream-mode, each data transfer requires opening and closing the data connection; in Block-mode, which is a little like HTTP’s chunked mode, blocks of data are sent, and followed by an “EOF” marker (End of File), so that the data connection doesn’t need to be closed. If you can convince your FTP client to request Block-mode with the default data connection, and your FTP server supports it (WFTPD Pro has done so for several years), you can achieve FTP over SSL through NATs and firewalls simply by opening port 21.
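For the curious, the framing itself is very simple – each block is a three-byte header (a descriptor byte and a 16-bit byte count) followed by the data, and the 64 bit in the descriptor marks end-of-file (RFC 959, section 3.4.2). A rough sketch of what a sender produces:

    import struct

    BLOCK_EOF = 0x40   # descriptor bit meaning "this block ends the file"

    def make_block(payload, last=False):
        descriptor = BLOCK_EOF if last else 0
        return struct.pack("!BH", descriptor, len(payload)) + payload

    wire = make_block(b"first chunk") + make_block(b"final chunk", last=True)
    print(wire[:3])    # b'\x00\x00\x0b' -- an ordinary 11-byte block header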

For the second problem, it’s worth noting that many FTP client authors implemented default data connections out of a sense of robustness, so default data connections will often work if you can convince the PORT and PASV commands to fail – by, for instance, putting restrictive firewalls or NATs in the way, or perhaps by preventing the FTP server from accepting PORT or PASV commands in some way.

Clearly, since Microsoft’s IIS 7.5 downloadable FTP Server supports FTPS in block mode with the default data port, there has been some consideration given to my whispers to them that this could solve the FTP over SSL through firewall problem.

Other than my own WFTPD Explorer, I am not aware of any particular clients that support the explicit use of FTP over SSL with Block-mode on the default data connection – I’d love to hear of your experiments with this mode of operation, to see if it works as well for you as it does for me.

How FTP Data Connections Work Part 1 (OR: Don’t Open Port 20 in your Firewall!)

This will be the first of a couple of articles on FTP, as I’ve been asked to post this information in an easy-to-read format in a public place where it can be referred to. I think my expertise in developing and supporting WFTPD and WFTPD Pro allows me to be reliable on this topic. Oh, that and the fact that I’ve contributed to a number of RFCs on the subject.

Enough TCP to be dangerous

First, a quick refresher on TCP – every TCP connection can be thought of as being associated with a “socket” at each device along the way – from one computer, through routers, to the other computer. The socket is identified by five individual items – the local IP address, the local port, the remote IP address, the remote port, and the protocol (in this case, the protocol is TCP).
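If you want to see four of those five values for yourself, a connected socket will report them directly (the fifth, the protocol, is TCP here by construction; example.org is just a convenient public host):

    import socket

    with socket.create_connection(("example.org", 80), timeout=5) as s:
        local_address, local_port = s.getsockname()[:2]
        remote_address, remote_port = s.getpeername()[:2]
        print("local :", local_address, local_port)
        print("remote:", remote_address, remote_port)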

Firewalls are essentially a special kind of router, with rules not only for how to forward data, but also for which connection requests to drop or allow. Once a connection request is allowed, the entire flow of traffic associated with that connection request is allowed as well – any traffic flow not associated with a previously allowed connection request is discarded.

When you set up a firewall to allow access to a server, you have to consider the first segment – the “SYN”, or connection request from the TCP client to the TCP server. The rule can refer to any data that would identify the socket to be created, such as “allow any connection request where the source IP address is 10.1.1.something, and the destination port is 54321”.

Typically, an external-facing firewall will allow all outbound connections, and have rules only for inbound connections. As a result, firewall administrators are used to saying things like “to enable access to the web server, simply open port 80”, whereas what they truly mean is “add a rule that applies to incoming TCP connection requests whose source address and source port could be anything, but whose destination port is 80, and whose destination address is that of the web server”. This is usually written in some shorthand, such as “allow tcp 0.0.0.0:0 10.1.2.3:80”, where “0.0.0.0” stands for “any address” and “:0” stands for “any port”.

Firewall rules for FTP

For an FTP server, firewall rules are known to be a little trickier than for most other servers.

Sure, you can set up the rule “allow tcp 0.0.0.0:0 10.1.2.3:21”, because the default port for the control connection of FTP is 21. That only allows the control connection, though.

What other connections are there?

In the default transfer mode of “Stream”, every file transfer gets its own data connection. Of course, it’d be lovely if this data connection was made on port 21 as well, but that’s not the way the protocol was built. Instead, Stream mode data connections are opened either as “Active” or “Passive” connections.

Active and Passive Data Connections

The terms "Active" and "Passive" refer to how the FTP server connects. The choice of connection method is initiated by the client, although the server can choose to refuse whatever the client asked for, at which point the client should fail over to using the other method.

In the Active method, the FTP server connects to the client (the server is the “active” participant, the client just lies back and thinks of England), on a random port chosen by the client. Obviously, that will work if the client’s firewall is configured to allow the connection to that port, and doesn’t depend on the firewall at the server to do anything but allow connections outbound. The Active method is chosen by the client sending a “PORT” command, containing the IP address and port to which the server should connect.

In the Passive method, the FTP client connects to the server (the server is now the “passive” participant), on a random port chosen by the server. This requires the server’s firewall to allow the incoming connection, and depends on the client’s firewall only to allow outbound connections. The Passive method is chosen by the client sending a “PASV” command, to which the server responds with a message containing the IP address and port at the server that the client should connect to.
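Both commands carry the address and port in the same comma-separated encoding: four address bytes and two port bytes, with the port split as p1*256 + p2. A small sketch of building a PORT argument and parsing a PASV reply (the addresses are documentation examples):

    import re

    def port_argument(ip, port):
        """Build the argument a client sends with PORT (active mode)."""
        return ",".join(ip.split(".") + [str(port // 256), str(port % 256)])

    def parse_pasv(reply):
        """Pull the server address and port out of a 227 reply (passive mode)."""
        h1, h2, h3, h4, p1, p2 = map(
            int, re.search(r"(\d+,){5}\d+", reply).group().split(","))
        return "%d.%d.%d.%d" % (h1, h2, h3, h4), p1 * 256 + p2

    print(port_argument("192.0.2.10", 51000))    # 192,0,2,10,199,56
    print(parse_pasv("227 Entering Passive Mode (203,0,113,5,195,149)"))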

The ALG comes to the rescue!

So in theory, your firewall now needs to know what ports are going to be requested by the PORT and PASV commands. For some situations, this is true, and you need to consider this – we’ll talk about that in part 2. For now, let’s assume everything is “normal”, and talk about how the firewall helps the FTP user or administrator.

If you use port 21 for your FTP server, and the firewall is able to read the control connection, just about every firewall in existence will recognise the PORT and PASV commands, and open up the appropriate holes. This is because those firewalls have an Application Level Gateway, or ALG, which monitors port 21 traffic for FTP commands, and opens up the appropriate holes in the firewall. We’ve discussed the FTP ALG in the Windows Vista firewall before.

So why port 20?

Where does port 20 come in? A rather simplistic view is that administrators read the “Services” file, and see the line that tells them that port 20 is “ftp-data”. They assume that this means that opening port 20 as a destination port on the firewall will allow FTP data connections to flow. By the “elephant repellent” theory, this is proved “true” when their firewalls allow FTP data connections after they open ports 21 and 20. Nobody bothers to check that it also works if they only open port 21, because of the ALG.

OK, so if port 20 isn’t needed, why is it associated with “ftp-data”? For that, you’ll have to remember what I said early on in the article – that every socket has five values associated with it – two addresses, two ports, and a protocol. When the data connection is made from the server to the client (remember, that’s an Active data connection, in response to a PORT command), the source port at the server is port 20. It’s totally that simple, and since nobody makes firewall rules that look at source port values, it’s relatively unimportant. That “ftp-data” in the Services file is simply so that the output from “netstat” has a meaningful service name instead of “:20” as a source port.
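In other words, port 20 is the source port the server binds before it dials out. A hedged sketch of what an FTP server does when honouring a PORT command (the client address and port are placeholders, and binding a port below 1024 needs administrative privileges on most systems):

    import socket

    client_address, client_port = "192.0.2.10", 51000   # values parsed from PORT

    data = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    data.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    data.bind(("", 20))                          # source port 20: the "ftp-data" entry
    data.connect((client_address, client_port))  # destination is whatever the client chose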

Coming up in part 2…

Next time, we’ll expand on this topic, to go into the inability of the ALG to process encrypted FTP control traffic, and the resultant issues and solutions that face encrypted FTP.

FTP – Untrustworthy? I Don’t Think So!

Lately, as if writers all draw from the same shrinking paddling-pool of ideas, I’ve noticed a batch of stories about how unsafe, insecure and untrustworthy FTP is.

SC Magazine says so.

First it was an article in the print version of SC Magazine, sadly not repeated online, titled “2 Minutes On… FTP integrity challenged”, by Jim Carr. I tried to reach Jim by email, but his bounce message tells me he doesn’t work for SC Magazine any more.

This article was full of interesting quotes.

“8,700 FTP server credentials were being used to access and infect more than 2,000 legitimate websites in the US”. The article goes on to quote Finjan’s director of security research, who says they were “most likely hijacked by malware” – since most malware can do keystroke logging for passwords, there’s not much that can be done at the protocol level to protect against this, so this isn’t really an indictment of FTP so much as it is an indication of the value and ubiquity of FTP.

Then we get to a solid criticism of FTP: “The problem with FTP is it transfers data, including authorization credentials, in plain text rather than in encrypted form, says Jeff Debrosse, senior research analyst at security vendor ESET”. Okay, that’s true – but in much the same vein, exactly the same criticism applies to HTTP.

Towards the end of the article, we return to Finjan’s assertion that malware can steal credentials for FTP sites – and as I’ve mentioned before, malware can get pretty much any user secret, so again, that’s not a problem that a protocol such as FTP – or SFTP, HTTP, SSH, SCP, etc – can fix. There’s a password or a secret key, and once malware is inside the system, it can get those credentials.

Fortunately, the article closes with a quote from Trent Henry, who says “That means FTP is not the real issue as much as it is a server-protection issue.”

OK, but a ZDNet blogger says so, too.

Well, yeah, a recent ZDNet blog entry – on storage, not networking or security (rather like getting security advice from Steve Gibson, a hard-drive expert) – rants on about how the author’s web site got hacked into (through WordPress, not FTP), and how, as a result, he’s taken to heart a suggestion not to use FTP.

Such a non-sequitur just leaves me breathless. So here’s my take:

FTP Has Been Secure for Years

But some people have just been too busy, or too devoted to other solutions, to take notice.

FTP first gained secure credentials with the addition of support for SASL and S/Key. These are mechanisms for authenticating users without passing a password or password-equivalent (and by “password-equivalent”, I’m including schemes where a hash is passed as proof that you have the password – an attacker can simply copy the hash instead of the password). These additional authentication methods give FTP the ability to check identity without jeopardising the security of the identified party. [Of course, prior to this, there were IPsec and SOCKS solutions that work outside of the protocol.]

OK, you might say, but that only protects the authentication – what about the data?

FTP under GSSAPI was defined in RFC 2228, which was published in October 1997 (the earliest draft copy I can find is from March 1995), from a draft developed over the preceding couple of years. What’s GSSAPI? As far as anyone really needs to know, it’s Kerberos.

This inspired the development of FTP over SSL in 1996, which became FTP over TLS, and which finally became RFC 4217. From 1997 to 2003, those of us in the FTPExt Working Group were wondering why the standard wasn’t yet an RFC, as draft after draft was submitted with small changes and then apparently sat on by the RFC Editor – during this time, several FTP clients, servers and proxies were produced that compatibly supported FTP over TLS (and/or SSL).

Why so long from draft to publication?

One theory that was raised is that the IETF were trying to get SSH-based protocols such as SFTP out before FTP over TLS (which has become known as “FTPS”, for FTP over SSL).

SFTP was abandoned after draft 13, which was made available in July 2006; RFC 4217 was published in October 2005. So it seems a little unlikely that this is the case.

The more likely theory is simply that the RFC Editor was overworked – the former RFC Editor, Jon Postel, died in 1998, and it’s likely that it took some time for the new RFC Editor to sort all the competing drafts out, and give them his attention.

What did the FTPExt Working Group do while waiting?

While we were waiting for the RFC, we all built compatible implementations of the FTP over TLS standard.

One or two of us even tried to implement SFTP, but with the draft mutating rapidly, and internal discussion on the SFTP mailing list indicating that no-one yet knew quite what they wanted SFTP to be when it grew up, it was like nailing the proverbial jelly to a tree. Then the SFTP standardisation process ground to a halt, as everyone lost interest. This is why getting SFTP implementations to interoperate is sometimes so frustrating an experience.

FTPS, however – that was solidly defined, and remains a very compatible protocol with few relevant drawbacks. Sadly, even FTP under GSSAPI turned out to have some reliability issues (the data transfer and the control connection, though over different asynchronous channels, share the same encryption context, which means that the receiver must synchronise the two asynchronous channels exactly as the sender did, or face a loss of connection) – but FTP over TLS remains strong and reliable.

So, why does no-one know about FTPS?

Actually, there’s lots of people that do – and many clients and servers, proxies and tunnels, exist in real life implementations. Compatibility issues are few, and generally revolve around how strict servers are about observing the niceties of the secure transaction.

Even a ZDNet blogger or two has come across FTPS, and recommends it, although of course he recommends the wrong server.

My recommendation?

WFTPD Pro. Unequivocally. Because I know who wrote it, and I know what went into it. It’s all good stuff.

Vistafy Me.

I have a little time over the next couple of weeks to devote to developing WFTPD a little further.

This is a good thing, as it’s way past time that I brought it into Vista’s world.

I’ve been very proud that over the last several years, I have never had to re-write my code in order to make it work on a new version of Windows. Unlike other developers, when a new version of Windows comes along, I can run my software on that new version without changes, and get the same functionality.

The same is not true of developers who like to use undocumented features, because those are generally the features that die in new releases and service packs. After all, since they’re undocumented, nobody should be using them, right? No, seriously, you shouldn’t be using those undocumented features.

So, WFTPD and WFTPD Pro run in Windows Vista and Windows Server 2008.

But that’s not enough. With each new version of Windows, there are better ways of doing things and new features to exploit. With Windows Vista and Windows Server 2008, there are also a few deprecated older behaviours that I can see are holding WFTPD and WFTPD Pro down.

I’m creating a plan to “Vistafy” these programs, so that they’ll continue to be relevant and current.

Here’s my list of significant changes to make over the next couple of weeks:

  1. Convert the Help file from WinHelp to HTML Help.
    • WinHelp is not supported in Vista – you can download a WinHelp version, but it’s far better to support the one format of Help file that Windows uses. So, I’m converting from WinHelp to HTML Help.
  2. Changing the Control Panel Applet for WFTPD Pro.
    • CPL files still work in Windows Vista, but they’re considered ‘old’, and there’s an ugly user experience when it comes to making them elevate – run as administrator.
    • There are three ways to go here:
      1. one is to create an EXE wrapper that calls the old CPL file. That’s fairly cheap, and will probably be the first version.
      2. Another is to write an MMC plugin. That’s a fair amount of work, and requires some thought and design. That’s going to take more than a couple of weeks.
      3. A third option is to create some form of web-based interface. I don’t want to go that way, because I don’t want to require my users to install IIS or some other web server.
    • So, at first blush, it seems the first version will wrap the existing interface, and then I’ll be investigating what an MMC plugin should look like.
  3. Support for IPv6.
    • I already have this implemented in a trial version, but have yet to fully wire it up to a user interface that I’m willing to unleash on the world. So that’s on the cards for the next release.
  4. Multiple languages
    • There are two elements to support for multiple languages in FTP:
      1. File names in non-Latin character sets
      2. Text messages in languages other than English
    • The first, file names in different character sets, will be achieved sooner than the second. If the second ever occurs, it will be because customers are sufficiently interested to ask me specifically to do it.
  5. SSL Client Certificate authentication
    • SSL Client Certificate Auth has been in place for years – it’s a secret feature. The IIS guys warned me off developing it, saying “that’s really hard, don’t try and do anything with client certs”.
    • I didn’t have the heart to tell them I had the feature working already (but without an interface), and that it simply required a little patience.
  6. Install under Local Service and Network Service accounts
  7. Build in Visual Studio 2008, to get maximum protection using new compiler features.
    • /analyze, Address Space Layout Randomisation, SAL – all designed to catch my occasional mistakes.

As I work on each of these items, I’ll be sure to document any interesting behaviours I find along the way. My first article will be on converting your WinHelp-using MFC project to using HTML Help, with minimal changes to your code, and in such a way that you can back-pedal if you have to.

Of course, I also have a couple of side projects – because I’ve been downloading a lot from BBC 7, I’ve been writing a program to store the program titles and descriptions with the MP3 files, so that they show up properly on the MP3 player. ID3Edit – an inspired name – allows me to add descriptions to these files.

Another side-project of mine is an EFS tool. I may use some time to work on that.