[Additional note: Bing and Juniper Networks just announced that they will also be joining in World IPv6 Day.]
IANA just held a ceremony (streamed live, and with a press conference following at 10am EST) to hand out the last of the IPv4 /8 blocks to the Regional Internet Registries (RIRs).
It's a quiet but historic moment, as it truly marks the time we can finally tell people "yes, I know nothing appeared to be happening, but now it's finally happened". Preparing for IPv6 has to happen, because there just isn't any stopping this particular juggernaut. IPv4 addresses will run out, and there will come a time when web sites can no longer get a public IPv4 address.
BEFORE that happens, something has to change to allow us to work together on an IPv6 Internet. I'm doing what I can.
As a client user, I live on the Hurricane Electric IPv6 Tunnel Broker, because Comcast have yet to extend their IPv6 trial to my neck of the woods, seeing as how I live in technology-deprived Seattle.
I'm still trying to persuade my web site's ISP, 1&1, to put an IPv6 capability in place before World IPv6 Day on June 8, so I can host my web page there in IPv6, but I definitely have my FTP server software, WFTPD and WFTPD Pro, ready to support IPv6 fully.
What are you doing?
OK, so IPv4 is probably right to be acting like the old man in Monty Python and the Holy Grail, and screaming "I'm not dead yet!", but we certainly shouldn't hold out any hope that it'll be getting any better. Clonk it on the head as soon as possible, because really, it's been extremely poorly for many years now.
I've mentioned before that my biggest argument that IPv4 has already exhausted itself is the mere presence of aggregating NATs - Network Address Translators, whose sole purpose is to take multiple hosts inside a network, and expose them to the outside as if they were really only processes on one host with one IP address. If IPv4 were large enough, we wouldn't have needed these at all, and at best, they were a stop-gap measure, and an inconvenient one at that.
Well, now we can't really stop the gap any longer. We've hit the first of a set of dominoes that leads to us not even having enough IPv4 addresses to support the Internet with NATs in place.
That's right, no more /8 networks are left in IANA's pool to assign. OK, I know the counter says "5/256" are left, but that's only because the IANA (Internet Assigned Numbers Authority) haven't yet announced that they've given out those last five, and they have previously announced that when they get down to five, those five will automatically be distributed.
Yesterday, the counter said "7/256", but earlier today, APNIC - the Regional Internet Registry (RIR) for the Asia & Pacific region - picked up two /8 blocks to serve its ever-growing Internet market. That will trigger the IANA to distribute the remaining five blocks.
And no, Egypt's IP blocks are not available for re-use.
Yes, that's right, this isn't a "shut everything off and go home" moment - as I said before, this is merely the tipping of an early domino in a chain. Next, the last five /8s will be given to the five RIRs, and then they will use those to continue handing out addresses to their ISPs. At some point, the supply will dry up, and it will either become impossible, or expensive, to get new public-facing addresses. Existing addresses will still work even then, of course, and several of the new IPv4 address assignments will, ironically, be aggregating NATs that will allow the IPv6 Internet to access old IPv4 sites!
The only question now is what we call this momentous slide into IPv4 exhaustion. Certainly, RagNATok has a pleasant ring to it, as it invokes the idea of a twilight of the old order, a decay into darkness, but this time with a renewal phase, as the new Internet, based entirely on IPv6, rises, if not Phoenix-like from the ashes, then at least alongside, and eventually much larger than, the old IPv4 Internet.
As you can tell from my tone, I don't think it's doom and gloom - I'm quite looking forward to having the Internet back the way I remember it, with every host a first-class node on the network. It's going to mean some challenges, particularly in the world of online security, where there will be new devices to buy (bigger addresses mean larger rule-sets, and existing devices are already pretty much operating at capacity), new terminology to learn, and new reasons to insist on best practices (authentication by IP address was never reliable, and is a particularly bad idea when every host has multiple addresses by default, and by design will change its source address on a regular basis).
Perhaps the Mayans were right in deciding that 2012 is the year when everything changes (to borrow a line from Torchwood).
The rather unassuming name that has been chosen for this particular date - when the last assignment leaves the IANA - is "X-Day", the X as in "eXhaustion".
The next date for your calendars, then, is World IPv6 Day - two days after my birthday or, for those of you who don't know me, June 8, 2011 - which is when major Internet presences including Google, Yahoo and others will be switching on full IPv6 service on their main sites, and seeing what breaks. Look forward to that, and in the meantime, test some known IPv6 sites, like http://ipv6.google.com, to ensure that you're getting good name resolution and connectivity.
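If you want to check that from a script rather than a browser, here's a minimal sketch using nothing but Python's standard library; the host name is the one mentioned above, and the connect call will simply fail if you have no IPv6 connectivity:

```python
# Check that DNS gives us AAAA records and that we can open an IPv6 TCP
# connection - roughly what "good name resolution and connectivity" means above.
import socket

host = "ipv6.google.com"
infos = socket.getaddrinfo(host, 80, socket.AF_INET6, socket.SOCK_STREAM)
print("AAAA records:", [info[4][0] for info in infos])

with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    s.connect(infos[0][4])   # raises an error if IPv6 connectivity is missing
    print("connected over IPv6 to", infos[0][4][0])
```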
If youâre running an FTP server on Windows, I encourage you to contact me at support@wftpd.com if you would like to test WFTPD or WFTPD Pro for IPv6 connectivity. We are currently beta-testing a version with much greater IPv6 support than before.
I rarely write about my business on the blog here, and perhaps I should do so some more.
I mentioned in the post earlier today how I’d “hacked” my badge (“hacked” in the sense of “that’s not programming, that’s typing”) to display the Texas Imperial Software and WFTPD logos, and the wftpd.com domain hosting our web site.
Also, that I’ll be wearing my bright orange Texas Imperial Software t-shirt.
So, here’s the competition:
Take a photo of the Texas Imperial Software logo, either from my shirt or my badge, and post it to your blog (or other web site) along with a description of where you saw me and a link to Texas Imperial Software’s web site, http://www.wftpd.com. Then send me an email with a link to your site, and when I get back to the office, I’ll email you a free copy of WFTPD Pro – and as long as your page stays there for six months, you’ll get free updates the same as the rest of our customers.
What can you do with the free copy of WFTPD Pro? You can host your own secured FTP server, using the FTP over TLS protocol defined in RFC 4217, and also known as FTPS. Of course, what I’m guessing you’re going to do is hack on it – and that’s OK, providing that you notify me by email before(*) publishing your results. If you turn that hacking into a paper for a con, give me the opportunity to support your presentation, whether that’s with rebuttal, fixes, or mere apologies (sorry, can’t afford money).
The closest thing I have to a catch for this is that it has to be your own unique photo – I’ll be comparing all submissions for similarity, and the best way to avoid duplicates is to have someone else take the photo for you, and put yourself in the picture. And don’t forget, I don’t read your blog, so you have to email me a link to it.
Thanks for participating,
Alun.
~~~~
(*) I’d prefer the Google-recommended sixty days to fix stuff, but if you’re the kind of hacker who believes all vendors need public spanking, then by all means post immediately after emailing me. After all, it’s not like you couldn’t do that with the trial version anyway. But if you do that, I’ll be all grumpy about it, and won’t buy you a drink next time I see you.
Stupid spammer is stupid, spamming me his stupid spam.
As far as I can tell, I have had no interactions with either Biscom or Mark Eaton. And yet, he sends me email every couple of months or so, advertising his company's product, BDS. I class that as spam, and usually I delete it.
Today, I choose instead to comment on it.
Here's the text of his email:
Alun
Although widely used as a file transfer method, FTP may leave users non-compliant with some federal and state regulatory requirements. Susceptible to hacking and unauthorized access to private information, FTP is being replaced with more secure file transfer technologies. Companies seeking ways to prevent data breaches and keep confidential information private should consider these FTP risks:
» FTP passwords are sent in clear text
» Files transferred with FTP are not encrypted
» Unpatched FTP servers are vulnerable to malicious attacks
Biscom Delivery Server (BDS) is a secure file transfer solution that enables users to safely exchange and transfer large files while maintaining a complete transaction and audit trail. Because BDS balances an organization's need for security - encrypting files both at rest and in transit - without requiring knowledge workers to change their accustomed business processes and workflows, workers can manage their own secure and large file delivery needs. See how BDS works.
I would request 15 minutes of your time to mutually explore on a conference call if BDS can meet your current and future file transfer requirements. To schedule a time with me, please view my calendar here or call my direct line at 978-367-3536. Thank you for the opportunity and I look forward to a brief call with you to discuss your requirements in more detail.
Best regards,
Mark
Better than most spammers, I suppose, in that he spelled my name correctly. That's about the only correct statement in the entire email, however. It's easy to read this and to assume that this salesman and his company are deliberately intending to deceive their customers, but I prefer to assume that he is merely misinformed, or chose his words poorly.
In that vein, here are a couple of corrections I offer (I use "FTP" as shorthand for "the FTP protocol and its standard extensions"):
Finally, some things that BDS can't, or doesn't appear to, do, but which are handled with ease by FTP servers. (All of these are based on the "How BDS works" page. As such, my understanding is limited too, but at least I am clear about that, and I'm not claiming to be a renowned expert in their protocol. All I can do is go from their freely available material. FTP, by contrast, is a fully documented standard protocol.)
So, all things told, I think that Biscom's spam was not only unsolicited and unwanted, but also, at the very least, incorrect and uninformed. The whitepaper they host at http://www.biscomdeliveryserver.com/collateral/wp/BDS-wp-aberdeen-200809.pdf repeats many of these incorrect statements, attributing them to Carol Baroudi of "The Aberdeen Group". What they don't link to is a later paper from The Aberdeen Group's Vice President, Derek Brink, which is tagged as relating to FTPS and FTP - hopefully this means that Derek Brink is a little better informed, possibly as an indirect result of having Ipswitch as one of the paper's sponsors. I'd love to read the paper, but at $400, it's a little steep for a mere blog post.
So, if you've been using FTP and want to move to a more secure file transfer method, don't bother with the suggestions of a poorly-informed spammer. Simply update your FTP infrastructure, if necessary, to a more modern and secure version - then configure it to require SSL / TLS encryption (the FTP over Kerberos implementation documented in RFC 2228, while secure, can have reliability issues), and to require encrypted authentication.
You are then at a stage where you have good quality encrypted and protected file transfer services, often at little or no cost on top of your existing FTP infrastructure, and without having to learn and use a new protocol.
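To give a flavour of what that looks like from the client end, here's a rough sketch using Python's standard ftplib - the host name and credentials are placeholders, and your own FTPS-capable client or server will have its own equivalent settings:

```python
# Explicit FTPS (RFC 4217) with Python's ftplib: AUTH TLS happens before the
# credentials are sent, and PROT P requires encryption on the data connections.
from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")   # hypothetical server
ftps.login("alice", "secret")       # ftplib negotiates TLS first, then logs in
ftps.prot_p()                       # encrypt data connections as well
print(ftps.nlst())                  # a simple directory listing over TLS
ftps.quit()
```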
Doubtless there are some features of BDS that make it a winning solution for some companies, but I don't feel comfortable remaining silent while knowing that it's being advertised by comparing it ineptly and incorrectly to my chosen favourite secure file transport mechanism.
I'm visiting the in-laws in Texas this weekend, and I use the SSTP VPN in Windows Server 2008 R2 to connect home (my client is Windows 7, but it works just as well with Vista). Never had many problems with it up until this weekend.
Apparently, on Friday, we had a power cut back at the house, and our network connectivity is still not happening. I've asked the house-sitter to restart the servers and routers where possible, but it's still not there.
So I went online to Comcast, to track down whether they were aware of any local outage. Sadly not, so we'll just have to wait until I get home to troubleshoot this issue.
What I did see at Comcast, though, got me really excited:
Anyone who talks to me about networking knows I can't wait for the world to move to IPv6, for a number of reasons, among which are the following:
So I can't help but be excited that my local ISP, Comcast, is looking to test IPv6 support. I only hope that it'll work well with the router we have (and the router we plan to buy, to get into the Wireless-N range). Last time I was testing IPv6 connectivity, it turned out that our router was not forwarding protocol 41, the 6in4 encapsulation used by Hurricane Electric's Tunnel Broker.
Who knows what other connectivity issues we're likely to see with whatever protocol(s) Comcast is going to expect our routers and servers to support? I can't wait to find out.
[Note – for previous parts in this series, see Part 1 and Part 2.]
FTP, and FTP over SSL, are my specialist subject – I wrote one of the first FTP servers for Windows to support FTP over SSL (and the first standalone FTP server for Windows!).
Rescorla and others have concentrated on the SSL MITM attacks and their effects on HTTPS, declining to discuss other protocols about which they know relatively little. OK, time to step up and assume the mantle of expert, so that someone with more imagination can shoot me down.
FTPS is not vulnerable to this attack.
No, that’s plainly rubbish. If you start thinking along those lines in the security world, you’ve lost it. You might as well throw in the security towel and go into a job where you can assume everybody loves you and will do nothing to harm you. Be a developer of web-based applications, say. :-)
And they are all dependent on the features, design and implementation of your individual FTPS server and/or client. That’s why I say “possible”.
The obvious attack – renegotiation for client certificates – is likely to fail, because FTPS starts its TLS sessions in a different way from HTTPS.
In HTTPS, you open an unauthenticated SSL session, request a protected resource, and the server prompts for your client certificate.
In FTPS, when you connect to the control channel, you provide your credentials at the first SSL negotiation or not at all. There’s no need to renegotiate, and certainly there’s no language in the FTPS standard that allows the server to query for more credentials part way into the transaction. The best the server can do is refuse a request and say you need different or better credentials.
A renegotiation attack on the control channel that doesn’t rely on making the server ask for client credentials is similarly unlikely to succeed – when the TLS session is started with an AUTH TLS command, the server puts the connection into the ‘reinitialised’ state, waiting for a USER and PASS command to supply credentials. Request splitting across the renegotiation boundary might get the user name, but the password wouldn’t be put into anywhere the attacker could get to.
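To make the ordering concrete, here's a rough sketch of how an explicit FTPS control channel starts (host and credentials are placeholders, error handling omitted) - the point being that the credentials travel inside the very first TLS session, with no later prompt:

```python
# Explicit FTPS start-up: clear-text greeting, AUTH TLS, TLS handshake, then
# USER/PASS inside that first TLS session.
import socket
import ssl

HOST = "ftp.example.com"                 # hypothetical server

sock = socket.create_connection((HOST, 21))
print(sock.recv(1024))                   # 220 greeting, still clear text

sock.sendall(b"AUTH TLS\r\n")            # ask to start TLS on the control channel
print(sock.recv(1024))                   # expect a 234 reply

tls = ssl.create_default_context().wrap_socket(sock, server_hostname=HOST)

# Credentials go here, in the first TLS session, or not at all - there is no
# later renegotiation step where the server asks for more.
tls.sendall(b"USER alice\r\n")
print(tls.recv(1024))
tls.sendall(b"PASS secret\r\n")
print(tls.recv(1024))
```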
At first sight, the data connection, too, is difficult or impossible to attack – an attacker would have to guess which transaction was an upload in order to be able to prepend his own content to the upload.
But that’s betting without the effect that NATs had on the FTP protocol.
Because the PORT and PASV commands involve sending an IP address across the control channel, and because NAT devices have to modify these commands and their responses, in many implementations of FTPS, after credentials have been negotiated on the control channel, the client issues a “CCC” command, to drop the control channel back into clear-text mode.
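As an illustration of how routine that client behaviour is, Python's standard ftplib exposes exactly this sequence (FTP_TLS.ccc() exists from Python 3.3 onwards); the host, credentials and file name below are placeholders:

```python
# Log in over TLS, protect the data connections, then deliberately drop the
# control channel back to clear text so NAT devices can rewrite PORT/PASV.
from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")    # hypothetical server
ftps.login("alice", "secret")        # credentials inside the TLS session
ftps.prot_p()                        # data connections stay encrypted
ftps.ccc()                           # control channel reverts to clear text

with open("upload.bin", "rb") as f:  # placeholder file
    ftps.storbinary("STOR upload.bin", f)
ftps.quit()
```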
Yes, that’s right, after negotiating SSL with the server, the client may throw away the protection on the control channel, so the MitM attacker can easily see what files are going to be accessed over what ports and IP addresses, and if the server supports SSL renegotiation, the attacker can put his data in at the start of the upload before renegotiating to hand off to the legitimate client. Because the client thinks everything is fine, and the server just assumes a renegotiation is fine, there’s no reason for either one to doubt the quality of the file that’s been uploaded.
How could this be abused? Imagine that you are uploading an EXE file, and the hacker prepends it with his own code. That’s how I wrote code for a ‘dongle’ check in a program I worked on over twenty years ago, and the same trick could still work easily today. Instant Trojan.
There are many file formats that would allow abuse by prepending data: CSV files, most graphics formats with exploitable buffer overflows, and so on.
While I’m on FTP over SSL implementations and the data connection, there’s also the issue that most clients don’t properly terminate the SSL connection in FTPS data transfers.
As a result, the server can’t afford to report as an error when a MitM closes the TCP connection underneath them with an unexpected TCP FIN.
That’s bad – but combine it with FTP’s ability to resume a transfer from part-way into a file, and you realize that an MitM could actually stuff data into the middle of a file by allowing the upload to start, interrupting it after a few segments, and then when the client resumed, interjecting the data using the renegotiation attack.
The attacker wouldn’t even need to be able to insert the FIN at exactly the byte mark he wanted – after all, the client will be sending the REST command in clear-text thanks to the CCC command. That means the attacker can modify it, to pick where his data is going to sit.
Not as earth-shattering as the HTTPS attacks, but worth considering if you rely on FTPS for data security.
1. I never bothered implementing SSL / TLS renegotiation – didn’t see it as necessary; never had the feature requested. Implementing unnecessary complexity is often cause for a security failure.
2. I didn’t like the CCC command, and so I didn’t implement that, either. I prefer to push people towards using Block instead of Stream mode to get around NAT restrictions.
I know, it’s merely fortunate that I made those decisions, rather than that I had any particular foresight, but it’s nice to be able to say that my software is not vulnerable to the obvious attacks.
I’ve yet to run this by other SSL and FTP experts to see whether I’m still vulnerable to something I haven’t thought of, but my thinking so far makes me happy – and makes me wonder what other FTPS developers have done.
I wanted to contact one or two to see if they’ve thought of attacks that I haven’t considered, or that I haven’t covered. So far, however, I’ve either received no response, or I’ve discovered that they are no longer working on their FTPS software.
Let me know if you have any input of your own on this issue.
As we mentioned in the 1st part of this series, FTP is a more complex protocol than many, using one control connection and one data connection.
In typical Stream Mode operation, a new data connection is opened and closed for each data transfer, whether that's an upload, a download, or a directory listing. To avoid confusion between different data connections, and as a recognition of the fact that networks may have old packets shuttling around for some time, these connections need to be distinguishable from one another.
In the previous article, we noted that two network sockets are distinguished by the five elements of "Local Address", "Local Port", "Protocol", "Remote Address", and "Remote Port". For a data connection associated with any particular request, the local and remote addresses are fixed, as the addresses of the client and server. The protocol is TCP, and only the two ports are variable.
For a PASV, or passive data connection, the client-side port is chosen randomly by the client, and the server-side port is similarly chosen randomly by the server. The client connects to the server.
For a PORT, or active data connection, the client-side port is chosen randomly by the client, and the server-side port is set to port 20. The server connects to the client.
All of these work through firewalls and NAT routers, because firewalls and NAT routers contain an Application Layer Gateway (ALG) that watches for PORT and PASV commands, and modifies the control channel traffic (in the case of a NAT) and/or uses the values provided to open up a firewall hole.
For the default data connection (what happens if no PORT or PASV command is sent before the first data transfer command), the client-side port is predictable (it's the same as the source port the client used when connecting the control channel), and the server-side port is 20. Again, the server connects to the client.
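To summarise the three cases (the addresses and ports below are made-up examples; U is the client's source port on the control connection and L is the server's control port, usually 21):

```python
# The socket endpoints for each kind of FTP data connection in Stream mode.
C_ADDR, U = "192.0.2.10", 51234        # example client endpoint
S_ADDR, L = "203.0.113.5", 21          # example server endpoint

data_connections = {
    # mode:    (client endpoint, server endpoint, who connects)
    "PASV":    ((C_ADDR, "random"), (S_ADDR, "random, sent in the 227 reply"),
                "client -> server"),
    "PORT":    ((C_ADDR, "random, sent in the PORT command"), (S_ADDR, 20),
                "server -> client"),
    "default": ((C_ADDR, U), (S_ADDR, L - 1), "server -> client"),
}

for mode, (client, server, direction) in data_connections.items():
    print(f"{mode:8} client={client}  server={server}  ({direction})")
```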
Because firewalls and NATs open up a "reverse" hole for TCP sockets, the default data port works with firewalls and NATs that aren't running an ALG, or whose ALG cannot scan for PORT and PASV commands.
There are a couple of reasons why the ALG might not be able to do its job - the first is that it doesn't know that the service connected to is running the FTP protocol. This is common if the server is running on a port other than the usual port 21.
The second reason is that the FTP control connection doesn't look like it contains FTP commands - usually because the connection is encrypted. This can happen because you're tunneling the FTP control connection through an encrypted tunnel such as SSH (don't laugh - it does happen!), or hopefully it's because you're running FTP over SSL, so that the control and data connections can be encrypted, and you can authenticate the identity of the FTP server.
In the words of Deep Thought: "Hmm... tricky".
There are a couple of classic solutions:
The astute reader can probably see where I'm going with this.
The default data port is predictable - if the client connects from port U to port L at the server (L is usually 21), then the default data port will be opened from port L-1 at the server to port U at the client.
The default data port doesn't need the firewall to do anything other than allow reverse connections back along the port that initiated the connection. You don't need to open huge ranges at the server's firewall (in fact you should be able to simply open port 21 inbound to your server).
The default data port is required to be supported by FTP servers going back a long way - at least a couple of decades. Yes, really, that long.
Good point, that, and a great sentence to use whenever you wish to halt innovation in its tracks.
Okay, it's obvious that there are some drawbacks:
Even with those drawbacks, there are still further solutions to apply - the first being to use Block-mode instead of Stream-mode. In Stream-mode, each data transfer requires opening and closing the data connection; in Block-mode, which is a little like HTTP's chunked mode, blocks of data are sent, followed by an "EOF" (End of File) marker, so that the data connection doesn't need to be closed. If you can convince your FTP client to request Block-mode with the default data connection, and your FTP server supports it (WFTPD Pro has done so for several years), you can achieve FTP over SSL through NATs and firewalls simply by opening port 21.
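For the curious, Block-mode framing (RFC 959, section 3.4.2) is very simple: each block is a three-byte header - a descriptor byte and a 16-bit length - followed by the payload, and end-of-file is signalled by a descriptor bit rather than by closing the connection. A rough sketch of the sender's side:

```python
# Frame a buffer into RFC 959 Block-mode blocks; the last block carries the
# EOF descriptor bit (64) instead of the connection being closed.
import struct

EOF = 0x40   # "end of file" descriptor code from RFC 959

def to_blocks(data: bytes, block_size: int = 1024):
    """Yield Block-mode framed chunks for one file transfer."""
    for offset in range(0, len(data), block_size):
        chunk = data[offset:offset + block_size]
        descriptor = EOF if offset + block_size >= len(data) else 0
        yield struct.pack("!BH", descriptor, len(chunk)) + chunk

framed = b"".join(to_blocks(b"hello, block mode world"))
print(framed[:3], len(framed))
```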
For the second problem, it's worth noting that many FTP client authors implemented default data connections out of a sense of robustness, so default data connections will often work if you can convince the PORT and PASV commands to fail - by, for instance, putting restrictive firewalls or NATs in the way, or perhaps by preventing the FTP server from accepting PORT or PASV commands in some way.
Clearly, since Microsoft's IIS 7.5 downloadable FTP Server supports FTPS in block mode with the default data port, there has been some consideration given to my whispers to them that this could solve the FTP over SSL through firewall problem.
Other than my own WFTPD Explorer, I am not aware of any particular clients that support the explicit use of FTP over SSL with Block-mode on the default data connection - I'd love to hear of your experiments with this mode of operation, to see if it works as well for you as it does for me.
This will be the first of a couple of articles on FTP, as I've been asked to post this information in an easy-to-read format in a public place where it can be referred to. I think my expertise in developing and supporting WFTPD and WFTPD Pro allows me to be reliable on this topic. Oh, that and the fact that I've contributed to a number of RFCs on the subject.
First, a quick refresher on TCP: every TCP connection can be thought of as being associated with a "socket" at each device along the way - from one computer, through routers, to the other computer. The socket is identified by five individual items - the local IP address, the local port, the remote IP address, the remote port, and the protocol (in this case, the protocol is TCP).
Firewalls are essentially a special kind of router, with rules not only for how to forward data, but also rules on connection requests to drop or allow. Once a connection request is allowed, the entire flow of traffic associated with that connection request is allowed, also - any traffic flow not associated with a previously allowed connection request is discarded.
When you set up a firewall to allow access to a server, you have to consider the first segment - the "SYN", or connection request from the TCP client to the TCP server. The rule can refer to any data that would identify the socket to be created, such as "allow any connection request where the source IP address is 10.1.1.something, and the destination port is 54321".
Typically, an external-facing firewall will allow all outbound connections, and have rules only for inbound connections. As a result, firewall administrators are used to saying things like "to enable access to the web server, simply open port 80", whereas what they truly mean is to add a rule that applies to incoming TCP connection requests whose source address and source port could be anything, but whose destination port is 80 and whose destination address is that of the web server. This is usually written in some shorthand, such as "allow tcp 0.0.0.0:0 10.1.2.3:80", where "0.0.0.0" stands for "any address" and ":0" stands for "any port".
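As a toy illustration of that shorthand (this is just the matching idea, not any real firewall's API or syntax), a rule is tested against the five values carried by the incoming SYN:

```python
# Match an incoming connection request against a rule of the form
# ("tcp", (src_addr, src_port), (dst_addr, dst_port)), where "0.0.0.0" and
# port 0 act as wildcards.
def matches(rule, syn):
    proto, src, dst = rule
    def endpoint_matches(pattern, actual):
        addr, port = pattern
        return addr in ("0.0.0.0", actual[0]) and port in (0, actual[1])
    return (proto == syn["proto"]
            and endpoint_matches(src, (syn["src_addr"], syn["src_port"]))
            and endpoint_matches(dst, (syn["dst_addr"], syn["dst_port"])))

rule = ("tcp", ("0.0.0.0", 0), ("10.1.2.3", 80))   # "allow tcp 0.0.0.0:0 10.1.2.3:80"
syn = {"proto": "tcp", "src_addr": "198.51.100.7", "src_port": 49152,
       "dst_addr": "10.1.2.3", "dst_port": 80}
print(matches(rule, syn))    # True: any source, destination 10.1.2.3:80
```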
For an FTP server, firewall rules are known to be a little trickier than for most other servers.
Sure, you can set up the rule "allow tcp 0.0.0.0:0 10.1.2.3:21", because the default port for the control connection of FTP is 21. That only allows the control connection, though.
What other connections are there?
In the default transfer mode of "Stream", every file transfer gets its own data connection. Of course, it'd be lovely if this data connection was made on port 21 as well, but that's not the way the protocol was built. Instead, Stream mode data connections are opened either as "Active" or "Passive" connections.
The terms "Active" and "Passive" refer to the server's role in the data connection - whether it actively connects out to the client, or passively accepts a connection from it. The choice of connection method is initiated by the client, although the server can choose to refuse whatever the client asked for, at which point the client should fail over to using the other method.
In the Active method, the FTP server connects to the client (the server is the "active" participant, the client just lies back and thinks of England), on a random port chosen by the client. Obviously, that will work if the client's firewall is configured to allow the connection to that port, and doesn't depend on the firewall at the server to do anything but allow connections outbound. The Active method is chosen by the client sending a "PORT" command, containing the IP address and port to which the server should connect.
In the Passive method, the FTP client connects to the server (the server is now the "passive" participant), on a random port chosen by the server. This requires the server's firewall to allow the incoming connection, and depends on the client's firewall only to allow outbound connections. The Passive method is chosen by the client sending a "PASV" command, to which the server responds with a message containing the IP address and port at the server that the client should connect to.
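For reference, both commands describe an endpoint as six comma-separated decimal numbers - four for the IPv4 address and two for the port, high byte then low byte (so port = p1 * 256 + p2). A tiny sketch of the encoding:

```python
# Encode and decode the host/port argument used by PORT and by the 227 reply to PASV.
def encode_hostport(addr: str, port: int) -> str:
    return ",".join(addr.split(".") + [str(port // 256), str(port % 256)])

def decode_hostport(arg: str):
    h1, h2, h3, h4, p1, p2 = (int(x) for x in arg.split(","))
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

print(encode_hostport("10.1.2.3", 50000))     # 10,1,2,3,195,80
print(decode_hostport("10,1,2,3,195,80"))     # ('10.1.2.3', 50000)
```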
So in theory, your firewall now needs to know what ports are going to be requested by the PORT and PASV commands. For some situations, this is true, and you need to consider this - we'll talk about that in part 2. For now, let's assume everything is "normal", and talk about how the firewall helps the FTP user or administrator.
If you use port 21 for your FTP server, and the firewall is able to read the control connection, just about every firewall in existence will recognise the PORT and PASV commands, and open up the appropriate holes. This is because those firewalls have an Application Level Gateway, or ALG, which monitors port 21 traffic for FTP commands, and opens up the appropriate holes in the firewall. We've discussed the FTP ALG in the Windows Vista firewall before.
Where does port 20 come in? A rather simplistic view is that administrators read the "Services" file, and see the line that tells them that port 20 is "ftp-data". They assume that this means that opening port 20 as a destination port on the firewall will allow FTP data connections to flow. By the "elephant repellent" theory, this is proved "true" when their firewalls allow FTP data connections after they open ports 21 and 20. Nobody bothers to check that it also works if they only open port 21, because of the ALG.
OK, so if port 20 isn't needed, why is it associated with "ftp-data"? For that, you'll have to remember what I said early on in the article - that every socket has five values associated with it - two addresses, two ports, and a protocol. When the data connection is made from the server to the client (remember, that's an Active data connection, in response to a PORT command), the source port at the server is port 20. It's totally that simple, and since nobody makes firewall rules that look at source port values, it's relatively unimportant. That "ftp-data" in the Services file is simply so that the output from "netstat" has a meaningful service name instead of ":20" as a source port.
Next time, we'll expand on this topic, to go into the inability of the ALG to process encrypted FTP control traffic, and the resultant issues and solutions that face encrypted FTP.
Lately, as if writers all draw from the same shrinking paddling-pool of ideas, I’ve noticed a batch of stories about how unsafe, insecure and untrustworthy FTP is.
First it was an article in the print version of SC Magazine, sadly not repeated online, titled “2 Minutes On… FTP integrity challenged”, by Jim Carr. I tried to reach Jim by email, but his bounce message tells me he doesn’t work for SC Magazine any more.
This article was full of interesting quotes.
“8,700 FTP server credentials were being used to access and infect more than 2,000 legitimate websites in the US”. The article goes on to quote Finjan’s director of security research, who says they were “most likely hijacked by malware” – since most malware can do keystroke logging for passwords, there’s not much that can be done at the protocol level to protect against this, so this isn’t really an indictment of FTP so much as it is an indication of the value and ubiquity of FTP.
Then we get to a solid criticism of FTP: “The problem with FTP is it transfers data, including authorization credentials, in plain text rather than in encrypted form, says Jeff Debrosse, senior research analyst at security vendor ESET”. Okay, that’s true – but in much the same vein as saying that the same problems all apply to HTTP.
Towards the end of the article, we return to Finjan’s assertion that malware can steal credentials for FTP sites – and as I’ve mentioned before, malware can get pretty much any user secret, so again, that’s not a problem that a protocol such as FTP – or SFTP, HTTP, SSH, SCP, etc – can fix. There’s a password or a secret key, and once malware is inside the system, it can get those credentials.
Fortunately, the article closes with a quote from Trent Henry, who says “That means FTP is not the real issue as much as it is a server-protection issue.”
Well, yeah – a recent ZDNet blog entry on storage, not networking or security (rather like getting security advice from Steve Gibson, a hard-drive expert), rants on about how the author’s web site got hacked into (through WordPress, not FTP), and as a result, he’s taken to heart a suggestion not to use FTP.
Such a non-sequitur just leaves me breathless. So here’s my take:
But some people have just been too busy, or too devoted to other solutions, to take notice.
FTP first gained secure credentials with the addition of support for SASL and S/Key. These are mechanisms for authenticating users without passing a password or password-equivalent (and by “password-equivalent”, I’m including schemes where the hash is passed as proof that you have the password – an attacker can simply copy the hash instead of the password). These additional authentication methods give FTP the ability to check identity without jeopardising the security of the identified party. [Of course, prior to this, there were IPsec and SOCKS solutions that work outside of the protocol.]
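As a much-simplified sketch of the hash-chain idea behind S/Key (the real scheme uses a different hash and encoding, so treat this purely as an illustration of why nothing reusable crosses the wire):

```python
# One-time passwords from a hash chain: the server stores H^n(secret) and the
# client reveals H^(n-1)(secret); an eavesdropper who copies that value cannot
# produce the next one in the sequence.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain(secret: bytes, count: int) -> bytes:
    value = secret
    for _ in range(count):
        value = H(value)
    return value

secret, n = b"correct horse battery staple", 100
server_stored = chain(secret, n)        # server keeps H^n(secret)

client_otp = chain(secret, n - 1)       # client sends H^(n-1)(secret)
assert H(client_otp) == server_stored   # server checks one hash step...
server_stored = client_otp              # ...and moves the chain forward
print("one-time password accepted")
```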
OK, you might say, but that only protects the authentication – what about the data?
FTP under GSSAPI was defined in RFC 2228, which was published in October 1997 (the earliest draft copy I can find is from March 1995), from a draft developed over the preceding couple of years. What’s GSSAPI? As far as anyone really needs to know, it’s Kerberos.
This inspired the development of FTP over SSL in 1996, which became FTP over TLS, and which finally became RFC 4217. From 1997 to 2003, those of us in the FTPExt Working Group were wondering why the standard wasn’t yet an RFC, as draft after draft was submitted with small changes, and then apparently sat on by the RFC editor – during this time, several FTP clients, servers and proxies were produced that compatibly supported FTP over TLS (and/or SSL).
One theory that was raised is that the IETF were trying to get SSH-based protocols such as SFTP out before FTP over TLS (which has become known as “FTPS”, for FTP over SSL).
SFTP was abandoned after draft 13, which was made available in July 2006; RFC 4217 was published in October 2005. So it seems a little unlikely that this is the case.
The more likely theory is simply that the RFC Editor was overworked – the former RFC Editor, Jon Postel, died in 1998, and it’s likely that it took some time for the new RFC Editor to sort all the competing drafts out, and give them his attention.
While we were waiting for the RFC, we all built compatible implementations of the FTP over TLS standard.
One or two of us even tried to implement SFTP, but with the draft mutating rapidly, and internal discussion on the SFTP mailing list indicating that no-one yet knew quite what they wanted SFTP to be when it grew up, it was like nailing the proverbial jelly to a tree. Then the SFTP standardisation process ground to a halt, as everyone lost interest. This is why getting SFTP implementations to interoperate is sometimes so frustrating an experience.
FTPS, however – that was solidly defined, and remains a very compatible protocol with few relevant drawbacks. Sadly, even FTP under GSSAPI turned out to have some reliability issues (the data transfer and the control connection, though over different asynchronous channels, share the same encryption context, which means that the receiver must synchronise the two asynchronous channels exactly as the sender did, or face a loss of connection) – but FTP over TLS remains strong and reliable.
Actually, there’s lots of people that do – and many clients and servers, proxies and tunnels, exist in real life implementations. Compatibility issues are few, and generally revolve around how strict servers are about observing the niceties of the secure transaction.
Even a ZDNet blogger or two has come across FTPS, and recommends it, although of course he recommends the wrong server.
WFTPD Pro. Unequivocally. Because I know who wrote it, and I know what went into it. It’s all good stuff.
I have a little time over the next couple of weeks to devote to developing WFTPD a little further.
This is a good thing, as it’s way past time that I brought it into Vista’s world.
I’ve been very proud that over the last several years, I have never had to re-write my code in order to make it work on a new version of Windows. Unlike other developers, when a new version of Windows comes along, I can run my software on that new version without changes, and get the same functionality.
The same is not true of developers who like to use undocumented features, because those are generally the features that die in new releases and service packs. After all, since they’re undocumented, nobody should be using them, right? No, seriously, you shouldn’t be using those undocumented features.
So, WFTPD and WFTPD Pro run in Windows Vista and Windows Server 2008.
But that’s not enough. With each new version of Windows, there are better ways of doing things and new features to exploit. With Windows Vista and Windows Server 2008, there are also a few deprecated older behaviours that I can see are holding WFTPD and WFTPD Pro down.
I’m creating a plan to “Vistafy” these programs, so that they’ll continue to be relevant and current.
Here’s my list of significant changes to make over the next couple of weeks:
As I work on each of these items, I’ll be sure to document any interesting behaviours I find along the way. My first article will be on converting your WinHelp-using MFC project to using HTML Help, with minimal changes to your code, and in such a way that you can back-pedal if you have to.
Of course, I also have a couple of side projects – because I’ve been downloading a lot from BBC 7, I’ve been writing a program to store the program titles and descriptions with the MP3 files, so that they show up properly on the MP3 player. ID3Edit – an inspired name – allows me to add descriptions to these files.
Another side-project of mine is an EFS tool. I may use some time to work on that.