My take on the SSL MITM Attacks – part 3 – the FTPS attacks

[Note – for previous parts in this series, see Part 1 and Part 2.]

FTP, and FTP over SSL, are my specialist subject: I wrote one of the first FTP servers for Windows to support FTP over SSL (and the first standalone FTP server for Windows!).

Rescorla and others have concentrated on the SSL MITM attacks and their effects on HTTPS, declining to discuss other protocols about which they know far less. OK, time for me to step up and assume the mantle of expert, so that someone with more imagination can shoot me down.

FTPS is not vulnerable to this attack.

No, that’s plainly rubbish. If you start thinking along those lines in the security world, you’ve lost it. You might as well throw in the security towel and go into a job where you can assume everybody loves you and will do nothing to harm you. Be a developer of web-based applications, say. 🙂

FTPS has a number of possible vulnerabilities

And they are all dependent on the features, design and implementation of your individual FTPS server and/or client. That’s why I say “possible”.

Attack 1 – renegotiation with client certificates

The obvious attack – renegotiation for client certificates – is likely to fail, because FTPS starts its TLS sessions in a different way from HTTPS.

In HTTPS, you open an SSL session without client authentication, request a protected resource, and the server prompts for your client certificate.

In FTPS, when you connect to the control channel, you provide your credentials at the first SSL negotiation or not at all. There’s no need to renegotiate, and certainly there’s no language in the FTPS standard that allows the server to query for more credentials part way into the transaction. The best the server can do is refuse a request and say you need different or better credentials.
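
To make that concrete, here is roughly how an FTPS control channel starts under RFC 4217 (response texts paraphrased; a real session would also include PBSZ and PROT commands):

```
C: AUTH TLS
S: 234 Proceed with negotiation
   ...TLS handshake; everything below is encrypted...
C: USER alice
S: 331 Password required for alice
C: PASS hunter2
S: 230 User alice logged in
```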

Attack 2 – unsolicited renegotiation without credentials

A renegotiation attack on the control channel that doesn’t rely on making the server ask for client credentials is similarly unlikely to succeed. When the TLS session is started with an AUTH TLS command, the server puts the connection into the ‘reinitialised’ state, waiting for USER and PASS commands to supply credentials. Request splitting across the renegotiation boundary might capture the user name, but the password wouldn’t end up anywhere the attacker could get at it.

Attack 3 – renegotiating the data connection

At first sight, the data connection, too, is difficult or impossible to attack – an attacker would have to guess which transaction was an upload in order to be able to prepend his own content to the upload.

But that’s reckoning without the effect that NAT had on the FTP protocol.

Because the PORT and PASV commands involve sending IP addresses and port numbers across the control channel, NAT devices have to be able to read and modify those commands and their responses. As a result, in many implementations of FTPS, once credentials have been negotiated on the control channel, the client issues a “CCC” command to drop the control channel back into clear-text mode.

Yes, that’s right: after negotiating SSL with the server, the client may throw away the protection on the control channel. The MitM attacker can then easily see which files are going to be accessed, over which ports and IP addresses. And if the server supports SSL renegotiation, the attacker can put his own data in at the start of the upload, then renegotiate to hand the connection off to the legitimate client. Because the client thinks everything is fine, and the server just assumes a renegotiation is fine, neither has any reason to doubt the integrity of the file that’s been uploaded.
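
Here is roughly what a MitM gets to watch once the client has issued CCC (addresses and file names invented for illustration):

```
   ...control channel is under TLS up to this point...
C: CCC
S: 200 Control channel now in clear text
   ...TLS shut down; everything below is visible to a MitM...
C: PASV
S: 227 Entering Passive Mode (192,0,2,10,19,137)
C: STOR payroll.exe
S: 150 Opening data connection for payroll.exe
```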

How could this be abused? Imagine you’re uploading an EXE file, and the attacker prepends his own code to it. That’s exactly how I implemented a ‘dongle’ check in a program I worked on over twenty years ago, and the same trick could still work easily today. Instant Trojan.

There are many file formats that would allow abuse by prepending data – CSV files, graphics formats with exploitable buffer-overflow parsers, and so on.

Attack 3.5 – truncation attacks

While I’m on the subject of FTP over SSL implementations and the data connection, there’s also the issue that most clients don’t properly terminate the SSL connection on FTPS data transfers – they simply close the TCP connection, without first sending a TLS close_notify alert.

As a result, the server can’t afford to treat an unexpected TCP FIN as an error – which means a MitM can close the TCP connection underneath a transfer, and neither side will flag the truncation.

That’s bad – but combine it with FTP’s ability to resume a transfer from partway into a file, and you realise that a MitM could actually stuff data into the middle of a file: allow the upload to start, interrupt it after a few segments, and then, when the client resumes, interject data using the renegotiation attack.

The attacker wouldn’t even need to insert the FIN at exactly the byte mark he wanted – after all, the client will be sending its REST command in clear text, thanks to the CCC command, so the attacker can modify the offset to pick exactly where his data is going to sit.
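
Illustratively (offsets invented), the tampering might look like this:

```
   ...control channel back in clear text after CCC...
C: REST 1048576
      <- the MitM rewrites the offset in transit (to 4096, say),
         and rewrites the server's reply on the way back
S: 350 Restarting at 4096
C: STOR target.exe
S: 150 Opening data connection for target.exe
```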

Not as earth-shattering as the HTTPS attacks, but worth considering if you rely on FTPS for data security.

How does WFTPD Pro get around these attacks?

1. I never bothered implementing SSL / TLS renegotiation – I didn’t see it as necessary, and no one ever requested the feature. Implementing unnecessary complexity is often the cause of security failures.

2. I didn’t like the CCC command, and so I didn’t implement that, either. I prefer to push people towards using Block instead of Stream mode to get around NAT restrictions.
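
For reference, Block mode frames each chunk of the transfer with a small header, so the end of file is signalled in-band rather than by closing the TCP connection – which, incidentally, also defeats the truncation trick described above:

```
MODE B block layout (RFC 959):

  1 byte   descriptor (128 = end of record, 64 = end of file,
                       32 = suspected errors, 16 = restart marker)
  2 bytes  byte count, big-endian
  n bytes  data ('byte count' bytes follow)
```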

I know, it’s merely fortunate that I made those decisions, rather than that I had any particular foresight, but it’s nice to be able to say that my software is not vulnerable to the obvious attacks.

I’ve yet to run this by other SSL and FTP experts to see whether I’m still vulnerable to something I haven’t thought of, but my thinking so far makes me happy – and makes me wonder what other FTPS developers have done.

I wanted to contact one or two to see if they’ve thought of attacks that I haven’t considered, or that I haven’t covered. So far, however, I’ve either received no response, or I’ve discovered that they are no longer working on their FTPS software.

Let me know if you have any input of your own on this issue.

My take on the SSL MitM Attacks – part 2 – clarifications

Since the last post I made on the topic of SSL renegotiation attacks, I’ve had a few questions in email. Let’s see how well I can answer them:

Q. Some stories talk about SSL, others about TLS, what’s the difference?

A. For trademark reasons, when SSL became an open standard, it had to change its name from SSL to TLS. TLS 1.0 is essentially SSL 3.1 – it even claims to be version “3.1” in its communication. I’ll just call it SSL from here on out to remind you that it’s a problem with SSL and TLS both.
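
You can see this in the version numbers each protocol puts on the wire:

```
SSLv3    version bytes 3,0
TLS 1.0  version bytes 3,1
TLS 1.1  version bytes 3,2
TLS 1.2  version bytes 3,3
```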

Q. All the press coverage seems to be talking about HTTPS – is this limited to HTTPS?

A. No, this isn’t an HTTPS-only attack, although it is true that most people’s exposure to SSL is through HTTPS. There are many other protocols that use SSL to protect their connections and traffic, and they each may be vulnerable in their own special ways.

Q. I’ve seen some posts saying that SSH and SFTP are not vulnerable – how did they manage that?

A. Simply by being “not SSL”. SFTP is a protocol on top of SSH, and SSH is not related to SSL. That’s why it’s not affected by this issue. Of course, if there’s a vulnerability discovered in SSH, it’ll affect SSH and SFTP, but won’t affect SSL or SSL-based protocols such as HTTPS and FTPS.

Q. Is it OK to disable SSL renegotiation to fix this bug?

A. Obviously, if renegotiation served no purpose at all, it wouldn’t be in the protocol. So, in some respects, if you disable SSL renegotiation, you may be killing functionality. There are a few reasons that you might be using SSL renegotiation:

  1. Because that’s how client authentication works – while you can do client authentication without renegotiation (see the sketch after this list), most HTTPS implementations use renegotiation to request the client certificate. Disabling renegotiation will generally prevent clients from presenting their certificates.
  2. Because after ten hours, renegotiation is required to refresh the session key. Do you really have SSL connections lasting ten hours? You should probably be looking at a disconnect/reconnect scenario instead.
  3. Because you can’t always disable SSL renegotiation. In OpenSSL, you can only disable renegotiation by downloading and installing the new version; in other SSL implementations, there is no way to disable renegotiation short of modifying the application.
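
On the first point: a server that asks for the certificate during the initial handshake never needs renegotiation for client authentication. Here’s a minimal C# sketch of the idea using .NET’s SslStream – a hypothetical example with invented file names, not production code:

```csharp
using System.Net;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Authentication;
using System.Security.Cryptography.X509Certificates;

class UpFrontClientAuth
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 990);
        listener.Start();
        using (TcpClient client = listener.AcceptTcpClient())
        using (var ssl = new SslStream(client.GetStream()))
        {
            var serverCert = new X509Certificate2("server.pfx", "pfx-password");
            // Request the client's certificate in the *initial* handshake,
            // so there is never a reason to renegotiate for authentication.
            ssl.AuthenticateAsServer(serverCert,
                true,              // clientCertificateRequired: ask up front
                SslProtocols.Tls,  // TLS 1.0
                true);             // checkCertificateRevocation
            // ssl.RemoteCertificate now holds the client's certificate,
            // or null if the client declined to send one.
        }
    }
}
```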

Q. Since this attack requires the attacker to become a man-in-the-middle, doesn’t that make it fundamentally difficult, esoteric, or close to impossible?

A. If becoming a man-in-the-middle (MitM) was impossible or difficult, there would be little-to-no need for SSL in the first place. SSL is designed specifically to protect against MitM attacks by authenticating and encrypting the channel. If a MitM can alter traffic and make it seem as if everything’s secure between client and server over SSL, then there’s a failure in SSL’s basic goal of protecting against men-in-the-middle.

Once you assume that an attacker can intercept and tamper with the encrypted traffic (without being able to decrypt it), this attack is actually relatively easy. There are already demonstration programs available that show how to exploit it.

I was asked earlier today how someone could become a man-in-the-middle, and off the top of my head I came up with six ways that have either recently or frequently been used to do just that.

Q. Am I safe at a coffee shop using the wifi?

A. No, not really – wifi is the easiest place for an attacker to insert himself into your traffic.

When using a public wifi spot, always connect as soon as possible to a secured VPN. Ironically, of course, most VPNs are SSL-based these days, so you’re relying on SSL to protect you against attacks that might lead to SSL issues. This is not nearly as daft as it sounds.

Q. Is this really the most important vulnerability we face right now?

A. No, it just happens to be one that I understood quickly and can blather on about. I think it’s under-discussed, and I don’t think we’ve seen the last entertaining use of it. I’d like to make sure developers of SSL-dependent applications are at least thinking about what attacks can be performed against them using this step, and how they can prevent these attacks. I know I’m working to do something with WFTPD Pro.

Q. Isn’t the solution to avoid executing commands outside the encrypted tunnel?

A. Very nearly, yes. The answer is to avoid executing commands that span two encrypted sessions, and to deal harshly with connections that try to send part of their content in one session and the rest in a differently negotiated session.

In testing WFTPD Pro against FTPS clients, I found that some would send two encrypted packets for each command – one containing the command itself, the other containing the carriage return and line feed. This is bad in itself, but worse: if those two packets straddle a renegotiation, the server should disconnect the client. That alone should prevent the FTPS equivalent of the HTTPS request-splitting attack.
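
In pseudo-server terms, the guard looks something like this – an illustrative C# sketch, not WFTPD Pro’s actual code, and simplified to one command per line:

```csharp
using System;
using System.Text;

// Illustrative sketch only. Track which TLS "generation" each command
// started in; if a completed command straddles a renegotiation boundary,
// refuse to execute it and drop the connection instead.
class ControlChannelGuard
{
    private int _generation;          // bumped on every completed handshake
    private int _commandGeneration;   // generation when the current command began
    private readonly StringBuilder _command = new StringBuilder();

    // Call this whenever the TLS layer completes a (re)negotiation.
    public void OnHandshakeComplete()
    {
        _generation++;
    }

    // Call this with each chunk of decrypted application data. Returns the
    // completed command, or null while still buffering. Throws if the
    // command straddled a renegotiation - the caller should disconnect.
    public string OnDecryptedText(string text)
    {
        if (_command.Length == 0)
            _commandGeneration = _generation;

        _command.Append(text);
        if (!_command.ToString().EndsWith("\r\n"))
            return null;

        string completed = _command.ToString();
        _command.Length = 0;

        if (_commandGeneration != _generation)
            throw new InvalidOperationException(
                "Command split across a TLS renegotiation - disconnecting.");
        return completed;
    }
}
```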

One key behaviour HTTPS has is that when you request a protected resource, it will ask for authentication and then hand you the resource. What it should probably be doing is to ask for authentication and then wait for you to re-request the resource. That action alone would have prevented the client-certificate attacks discussed so far.

Q. What is the proposed solution?

A. The proposed solution, as I understand it, is for client and server to state in their renegotiation handshake what the last negotiated session state was. That way, an interloper cannot hand off a previously negotiated session to the victim client without the client noticing.
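
In outline, the renegotiating handshake (per Rescorla’s draft) carries an extension tying it to the handshake before it:

```
ClientHello
  renegotiation_info extension: the client's verify_data
                                from the previous handshake
ServerHello
  renegotiation_info extension: the previous client and server
                                verify_data, concatenated
```

An interloper who negotiated the earlier session cannot produce verify_data matching the victim’s, so the splice is detected.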

Note that, because this is implemented as a TLS handshake extension, it cannot be implemented in SSLv3. Those of you who just got done with mandating SSLv2 removal throughout your organisations, prepare for the future requirement that SSLv3 be similarly disabled.

Q. Can we apply the solution today?

A. It’s not been ratified as a standard yet, and there needs to be some discussion to avoid rushing into a solution that might, in retrospect, turn out to be no better – or perhaps worse – than the problem it’s trying to solve.

Even when the solution is made available, consider that PCI auditors are still working hard to persuade their customers to stop using SSLv2, which was deprecated over twelve years ago. I keep thinking that this is rather akin to debating whether we should disable the Latin language portion of our web pages.

However, it does demonstrate that users and server operators alike do not like to change their existing systems. No doubt IDS and IPS vendors will step up and provide modules that can disconnect unwarranted renegotiations.

Update: Read Part 3 for a discussion of the possible threats to FTPS.

My take on the SSL MITM Attacks – part 1 – the HTTPS attack

If you’re in the security world, you’ve probably heard a lot lately about new and deadly flaws in the SSL and TLS protocols – so-called “Man in the Middle” attacks (aka MITM).

These aren’t the same as old-style MITM attacks, which relied on the attacker convincingly impersonating the secure site being connected to. Those attacks gave the attacker the entire content of the transmission, but they required the attacker to already have some significant level of access: he had to be able to intercept and change the network traffic as it passed through him, and he had to present a completely trusted certificate identifying himself as the secure server. [Note – you can always perform a man-in-the-middle attack if you own a trusted certificate authority.]

The current SSL MITM attack follows a different pattern, because of the way HTTPS authentication works in practice. This means it has a more limited effect, but requires less in the way of access – the attacker gains in one respect and loses in another. He still needs to be able to intercept and modify the traffic between client and server, but he does not get to see the content of that traffic. All he gets to do is submit data to the server before the client gets its turn.

Imagine you’re ordering a pizza over the phone. Normally, the procedure is that you call and tell them what the pizza order is (type of pizza, delivery address), and they ask you for your credit card number as verification. Sometimes, though, the phone operator asks for your credit card number first, and then takes your order. So, you’re comfortable working either way.

Now, suppose an attacker can hijack your call to the pizza restaurant and mimic your voice. While playing you a ringing tone to keep you on the line, he talks to the phone operator, specifying the pizza he wants and the address to which it is to be delivered. Immediately after that, he connects you to your pizza restaurant, you’re asked for your credit card number, which you supply, and then you place your pizza order.

Computers are as dumb as a bag of rocks. Not very smart rocks at that. So, imagine that this phone operator isn’t smart enough to say “what, another pizza? You just ordered one.”

That’s a rough, non-technical description of the HTTPS attack. There’s another subtle variation, in which the caller states his pizza order, then says “oh, and ignore my attempt to order a pizza in a few seconds”. The computer is dumb enough to accept that, too.

For a more technical description, go see Eric Rescorla’s summary at Understanding the TLS Renegotiation Attack, or Marsh Ray’s original report.

Let’s call these the HTTPS client-auth attack and the HTTPS request-splitting attack. That’s a basic description of what they do.

HTTPS client-authentication attack

The client-authentication attack is getting the biggest press, because it allows the attacker one go (per try) at persuading the server to perform an action in the context of the authenticated user. Anything that can be caused by a single request to a web site – from ordering a pizza on up – can be achieved with this attack.

Preventing the attack at the server

Servers have been poorly designed in this respect – but out of some necessity. Eric Rescorla explains this in the SSL and TLS bible, “SSL and TLS” [Subtitle: Designing and Building Secure Systems] on page 322, section 9.18.

“The commonly used approach is for the server to negotiate an ordinary SSL connection for all clients. Then, once the request has been received, the server determines whether client authentication is required… If it is required, the server requests a rehandshake using HelloRequest. In this second handshake, the server requests client authentication.”

How does HTTP handle other authentication, such as Forms, Digest, Basic, Windows Integrated, etc? Is it different from the above description?

A client can provide credentials along with its original request using the Authorization header, or the server can refuse an unauthenticated (anonymous) request with a 401 status code indicating that authentication is necessary (and listing WWW-Authenticate headers containing appropriate challenges). In the latter case, the client resends the request with the appropriate Authorization header.
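
For example, a Basic authentication exchange looks roughly like this (credentials invented):

```
C: GET /private/report HTTP/1.1
C: Host: www.example.com

S: HTTP/1.1 401 Unauthorized
S: WWW-Authenticate: Basic realm="example"

C: GET /private/report HTTP/1.1
C: Host: www.example.com
C: Authorization: Basic YWxpY2U6c2VjcmV0

S: HTTP/1.1 200 OK
```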

HTTPS Mutual Authentication (another term for client authentication) doesn’t do this. Why on earth not? I’m not sure, but I think it’s probably because SSL already has a mostly unwarranted reputation for being slow, and this would add another turnaround to the process.

Whatever the reason, a sudden dose of unexpected ‘401’ errors would lead to clients failing, because they aren’t coded to re-request the page with mutual auth in place.

So, we can’t redesign from scratch to fix this immediately – how do we fix what’s in place?

The best way is to understand what the attack can do, and make sure its effects are as limited as possible. The attack buys the attacker exactly one action: a request submitted just before the client authenticates, which the server then executes using the credentials sent immediately afterwards.

A change of application design is warranted, then, to ensure that the first thing your secure application does on authenticating with a client certificate is to display a welcome screen, and not to perform an action. Reject any action requested prior to authentication having been received.

Sadly, while this is technically possible using SSL if you’ve written your own server to go along with the application, or can tie into information about the underlying SSL connection, it’s likely that most HTTPS servers operate on the principle that HTTP is stateless, and the app should have no knowledge of the SSL state beyond “have I been authenticated or not”.

Doubtless web server vendors are going to be coming out with workarounds, advice and fixes – and you should, of course, be looking to their advice on how to fix this behaviour.

The best defence against the client-authentication attack, of course, is to not use client authentication.

Preventing the attack at the client

Not much you can do here, I’m afraid – the client can’t tell whether the server has already received a request. Perhaps it would work to not provide client certificates to a server unless you already have an established SSL connection, but that would break functionality on perfectly good web sites that are operating properly. Assuming that most web sites operate in the mode of “accept a no-client-auth connection before requesting authentication”, you could rework your client to insist on this happening all the time. Prepare for failures to be reported.

Again, the best defence is not to use client authentication right now. Perhaps split your time between browsers – one with client certificates built in for those few occasions when you need them, and the other without client certs, for your main browsing. That will, at least, limit your exposure.

HTTPS request-splitting attack

Preventing the attack at the server

The HTTPS request-splitting attack is technically a little easier to block at the server, if you write the server’s SSL interface – there is absolutely no legitimate reason for an HTTP request to be split across an SSL renegotiation. So an HTTPS server should be able to discard any connection state, including headers already received, when a renegotiation happens. Again, consult your web server developer or vendor for their recommendations.
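
For illustration, here is roughly what the spliced plaintext looks like from the server’s point of view (paths, header names and cookie values invented):

```
   ...attacker's own TLS session with the server...
A: GET /order?deliver-to=attacker HTTP/1.1
A: X-Ignore-This:
   (header deliberately left unterminated)

   ...renegotiation; the victim's handshake is spliced in here...

   ...victim's request, decrypted into the same parse buffer...
V: GET /account HTTP/1.1
V: Host: shop.example.com
V: Cookie: session=4f09c...
```

The victim’s request line is swallowed by the attacker’s unterminated header, so the server executes the attacker’s request armed with the victim’s cookie. A server that drops its parse buffer at the renegotiation never joins the two halves.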

Preventing the attack at the client?

Again, you’re pretty much out of luck here – even sending a double carriage return to terminate any previous request would cause the attacker’s request to succeed.

The long term approach – fix the protocol

As you can imagine, there are some changes that can be made to TLS to fix all of this. The basic thought is to have client and server add a little information in the renegotiation handshake that checks that client and server both agree about what has already come before in their communication. This allows client and server both to tell when an interloper has added his own communication before the renegotiation has taken place.

Details of the current plan can be found at draft-rescorla-tls-renegotiate.txt

Final thoughts

Yeah, this is a significant attack against SSL, and particularly against HTTPS. There are few, if any, options for protecting yourself as a client, and not very many for protecting yourself as a server.

Considering how long it’s taken some places to get around to ditching SSLv2 after its own security flaws were found and patched 14 years ago with the development of SSLv3 and TLS, it seems like we’ll be trying to cope with these issues for many years to come.

Like it or not, though, the long-term approach of revising TLS is our best protection, and it’s important as users that we consider keeping our software up-to-date with changes in the security / threat landscape.

Update: read Part 2 of this discussion for answers to a number of questions.

Update: read Part 3 for some details on FTPS and the potential for attacks.

Why .NET apps keep crashing on your Tablet PC

I’ve been struggling with this issue for some time.

I have a small, simple .NET application I wrote in Visual C# a few months ago – I’ve tentatively titled it “iFetch”, because it fetches radio shows from the BBC iPlayer.

It really is very little more than a simple data grid view that displays the details of the shows and allows users to select them for downloading and later listening.

Despite that, I’ve had some terrible trouble with it. Sometimes it’ll work perfectly; other times it’ll suddenly crash, apparently without warning and for different reasons – sometimes when I click on a row, other times when I sort on a column heading.

The crash is intermittent, and doesn’t reproduce on other computers – even computers of the same configuration.

For those who want technical details, here we go – the crash is a System.StackOverflowException error, and appears to be due to an unchecked infinite recursion in System.Windows.Forms.dll!System.Windows.Forms.DataGridViewRow.DataGridViewRowAccessibleObject.Bounds.get().

The clue here is that this is a “DataGridViewRowAccessibleObject” – not a mere DataGridViewRow. These “AccessibleObject” versions of common .NET components only come into existence, and spread their effect, when an “accessibility application” is active on the system. Apparently, in addition to text-to-speech readers, braille devices and the like, a tablet – whether external like mine, or built in like those in a Tablet PC – counts as an accessibility application.

That’s why this bug was intermittent for me – sometimes I had my external graphics tablet plugged in, other times I didn’t. To make matters worse, it seems only to trigger when one or more rows in the DataGridView are hidden.

If you get this error, first check whether Microsoft have fixed the flaw – look for .NET service packs – and if there is no direct fix, try either unplugging your tablet, if you can, or temporarily stopping the Tablet PC Input Service while running the program.
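
If you’d rather do the latter from code – say, in a debug build while you wait for a fix – something like this should work. A hedged sketch: “TabletInputService” is, as far as I can tell, the service’s short name, and stopping it requires administrative rights:

```csharp
using System.ServiceProcess;  // add a reference to System.ServiceProcess.dll

class TabletServiceWorkaround
{
    static void Main()
    {
        // "TabletInputService" appears to be the short name of the
        // Tablet PC Input Service; adjust if your system differs.
        using (var svc = new ServiceController("TabletInputService"))
        {
            if (svc.Status == ServiceControllerStatus.Running)
            {
                svc.Stop();  // requires admin rights
                svc.WaitForStatus(ServiceControllerStatus.Stopped);
            }
        }
    }
}
```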

So far, I have received no feedback from Microsoft about when this will be fixed.

Why changing passwords should be done regularly

A little birdie sent me a copy of today’s SANS ISC diary entry. That’s a good thing, because I’m at home sick with alleged piggy flu, and I’m not able to keep up with a whole lot.

The diary entry argues that regular changes of passwords are often done for no other reason than “because we’ve always done it that way”.

Apparently, people responsible for security policy have “read somewhere” that you’re supposed to change passwords every ninety days, and having no other basis on which to proceed, that’s the policy carved in stone.

When asked why this policy is the way it is, the usual response is “good security practice”. In such environments, it’s difficult to give a good answer to someone who pushes back, arguing that changing passwords in their application is ‘difficult’ or, more often, ‘expensive’. This is, after all, business: if one side pleads “expense” and the other side pleads “good thing to do”, the latter side will lose.

So, why is it best practice?

One reason is that you have to recognise that for all that we tell users not to share their passwords, not to use the same password on multiple sites (aka “share their passwords”), etc, very often users will do exactly that. So, every ninety days, you change your password and you cut off everyone with whom you previously shared your password (to an extent).

Another reason is to allow changes in password policy to propagate out to new passwords. Suppose you suddenly realise that six-character passwords are easily cracked, so you change the password policy to require punctuation as well. Then you realise that, because no one ever has to change their password, the new policy will never be applied.

Those are the common arguments for regular password changes, and there are a few others, but there’s one I rarely hear being made.

What about when you do get an exposure?

In my professional career, I have seen, or heard of, a number of cases of exposure of password information. Sometimes it’s as simple as a departing employee who knows far too much information and may not be trusted, or as mind-boggling as a team sharing a list of important passwords, and one of the team members losing the list. Other times it’s more complex.

Each time, the response from security is the same – if the existing passwords are in danger of being used because of such exposure, then those passwords need to be changed.

Most times, the response from the business is the same – the passwords haven’t been changed in so long, and are spread through so many different applications, that nobody knows what will be affected if they change them.

Once you hit that scenario, it can be months before you get the password changed. Yes, months. And all during that time, the account may be compromised.

How do you prevent this?

Think of your disaster recovery drills – when there’s a process that needs to be followed quickly and correctly in an emergency situation, you achieve that by meticulous planning and regular exercise. You create the process and test it regularly, updating the process as you find there’s a need.

If you don’t change passwords on these high-value accounts once every 90 days (or so), how do you know that you’ll be able to change those passwords after an exposure or compromise? How will you guarantee that your password change procedures are current, without testing them? How will you enforce changes being documented if you don’t check the documentation against reality once in a while?