SSL Tutorial – Tales from the Crypto

A late contribution to the expired-root-in-certificate-chain issue…

In case you missed it, on May 30th, a root certificate expired.

This made a lot of applications very unreliable, and has been widely regarded as a bad move.

Well, alright, what was regarded as a bad move is that applications should become unreliable in the specific circumstances involved here.

As short a TL;DR as I can manage

When you connect to a server (web site or application) over SSL/TLS, the server has to send your client (browser or application) its Certificate.

In modern code, this Certificate is used by the client to trace back to a signing authority that is trusted by the client or its operating system.

Some servers like to help this process out by sending a chain along with the Certificate, for a couple of reasons:

  1. The client might not have the ability or time to build and check its own certificate chain, and may choose to trust only servers that send it an entire chain it can trust
  2. Older clients might not be aware of the newer root certificates up to which the supplied Certificate chain connects.

This second situation is what we’re interested in here. A new root appears, new certificates are issued, and old clients refuse to honour them because they don’t have the new root in their trust store.

This is fixed with “cross-signing”, which allows an older, trusted root to sign the new untrusted root, so that the older client sees a chain that includes the older root at the top, and is therefore trusted.

Older root certificates expire. It takes 20 years, but at the end of May it finally happened to this one root certificate, “AddTrust External CA Root”.

When that happens, a client who builds the certificate chain and uses this to trust the root certificate is happy, because it sees only certificates that it trusts.

A client who takes the certificate chain as supplied by the server, without building its own, will see that the chain ends in an expired certificate, and refuse to connect, because the entire chain cannot be trusted.

What do I bring to the party?

The two links I provided earlier are well worth a read if you’re interested in solving this problem, and really, I’ve got nothing to add to how this issue occurred, why it’s a problem, how to address it at your server, or any of those fun things.

What I do offer is a tool for .NET (Windows, Linux, Mac, etc.) that lets you compare the certificate chain as presented by the server against the certificate chain built by a client. It will report if a certificate in either chain has expired. It’s written in C#, and built with Visual Studio, and takes one parameter – the site to which it will connect on port 443 to query for the certificate and chain.

It’s not a very smart tool, and it makes a few assumptions (though it’s relatively easy to fix if those assumptions turn out to be false).

But it has source code, and it runs on Windows, Linux and (presumably – haven’t tested) Mac.
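If you want the flavour of what the tool checks without reading the C#, here’s a minimal Python sketch of the core comparison. The certificate names and dates below are invented for illustration, shaped like the cross-signed AddTrust situation: the server still sends the old root, while the client builds a path to a newer, unexpired one.

```python
from datetime import datetime, timezone

def expired_certs(chain, now=None):
    """Return the names of the certificates in `chain` that have expired.

    `chain` is a list of (subject, not_after) pairs, as you might pull out
    of either the server-supplied chain or the one the client built.
    """
    if now is None:
        now = datetime.now(timezone.utc)
    return [subject for subject, not_after in chain if not_after < now]

# Invented names and dates: the server sends the old cross-signed root,
# the client builds a shorter path that stops at a newer trusted root.
server_chain = [
    ("leaf: some.site.example", datetime(2022, 1, 1, tzinfo=timezone.utc)),
    ("intermediate: USERTrust RSA Certification Authority", datetime(2030, 1, 1, tzinfo=timezone.utc)),
    ("cross-signed root: AddTrust External CA Root", datetime(2020, 5, 30, tzinfo=timezone.utc)),
]
client_chain = server_chain[:2]

when = datetime(2020, 6, 1, tzinfo=timezone.utc)
bad_in_server_chain = expired_certs(server_chain, when)  # flags the AddTrust root
bad_in_client_chain = expired_certs(client_chain, when)  # flags nothing
```

An old client that trusts the chain exactly as the server supplied it sees the expired entry; a client that built its own path does not, which is the whole story of the May 30th breakage in two list comprehensions.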

How does it look?

Working against the sites listed at, we get the following results:

First: – Certificate issued from a CA signed by AAA Certificate Services root.


Interestingly, note that the certificate chain in the stream from the server doesn’t include the root certificate at all, but it’s present when we ask the client code what certificates are in the chain for this server.

Second: – Certificate issued from a CA signed by AddTrust External CA Root.


The certificates here expired on 5/30/2020, and it’s no surprise that we see this result in both the chain provided by the server and the chain provided by the client. Again, the root certificate isn’t actually in the chain from the server provided in the stream.

Third: – Certificate issued from a CA signed by USERTrust RSA Certification Authority with a cross cert via AIA from AddTrust External CA Root.


Nothing noteworthy here, but it’s included here for completeness. I don’t do anything in this code for an AIA cross cert.

Fourth, and most importantly: – Certificate issued from a CA signed by USERTrust RSA Certification Authority with a cross cert via server chain from AddTrust External CA Root.


Here’s the point of the tool – it’s able to tell you that there’s a certificate in the chain from the server that has expired, and may potentially be causing problems to visitors using an older browser or client library.

Enough waffle, where’s the chicken?

By now, you’ve had enough of reading and you want to see the code – or just run it. I’ve attached two files – one for the source code, the other for the executable content. I leave it up to others to tell you how to install dotnet core on your platform.

Here’s the source code

And here’s the binary

Let me know if, and how, you use this tool, and whether it achieves whatever goal you want from it.

Corrections to Thierry Zoller’s Whitepaper

Thanks to Thierry Zoller for mentioning me in the FTP section of his whitepaper summary of the TLS renegotiation attacks on various protocols. I’m glad he also spells my name right – you’d be surprised how many people get that wrong, although I’m sure Thierry gets his own share of people unable to spell his name.

The whitepaper itself contains some really nice and simple documentation of the SSL MITM renegotiation attack, and how it works. It’s well worth reading if you’re looking for some insight into how this works.

First, though, a couple of corrections to Thierry’s summary – while he’s working on revising his whitepaper, I’ll post them here:

  • The name of my FTP server is “WFTPD Server”, or “WFTPD Pro Server”. Of the two, only the WFTPD Pro Server has FTP over TLS capabilities.
  • SFTP is not “FTP over SSH”. SFTP is a completely different protocol – a sub-protocol of SSH. For example, where FTP uses commands of three or four letters, SFTP uses binary single byte instructions. There is no one-to-one mapping between SFTP commands and FTP commands, and there are many other differences that aren’t really worth going into.
  • I don’t see the use of CCC as a reason to “use SFTP over FTPS” – I see it as a reason to not use FTPS clients that require, or FTPS servers that support, the CCC command. There are better ways to surmount the NAT problem in FTPS – the best is IPv6, but even in IPv4, the use of block-mode FTP over the default data channel removes NATs from being a problem for FTPS.
  • The language in the “Client Certificate Authentication” FTPS section appears to be an incomplete sentence. What I think he is trying to say is that when authentication is performed in FTPS, it resets the command state, so that commands entered prior to authentication, if they are executed at all, are executed in the context that existed at that time, rather than the newly-authenticated context.

I think that where FTPS has problems that are thrown into sharp relief with the SSL MITM renegotiation attacks I’ve been discussing for a while now, it has had those problems before. If an attacker can monitor and modify the FTP control channel (because the client requested CCC and the server allowed it), the attacker can easily upload whatever data they like in place of the client’s bona fide upload.

The renegotiation attack simply makes it easier for the attacker to hide the attack. It’s the use of CCC which facilitates the MITM attack, far more than the renegotiation does.

To address one further comment I’ve heard with regard to SSL MITM attacks, I hear “yeah, but getting to be a man-in-the-middle is so difficult anyway, that even a really simple attack is unlikely”. That’s a true comment – for the most part, there is little chance of a man-in-the-middle attack occurring in bulk on the general Internet. The ‘last mile’ is where most man-in-the-middle entry points exist: home wireless, coffee bars and other public wireless hangouts, DNS hijacking, HOSTS file editing, broadband router hacking, or just plain viruses and worms.

However, if you’re going to assert that it’s truly unlikely that an attacker can insert himself into your network stream, you basically have no reason whatever to use SSL / TLS – without a potential for that interception and modification of your traffic, there’s really no need to authenticate it, encrypt it, or monitor its integrity along the path.

The fact that a protocol or application uses SSL / TLS means that it tacitly assumes the existence of a man in the middle. If SSL / TLS allows a man-in-the-middle attack at all, it fails in its basic raison d’être.

Next post, I promise something other than SSL renegotiation attacks.

My take on the SSL MITM Attacks – part 3 – the FTPS attacks

[Note – for previous parts in this series, see Part 1 and Part 2.]

FTP, and FTP over SSL, are my specialist subject, having written one of the first FTP servers for Windows to support FTP over SSL (and the first standalone FTP server for Windows!)

Rescorla and others have concentrated on the SSL MITM attacks and their effects on HTTPS, declining to discuss other protocols about which they know relatively little. OK, time to step up and assume the mantle of expert, so that someone with more imagination can shoot me down.

FTPS is not vulnerable to this attack.

No, that’s plainly rubbish. If you start thinking along those lines in the security world, you’ve lost it. You might as well throw in the security towel and go into a job where you can assume everybody loves you and will do nothing to harm you. Be a developer of web-based applications, say. :-)

FTPS has a number of possible vulnerabilities

And they are all dependent on the features, design and implementation of your individual FTPS server and/or client. That’s why I say “possible”.

Attack 1 – renegotiation with client certificates

The obvious attack – renegotiation for client certificates – is likely to fail, because FTPS starts its TLS sessions in a different way from HTTPS.

In HTTPS, you open an unauthenticated SSL session, request a protected resource, and the server prompts for your client certificate.

In FTPS, when you connect to the control channel, you provide your credentials at the first SSL negotiation or not at all. There’s no need to renegotiate, and certainly there’s no language in the FTPS standard that allows the server to query for more credentials part way into the transaction. The best the server can do is refuse a request and say you need different or better credentials.

Attack 2 – unsolicited renegotiation without credentials

A renegotiation attack on the control channel that doesn’t rely on making the server ask for client credentials is similarly unlikely to succeed – when the TLS session is started with an AUTH TLS command, the server puts the connection into the ‘reinitialised’ state, waiting for a USER and PASS command to supply credentials. Request splitting across the renegotiation boundary might get the user name, but the password wouldn’t be put into anywhere the attacker could get to.
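To make that concrete, here’s a toy Python model of the control-channel state around AUTH TLS – a sketch, not any real server’s code, with invented reply text. The point is that AUTH TLS drops the session back to the ‘reinitialised’ state, so credentials only ever count when supplied inside the protected channel:

```python
class FtpsControlChannel:
    """Toy model of FTPS command state: starting TLS reinitialises the
    session, so USER and PASS must be supplied afresh under TLS."""

    def __init__(self):
        self.secured = False
        self.user = None
        self.logged_in = False

    def command(self, verb, arg=None):
        if verb == "AUTH" and arg == "TLS":
            self.secured = True
            # Back to the 'reinitialised' state: earlier credentials are void.
            self.user, self.logged_in = None, False
            return "234 Proceed with negotiation"
        if verb == "USER":
            self.user, self.logged_in = arg, False
            return "331 Password required"
        if verb == "PASS":
            if self.user is None:
                return "503 Login with USER first"
            self.logged_in = True
            return "230 Logged in"
        if not self.logged_in:
            return "530 Not logged in"
        return "200 Command okay"
```

Anything an attacker smuggles in before the TLS-protected login just gets a 530 – there is no point in the protocol at which a renegotiation can cause the server to ask for more credentials.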

Attack 3 – renegotiating the data connection

At first sight, the data connection, too, is difficult or impossible to attack – an attacker would have to guess which transaction was an upload in order to be able to prepend his own content to the upload.

But that’s betting without the effect that NATs had on the FTP protocol.

Because the PORT and PASV commands involve sending an IP address across the control channel, and because NAT devices have to modify these commands and their responses, in many implementations of FTPS, after credentials have been negotiated on the control channel, the client issues a “CCC” command, to drop the control channel back into clear-text mode.

Yes, that’s right: after negotiating SSL with the server, the client may throw away the protection on the control channel. The MitM attacker can then easily see what files are going to be accessed over what ports and IP addresses, and if the server supports SSL renegotiation, the attacker can put his data in at the start of the upload before renegotiating to hand off to the legitimate client. Because the client thinks everything is fine, and the server just assumes a renegotiation is fine, there’s no reason for either one to doubt the quality of the file that’s been uploaded.

How could this be abused? Imagine that you are uploading an EXE file, and the hacker prepends it with his own code. That’s how I wrote code for a ‘dongle’ check in a program I worked on over twenty years ago, and the same trick could still work easily today. Instant Trojan.

There are many formats of file that would allow abuse by prepending data. CSV files, most exploitable buffer overflow graphic formats, etc.
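The prepending itself is trivial. Here’s a Python sketch, with invented file contents, of what the server ends up storing when the attacker’s session feeds in bytes before the renegotiation hands off to the real client:

```python
def stored_after_attack(attacker_bytes, legitimate_upload):
    """What the server writes to disk: the attacker's bytes arrive first
    in his session, then the renegotiated session simply continues the
    same upload with the victim's data."""
    return attacker_bytes + legitimate_upload

# Invented CSV content: the attacker's row lands first, and the file
# still parses cleanly as CSV.
victim_upload = b"alice,100\nbob,50\n"
on_disk = stored_after_attack(b"mallory,99999\n", victim_upload)
```

The server sees one continuous, properly encrypted upload; nothing at the SSL layer distinguishes the attacker’s prefix from the victim’s data.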

Attack 3.5 – truncation attacks

While I’m on FTP over SSL implementations and the data connection, there’s also the issue that most clients don’t properly terminate the SSL connection in FTPS data transfers.

As a result, the server can’t afford to report an error when a MitM closes the TCP connection underneath them with an unexpected TCP FIN.

That’s bad – but combine it with FTP’s ability to resume a transfer from part-way into a file, and you realize that an MitM could actually stuff data into the middle of a file by allowing the upload to start, interrupting it after a few segments, and then when the client resumed, interjecting the data using the renegotiation attack.

The attacker wouldn’t even need to be able to insert the FIN at exactly the byte mark he wanted – after all, the client will be sending the REST command in clear-text thanks to the CCC command. That means the attacker can modify it, to pick where his data is going to sit.
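Here’s a small Python simulation of how REST lets the attacker choose where his bytes land – the file contents and offsets are invented, and the renegotiation mechanics are abstracted away into three writes:

```python
def store_with_rest(store, offset, data):
    """Model of STOR after REST <offset>: write `data` into the server's
    file starting at `offset`, extending the file if needed."""
    out = bytearray(store)
    end = offset + len(data)
    if end > len(out):
        out.extend(b"\x00" * (end - len(out)))
    out[offset:end] = data
    return bytes(out)

# Invented transfer: the client starts uploading, the MitM kills the
# connection, injects his own bytes via the renegotiation trick, then
# lets the client resume - having doctored the clear-text REST offset
# so the resumed data lands after his injection.
f = store_with_rest(b"", 0, b"PAYROLL:")           # client's first segments
f = store_with_rest(f, 8, b"EVIL")                 # attacker's injection
f = store_with_rest(f, 12, b"rest-of-the-file")    # client resumes at the doctored offset
```

Because the REST command travels in clear text after CCC, the attacker gets to pick the offset, and neither end notices the four bytes that neither of them would admit to sending.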

Not as earth-shattering as the HTTPS attacks, but worth considering if you rely on FTPS for data security.

How does WFTPD Pro get around these attacks?

1. I never bothered implementing SSL / TLS renegotiation – didn’t see it as necessary; never had the feature requested. Implementing unnecessary complexity is often cause for a security failure.

2. I didn’t like the CCC command, and so I didn’t implement that, either. I prefer to push people towards using Block instead of Stream mode to get around NAT restrictions.

I know, it’s merely fortunate that I made those decisions, rather than that I had any particular foresight, but it’s nice to be able to say that my software is not vulnerable to the obvious attacks.

I’ve yet to run this by other SSL and FTP experts to see whether I’m still vulnerable to something I haven’t thought of, but my thinking so far makes me happy – and makes me wonder what other FTPS developers have done.

I wanted to contact one or two to see if they’ve thought of attacks that I haven’t considered, or that I haven’t covered. So far, however, I’ve either received no response, or I’ve discovered that they are no longer working on their FTPS software.

Let me know if you have any input of your own on this issue.

My take on the SSL MitM Attacks – part 2 – clarifications

Since the last post I made on the topic of SSL renegotiation attacks, I’ve had a few questions in email. Let’s see how well I can answer them:

Q. Some stories talk about SSL, others about TLS, what’s the difference?

A. For trademark reasons, when SSL became an open standard, it had to change its name from SSL to TLS. TLS 1.0 is essentially SSL 3.1 – it even claims to be version “3.1” in its communication. I’ll just call it SSL from here on out to remind you that it’s a problem with SSL and TLS both.
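If you want to see the “3.1” claim for yourself, here’s a Python sketch that parses the five-byte TLS record header – a TLS 1.0 handshake record carries content type 22 and version bytes 0x03 0x01 on the wire (the sample bytes below are otherwise invented):

```python
import struct

def tls_record_header(raw):
    """Split the five-byte TLS record header into content type,
    (major, minor) protocol version, and payload length."""
    content_type, major, minor, length = struct.unpack("!BBBH", raw[:5])
    return content_type, (major, minor), length

# A TLS 1.0 handshake record announces itself as version 3.1, not 1.0.
ctype, version, length = tls_record_header(b"\x16\x03\x01\x00\x2f")
```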

Q. All the press coverage seems to be talking about HTTPS – is this limited to HTTPS?

A. No, this isn’t an HTTPS-only attack, although it is true that most people’s exposure to SSL is through HTTPS. There are many other protocols that use SSL to protect their connections and traffic, and they each may be vulnerable in their own special ways.

Q. I’ve seen some posts saying that SSH and SFTP are not vulnerable – how did they manage that?

A. Simply by being “not SSL”. SFTP is a protocol on top of SSH, and SSH is not related to SSL. That’s why it’s not affected by this issue. Of course, if there’s a vulnerability discovered in SSH, it’ll affect SSH and SFTP, but won’t affect SSL or SSL-based protocols such as HTTPS and FTPS.

Q. Is it OK to disable SSL renegotiation to fix this bug?

A. Obviously, if SSL didn’t need renegotiation at all, it wouldn’t be there. So, in some respects, if you disable SSL renegotiation, you may be killing functionality. There are a few reasons that you might be using SSL renegotiation:

  1. Because that’s how client authentication works – while you can do client authentication without renegotiation, most HTTPS implementations use renegotiation to request the client certificate. Disabling renegotiation will generally prevent most clients from authenticating with client authentication.
  2. After 10 hours, renegotiation is required, so as to refresh the session key. Do you have SSL connections lasting 10 hours? You probably should be looking at some disconnect/reconnect scenario instead.
  3. Because you can’t disable SSL renegotiation in all cases. In OpenSSL, you can only disable renegotiation if you download and install the new version, and in other SSL implementations, there is no way to disable renegotiation outside of modifying the application.

Q. Since this attack requires the attacker to become a man-in-the-middle, doesn’t that make it fundamentally difficult, esoteric, or close to impossible?

A. If becoming a man-in-the-middle (MitM) was impossible or difficult, there would be little-to-no need for SSL in the first place. SSL is designed specifically to protect against MitM attacks by authenticating and encrypting the channel. If a MitM can alter traffic and make it seem as if everything’s secure between client and server over SSL, then there’s a failure in SSL’s basic goal of protecting against men-in-the-middle.

Once you assume that an attacker can intercept, read, and modify (but not decrypt) the SSL traffic, this attack is actually relatively easy. There are demonstration programs available already to show how to exploit it.

I was asked earlier today how someone could become a man-in-the-middle, and off the top of my head I came up with six ways that are either recently or frequently used to do just that.

Q. Am I safe at a coffee shop using the wifi?

A. No, not really – over wifi is the easiest way for an attacker to insert himself into your stream.

When using a public wifi spot, always connect as soon as possible to a secured VPN. Ironically, of course, most VPNs are SSL-based, these days, and so you’re relying on SSL to protect you against possible attacks that might lead to SSL issues. This is not nearly as daft as it sounds.

Q. Is this really the most important vulnerability we face right now?

A. No, it just happens to be one that I understood quickly and can blather on about. I think it’s under-discussed, and I don’t think we’ve seen the last entertaining use of it. I’d like to make sure developers of SSL-dependent applications are at least thinking about what attacks can be performed against them using this step, and how they can prevent these attacks. I know I’m working to do something with WFTPD Pro.

Q. Isn’t the solution to avoid executing commands outside the encrypted tunnel?

A. Very nearly, yes. The answer is to avoid executing commands sent across two encrypted sessions, and to deal harshly with connections that try to send part of their content in one session and the rest in a differently negotiated session.

In testing WFTPD Pro out against FTPS clients, I found that some would send two encrypted packets for each command – one containing the command itself, the other containing the carriage return and linefeed. This is bad in itself, but if the two packets straddle either side of a renegotiation, disconnect the client. That should prevent the HTTPS Request-Splitting using renegotiation.
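As a sketch of that “disconnect the client” rule – in Python rather than WFTPD Pro’s actual code, with the buffering details invented – the server assembles CRLF-terminated commands from decrypted bytes, and refuses any command whose bytes straddle a renegotiation:

```python
class CommandAssembler:
    """Buffer decrypted control-channel bytes into CRLF-terminated
    commands; a command whose bytes straddle a renegotiation boundary
    causes a disconnect."""

    def __init__(self):
        self.pending = b""
        self.straddled = False

    def on_decrypted(self, data):
        self.pending += data
        commands = []
        while b"\r\n" in self.pending:
            cmd, self.pending = self.pending.split(b"\r\n", 1)
            if self.straddled:
                raise ConnectionError("command split across a renegotiation")
            commands.append(cmd)
        return commands

    def on_renegotiation(self):
        # A half-received command means its remainder will arrive under a
        # differently negotiated session - mark it for rejection.
        if self.pending:
            self.straddled = True
```

A renegotiation on an empty buffer is harmless; one that lands mid-command kills the connection, which is exactly what stops request-splitting through the boundary.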

One key behaviour HTTPS has is that when you request a protected resource, it will ask for authentication and then hand you the resource. What it should probably be doing is to ask for authentication and then wait for you to re-request the resource. That action alone would have prevented the client-certificate attacks discussed so far.

Q. What is the proposed solution?

A. The proposed solution, as I understand it, is for client and server to state in their renegotiation handshake what the last negotiated session state was. That way, an interloper cannot hand off a previously negotiated session to the victim client without the client noticing.

Note that, because this is implemented as a TLS handshake extension, it cannot be implemented in SSLv3. Those of you who just got done with mandating SSLv2 removal throughout your organisations, prepare for the future requirement that SSLv3 be similarly disabled.

Q. Can we apply the solution today?

A. It’s not been ratified as a standard yet, and there needs to be some discussion to avoid rushing into a solution that might, in retrospect, turn out to be no better – or perhaps worse – than the problem it’s trying to solve.

Even when the solution is made available, consider that PCI auditors are still working hard to persuade their customers to stop using SSLv2, which was deprecated over twelve years ago. I keep thinking that this is rather akin to debating whether we should disable the Latin language portion of our web pages.

However, it does demonstrate that users and server operators alike do not like to change their existing systems. No doubt IDS and IPS vendors will step up and provide modules that can disconnect unwarranted renegotiations.

Update: Read Part 3 for a discussion of the possible threats to FTPS.

My take on the SSL MITM Attacks – part 1 – the HTTPS attack

If you’re in the security world, you’ve probably heard a lot lately about new and deadly flaws in the SSL and TLS protocols – so-called “Man in the Middle” attacks (aka MITM).

These aren’t the same as old-style MITM attacks, which relied on the attacker somehow pretending strongly to be the secure site being connected to – those attacks allowed the attacker to get the entire content of the transmission, but they required the attacker to already have some significant level of access. The access required included that the attacker had to be able to intercept and change the network traffic as it passed through him, and also that the attacker had to provide a completely trusted certificate representing himself as the secure server. [Note – you can always perform a man-in-the-middle attack if you own a trusted certificate authority.]

The current SSL MITM attack follows a different pattern, because of the way HTTPS authentication works in practice. This means it has more limited effect, but requires less in the way of access. You gain some security advantage, you lose some. The attacker still needs to be able to intercept and modify the traffic between client and server, but does not get to see the content of traffic between client and server. All the attacker gets to do is to submit data to the server before the client gets its turn.

Imagine you’re ordering a pizza over the phone. Normally, the procedure is that you call and tell them what the pizza order is (type of pizza, delivery address), and they ask you for your credit card number as verification. Sometimes, though, the phone operator asks for your credit card number first, and then takes your order. So, you’re comfortable working either way.

Now, suppose an attacker can hijack your call to the pizza restaurant and mimic your voice. While playing you a ringing tone to keep you on the line, he talks to the phone operator, specifying the pizza he wants and the address to which it is to be delivered. Immediately after that, he connects you to your pizza restaurant, you’re asked for your credit card number, which you supply, and then you place your pizza order.

Computers are as dumb as a bag of rocks. Not very smart rocks at that. So, imagine that this phone operator isn’t smart enough to say “what, another pizza? You just ordered one.”

That’s a rough, non-technical description of the HTTPS attack. There’s another subtle variation, in which the caller states his pizza order, then says “oh, and ignore my attempt to order a pizza in a few seconds”. The computer is dumb enough to accept that, too.

For a more technical description, go see Eric Rescorla’s summary at Understanding the TLS Renegotiation Attack, or Marsh Ray’s original report.

Let’s call these the HTTPS client-auth attack and the HTTPS request-splitting attack. That’s a basic description of what they do.

HTTPS client-authentication attack

The client-authentication attack is getting the biggest press, because it allows the attacker one go (per try) at persuading the server to perform an action in the context of the authenticated user. Anything that can be caused in a single request to a web site – from ordering a pizza on up – can be achieved with this attack.

Preventing the attack at the server

Servers have been poorly designed in this respect – but out of some necessity. Eric Rescorla explains this in the SSL and TLS bible, “SSL and TLS” [Subtitle: Designing and Building Secure Systems] on page 322, section 9.18.

“The commonly used approach is for the server to negotiate an ordinary SSL connection for all clients. Then, once the request has been received, the server determines whether client authentication is required… If it is required, the server requests a rehandshake using HelloRequest. In this second handshake, the server requests client authentication.”

How does HTTP handle other authentication, such as Forms, Digest, Basic, Windows Integrated, etc? Is it different from the above description?

A client can provide credentials along with its original request using the Authorization header, or the server can refuse an unauthorised (anonymous) request with a 401 error code indicating that authentication is necessary (and listing WWW-Authenticate headers containing appropriate challenges). In the latter case, the client resends the request with the appropriate Authorization header.
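For reference, here’s that challenge/response exchange sketched in Python, with invented credentials and realm – the server’s refusal carries WWW-Authenticate, and the client’s retry carries Authorization:

```python
import base64

def authorization_header(user, password):
    """The header a client resends with its retried request after a
    401 challenge offering Basic authentication."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

# The server's challenge (invented realm), then the client's retry header.
challenge = 'HTTP/1.1 401 Unauthorized\r\nWWW-Authenticate: Basic realm="site"\r\n\r\n'
retry = authorization_header("user", "pass")
```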

HTTPS Mutual Authentication (another term for client authentication) doesn’t do this. Why on earth not? I’m not sure, but I think it’s probably because SSL already has a mostly unwarranted reputation for being slow, and this would add another turnaround to the process.

Whatever the reason, a sudden dose of unexpected ‘401’ errors would lead to clients failing, because they aren’t coded to re-request the page with mutual auth in place.

So, we can’t redesign from scratch to fix this immediately – how do we fix what’s in place?

The best way is to realise what the attack can do, and make sure that the effects are as limited as possible. The attack can make the client engage in one action – the first action it performs after authenticating – using the credentials sent immediately after requesting the action to be performed.

A change of application design is warranted, then, to ensure that the first thing your secure application does on authenticating with a client certificate is to display a welcome screen, and not to perform an action. Reject any action requested prior to authentication having been received.

Sadly, while this is technically possible using SSL if you’ve written your own server to go along with the application, or can tie into information about the underlying SSL connection, it’s likely that most HTTPS servers operate on the principle that HTTP is stateless, and the app should have no knowledge of the SSL state beyond “have I been authenticated or not”.

Doubtless web server vendors are going to be coming out with workarounds, advice and fixes – and you should, of course, be looking to their advice on how to fix this behaviour.

The best defence against the client-authentication attack, of course, is to not use client authentication.

Preventing the attack at the client

Not much you can do here, I’m afraid – the client can’t tell if the server has already received a request. Perhaps it would work to not provide client certificates to a server unless you already have an existing SSL connection, but that would kill functionality to perfectly good web sites that are operating properly. Assuming that most web sites operate in the mode of “accept a no-client-auth connection before requesting authentication”, you could rework your client to insist on this happening all the time. Prepare for failures to be reported.

Again, the best defence is not to use client authentication right now. Perhaps split your time between browsers – one with client certificates built in for those few occasions when you need them, and the other without client certs, for your main browsing. That will, at least, limit your exposure.

HTTPS Request-splitting attack

Preventing the attack at the server

The HTTPS Request-splitting attack is technically a little easier to block at the server, if you write the server’s SSL interface – there should be absolutely no reason for an HTTP Request to be split across an SSL renegotiation. So, an HTTPS server should be able to discard any connection state, including headers already sent, when renegotiation happens. Again, consult with your web server developer / vendor for their recommendations.
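A Python sketch of that “discard on renegotiation” rule – not any real server’s implementation, and with the request parsing reduced to finding the blank line:

```python
class HttpsRequestReader:
    """Accumulate decrypted bytes into one HTTP request, and throw away
    anything partial whenever the TLS layer renegotiates."""

    def __init__(self):
        self.pending = b""

    def on_decrypted(self, data):
        self.pending += data
        if b"\r\n\r\n" in self.pending:
            request, _, self.pending = self.pending.partition(b"\r\n\r\n")
            return request
        return None

    def on_renegotiation(self):
        # No legitimate client splits a request across a renegotiation,
        # so a half-received request must not survive one.
        self.pending = b""

reader = HttpsRequestReader()
reader.on_decrypted(b"GET /order-pizza HTTP/1.1\r\nX-Ignore-Next: ")  # attacker's prefix
reader.on_renegotiation()                                             # state discarded
handled = reader.on_decrypted(b"GET / HTTP/1.1\r\n\r\n")              # victim's own request
```

With the discard in place, the attacker’s half-request dies at the renegotiation boundary and only the victim’s genuine request is served.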

Preventing the attack at the client?

Again, you’re pretty much out of luck here – even sending a double carriage return to terminate any previous request would cause the attacker’s request to succeed.

The long term approach – fix the protocol

As you can imagine, there are some changes that can be made to TLS to fix all of this. The basic thought is to have client and server add a little information in the renegotiation handshake that checks that client and server both agree about what has already come before in their communication. This allows client and server both to tell when an interloper has added his own communication before the renegotiation has taken place.

Details of the current plan can be found at draft-rescorla-tls-renegotiate.txt
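The idea can be sketched in a few lines of Python – this is a toy illustration of the binding, using an HMAC as a stand-in for the real Finished verify_data, not the actual TLS construction:

```python
import hashlib
import hmac

def verify_data(secret, transcript):
    """Stand-in for the TLS Finished verify_data: a MAC over the
    handshake transcript of the session just negotiated."""
    return hmac.new(secret, transcript, hashlib.sha256).digest()

def renegotiate(secret, transcript, peer_claim):
    """Each side presents what it believes the previous session's
    Finished data was; a mismatch means the previous session was
    negotiated with someone else, so the renegotiation is refused."""
    if not hmac.compare_digest(verify_data(secret, transcript), peer_claim):
        raise ConnectionError("renegotiation not bound to our previous session")
    return "renegotiated"
```

An interloper who negotiated the first session himself cannot produce the victim client’s view of that session, so the hand-off is detected and refused.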

Final thoughts

Yeah, this is a significant attack against SSL, or particularly HTTPS. There are few, if any, options for protecting yourself as a client, and not very many for protecting yourself as a server.

Considering how long it’s taken some places to get around to ditching SSLv2 after its own security flaws were found and patched 14 years ago with the development of SSLv3 and TLS, it seems like we’ll be trying to cope with these issues for many years to come.

Like it or not, though, the long-term approach of revising TLS is our best protection, and it’s important as users that we consider keeping our software up-to-date with changes in the security / threat landscape.

Update: read Part 2 of this discussion for answers to a number of questions.

Update: read Part 3 for some details on FTPS and the potential for attacks.

How to send a close_notify at the end of an SSL connection

One of the more confusing parts of writing code to correctly work an SSL connection is the final act – the closure.

Here’s how to do it in Windows’ SChannel:

    // phCtx is the pointer to the context handle you’ve already been using for SSL.
    DWORD dwshut = SCHANNEL_SHUTDOWN;
    SecBuffer sbshut = {sizeof(dwshut), SECBUFFER_TOKEN, &dwshut};
    SecBufferDesc sdshut = {SECBUFFER_VERSION, 1, &sbshut};
    SECURITY_STATUS sec_ret = ApplyControlToken(phCtx, &sdshut);
    ASSERT(sec_ret == SEC_E_OK); // You’ll want to do better handling than just “assert”.
    // Now have SChannel build the close_notify token into sbshut. phCred and
    // dwSSPIFlags stand for the credentials handle and request flags (include
    // ASC_REQ_ALLOCATE_MEMORY) you used when setting up the context.
    sbshut.pvBuffer = NULL; sbshut.cbBuffer = 0;
    DWORD dwOutFlags = 0;
    sec_ret = AcceptSecurityContext(phCred, phCtx, NULL, dwSSPIFlags,
        SECURITY_NATIVE_DREP, phCtx, &sdshut, &dwOutFlags, NULL);

At this point, you’ll need to send the contents of sbshut.pvBuffer (length is in sbshut.cbBuffer) in the stream (after anything else encrypted you’ve queued up), because it contains the close_notify message. You’ll likely have to read – and decrypt – more response back from your peer, checking for it to either close the stream, or send a matching close_notify.

[The documentation for DecryptMessage online at Microsoft’s MSDN now correctly describes how to recognise and react to a peer’s close_notify alert.]

After verifying that you’re receiving a close_notify from the other end, you’ll be in a loop with AcceptSecurityContext, responding to the peer, and sending what AcceptSecurityContext tells you to, until ASC (as we insiders call it) returns SEC_I_CONTEXT_EXPIRED or SEC_E_OK.
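For contrast, higher-level TLS APIs bundle this whole exchange into a single call. Here’s a sketch using Python’s ssl module – the function name is mine, but `unwrap()` is the real method that performs the shutdown handshake:

```python
import socket
import ssl

def tls_shutdown(tls_sock: ssl.SSLSocket) -> socket.socket:
    # unwrap() sends our close_notify, waits for the peer's matching
    # close_notify, and hands back the still-open TCP socket underneath.
    return tls_sock.unwrap()
```

The returned plain socket can then be closed normally, or reused if your protocol continues in the clear.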

SSL development gotchas.

There are two behaviours in SSL that seem to catch out a number of people.

The first is the use of close_notify.

close_notify is an operation in SSL that terminates the SSL session, providing a definite end to the stream that is being protected. As it provides an HMAC summarising the entire communication so far, it’s a solid, reliable record that your stream has not been interrupted in its journey to you, and that you have received the entire stream.

Consider close_notify to be an essential part of the stream, when you’re writing a stream, but think carefully about the possibilities when reading the stream.

In many protocols, there is some other component of the stream that can already indicate an end, and the HMAC protecting that component (which SSL considers to be simply data) can be relied upon to indicate that you have not been interrupted by evil-doers. HTTP, for instance, sends either a byte count, or uses chunked-encoding, where each chunk is counted. As a result, the close_notify doesn’t tell you anything more than you already know about the stream – it has finished. So, a lack of close_notify in an HTTPS stream, while a sign of technically poor SSL development, is not a fault in the use of SSL.
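A toy illustration of the point above – the helper is hypothetical, but it shows how HTTP itself tells the client where the body ends, independent of TLS closure:

```python
def http_body_complete(headers: dict, body: bytes) -> bool:
    # HTTP declares the body length up front, so a client can tell it has
    # received the whole response without waiting for a close_notify.
    expected = int(headers.get("Content-Length", "0"))
    return len(body) >= expected
```

A truncation attack that chops the stream short leaves the body smaller than the declared length, and the client notices – no close_notify required.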

FTPS, on the other hand, is a different beast – data transfers under FTPS, if they are protected by SSL, start at the beginning of the SSL session, and finish at the end of the SSL connection. SSL connections can terminate either with a close_notify, or with a TCP FIN – closing the underlying stream. SSL’s design assumes that the TCP FIN can be forged – and indeed, so it can, in many environments. So, for a protocol like FTPS, a close_notify should really be treated as essential – though the last time I looked, out of a dozen FTPS clients I tried, only one actually sent the close_notify at the end of the upload.
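If you’re writing such a client in Python, the ssl module has a switch for exactly this. A sketch, with hypothetical socket and host names – the key is `suppress_ragged_eofs=False`, which makes a bare TCP FIN raise an error instead of looking like a clean end of stream:

```python
import socket
import ssl

def wrap_ftps_data_connection(raw_sock: socket.socket,
                              host: str) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()
    # With suppress_ragged_eofs=False, a TCP FIN arriving without a
    # preceding close_notify raises SSLEOFError on read, rather than
    # being reported as a normal end of data.
    return ctx.wrap_socket(raw_sock, server_hostname=host,
                           suppress_ragged_eofs=False)
```

The default (`True`) quietly treats a forged FIN as end-of-file – fine for HTTPS, dangerous for FTPS data transfers.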

Here’s a link to “how to send a close_notify at the end of an SSL connection“.

The second ‘gotcha’ is how to correctly handle client certificates.

An SSL server always has a certificate that it uses to identify itself. A client might have one or several certificates. Client certificates are requested by the SSL server if the SSL server wants them; in some cases, an SSL server might be able to accept that the client has no certificate.

For instance, going back to FTPS again, a connection to the FTPS server might be authenticated by certificate exchange, or – after the SSL session has been initiated – by using a username and password. Remember, sending a user name and password over SSL means that they are protected from snooping by everyone except the FTPS server administrator. So, an FTPS server might very reasonably ask for a client certificate, but not mind if the client hasn’t got one.

If a server can accept a certificate from the client, the only way it can indicate this is to ask for mutual authentication. Some clients are coded with the idea that a request from the server for mutual auth is actually a requirement for mutual auth, and they will throw up an error if they don’t have a client certificate, or the user hasn’t selected a certificate to use.

This is incorrect behaviour – if the client has been asked for a certificate, but cannot provide any, it should simply respond with a list of certificates that is empty. This only becomes an error condition if the server requires at least one certificate from the client.
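Python’s ssl module expresses this request/require distinction directly on the server side, which makes a nice concrete example:

```python
import ssl

# Server side: request a client certificate, but tolerate an empty reply.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_OPTIONAL    # ask; an empty certificate list is fine
# ctx.verify_mode = ssl.CERT_REQUIRED  # insist; an empty list fails the handshake
```

With `CERT_OPTIONAL`, a well-behaved client that has no certificate simply answers with an empty list and the handshake proceeds – exactly the behaviour described above.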

SSL Tutorial part 0.

So you want to protect your TCP application’s traffic?

You’ve been writing network code for a while, using TCP, and you’ve faced the bugbears of reliability and performance, but now you’re looking for a real challenge.

You want to secure your network traffic; you want to securely authenticate the server and maybe even the client.

Or perhaps your users are simply screaming for the protection of SSL, even if they don’t know what it means, because “everyone else has it”.

There are obviously several reasons you might have to use SSL to protect your network traffic – and over the next few blog entries, I’m going to advise you on how you might add SSL to your client or server, and what benefits you’ll get from doing so.

I’m going to start with a brief run-down of what SSL can provide, in its most common configuration.  There are some pedants who will tell you all about using Diffie-Hellman (DH) key exchange, so that no one needs a certificate, or a NULL encryption cipher, so that you can read the SSL-wrapped communication, but neither of those applies in the general case that we’re going to talk about.  When you have finished reading this set of columns, you’ll be able to take an HTTP client or server and turn it into HTTPS, or take an FTP client or server and make it support FTPS.

So, to begin, here’s a list of what SSL gives you over and above what you already have with your TCP application.

  • Server Authentication: SSL requires that the server send a certificate to the client, identifying itself.

  • Client Authentication: SSL allows the server to ask the client for a certificate, which will identify it.

  • Communication privacy: Apart from the first few bytes of the exchange, all traffic is encrypted with a symmetric cipher.

  • Communication integrity: A special checksum, called an HMAC, is used to ensure that bits within the ciphered text have not been altered, extra text has not been added, and that the communications stream has not been closed early by a hacker (or by network faults).

Now, here’s a list of some interesting changes that SSL makes to your TCP traffic:

  • Session initialisation requires a significant amount of traffic (certificate exchange) before the first byte of your data can flow.

  • TCP is a stream-based protocol, with no suggestion of message boundaries; SSL encrypts your data stream as a series of discrete messages within the TCP stream, and a message must be fully received before being decrypted (otherwise it is not protected by the HMAC).

  • You have to think carefully about closure issues – what does a TCP RST mean, or a TCP FIN?  You thought you understood those terms already, but they may have a different interpretation when you’re trying to secure a communication.

  • In a client, in addition to resolving the server’s name to an IP address, you also have to check that the server’s certificate matches the name of the server you thought you were trying to reach.

  • Your carefully-calculated performance-enhancing measures are all going to go up the spout; the overhead of encryption, plus the requirement to work within the message size of SSL is going to seriously impact performance.
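On the name-checking point above, modern libraries will do the work for you if you let them. In Python’s ssl module, for instance, a default client-side context already enforces both checks – the server’s certificate must chain to a trusted root, and must match the name you asked for:

```python
import ssl

# A default client context verifies the server's certificate chain AND
# checks that the certificate matches the hostname you connect to.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                     # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
```

Turning either of these off is what quietly reintroduces man-in-the-middle attacks, so resist the temptation when a test certificate fails to validate.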

Until next time, happy coding!