It seems like a strange question for me to ask, given that in a number of my National Cyber Security Awareness Month posts to date, I have been advising you to use SSL or TLS to protect your communications. [Remember: TLS is the new name for SSL, but most people still refer to it as SSL, so I will do the same below]
But it’s a question I get asked on a fairly regular basis, largely as a result of news articles noting that there has been some new attack or other on SSL that breaks it in some way.
To be fair, I’m not sure that I would expect a journalist – even a technology journalist – to understand SSL in such a way that they could give a good idea as to how broken it may or may not be after each successful attack. That means that the only information they’re able to rely on is the statement given to them by the flaw’s discoverers. And who’s going to go to the press and say “we’ve found a slight theoretical flaw in SSL, probably not much, but thought you ought to know”?
First, the good news.
SSL is a protocol framework around cryptographic operations. That means that, rather than fixing one particular set of algorithms that can’t be extended, it defines a way for the two parties to describe and negotiate the cryptography to be used, so that it can be extended when new algorithms come along.
So, when a new algorithm arrives, or a new way of using an existing algorithm (how can you tell the difference?), SSL can be updated to describe that.
So, in a sense, SSL will never be broken for long, and can always be extended to fix issues as they are detected.
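To make that concrete, here’s a minimal sketch (using Python’s standard ssl module, one implementation among many) of how an implementation exposes that extensibility: the protocol versions and cipher suites are configuration, not hard-wired constants.

```python
import ssl

# The context describes which protocol versions and cipher suites may
# be used; the concrete choice is negotiated with the peer at
# handshake time, and new suites arrive with library updates.
ctx = ssl.create_default_context()

# Refuse the older, broken protocol versions outright.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# The acceptable cipher suites are data, not hard-wired code.
suites = [c["name"] for c in ctx.get_ciphers()]
print(len(suites) > 0)   # True - plenty of negotiable suites available
```

This is exactly why a flaw in one suite or protocol version doesn’t kill SSL as a whole: the broken pieces can be disabled in configuration while the framework carries on.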
Of course, SSL is really only a specification, and it has to be implemented before it can actually be used. That means that when SSL is updated to fix flaws, theoretical or practical, every implementation has to be changed to catch up to the new version.
And implementers don’t like to change their code once they have it working.
So when a new theoretical flaw comes along, the SSL designers update the way SSL works, increasing the version number when they have to.
The implementers, on the other hand, tend to wait until there is a practical flaw before updating to support the new version.
This means that whenever a practical break is found, you can bet it will be at least several weeks before you can see it fixed in the versions you actually use.
The use of SSL assumes that your communications may be monitored, intercepted and altered. As such, don’t ever rely on a statement to the effect that “this breach of SSL is difficult to exploit, because you would have to get between the victim and his chosen site”. If getting in between weren’t possible, we wouldn’t need SSL in the first place.
Having said that, on a wired network, you are less likely to see interception of the type that SSL is designed to prevent. As such, even a broken SSL on a wired network probably remains safe for the time it takes everyone to catch up and fix their flaws.
On a wireless network, any flaws in SSL are significant – but as I’ve noted before, if you connect immediately to a trusted VPN, your wireless surfing is significantly safer, pretty much to the same level as you have on your home wired network.
In summary then:
SSL is frequently, and in some senses never, broken. There are frequent attacks, both theoretical and practical, on the SSL framework. Theoretical attacks are fixed in the specifications, often before they become practical. Practical attacks are fixed in implementations, generally by adopting the tack that had been suggested in the specifications while the attack was still theoretical. At each stage, the protocol that prevents the attack is still SSL (or these days, strictly, TLS), but it requires that you keep your computers up to date with patches as they come out, and enable new versions of SSL as they are made available.
If you’re on a wired network, the chances of your being attacked are pretty slim. If you’re on a wireless network, your chances of being attacked are high, so make sure you are using SSL or an equivalent protocol, and for extra protection, use a VPN to connect to a trusted wired network.
There are some people who seem to get this right away, and others to whom I seem to have been explaining this concept for years. [And you know who you are, if you’re reading this!]
Whenever you talk about keys used for encryption, you have to figure out how you’re going to keep those keys, and whether or not you need to protect them.
And the answer depends (doesn’t everything?) – and depends on what kind of encryption algorithm you are using.
Let’s start with the easy kind, the one we’re all familiar with.
This is the sort of code that I’m sure we all played with as children. The oh-so-secret code (well, we didn’t know about frequency counting or cryptanalysis back then), where you and your best friend knew the secret code and the secret key. [Probably a Caesar cipher, although I used a Vigenere cipher, myself]
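For the nostalgic, here’s a toy Vigenere cipher in Python (letters only, no spaces) – emphatically a childhood “secret code”, not something to protect real data with, since it falls to exactly the frequency counting and cryptanalysis mentioned above:

```python
# A toy Vigenere cipher: each letter of the key shifts the
# corresponding letter of the text. The same function, run backwards,
# decrypts - a hint that this is symmetric cryptography.
def vigenere(text, key, decrypt=False):
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text.upper()):
        shift = ord(key.upper()[i % len(key)]) - ord("A")
        out.append(chr((ord(ch) - ord("A") + sign * shift) % 26 + ord("A")))
    return "".join(out)

secret = vigenere("ATTACKATDAWN", "LEMON")
print(secret)                                    # LXFOPVEFRNHR
print(vigenere(secret, "LEMON", decrypt=True))   # ATTACKATDAWN
```

Note that you and your best friend both need the same key (“LEMON” here) – a point that matters a great deal below.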
Well, those codes, like us, have grown up. The category of shared-key cryptography – also known as symmetric cryptography, because the same keys (and sometimes the same operations) are used to encrypt and decrypt the data – has been enhanced hugely since those old and simple ciphers.
Now we have AES to contend with, and for all practical purposes, with reasonable keys, it’s unbreakable in usable time. [But if you have a spare universe to exhaust, perhaps you can crack my files]
For symmetric key cryptography, you do have to give out your key – to the party with whom you plan to exchange data. Of course, you have to protect this key as if it was as important as the data it protects, because it is all that protects your data. [Your attacker can tell what algorithm you use, and if you develop your own algorithm, well, they can tell what that is, too, because crypto algorithm inventors are generally doomed to fail to recognise the flaws in their own algorithm.]
That’s kind of a catch-22 situation – there’s really no way, using cryptography, to protect a key-sized piece of data other than by encrypting it with another key.
That’s why the British had to invent public key cryptography.
Of course, unlike the Americans, the British managed to keep this a secret – so much so that to this day, many Americans believe their country invented public key cryptography (along with apple pie, mothers and speaking English loudly to foreigners).
With public key cryptography, there are two keys for every cryptographic operation – the public key, and the private key.
OK, I don’t think this part is very tricky, but there are several people I’ve had to explain this to over and over again, so I’ll try to take it really slowly.
Of the two keys, there is one key that you are supposed to share with anyone and everyone. To some of you it may come as a surprise that this is the PUBLIC key.
Again, the PUBLIC key is something you can share with anyone and everyone with no known danger to date. You can print it on billboards, put it on your business cards, include it in your email, really you can do anything with it that distributes it to anyone who might want it.
In a pinch, you might want to make sure that you distribute the public key in a way that allows the recipients to associate it with their opinion of your identity.
But the PRIVATE key – no, no, no, no, no, you do not ever distribute that. You don’t even let someone else create it for you. You generate your private key for yourself, and you don’t ever tell it to anyone.
The simple reason is that anyone who has your private key can pretend to be you – in fact, for cryptographic purposes, they are you.
So, really simply now: the PUBLIC key is the one you share with everyone; the PRIVATE key is the one you never share with anyone.
If you think this is confusing, apparently you are right – even Microsoft’s official curriculum for the Windows Server 2003 training courses says that “Alice encrypts the message using Bob’s private key” – if Alice has Bob’s private key, she can exchange any secret message with Bob while they are in bed together that night. (To encrypt a message to Bob, of course, Alice should be using Bob’s PUBLIC key.)
Actually, scratch that – even my wife doesn’t have access to my private key, and I don’t have access to hers.
There are two operations that you can do with your private key. You can decrypt data, and you can sign data.
Reversing this, there are two operations that you can do with a public key – that would be someone else’s public key, not yours. You can encrypt data, and you can verify a signature.
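A toy sketch may help make those four operations concrete. This is textbook RSA with tiny primes – purely illustrative numbers, nothing like a real key, and with none of the padding schemes real systems require:

```python
# Textbook RSA with tiny primes - illustrative only; real keys are
# thousands of bits long and always used with a padding scheme.
p, q = 61, 53
n = p * q                   # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent:  (e, n) is the PUBLIC key
d = pow(e, -1, phi)         # private exponent: (d, n) is the PRIVATE key

message = 65

# Anyone may encrypt with the public key; only the private key decrypts.
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# Signing is the mirror image: only d can sign, anyone can verify with e.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```

Notice that the private exponent d never leaves the machine – everything anyone else needs is in (e, n).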
In many cryptographic exchanges, such as SSL / TLS, and other modern equivalents, asymmetric cryptography is used briefly at the start of each session, so that two parties can identify each other and exchange (or, more commonly, derive) a shared key. This shared key is then used to encrypt the subsequent communications for some time using symmetric key cryptography.
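Here’s a toy sketch of that hybrid pattern, using a Diffie-Hellman-style exchange with deliberately tiny parameters – real exchanges use large MODP groups or elliptic curves, authenticate the peers, and feed the result through a proper key derivation function:

```python
import hashlib
import secrets

# Tiny parameters, for readability only.
p, g = 0xFFFFFFFB, 5               # a small prime modulus and generator

a = secrets.randbelow(p - 2) + 1   # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # Bob's ephemeral secret

A = pow(g, a, p)                   # Alice sends A in the clear
B = pow(g, b, p)                   # Bob sends B in the clear

# Each side combines its own secret with the other's public value...
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # ...and both land on the same secret.

# The shared secret is hashed down into a symmetric session key, which
# then protects the bulk of the conversation.
session_key = hashlib.sha256(str(shared_alice).encode()).digest()
print(len(session_key))            # 32 bytes - e.g. an AES-256 key
```

The expensive asymmetric arithmetic happens once; everything after that runs at symmetric-cipher speed.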
For shared-key (aka symmetric) cryptography, you do have to share your keys – but you share them secretly with only the person to whom you are communicating. If you are trying to protect a communication between you and a partner, you cannot send the keys down the same line that you are going to send the communication down, because an attacker who can steal your communication can also steal your keys.
For asymmetric cryptography, you also have to share your keys – but only your public keys. Again, that’s only your public keys that you share. And you have to share those public keys. Your private keys are used by the various applications that decrypt data sent to you, or that sign data to prove it came from you. Anything outside of that realm that asks you for your private keys is not to be trusted.
Ask an expert if you still have concerns. Because if you give out your private keys, then you have to generate new ones, and distribute new public keys.
In yesterday’s post, we talked about how SSL and HTTPS don’t provide perfect security for your web surfing needs. You need to make sure that a site is also protecting its applications and credentials.
One of my favourite interview questions for security engineer candidates is to ask what an application developer could use to protect a networked application if SSL wasn’t available.
It’s an open ended question – what parts of SSL is the interviewee looking to match, and what parts are they willing to throw away with an alternative (and do they even know what they are throwing away?); and it asks the interviewee to think about how else they can achieve those goals.
I like to hear answers that cover a number of options. I won’t provide a perfect answer here, because I’m sure I’ll miss something, but here are some of the considerations I would give:
There are a number of different ways to secure network communications, providing for encryption, integrity and authentication – IPsec and VPN are just two methods that should spring immediately to mind. These are not universally suitable, as they tend to be all-or-nothing solutions, rather than per-application, but if you expect to see only one application running on the communicating pair of systems (this is relatively common in business communications), this can be acceptable. These are also a considerable effort to set up, and don’t always scale to inter-networked situations.
Hey, what’s wrong with encrypting and signing a file with PGP or S/MIME, or even WinZip, and sending it through email?
Not a whole lot, surely. We can get into discussions of key distribution and so on, but essentially, this is a solid technique. Maybe not easy to automate, and probably not accepted by everyone the world over, but from a “protected by encryption” standpoint, this is actually fairly defensible.
What I’m really trying to say here is that your application’s security rests on an understanding of what protections you can ask of your network – and of your network staff – and which you will have to implement in the application itself. For every protection that is available in the network, that’s less work you have to do in your application; and for every protection the network does not provide, that’s one more thing you have to write into the app itself.
Without knowing what security your network provides between you and all your communicating partners, you can’t truly know or guess what security you need to provide in your application. Without knowing what security your application provides, you can’t describe what network environment is appropriate to host that application.
We split the world into infrastructure and application so frequently, that it’s important to remember that we each have to understand a little of the other’s world in order to safely operate.
I know, it sounds like complete heresy, but there it is – SSL and HTTPS will not make your web site secure.
Even more appropriate (although I queued the title of this topic up almost a month ago) is this recent piece of news: Top FBI Cyber Cop Recommends New Secure Internet, which appears to make much the opposite point, that all our problems could be fixed if we were only to switch to an Internet in which everyone is identified (something tells me the FBI is not necessarily looking for us to use strong encryption).
There are a number of ways in which an HTTPS-only website, or HTTPS-only portion of a site, can be insecure. Here’s a list of just some of them:
It’s been a long time since web servers provided only static content in their pages. Now pretty much every web site has to serve “applications”, in which inputs provided by the visitor to the site are processed and reflected in the outputs.
There are any number of ways in which those inputs can produce bad outputs – Cross Site Scripting (XSS), on which I’ve posted before; Cross Site Request Forgery, allowing an attacker to force you to take actions you didn’t intend; SQL injection, where data behind a web site can be extracted and/or modified – these are just the most commonly known.
Applications can also fail to check credentials, fail to apply access controls, and even fail in some old-fashioned ways like buffer overflows leading to remote code execution.
Providing sensitive information in an application’s path, or through parameters passed in a URL, is another common means by which application authors, who think they are protected by using HTTPS, come a significant cropper. URLs – even HTTPS protected URLs – are often read, logged, and processed at both ends of the connection, and sometimes even in the middle!
Egress filtering in enterprises is often carried out by interrupting the HTTPS communication between client and server, using a locally-deployed trusted root certificate. This quite legitimately allows the egress filtering system to process URLs to determine what’s a safe request, and what’s a dangerous one. This can also cause information sent in a URL to be exposed. This is one reason why an application developer should avoid using GET requests to perform any data exchange involving user data, or data that the site feels is sensitive.
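A quick illustration of why secrets don’t belong in a URL – the site and credentials here are made up, of course:

```python
from urllib.parse import parse_qs, urlsplit

# Everything after the "?" travels in the HTTP request line, which web
# servers, proxies and egress filters routinely log - even when the
# connection itself is HTTPS.
url = "https://shop.example.com/login?user=alice&password=hunter2"
query = urlsplit(url).query
print(query)                        # user=alice&password=hunter2
print(parse_qs(query)["password"])  # ['hunter2'] - now sitting in an access log
```

Anything you’d object to seeing in a server’s access log belongs in the request body (a POST), not in the URL.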
Other path vulnerabilities – mostly fixed these days, but still something that attackers and scanning suites alike feel is worth trying – are those where the path can be changed by embedding extra slash or double-dot characters or sequences. Enough “..” entries in a path, and if the server isn’t properly written or managed, an attacker can escape out of the web server’s restrictions, and visit the operating system disk. The official term for this is a “path traversal attack”.
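The standard defence, sketched here in Python with a hypothetical web root, is to resolve the requested path against the document root and refuse anything that escapes it:

```python
import os.path

WEB_ROOT = "/var/www/site"   # hypothetical document root

def safe_path(requested):
    # Join and normalise, collapsing any "..", "." or doubled slashes.
    candidate = os.path.normpath(os.path.join(WEB_ROOT, requested.lstrip("/")))
    # Refuse the request unless the result is still under the root.
    if candidate != WEB_ROOT and not candidate.startswith(WEB_ROOT + os.sep):
        raise PermissionError("path traversal attempt: %r" % requested)
    return candidate

print(safe_path("images/logo.png"))       # /var/www/site/images/logo.png
try:
    safe_path("../../etc/passwd")
except PermissionError as err:
    print("refused:", err)
```

The key point is to normalise first and compare afterwards – checking for “..” in the raw string misses encoded and doubled-up variants.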
The presence of a padlock – or whatever your web browser shows to indicate an HTTPS, rather than HTTP, connection – indicates a few things, but perhaps fewer than you’d hope.
If you’re the sort of person who clicks through browser warnings, all you’ve managed to confirm is that your communication is encrypted, and the site you’ve connected to is trying to convince you it is secure. Note that this is exactly what a fraudulent site will try to do. The padlock isn’t everything.
Then think about where your secret information goes. If you’re like a lot of users, you’ll be using the same password on every site you connect to, or some variation thereof. Just because the site uses SSL does not mean that you can trust it with that password – SSL protects the information in transit, not what the site does with it on arrival.
If your bank doesn’t use HTTPS when accepting your logon information, it’s a sign that they really aren’t terribly interested in protecting that transaction. Maybe you should ask them why.
Many web sites will use HTTPS on parts of the site, and HTTP on others. Observe what they choose to protect, and what they choose to leave public. Is the publicly-transmitted information truly public? Is it something you want other people in the coffee shop or library to know you’re browsing?
Week 4 of National Cyber Security Awareness Month, and I’m getting into the more advanced topics of secure communications and protocols.
I figured I couldn’t start this topic without something that’s very near and dear to me – the security of FTP.
FTP is one of the oldest application protocols for the Internet. You can tell because it has a very low assigned port number (21).
You can also tell, because it actually has two assigned port numbers – 20 for ftp-data and 21 for ftp.
In many ways the old days of the Internet were really good, and in much the same ways, those days were bad. From a security perspective, for instance, those days were bad because none of the protocols considered security very much, if at all. Of course, you could look at this as ‘good’ and note that there weren’t really a whole lot of reasons to include security protections. Most of the original users were government, military or academic, and in each of these situations there were pretty good sanctions to use against evil-doers.
In the middle ages of the Internet, the security was still missing from many protocols, and people took advantage of them a lot. Additions like SSL were invented, and we are all familiar with using HTTPS on a web site to protect traffic to and from it.
Other protocols were simply shunned, as was the case with FTP, on the basis that no one was interested in updating them – after all, what with the web and all, who needs FTP?
Fast forward to modern day, and we find that FTP has a poor reputation for security. But is it deserved?
In some respects, yes – FTP has had its fair share of security badness in the past. But it’s also had its share of updates.
First, there was RFC 1579, Firewall Friendly FTP. Not much of a security advance – it promotes the use of PASV (passive) mode to open data connections, making it the server’s responsibility to be compatible with its own firewall.
Then came RFC 2228, FTP Security Extensions, dealing with additions to FTP to manage encrypted and integrity-protected connections for data and control channel. Good, but the only protocol supported is Kerberos, and nobody really uses that on the open Internet.
Next, RFC 2577, which addresses some of the common areas where FTP implementations suffer from security failings – a definite huge step forward, because finally even new FTP implementations could get things right in terms of many of the security issues seen in older versions.
And recently (OK, so it’s six years old this month in RFC form, and had been in development for a few years before that), RFC 4217, on Securing FTP with TLS – this applies the usual SSL and TLS network protection layers to FTP, building on the work defined in RFC 2228.
I don’t know, but I’m fairly certain that you will find FTP as it exists today is a far more secure protocol than the one described in, say, the PCI DSS requirements. In fact, if you’ve implemented an RFC 4217 compliant FTP server, enabled its protections, and made sure it implements the suggestions in RFC 2577, you can make a good case to your PCI Auditors (QSA, to use the technical term) that this is an acceptable and secure method of transferring data.
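As a sketch of how little code that takes these days, here’s RFC 4217-style FTP over TLS using Python’s standard library – the host and credentials in the usage note are hypothetical:

```python
from ftplib import FTP_TLS

def fetch_listing(host, user, password):
    ftps = FTP_TLS(host)          # control connection, port 21
    ftps.login(user, password)    # login() negotiates TLS (AUTH TLS) first
    ftps.prot_p()                 # ask for a protected data channel too
    listing = ftps.nlst()
    ftps.quit()
    return listing

# Usage, against a real RFC 4217-capable server (hypothetical details):
# print(fetch_listing("ftp.example.com", "alice", "s3cret"))
```

Note the prot_p() call: without it, the login is protected but the file transfers themselves still travel in the clear.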
So, what’s holding you back from using FTP in your secure environment now? Anything?
So, what did we learn this week?
Because the operating system doesn’t bother to help you hide user names, and because those user names are used in countless protocols as if they were public information, you’re backing a loser if you want to try and act as if the user name is some kind of secret. There is nothing wrong with having predictable user names. If you need more security, make the passwords longer.
Arguments from other security luminaries notwithstanding, I’m still of the opinion that there really is no benefit to renaming the Administrator account, and it’s going to cause plenty of irritation.
A fingerprint, despite being used as both a claim and a proof of identity, really needs to be seen as one or the other – as do other biometrics. Also worth noting are the ADA and other considerations: some people just don’t have readable fingerprints, if any at all.
As for authenticating by IP address – don’t do it. Just don’t do it. Use IP addresses as a filter, to cut out the noise, but don’t rely on that as your only authentication measure, because an IP address doesn’t have sufficient rigour to use as an authenticator.
While it’s tempting to think that a black-hole firewall is the best, because it sits silently not responding to unwanted traffic, there are some times when it’s important to respond to unwanted traffic with a “go away, I’m not talking to you”.
And do, please, leave comments or email to let me know if you’re enjoying this series, which is published because October is “National Cyber Security Awareness Month”.
So, given the information we have so far, you should be able to answer the question.
There are two schools of thought when it comes to how a firewall should behave in some situations.
The one school says that a firewall should ignore all traffic that reaches it, unless it is traffic that should be passed on. This is known as a “black hole”, or “fully stealthed” firewall, because it refuses to send any packets in response to communications it didn’t request.
The other school says that a firewall should respond to unexpected traffic exactly like a router that knows it is unable to reach the host being requested. This is the RFC-compliant firewall, because it looks to the RFC documents to decide what should be done in response to each packet it receives.
Black hole firewalls are named after the cosmological entity of the same name, because they suck packets in and never send them back out again.
Much like a black hole, however, their existence can be deduced from the simple absence of light passing through them – a range of IPs that should be responding with reset packets (aka “go away, not listening”) to incoming TCP requests is instead simply ignoring them. If the intent of the firewall was to make the attacker lose interest, you’ve already failed.
The RFC compliant firewall replies to every unwanted TCP connection request with a RST packet, to indicate that the targeted address is not interested in talking.
To a well-behaved TCP connection partner, this is a request to stop all communications and close the connection, without processing any further data.
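You can watch this behaviour from the client side. In this sketch, probing a closed port on a well-behaved (RFC-compliant) host gets an immediate refusal – the surfaced form of that RST – where a black-holed port would just hang until the timeout:

```python
import socket

def probe(host, port, timeout=2.0):
    """Return 'open', 'rst' (refused), or 'silence' (timed out)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return "open"
        except ConnectionRefusedError:
            return "rst"       # the RFC-compliant answer: a RST came back
        except socket.timeout:
            return "silence"   # black-holed or filtered

# Find a local port that is currently closed, then probe it: the OS,
# being RFC-compliant, answers the SYN with a RST.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
closed_port = tmp.getsockname()[1]
tmp.close()
print(probe("127.0.0.1", closed_port))   # rst
```

The instant “rst” answer is precisely the information a black-hole firewall withholds – for better and, as the next point shows, for worse.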
Which is fine, except all unexpected traffic at a firewall is an attack, right?
OK, I really telegraphed that one.
Some unwanted TCP packets are actually very informative, and the RST message sent in response is a useful part of keeping your systems safe.
Let’s suppose someone was able to predict, or otherwise get a hold of, the Initial Sequence Numbers we talked about in yesterday’s post. That someone, an attacker, would be able to spoof, or forge, a connection coming from your system, and connect to a targeted server. Even if they couldn’t see what information was coming back, they might be able to make an attack look like it came from you.
The classic example of “what can I do with a spoofed TCP connection” is that of sending email – spam, usually – in the name of a user of an ISP.
But those packets from the server, that the attacker can’t see (but can guess), do go somewhere – and if the Internet is working properly, they go to your computer, or the firewall sitting in front of your computer.
If your firewall is an RFC-compliant firewall, those packets will be seen by the firewall as unexpected and unwanted – and the firewall will send back a RST packet, demanding that the mail server stop trying to communicate with you. This may be the only indication to the server that anything is amiss. Your RST packet, if it arrives quickly enough, will prevent the spam run from being done in your name.
If your firewall is a black-hole firewall, on the other hand, no RST packets will be sent, and the communication between spoofer and server will continue uninterrupted, unabated, and with you potentially on the hook for emails sent “from your IP address”.
[Note that the same argument can be made for a network where the attacker is a man in the middle who can read and inject packets, but is unable to remove packets from the stream between you and the server.]
As with many of the other issues I’ve been talking about this month, there are differing views on this. I’m generally a fan of following the RFCs, because they’ve usually been arrived at by smart people persuading other smart people to a consensus. I’m sure that you’ll run into people with other opinions on this issue, so please feel free to ask more questions and share different opinions. The really fun topics in computing are those where there are multiple answers that could all be right.
So we’ve talked a little about names as claims of identities and passwords as proofs of those identities, continuing on to describe a fingerprint as a reasonable proof of identity, but perhaps not so useful when it has to be a claim and proof of identity at the same time.
A number of applications offer you the ability to accept or deny connections / requests from outsiders, based on their IP address. Good connections / requests from IP addresses that you know are allowed; bad connections / requests from IP addresses that you don’t know (or from IP addresses that you know are bad) are blocked.
Since this looks rather like an authentication scheme, let’s ask the question:
Well, for UDP, the claim appears to be the IP address in the “source address” component.
Is this IP address also a proof of identity?
Since I can forge a UDP datagram for any source IP address, I think that means that it can’t possibly be a proof of identity.
So, for UDP traffic, using the source IP address as any kind of authenticator is clearly a bad idea.
TCP is a little stronger of a case, because there’s a connection to be made, and some protections to be had. One of the protections is that the handshake at the beginning of the connection exchanges a couple of random numbers – known as initial sequence numbers (ISNs) – one from the client, and one from the server. Each side then has to acknowledge the other’s ISN, and number its own packets starting from its own ISN. This means that it’s harder to forge TCP connections than UDP requests, because the client has to see the ISN from the server, and vice versa.
So is a TCP connection’s source address a proof of identity? Not really, because of a number of factors.
All that the ISNs really do is provide a reasonable protection against massive floods of forged connection attempts, by requiring that the client be able to receive and respond to the server’s messages. Some schemes even make use of this further, and don’t create the actual connection object until receiving the first packet with the correct sequence number.
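A toy model of that ISN exchange shows why a blind spoofer has such a hard time: without seeing the server’s SYN-ACK, the attacker must guess a 32-bit number to complete the handshake.

```python
import secrets

# Toy model of the TCP three-way handshake's ISN exchange.
def new_isn():
    return secrets.randbits(32)     # modern stacks randomise the ISN

client_isn = new_isn()   # SYN:     client -> server, seq = client_isn
server_isn = new_isn()   # SYN-ACK: server -> client, seq = server_isn,
                         #          ack = client_isn + 1

def server_accepts(ack_value):
    # The server only completes the handshake if the final ACK
    # acknowledges the ISN the server chose.
    return ack_value == server_isn + 1

# The genuine client saw the SYN-ACK, so it can answer correctly...
print(server_accepts(server_isn + 1))   # True
# ...while a blind spoofer has one chance in 2**32 of guessing right.
```

That’s useful protection against casual forgery, but – as the next paragraphs show – it does nothing against an attacker who can see your traffic.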
There is always the possibility of a third party who can listen to your conversations, and spoof portions of your communications. They can know your ISN, and use it to initiate or continue connections you make to other servers.
This usually requires the attacker to be a “man in the middle” (MITM), and remember, an attacker can do that through a wireless connection.
The only protection you have is if you also are a part of this conversation and can send a quick message along the lines of “stop, don’t trust him, he’s not really me”.
This would be the RST, or reset, message, that aborts a TCP connection, usually because inappropriate (out-of-standard) traffic has been detected. We’ll touch on that more in the next post.
So, while it might be a good filter for convenience and traffic reduction, filtering by source IP address is not something you can consider as a security measure, because there is no authentication involved.
Rather like the “TCP evil flag”, it does require that someone be truthful when attacking you, so that you can repel them.
I’ve given a couple of arguments about names and why they shouldn’t be treated like passwords now, and because I always like to turn problems into classes of problems to be solved (the “meta-approach”), I figured a long time ago that I should decide what it is that makes a name different from a password, and whether I could apply that to the security world as a whole.
A name is a claim of your identity. “Hello, I’m Bob,” says the badge – but anyone could wear that badge, much to the consternation of any real Bobs who happen to be bobbing around in the background. [Another name for a claim is an “assertion”, but the term “claim” seems to have won out in recent security discussions.]
An identity claim is best when it’s unique – so that claiming to be “Bob” is only useful if no one other than the one, true “Bob” gets to use that name.
Your password, by comparison, is a proof of your identity.
Your password is not unique – which is good, because if you set your password to “frebbot”, and the system told you “that password is not unique”, you’d have a relatively easy time of going through each user on the system to find out who had that password.
Occasionally, systems are proposed where the entry of a password is all that is required to establish an identity – the proof is needed, but the claim is not. This is a truly pathological case of unique passwords, in which you don’t even need to work out whose account a guessed password belongs to – the password alone is enough to get in.
I’ll readily admit to enjoying using a fingerprint reader as a convenience device on my home computer. It’s a great way of quickly logging on to a system that no more than about four legitimate users are registered on, so I’m not going to say that fingerprints are unusable in all cases.
However, by the descriptions above, your fingerprint serves as both a claim of an identity, and a proof of that identity.
This alone rings alarm bells, and is a reason not to use it in an environment such as an enterprise, where hundreds, maybe thousands, of people are registered, and where a few relatively simple mistakes in reading a fingerprint could result in your being identified as an entirely different user.
There are, of course, further reasons why I would encourage security practitioners in business to avoid fingerprints as a security measure.
My argument here is much in the same vein as my previous post on choosing random usernames.
I’ve met a number of people who argue that renaming the built-in Windows Administrator account is a great security measure, because an attacker now has to guess the name of the Administrator account as well as its password.
Or they put it a little more scientifically, and say that renaming the Administrator account increases the entropy already present in the password.
Seems pretty obvious to me, really.
If your passwords don’t have enough entropy (more or less equivalent to “random” in the everyday sense of the word), add more entropy – use a wider range of characters, or simply make the minimum password length that much longer.
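The arithmetic behind that advice is straightforward: a password drawn uniformly from a character set of size c, with length n, carries n × log2(c) bits of entropy – and length beats character-set width surprisingly quickly:

```python
import math

# Entropy of a password chosen uniformly at random:
# length * log2(size of character set), in bits.
def entropy_bits(charset_size, length):
    return length * math.log2(charset_size)

print(round(entropy_bits(26, 8), 1))    # 8 lowercase letters: 37.6 bits
print(round(entropy_bits(94, 8), 1))    # 8 chars from all printable ASCII: 52.4 bits
print(round(entropy_bits(26, 16), 1))   # 16 lowercase letters: 75.2 bits
```

Doubling the length of an all-lowercase password adds more entropy than widening an eight-character password to the full printable set.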
If you make the name of the Administrator account a secret, that’s really no good, because the system works against you. It does absolutely nothing to protect you. A simple call to “NET USER” will list the local users, including the local administrator, and a relatively simple ICACLS command will apply rights to the local administrator – no matter its name – on a file. And then you can list the file’s permissions – again using ICACLS – to see what the name is for that administrator.
There is one argument I accept as moderately valid for renaming the Administrator – now you can go ahead and treat every attempt to use the “Administrator” account as an attack, because none of the valid uses of the renamed-Administrator account will use that user name.
So now for some arguments against – unlike the “doesn’t really add entropy” point above, these are areas in which renaming the Administrator account does actual harm:
And my favouritest argument of all: