Category Archives: NCSAM

Government Shuts Down for Cyber Security

In a classic move, clearly designed to introduce National Cyber Security Awareness Month with quite a bang, the US Government has shut down, making it questionable whether National Cyber Security Awareness Month will actually happen.

In case the DHS isn’t able to make things happen without funding, here’s what they originally had planned:

[Image: DHS’s originally planned schedule of NCSAM activities]

I’m sure you’ll find me and a few others keen to engage you on Information Security this month in the absence of any functioning legislators.

Maybe without the government in charge, we can stop using the “C” word to describe it.

UPDATE 1

The “C” word I’m referring to is, of course, “Cyber”. Bad word. Doesn’t mean anything remotely like what people using it think it means.

UPDATE 2

The main page of the DHS.GOV web site actually does carry a small banner indicating that there’s no activity happening at the web site today.

[Image: banner on the DHS.GOV main page noting the shutdown]

So, there may be many NCSAM events, but DHS will not be a part of them.

NCSAM/2011–Post 21–Failure is always an option

For my last post of National Cyber Security Awareness Month, I’d like to expound on an important maxim for security.

Failure is always an option – and sometimes the best

If you can’t handle a customer’s credit card in a secure fashion, you shouldn’t be handling the customer’s credit card.

If a process is too slow when you add the necessary security, the process was always too slow, and cannot yet be done effectively by modern computers (or at least, the computers you’re using).

If you enable a new convenience feature, and the rate of security failures increases as a result, the convenience benefits the hackers more than the users, and the feature should be removed or revisited.

Accept your own failures and deal with them

Sometimes there’s nothing to do but to say “Oops, that didn’t work”. Find something else that does.

If you’re writing software code, expect to encounter failing conditions – disk full, network unresponsive, keyboard stuck, database corrupt, power outage – all these are far more common than software developers anticipate.
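Here’s a minimal sketch of that mindset in Python – the function name and the notification step are invented for illustration – treating each failure condition as a normal, expected outcome:

```python
import errno
import socket

def save_report(path: str, data: bytes, host: str) -> bool:
    """Hypothetical example: store data locally, then notify a peer.
    Every step can fail, and failure is handled as a normal outcome."""
    try:
        with open(path, "wb") as f:
            f.write(data)
    except OSError as e:
        if e.errno == errno.ENOSPC:
            print("Disk full - report not saved, and we say so loudly")
        else:
            print(f"Could not write {path}: {e}")
        return False  # failure is an option; the caller knows it happened

    try:
        # Stand-in for a real notification to a remote service.
        with socket.create_connection((host, 443), timeout=5):
            pass
    except OSError as e:
        print(f"Network unresponsive ({e}) - will retry later")
        return False

    return True
```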

Failure is not the exception, it is a part of life in an uncertain universe.

Handle other people’s failures gracefully

Other people will fail you.

This is not always their intent, nor is it necessarily something that they will recognise. Do not punish unintentional failure as if it were an intentional insult. Educate where possible; redirect otherwise.

Where failure is intentional, be firm and decisive. Do not allow deliberate failure to continue unhindered.

Failure is always a necessary part of innovation

Innovation is doing that which has never been done before.

As a result, no one knows how to do it correctly. You will fail, a lot. If you are always right, it is because you are doing something you already know how to do.

Because failure is ubiquitous, look for it everywhere

Part of being a security expert is the ability to see where people, process and technology are likely to fail, and how someone might take advantage of that, or cause you disadvantage.

Turn “I can’t imagine how that might fail” into “I can see seven different ways this could screw up, and I’ve got plans for eight of them”.

And yes, I failed to finish writing this in National Cyber Security Awareness Month.

NCSAM/2011–Post 20–Is SSL broken?

It seems like a strange question for me to ask, given that in a number of my National Cyber Security Awareness Month posts to date, I have been advising you to use SSL or TLS to protect your communications. [Remember: TLS is the new name for SSL, but most people still refer to it as SSL, so I will do the same below]

But it’s a question I get asked on a fairly regular basis, largely as a result of news articles noting that there has been some new attack or other on SSL that breaks it in some way.

To be fair, I’m not sure that I would expect a journalist – even a technology journalist – to understand SSL well enough to judge how broken it may or may not be after each successful attack. That means that the only information they’re able to rely on is the statement given to them by the flaw’s discoverers. And who’s going to go to the press and say “we’ve found a slight theoretical flaw in SSL, probably not much, but thought you ought to know”?

The good news

First, the good news.

SSL is a protocol framework around cryptographic operations. That means that, rather than mandating one fixed set of cryptography that can’t be extended, it describes how the two parties agree on the cryptography to be used, so that it can be extended when new algorithms come along.

So, when a new algorithm arrives, or a new way of using an existing algorithm (how can you tell the difference?), SSL can be updated to describe that.

So, in a sense, SSL will never be broken for long, and can always be extended to fix issues as they are detected.
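You can watch that negotiation happen from Python’s standard ssl module (despite the name, it speaks TLS); the host here is just a placeholder, and the framework picks the best version and cipher suite both ends support:

```python
import socket
import ssl

ctx = ssl.create_default_context()  # current platform defaults

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # the protocol version both ends agreed on
        print(tls.cipher())   # the negotiated cipher suite
```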

Now for the bad news

Of course, SSL is really only a specification, and it has to be implemented before it can actually be used. That means that when SSL is updated to fix flaws, theoretical or practical, every implementation has to be changed to catch up to the new version.

And implementers don’t like to change their code once they have it working.

So when a new theoretical flaw comes along, the SSL designers update the way SSL works, increasing the version number when they have to.

The implementers, on the other hand, tend to wait until there is a practical flaw before updating to support the new version.

This means that whenever a practical break is found, you can bet it will be at least several weeks before you can see it fixed in the versions you actually use.

In moderation

The whole premise of SSL is that your communications may be monitored, intercepted and altered. As such, don’t ever rely on a statement to the effect that “this breach of SSL is difficult to exploit, because you would have to get between the victim and his chosen site”. If that wasn’t possible, we wouldn’t need SSL in the first place.

Having said that, on a wired network, you are less likely to see interception of the type that SSL is designed to prevent. As such, even a broken SSL on wired networks is probably secure for the time it takes everyone to catch up to fixing their flaws.

On a wireless network, any flaws in SSL are significant – but as I’ve noted before, if you connect immediately to a trusted VPN, your wireless surfing is significantly safer, pretty much to the same level as you have on your home wired network.

The bottom line

In summary then:

SSL is frequently, and in some senses never, broken. There are frequent attacks, both theoretical and practical, on the SSL framework. Theoretical attacks are fixed in the specifications, often before they become practical. Practical attacks are fixed in implementations, generally by adopting the tack that had been suggested in the specifications while the attack was still theoretical. At each stage, the protocol that prevents the attack is still SSL (or these days, strictly, TLS), but it requires that you keep your computers up to date with patches as they come out, and enable new versions of SSL as they are made available.

If you’re on a wired network, the chances of your being attacked are pretty slim. If you’re on a wireless network, your chances of being attacked are high, so make sure you are using SSL or an equivalent protocol, and for extra protection, use a VPN to connect to a trusted wired network.

NCSAM/2011–Post 19–Is it safe to give out my keys?

There are some people who seem to get this right away, and others to whom I seem to have been explaining this concept for years. [And you know who you are, if you’re reading this!]

Whenever you talk about keys used for encryption, you have to figure out how you’re going to keep those keys, and whether or not you need to protect them.

And the answer is that it depends (doesn’t everything?) – on what kind of encryption algorithm you are using.

Let’s start with the easy kind, the one we’re all familiar with.

Symmetric (aka shared-key) cryptography

This is the sort of code that I’m sure we all played with as children. The oh-so-secret code (well, we didn’t know about frequency analysis or cryptanalysis back then), where you and your best friend knew the secret code and the secret key. [Probably a Caesar cipher, although I used a Vigenère cipher, myself]

Well, those codes, like us, have grown up. The category of shared-key cryptography – also known as symmetric cryptography, because the same keys (and sometimes the same operations) are used to encrypt and decrypt the data – has been enhanced hugely since those old and simple ciphers.

Now we have AES to contend with, and for all practical purposes, with reasonable keys, it’s unbreakable in usable time. [But if you have a spare universe to exhaust, perhaps you can crack my files]

For symmetric key cryptography, you do have to give out your key – to the party with whom you plan to exchange data. Of course, you have to protect this key as if it was as important as the data it protects, because it is all that protects your data. [Your attacker can tell what algorithm you use, and if you develop your own algorithm, well, they can tell what that is, too, because crypto algorithm inventors are generally doomed to fail to recognise the flaws in their own algorithm.]

That’s kind of a catch-22 situation – there’s really no way to use cryptography to protect a key-sized piece of data, other than encrypting it with another key.
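For the curious, here’s a minimal sketch of modern symmetric cryptography in Python, using the third-party cryptography package and AES in GCM mode; the key variable is precisely the thing you must share secretly, and protect as carefully as the data itself:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the shared secret
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # not secret, but never reuse one with the same key
ciphertext = aesgcm.encrypt(nonce, b"meet me at midnight", None)

# The recipient needs the same key (and the nonce) to get the data back.
print(AESGCM(key).decrypt(nonce, ciphertext, None))
```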

Asymmetric (aka public key) cryptography

That’s why the British had to invent public key cryptography.

Of course, unlike the Americans, the British managed to keep this a secret – so much so that to this day, many Americans believe their country invented public key cryptography (along with apple pie, mothers and speaking English loudly to foreigners).

With public key cryptography, there are two keys for every cryptographic operation – the public key, and the private key.

Here’s the tricky part

OK, I don’t think this part is very tricky, but there are several people I’ve had to explain this to over and over again, so I’ll try to take it really slowly.

Of the two keys, there is one key that you are supposed to share with anyone and everyone. To some of you it may come as a surprise that this is the PUBLIC key.

Again, the PUBLIC key is something you can share with anyone and everyone with no known danger to date. You can print it on billboards, put it on your business cards, include it in your email, really you can do anything with it that distributes it to anyone who might want it.

If anything, you will want to make sure that you distribute the public key in a way that allows the recipients to associate it with their opinion of your identity.

But the PRIVATE key – no, no, no, no, no, you do not ever distribute that. You don’t even let someone else create it for you. You generate your private key for yourself, and you don’t ever tell it to anyone.

The simple reason is that anyone who has your private key can pretend to be you – in fact, for cryptographic purposes, they are you.

So, really simply now:

  • You generate your own keys. Nobody else ever does this for you (otherwise they aren’t your keys)
  • The public key can be given to anyone, but has to be associated with your identity in the recipient’s mind.
  • The private key cannot be given to anyone. It must be held by you, and you alone.
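As a sketch of those rules in practice, using the same Python cryptography package (the key size is just a common choice): you generate the pair yourself, you print the public half to the world, and the private half never appears anywhere:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Rule 1: generate your OWN key pair; nobody does this for you.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Rule 2: the public half can go on billboards and business cards.
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())

# Rule 3: the private half stays in private_key, and never leaves your control.
```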

If you think this is confusing, apparently you are right – even Microsoft’s official curriculum for the Windows Server 2003 training courses says that “Alice encrypts the message using Bob’s private key” – if Alice has Bob’s private key, she can exchange any secret message with Bob while they are in bed together that night.

Actually, scratch that – even my wife doesn’t have access to my private key, and I don’t have access to hers.

What do you do with these two keys?

There are two operations that you can do with your private key. You can decrypt data, and you can sign data.

Reversing this, there are two operations that you can do with a public key – that would be someone else’s public key, not yours. You can encrypt data, and you can verify a signature.
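A brief sketch of all four operations together (Bob and his message are, of course, made up):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

bob = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

message = b"Bob wrote this"

# Private key signs; anyone with the public key can verify.
signature = bob.sign(message, pss, hashes.SHA256())
bob.public_key().verify(signature, message, pss, hashes.SHA256())  # raises if forged

# Public key encrypts; only the matching private key decrypts.
secret = bob.public_key().encrypt(b"for Bob only", oaep)
assert bob.decrypt(secret, oaep) == b"for Bob only"
```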

Hybrid cryptography

In many cryptographic exchanges, such as SSL / TLS, and other modern equivalents, asymmetric cryptography is used briefly at the start of each session, so that two parties can identify each other and exchange (or, more commonly, derive) a shared key. This shared key is then used to encrypt the subsequent communications for some time using symmetric key cryptography.
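A simplified sketch of the hybrid idea follows. Note that it shows the key-transport variant, where the client sends the session key under the server’s public key; as mentioned, real protocols more commonly derive the shared key:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

server = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1. Client invents a fresh session key, wrapped with slow asymmetric crypto.
session_key = AESGCM.generate_key(bit_length=256)
wrapped = server.public_key().encrypt(session_key, oaep)

# 2. Server unwraps it with its private key...
unwrapped = server.decrypt(wrapped, oaep)

# 3. ...and the rest of the session uses fast symmetric crypto.
nonce = os.urandom(12)
ciphertext = AESGCM(unwrapped).encrypt(nonce, b"the bulk of the conversation", None)
```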

A quick summary

For shared-key (aka symmetric) cryptography, you do have to share your keys – but you share them secretly, with only the person with whom you are communicating. If you are trying to protect a communication between you and a partner, you cannot send the keys down the same line that you are going to send the communication down, because an attacker who can steal your communication can also steal your keys.

For asymmetric cryptography, you also have to share your keys – but only your public keys. Again, that’s only your public keys that you share. And you have to share those public keys. Your private keys are used by the various applications that decrypt data on your behalf, or sign data to prove it came from you. Anything outside of that realm that asks you for your private keys is not to be trusted.

Any questions?

Ask an expert if you still have concerns. Because if you give out your private keys, then you have to generate new ones, and distribute new public keys.

NCSAM/2011–Post 18–Know what security you want from your network

In yesterday’s post, we talked about how SSL and HTTPS don’t provide perfect security for your web surfing needs. You need to make sure that a site is also protecting its applications and credentials.

This can be generalised

One of my favourite interview questions for security engineer candidates is to ask what an application developer could use to protect a networked application if SSL wasn’t available.

It’s an open-ended question – what parts of SSL is the interviewee looking to match, and what parts are they willing to throw away with an alternative (and do they even know what they are throwing away?); and it asks the interviewee to think about how else they can achieve those goals.

I like to hear answers that cover a number of options. I won’t provide a perfect answer here, because I’m sure I’ll miss something, but here are some of the considerations I would give:

Can we use network layer security?

There are a number of different ways to secure network communications, providing for encryption, integrity and authentication – IPsec and VPN are just two methods that should spring immediately to mind. These are not universally suitable, as they tend to be all-or-nothing solutions, rather than per-application, but if you expect to see only one application running on the communicating pair of systems (this is relatively common in business communications), this can be acceptable. These are also a considerable effort to set up, and don’t always scale to inter-networked situations.

What about application layer security?

Hey, what’s wrong with encrypting and signing a file with PGP or S/MIME, or even WinZip, and sending it through email?

Not a whole lot, surely. We can get into discussions of key distribution and so on, but essentially, this is a solid technique. Maybe not easy to automate, and probably not accepted by everyone the world over, but from a “protected by encryption” standpoint, this is actually fairly defensible.
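As a sketch of how little machinery this takes – assuming gpg is installed and the recipient’s public key (the address here is invented) has already been imported – the whole operation is one command, driven here from Python:

```python
import subprocess

# Sign with our private key and encrypt to the recipient's public key;
# the resulting report.pdf.gpg can then travel over plain old email.
subprocess.run(
    ["gpg", "--encrypt", "--sign",
     "--recipient", "alice@example.com",
     "--output", "report.pdf.gpg", "report.pdf"],
    check=True,
)
```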

So what’s my point?

What I’m really trying to say here is that your application’s security rests on an understanding of which protections you can ask from your network – and from your network staff – and which you will have to implement in the application itself. For every protection that is available in the network, that’s potentially less work you have to do in your application; and for every protection the network does not provide, that’s one more thing you have to write into the app itself.

Without knowing what security your network provides between you and all your communicating partners, you can’t truly know or guess what security you need to provide in your application. Without knowing what security your application provides, you can’t describe what network environment is appropriate to host that application.

We split the world into infrastructure and application so frequently that it’s important to remember we each have to understand a little of the other’s world in order to operate safely.

NCSAM/2011–Post 17–SSL does not make your web site secure

I know, it sounds like complete heresy, but there it is – SSL and HTTPS will not make your web site secure.

Even more appropriate (although I queued the title of this topic up almost a month ago) is this recent piece of news: Top FBI Cyber Cop Recommends New Secure Internet, which appears to make much the opposite point, that all our problems could be fixed if we were only to switch to an Internet in which everyone is identified (something tells me the FBI is not necessarily looking for us to use strong encryption).

HTTPS is just one facet of your web site security

There are a number of ways in which an HTTPS-only website, or HTTPS-only portion of a site, can be insecure. Here’s a list of just some of them:

Application vulnerabilities

It’s been a long time since web servers provided only static content in their pages. Now it’s the case that pretty much every web site has to serve “applications”, in which inputs provided by the visitor to the site get processed and incorporated into outputs.

There are any number of ways in which those inputs can produce bad outputs – Cross Site Scripting (XSS), on which I’ve posted before; Cross Site Request Forgery, allowing an attacker to force you to take actions you didn’t intend; SQL injection, where data behind a web site can be extracted and/or modified – these are just the most commonly known.

Applications can also fail to check credentials, fail to apply access controls, and even fail in some old-fashioned ways like buffer overflows leading to remote code execution.
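SQL injection, at least, has a well-understood fix: let the database driver bind the visitor’s input as data, rather than pasting it into the query text. A minimal Python sketch with sqlite3 (the table and the hostile input are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, item TEXT)")

user_input = "alice'; DROP TABLE orders; --"

# DON'T build the query by pasting input into the string:
#   conn.execute(f"SELECT item FROM orders WHERE customer = '{user_input}'")

# DO let the driver bind the value as data, not as SQL:
rows = conn.execute(
    "SELECT item FROM orders WHERE customer = ?", (user_input,)
).fetchall()
```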

Path vulnerabilities

Providing sensitive information in an application’s path, or through parameters passed in a URL, is another common means by which application authors, who think they are protected by using HTTPS, come a significant cropper. URLs – even HTTPS protected URLs – are often read, logged, and processed at both ends of the connection, and sometimes even in the middle!

Egress filtering in enterprises is often carried out by interrupting the HTTPS communication between client and server, using a locally-deployed trusted root certificate. This quite legitimately allows the egress filtering system to process URLs to determine what’s a safe request, and what’s a dangerous one. This can also cause information sent in a URL to be exposed. This is one reason why an application developer should avoid using GET requests to perform any data exchange involving user data, or data that the site feels is sensitive.

Other path vulnerabilities – mostly fixed these days, but still something that attackers and scanning suites alike feel is worth trying – are those where the path can be changed by embedding extra slash or double-dot characters or sequences. Enough “..” entries in a path, and if the server isn’t properly written or managed, an attacker can escape out of the web server’s restrictions, and visit the operating system disk. The official term for this is a “path traversal attack”.
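The defence is to canonicalise the requested path and refuse anything that lands outside the document root. A sketch in Python (the web root is illustrative):

```python
import os

WEB_ROOT = os.path.realpath("/var/www/site")

def safe_open(requested: str):
    """Refuse any path that escapes the web root, '..' tricks included."""
    full = os.path.realpath(os.path.join(WEB_ROOT, requested.lstrip("/")))
    if os.path.commonpath([full, WEB_ROOT]) != WEB_ROOT:
        raise PermissionError(f"path traversal attempt: {requested!r}")
    return open(full, "rb")
```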

Credential vulnerabilities

The presence of a padlock – or whatever your web browser shows to indicate an HTTPS, rather than HTTP, connection – indicates a few things:

  • Your communication is encrypted (This can be overcome, but it takes so much work at both client and server for most implementations that I think it’s fair to say you will not be in the situation where you see a padlock without the use of encryption.)
    • That doesn’t mean to say you will always have the best encryption around, but if you didn’t go and enable weaker encryption than that supplied in a recent and patched browser, you’re fairly well guaranteed to be safe.
  • The web site you are connecting to is at least trying to give some indication of security.
  • The web site to which you connected has convinced your browser – or you – that it is who it claims to be in the address bar.
    • Note that this may not mean that it passes a test of its identity that you really want. You could be in an enterprise with an SSL-interrupting egress filter, as explained above, you could have been convinced fraudulently to accept the site’s certificate, or you could have installed an inappropriate certificate authority’s root certificate.

If you’re the sort of person who clicks through browser warnings, all you’ve managed to confirm is that your communication is encrypted, and the site you’ve connected to is trying to convince you it is secure. Note that this is exactly what a fraudulent site will try to do. The padlock isn’t everything.

Then think about where your secret information goes. If you’re like a lot of users, you’ll be using the same password on every site you connect to, or some variation thereof. Just because the site uses SSL does not mean that your password is handled or stored securely once it arrives.

But at least it’s a start.

If your bank doesn’t use HTTPS when accepting your logon information, it’s a sign that they really aren’t terribly interested in protecting that transaction. Maybe you should ask them why.

Many web sites will use HTTPS on parts of the site, and HTTP on others. Observe what they choose to protect, and what they choose to leave public. Is the publicly-transmitted information truly public? Is it something you want other people in the coffee shop or library to know you’re browsing?

NCSAM/2011–Post 16–FTP is secure

Week 4 of National Cyber Security Awareness Month, and I’m getting into the more advanced topics of secure communications and protocols.

I figured I couldn’t start this topic without something that’s very near and dear to me – the security of FTP.

The good/bad old days

FTP is one of the oldest application protocols for the Internet. You can tell because it has a very low assigned port number (21).

You can also tell, because it actually has two assigned port numbers – 20 for ftp-data and 21 for ftp.

In many ways the old days of the Internet were really good, and in much the same ways, those days were bad. From a security perspective, for instance, those days were bad because none of the protocols considered security very much, if at all. Of course, you could look at this as ‘good’ and note that there weren’t really a whole lot of reasons to include security protections. Most of the original users were government, military or academic, and in each of these situations there were pretty good sanctions to use against evil-doers.

The Middle Ages

In the middle ages of the Internet, the security was still missing from many protocols, and people took advantage of them a lot. Additions like SSL were invented, and we are all familiar with using HTTPS on a web site to protect traffic to and from it.

Other protocols were simply shunned, as was the case with FTP, on the basis that no one was interested in updating them – after all, what with the web and all, who needs FTP?

Modern Day

Fast forward to modern day, and we find that FTP has a poor reputation for security. But is it deserved?

In some respects, yes – FTP has had its fair share of security badness in the past. But it’s also had its share of updates.

First, there was RFC 1579, Firewall Friendly FTP. Not much of a security advance: it uses PASV (passive) mode to open data connections, so that it’s the server’s responsibility to be compatible with its own firewall.

Then came RFC 2228, FTP Security Extensions, dealing with additions to FTP to manage encrypted and integrity-protected connections for the data and control channels. Good, but the only protocol supported is Kerberos, and nobody really uses that on the open Internet.

Next, RFC 2577, which addresses some of the common areas where FTP implementations suffer from security failings – a definite huge step forward, because finally even new FTP implementations could get things right in terms of many of the security issues seen in older versions.

And recently (OK, so it’s six years old this month in RFC form, and had been in development for a few years before then), RFC 4217, on Securing FTP with TLS, applies the usual SSL and TLS network protection layers to FTP, basing it on the work defined in RFC 2228.

Are we done yet?

I don’t know, but I’m fairly certain that you will find FTP as it exists today is a far more secure protocol than the one described in, say, the PCI DSS requirements. In fact, if you’ve implemented an RFC 4217 compliant FTP server, enabled its protections, and made sure it implements the suggestions in RFC 2577, you can make a good case to your PCI Auditors (QSA, to use the technical term) that this is an acceptable and secure method of transferring data.
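To give a feel for how little client-side effort this takes today, Python’s standard ftplib even ships an FTP_TLS class implementing RFC 4217 (host and credentials below are placeholders):

```python
from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")
ftps.login("user", "password")  # control channel is secured before credentials go over
ftps.prot_p()                   # "PROT P": protect the data channel too
ftps.retrlines("LIST")
ftps.quit()
```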

So, what’s holding you back from using FTP in your secure environment now? Anything?

NCSAM/2011–Week 3 summary–names and addresses

So, what did we learn this week?

Your user name is not a secret

Because the operating system doesn’t bother to help you hide user names, and because those user names are used in countless protocols as if they were public information, you’re backing a loser if you want to try and act as if the user name is some kind of secret. There is nothing wrong with having predictable user names. If you need more security, make the passwords longer.

Don’t bother renaming the Administrator account

Arguments from other security luminaries notwithstanding, I’m still of the opinion that there really is no benefit to renaming the Administrator account, and it’s going to cause plenty of irritation.

What is a fingerprint?

Despite being used as both a claim and proof of identity, a fingerprint really needs to be seen as one or the other, along with other biometrics. Also worth noting, for the ADA and other considerations, is that some people just don’t have readable fingerprints, if any at all.

An IP address as an authenticator

Don’t do it. Just don’t do it. Use IP addresses as a filter, to cut out the noise, but don’t rely on that as your only authentication measure, because an IP address doesn’t have sufficient rigour to use as an authenticator.

What’s the better firewall – black-hole or RFC compliant?

While it’s tempting to think that a black-hole firewall is the best, because it sits silently not responding to unwanted traffic, there are some times when it’s important to respond to unwanted traffic with a “go away, I’m not talking to you”.

Up next week – Communication Protocols

And do, please, leave comments or email to let me know if you’re enjoying this series, which is published because October is “National Cyber Security Awareness Month”.

NCSAM/2011–Post 15–What’s the better firewall–black-hole, or RFC compliant?

So, given the information we have so far, you should be able to answer the question.

Background info

There are two schools of thought when it comes to how a firewall should behave in some situations.

The one school says that a firewall should ignore all traffic that reaches it, unless it is traffic that should be passed on. This is known as a “black hole”, or “fully stealthed” firewall, because it refuses to send any packets in response to communications it didn’t request.

The other school says that a firewall should respond to unexpected traffic exactly like a router that knows it is unable to reach the host being requested. This is the RFC-compliant firewall, because it looks to the RFC documents to decide what should be done in response to each packet it receives.

First, consider the ‘black hole’

Black hole firewalls are named after the cosmological entity of the same name, because they suck packets in and never send them back out again.

Much like a black hole, however, their existence can be deduced by the simple absence of light passing through them – a range of IPs that should be responding with reset packets (aka “go away, not listening”) to incoming TCP requests is instead simply ignoring them. If the intent of the firewall was to make the attacker lose interest, you’ve already failed.
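You can watch that deduction happen with a single probe. A sketch using the scapy packet library (requires root privileges; the target address is from the documentation range) tells silence apart from an honest RST:

```python
from scapy.all import IP, TCP, sr1

# SYN to a port we expect to be closed.
reply = sr1(IP(dst="192.0.2.1")/TCP(dport=4444, flags="S"),
            timeout=3, verbose=0)

if reply is None:
    print("Silence - black hole (and now we suspect a firewall is there)")
elif reply.haslayer(TCP) and reply[TCP].flags & 0x04:  # RST bit set
    print("RST received - RFC-compliant 'go away, not listening'")
```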

And now the RFC compliant firewall

The RFC compliant firewall replies to every unwanted TCP connection request with a RST packet, to indicate that the targeted address is not interested in talking.

To a well-behaved TCP connection partner, this is a request to stop all communications and close the connection, without processing any further data.

Which is fine, except all unexpected traffic at a firewall is an attack, right?

Not every packet is an attack

OK, I really telegraphed that one.

Some unwanted TCP packets are actually very informative, and the RST message sent in response is a useful part of keeping your systems safe.

Let’s suppose someone was able to predict, or otherwise get a hold of, the Initial Sequence Numbers we talked about in yesterday’s post. That someone, an attacker, would be able to spoof, or forge, a connection coming from your system, and connect to a targeted server. Even if they couldn’t see what information was coming back, they might be able to make an attack look like it came from you.

The classic example of “what can I do with a spoofed TCP connection” is that of sending email – spam, usually – as if from a user of an ISP.

But those packets from the server, that the attacker can’t see (but can guess), do go somewhere – and if the Internet is working properly, they go to your computer, or the firewall sitting in front of your computer.

If your firewall is an RFC-compliant firewall, those packets will be seen by the firewall as unexpected and unwanted – and the firewall will send back a RST packet, demanding that the mail server stop trying to communicate with you. This may be the only indication to the server that anything is amiss. Your RST packet, if it arrives quickly enough, will prevent the spam run from being done in your name.

If your firewall is a black-hole router, on the other hand, no RST packets will be sent, and the communication between spoofer and server will continue uninterrupted, unabated, and with you potentially on the hook for emails sent “from your IP address”.

[Note that the same argument can be made for a network where the attacker is a man in the middle who can read and inject packets, but is unable to remove packets from the stream between you and the server.]

Not really settled

As with many of the other issues I’ve been talking about this month, there are differing views on this. I’m generally a fan of following the RFCs, because they’ve usually been arrived at by smart people persuading other smart people to a consensus. I’m sure that you’ll run into people with other opinions on this issue, so please feel free to ask more questions and share different opinions. The really fun topics in computing are those where there are multiple answers that could all be right.

NCSAM/2011–Post 14–An IP address as an authenticator?

So we’ve talked a little about names as claims of identities and passwords as proofs of those identities, continuing on to describe a fingerprint as a reasonable proof of identity, but perhaps not so useful when it has to be a claim and proof of identity at the same time.

So, how about an IP address?

A number of applications offer you the ability to accept or deny connections / requests from outsiders, based on their IP address. Good connections / requests from IP addresses that you know are allowed; bad connections / requests from IP addresses that you don’t know (or from IP addresses that you know are bad) are blocked.

Since this looks rather like an authentication scheme, let’s ask the question:

What is the claim of identity, and what is the proof?

UDP – User Datagram Protocol (or “Unreliable”)

Well, for UDP, the claim appears to be the IP address in the “source address” component.

Is this IP address also a proof of identity?

Since I can forge a UDP datagram for any source IP address, I think that means that it can’t possibly be a proof of identity.

So, for UDP traffic, using the source IP address as any kind of authenticator is clearly a bad idea.
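If you want to convince yourself how trivially forgeable that source address is, a scapy sketch does it in two lines (root privileges required; both addresses are documentation-range placeholders):

```python
from scapy.all import IP, UDP, send

# Nothing in UDP stops us claiming any source address we like.
spoofed = IP(src="203.0.113.7", dst="192.0.2.53")/UDP(sport=5353, dport=53)/b"hello"
send(spoofed, verbose=0)
```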

TCP – Transmission Control Protocol

TCP makes a slightly stronger case, because there’s a connection to be made, and some protections to be had. One of those protections is that the handshake at the beginning of the connection exchanges a couple of random numbers – known as initial sequence numbers (ISNs) – one from the client, and one from the server. Each side’s packets then carry sequence numbers continuing from its own ISN, and each side must acknowledge the ISN the other sent. This means that it’s harder to forge TCP connections than UDP requests, because the client has to see the ISN from the server, and vice versa.
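A scapy sketch makes the ISN exchange visible (documentation-range address; note that your own kernel, knowing nothing of this hand-rolled connection, may well answer the SYN/ACK with a RST of its own):

```python
import random
from scapy.all import IP, TCP, sr1, send

target = IP(dst="192.0.2.10")
client_isn = random.getrandbits(32)

# SYN carries the client's ISN...
synack = sr1(target/TCP(dport=80, flags="S", seq=client_isn),
             timeout=3, verbose=0)

# ...the SYN/ACK carries the server's ISN, acknowledging client_isn + 1...
server_isn = synack[TCP].seq

# ...and the final ACK must echo server_isn + 1 to complete the handshake.
send(target/TCP(dport=80, flags="A", seq=client_isn + 1, ack=server_isn + 1),
     verbose=0)
```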

Are these ISNs and subsequent sequence numbers a proof of identity?

Not really, because of a number of factors.

  • In the early days of TCP implementations, the ISN was easily guessed by a spoofing client, who didn’t actually have to see their connection handshake. It’s always possible that a flawed TCP implementation in the future will also generate predictable ISNs.
  • The ISN is essentially echoed back, rather than being a secret held by the owner of the identity (IP address) being claimed as the source identity.

All that the ISNs really do is provide a reasonable protection against massive floods of forged connection attempts, by requiring that the client be able to receive and respond to the server’s messages. Some schemes (SYN cookies, for example) make further use of this, and don’t create the actual connection object until receiving the first packet with the correct sequence number.

If they aren’t a proof of identity, how can someone spoof me?

There is always the possibility of a third party who can listen to your conversations, and spoof portions of your communications. They can know your ISN, and use it to initiate or continue connections you make to other servers.

This usually requires the attacker to be a “man in the middle” (MITM), and remember, an attacker can do that through a wireless connection.

The only protection you have is if you are also a part of this conversation, and can send a quick message along the lines of “stop, don’t trust him, he’s not really me”.

This would be the RST, or reset, message, that aborts a TCP connection, usually because inappropriate (out-of-standard) traffic has been detected. We’ll touch on that more in the next post.

Bottom line: IP address is not an authenticator

So, while it might be a good filter for convenience and traffic reduction, filtering by source IP address is not something you can consider as a security measure, because there is no authentication involved.

Rather like the “TCP evil flag”, it does require that someone be truthful when attacking you, so that you can repel them.