Every year, in October, we celebrate National Cyber Security Awareness Month.
Normally, I'm dismissive of anything with the word "Cyber" in it. This is no exception: the adjective "cyber" is a manufactured word, without root, without meaning, and with only a tenuous association to the world it endeavours to describe.
But that's not the point.
And I approach it from a very basic level.
This is not the place for me to assume you've all been reading and understanding security for years; this is where I appeal to readers with only a vague understanding that there's a "security" thing out there that needs addressing.
This first week is all about Information Security (Cyber Security, as the government and military put it) as our shared responsibility.
I'm a security professional, in a security team, and my first responsibility is to remind the thousands of other employees that I can't secure the company, our customers, our managers, and our continued joint success, without everyone pitching in just a little bit.
I'm also a customer, with private data of my own, and I have a responsibility to take reasonable measures to protect that data, and by extension, my identity and its association with me. But I also need others to take up their responsibility in protecting me.
This year, I've had my various identifying factors (name, address, phone number, Social Security Number; if you're not from the US, that's a government identity number that's rather inappropriately used as proof of identity in too many parts of life) misappropriated by others, and used in an attempt to buy a car, and to file taxes in my name. So, I've filed reports of identity theft with a number of agencies and organisations.
Just today, another breach report arrives, from a company I do business with, letting me know that more data has been lost; this time from one of the organisations charged with actually protecting my identity and protecting my credit.
While companies can, and should, do much more to protect customers (and putative customers), and their data, it's also incumbent on the customers to protect themselves.
Every day, thousands of new credit and debit cards get issued to eager recipients, many of them teenagers and young adults.
Excited as they are, many of these youths share pictures of their new cards on Twitter or Facebook, occasionally showing both sides. There's really not much your bank can do if you're going to react in such a thoughtless way, with a casual disregard for the safety of your data.
Sure, you're only liable for the first $50 of any use of your credit card, and perhaps of your debit card, but it's actually much better not to have to trace down unwanted charges and dispute them in the first place.
So, I'm going to buy into the first message of National Cyber Security Awareness Month, and I'm going to suggest you do the same: Stop. Think. Connect.
This is really the basis of all security: before doing a thing, stop a moment. Think about whether it's a good thing to do, or has negative consequences you hadn't considered. Connect with other people to find out what they think.
I'll finish tonight with some examples where stopping a moment to think, and connecting with others to pool knowledge, will improve your safety and security online. More tomorrow.
The most common password is "12345678", or "password". This means that a great many people are using passwords that simple. Many more people are using more secure passwords, but they still make mistakes that could be prevented with a little thought.
Passwords leak, either from their owners, or from the systems that use those passwords to recognise the owners.
When they do, those passwords, and data associated with them, can then be used to log on to other sites those same owners have visited, either because their passwords are the same, or because they are easily predicted. If my password at Adobe is "This is my Adobe password", well, that's strong(ish), but it also gives a hint as to what my Amazon password is; and when you crack the Adobe password leak (that's already available), you might be able to log on to my Amazon account.
Creating unique passwords, and yes, writing them down (or better still, storing them in a password manager) and keeping them safe, allows you to ensure that leaks of your passwords don't spread to your other accounts.
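If you'd like to see how little effort a unique, strong password takes, here's a minimal sketch using Python's standard secrets module (the site names are made up for illustration):

```python
import secrets
import string

def new_password(length: int = 20) -> str:
    """Generate a random password; at this length, guessing is hopeless."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site: a leak at one site tells an attacker
# nothing about your password anywhere else.
for site in ("adobe.example", "amazon.example"):
    print(site, new_password())
```

Paste each result into your password manager and you never need to remember, or reuse, any of them.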
There are exciting events which happen to us every day, and which we want to share with others.
That's great, and it's what Twitter and Facebook are there FOR: all kinds of social media, available for you to share information with your friends.
Unfortunately, it's also where a whole lot of bad people hang out; and some of those bad people are, sadly, your friends and family.
Be careful what you share, and if you're sharing about others, get their permission too.
If you're sharing about children, contemplate that there are predators out there looking for the information you may be giving out. There's one living just up the road, I can assure you. They're almost certainly safely withdrawn, and you're protected from them by natural barriers and instincts. But you have none of those instincts on Facebook unless you stop, think and connect.
So don't post addresses, locations, or your child's phone number, and really limit things like names of children, friends, pets, teachers, etc. Imagine that someone will use that as "proof" to your child of their safety: "It's OK, I was sent by Aunt Josie, who's waiting for you to come and see Dobbie the cat."
Bob's going off on vacation for a month.
Lucky Bob.
Just in case, while he's gone, he's left you his password, so that you can log on and access various files.
Two months later, and the office gets raided by the police. They've traced a child porn network to your company. To Bob.
Well, actually, to Bob and to you, because the system can't tell the difference between Bob and you.
Don't share accounts. Make Bob learn (with the IT department's help) how to share portions of his networked files appropriately. It's really not all that hard.
I develop software. The first thing I write is always a basic proof of concept.
The second thing I write? Well, who's got time for a second thing?
Make notes in comments every time you skip a security decision, and make those notes in such a way that you can revisit them and address them, or at least count them, prior to release, so that you know how deep in the mess you are.
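Here's a sketch of what I mean. The SECURITY-TODO tag is just a convention I'm inventing for illustration; any marker your team can search for will do:

```python
import hashlib

# Hypothetical in-memory store, standing in for a real database.
_users: dict = {}

def store_password(username: str, password: str) -> None:
    # SECURITY-TODO: proof of concept only. Unsalted SHA-256 is not an
    # acceptable password hash; move to bcrypt/scrypt/argon2 before release.
    _users[username] = hashlib.sha256(password.encode()).hexdigest()
    # SECURITY-TODO: no rate limiting or lockout on this code path yet.

store_password("bob", "hunter2")
```

Before release, a simple search for the tag (grep, or your editor's equivalent) gives you an honest count of the decisions you deferred.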
In a classic move, clearly designed to introduce National Cyber Security Awareness Month with quite a bang, the US Government has shut down, making it questionable as to whether National Cyber Security Awareness Month will actually happen.
In case the DHS isn't able to make things happen without funding, here's what they originally had planned:
I'm sure you'll find me, and a few others, keen to engage you on Information Security this month in the absence of any functioning legislators.
Maybe without the government in charge, we can stop using the "C" word to describe it.
The "C" word I'm referring to is, of course, "Cyber". Bad word. Doesn't mean anything remotely like what people using it think it means.
The main page of the DHS.GOV web site actually does carry a small banner indicating that there's no activity happening at the web site today.
So, there may be many NCSAM events, but DHS will not be a part of them.
For my last post in National Cyber Security Awareness Month, I'd like to expound on an important maxim for security.
If you can't handle a customer's credit card in a secure fashion, you shouldn't be handling the customer's credit card.
If a process is too slow when you add the necessary security, the process was always too slow, and cannot yet be done effectively by modern computers (or, at least, the computers you're using).
If you enable a new convenience feature, and the rate of security failures increases as a result, the convenience is more to the hackers than to the users, and the feature should be removed or revisited.
Sometimes there's nothing to do but to say "Oops, that didn't work". Find something else that does.
If you're writing software, expect to encounter failing conditions: disk full, network unresponsive, keyboard stuck, database corrupt, power outage. All of these are far more common than software developers anticipate.
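A small sketch of the difference between assuming success and expecting failure (the file name is made up):

```python
import json

def load_config(path: str) -> dict:
    """Read configuration, treating failure as a normal, expected outcome."""
    try:
        with open(path, encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        return {}  # first run, no config yet: perfectly normal
    except (OSError, json.JSONDecodeError) as err:
        # Full disks and corrupt files happen; fail loudly, not mysteriously.
        raise RuntimeError(f"could not read config {path!r}: {err}") from err

config = load_config("app-settings.json")  # hypothetical file name
```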
Failure is not the exception, it is a part of life in an uncertain universe.
Other people will fail you.
This is not always their intent, nor is it necessarily something that they will recognise. Do not punish unintentional failure as if it were an intentional insult. Educate where possible; redirect otherwise.
Where failure is intentional, be firm and decisive. Do not allow deliberate failure to continue unhindered.
Innovation is doing that which has never been done before.
As a result, no one knows how to do it correctly. You will fail, a lot. If you are always right, it is because you are doing something that you already know.
Part of being a security expert is the ability to see where people, process and technology are likely to fail, and how someone might take advantage of that, or cause you disadvantage.
Turn "I can't imagine how that might fail" into "I can see seven different ways this could screw up, and I've got plans for eight of them".
And yes, I failed to finish writing this in National Cyber Security Awareness Month.
It seems like a strange question for me to ask, given that in a number of my National Cyber Security Awareness Month posts to date, I have been advising you to use SSL or TLS to protect your communications. [Remember: TLS is the new name for SSL, but most people still refer to it as SSL, so I will do the same below.]
But it's a question I get asked on a fairly regular basis, largely as a result of news articles noting that there has been some new attack or other on SSL that breaks it in some way.
To be fair, I'm not sure that I would expect a journalist, even a technology journalist, to understand SSL in such a way that they could give a good idea as to how broken it may or may not be after each successful attack. That means that the only information they're able to rely on is the statement given to them by the flaw's discoverers. And who's going to go to the press and say "we've found a slight theoretical flaw in SSL, probably not much, but thought you ought to know"?
First, the good news.
SSL is a protocol framework around cryptographic operations. That means that, rather than describing a particular set of cryptography that can't be extended, it describes how to describe the cryptography to be used, so that it can be extended when new algorithms come along.
So, when a new algorithm arrives, or a new way of using an existing algorithm (how can you tell the difference?), SSL can be updated to describe that.
So, in a sense, SSL will never be broken for long, and can always be extended to fix issues as they are detected.
Of course, SSL is really only a specification, and it has to be implemented before it can actually be used. That means that when SSL is updated to fix flaws, theoretical or practical, every implementation has to be changed to catch up to the new version.
And implementers don't like to change their code once they have it working.
So when a new theoretical flaw comes along, the SSL designers update the way SSL works, increasing the version number when they have to.
The implementers, on the other hand, tend to wait until there is a practical flaw before updating to support the new version.
This means that whenever a practical break is found, you can bet it will be at least several weeks before you can see it fixed in the versions you actually use.
The presence of SSL assumes that your communications may be monitored, intercepted and altered. As such, don't ever rely on a statement to the effect that "this breach of SSL is difficult to exploit, because you would have to get between the victim and his chosen site". If that wasn't possible, we wouldn't need SSL in the first place.
Having said that, on a wired network, you are less likely to see interception of the type that SSL is designed to prevent. As such, even a broken SSL on wired networks is probably secure for the time it takes everyone to catch up to fixing their flaws.
On a wireless network, any flaws in SSL are significant; but as I've noted before, if you connect immediately to a trusted VPN, your wireless surfing is significantly safer, pretty much to the same level as you have on your home wired network.
In summary then:
SSL is frequently, and in some senses never, broken. There are frequent attacks, both theoretical and practical, on the SSL framework. Theoretical attacks are fixed in the specifications, often before they become practical. Practical attacks are fixed in implementations, generally by adopting the tack that had been suggested in the specifications while the attack was still theoretical. At each stage, the protocol that prevents the attack is still SSL (or these days, strictly, TLS), but it requires that you keep your computers up to date with patches as they come out, and enable new versions of SSL as they are made available.
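To make that concrete, here's how a Python client can refuse to speak anything older than TLS 1.2, whatever the server offers (a sketch; the host name is just an example):

```python
import socket
import ssl

context = ssl.create_default_context()            # certificate checking on
context.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3, TLS 1.0 or 1.1

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. "TLSv1.3" against an up-to-date server
```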
If you're on a wired network, the chances of your being attacked are pretty slim. If you're on a wireless network, your chances of being attacked are high, so make sure you are using SSL or an equivalent protocol, and for extra protection, use a VPN to connect to a trusted wired network.
There are some people who seem to get this right away, and others to whom I seem to have been explaining this concept for years. [And you know who you are, if you're reading this!]
Whenever you talk about keys used for encryption, you have to figure out how you're going to keep those keys, and whether or not you need to protect them.
And the answer depends (doesn't everything?) on what kind of encryption algorithm you are using.
Let's start with the easy kind, the one we're all familiar with.
This is the sort of code that I'm sure we all played with as children: the oh-so-secret code (well, we didn't know about frequency counting or cryptanalysis back then), where you and your best friend knew the secret code and the secret key. [Probably a Caesar cipher, although I used a Vigenère cipher, myself.]
Well, those codes, like us, have grown up. The category of shared-key cryptography, also known as symmetric cryptography because the same keys (and sometimes the same operations) are used to encrypt and decrypt the data, has been enhanced hugely since those old and simple ciphers.
Now we have AES to contend with, and for all practical purposes, with reasonable keys, it's unbreakable in usable time. [But if you have a spare universe to exhaust, perhaps you can crack my files.]
For symmetric key cryptography, you do have to give out your key, to the party with whom you plan to exchange data. Of course, you have to protect this key as if it were as important as the data it protects, because it is all that protects your data. [Your attacker can tell what algorithm you use; and if you develop your own algorithm, well, they can tell what that is, too, because crypto algorithm inventors are generally doomed to fail to recognise the flaws in their own algorithms.]
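For a feel of what modern symmetric cryptography looks like in code, here's a minimal sketch using Fernet from the Python cryptography package (AES under the hood; the message is made up):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # this one value is the entire secret
f = Fernet(key)

token = f.encrypt(b"Meet me at the usual place")
print(f.decrypt(token))      # the same key decrypts: that's "symmetric"

# Anyone holding `key` can read and forge messages, so it must travel
# and be stored as carefully as the data it protects.
```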
That's kind of a catch-22 situation: there's really no way, using cryptography, to protect a key-sized piece of data other than by encrypting it with another key.
That's why the British had to invent public key cryptography.
Of course, unlike the Americans, the British managed to keep this a secret; so much so that to this day, many Americans believe their country invented public key cryptography (along with apple pie, mothers and speaking English loudly to foreigners).
With public key cryptography, there are two keys for every cryptographic operation: the public key, and the private key.
OK, I don't think this part is very tricky, but there are several people I've had to explain this to over and over again, so I'll try to take it really slowly.
Of the two keys, there is one key that you are supposed to share with anyone and everyone. To some of you it may come as a surprise that this is the PUBLIC key.
Again, the PUBLIC key is something you can share with anyone and everyone with no known danger to date. You can print it on billboards, put it on your business cards, include it in your email, really you can do anything with it that distributes it to anyone who might want it.
If anything, you might want to make sure that you distribute the public key in a way that allows the recipients to associate it with their opinion of your identity.
But the PRIVATE key? No, no, no, no, no, you do not ever distribute that. You don't even let someone else create it for you. You generate your private key for yourself, and you don't ever tell it to anyone.
The simple reason is that anyone who has your private key can pretend to be you; in fact, for cryptographic purposes, they are you.
So, really simply now:
If you think this is confusing, apparently you are right; even Microsoft's official curriculum for the Windows Server 2003 training courses says that "Alice encrypts the message using Bob's private key". If Alice has Bob's private key, she can exchange any secret message with Bob while they are in bed together that night.
Actually, scratch that; even my wife doesn't have access to my private key, and I don't have access to hers.
There are two operations that you can do with your private key. You can decrypt data, and you can sign data.
Reversing this, there are two operations that you can do with a public key (that would be someone else's public key, not yours). You can encrypt data, and you can verify a signature.
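A sketch of those four operations with RSA, via the Python cryptography package (the padding choices here are just common defaults, not a recommendation):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # this half is safe to publish

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Anyone encrypts with the PUBLIC key; only the PRIVATE key can decrypt.
ciphertext = public_key.encrypt(b"for Bob's eyes only", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)

# Only the PRIVATE key signs; anyone verifies with the PUBLIC key.
signature = private_key.sign(b"I really wrote this", pss, hashes.SHA256())
public_key.verify(signature, b"I really wrote this", pss, hashes.SHA256())
# verify() raises InvalidSignature if the message or signature was tampered with
```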
In many cryptographic exchanges, such as SSL / TLS, and other modern equivalents, asymmetric cryptography is used briefly at the start of each session, so that two parties can identify each other and exchange (or, more commonly, derive) a shared key. This shared key is then used to encrypt the subsequent communications for some time using symmetric key cryptography.
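Gluing the two halves together, that hybrid pattern looks roughly like this; a sketch only, since real protocols such as TLS derive their session keys in a handshake rather than mailing them across:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Bob publishes a public key; Alice uses it once, to protect a session key.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
session_key = Fernet.generate_key()                       # symmetric key
wrapped = bob_private.public_key().encrypt(session_key, oaep)

# The bulk of the conversation is fast symmetric cryptography.
traffic = Fernet(session_key).encrypt(b"the actual conversation")

# Bob unwraps the session key with his private key and reads the traffic.
print(Fernet(bob_private.decrypt(wrapped, oaep)).decrypt(traffic))
```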
For shared-key (aka symmetric) cryptography, you do have to share your keys, but you share them secretly, with only the person with whom you are communicating. If you are trying to protect a communication between you and a partner, you cannot send the keys down the same line that you are going to send the communication down, because an attacker who can steal your communication can also steal your keys.
For asymmetric cryptography, you also have to share your keys, but only your public keys. Again, that's only your public keys that you share. And you do have to share those public keys. Your private keys are used by the various applications that encrypt data on your behalf, or sign data to prove it came from you. Anything outside of that realm that asks you for your private keys is not to be trusted.
Ask an expert if you still have concerns. Because if you give out your private keys, then you have to generate new ones, and distribute new public keys.
In yesterday's post, we talked about how SSL and HTTPS don't provide perfect security for your web surfing needs. You need to make sure that a site is also protecting its applications and credentials.
One of my favourite interview questions for security engineer candidates is to ask what an application developer could use to protect a networked application if SSL wasn't available.
It's an open-ended question: what parts of SSL is the interviewee looking to match, and what parts are they willing to throw away with an alternative (and do they even know what they are throwing away?); and it asks the interviewee to think about how else they can achieve those goals.
I like to hear answers that cover a number of options. I won't provide a perfect answer here, because I'm sure I'll miss something, but here are some of the considerations I would give:
There are a number of different ways to secure network communications, providing for encryption, integrity and authentication; IPsec and VPNs are just two that should spring immediately to mind. These are not universally suitable, as they tend to be all-or-nothing solutions, rather than per-application, but if you expect to see only one application running on the communicating pair of systems (this is relatively common in business communications), this can be acceptable. They are also a considerable effort to set up, and don't always scale to inter-networked situations.
Hey, what's wrong with encrypting and signing a file with PGP or S/MIME, or even WinZip, and sending it through email?
Not a whole lot, surely. We can get into discussions of key distribution and so on, but essentially, this is a solid technique. Maybe not easy to automate, and probably not accepted by everyone the world over, but from a "protected by encryption" standpoint, this is actually fairly defensible.
What I'm really trying to say here is that your application's security rests on an understanding of which protections you can ask from your network (and from your network staff), and which you will have to implement in the application itself. For every protection that is available in the network, that's a little less work you have to do in your application; and for every protection the network does not provide, that's one more thing you have to write into the app itself.
Without knowing what security your network provides between you and all your communicating partners, you can't truly know or guess what security you need to provide in your application. Without knowing what security your application provides, you can't describe what network environment is appropriate to host that application.
We split the world into infrastructure and application so frequently that it's important to remember that we each have to understand a little of the other's world in order to operate safely.
I know, it sounds like complete heresy, but there it is: SSL and HTTPS will not make your web site secure.
Even more appropriate (although I queued up the title of this topic almost a month ago) is this recent piece of news, Top FBI Cyber Cop Recommends New Secure Internet, which appears to make much the opposite point: that all our problems could be fixed if we were only to switch to an Internet in which everyone is identified (something tells me the FBI is not necessarily looking for us to use strong encryption).
There are a number of ways in which an HTTPS-only website, or the HTTPS-only portion of a site, can be insecure. Here's a list of just some of them:
It's been a long time since web servers provided only static content in their pages. Now pretty much every web site has to serve "applications", in which inputs provided by the visitor to the site get processed and turned into outputs.
There are any number of ways in which those inputs can produce bad outputs: Cross Site Scripting (XSS), on which I've posted before; Cross Site Request Forgery (CSRF), which allows an attacker to force you to take actions you didn't intend; and SQL injection, where data behind a web site can be extracted and/or modified. These are just the most commonly known.
Applications can also fail to check credentials, fail to apply access controls, and even fail in some old-fashioned ways like buffer overflows leading to remote code execution.
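To pick on just one of those input-handling failures, SQL injection is usually a one-line fix: let the database driver carry the visitor's input as data, rather than pasting it into the query text. A sketch with Python's built-in sqlite3 (the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3kr1t')")

visitor_input = "alice' OR '1'='1"   # a classic injection attempt

# Vulnerable: the input becomes part of the SQL text, and the attacker's
# quote characters change the meaning of the query.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + visitor_input + "'").fetchall()
print(rows)  # leaks every row

# Safe: the ? placeholder keeps the input as data, whatever it contains.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (visitor_input,)).fetchall()
print(rows)  # []
```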
Providing sensitive information in an application's path, or through parameters passed in a URL, is another common means by which application authors, who think they are protected by using HTTPS, come a significant cropper. URLs, even HTTPS-protected URLs, are often read, logged, and processed at both ends of the connection, and sometimes even in the middle!
Egress filtering in enterprises is often carried out by interrupting the HTTPS communication between client and server, using a locally-deployed trusted root certificate. This quite legitimately allows the egress filtering system to process URLs to determine what's a safe request, and what's a dangerous one. It can also cause information sent in a URL to be exposed. This is one reason why an application developer should avoid using GET requests to perform any data exchange involving user data, or data that the site feels is sensitive.
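In code, that's the difference between these two calls (a sketch using the widely-used Python requests library; the URL and field names are invented):

```python
import requests

# Bad: the card number rides in the URL, where proxies, web server logs
# and egress filters will happily record it.
requests.get("https://shop.example/charge",
             params={"card": "4111111111111111", "amount": "9.99"})

# Better: sensitive fields travel in the request body, which is carried
# inside the HTTPS encryption and is not normally logged as part of the URL.
requests.post("https://shop.example/charge",
              data={"card": "4111111111111111", "amount": "9.99"})
```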
Other path vulnerabilities (mostly fixed these days, but still something that attackers and scanning suites alike feel is worth trying) are those where the path can be changed by embedding extra slash or double-dot characters or sequences. Enough ".." entries in a path, and if the server isn't properly written or managed, an attacker can escape out of the web server's restrictions, and visit the operating system disk. The official term for this is a "path traversal attack".
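The standard defence is to resolve the requested path and check that it still lands inside the document root; a sketch (the directory is hypothetical, and is_relative_to needs Python 3.9 or later):

```python
from pathlib import Path

DOCROOT = Path("/var/www/site").resolve()   # hypothetical document root

def safe_read(requested: str) -> bytes:
    """Serve a file only if it resolves to somewhere under DOCROOT."""
    target = (DOCROOT / requested.lstrip("/")).resolve()
    if not target.is_relative_to(DOCROOT):
        raise PermissionError("path traversal attempt: " + requested)
    return target.read_bytes()

# safe_read("css/site.css")          -> fine
# safe_read("../../../etc/passwd")   -> raises PermissionError
```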
The presence of a padlock (or whatever your web browser shows to indicate an HTTPS, rather than HTTP, connection) indicates a few things:
If you're the sort of person who clicks through browser warnings, all you've managed to confirm is that your communication is encrypted, and that the site you've connected to is trying to convince you it is secure. Note that this is exactly what a fraudulent site will try to do. The padlock isn't everything.
Then think about where your secret information goes. If you're like a lot of users, you'll be using the same password on every site you connect to, or some variation thereof. Just because the site uses SSL does not mean that you can trust it with a password you also use everywhere else.
If your bank doesn't use HTTPS when accepting your logon information, it's a sign that they really aren't terribly interested in protecting that transaction. Maybe you should ask them why.
Many web sites will use HTTPS on parts of the site, and HTTP on others. Observe what they choose to protect, and what they choose to leave public. Is the publicly-transmitted information truly public? Is it something you want other people in the coffee shop or library to know you're browsing?
Week 4 of National Cyber Security Awareness Month, and I'm getting into the more advanced topics of secure communications and protocols.
I figured I couldn't start this topic without something that's very near and dear to me: the security of FTP.
FTP is one of the oldest application protocols for the Internet. You can tell because it has a very low assigned port number (21).
You can also tell because it actually has two assigned port numbers: 20 for ftp-data and 21 for ftp.
In many ways the old days of the Internet were really good, and in much the same ways, those days were bad. From a security perspective, for instance, those days were bad because none of the protocols considered security very much, if at all. Of course, you could look at this as "good" and note that there weren't really a whole lot of reasons to include security protections. Most of the original users were government, military or academic, and in each of these situations there were pretty good sanctions to use against evil-doers.
In the middle ages of the Internet, the security was still missing from many protocols, and people took advantage of them a lot. Additions like SSL were invented, and we are all familiar with using HTTPS on a web site to protect traffic to and from it.
Other protocols were simply shunned, as was the case with FTP, on the basis that no one was interested in updating them; after all, what with the web and all, who needs FTP?
Fast forward to modern day, and we find that FTP has a poor reputation for security. But is it deserved?
In some respects, yes: FTP has had its fair share of security badness in the past. But it's also had its share of updates.
First, there was RFC 1579, Firewall-Friendly FTP. Not much of a security advance: using PASV (passive) mode to open connections, so that it's the server's responsibility to be compatible with its firewall.
Then came RFC 2228, FTP Security Extensions, dealing with additions to FTP to manage encrypted and integrity-protected connections for the data and control channels. Good, but the only protocol supported was Kerberos, and nobody really uses that on the open Internet.
Next, RFC 2577 addressed some of the common areas where FTP implementations suffer from security failings; a definite huge step forward, because finally even new FTP implementations could get things right in terms of many of the security issues seen in older versions.
And recently (OK, so it's six years old this month in RFC form, and had been in development for a few years before that), RFC 4217, Securing FTP with TLS, applied the usual SSL and TLS network protection layers to FTP, building on the work defined in RFC 2228.
I don't know about you, but I'm fairly certain that you will find FTP as it exists today is a far more secure protocol than the one described in, say, the PCI DSS requirements. In fact, if you've implemented an RFC 4217-compliant FTP server, enabled its protections, and made sure it implements the suggestions in RFC 2577, you can make a good case to your PCI auditors (QSAs, to use the technical term) that this is an acceptable and secure method of transferring data.
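Python's standard library speaks RFC 4217 out of the box, which makes for a handy sketch of what FTP over TLS looks like in practice (the host and credentials are made up):

```python
from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")   # hypothetical server
ftps.login("user", "password")      # sends AUTH TLS first, so the
                                    # credentials cross the wire encrypted
ftps.prot_p()                       # PROT P: encrypt the data channel too
ftps.retrlines("LIST")              # directory listing, now protected
ftps.quit()
```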
So, what's holding you back from using FTP in your secure environment now? Anything?
So, what did we learn this week?
Your user name is not a secret
Because the operating system doesn't bother to help you hide user names, and because those user names are used in countless protocols as if they were public information, you're backing a loser if you want to try and act as if the user name is some kind of secret. There is nothing wrong with having predictable user names. If you need more security, make the passwords longer.
Don’t bother renaming the Administrator account
Arguments from other security luminaries notwithstanding, I'm still of the opinion that there really is no benefit to renaming the Administrator account, and it's going to cause plenty of irritation.
A fingerprint is not both claim and proof
Despite being used as both a claim and a proof of identity, a fingerprint really needs to be seen as one or the other, along with other biometrics. Also worth noting are the ADA and other considerations: some people just don't have readable fingerprints, if any at all.
An IP address as an authenticator
Don't do it. Just don't do it. Use IP addresses as a filter, to cut out the noise, but don't rely on them as your only authentication measure, because an IP address doesn't have sufficient rigour to use as an authenticator.
What’s the better firewall – black-hole or RFC compliant?
While it's tempting to think that a black-hole firewall is the best, because it sits silently not responding to unwanted traffic, there are some times when it's important to respond to unwanted traffic with a "go away, I'm not talking to you".
And do, please, leave comments or email to let me know if you're enjoying this series, which is published because October is "National Cyber Security Awareness Month".
So, given the information we have so far, you should be able to answer the question.
There are two schools of thought when it comes to how a firewall should behave in some situations.
The one school says that a firewall should ignore all traffic that reaches it, unless it is traffic that should be passed on. This is known as a "black hole", or "fully stealthed", firewall, because it refuses to send any packets in response to communications it didn't request.
The other school says that a firewall should respond to unexpected traffic exactly like a router that knows it is unable to reach the host being requested. This is the RFC-compliant firewall, because it looks to the RFC documents to decide what should be done in response to each packet it receives.
Black hole firewalls are named after the cosmological entity of the same name, because they suck packets in and never send them back out again.
Much like a black hole, however, their existence can be deduced by the simple absence of light passing through them: a range of IPs that should be responding with reset packets (aka "go away, not listening") to incoming TCP requests is instead simply ignoring them. If the intent of the firewall was to make the attacker lose interest, you've already failed.
The RFC compliant firewall replies to every unwanted TCP connection request with a RST packet, to indicate that the targeted address is not interested in talking.
To a well-behaved TCP connection partner, this is a request to stop all communications and close the connection, without processing any further data.
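From a client's point of view, the two philosophies are easy to tell apart; here's a sketch of a classifying probe (the address is from the documentation range, the ports are arbitrary, and a timeout can also just mean a slow network):

```python
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify a TCP port the way a scanner would."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open (something answered)"
    except ConnectionRefusedError:
        return "closed (got a RST: RFC-compliant behaviour)"
    except socket.timeout:
        return "filtered (silence: black-hole behaviour)"

for port in (22, 23, 8080):
    print(port, probe("192.0.2.1", port))
```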
Which is fine, except all unexpected traffic at a firewall is an attack, right?
OK, I really telegraphed that one.
Some unwanted TCP packets are actually very informative, and the RST message sent in response is a useful part of keeping your systems safe.
Let's suppose someone was able to predict, or otherwise get hold of, the Initial Sequence Numbers we talked about in yesterday's post. That someone, an attacker, would be able to spoof, or forge, a connection coming from your system, and connect to a targeted server. Even if they couldn't see what information was coming back, they might be able to make an attack look like it came from you.
The classic example of "what can I do with a spoofed TCP connection" is sending email (spam, usually) as if from a user of an ISP.
But those packets from the server, which the attacker can't see (but can guess), do go somewhere; and if the Internet is working properly, they go to your computer, or the firewall sitting in front of your computer.
If your firewall is an RFC-compliant firewall, those packets will be seen by the firewall as unexpected and unwanted, and the firewall will send back a RST packet, demanding that the mail server stop trying to communicate with you. This may be the only indication to the server that anything is amiss. Your RST packet, if it arrives quickly enough, will prevent the spam run from being done in your name.
If your firewall is a black-hole firewall, on the other hand, no RST packets will be sent, and the communication between spoofer and server will continue uninterrupted, unabated, and with you potentially on the hook for emails sent "from your IP address".
[Note that the same argument can be made for a network where the attacker is a man in the middle who can read and inject packets, but is unable to remove packets from the stream between you and the server.]
As with many of the other issues I've been talking about this month, there are differing views on this. I'm generally a fan of following the RFCs, because they've usually been arrived at by smart people persuading other smart people to a consensus. I'm sure that you'll run into people with other opinions on this issue, so please feel free to ask more questions and share different opinions. The really fun topics in computing are those where there are multiple answers that could all be right.