Every year, in October, we celebrate National Cyber Security Awareness Month.
Normally, I’m dismissive of anything with the word “Cyber” in it. This is no exception – the adjective “cyber” is a manufactured word, without root, without meaning, and with only a tenuous association to the world it endeavours to describe.
But that’s not the point.
And I approach it from a very basic level.
This is not the place for me to assume you’ve all been reading and understanding security for years – this is where I appeal to readers with only a vague understanding that there’s a “security” thing out there that needs addressing.
This first week is all about Information Security – Cyber Security, as the government and military put it – as our shared responsibility.
I’m a security professional, in a security team, and my first responsibility is to remind the thousands of other employees that I can’t secure the company, our customers, our managers, and our continued joint success, without everyone pitching in just a little bit.
I’m also a customer, with private data of my own, and I have a responsibility to take reasonable measures to protect that data, and by extension, my identity and its association with me. But I also need others to take up their responsibility in protecting me.
This year, I’ve had my various identifying factors – name, address, phone number, Social Security Number (if you’re not from the US, that’s a government identity number that’s rather inappropriately used as proof of identity in too many parts of life) – misappropriated by others, and used in an attempt to buy a car, and to file taxes in my name. So, I’ve filed reports of identity theft with a number of agencies and organisations.
Just today, another breach report arrives, from a company I do business with, letting me know that more data has been lost – this time from one of the organisations charged with actually protecting my identity and protecting my credit.
While companies can – and should – do much more to protect customers (and putative customers), and their data, it’s also incumbent on the customers to protect themselves.
Every day, thousands of new credit and debit cards get issued to eager recipients, many of them teenagers and young adults.
Excited as they are, many of these youths share pictures of their new cards on Twitter or Facebook – occasionally showing both sides of the card. There’s really not much your bank can do if you’re going to react in such a thoughtless way, with a casual disregard for the safety of your data.
Sure, you’re only liable for the first $50 of any use of your credit card, and perhaps of your debit card, but it’s actually much better not to have to track down unwanted charges and dispute them in the first place.
So, I’m going to buy into the first message of National Cyber Security Awareness Month – and I’m going to suggest you do the same:
This is really the base part of all security – before doing a thing, stop a moment. Think about whether it’s a good thing to do, or has negative consequences you hadn’t considered. Connect with other people to find out what they think.
I’ll finish tonight with some examples where stopping a moment to think, and connecting with others to pool knowledge, will improve your safety and security online. More tomorrow.
The most common passwords are “12345678” and “password” – which means that many people are using passwords that simple. Many more people use stronger passwords, but they still make mistakes that a little thought could prevent.
Passwords leak – either from their owners, or from the systems that use those passwords to recognise the owners.
When they do, those passwords – and the data associated with them – can then be used to log on to other sites those same owners have visited, either because the passwords are identical, or because they are easily predicted. If my password at Adobe is “This is my Adobe password”, well, that’s strong(ish), but it also gives a hint as to what my Amazon password is – and when you crack the Adobe password leak (which is already available), you might be able to log on to my Amazon account.
Creating unique passwords – and yes, writing them down (or better still, storing them in a password manager), and keeping them safe – allows you to ensure that leaks of your passwords don’t spread to your other accounts.
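The point about unique passwords can be sketched in a few lines. Here’s a minimal illustration using Python’s standard `secrets` module – the site names are just placeholders, and in practice a password manager would do this generation and storage for you:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique, unrelated password per site -- store these in a password
# manager; never derive one password from another.
passwords = {site: generate_password() for site in ("adobe", "amazon")}
print(passwords["adobe"] != passwords["amazon"])  # overwhelmingly likely True
```

Because each password is generated independently, a leak at one site tells an attacker nothing about your password anywhere else.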
There are exciting events which happen to us every day, and which we want to share with others.
That’s great, and it’s what Twitter and Facebook are there FOR. All kinds of social media are available for you to share information with your friends.
Unfortunately, it’s also where a whole lot of bad people hang out – and some of those bad people are, unfortunately, your friends and family.
Be careful what you share, and if you’re sharing about others, get their permission too.
If you’re sharing about children, contemplate that there are predators out there looking for the information you may be giving out. There’s one living just up the road, I can assure you. They’re almost certainly safely withdrawn, and you’re protected from them by natural barriers and instincts. But you have none of those instincts on Facebook unless you stop, think and connect.
So don’t post addresses, locations, your child’s phone number, and really limit things like names of children, friends, pets, teachers, etc – imagine that someone will use that as ‘proof’ to your child of their safety. “It’s OK, I was sent by Aunt Josie, who’s waiting for you to come and see Dobbie the cat.”
Bob’s going off on vacation for a month.
Just in case, while he’s gone, he’s left you his password, so that you can log on and access various files.
Two months later, the office gets raided by the police. They’ve traced a child porn network to your company. To Bob.
Well, actually, to Bob and to you, because the system can’t tell the difference between Bob and you.
Don’t share accounts. Make Bob learn (with the IT department’s help) how to share portions of his networked files appropriately. It’s really not all that hard.
I develop software. The first thing I write is always a basic proof of concept.
The second thing I write – well, who’s got time for a second thing?
Make notes in comments every time you skip a security decision, and make those notes in such a way that you can revisit them and address them – or at least, count them – prior to release, so that you know how deep in the mess you are.
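One way to make those notes countable is a greppable marker. This is just a sketch – the marker name `SECDEBT` is my own invention, and any consistent tag works – but it shows how a pre-release scan might tally the deferred decisions:

```python
import re

# Hypothetical source file with a (made-up) SECDEBT marker convention.
SOURCE = '''
def login(user, password):
    # SECDEBT: passwords compared in plain text, hash before release
    return password == lookup(user)

def export(path):
    # SECDEBT: no path validation yet
    return open(path).read()
'''

def count_security_debt(text):
    """Return every deferred security decision noted in the code."""
    return re.findall(r"SECDEBT:\s*(.*)", text)

notes = count_security_debt(SOURCE)
print(f"{len(notes)} security decisions still open")
for note in notes:
    print(" -", note)
```

Run the same scan in your build pipeline and the count of open security decisions becomes a release gate rather than a surprise.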
In a classic move, clearly designed to introduce National Cyber Security Awareness Month with quite a bang, the US Government has shut down, making it questionable as to whether National Cyber Security Awareness Month will actually happen.
In case the DHS isn’t able to make things happen without funding, here’s what they originally had planned:
I’m sure you’ll find me and a few others keen to engage you on Information Security this month in the absence of any functioning legislators.
Maybe without the government in charge, we can stop using the “C” word to describe it.
The “C” word I’m referring to is, of course, “Cyber”. Bad word. Doesn’t mean anything remotely like what people using it think it means.
The main page of the DHS.GOV web site actually does carry a small banner indicating that there’s no activity happening at the web site today.
So, there may be many NCSAM events, but DHS will not be a part of them.
For my last post in the National Cyber Security Awareness Month, I’d like to expound on an important maxim for security.
If you can’t handle a customer’s credit card in a secure fashion, you shouldn’t be handling the customer’s credit card.
If a process is too slow once you add the necessary security, the process was always too slow, and cannot yet be done effectively by modern computers (or at least by the computers you’re using).
If you enable a new convenience feature, and the rate of security failures increases as a result, the convenience is more to the hackers than to the users, and the feature should be removed or revisited.
Sometimes there’s nothing to do but to say “Oops, that didn’t work”. Find something else that does.
If you’re writing software code, expect to encounter failing conditions – disk full, network unresponsive, keyboard stuck, database corrupt, power outage – all these are far more common than software developers anticipate.
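Treating those failing conditions as expected cases changes the shape of the code. Here’s a minimal sketch in Python – the function name and messages are illustrative, not from any particular codebase:

```python
import errno

def save_report(path, data):
    """Write data to disk, treating 'disk full' as an expected condition."""
    try:
        with open(path, "w") as f:
            f.write(data)
        return True
    except OSError as e:
        if e.errno == errno.ENOSPC:      # disk full: common, not exceptional
            print("Disk full - report not saved, try another volume")
        elif e.errno == errno.EACCES:    # permissions changed underneath us
            print("Permission denied - check ownership")
        else:
            raise                        # anything unanticipated still surfaces
        return False
```

The conditions you anticipate get a graceful path; the ones you didn’t still fail loudly instead of silently corrupting state.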
Failure is not the exception, it is a part of life in an uncertain universe.
Other people will fail you.
This is not always their intent, nor is it necessarily something that they will recognise. Do not punish unintentional failure as if it was an intentional insult. Educate, where possible, redirect otherwise.
Where failure is intentional, be firm and decisive. Do not allow deliberate failure to continue unhindered.
Innovation is doing that which has never been done before.
As a result, no one knows how to do it correctly. You will fail, a lot. If you are always right, it is because you are doing something you already know how to do.
Part of being a security expert is the ability to see where people, process and technology are likely to fail, and how someone might take advantage of that, or cause you disadvantage.
Turn “I can’t imagine how that might fail” into “I can see seven different ways this could screw up, and I’ve got plans for eight of them”.
And yes, I failed to finish writing this in National Cyber Security Awareness Month.
It seems like a strange question for me to ask, given that in a number of my National Cyber Security Awareness Month posts to date, I have been advising you to use SSL or TLS to protect your communications. [Remember: TLS is the new name for SSL, but most people refer to it still as SSL, so I will do the same below]
But it’s a question I get asked on a fairly regular basis, largely as a result of news articles noting that there has been some new attack or other on SSL that breaks it in some way.
To be fair, I’m not sure that I would expect a journalist – even a technology journalist – to understand SSL in such a way that they could give a good idea as to how broken it may or may not be after each successful attack. That means that the only information they’re able to rely on is the statement given to them by the flaw’s discoverers. And who’s going to go to the press and say “we’ve found a slight theoretical flaw in SSL, probably not much, but thought you ought to know”?
First, the good news.
SSL is a protocol framework around cryptographic operations. That means that, rather than describing a particular set of cryptography that can’t be extended, it describes how to describe cryptography to be used, so that it can be extended when new algorithms come along.
So, when a new algorithm arrives, or a new way of using an existing algorithm (how can you tell the difference?), SSL can be updated to describe that.
So, in a sense, SSL will never be broken for long, and can always be extended to fix issues as they are detected.
Of course, SSL is really only a specification, and it has to be implemented before it can actually be used. That means that when SSL is updated to fix flaws, theoretical or practical, every implementation has to be changed to catch up to the new version.
And implementers don’t like to change their code once they have it working.
So when a new theoretical flaw comes along, the SSL designers update the way SSL works, increasing the version number when they have to.
The implementers, on the other hand, tend to wait until there is a practical flaw before updating to support the new version.
This means that whenever a practical break is found, you can bet it will be at least several weeks before you can see it fixed in the versions you actually use.
The use of SSL assumes that your communications may be monitored, intercepted and altered. As such, don’t ever rely on a statement to the effect that “this breach of SSL is difficult to exploit, because you would have to get between the victim and his chosen site”. If that weren’t possible, we wouldn’t need SSL in the first place.
Having said that, on a wired network, you are less likely to see interception of the type that SSL is designed to prevent. As such, even a broken SSL on wired networks is probably secure for the time it takes everyone to catch up to fixing their flaws.
On a wireless network, any flaws in SSL are significant – but as I’ve noted before, if you connect immediately to a trusted VPN, your wireless surfing is significantly safer, pretty much to the same level as you have on your home wired network.
In summary then:
SSL is frequently, and in some senses never, broken. There are frequently attacks, both theoretical and physical, on the SSL framework. Theoretical attacks are fixed in the specifications, often before they become practical. Practical attacks are fixed in implementations, generally by adopting the tack that had been suggested in the specifications while the attack was still theoretical. At each stage, the protocol that prevents the attack is still SSL (or these days, strictly, TLS), but it requires you keep your computers up to date with patches as they come out, and enable new versions of SSL as they are made available.
If you’re on a wired network, the chances of your being attacked are pretty slim. If you’re on a wireless network, your chances of being attacked are high, so make sure you are using SSL or an equivalent protocol, and for extra protection, use a VPN to connect to a trusted wired network.
There are some people who seem to get this right away, and others to whom I seem to have been explaining this concept for years. [And you know who you are, if you’re reading this!]
Whenever you talk about keys used for encryption, you have to figure out how you’re going to keep those keys, and whether or not you need to protect them.
And the answer depends (doesn’t everything?) – and depends on what kind of encryption algorithm you are using.
Let’s start with the easy kind, the one we’re all familiar with.
This is the sort of code that I’m sure we all played with as children. The oh-so-secret code (well, we didn’t know about frequency counting or cryptanalysis back then), where you and your best friend knew the secret code and the secret key. [Probably a Caesar cipher, although I used a Vigenere cipher, myself]
Well, those codes, like us, have grown up. The category of shared-key cryptography – also known as symmetric cryptography, because the same keys (and sometimes the same operations) are used to encrypt and decrypt the data – has been enhanced hugely since those old and simple ciphers.
Now we have AES to contend with, and for all practical purposes, with reasonable keys, it’s unbreakable in usable time. [But if you have a spare universe to exhaust, perhaps you can crack my files]
For symmetric key cryptography, you do have to give out your key – to the party with whom you plan to exchange data. Of course, you have to protect this key as if it was as important as the data it protects, because it is all that protects your data. [Your attacker can tell what algorithm you use, and if you develop your own algorithm, well, they can tell what that is, too, because crypto algorithm inventors are generally doomed to fail to recognise the flaws in their own algorithm.]
That’s kind of a catch-22 situation – there’s really no way using cryptography to protect a key-sized piece of data outside of encrypting it with another key.
That’s why the British had to invent public key cryptography.
Of course, unlike the Americans, the British managed to keep this a secret – so much so that to this day, many Americans believe their country invented public key cryptography (along with apple pie, mothers and speaking English loudly to foreigners).
With public key cryptography, there are two keys for every cryptographic operation – the public key, and the private key.
OK, I don’t think this part is very tricky, but there are several people I’ve had to explain this to over and over again, so I’ll try to take it really slowly.
Of the two keys, there is one key that you are supposed to share with anyone and everyone. To some of you it may come as a surprise that this is the PUBLIC key.
Again, the PUBLIC key is something you can share with anyone and everyone with no known danger to date. You can print it on billboards, put it on your business cards, include it in your email, really you can do anything with it that distributes it to anyone who might want it.
In a pinch, you might want to make sure that you distribute the public key in a way that allows the recipients to associate it with their opinion of your identity.
But the PRIVATE key – no, no, no, no, no, you do not ever distribute that. You don’t even let someone else create it for you. You generate your private key for yourself, and you don’t ever tell it to anyone.
The simple reason is that anyone who has your private key can pretend to be you – in fact, for cryptographic purposes, they are you.
So, really simply now: the PUBLIC key is for sharing; the PRIVATE key is for keeping private.
If you think this is confusing, apparently you are right – even Microsoft’s official curriculum for the Windows Server 2003 training courses says that “Alice encrypts the message using Bob’s private key” – if Alice has Bob’s private key, she can exchange any secret message with Bob while they are in bed together that night.
Actually, scratch that – even my wife doesn’t have access to my private key, and I don’t have access to hers.
There are two operations that you can do with your private key. You can decrypt data, and you can sign data.
Reversing this, there are two operations that you can do with a public key – that would be someone else’s public key, not yours. You can encrypt data, and you can verify a signature.
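Those four operations can be shown with a toy RSA example. This uses the classic textbook primes and is utterly insecure – the numbers are tiny purely so the asymmetry is visible – but the roles of the two keys are exactly as described above:

```python
# Toy RSA with tiny primes -- for illustration only, utterly insecure.
p, q = 61, 53
n = p * q                           # 3233, shared by both keys
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (2753)

public_key, private_key = (e, n), (d, n)

def encrypt(m, pub):        # anyone can do this, with the PUBLIC key
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):       # only the PRIVATE key holder can do this
    d, n = priv
    return pow(c, d, n)

def sign(m, priv):          # only the PRIVATE key holder can do this
    d, n = priv
    return pow(m, d, n)

def verify(m, sig, pub):    # anyone can check, with the PUBLIC key
    e, n = pub
    return pow(sig, e, n) == m

message = 42
assert decrypt(encrypt(message, public_key), private_key) == message
assert verify(message, sign(message, private_key), public_key)
```

Notice that signing is decryption’s twin: the private key performs it, and anyone holding the public key can check it – which is exactly why handing out your private key makes someone else cryptographically you.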
In many cryptographic exchanges, such as SSL / TLS, and other modern equivalents, asymmetric cryptography is used briefly at the start of each session, so that two parties can identify each other and exchange (or, more commonly, derive) a shared key. This shared key is then used to encrypt the subsequent communications for some time using symmetric key cryptography.
For shared-key (aka symmetric) cryptography, you do have to share your keys – but you share them secretly with only the person to whom you are communicating. If you are trying to protect a communication between you and a partner, you cannot send the keys down the same line that you are going to send the communication down, because an attacker who can steal your communication can also steal your keys.
For asymmetric cryptography, you also have to share your keys – but only your public keys. Again, that’s only your public keys that you share. And you have to share those public keys. Your private keys are used by the various applications that encrypt data on your behalf, or to sign data to prove it came from you. Anything outside of that realm that asks you for your private keys is not to be trusted.
Ask an expert if you still have concerns. Because if you give out your private keys, then you have to generate new ones, and distribute new public keys.
In yesterday’s post, we talked about how SSL and HTTPS don’t provide perfect security for your web surfing needs. You need to make sure that a site is also protecting its applications and credentials.
One of my favourite interview questions for security engineer candidates is to ask what an application developer could use to protect a networked application if SSL wasn’t available.
It’s an open ended question – what parts of SSL is the interviewee looking to match, and what parts are they willing to throw away with an alternative (and do they even know what they are throwing away?); and it asks the interviewee to think about how else they can achieve those goals.
I like to hear answers that cover a number of options. I won’t provide a perfect answer here, because I’m sure I’ll miss something, but here are some of the considerations I would give:
There are a number of different ways to secure network communications, providing for encryption, integrity and authentication – IPsec and VPN are just two methods that should spring immediately to mind. These are not universally suitable, as they tend to be all-or-nothing solutions, rather than per-application, but if you expect to see only one application running on the communicating pair of systems (this is relatively common in business communications), this can be acceptable. These are also a considerable effort to set up, and don’t always scale to inter-networked situations.
Hey, what’s wrong with encrypting and signing a file with PGP or S/MIME, or even WinZip, and sending it through email?
Not a whole lot, surely. We can get into discussions of key distribution and so on, but essentially, this is a solid technique. Maybe not easy to automate, and probably not accepted by everyone the world over, but from a “protected by encryption” standpoint, this is actually fairly defensible.
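If encryption of the channel isn’t available, even integrity protection at the application layer is worth something. Here’s a minimal sketch using Python’s standard `hmac` module – note the key is assumed to have been shared out of band, never down the same channel as the data, and this protects integrity and authenticity only, not confidentiality:

```python
import hmac
import hashlib

# Key assumed shared out of band (e.g. in person) -- never send it
# down the same channel as the data it protects.
shared_key = b"pre-shared secret key"

def protect(payload, key):
    """Return the payload plus an HMAC tag the recipient can verify."""
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def check(payload, tag, key):
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

data, tag = protect(b"quarterly-figures.csv contents", shared_key)
print(check(data, tag, shared_key))          # True
print(check(b"tampered data", tag, shared_key))  # False
```

A recipient who verifies the tag knows the file arrived unmodified and came from someone holding the key – two of the guarantees SSL would otherwise have provided.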
What I’m really trying to say here is that your application’s security rests on an understanding of which protections you can ask from your network – and from your network staff – and which you will have to implement in the application itself. For every protection that is available in the network, that’s less work you have to do in your application; and for every protection the network does not provide, that’s one more thing you have to write into the app itself.
Without knowing what security your network provides between you and all your communicating partners, you can’t truly know or guess what security you need to provide in your application. Without knowing what security your application provides, you can’t describe what network environment is appropriate to host that application.
We split the world into infrastructure and application so frequently, that it’s important to remember that we each have to understand a little of the other’s world in order to safely operate.
I know, it sounds like complete heresy, but there it is – SSL and HTTPS will not make your web site secure.
Even more appropriate (although I queued the title of this topic up almost a month ago) is this recent piece of news: Top FBI Cyber Cop Recommends New Secure Internet, which appears to make much the opposite point, that all our problems could be fixed if we were only to switch to an Internet in which everyone is identified (something tells me the FBI is not necessarily looking for us to use strong encryption).
There are a number of ways in which an HTTPS-only website, or HTTPS-only portion of a site, can be insecure. Here’s a list of just some of them:
It’s been a long time since web servers provided only static content in their pages. Now it’s the case that pretty much every web site has to serve “applications”, in which inputs provided by the visitor to the site are processed and reflected in its outputs.
There are any number of ways in which those inputs can produce bad outputs – Cross Site Scripting (XSS), on which I’ve posted before; Cross Site Request Forgery, allowing an attacker to force you to take actions you didn’t intend; SQL injection, where data behind a web site can be extracted and/or modified – these are just the most commonly known.
Applications can also fail to check credentials, fail to apply access controls, and even fail in some old-fashioned ways like buffer overflows leading to remote code execution.
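None of these flaws care whether the connection was HTTPS. SQL injection is the easiest to show in a few lines – here’s a sketch using Python’s built-in `sqlite3`, with a made-up table, demonstrating both the hole and the standard fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3kr1t')")

attacker_input = "' OR '1'='1"   # classic injection attempt

# Vulnerable: string concatenation lets the input rewrite the query.
unsafe = f"SELECT secret FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())   # leaks every row: [('s3kr1t',)]

# Safe: a placeholder keeps the input as data, never as SQL.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # no match: []
```

The injection succeeds over an encrypted connection exactly as well as over a plain one – SSL faithfully delivers the attacker’s input to the vulnerable application.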
Providing sensitive information in an application’s path, or through parameters passed in a URL, is another common means by which application authors, who think they are protected by using HTTPS, come a significant cropper. URLs – even HTTPS protected URLs – are often read, logged, and processed at both ends of the connection, and sometimes even in the middle!
Egress filtering in enterprises is often carried out by interrupting the HTTPS communication between client and server, using a locally-deployed trusted root certificate. This quite legitimately allows the egress filtering system to process URLs to determine what’s a safe request, and what’s a dangerous one. This can also cause information sent in a URL to be exposed. This is one reason why an application developer should avoid using GET requests to perform any data exchange involving user data, or data that the site considers sensitive.
Other path vulnerabilities – mostly fixed these days, but still something that attackers and scanning suites alike feel is worth trying – are those where the path can be changed by embedding extra slash or double-dot characters or sequences. Enough “..” entries in a path, and if the server isn’t properly written or managed, an attacker can escape out of the web server’s restrictions, and visit the operating system disk. The official term for this is a “path traversal attack”.
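The defence against path traversal is to resolve the requested path and confirm it still lives under the document root. Here’s a minimal sketch – `WEB_ROOT` is a hypothetical document root, and a real server would do this alongside its other access checks:

```python
import os

WEB_ROOT = "/var/www/site"   # hypothetical document root

def resolve_request(requested):
    """Map a requested URL path to a file under WEB_ROOT, or refuse it."""
    candidate = os.path.normpath(os.path.join(WEB_ROOT, requested.lstrip("/")))
    # After normalising away any '..' sequences, the result must still
    # live under the document root -- otherwise it's a traversal attempt.
    if os.path.commonpath([WEB_ROOT, candidate]) != WEB_ROOT:
        raise PermissionError(f"path traversal attempt: {requested!r}")
    return candidate

print(resolve_request("images/logo.png"))   # /var/www/site/images/logo.png
try:
    resolve_request("../../etc/passwd")
except PermissionError as e:
    print(e)                                # refused
```

The key point is that the check happens after normalisation: testing the raw string for “..” is easy to bypass with encodings, but a resolved path either sits under the root or it doesn’t.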
The presence of a padlock – or whatever your web browser shows to indicate an HTTPS, rather than HTTP, connection – indicates a few things, but fewer than you might hope.
If you’re the sort of person who clicks through browser warnings, all you’ve managed to confirm is that your communication is encrypted, and the site you’ve connected to is trying to convince you it is secure. Note that this is exactly what a fraudulent site will try to do. The padlock isn’t everything.
Then think about where your secret information goes. If you’re like a lot of users, you’ll be using the same password on every site you connect to, or some variation thereof. Just because the site uses SSL does not mean that your password is handled safely once it gets there.
If your bank doesn’t use HTTPS when accepting your logon information, it’s a sign that they really aren’t terribly interested in protecting that transaction. Maybe you should ask them why.
Many web sites will use HTTPS on parts of the site, and HTTP on others. Observe what they choose to protect, and what they choose to leave public. Is the publicly-transmitted information truly public? Is it something you want other people in the coffee shop or library to know you’re browsing?
Week 4 of National Cyber Security Awareness Month, and I’m getting into the more advanced topics of secure communications and protocols.
I figured I couldn’t start this topic without something that’s very near and dear to me – the security of FTP.
FTP is one of the oldest application protocols for the Internet. You can tell because it has a very low assigned port number (21).
You can also tell, because it actually has two assigned port numbers – 20 for ftp-data and 21 for ftp.
In many ways the old days of the Internet were really good, and in much the same ways, those days were bad. From a security perspective, for instance, those days were bad because none of the protocols considered security very much, if at all. Of course, you could look at this as ‘good’ and note that there weren’t really a whole lot of reasons to include security protections. Most of the original users were government, military or academic, and in each of these situations there were pretty good sanctions to use against evil-doers.
In the middle ages of the Internet, the security was still missing from many protocols, and people took advantage of them a lot. Additions like SSL were invented, and we are all familiar with using HTTPS on a web site to protect traffic to and from it.
Other protocols were simply shunned, as was the case with FTP, on the basis that no one was interested in updating them – after all, what with the web and all, who needs FTP?
Fast forward to modern day, and we find that FTP has a poor reputation for security. But is it deserved?
In some respects, yes – FTP has had its fair share of security badness in the past. But it’s also had its share of updates.
First, there was RFC 1579, Firewall Friendly FTP. Not much of a security advance: it uses PASV (passive) mode to open data connections, so that it’s the server’s responsibility to be compatible with its firewall.
Then came RFC 2228, FTP Security Extensions, dealing with additions to FTP to manage encrypted and integrity-protected connections for data and control channel. Good, but the only protocol supported is Kerberos, and nobody really uses that on the open Internet.
Next, RFC 2577, which addresses some of the common areas where FTP implementations suffer from security failings – a definite huge step forward, because finally even new FTP implementations could get things right in terms of many of the security issues seen in older versions.
And recently (OK, so it’s six years old this month in RFC form, and has been developed for a few years before then), RFC 4217, on Securing FTP with TLS – applies the usual SSL and TLS network protection layers to FTP, basing it on the work defined in RFC 2228.
I can’t prove it, but I’m fairly certain that you will find FTP as it exists today is a far more secure protocol than the one described in, say, the PCI DSS requirements. In fact, if you’ve implemented an RFC 4217 compliant FTP server, enabled its protections, and made sure it implements the suggestions in RFC 2577, you can make a good case to your PCI Auditors (QSA, to use the technical term) that this is an acceptable and secure method of transferring data.
So, what’s holding you back from using FTP in your secure environment now? Anything?
So, what did we learn this week?
Because the operating system doesn’t bother to help you hide user names, and because those user names are used in countless protocols as if they were public information, you’re backing a loser if you want to try and act as if the user name is some kind of secret. There is nothing wrong with having predictable user names. If you need more security, make the passwords longer.
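The arithmetic behind “make the passwords longer” is worth a glance. A rough sketch – entropy in bits for a randomly chosen password grows linearly with length, so length buys far more than keeping a user name secret ever could:

```python
import math

def entropy_bits(length, alphabet_size):
    """Bits of entropy in a randomly chosen password of the given length."""
    return length * math.log2(alphabet_size)

# Lower-case letters only (26 symbols):
print(round(entropy_bits(8, 26)))    # 38 bits
print(round(entropy_bits(16, 26)))   # 75 bits -- doubling length doubles the bits

# A predictable user name adds nothing here; a longer password adds plenty.
```

Every extra character multiplies the attacker’s search space by the alphabet size, which is why password length is the cheapest security upgrade available.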
Arguments from other security luminaries notwithstanding, I’m still of the opinion that there really is no benefit to renaming the Administrator account, and it’s going to cause plenty of irritation.
Despite being used as both a claim of identity and a proof of identity, a fingerprint – like other biometrics – really needs to be seen as one or the other. Also worth noting are the ADA and other considerations that some people just don’t have readable fingerprints, if any at all.
Don’t do it. Just don’t do it. Use IP addresses as a filter, to cut out the noise, but don’t rely on that as your only authentication measure, because an IP address doesn’t have sufficient rigour to use as an authenticator.
While it’s tempting to think that a black-hole firewall is the best, because it sits silently not responding to unwanted traffic, there are some times when it’s important to respond to unwanted traffic with a “go away, I’m not talking to you”.
And do, please, leave comments or email to let me know if you’re enjoying this series, which is published because October is “National Cyber Security Awareness Month”.
So, given the information we have so far, you should be able to answer the question.
There are two schools of thought when it comes to how a firewall should behave in some situations.
The one school says that a firewall should ignore all traffic that reaches it, unless it is traffic that should be passed on. This is known as a “black hole”, or “fully stealthed” firewall, because it refuses to send any packets in response to communications it didn’t request.
The other school says that a firewall should respond to unexpected traffic exactly like a router that knows it is unable to reach the host being requested. This is the RFC-compliant firewall, because it looks to the RFC documents to decide what should be done in response to each packet it receives.
Black hole firewalls are named after the cosmological entity of the same name, because they suck packets in and never send them back out again.
Much like a black hole, however, their existence can be deduced by the simple absence of light passing through them – a range of IPs that should be responding with reset packets (aka “go away, not listening”) to incoming TCP requests is instead simply ignoring them. If the intent of the firewall was to make the attacker lose interest, you’ve already failed.
The RFC compliant firewall replies to every unwanted TCP connection request with a RST packet, to indicate that the targeted address is not interested in talking.
To a well-behaved TCP connection partner, this is a request to stop all communications and close the connection, without processing any further data.
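You can observe the difference between the two firewall styles from the client’s side with a few lines of Python. Connecting to a closed local port draws an immediate RST (the refusal), whereas a black-hole firewall would leave the client hanging until its own timeout expires:

```python
import socket

# Find a local TCP port that is definitely closed: bind one, note it, close it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

client = socket.socket()
client.settimeout(5)
try:
    client.connect(("127.0.0.1", closed_port))
except ConnectionRefusedError:
    # The OS answered with a RST: "go away, I'm not talking to you".
    print("refused immediately (RST received)")
except socket.timeout:
    # A black-hole firewall would produce this instead: silence until timeout.
    print("timed out (packets black-holed)")
finally:
    client.close()
```

The RST costs the sender nothing and settles the matter in one round trip; the black hole leaves every peer – legitimate or not – waiting out a timeout.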
Which is fine, except all unexpected traffic at a firewall is an attack, right?
OK, I really telegraphed that one.
Some unwanted TCP packets are actually very informative, and the RST message sent in response is a useful part of keeping your systems safe.
Let’s suppose someone was able to predict, or otherwise get a hold of, the Initial Sequence Numbers we talked about in yesterday’s post. That someone, an attacker, would be able to spoof, or forge, a connection coming from your system, and connect to a targeted server. Even if they couldn’t see what information was coming back, they might be able to make an attack look like it came from you.
The classic example of “what can I do with a spoofed TCP connection” is that of sending email – spam, usually – from the user of an ISP.
But those packets from the server, that the attacker can’t see (but can guess), do go somewhere – and if the Internet is working properly, they go to your computer, or the firewall sitting in front of your computer.
If your firewall is an RFC-compliant firewall, those packets will be seen by the firewall as unexpected and unwanted – and the firewall will send back a RST packet, demanding that the server stop trying to communicate with you. This may be the only indication to the server that anything is amiss. Your RST packet, if it arrives quickly enough, will prevent the spam run being done in your name.
If your firewall is a black-hole router, on the other hand, no RST packets will be sent, and the communication between spoofer and server will continue uninterrupted, unabated, and with you potentially on the hook for emails sent “from your IP address”.
[Note that the same argument can be made for a network where the attacker is a man in the middle who can read and inject packets, but is unable to remove packets from the stream between you and the server.]
As with many of the other issues I’ve been talking about this month, there are differing views on this. I’m generally a fan of following the RFCs, because they’ve usually been arrived at by smart people persuading other smart people to a consensus. I’m sure that you’ll run into people with other opinions on this issue, so please feel free to ask more questions and share different opinions. The really fun topics in computing are those where there are multiple answers that could all be right.