FTP is Secure; Is Your FTP Server Secure?

Stupid spammer is stupid, spamming me his stupid spam.

As far as I can tell, I have had no interactions with either Biscom or Mark Eaton. And yet, he sends me email every couple of months or so, advertising his company’s product, BDS. I class that as spam, and usually I delete it.

Today, I choose instead to comment on it.

Here’s the text of his email:

Alun

Although widely used as a file transfer method, FTP may leave users non-compliant with some federal and state regulatory requirements. Susceptible to hacking and unauthorized access to private information, FTP is being replaced with more secure file transfer technologies. Companies seeking ways to prevent data breaches and keep confidential information private should consider these FTP risks:

» FTP passwords are sent in clear text
» Files transferred with FTP are not encrypted
» Unpatched FTP servers are vulnerable to malicious attacks

Biscom Delivery Server (BDS) is a secure file transfer solution that enables users to safely exchange and transfer large files while maintaining a complete transaction and audit trail. Because BDS balances an organization’s need for security – encrypting files both at rest and in transit – without requiring knowledge workers to change their accustomed business processes and workflows, workers can manage their own secure and large file delivery needs. See how BDS works.

I would request 15 minutes of your time to mutually explore on a conference call if BDS can meet your current and future file transfer requirements. To schedule a time with me, please view my calendar here or call my direct line at 978-367-3536. Thank you for the opportunity and I look forward to a brief call with you to discuss your requirements in more detail.

Best regards,
Mark

Better than most spammers, I suppose, in that he spelled my name correctly. That’s about the only correct statement in the entire email, however. It’s easy to read this and to assume that this salesman and his company are deliberately intending to deceive their customers, but I prefer to assume that he is merely misinformed, or chose his words poorly.

In that vein, here are a couple of corrections I offer (I use “FTP” as shorthand for “the FTP protocol and its standard extensions”):

  • “FTP may leave users non-compliant”
    • I have yet to see a standard that states that FTP is banned outright. While FTP is specifically mentioned in the PCI DSS specification, it is always with language such as “An example of an insecure service, protocol, or port is FTP, which passes user credentials in clear-text.” FTP has not required user credentials to be passed in clear text for over a decade. An FTP server with a requirement that all credentials are protected (using encryption, one-time password, or any of the other methods of securing credentials in FTP) would be accepted by PCI auditors, and by PCI analysis tools.
  • “Susceptible to hacking and unauthorized access to private information”
    • BDS, Biscom’s suggested replacement for FTP, relies on the use of email to send a URL, and HTTPS to retrieve files.
    • email is eminently susceptible to hacking, as it is usually sent in the clear (to be fair, there are encryption technologies available for email, but they are seldom used in the kind of environments under discussion)
    • HTTPS is most definitely also susceptible to hacking and unauthorised access.
  • “FTP is being replaced with more secure file transfer technologies”
    • FTP may be replaced, that’s for certain, but I have not seen it replaced with more secure, reliable, or standard file transfer technologies.
      • Biscom essentially puts an operational framework around the downloading of files from a web server. It doesn’t add any security that FTP is lacking.
    • One more secure file transfer technology could be, of course, a more modern FTP server and client, which handles modern definitions of the FTP protocol and its extensions – for instance, the standard for FTP over TLS (and SSL), known as FTPS, which allows for encryption of credential information and file transfers, as well as the use of client and server certificates (as with HTTPS) to mutually authenticate client and server.
      • While the FTPS RFC was finalised only in 2005, it had not changed substantially for several years prior to that. WFTPD Pro included functional and interoperable implementations in versions dating back to 2001.
  • “FTP passwords are sent in clear text”
    • “People walk out into traffic without looking” – equally true, but equally open to misinterpretation. Most people don’t walk out into traffic without looking; most FTP servers are able to refuse clear text logons.
    • FTP passwords are sent in clear text in the most naïve implementations of FTP. This is true. But FTP servers have, for a decade and more, been able to use encryption and authentication technologies such as Kerberos, SSL and TLS, to prevent the transmission of passwords in clear text. If passwords are being sent in clear text by an FTP implementation, that is a configuration issue.
  • “Files transferred with FTP are not encrypted”
    • Again, for a decade and more, FTP servers have encrypted files using Kerberos, SSL or TLS.
    • If your FTP transmission does not encrypt files when it needs to, it is because of faulty configuration.
  • “Unpatched FTP servers are vulnerable to malicious attacks”
    • So are unpatched HTTP servers; so are unpatched email servers and clients; the very technology on which BDS depends.
    • Are unpatched BDS servers invulnerable? I thoroughly doubt that any company could make such a claim.
  • “BDS [operates] without requiring knowledge workers to change their accustomed business processes and workflows”
    • … unless your accustomed business process is to use your FTP server as a secure and appropriate place in which to exchange files.
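To make the encrypted-credentials point concrete, here is a minimal sketch of an FTPS download using Python’s standard ftplib. The host, account, and file names are placeholders, not a real service:

```python
import ssl
from ftplib import FTP_TLS

# Build a TLS context that refuses legacy protocol versions.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

def fetch_secure(host, user, password, remote_name, local_name):
    """Download one file over explicit FTPS (the AUTH TLS mechanism of RFC 4217)."""
    with FTP_TLS(context=ctx) as ftps:
        ftps.connect(host, 21)
        ftps.auth()                 # upgrade the control channel *before* login
        ftps.login(user, password)  # the password now travels encrypted
        ftps.prot_p()               # encrypt the data channel too
        with open(local_name, "wb") as f:
            ftps.retrbinary(f"RETR {remote_name}", f.write)
```

The auth() call secures the control channel before the password is sent, and prot_p() encrypts the file transfer itself – exactly the protections the email claims FTP lacks.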

Finally, some things that BDS can’t, or doesn’t appear to, do, but which are handled with ease by FTP servers. (All of these are based on the “How BDS works” page. As such, my understanding is limited, too, but then I am clear in that, and not claiming to be a renowned expert in their protocol. All I can do is go from their freely available material. FTP, by contrast, is a fully documented standard protocol.)

  • Accept uploads.
    • The description of “How BDS works” demonstrates only the manual preparation of a file and sending it to a recipient.
    • To allow two-way transfers, it appears that BDS would require each end to host its own BDS server. FTP requires only one FTP server, for upload and/or download. Each party could maintain its own FTP server, but it isn’t required.
  • Automated, or bulk transfers.
    • Again, the description of “How BDS works” shows emailing a recipient for each file. Are you going to want to do that for a dozen files? Sure, you could zip them up, but the option of downloading an entire directory, or selected files chosen by the recipient, seems to be one you shouldn’t ignore.
    • If your recipients need to transfer files that are continually being added to, do you want them to simply log on to an established account, and fetch those files (securely)? In the BDS model, that does not appear to be possible.
  • Transfer from regular folders.
    • BDS requires that you place the files to be fetched on a specialised server.
    • FTP servers access regular files from regular folders. If encryption is required, those folders and files can be encrypted using standard and well-known folder and file-level encryption, such as the Encrypting File System (EFS) supplied for free in Windows, or other solutions for other platforms.
  • Reduce transmission overhead
    • FTP transmissions are slightly smaller than their equivalent HTTP transmissions, and the same is true of FTPS compared to HTTPS.
    • When you add in the email roundtrip that’s involved, and the human overhead in preparing the file for transmission, that’s a lot of time and effort that could be spent in just transferring the file.
  • Integration with existing identity and authorisation management technology.
    • Because FTP relies on operating system authentication and access control methods, you can use exactly the same methods to control access to transferred files as you do for regular files. It does not seem as though BDS is tied into the operating system in the same way.
    • [FTP also usually offers the ability to manage authentication and access control separately from the operating system, should you need to do so]
  • Shared files
    • If your files need to be shared between a hundred known recipients, BDS appears to require you to create one download for each file, for each recipient. That’s a lot of overhead.
    • FTP, by comparison, requires you to place the files into a single location that all these users can access. Then send an email to the users (you probably use a distribution list) telling them to go fetch the file.
    • Similarly, you can use your own FTP server to host a secure shared file folder for several of your customers. BDS does not offer that feature.
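The bulk-transfer point above reduces to a short script with any standard FTP client library. A sketch using Python’s ftplib – host, credentials and directory are hypothetical:

```python
from ftplib import FTP_TLS

def fetch_all(host, user, password, remote_dir="."):
    """Fetch every file in one remote directory over explicit FTPS."""
    with FTP_TLS(host) as ftps:
        ftps.login(user, password)   # FTP_TLS secures the channel before login
        ftps.prot_p()                # encrypt the data connections too
        ftps.cwd(remote_dir)
        for name in ftps.nlst():     # the recipient just logs on and fetches
            with open(name, "wb") as f:
                ftps.retrbinary(f"RETR {name}", f.write)
```

No per-file emails, no manual preparation – new files appear in the folder, and the next run picks them up.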

So, all things told, I think that Biscom’s spam was not only unsolicited and unwanted, but it’s also at the very least incorrect and uninformed. The whitepaper they host at http://www.biscomdeliveryserver.com/collateral/wp/BDS-wp-aberdeen-200809.pdf repeats many of these incorrect statements, attributing them to Carol Baroudi of “The Aberdeen Group”. What they don’t link to is a later paper from The Aberdeen Group’s Vice President, Derek Brink, which is tagged as relating to FTPS and FTP – hopefully this means that Derek Brink is a little better informed, possibly as an indirect result of having Ipswitch as one of the paper’s sponsors. I’d love to read the paper, but at $400, it’s a little steep for a mere blog post.

So, if you’ve been using FTP, and want to move to a more secure file transfer method, don’t bother with the suggestions of a poorly-informed spammer. Simply update your FTP infrastructure if necessary, to a more modern and secure version – then configure it to require SSL / TLS encryption (the FTP over Kerberos implementation documented in RFC 2228, while secure, can have reliability issues), and to require encrypted authentication.

You are then at a stage where you have good quality encrypted and protected file transfer services, often at little or no cost on top of your existing FTP infrastructure, and without having to learn and use a new protocol.

Doubtless there are some features of BDS that make it a winning solution for some companies, but I don’t feel comfortable remaining silent while knowing that it’s being advertised by comparing it ineptly and incorrectly to my chosen favourite secure file transport mechanism.

Sometimes It Seems Like Unix(*) Needs to Learn from Windows

(*) By “Unix”, I mean Linux, Unix, AIX, OS/X, and similar flavours.


Way back when, about twenty or so years ago, I was a Unix admin, and a Unix developer. I had to be both, because I was the only person in the company who could spell Unix.


My favourite game was to go along to presentations for Microsoft Windows ‘new features’ and say “Oh, but hasn’t Unix had that for the last twenty years?”


Sure enough, there were countless things that Windows users and developers were just discovering (TCP/IP, shared libraries, multiple sessions on the same computer) that had been in Unix for some time. Linux had yet to get a mention, but as I’ve moved firmly into the Windows world, and left Unix behind, I’ve pretty much assumed that technologically speaking, if Windows has it, Unix and the like must also have the same functionality.


As I re-engage with Unix and Linux developers and IT professionals in recent months, though, I can see that there are some areas – particularly in security – where Windows is far ahead of the *x operating systems. Here’s a few:


Where’s my EFS?
EFS, the Encrypting File System, is one of Windows’ best-kept secrets. It’s not really a secret, of course, but it acts like one – so few people are willing to use it, mostly because they’re scared of it or don’t understand it.
EFS allows users (under administrative control and with appropriate recovery measures in place) to choose files to encrypt, and to declare which other users can access the encrypted files.
EFS-encrypted files are encrypted on disk, and the keys cannot be broken simply by mounting an offline attack, because the key for each file is encrypted with users’ public keys, and the private keys are held securely in the users’ certificate store.
What does *x have in response? Whole-disk encryption from third-party products (OK, Windows has BitLocker and any number of third-party products too). EFS protects individual files, and is far more fine-grained than the ‘all or nothing’ access of WDE (or FDE, Full Disk Encryption, if you prefer).
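The per-file keying described above is a classic envelope scheme: each file gets its own key (EFS calls it the File Encryption Key), and that key is wrapped separately for every user allowed to open the file. This stdlib-only Python toy substitutes a symmetric master key for the user’s real RSA key pair, purely to show the shape of the scheme – it is not real cryptography:

```python
import hashlib, secrets

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher (illustration only -- NOT real cryptography)."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Envelope pattern: one key per file, wrapped per authorised user.
user_master_key = secrets.token_bytes(32)  # stands in for the user's key pair
file_key = secrets.token_bytes(32)

ciphertext = stream_xor(file_key, b"payroll figures")
wrapped_key = stream_xor(user_master_key, file_key)  # stored with the file

# Decryption: unwrap the file key with the user's key, then decrypt the file.
recovered = stream_xor(stream_xor(user_master_key, wrapped_key), ciphertext)
assert recovered == b"payroll figures"
```

Because only the small wrapped key differs per user, granting another user access means wrapping the file key once more – the file’s bulk data is never re-encrypted.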
Single Certificate Store
This isn’t really a “single” store so much as a predictable location for the certificate store. If you want to read a user’s certificates and keys, you know where to find them (although you generally only have access if you are the user in question). Private keys from the certificate store are protected using the DPAPI, appropriately protecting them (apart from some key recovery scenarios, you have to log in using the password associated with the keys).
Similarly, certificates and keys belonging to the system and its service accounts are also in predictable locations.
This makes life easy for tools that need to scan for certificates due to expire.
Where are certificates and keys stored in *x? All over the place. Generally in “PEM” files, usually (but not always) in the same directory as the application that installed them.
How are these private keys protected in *x? There’s sometimes a password to open up the private key from the PEM file, and usually the PEM file has a restrictive access mask on it. [Read further for more problems with this]
Single SSL Library
It’s not uncommon to see several instances of OpenSSL installed on any particular system, whether it’s *x or Windows, if the system runs applications that use OpenSSL.
Windows developers, of course, can simply use the SSL API built in to Windows (CryptoAPI/CAPI and SChannel), and not have to worry about shipping an SSL library with their application, or keeping up with new versions as they come out, or tracking down customers and notifying them of updates to address security flaws (such as the Debian Linux key generation flaw I posted about a while ago).
Single SSL Configuration
If I want to disable SSL v2, or ciphers with fewer than 128 bits, on Windows I can change a few registry settings and know that I’ve fixed every application that uses SChannel. I can even do that remotely, with remote registry editing from a script or group policy tattooing the registry.
To do the same for OpenSSL, it seems that I have to find every application that uses OpenSSL and change the configuration files there. 
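For illustration, this is roughly what that per-application configuration looks like in one OpenSSL-based stack (Python’s ssl module). Every OpenSSL application carries its own copy of something like this, which is exactly the maintenance problem:

```python
import ssl

# This policy lives inside one application; the machine's other
# OpenSSL-based applications each have their own, configured separately.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv2/v3, TLS 1.0/1.1
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5")   # drop anonymous/weak ciphers
```

There is no registry setting or group policy to flip here – you change it application by application, configuration file by configuration file.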
Data Protection API and configuration file protection
This is the one that really started me on this article.
How do you store a password in a configuration file?
Yes, the ‘right’ security answer is “you don’t”, but that’s naive. The fact is that there are many instances wherein you have to store a password – to access and authenticate to a remote application, or (if you’re using OpenSSL) to open a password-protected PEM or PFX file in order to read out the private key.
On Windows, the Patterns and Practices team have documented how to do this – basically, you use the DPAPI to encrypt the password into the config file, and again to decrypt it back out – and your DPAPI keys are encrypted by your master key, which is derived from your password. The end result is that you can’t get those DPAPI keys without the password.
What do the *x platforms have?
“Put the password in plain text, and protect it with a restrictive access mask”, is what I’m told. And in a search, I couldn’t find anything better being recommended. OK, one person recommended encoding the password with base64, but that’s hardly a security measure.
Jesper brought up the excellent question of “how is it different?” – in the *x system, the password is marked as only being accessible to the correct user. I was about to answer him when Steve F spoke up for me, and noted that in the DPAPI case, you have to read the file, and then an API has to be called to decrypt the password; in the *x case, you simply have to read the file. There are many many more exploits that allow the reading of a file under privileged rights than there are exploits that allow the execution of code.
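Both of the *x approaches mentioned above take only a few lines of Python to demonstrate, and the demonstration makes the weakness plain: base64 is reversible by anyone, and a restrictive mode protects the plain text only until something can read files as that user:

```python
import base64, os, stat, tempfile

secret = "s3cret-app-password"  # hypothetical application password

# "Encoding" with base64 is trivially reversible -- obfuscation, not security.
encoded = base64.b64encode(secret.encode()).decode()
assert base64.b64decode(encoded).decode() == secret

# The traditional *x protection: write the plain text, restrict the mode bits.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write(secret)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0o600: owner read/write only
# Any exploit that can read files as this user (or as root) reads the password.
```

Contrast the DPAPI case: reading the file yields ciphertext, and an attacker must additionally execute code in the user’s context to call the decryption API.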
Patch Management and Group Policy
Microsoft has done a really good job of implementing enterprise-level management features into their operating systems, from Group Policy and WMI to WSUS and other update management tools.
The *x systems I’ve seen seem to be built from the perspective that each system has its own attendant administrator, who is only too happy to manually deploy patches or tweak settings in line with some policy on a scrap of paper or post-it.

Maybe I’m missing some huge advances, and maybe some of these issues are resolved with a third-party tool – but then, maybe that’s part of the problem too. All of the above are a part of the operating system in Windows, and can be relied on to exist by developers, and their use by applications can be expected by IT professionals.


[Disclaimer: Yes, I know there are still areas where Microsoft needs to learn from Unix and Linux, and perhaps it’d be good if you’d educate me on those, too. This isn’t a “Windows is better than *X” debate, it’s a “hey, even if you think *X is better than Windows, here are some areas *X needs improving in”.]


Edit: There have been some excellent comments posted overnight in response to this article, and as I had hoped, I am mostly still ‘in the dark’ about what Linux and Unix-like systems offers. I’ll be looking at these as I have time, and responding when I can. For now, just let me say that I am impressed to see so much technical content in the responses, and so little of the “fanboy” behaviour that often characterises these discussions.

In Defence of the Self-Signed Certificate

Recently I discussed using EFS as a simple, yet reliable, form of file encryption. Among the doubts raised was the following from an article by fellow MVP Deb Shinder on EFS:

EFS generates a self-signed certificate. However, there are problems inherent in using self-signed certificates:

  • Unlike a certificate issued by a trusted third party (CA), a self-signed certificate signifies only self-trust. It’s sort of like relying on an ID card created by its bearer, rather than a government-issued card. Since encrypted files aren’t shared with anyone else, this isn’t really as much of a problem as it might at first appear, but it’s not the only problem.
  • If the self-signed certificate’s key becomes corrupted or gets deleted, the files that have been encrypted with it can’t be decrypted. The user can’t request a new certificate as he could do with a CA.

Well, she’s right, but that only gives part of the picture, and it verges on recommending outright that self-signed certificates be treated as completely untrustworthy. Certainly that’s how self-signed certificates are often viewed.

Let’s take the second item first, shall we?

“Request a new certificate” isn’t quite as simple as all that. If the user has deleted, or corrupted, the private key, and didn’t save a copy, then requesting a new certificate will merely allow the user to encrypt new files, and won’t let them recover old files. [The exception is, of course, if you use something called “Key Recovery” at your certificate authority (CA) – but that’s effectively an automated “save a copy”.]

Even renewing a certificate changes its thumbprint, so to decrypt your old EFS-encrypted files, you should keep your old EFS certificates and private keys around, or use CIPHER to re-encrypt with current certificates.

So, the second point is dependent on whether the CA has set up Key Recovery – this isn’t a problem if you make a copy of your certificate and private key, onto removable storage. And keep it very carefully stored away.

As to the first point – you (or rather, your computer) already trust dozens of self-signed certificates. Without them, Windows Update would not work, nor would many of the secured web sites that you use on a regular basis.

Whuh?

certmgr shows that all Trusted Root Certificates are self-signed.

Hey, look – they’ve all got the same thing in “Issued To” as they have in “Issued By”!

Yes, that’s right – every single “Trusted Root” certificate is self-signed!

If you’re new to PKI and cryptography, that’s going to seem weird – but a moment’s thought should set you at rest.

Every certificate must be signed. There must be a “first certificate” in any chain of signed certificates, and if that “first certificate” is signed by anyone other than itself, then it’s not the first certificate. QED.
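Chain validation, in other words, walks upward until it reaches a certificate that signed itself. A toy sketch with hypothetical names (real validation also checks signatures, validity dates and revocation):

```python
# Each certificate names its subject and its issuer; a root is any
# certificate that is its own issuer -- i.e. self-signed.
chain = [
    {"subject": "www.example.com",         "issuer": "Example Intermediate CA"},
    {"subject": "Example Intermediate CA", "issuer": "Example Root CA"},
    {"subject": "Example Root CA",         "issuer": "Example Root CA"},
]

def find_root(chain):
    """Return the first certificate whose Issued To matches its Issued By."""
    return next(c for c in chain if c["subject"] == c["issuer"])

assert find_root(chain)["subject"] == "Example Root CA"
```

The walk has to terminate somewhere, and wherever it terminates, subject equals issuer – which is precisely what certmgr shows for every Trusted Root certificate.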

The reason we trust any non-root certificate is that we trust the issuer to choose to sign only those certificates whose identity can be validated according to their policy.

So, if we can’t trust these trusted roots because of who they’re signed by, why should we trust them?

The reason we trust self-signed certificates is that we have a reason to trust them – and that reason is outside of the certificate and its signature. The majority (perhaps all) of the certificates in your Trusted Root Certificate Store come from Microsoft – they didn’t originate there, but they were distributed by Microsoft along with the operating system, and updates to the operating system.

You trusted the operating system’s original install disks implicitly, and that trust is where the trust for the Trusted Root certificates is rooted. That’s a trust outside of the certificate chains themselves.

So, based on that logic, you can trust the self-signed certificates that EFS issues in the absence of a CA, only if there is something outside of the certificate itself that you trust.

What could that be?

For me, it’s simple – I trust the operating system to generate the certificate, and I trust my operational processes that keep the private key associated with the EFS certificate secure.

There are other reasons to be concerned about using the self-signed EFS certificates that are generated in the absence of a CA, though, and I’ll address those in the next post on this topic.