I tweeted this the other day, after reading about Microsoft's Project Bletchley:
With Microsoft releasing "blockchain as a service", how long till privacy rules suggest using blockchains to track data provenance?
- Alun Jones (@ftp_alun) June 16, 2016
I've been asked how I can tweet something as specific as this, when in a subsequent tweet, I noted:
[I readily admit I didn’t understand the announcement, or what it’s /supposed/ to be for, but that didn’t stop me thinking about it]
- Alun Jones (@ftp_alun) June 17, 2016
Despite having a reasonably strong background in the use of crypto, and a little dabbling in the analysis of crypto, I don't really follow the whole "blockchain" thing.
So, here's my attempt to explain what little I understand of blockchains and their potential uses, with an open invitation to come and correct me.
The most widely-known use of blockchains is that of Bit Coin and other "digital currencies".
Bit Coins are essentially numbers with special properties that make them progressively harder to find as time goes on. Because they are scarce and getting scarcer, it becomes possible for people of a certain mindset to ascribe a "value" to them, much as we assign value to precious metals or gemstones aside from their mere attractiveness. [Bit Coins have no intrinsic attractiveness as far as I can tell.] That there is no actual intrinsic value leads me to refer to Bit Coin as a kind of shared madness, in which everyone who believes there is value to the Bit Coin shares this delusion with many others, and can use that shared delusion as a basis for trading other valued objects. Of course, the same kind of shared madness is what makes regular financial markets and country-run money work, too.
Because of this value, people will trade them for other things of value, whether that's shiny rocks, or other forms of currency, digital or otherwise. It's a great way to turn traceable goods into far less-traceable digital commodities, so its use for money laundering is obvious. Its use for online transactions should also be obvious, as it's an irrevocable and verifiable transfer of value, unlike a credit card, which many vendors will tell you from their own experience can be stolen, and transactions can be revoked as a result, whether or not you've shipped valuable goods.
What makes this an irrevocable and verifiable transfer is the principle of a "blockchain", which is recorded in a distributed ledger. Anyone can, at any time, at least in theory, download the entire history of ownership of a particular Bit Coin, and verify that the person who's selling you theirs is truly the current and correct owner of it.
I'm going to assume you understand how digital signatures work at this point, because that's a whole 'nother explanation.
Remember that a Bit Coin starts as a number. It could be any kind of data, because all data can be represented as a number. That's important, later.
The first owner of that number signs it, and then distributes the number and signature out to the world. This is the "distributed ledger". For Bit Coins, the "world" in this case is everyone else who signs up to the Bit Coin madness.
When someone wants to buy that Bit Coin (presumably another item of mutually agreed similar value exchanges hands, to buy the Bit Coin), the seller signs the buyer's signature of the Bit Coin, acknowledging transfer of ownership, and then the buyer distributes that signature out to the distributed ledger. You can now use the distributed ledger at any time to verify that the Bit Coin has a story from creation and initial signature, unbroken, all the way up to current ownership.
I'm a little flakey on what, other than a search in the distributed ledger for previous sales of this Bit Coin, prevents a seller from signing the same Bit Coin over simultaneously to two other buyers. Maybe that's enough - after all, if the distributed ledger contains a demonstration that you were unreliable once, your other signed Bit Coins will presumably have zero value.
So, in this perspective, a blockchain is simply an unbroken record of ownership or provenance of a piece of data from creation to current owner, and one that can be extended onwards.
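To make that concrete, here's a minimal sketch of such a chain in Python, using Ed25519 signatures from the third-party cryptography package. The record layout and function names are my own invention, not Bit Coin's actual transaction format; it just shows the shape of the idea: each transfer names the new owner, is signed by the current owner, and is chained to the previous record by its hash.

# A sketch only: my own record layout, not a real Bit Coin transaction format.
# Requires the third-party 'cryptography' package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def public_bytes(private_key):
    # The raw public key that gets published to the ledger, so anyone can verify.
    return private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

def record_hash(record):
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def transfer(asset, previous_record, seller_private_key, buyer_public_key_bytes):
    # The current owner signs the asset over to the buyer, chaining to the last record.
    body = {
        "asset": asset,
        "previous": record_hash(previous_record) if previous_record else None,
        "new_owner": buyer_public_key_bytes.hex(),
    }
    signature = seller_private_key.sign(json.dumps(body, sort_keys=True).encode())
    return {"body": body, "signature": signature.hex()}

def verify_chain(chain, creator_public_key_bytes):
    # Anyone holding the public ledger can walk it from creation to current owner.
    owner = creator_public_key_bytes
    previous = None
    for record in chain:
        body = record["body"]
        if body["previous"] != (record_hash(previous) if previous else None):
            return False        # the chain is broken
        try:
            Ed25519PublicKey.from_public_bytes(owner).verify(
                bytes.fromhex(record["signature"]),
                json.dumps(body, sort_keys=True).encode(),
            )
        except InvalidSignature:
            return False        # not signed by the then-current owner
        owner = bytes.fromhex(body["new_owner"])
        previous = record
    return True

A new owner would create a key pair with Ed25519PrivateKey.generate() and hand the public half to the seller, and the ledger grows by one record per sale. No secrets are needed to run verify_chain, which is the whole point of a distributed ledger; spotting the same record being "sold" twice is then a search over everyone's copy of it, which is the part I admitted to being flakey about above.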
In the world of financial use, of course, there are some disadvantages - the most obvious being that if I can make you sign a Bit Coin against your will, it's irrevocably mine. There is no overarching authority that can say "no, let's back up on that transaction, and say it never happened". This is also pitched as an advantage, although many Bit Coin owners have been quite upset to find that their hugely-valuable piles of Bit Coins are now in someone else's ownership.
With the above perspective in the back of my head, I read the Project Bletchley report.
I even looked at the pictures.
I still didn't really understand it, but something went "ping" in my head.
Maybe this is how C-level executives feel.
With Microsoft releasing "blockchain as a service", how long till privacy rules suggest using blockchains to track data provenance?
- Alun Jones (@ftp_alun) June 16, 2016
Here's my thought:
Businesses get data from customers, users, partners, competitors, outright theft and shenanigans.
Maybe in environments where privacy is respected, like the EU, blockchains could be an avenue by which regulators require companies to describe, and PROVE, where their data comes from, and that it was not acquired or used in an inappropriate manner?
When I give you my data, I sign it as coming from me, and sign that it's now legitimately possessed by you (I won't say "owned", because I feel that personal data is irrevocably "owned" by the person it describes). Unlike Bit Coin, I can do this several times with the same packet of data, or different packets of data containing various other information. That information might also contain details of what I'm approving you to do with that information.
This is the start of a blockchain.
When information is transferred to a new party, that transfer will be signed, and the blockchain can be verified at that point. Further usage restrictions can be added.
Finally, when an information commissioner wants to check whether a company is handling data appropriately, they can ask for the blockchains associated with data that has been used in various ways. That then allows the commissioner to verify whether reported use or abuse has been legitimately approved or not.
And before this sounds like too much regulatory intervention, it also allows businesses to verify the provenance of the data they have, and to determine where sensitive data resides in their systems, because if it always travels with its blockchain, it's always possible to find and trace it.
[Of course, if it travels without its blockchain, then it just looks like you either have old outdated software which doesn't understand the blockchain and needs to be retired, or you're doing something underhanded and inappropriate with customers' data.]
It even allows the revocation of a set of data to be performed - when a customer moves to another provider, for instance.
Yes, there's the downside of hugely increased storage requirements. Oh well.
Oh, and that revocation request on behalf of the customer, that would then be signed by the business to acknowledge it had been received, and would be passed on to partners - another blockchain.
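Sketching what a single link in such a provenance chain might carry (the field names are entirely my invention, and the signatures would be produced and checked exactly as in the Bit Coin sketch earlier), a regulator's check then becomes a walk along the chain:

# Hypothetical provenance entries; company names and fields are made up.
provenance = [
    {"from": "data subject", "to": "RetailerCo", "approved_uses": ["billing", "delivery"],
     "signature": "(signed by the data subject, as in the earlier sketch)"},
    {"from": "RetailerCo", "to": "CourierCo", "approved_uses": ["delivery"],
     "signature": "(signed by RetailerCo)"},
    {"from": "data subject", "to": "RetailerCo", "revoked": True,
     "signature": "(signed by the data subject)"},
]

def use_is_approved(chain, holder, use):
    # A regulator (or the business itself) walks the chain in order: a use is
    # legitimate only if it was granted to that holder and not revoked since.
    approved = False
    for entry in chain:
        if entry.get("revoked"):
            approved = False
        elif entry["to"] == holder and use in entry.get("approved_uses", []):
            approved = True
    return approved

# use_is_approved(provenance, "CourierCo", "marketing") -> False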
So, maybe I've misunderstood, and this isn't how it's going to be used, but I think it's an intriguing thought, and would love to hear your comments.
So, there was this tweet that got passed around the security community pretty quickly:
BlueCoat now has a CA signed by Symantec https://t.co/8OXmtpT6eX
Here’s how to untrust it https://t.co/NDlbqKqqld pic.twitter.com/mBD68nrVsD
- Filippo Valsorda (@FiloSottile) May 26, 2016
Kind of confusing and scary if you're not quite sure what this all means - perhaps clear and scary if you do.
BlueCoat manufactures "man in the middle" devices - sometimes used by enterprises to scan and inspect / block outbound traffic across their network, and apparently also used by governments to scan and inspect traffic across the network.
The first use is somewhat acceptable (enterprises can prevent their users from distributing viruses or engaging in illicit behaviour from work computers, which the enterprises quite rightly believe they own and should control), but the second use is generally not acceptable, depending on how much you trust your local government.
Filippo helpfully gives instructions on blocking this from OSX, and a few people in the Twitter conversation have asked how to do this on Windows.
Don’t do this on a machine you don’t own or manage – you may very well be interfering with legitimate interference in your network traffic. If you’re at work, your employer owns your computer, and may intercept, read and modify your network traffic, subject to local laws, because it’s their network and their computer. If your government has ruled that they have the same rights to intercept Internet traffic throughout your country, you may want to consider whether your government shouldn’t be busy doing other things like picking up litter and contributing to world peace.
As with most things on Windows, there are multiple ways to do this. Here's one, which can be followed either by regular users or administrators. It's several steps, but it's a logical progression, and will work for everyone.
Step 1. Download the certificate. Really, literally, follow the link to the certificate and click "Open". It'll pop up as follows:
Step 2. Install the certificate. Really, literally, click the button that says "Install Certificate…". You'll see this prompt asking you where to save it:
Step 3. If you're a non-administrator, and just want to untrust this certificate for yourself, leave the Store Location set to "Current User". If you want to set this for the machine as a whole, and you're an administrator, select Local Machine, like this:
Step 4: Click Next, to be asked where you're putting the certificate:
Step 5: Select "Place all certificates in the following store":
Step 6: Click the "Browse…" button to be given choices of where to place this certificate:
Step 7: Don't select "Personal", because that will explicitly trust the certificate. Scroll down and you'll see "Untrusted Certificates". Select that and hit OK:
Step 8: You're shown the store you plan to install into:
Step 9: Click "Next" - and you'll get a final confirmation option. Read the screen and make sure you really want to do what's being offered - it's reversible, but check that you didn't accidentally install the certificate somewhere wrong. The only place this certificate should go to become untrusted is in the Untrusted Certificates store:
Step 10: Once you're sure you have it right, click "Finish". You'll be congratulated with this prompt:
Step 11: Verification. Hit OK on the "import was successful" box. If you still have the Certificate open, close it. Now reopen it, from the link or from the certificate store, or if you downloaded the certificate, from there. It'll look like this:
The certificate hasn't actually been revoked, and you can open up the Untrusted Certificates store to remove this certificate so it's trusted again if you find any difficulties.
There are other methods to do this - if you're a regular admin user on Windows, I'll tell you the quicker way is to open MMC.EXE, add the Certificates Snap-in, select to manage either the Local Computer or Current User, navigate to the Untrusted Certificates store and Import the certificate there. For wide scale deployment, there are group policy ways to do this, too.
OK, OK, because you asked, here’s a picture of how to do it by GPO:
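If you'd rather not click through the wizard at all, the same result should be achievable from a command prompt with certutil, assuming you've saved the certificate to a file (the file name here is made up); the Untrusted Certificates store is named "Disallowed" internally. Treat these as a sketch rather than tested-on-every-Windows-version commands - the first line is for the current user only, the second is machine-wide and needs an elevated prompt:

certutil -user -addstore Disallowed bluecoat-cert.crt
certutil -addstore Disallowed bluecoat-cert.crt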
I hate when people ask me this question, because I inevitably respond with a half-dozen questions of my own, which makes me seem like a bit of an arse.
To reduce that feeling, because the questions don't seem to be going away any time soon, I thought I'd write some thoughts out.
Passwords are important objects - and because people naturally share IDs and passwords across multiple services, your holding on to a customer's / user's password means you are a necessary part of that user's web of credential storage.
It will be a monumental news story when your password database gets disclosed or leaked, and even more of a story if you've chosen a bad way of protecting that data. You will lose customers and you will lose business; you may even lose your whole business.
Take a long hard look at what you're doing, and whether you actually need to be in charge of that kind of risk.
If you are going to verify a user, you don't need encrypted passwords, you need hashed passwords. And those hashes must be salted. And the salt must be large and random. I'll explain why some other time, but you should be able to find much documentation on this topic on the Internet. Specifically, you don't need to be able to decrypt the password from storage, you need to be able to recognise it when you are given it again. Better still, use an acknowledged good password hashing mechanism like PBKDF2. (Note, from the "2", that it may be necessary to update this if my advice is more than a few months old.)
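As a minimal sketch of that hash-and-salt approach, here is what it might look like in Python with the standard library's PBKDF2 implementation; the iteration count is purely illustrative, so pick (and periodically revisit) one that suits your hardware:

import hashlib
import hmac
import os

ITERATIONS = 200000  # illustrative only; tune to your hardware and revisit over time

def store_password(password):
    salt = os.urandom(16)  # large, random, per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest    # store both alongside the account record

def check_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison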
Now, do not read the rest of this section - skip to the next question.
Seriously, what are you doing reading this bit? Go to the heading with the next question. You don't need to read the next bit.
<sigh/>
OK, if you are determined that you will have to impersonate a user (or a service account), you might actually need to store the password in a decryptable form.
First make sure you absolutely need to do this, because there are many other ways to impersonate an incoming user using delegation, etc., which don't require you to store the password.
Explore delegation first.
Finally, if you really have to store the password in an encrypted form, you have to do it incredibly securely. Make sure the key is stored separately from the encrypted passwords, and don't let your encryption be brute-forcible. A BAD way to encrypt would be to simply encrypt the password using your public key - sure, this means only you can decrypt it, but it means anyone can brute-force an encryption and compare it against the ciphertext.
A GOOD way to encrypt the password is to add some entropy and padding to it (so I can't tell how long the password was, and I can't tell if two users have the same password), and then encrypt it.
Password storage mechanisms such as keychains or password vaults will do this for you.
If you don't have keychains or password vaults, you can encrypt using a function like Windows' CryptProtectData, or its .NET equivalent, System.Security.Cryptography.ProtectedData.
[Caveat: CryptProtectData and ProtectedData use DPAPI, which requires careful management if you want it to work across multiple hosts. Read the API and test before deploying.]
[Keychains and password vaults often have the same sort of issue with moving the encrypted password from one machine to another.]
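If you have neither a keychain nor a vault available and have to roll this yourself, here's a rough sketch of the "pad it, then encrypt it" idea using the third-party cryptography package's Fernet construction, which already supplies a random IV so two users with the same password get different ciphertexts. Keeping the key somewhere other than the password database is still entirely your problem:

from cryptography.fernet import Fernet  # third-party 'cryptography' package

# The key must live somewhere other than the password database, e.g. a key vault:
# key = Fernet.generate_key()

def encrypt_password(key, password):
    # Pad to a fixed length so the ciphertext doesn't betray how long the password
    # was; Fernet adds a random IV, so equal plaintexts still encrypt differently.
    padded = password.encode().ljust(64, b"\0")
    return Fernet(key).encrypt(padded)

def decrypt_password(key, token):
    return Fernet(key).decrypt(token).rstrip(b"\0").decode()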
For .NET documentation on password vaults in Windows 8 and beyond, see: Windows.Security.Credentials.PasswordVault
For non-.NET on Windows from XP and later, see: CredWrite
For Apple, see documentation on Keychains
If you're protecting data in a business, you can probably tell users how strong their passwords must be. Look for measures that correlate strongly with entropy - how long is the password, does it use characters from a wide range (or is it just the letter 'a' repeated over and over?), is it similar to any of the most common passwords, does it contain information that is obvious, such as the user's ID, or the name of this site?
Maybe you can reward customers for longer passwords - even something as simple as a "strong account award" sticker on their profile page can induce good behaviour.
Length is mathematically more important to password entropy than the range of characters. An eight-character password chosen from 64 characters (less than three hundred trillion combinations - a number with 4 commas) is weaker than a 64-character password chosen from eight characters (a number of combinations with 19 commas in it).
An 8-character password taken from 64 possible characters is actually only as strong as a password twice as long and chosen from 8 characters - this means something like a complex password at 8 characters in length is as strong as the names of the notes in a couple of bars of your favourite tune.
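The arithmetic is easy to check; a few lines of Python (any calculator would do) make the point:

import math

charset_64_len_8 = 64 ** 8   # 281,474,976,710,656 - the "4 commas" number
charset_8_len_64 = 8 ** 64   # roughly 6.3e57 - the "19 commas" number
charset_8_len_16 = 8 ** 16   # twice the length at an eighth of the alphabet

print(math.log2(charset_64_len_8))   # 48.0 bits
print(math.log2(charset_8_len_64))   # 192.0 bits
print(math.log2(charset_8_len_16))   # 48.0 bits - matches the 8-from-64 password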
Allowing users to use password safes of their own makes it easier for them to use longer and more complex passwords. This means allowing copy and paste into password fields and, where possible, integrating with any OS-standard password management schemes.
Everything seems to default to sending a password reset email. This means your users' email address is equivalent to their credential. Is that strength of association truly warranted?
In the process to change my email address, you should ask me for my password first, or similarly strongly identify me.
What happens when I stop paying my ISP, and they give my email address to a new user? Will they have my account on your site now, too?
Every so often, maybe you should renew the relationship between account and email address - baselining - to ensure that the address still exists and still belongs to the right user.
Password hints push you dangerously into the realm of actually storing passwords. People use hints such as "The password is 'Oompaloompah'" - so, if you store password hints, you must encrypt them as strongly as if you were encrypting the password itself. Because, much of the time, you are. And see the previous rule, which says you want to avoid doing that if at all possible.
How do you enforce occasional password changes, and why?
What happens when a user changes their password?
What happens when your password database is leaked?
What happens when you need to change hash algorithm?
Last week, Apple released a security update for iOS, indicating that the vulnerability being fixed is one that allows SSL / TLS connections to continue even though the server should not be authenticated. This is how they described it:
Impact: An attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS
Description: Secure Transport failed to validate the authenticity of the connection. This issue was addressed by restoring missing validation steps.
Secure Transport is their library for handling SSL / TLS, meaning that the bulk of applications written for these platforms would not adequately validate the authenticity of servers to which they are connected.
Ignore "An attacker with a privileged network position" - this is the very definition of a Man-in-the-Middle (MITM) attacker, and whereas we used to be more blasé about this in the past, when networking was done with wires, now that much of our use is wireless (possibly ALL in the case of iOS), the MITM attacker can easily insert themselves in the privileged position on the network.
The other reason to ignore that terminology is that SSL / TLS takes as its core assumption that it is protecting against exactly such a MITM. By using SSL / TLS in your service, you are noting that there is a significant risk that an attacker has assumed just such a privileged network position.
Also note that "failed to validate the authenticity of the connection" means "allowed the attacker to attack you through an encrypted channel which you believed to be secure". If the attacker can force your authentication to incorrectly succeed, you believe you are talking to the right server, and you open an encrypted channel to the attacker. That attacker can then open an encrypted channel to the server to which you meant to connect, and echo your information straight on to the server, so you get the same behaviour you expect, but the attacker can see everything that goes on between you and your server, and modify whatever parts of that communication they choose.
So this lack of authentication is essentially a complete failure of your secure connection.
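For contrast, here's roughly what "validating the authenticity of the connection" looks like from a client's point of view - a sketch in Python's ssl module rather than Secure Transport, but the responsibilities are the same: validate the certificate chain, and check that the name on the certificate matches the host you asked for.

import socket
import ssl

def open_checked_tls(host, port=443):
    context = ssl.create_default_context()   # loads trusted roots, enables both checks
    context.check_hostname = True             # already the default; shown for emphasis
    context.verify_mode = ssl.CERT_REQUIRED   # chain must validate to a trusted root
    sock = socket.create_connection((host, port))
    # If either check fails, wrap_socket raises an SSL error instead of quietly
    # handing the attacker an "encrypted" channel.
    return context.wrap_socket(sock, server_hostname=host)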
As always happens when a patch is released, within hours (minutes?) of the release, the patch has been reverse engineered, and others are offering their description of the changes made, and how they might have come about.
In this case, the reverse engineering was made easier by the availability of open source copies of the source code in use. Note that this is not an intimation that open source is, in this case, any less secure than closed source, because the patches can be reverse engineered quickly - but it does give us a better insight into exactly the code as it's seen by Apple's developers.
if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    goto fail;
Yes, that's a second "goto fail", which means that the last "if" never gets called, and the failure case is always executed. Because of the condition before it, however, the "fail" label gets executed with "err" set to 0.
So, of course, the Internet being what it is, the first reaction is to laugh at the clowns who made such a simple mistake, that looks so obvious.
T-shirts are printed with "goto fail; goto fail;" on them. Nearly 200 have been sold already (not for me - I don't generally wear black t-shirts).
This is SSL code. You don't get let loose on SSL code unless you're pretty smart to begin with. You don't get to work as a developer at Apple on SSL code unless you're very smart.
Clearly "be smart" is already in evidence.
There is a possibility that this is too much in evidence - that the arrogance of those with experience and a track record may have led these guys to avoid some standard protective measures. The evidence certainly fits that view, but then many developers start with that perspective anyway, so in the spirit of working with the developers you have, rather than the ones you theorise might be possible, let's see how to address this issue long term:
OK, so it's considered macho not to rely on an IDE. I've never understood that. It's rather like saying how much you prefer pounding nails in with your bare fists, because it demonstrates how much more of a man you are than the guy with a hammer. It doesn't make sense when you compare how fast the job gets done, or the silly and obvious errors that turn up clearly when the IDE handles your indenting, colouring, and style for you.
Yes, colouring. I know, colour-blind people exist - and those people should adjust the colours in the IDE so that they make sense. Even a colour-blind person can get shade information to help them. I know syntax colouring often helps me spot when an XSS injection is just about ready to work, when I would otherwise have missed it in all the surrounding garbage of HTML code. The same is true when building code: you can spot when keywords are being interpreted as values, when string delimiters are accidentally unescaped, etc.
The same is true for indentation. Indentation, when it's produced by your IDE based on parsing your code, rather than by yourself pounding the space bar, is a valuable indication of program flow. If your indentation doesn't match control flow, it's because you aren't enforcing indentation with an automated tool.
Your IDE and your check-in process are a great place to enforce style standards to ensure that code is not confusing to the other developers on your team - or to yourself.
A little secret - one of the reasons I'm in this country in the first place is that I sent an eight-page fax to my bosses in the US, criticising their programming style and blaming (rightly) a number of bugs on the use of poor and inconsistent coding standards. This was true two decades ago using Fortran, and it's true today in any number of different languages.
The style rule that was missed in this case: put braces around all your conditionally-executed statements.
I have other style recommendations that have worked for me in the past - meaningful variable names, enforced indenting, maximum level of indenting, comment guidelines, constant-on-the-left of comparisons, don't include comparisons and assignments in the same line, one line does one thing, etc., etc.
Make sure you back the style requirements with statements as to what you are trying to do with the style recommendation. "Make the code look the same across the team" is a good enough reason, but "prevent incorrect flow" is better.
gcc has the option '-Wunreachable-code'.
gcc disabled the option in 2010.
gcc silently disabled the option, because they didn't want anyone's build to fail.
This is not (IMHO) a smart choice. If someone has a warning enabled, and has enabled the setting to produce a fatal error on warnings, they WANT their build to fail if that warning is triggered, and they WANT to know when that warning can no longer be relied upon.
So, without a warning on unreachable code, you're basically screwed when it comes to control flow going where you don't want it to.
And of course there's the trouble that's caused when you have dozens and dozens of warnings, so warnings are ignored. Don't get into this state - every warning is a place where the compiler is confused enough by your code that it doesn't know whether you intended to do that bad thing.
Let me stress - if you have a warning, you have confused the compiler.
This is a bad thing.
You can individually silence warnings (with copious comments in your code, please!) if you are truly in need of a confusing operation, but for the most part, it's a great saving on your code cleanliness and clarity if you address the warnings in a smart and simple fashion.
The compiler has an optimiser.
It's really good at its job.
It's better than you are at optimising code, unless you're going to get more than a 10-20% improvement in speed.
Making code shorter in its source form does not make it run faster. It may make it harder to read. For instance, this is a perfectly workable form of strstr:
const char * strstr(const char *s1, const char *s2)
{
return (!s1||!s2||!*s2)?s1:((!*s1)?0:((*s1==*s2&&s1==strstr(s1+1,s2+1)-1)?s1:strstr(s1+1,s2)));
}
Can you tell me if it has any bugs in it?
What's its memory usage? Processor usage? How would you change it to make it work on case-insensitive comparisons? Does it overflow buffers?
Better still: does it compile to smaller or more performant code, if you rewrite it so that an entry-level developer can understand how it works?
Now go and read the implementation from your CRT. It's much clearer, isn't it?
Releasing the patch on Friday for iOS and on Tuesday for OS X may have actually been the correct move - but it brings home the point that you should release patches when you maximise the payoff between having your customers patch the issue and having your attackers reverse engineer it and build attacks.
Where is the security announcement at Apple? I go to apple.com and search for "iOS 7.0.6 security update", and I get nothing. It'd be really nice to find the bulletin right there. If it's easier to find your documentation from outside your web site than from inside, you have a bad search engine.
People who know me may have the impression that I hate Apple. It's a little more nuanced than that.
I accept that other people love their Apple devices. In many ways, I can understand why.
I have previously owned Apple devices - and I have tried desperately to love them, and to find why other people are so devoted to them. I have failed. My attempts at devotion are unrequited, and the device stubbornly avoids helping me do anything useful.
Instead of a MacBook Pro, I now use a ThinkPad. Instead of an iPad (remember, I won one for free!), I now use a Surface 2.
I feel like Steve Jobs turned to me and quoted Dr Frank N Furter: "I didn't make him for you."
So, no, I don't like Apple products FOR ME. I'm fine if other people want to use them.
This article is simply about a really quick and easy example of how simple faults cause major errors, and what you can do, even as an experienced developer, to prevent them from happening to you.
So, you've probably heard about the recent flap concerning a Dutch Certificate Authority, DigiNotar, who was apparently hacked into, allowing the hackers to issue certificates for sites such as Yahoo, Mozilla and Tor.
I've been reading a few comments on this topic, and one thing just seems to stick out like a sore thumb.
DigiNotar's servers issued over 200 fraudulent certificates. These certificates were revoked - but, as with all certificate revocations, you can't really get a list of the names related to those revoked certificates to go back and see which sites you visited recently that you might want to reconsider. [You can only check to see if a certificate you're offered matches one that was revoked.]
What behaviour would you reconsider on recently visited sites? Well, I'd start by changing my passwords at those sites, at the very least, perhaps even checking to make sure nobody had used my account in my stead.
What does stick out is that DigiNotar's own certificate was removed from, well, just about everyone's list of trusted root Certificate Authorities, once it was discovered that a fraudulent certificate in the name of *.google.com had been issued, and had not yet been revoked.
Yeah, given the title of my blog posting, I'm sure you could guess that this was the thing that I was concerned about.
So, why is Google so special?
I'm not sure I buy that DigiNotar's removal from the trusted certificate list was simply because they failed to find one fraudulently issued certificate. It seems like, if that fraudulent certificate was for Joe Schmoe Electrical Repair, it would just have been revoked like all of the other certificates.
Removing a CA from the trusted list, after all, is pretty much going to kill that CA - every certificate ever issued by them will suddenly fail. All of their customers will have to install a new certificate, and what's the chance those customers will go back to the CA that caused them a sudden outage to their secure web site?
So it's not something that companies like Google, Microsoft, Mozilla, etc. would do at the drop of a hat.
It certainly seems like Google is special.
There is an argument I would buy, but no one is making it. It goes something like this:
"Back in July, when we first discovered the fraudulent certificates, we had no evidence that anyone was using them in the wild, and the CRL publication schedule allowed us to quietly and easily render unusable the certificates we had discovered. Anyone visiting a fraudulent web site would simply have seen the usual certificate error.
"Then, when we discovered in August that there was still one undiscovered certificate, and it was being used in the wild, it was not appropriate to revoke the certificate, because the CRL publishing schedule wasn't going to bring it to people's desks in time to prevent them from being abused. So, we had to look for other ways to prevent people from being abused by this certificate.
"We could have trusted to OCSP, but it's unlikely that the fraudulent certificate pointed to a valid OCSP server. Besides, the use of a fraudulent certificate pretty much requires that you are a man-in-the-middle and can redirect your target to any site you like.
"We could have added this certificate to the 'untrusted' certificate list, but only Microsoft has a way to quickly publish that - the other browser and app vendors have to release new versions of their software, because they have a hard-coded untrusted certificates list.
"And maybe there's another certificate - or pile of certificates - that we missed.
"So, in the interests of securing the Internet, and at the risk of adversely affecting valid customers, we chose to remove this one certificate authority from everyone's list of trusted roots."
I've indented that as if it's a quote, but as I said, this is an argument that no one is making. So it's just a fantasy quote.
Is there another possible argument I might be missing, but willing to accept?
Quite some time ago, my wife was very sneaky. Oh, she's sneaky again and again, but this is the piece of sneakiness that is appropriate for this post.
I logged on to woot.com one day, as I often do, and saw that there was a 30GB Zune for sale - refurbished, and quite a bit cheaper than most places had it for sale, but still more than I could plonk down without blinking.
I told my wife about it, and she told me that no, I was right, we couldn't really afford it even at that price.
Then, months later, I found that my birthday present was a 30GB Zune - the very one from woot that she said we couldn't afford.
Ever since then, I've been a strong fan of Zune and woot alike.
The other day, though, it dawned on me that I could use my Zune (now I have a Zune HD 32GB) to keep up with woot's occasional "woot-off" events, where they proceed throughout the day to offer several deals. Unfortunately, I can't actually buy anything from woot on the Zune.
I couldn't figure this out for a while, and assumed that it was simply a lack of Flash support.
It's not immediately obvious that there's a difference between the Zune having no Flash support, and the iPhone having no Flash support.
But there is - and it's a little subtle.
The Zune doesn't have Flash support because Adobe haven't built it.
The iPhone doesn't have Flash support because Apple won't let Adobe build it.
I did a little experimenting, and it's not that woot requires Flash.
I tried to log on directly to the account page at https://sslwww.woot.com/Member/YourAccount.aspx (peculiar, that: the URL says "Your Account", but it's my account, not yours, that I see there. That's why you shouldn't use personal pronouns in folder names).
That failed with a cryptic error - "Can't load the page you requested. OK"
No, it's not actually OK that you can't load the page, but thanks for telling me what the problem was.
Oh, that's right, you didn't, you just told me "failed". Takes me right back to the days of "Error 4/10".
The best I can reckon is that, since the Zune can visit other SSL sites, and other browsers have no problem with this SSL site, the Zune simply doesn't have trust in the certificate chain.
That should be easy to fix: all I have to do on my PC, or on any number of web browsers, is to add the site's root certificate from its certificate chain to my Trusted Root store.
Sadly, I can find no way to do this for my Zune. So, no woot.
I think this would help - for a start, it would mean that users could add web sites that were previously unavailable to them, including test web sites that they might be working on, which are supported by self-signed test certificates.
But more than that, adding a new root certificate to the trusted root certificate store on the Zune is a vital feature for another piece of functionality that people have been begging for. Without adding a root certificate, it is often impossible to support WPA2 Enterprise wireless mode. So, the "add certificate to my Zune's Trusted Root store" feature would be a step toward providing WPA2 Enterprise support.
I'm not sure that the interface would have to be on the Zune itself - but perhaps the Zune could stock up failed certificate matches to pass to the Zune software, and then ask the operator of the Zune software at the next Sync, "do you want to trust these certificates to enable browsing to these sites?"
Similarly, for the WPA Enterprise mode, it could ask the Zune software user "do you want to connect to this WPA Enterprise network in future?"
Hidden by the smoke and noise of thirteen (13! count them!) security bulletins, with updates for 26 vulnerabilities and a further 4 third-party ActiveX Killbits (software that other companies have asked Microsoft to kill because of security flaws), we find the following, a mere security advisory:
Microsoft Security Advisory (977377): Vulnerability in TLS/SSL Could Allow Spoofing
It's been a long time coming, this workaround - which disables TLS / SSL renegotiation in Windows, not just IIS.
Disabling renegotiation in IIS is pretty easy - you simply disable client certificates or mutual authentication on the web server. This patch gives you the ability to disable renegotiation system-wide, even in the case where the renegotiation you're disabling is on the client side. I can't imagine for the moment why you might need that, but when deploying fixes for symmetrical behaviour, it's best to control it using switches that work in either direction.
The long-term fix is yet to arrive - and that's the creation and implementation of a new renegotiation method that takes into account the traffic that has gone on before.
To my mind, even this is a bit of a concession to bad design of HTTPS, in that HTTPS causes a "TOC/TOU" (Time-of-check/Time-of-use) vulnerability, by not recognising that correct use of TLS/SSL requires authentication and then resource request, rather than the other way around. But that's a debate that has enough clever adherents on both sides to render any argument futile.
Suffice it to say that this can be fixed most easily by tightening up renegotiation at the TLS layer, and so that's where it will be fixed.
I'll fall back to my standard answer to all questions: it depends.
If your servers do not use client auth / mutual auth, you don't need this patch. Your server simply isn't going to accept a renegotiation request.
If your servers do use client authentication / mutual authentication, you can either apply this patch, or you can set the earlier available SSLAlwaysNegoClientCert setting to require client authentication to occur on initial connection to the web server.
One or other of these methods - the patch, or the SSLAlwaysNegoClientCert setting - will work for your application, unless your application strictly requires renegotiation in order to perform client auth. In that case, go change your application, and point its developers to documentation of the attack, so that they can see the extent of the problem.
Be sure to read the accompanying KB article to find out not only how to turn on or off the feature to disable renegotiation, but also to see which apps are, or may be, affected adversely by this change - to date, DirectAccess, Exchange ActiveSync, IIS and IE.
I would have to say that on the speed front, I would have liked to see Microsoft make this change far quicker. Disabling TLS/SSL renegotiation should not be a huge amount of code, and while it has some repercussions, and will impact some applications, as long as the change did not cause instability, there may be some institutions who would want to disable renegotiation lock, stock and barrel in a hurry out of a heightened sense of fear.
I'm usually the first to defend Microsoft's perceived slowness to patch, on the basis that they do a really good job of testing the fixes, but for this, I have to wonder if Microsoft wasn't a little over-cautious.
While I have no quibbles with the bulletin, there are a couple of statements in the MSRC blog entry that I would have to disagree with:
IIS 6, IIS 7, IIS 7.5 not affected in default configuration
Customers using Internet Information Services (IIS) 6, 7 or 7.5 are not affected in their default configuration. These versions of IIS do not support client-initiated renegotiation, and will also not perform a server-initiated renegotiation. If there is no renegotiation, the vulnerability does not exist. The only situation in which these versions of the IIS web server are affected is when the server is configured for certificate-based mutual authentication, which is not a common setting.
Well, of course - in the default setting on most Windows systems, IIS is not installed, so it's not vulnerable.
That's clearly not what they meant.
Did they mean "the default configuration with IIS installed and turned on, with a certificate installed"?
Clearly, but that's hardly "the default configuration". It may not even be the most commonly used configuration for IIS, as many sites escape without needing to use certificates.
Sadly, if I add "and mutual authentication enabled", we're only one checkbox away from the "default configuration" to which this article refers, and we're suddenly into vulnerable territory.
In other words, if you require client / mutual authentication, then the default configuration of IIS that will achieve that is vulnerable, and you have to make a decided change to non-default configuration (the SSLAlwaysNegoClientCert setting), in order to remain non-vulnerable without the 977377 patch.
The other concern I have is over the language in the section "Likelihood of the vulnerability being exploited in general case", which discusses only the original CSRF-like behaviour exploited under the initial reports of this problem.
There are other ways to exploit this, some of which require a little asinine behaviour on the part of the administrator, and others of which are quite surprisingly efficient. I was particularly struck by the ability to redirect a client, and make it appear that the server is the one doing the redirection.
I think that Eric and Maarten understate the likelihood of exploit - and they do not sufficiently emphasise that the chief reason this won't be exploited is that it requires a MITM (Man-in-the-Middle) attack to have already successfully taken place without being noticed. That's not trivial or common - although there are numerous viruses and bots that achieve it in a number of ways.
It's a little unclear on first reading the advisory whether this affects just IIS or all TLS/SSL users on the affected system. I've asked if this can be addressed, and I'm hoping to see the advisory change in the coming days.
I've rambled on for long enough - the point here is that if you're worried about the SSL / TLS client certificate renegotiation issues that I've reported on in posts 1, 2 and 3 of my series, by all means download and try this patch.
Be warned that it may kill behaviour your application relies upon - if that is the case, then sorry, you'll have to wait until TLS is fixed, and then drag your server and your clients up to date with that fix.
The release of this advisory is by no means the end of the story for this vulnerability - there will eventually be a supported and tested protocol fix, which will probably also be a mere advisory, followed by updates and eventually a gradual move to switch to the new TLS versions that will support this change.
This isn't a world-busting change, but it should demonstrate adequately that changes to encryption protocols are not something that can happen overnight - or even in a few short months.
In the spirit of "ten unavoidable security truths", and numerous other top-ten lists, here’s a list of ten key truths that apply to public / private key pairs:
Note that this list describes what happens when cryptography is working perfectly. There are other key facts that apply to broken cryptography and broken process:
Note that these lists are rather arbitrarily scoped to ten - there may be more important truths I've forgotten, or items I've included that aren't really so important.
Thanks to Thierry Zoller for mentioning me in the FTP section of his whitepaper summary of the TLS renegotiation attacks on various protocols. I'm glad he also spells my name right - you'd be surprised how many people get that wrong, although I'm sure Thierry gets his own share of people unable to spell his name.
The whitepaper itself contains some really nice and simple documentation of the SSL MITM renegotiation attack, and how it works. It's well worth reading if you're looking for some insight into how this works.
First, though, a couple of corrections to Thierry's summary - while he's working on revising his whitepaper, I'll post them here:
I think that where FTPS has problems that are thrown into sharp relief by the SSL MITM renegotiation attacks I've been discussing for a while now, it has had those problems before. If an attacker can monitor and modify the FTP control channel (because the client requested CCC and the server allowed it), the attacker can easily upload whatever data they like in place of the client's bona fide upload.
The renegotiation attack simply makes it easier for the attacker to hide the attack. It's the use of CCC which facilitates the MITM attack, far more than the renegotiation does.
To address one further comment I've heard with regard to SSL MITM attacks: I hear "yeah, but getting to be a man-in-the-middle is so difficult anyway, that even a really simple attack is unlikely". That's a true comment - for the most part, there is little chance of a man-in-the-middle attack occurring on the general Internet in a bulk situation. The "last mile" of home wireless, coffee bars and other public wireless hangouts, or the possibility of DNS hijacking, HOSTS file editing, broadband router hacking, or just plain viruses and worms, are the places where most man-in-the-middle entry points exist.
However, if you're going to assert that it's truly unlikely that an attacker can insert himself into your network stream, you basically have no reason whatever to use SSL / TLS - without a potential for that interception and modification of your traffic, there's really no need to authenticate it, encrypt it, or monitor its integrity along the path.
The fact that a protocol or application uses SSL / TLS means that it tacitly assumes the existence of a man in the middle. If SSL / TLS allows a man-in-the-middle attack at all, it fails in its basic raison d'être.
Next post, I promise something other than SSL renegotiation attacks.
[Note – for previous parts in this series, see Part 1 and Part 2.]
FTP, and FTP over SSL, are my specialist subject, having written one of the first FTP servers for Windows to support FTP over SSL (and the first standalone FTP server for Windows!)
Rescorla and others have concentrated on the SSL MITM attacks and their effects on HTTPS, declining to discuss other protocols about which they know relatively far less. OK, time to step up and assume the mantle of expert, so that someone with more imagination can shoot me down.
FTPS is not vulnerable to this attack.
No, that’s plainly rubbish. If you start thinking along those lines in the security world, you’ve lost it. You might as well throw in the security towel and go into a job where you can assume everybody loves you and will do nothing to harm you. Be a developer of web-based applications, say. :-)
And they are all dependent on the features, design and implementation of your individual FTPS server and/or client. That’s why I say “possible”.
The obvious attack – renegotiation for client certificates – is likely to fail, because FTPS starts its TLS sessions in a different way from HTTPS.
In HTTPS, you open an unauthenticated SSL session, request a protected resource, and the server prompts for your client certificate.
In FTPS, when you connect to the control channel, you provide your credentials at the first SSL negotiation or not at all. There’s no need to renegotiate, and certainly there’s no language in the FTPS standard that allows the server to query for more credentials part way into the transaction. The best the server can do is refuse a request and say you need different or better credentials.
A renegotiation attack on the control channel that doesn’t rely on making the server ask for client credentials is similarly unlikely to succeed – when the TLS session is started with an AUTH TLS command, the server puts the connection into the ‘reinitialised’ state, waiting for a USER and PASS command to supply credentials. Request splitting across the renegotiation boundary might get the user name, but the password wouldn’t be put into anywhere the attacker could get to.
At first sight, the data connection, too, is difficult or impossible to attack – an attacker would have to guess which transaction was an upload in order to be able to prepend his own content to the upload.
But that’s betting without the effect that NATs had on the FTP protocol.
Because the PORT and PASV commands involve sending an IP address across the control channel, and because NAT devices have to modify these commands and their responses, in many implementations of FTPS, after credentials have been negotiated on the control channel, the client issues a “CCC” command, to drop the control channel back into clear-text mode.
Yes, that’s right, after negotiating SSL with the server, the client may throw away the protection on the control channel, so the MitM attacker can easily see what files are going to be accessed over what ports and IP addresses, and if the server supports SSL renegotiation, the attacker can put his data in at the start of the upload before renegotiating to hand off to the legitimate client. Because the client thinks everything is fine, and the server just assumes a renegotiation is fine, there’s no reason for either one to doubt the quality of the file that’s been uploaded.
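To see what that gives away, here's a sketch of a control-channel conversation of the kind described above. The response codes are the standard ones from RFC 4217 and RFC 959, but the exact wording varies by server, so treat it as illustrative; everything after the CCC succeeds is visible to, and modifiable by, the man in the middle:

C: AUTH TLS
S: 234 Proceed with negotiation
    (TLS handshake; USER and PASS now travel over the encrypted control channel)
C: PBSZ 0
S: 200 PBSZ=0
C: PROT P
S: 200 Data protection level set to P
C: CCC
S: 200 Control channel switched to clear text
    (from here, the attacker can read and rewrite every command and reply)
C: PASV
S: 227 Entering Passive Mode (192,0,2,10,19,137)
C: REST 0
S: 350 Restarting at 0
C: STOR upload.exe
S: 150 Opening data connection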
How could this be abused? Imagine that you are uploading an EXE file, and the hacker prepends it with his own code. That’s how I wrote code for a ‘dongle’ check in a program I worked on over twenty years ago, and the same trick could still work easily today. Instant Trojan.
There are many formats of file that would allow abuse by prepending data. CSV files, most exploitable buffer overflow graphic formats, etc.
While I’m on FTP over SSL implementations and the data connection, there’s also the issue that most clients don’t properly terminate the SSL connection in FTPS data transfers.
As a result, the server can’t afford to report as an error when a MitM closes the TCP connection underneath them with an unexpected TCP FIN.
That’s bad – but combine it with FTP’s ability to resume a transfer from part-way into a file, and you realize that an MitM could actually stuff data into the middle of a file by allowing the upload to start, interrupting it after a few segments, and then when the client resumed, interjecting the data using the renegotiation attack.
The attacker wouldn’t even need to be able to insert the FIN at exactly the byte mark he wanted – after all, the client will be sending the REST command in clear-text thanks to the CCC command. That means the attacker can modify it, to pick where his data is going to sit.
Not as earth-shattering as the HTTPS attacks, but worth considering if you rely on FTPS for data security.
1. I never bothered implementing SSL / TLS renegotiation – didn’t see it as necessary; never had the feature requested. Implementing unnecessary complexity is often cause for a security failure.
2. I didn’t like the CCC command, and so I didn’t implement that, either. I prefer to push people towards using Block instead of Stream mode to get around NAT restrictions.
I know, it’s merely fortunate that I made those decisions, rather than that I had any particular foresight, but it’s nice to be able to say that my software is not vulnerable to the obvious attacks.
I’ve yet to run this by other SSL and FTP experts to see whether I’m still vulnerable to something I haven’t thought of, but my thinking so far makes me happy – and makes me wonder what other FTPS developers have done.
I wanted to contact one or two to see if they’ve thought of attacks that I haven’t considered, or that I haven’t covered. So far, however, I’ve either received no response, or I’ve discovered that they are no longer working on their FTPS software.
Let me know if you have any input of your own on this issue.