Monthly Archives: June 2010

Woot got my Zune, Zune can’t get my woot!

Quite some time ago, my wife was very sneaky. Oh, she’s been sneaky time and again, but this is the piece of sneakiness that’s relevant to this post.

I logged on to woot.com one day, as I often do, and saw that there was a 30GB Zune for sale – refurbished, and quite a bit cheaper than most places had it for sale, but still more than I could plonk down without blinking.

I told my wife about it, and she told me that no, I was right, we couldn’t really afford it even at that price.

Then, months later, I found that my birthday present was a 30GB Zune – the very one from woot that she said we couldn’t afford.

Ever since then, I’ve been a strong fan of Zune and woot alike.

The other day, though, it dawned on me that I could use my Zune (I now have a Zune HD 32GB) to keep up with woot’s occasional “woot-off” events, where they offer a succession of deals throughout the day. Unfortunately, I can’t actually buy anything from woot on the Zune.

I couldn’t figure this out for a while, and assumed that it was simply a lack of Flash support.

Sidebar: Why the Zune and iPhone Don’t Have Flash Support

It’s not immediately obvious that there’s a difference between the Zune having no Flash support, and the iPhone having no Flash support.

But there is – and it’s a little subtle.

The Zune doesn’t have Flash support because Adobe haven’t built it.

The iPhone doesn’t have Flash support because Apple won’t let Adobe build it.

Back to the main story – why my Zune can’t woot!

I did a little experimenting, and it’s not that woot requires Flash.

I tried to log on directly to the account page at https://sslwww.woot.com/Member/YourAccount.aspx (peculiar, that – the URL says “Your Account”, but it’s my account, not yours, that I see there. That’s why you shouldn’t use personal pronouns in folder names).

That failed with a cryptic error – “Can’t load the page you requested. OK”

No, it’s not actually OK that you can’t load the page, but thanks for telling me what the problem was.

Oh, that’s right, you didn’t, you just told me “failed”. Takes me right back to the days of “Error 4/10”.

The best I can reckon is that, since the Zune can visit other SSL sites, and other browsers have no problem with this SSL site, the Zune simply doesn’t trust this site’s certificate chain.

That should be easy to fix: all I have to do on my PC, or in any number of web browsers, is add the root certificate from the site’s certificate chain to my Trusted Root store.
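To show what I mean, here’s a minimal sketch of that check from a desktop – in Python, which is obviously not what the Zune runs, and with “woot-root-ca.pem” standing in as a placeholder for wherever you export the root certificate from the site’s chain. The first handshake trusts only the roots the platform already has; the second also trusts the exported root, which has the same effect as adding it to the Trusted Root store.

```python
import socket
import ssl

HOST = "sslwww.woot.com"   # the host from the account-page URL above
PORT = 443

def try_handshake(context: ssl.SSLContext) -> None:
    """Attempt a TLS handshake and report why it failed, if it failed."""
    try:
        with socket.create_connection((HOST, PORT), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=HOST) as tls:
                print("Handshake OK; peer subject:", tls.getpeercert().get("subject"))
    except ssl.SSLCertVerificationError as err:
        # The kind of informative message the Zune's browser never gave me.
        print("Certificate verification failed:", err.verify_message)

# 1. Verify the site's chain against the roots the platform already trusts.
try_handshake(ssl.create_default_context())

# 2. Verify again, also trusting the root exported from the site's chain
#    ("woot-root-ca.pem" is a placeholder file name).
trusted = ssl.create_default_context()
trusted.load_verify_locations(cafile="woot-root-ca.pem")
try_handshake(trusted)
```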

Sadly, I can find no way to do this for my Zune. So, no woot.

Would this be a feature other people would want?

I think it would – for a start, it would mean that users could add web sites that were previously unavailable to them – including test web sites that they might be working on, which use self-signed test certificates.

But more than that, the ability to add a new root certificate to the Zune’s trusted root certificate store is a prerequisite for another piece of functionality that people have been begging for: without it, it is often impossible to support WPA2 Enterprise wireless mode. So, the “add certificate to my Zune’s Trusted Root store” feature would be a step toward providing WPA2 Enterprise support.

How would that interface look on the Zune?

I’m not sure that the interface would have to be on the Zune itself – perhaps the Zune could queue up the certificates that failed validation and pass them to the Zune software, which would then ask its user at the next sync, “do you want to trust these certificates to enable browsing to these sites?”

Similarly, for WPA2 Enterprise mode, it could ask the Zune software’s user, “do you want to connect to this WPA2 Enterprise network in future?”
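To make the idea concrete, here’s a rough sketch of that queue-and-prompt flow – plain Python, purely illustrative, and none of these names correspond to any real Zune API: the device queues up the certificates it couldn’t verify, and the desktop software asks the user about each one at the next sync.

```python
from dataclasses import dataclass, field

@dataclass
class FailedCertificate:
    site: str          # host the device tried (and failed) to reach
    issuer: str        # issuer of the root the device didn't recognise
    fingerprint: str   # thumbprint the user can check before trusting it

@dataclass
class Device:
    pending: list = field(default_factory=list)       # queued verification failures
    trusted_roots: set = field(default_factory=set)   # fingerprints the user accepted

    def record_failure(self, cert: FailedCertificate) -> None:
        """On the device: remember a certificate chain that failed verification."""
        self.pending.append(cert)

def sync(device: Device) -> None:
    """On the desktop, at sync time: ask the user about each queued certificate."""
    for cert in device.pending:
        answer = input(f"Trust {cert.issuer} ({cert.fingerprint}) "
                       f"to enable browsing to {cert.site}? [y/N] ")
        if answer.strip().lower().startswith("y"):
            device.trusted_roots.add(cert.fingerprint)
    device.pending.clear()

# Example: the woot failure is queued on the device, then resolved at sync.
zune = Device()
zune.record_failure(FailedCertificate(site="sslwww.woot.com",
                                      issuer="Example Root CA",          # placeholder
                                      fingerprint="AB:CD:EF:01:23:45"))  # placeholder
sync(zune)
```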

On Full Disclosure

I’ve written before on “Full Disclosure”:

Recent events have me thinking once again about “full disclosure”, its many meanings, and how it makes me feel when bugs are disclosed publicly without allowing the vendor or developer time to address the bug themselves.

The post that reminded me to write on this topic was Tavis Ormandy’s revelation of the Help Control Protocol vulnerability, but it could have been anyone that triggered me to write this.

How you disclose implies your motivation

Securing the users

If your motivation is to help secure users and their systems, then I think your disclosure pattern should roughly be:

  1. Find the world-renowned experts in the code where the vulnerability lies (usually including the software’s developers).
  2. Discuss the extent of the flaw, and methods to fix and/or work around it.
  3. Get consensus.
  4. Test workarounds and fixes, to ensure that your fix is sufficient and that it does not kill more important functionality.
  5. Publicise only as much demonstration as is required to show that the problem exists, and that it is serious.
  6. Release patches and workarounds, and work with affected users to assist them in deploying these.
  7. After a reasonable amount of time, publicise the exploit in full detail, so as to encourage developers not to make similar mistakes, and to ensure that slow users are given good reason to upgrade their systems.
  8. Only if the vendor refuses to work with you at all do you publish without their involvement.

[Obviously, some of the timing moves up if and when the exploit appears in the wild, but the order is essentially the same.]

Disadvantages:

  • The bad guys may already have the vulnerability.
    • This only makes sense with relatively obvious vulnerabilities, and even then, working with the vendor allows you and the vendor to quantify its extent beyond what you know on your own, and beyond what the bad guys currently know, so that the bug can be fixed properly. Believe it or not, enterprises get really pissed when you release a “bug fix”, and then release another fix for the same bug, and then another fix for the same bug. Every time you revise the fix, you decrease the number of users who apply it.
  • Someone else may publish ahead of you.
    • That’s okay, you’re smart and you’ll get the next one – besides, most vendors you’re working with will say in their bug report that you reported it to them, rather than the guy who publishes half-cocked.
    • Your bug report, written in collaboration with the vendor/developer, will be correct, whereas the other guy’s report will be full of its own holes, which you and the vendor can happily poke at.

Personal publicity

It’s fairly clear that there are some people in the security research industry whose main goal is that of self-publicity. These are the show-offs, whether they are publicising their company or their services or just themselves.

For these people the disclosure pattern would be:

  1. Demonstrate how clever I am by detailing the depth of the exploit with full examples.
  2. Watch while everything else happens.
  3. Occasionally interject that others don’t understand how important this vulnerability is.

Disadvantages:

  • This really makes the vendor hate you – which is great if you don’t ever need their assistance.
  • Occasionally, you’ll report something stupid – something that demonstrates that not only are you clueless about the software, but you’re loudly clueless.
  • It’s obvious that you’re in this for the publicity, rather than to help the user community get secure; as a result, users don’t come to you as much for help in securing their systems. Which is a shame if that’s the job you’re trying to get publicity for.

Just for the money

When all you’re in it for is the money, the answer is clear – you shop around, describing your bugs to TippingPoint and the like, then selling your bug to the highest bidder.

Disadvantages:

  • You may not necessarily get the publicity that brings future contracts and job interest.
  • There’s a chance that the person / group buying your bug doesn’t share your motives.
  • You get no further control over the progress of your bug.

Sometimes this isn’t so bad – you get the money, and many of the vulnerability buyers will work with vendors to address the bug – all the while, protecting their subset of users with their security tool.

To punish the vendor

What a noble goal – you’re trying to make it clear to users that they have chosen the wrong vendor.

Here, the disclosure pattern is simple:

  1. Release full details of the vulnerability, with a wormable exploit that requires as little user interaction as possible.
  2. Decry the security of a vendor that would be so stupid as to produce such an obvious bug and not find it before release.
  3. Wait and watch as your posse takes up the call and similarly disses your chosen target.

Disadvantages:

  • Again, you can look like an idiot if your research isn’t quite up to snuff.
  • Actually, you can look like an idiot anyway with this approach, especially when you pick on vendors whose security has improved significantly.
  • Vendors have their own posse:
    • People who work at the vendor
    • People who admire the vendor
    • People who share the vendor’s position, and don’t want people like you being shitty to them either.
  • You have to ask yourself – what am I looking for in a vendor before I determine that they are no longer subject to punishment?
    • Or are all vendors equally complicit in evil?
    • [Or only those who are fallible enough to let a bug slip through their testing?]

Here’s the lesson

You may agree or disagree with a lot of what I’ve written above – but if you’re going to publish vulnerability research, you have to deal with the prospect that people will be watching what you post, when you post it, how you post it – and they will infer from that (even if you think you haven’t implied anything of the sort) a motive and a personality. What are your posts and your published research going to say about your motives? Is that what you want them to say? Are you going to have to spend your time explaining that this is not really what you intended?

As Tavis is discovering, you can also find it difficult to separate your private vulnerability research from your employer – the line is perhaps harder to draw in Tavis’ case, since he is apparently employed as a vulnerability researcher. If your employer is understanding and you have an agreement as to what is personal work and what is work work, that’s not a big problem – but it can be a significant headache if that has not been addressed ahead of time.

If Google stops trusting Windows, can Windows users trust Google?

The story at the Financial Times is that Google has quietly stopped allowing their internal users – developers, testers, etc – to use Windows operating systems. Allegedly this is because they can’t trust the operating system after the “Aurora” attack earlier this year, in which systems at Google (and other companies) were compromised to steal credentials, email and source code.

Others have already pointed out that this makes little sense – for various reasons:

  • The attack was performed through a Trojan – that mostly means that human weaknesses were as much a part of the vulnerability as any technical issues. [I don’t see Google getting rid of people as a result]
  • The attack used multiple points of entry, including PDF files and Internet Explorer bugs – with the IE flaws being reliably exploited (and a reliable exploit is needed if a Trojan is to be successful) only in IE6.
  • The penetration through IE6 – the only operating system component under attack – succeeded because the systems were not only running an outdated browser with outdated protections (IE6), but their users were also running as unprotected administrators. [Again, I don’t see Google getting rid of those users, or requiring staff to not be Administrator on their own systems]

Me, I’d like to think that this is just a bogus story – all operating systems have flaws, and when you’re protecting against an attack that is targeted against your company, rather than scattershot against an operating system at random companies, the protection afforded by running a non-majority OS is pretty much wiped out. In addition, a managed installation of Windows (i.e. a domain) provides for far greater corporate control that can be used in instances of attack to tighten security settings, or to monitor more closely the configuration and activities of those systems. Other operating systems just don’t have that level of manageability. So it seems more likely that this is just some bluster on the part of the “Anyone but Microsoft” crowd than actual corporate policy.

Or it could just be that Google wants its employees to run the Google Chrome OS more, or perhaps even that Google wants to spread its bets across different platforms – all of these would be good reasons.

But the question in my title remains – if Google does stop trusting Windows, and stop using Windows, what does that mean for Windows users? It would mean that testing of Google’s sites and applications under Windows would be an afterthought, rather than a focus. Instead of Windows being present in some form at all stages of development, Windows would be just another Quality Assurance step – “now that we’ve built it, does it work on Windows without too many problems?”

It’s totally up to Google whether they wish to make that happen – certainly, if they see the bulk of their user base coming from systems other than Windows, it would make sense to focus on those. But if you’re a Windows user, I hope this story makes you anticipate this as a real possibility – one that could leave you without access to the Google resources – apps, documents, storage, email – upon which you rely.

So, what can you do?

Plan your exit strategy. How will you migrate your data away, and what will be your alternate applications? Or will you switch operating systems to follow Google? How will you decide when the time has come to make that change?

Of course, you can extend this discussion – what is your plan, Apple users, for when you have to choose between Apple and Adobe? That one may come sooner than any Google / Windows split.