Monthly Archives: January 2013

Removing capabilities from my first Windows Phone app

So, I thought I’d write a Windows Phone app using Visual Studio 2012 the other day. Just a simple little thing, to help me solve my son’s algebra homework without getting into the same binds he does (failure to copy correctly, fumbled arithmetic, you know the thing…)


And I ran into my first problem.


The app uses no phone capabilities worth advertising – you know, things like the choice to track your location, so that the app’s install will ask the user “do you want to allow this app to have access to your location”, and you say either “allow”, or “why the hell does a flashlight application need to know where I am?”


And yet, when I run the “Automated Tests” under the Store Test Kit, I get the following:


[Image: Store Test Kit automated test results, showing the capability validation output]


If you can’t read the image, or you’re searching for this in Google, I’ll tell you that it wants me to know that it’s validated all the capabilities I’m using, and has noticed that I’m using ID_CAP_MEDIALIB and ID_CAP_NETWORKING.


Weird, because I don’t do any networking, and I don’t access any of the phone user’s media.


It’s just my son and me using the app right now, but I can picture some paranoid person wondering why I need access to their media library or networking simply so I can solve the occasional simultaneous or quadratic equation!


Quite frankly, I was mystified, too. Did a bit of searching across the interWebs, but all the articles I found said the same thing – the MediaLib capability must be because you’re using something with the word “Radio” or “Media” in it somewhere (I’m not), and the Networking capability because you’re doing something across the network. I removed all the “using System.Net” lines from all of my code files, but still no joy.


[A quick tip: to find all these rules yourself, look in C:\Program Files (x86)\Microsoft SDKs\Windows Phone\v7.1\Tools\Marketplace for the file “rules.xml”, which will tell you what the capability detection code is looking for]
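To see what that detection data looks like in bulk, here's a rough Python sketch that dumps trigger strings per capability. Note the element and attribute names (`Rule`, `Capability`, `Match`) are assumptions for illustration – inspect the real rules.xml for its actual schema before relying on this:

```python
# Hypothetical sketch: dump which trigger strings map to which phone
# capability. The "Rule"/"Capability"/"Match" names are assumptions --
# check the real rules.xml in the SDK's Marketplace folder for the
# actual element and attribute names.
import xml.etree.ElementTree as ET

def capability_rules(path):
    """Return a mapping of capability name -> list of trigger strings."""
    rules = {}
    for rule in ET.parse(path).getroot().iter("Rule"):
        cap, trigger = rule.get("Capability"), rule.get("Match")
        if cap and trigger:
            rules.setdefault(cap, []).append(trigger)
    return rules

# Usage, pointed at the SDK's copy of the file:
# for cap, triggers in capability_rules("rules.xml").items():
#     print(cap, "<=", ", ".join(triggers))
```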


Nothing in my own code seemed to be actually causing this, so I took a step back and had a look at other references being included by the compiler by default.


System.Net seemed to be an obvious offender, so I removed that, to no effect (quite right, too, because it isn’t the offender, and doesn’t, on its own, cause ID_CAP_NETWORKING to be detected).


No, here’s the culprit:


[Image: the project’s References list, with Microsoft.Expression.Interactions among the entries]


Microsoft.Expression.Interactions – what on earth is that doing there?


It’s not something I remember including, and quite honestly, when I went looking for it, I was disappointed to find that it’s associated with Expression Blend, not something I’ve actually used EVER. [Maybe I should, but that’s a topic for another time].


After removing this reference and rebuilding, the XAP tested clean, with no capabilities flagged at all. Which is nice.


So, now I have my “Big Algebra” app in beta test, and it doesn’t tell the user that it’s going to need their media library or their network connection – because it’s not going to need them!

Could Google prompt Microsoft to provide easier syncing?

As a Windows Phone user, and trying to persuade my wife to one day become one (instead of the Blackberry she totes around), I’m constantly stopped by the prospect that there is no way to sync my calendar and contacts without going through some online service.

This is a very strange situation, because even Apple’s iPhone can apparently synchronise Outlook contacts and calendar entries over the USB connection.

Microsoft’s answer to date has always been that we should rent or borrow an Exchange Server of some sort, push our calendar and contact details to that server, and then fetch them down later. Not exactly secure (you’re sharing a high-value target, possibly operated by a company with whom you compete, as your mail server), and free only under fairly limited circumstances.

Maybe I’m too paranoid, but I don’t really fancy that level of reliance on someone else’s service to host and protect information that, up until now, I’ve held physically on only a couple of devices. So, I make do without my contacts on my phone, and I rarely have a firm idea of my calendar commitments until I’m back at home base.

There are solutions, of course, because there always are – and they rely, essentially, on setting up something that pretends to be Exchange on your local WiFi.

Not exactly secure, and not exactly cheap. Certainly not free.

Then comes the bad news – Google has decided to close down Exchange connectivity to GMail, so that Windows Phones will not be able to use GMail any more. I’m sure that’s not the reason they give in press releases, but it seems likely that’s at least seen as a handy side-benefit. [Does this mean Google sees a threat from Windows Phone?]

Rather uncharacteristically, but in a welcome move, Microsoft turned around and, instead of turning it into a raging war of words, said that if they couldn’t get at GMail that way any more, they’d support one of the other ways in – meaning they’ll start supporting IMAP, along with the calendar and contact sync formats GMail uses.

That changes everything

Because now, you don’t have to find an Exchange lookalike in order to sync locally – all you need is IMAP support, and support for the two formats, CardDAV and CalDAV.

These are simpler formats, more widely documented and supported than the Exchange protocols Microsoft previously insisted upon. I can see that when IMAP support arrives, a lot of Windows Phone users will gain access to email accounts they couldn’t use before, and when CardDAV and CalDAV are added, we should very quickly see solutions that allow for syncing of contacts and calendar while connected by USB.
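Part of what makes CardDAV approachable is that it carries each contact as a plain-text vCard. A deliberately minimal Python sketch of pulling out the fields a sync tool cares about (it ignores the line folding, encodings and parameters that real vCards can use):

```python
# Minimal vCard field extraction -- enough to show the format's shape.
# Real vCards need line-unfolding and parameter handling; this sketch
# assumes simple, unfolded input.
def parse_vcard(text):
    card = {}
    for line in text.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            # Drop parameters like ";TYPE=CELL" from the property name.
            card[key.split(";")[0].upper()] = value
    return card

sample = """BEGIN:VCARD
VERSION:3.0
FN:Ada Lovelace
TEL;TYPE=CELL:+1-555-0100
END:VCARD"""

contact = parse_vcard(sample)
print(contact["FN"], contact["TEL"])  # Ada Lovelace +1-555-0100
```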

2013 should be a good year to be a Windows Phone user.

And yes, I’m still waiting for my carrier to push Windows Phone 7.8.

On new exploit techniques

Last year’s discussion on “Scriptless XSS” made me realise that there are two kinds of presentation about new exploits – those that talk about a new way to trigger the exploit, and those that talk about a new way to take advantage of the exploit.

Since I didn’t actually see the “Scriptless XSS” presentation at Blue Hat (not having been invited, I think it would be bad manners to just turn up), I won’t address it directly, and it’s entirely possible that much of what I say is actually irrelevant to that particular paper. I’m really being dreadfully naughty here and setting up a strawman to knock down. In the tech industry, this practice is often referred to as “journalism”.

So where’s my distinction?

Let’s say you’re new to XSS. It’s possible many of you actually are new to XSS, and if you are, please read my previous articles about how it’s just another name for allowing an attacker to inject content (usually HTML) into a web page.

Your first XSS exploit example may be that you can put “<script>alert(1)</script>” into a search field, and it gets included without encoding into the body of the results page. This is quite frankly so easy I’ve taught my son to do it, and we’ve had fun finding sites that are vulnerable to this. Of course, we then inform them of the flaw, so that they get it fixed. XSS isn’t perhaps the most damaging of exploits – unlike SQL injection, you’re unlikely to use it to steal a company’s entire customer database – but it is an embarrassing indication that the basics of security hygiene are not being properly followed by at least some of your development team.
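To make the mechanics concrete (in Python here, purely for illustration rather than whatever any given site actually runs), the whole vulnerability is the difference between these two lines:

```python
# Sketch of reflected XSS: the fix is simply encoding user input before
# embedding it in the page body.
from html import escape

def results_page_unsafe(query):
    # Vulnerable: attacker-controlled text lands in the page verbatim.
    return "<p>Results for: " + query + "</p>"

def results_page_safe(query):
    # Encoded: '<' becomes '&lt;', so the browser renders text, not script.
    return "<p>Results for: " + escape(query) + "</p>"

payload = "<script>alert(1)</script>"
print(results_page_unsafe(payload))  # script tag survives intact
print(results_page_safe(payload))    # rendered as harmless text
```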

The trigger of the exploit here is the “<script>…</script>” portion (not including the part replaced by the ellipsis), and the exploit itself is the injection of script containing an alert(1) command.

Let’s say now that the first thing a programmer tries, in order to protect his page, is to replace the direct embedding of the text with an <input> tag, whose value is set to the user-supplied text, in quotes.

Your original exploit is foiled, because it comes out as:

<input readonly=1 value="<script>alert(1)</script>">

That’s OK, though, because the attacker will see that, and note that all he has to do is provide the terminating quote and angle bracket at the start of his input, to produce instead:

<input readonly=1 value=""><script>alert(1)</script>">

This is a newer method of exploiting XSS-vulnerable code. Although a simple example, this is the sort of thing it’s worth getting excited about.
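The defence, of course, is to encode the quotes as well as the angle brackets – a quick Python sketch of both versions of that input tag:

```python
# Sketch of the attribute-injection round: quoting the value is not
# enough unless the quotes themselves are encoded.
from html import escape

def input_tag_naive(value):
    # Quotes the value, but an embedded '"' still closes the attribute early.
    return '<input readonly=1 value="' + value + '">'

def input_tag_encoded(value):
    # escape(..., quote=True) turns '"' into '&quot;', so the attacker's
    # terminating quote never reaches the parser as markup.
    return '<input readonly=1 value="' + escape(value, quote=True) + '">'

breakout = '"><script>alert(1)</script>'
print(input_tag_naive(breakout))    # the script tag escapes the attribute
print(input_tag_encoded(breakout))  # stays inert inside the value
```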

Why is that exciting?

It’s exciting because it causes a change in how you planned to fix the exploit. You had a fix that prevented the exploit from happening, and now it fails, so you have to rethink this. Any time you are forced to rethink your assumptions because of new external data, why, that’s SCIENCE!

And the other thing is…?

Well, the other thing is noting that if the developer did the stupid thing, and blocked the word “alert”, the attacker can get around that defence by using the “prompt” keyword instead, or by redirecting the web page to somewhere the attacker controls. This may be a new result, but it’s not a new trigger, it’s not a new cause.
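A toy Python version of that keyword blacklist shows why it’s hopeless – blocking “alert” says nothing about “prompt”, “confirm”, or a redirect to a page the attacker controls:

```python
# A toy keyword blacklist of the kind described above. It blocks the one
# payload the developer saw, and nothing else.
def naive_filter(text):
    """Return True if the input is considered 'safe' by the blacklist."""
    return "alert" not in text.lower()

# Blocked, as intended:
assert not naive_filter("<script>alert(1)</script>")
# Trivial bypasses sail straight through:
assert naive_filter("<script>prompt(1)</script>")
assert naive_filter("<script>location='http://evil.example/'</script>")
```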

When defending your code against attack, always ask yourself which is the trigger of an attack, rather than the body of the attack itself. Your task is to prevent the trigger, at which point the body becomes irrelevant.

Time to defend myself.

I’m sure that someone will comment on this article and say that I’m misrepresenting the field of attack blocking – after all, the XSS filters built into major browsers surely fall into the category of blocking the body, rather than the trigger, of an attack, right?

Sure, and that’s one reason why they’re not 100% effective. They’re a stopgap measure – a valuable stopgap measure, don’t get me wrong, but they are more in the sense of trying to recognise bad guys by the clothing they choose to wear, rather than recognising bad guys by the weapons they carry, or their actions in breaking locks and planting explosives. Anyone who’s found themselves, as I have, in the line of people “randomly selected for searching” at the airport, and looked around and noted certain physical similarities between everyone in the line, will be familiar with the idea that this is more an exercise in increasing irritation than in applying strong security.

It’s also informative to see methods by which attacks are carried out – as they grow in sophistication from fetching immediate cookie data to infecting browser sessions and page load semantics, it becomes easier and easier to tell developers “look at all these ways you will be exploited, and you will begin to see that we can’t depend on blocking the attack unless we understand and block the triggers”.

Keep doing what you’re doing

I’m not really looking to change any behaviours, nor am I foolish enough to think that people will start researching different things as a result of my ranting here.

But, just as I’ve chosen to pay attention to conferences and presentations that tell me how to avoid, prevent and fix, over those that only tell me how to break, I’ll also choose to pay attention to papers that show me an expansion of the science of attacks, their detection and prevention, over those that engage in a more operational view of “so you have an inlet, what do you do with it?”