
General Security

Error 860 in Windows 8.1 / Surface VPN

It should be easy enough to set up a VPN in Windows, and everything should work well, because Microsoft has been doing these sorts of things for some years.

[Screenshot: PC Settings / Network, with a button to add a VPN connection]

Sure enough, if you open up the Charms bar, choose Settings, Change PC Settings, and finally Network, you’re brought to this screen, with a nice big friendly button to add a VPN connection. Tapping on it leads me to the following screen:

[Screenshot: the "Add a VPN connection" form]

No problems, I’ve already got these settings ready to go.

[Screenshot: the VPN connection settings, filled in and named "New VPN"]

Probably not the best idea to name my VPN connection "New VPN", but then I'm not telling you my VPN endpoint. So, let's connect to this new connection.

[Screenshot: connecting to "New VPN"]

So far, so good. Now it’s verifying my credentials…

[Screenshot: verifying credentials]

And then we should see a successful connection message.

[Screenshot: the Error 860 message]

Not quite. For the search engines, here’s the text:

Error 860: The remote access connection completed, but authentication failed because of an error in the certificate that the client uses to authenticate the server.

This is upsetting, because of course I’ve spent some time setting the certificate correctly (more on that in a later post), and I know other machines are connecting just fine.

I’m sure that, at this point, many of you are calling your IT support team, and they’re reminding you that they don’t support Windows 8 yet, because of some lame excuse about it being ‘not yet stable, official, standard, or Linux’.

Don’t take any of that. Simply open the Desktop.

What? Yes, Windows 8 has a Desktop. And a Command Prompt, and PowerShell. Even in the RT version.

Oh, uh, yeah, back to the instructions.

Forget navigating the Desktop – just press Windows+X, then W, to open the Network Connections group, like this:

[Screenshot: the Network Connections window]

Select the VPN connection you’ve created, and choose the option to “Change settings of this connection”:

[Screenshot: selecting "Change settings of this connection"]

In the Properties window that pops up, you need to select the Security tab:

[Screenshot: the Security tab, with neither Authentication radio button selected]

OK, so that’s weird. The Authentication group box has two radio buttons – but neither one is selected. My Grandma had a radio like that – you couldn’t tell what station you were going to get when you turned it on – and the same is generally true of software. So, we should choose one:

[Screenshot: the Security tab, with an authentication option now selected]

It probably matters which one you choose, so check with your IT team (tell them you’re connecting from Windows 7, if you have to).

Then we can connect again:

[Screenshots: connecting, verifying credentials, and successfully connected]

And… we’re connected.

Now for another surprise: the Desktop Internet Explorer works just fine, but the “Modern UI” (formerly known as “Metro”) version of IE decides it will only talk to sites inside your LAN, and won’t talk to external sites. Oh, and that behaviour extends to any Modern UI app that embeds web content.

I’m still working on that one. News as I have it!

There is no such thing as “small sample code”

Every few months, something encourages me to make the tweet that:

There is no such thing as “small sample code”, every sample you publish is an SDK of its own

OK, so the choice of calling these “SDKs” is rooted in my Microsoft dev background, where “sample code” didn’t need documentation or bug tracking, whereas an SDK does. You can adjust the terminology to suit.

The basic point here is to remind you that you do not get to abdicate all responsibility by saying “this is sample code, you will need to add error checking and security”, even if you do say it in the article – even if you say it in the comments of the sample!

Why do I care so much? It’s only three lines of code!

Simply stated, I’ve seen too many cases where people have copied three lines of code (or five, or twenty – the count doesn’t matter) into a program, and then stepped away and shipped that code.

“It wasn’t my fault,” they say, when the incident happens, “I copied that code from a sample online.”

This is the point at which the re-education machine is engaged – because, of course, it totally is your fault, if you include code in your development without treating it with the same rigour as if you had written every line of it yourself. You will get punished – usually by having to stay late and fix it.

It’s also the sample writer’s fault.

He gave you the mini-SDK that you imported blindly into your application, without testing it, without checking errors in it, without appropriate security measures, and he brushed you off with “well, of course, you should add your own error checks and security magic to it”.

Here’s an example of what I’m talking about, courtesy of Troy Hunt linking to an ASP forum.

If you’re providing sample code on the Internet, it’s important to make sure it doesn’t embody bad design; this code will be taken up by people who are, by definition, less keen, less smart, and less motivated to do things right than you are – after all, rather than figure out how to write this code for themselves, they are letting you do it for them, to teach them how it’s done. If you teach them how it’s done badly, that’s how they will learn to do it – badly. And they will teach others.

So, instead, make your three line samples five lines, and add enough error checking that unexpected issues or other bad things will break the sample’s execution.
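To illustrate with a hypothetical sketch (the response object and the saveRecord / showMessage helpers are invented for this example, not taken from any real sample):

    // The three-line sample as typically published:
    var data = JSON.parse(response.body);
    saveRecord(data.user, data.record);
    showMessage("Saved!");

    // The five-line version, which fails loudly instead of silently:
    if (response.status !== 200) throw new Error("Request failed: " + response.status);
    var data = JSON.parse(response.body);  // throws on malformed JSON
    if (!data || !data.user) throw new Error("Unexpected response shape");
    saveRecord(data.user, data.record);
    showMessage("Saved!");

The copier can still rip the checks out, but at least the sample they started from breaks visibly when something unexpected happens, rather than quietly saving garbage.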

Oh yeah, and what about updates, when you find a horrendous bug – how do you distribute those?

In which a coffee store learns not to blacklist

I’ve been playing a lot lately with cross-site scripting (XSS) – you can tell that from my previous blog entries, and from the comments my colleagues make about me at work.

Somehow, I have managed to gain a reputation for never leaving a search box without injecting code into it.

And to a certain extent, that’s deserved.

But I always report what I find, and I don’t blog about it until I’m sure the company has fixed the issue.

So, coffee store, we’re talking Starbucks, right?

Right, and having known a few people who’ve worked in the Starbucks security team, I was surprised that I could find anything at all.

Yet it practically shouted at me, as soon as I started to inject script:

[Screenshot: the "oops" error page]

Well, there’s pretty much a hint that Starbucks have something in place to prevent script.

But it’s not the only thing preventing script, as I found with a different search:

[Screenshot: a search results page reporting that nothing happened]

So, one search takes me to an “oops” page, another takes me to a page telling me that nothing happened – but without either one executing the script.

The oops page doesn’t include any of my script, so I don’t like that page – it doesn’t help my injection at all.

The search results page, however, includes some of my script, so if I can just make that work for me, I’ll be happy.

Viewing source is pretty helpful, so here’s what I get when I do that and search for my injected script:

[Screenshot: the page source, with the injected script visible]

So, while my intended JavaScript, "-prompt(1)-", is not executed – and indeed is in the wrong context to be executed – every character has successfully made it into the source sent back to the user’s browser.
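I can’t show you the site’s actual markup, so here’s a hypothetical reconstruction of the kind of context I mean – the payload landing intact inside an ordinary attribute value, where JavaScript never executes:

    <!-- Hypothetical reconstruction, not the real source: the first
         quote of the payload ends the content attribute early, but
         what follows is parsed as junk attributes, not as script: -->
    <meta property="og:title" content="Search results for "-prompt(1)-"" />

Every character survived the trip, but the browser has no reason to run any of it.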

At this point, I figure that I need to find some execution that is appropriate for this context.

Maybe the XSS fish will help, so I search for that:

[Screenshot: searching for the XSS fish]

Looks promising – no “oops”, let’s check the source:

[Screenshot: the page source for the XSS fish search]

This is definitely working. At this point, I know the site has XSS; I just have to demonstrate it. If I were a security engineer at Starbucks, this would be enough to send me off to bang some heads together.

I think I should stress that. If you ever reach this point, you should fix your code.

This is enough evidence that a site has XSS issues to make a developer do some work on fixing it. I have escaped the containing quotes, I have terminated/escaped the HTML tag I was in, and I have started something like a new tag. I have injected into your page, and now all we’re debating about is how much I can do now that I’ve broken in.

And yet, I must go on.

I have to go on at this point, because I’m an external researcher to this company. I have to deliver to them a definite breach, or they’ll probably dismiss me as a waste of time.

The obvious thing to inject here is "><script>prompt(1)</script> (closing quote, closing bracket, script tag) – but we saw earlier that produced an “oops” page. We’ve seen that “prompt(1)” isn’t rejected, and the angle brackets (chevrons, less-than / greater-than signs, whatever you want to call them) aren’t rejected, so it must be the word “script”.

That, right there, is enough to tell me that instead of encoding the output (which would turn those angle brackets into “&lt;” and “&gt;” in the source code, while still looking like angle brackets in the display), this site is using a blacklist of “bad words to search for”.

Why is a blacklist wrong?

That’s a really good question – and the basic answer is that you just can’t make most blacklists complete. A blacklist only works if you have a very limited character set, and a good reason to believe that your blacklist covers it completely.

A blacklist that might work: surround every HTML attribute value with double quotes, so that your blacklist is just the double quote (which you encode), plus the characters used to do the encoding (which you also encode).

I say it “might work”, because in the wonderful world of Unicode and developing HTML standards, there might be another character to escape the encoding, or a set of multiple code points in Unicode that are treated as the encoding character or double quote by the browser.

Easier by far to use a whitelist – only these few characters are safe, and ALL the rest get encoded.

You might have an incomplete whitelist, but that’s easily fixed later, and at its worst is no more than a slight inefficiency. If you have an incomplete blacklist, you have a security vulnerability.
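Here’s a minimal sketch of the kind of whitelist encoder I mean – my own illustration, not anyone’s production code (real code should use a vetted library, and should handle the full Unicode range rather than charCodeAt’s UTF-16 code units):

    // Letters, digits and spaces pass through; every other
    // character becomes a numeric HTML character reference.
    function encodeForHtml(input) {
      return input.replace(/[^A-Za-z0-9 ]/g, function (c) {
        return "&#" + c.charCodeAt(0) + ";";
      });
    }

    encodeForHtml('"-prompt(1)-"');
    // -> "&#34;&#45;prompt&#40;1&#41;&#45;&#34;"

An incomplete whitelist means some harmless character shows up encoded; an incomplete blacklist means script runs.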

Back to the story

OK, so having determined that I can’t use the script tag, maybe I can add an event handler to the tag I’m in the middle of displaying, whether it’s a link or an input. Perhaps I can get that event handler to work.

Ever faithful is the “onmouseover” event handler. So I try that.

You don’t need to see the “oops” page again. But I did.

The weirdest thing, though, is that the “onmooseover” event worked just fine.

Except I didn’t have a moose handy to demonstrate it executing.

[Screenshot: the "onmooseover" handler injected into the page source]

So, that means that they had a blacklist of events, and onmouseover was on the list, but onmooseover wasn’t.

Similarly, “onfocus” triggered the “oops” page, but “onficus” didn’t. Again, sadly I didn’t have a ficus with me.

You’re just inventing event names.

Sure, but then so is the community of browser manufacturers. There’s a range of “ontouch” events that weren’t on the blacklist, but are supported by a browser or two – and then you have to wonder if Google, maker of the Chrome browser and the Glass voice-controlled eyewear, might not introduce an event or two for eyeball tracking. Maybe a Kinect-powered browser will introduce “onwaveat”. Again, the blacklist isn’t future-proof: if someone invents a new event, you have to hope you find out about it before the attackers try to use it.

Again, back to the story…

Then I tried adding characters to the beginning of the event name. Curious – that gets past the filter.

[Screenshot: "?onmouseover" injected into the page source]

And, yes, the source view showed me the event was being injected. The browser wasn’t executing it, though, because “?onmouseover” isn’t something a browser will ever execute – the HTML spec just doesn’t allow for it.

Eventually, I made my way through the ASCII table to the forward-slash character.

[Screenshot: "/onmouseover" injected, and the prompt dialog executing]

Magic!

Yes, that’s it, that executes. There’s the prompt.
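Why does the slash work? In HTML parsing, a stray “/” inside a tag acts as a separator, much as whitespace does, so the browser happily starts a new attribute right after it. Here’s a hypothetical reconstruction of the injected result (the field name is invented):

    <!-- The payload's leading quote closes the value attribute, and
         the "/" lets the parser begin a new attribute - a real,
         working event handler: -->
    <input type="text" name="q" value=""/onmouseover=prompt(1) >

Presumably the blacklist was looking for the event name in its usual position, preceded by whitespace – so “/onmouseover” sails past the filter, yet the browser ends up with a live handler anyway.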

Weirdly, if I used “alert” instead of “prompt”, I got the “oops” page. Clearly, “alert” is on the blacklist and “prompt” is not.

I still want to make this a ‘hotter’ report before I send it off to Starbucks, though.

How “hotter”?

Well, it’d be nice if it didn’t require the user to find and wave their mouse over the page element that you’ve found the flaw in.

Fortunately, I’d also recently found a behaviour in Internet Explorer that allows a URL to set focus to an element on the page by its ID or name. And there’s an “onfocus” event I can trigger with “/onfocus”.

[Screenshot: the prompt firing automatically on page load, via "/onfocus" and a fragment ID]

So, there we are – automated execution of my chosen code.
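The finished link might look something like this – purely illustrative, with the domain, path, and field id invented:

    https://example.com/search?q="/onfocus=prompt(1)#searchbox

The query string carries the injected handler, and the fragment names the vulnerable field, so Internet Explorer sets focus to it as the page loads and the handler fires with no user interaction at all.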

Anything else to make it more sexy?

Sure – how about something an attacker might try – a redirect to a site of their choosing. [But since I’m not an attacker, we’ll do it to somewhere acceptable]

I tried to inject onfocus='document.location="//google.com"' – but apparently, “document” and “location” are also on the banned list.

“ownerDocu”, “ment”, “loca” and “tion” aren’t on the blacklist, so I can do this["ownerDocu"+"ment"]["loca"+"tion"]= … instead.

Very quickly, this URL took the visitor away from the Starbucks search page and on to the Google page.
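For the curious, the assembled injection would have looked something like this hypothetical reconstruction (again, the field name is invented):

    <!-- "document" and "location" never appear as whole words, so the
         word blacklist has nothing to match - but the handler still
         reaches document.location and assigns to it: -->
    <input name="q" value=""/onfocus=this["ownerDocu"+"ment"]["loca"+"tion"]="//google.com" >

Inside the handler, this is the input element itself, so its ownerDocument is the page’s document, and assigning a URL to that document’s location navigates the browser away.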

Now it’s ready to report.

Hard part over, right?

Well, no, not really. It took me a couple of months to get this reported. I tried “security@starbucks.com”, which is the default address for reporting security issues.

An auto-reply comes my way, informing me this is for Starbucks staff to report [physical] security issues.

I try the webmaster@ address, and that gets me nowhere.

The “Contact Us” link takes me to a customer service representative, and an entertaining exchange that results in them telling me that they’ve passed my email around everyone who’s interested, and the general consensus is that I should go ahead and publish my findings.

So you publish, right?

No, I’m not interested in self-publicising at the cost of someone else’s security. I do this so that things get more secure, not less.

So, I reach out to anyone I know who works for Starbucks, or has ever worked for Starbucks, and finally get to someone in the Information Security team.

This is where things get far easier – and where Starbucks does the right things.

The Information Security team works with me – politely, quickly, calmly – and addresses the problem completely. The blacklist is still there, and still takes you to the “oops” page – but it’s no longer the only protection in place.

My “onmooseover” and “onficus” events no longer work, because the correct characters are quoted and encoded.

The world is made safer and more secure, and half a year later, I post this article, so that others can learn from this experience, too.

By withholding publishing until well after the site is fixed, I ensure that I’m not making enemies of people who might be in a position to help me later. By fixing the site quickly and quietly, Starbucks ensure that they protect their customers. And I, after all, am a customer.

The Starbucks Information Security team have also promised that there is now a route from security@ to their inbox, as well as better training for the customer service team to redirect security reports their way, rather than suggesting that researchers publish. I think they were horrified that anyone had suggested that. I know I was.

And did I ever tell you about the time I got onto Google’s hall of fame?

Why don’t we do that?

Reading a story on the consequences of the theft of Adobe’s source code by hackers, I come across this startling phrase:

The hackers seem to be targeting vulnerabilities they find within the stolen code. The prediction is that they’re sifting through the code, attempting to find widespread weaknesses, intending to exploit them with maximum effect by using zero-day attacks.

What I’d love to know is why we aren’t seeing a flood of developers crying out to be educated in how they, too, can sift through their own code and find widespread weaknesses, so they can shore them up and prevent their code from being exploited.

An example of the sort of comments we are seeing can be found here, and they are fairly predictable – “does this mean Open Source is flawed, if having access to the source code is a security risk”, schadenfreude at Adobe’s misfortune, all manner of assertions that Adobe weren’t a very secure company anyway, etc.

Something that’s missing is an acknowledgement that we all draw from the same pool of developers.

And attackers.

So, if you’re in the business of developing software – whether to sell, license, give away, or simply use in your own endeavours – you’re essentially in the same boat as Adobe prior to the hackers breaching their defences. Possibly the same boat as Adobe after the breach, but prior to its discovery.

Unless you are doing something different to what Adobe did, you are setting yourself up to be the next Adobe.

Obviously, Adobe isn’t giving us entire details of their own security program, and what’s gone right or wrong with it, but previous stories (as early as mid-2009) indicated that they were working closely with Microsoft to create an SDL (Security Development Lifecycle) for Adobe’s development.

So, instead of being all kinds of smug that Adobe got hacked, and you didn’t, maybe you should spend your time wondering if you can improve your processes to even reach the level Adobe was at when they got hacked.

And, to bring the topic back to what started the discussion – are you even doing to your software what these unidentified attackers are doing to Adobe’s code?

Are you poring over your own source code to find flaws?

How long are you spending to do that, and what tools are you using to do so?

Government Shuts Down for Cyber Security

In a classic move, clearly designed to introduce National Cyber Security Awareness Month with quite a bang, the US Government has shut down, making it questionable whether National Cyber Security Awareness Month will actually happen.

In case the DHS isn’t able to make things happen without funding, here’s what they originally had planned:

[Image: the DHS plan for National Cyber Security Awareness Month]

I’m sure you’ll find me and a few others keen to engage you on Information Security this month, in the absence of any functioning legislators.

Maybe without the government in charge, we can stop using the “C” word to describe it.

UPDATE 1

The “C” word I’m referring to is, of course, “Cyber”. Bad word. Doesn’t mean anything remotely like what people using it think it means.

UPDATE 2

The main page of the DHS.GOV web site actually does carry a small banner indicating that there’s no activity happening at the web site today.

[Screenshot: the DHS.GOV banner noting that the site is not being updated during the shutdown]

So, there may be many NCSAM events, but DHS will not be a part of them.

Security-SPP errors in the event log. EVERY. THIRTY. SECONDS.

I admit that it’s a little strange to look at your event log fairly often, but I occasionally find interesting behaviour there, and certainly whenever I encounter an unexpected error, that’s where I look first.

Why?

Because that’s actually where developers put information relating to problems you’re experiencing.

So, when I tried to install Windows 8.1 and was told that I would be able to keep “Nothing” – no apps, no settings, etc – I assumed there would be an error in the log.

But all I saw was this:

[Screenshot: the Application event log showing the Security-SPP error]

So, yes, that’s an error with:

Source: Security-SPP
Event ID: 16385
Error Code: 0x80041316

This goes back to September 2, but only because the Application log it’s in has already run out of room and ‘rolled over’ from too many entries. Presumably, then, whatever caused this started before that.

Searching online, I found some others who had experienced the same thing – the most recent in January 2013 – and who had posted about this error on the TechNet forums.

A Microsoft representative had answered indicating that the cause could be (of all strange things) a partition with no name. Odd. Then they suggested Refreshing or Reinstalling the PC.

I’m not reinstalling unless there’s something hugely wrong, and the refresh didn’t help at all.

So, on to tracing the cause of the problem.

“Schedule” suggests it might be a Task Scheduler issue, and sure enough, when I open up the Task Scheduler (it’s under the Administrative Tools in the Control Panel, making it very hard to find in Windows 8), I get the following error:

[Screenshot: the Task Scheduler error dialog]

Or for the search engines to find, title: “Task Scheduler”, text: “Task SvcRestartTask: The task XML contains an unexpected node.”

It’s a matter of fairly simple searching (as an Administrator, naturally) to find this file “SvcRestartTask” under C:\Windows\System32\Tasks\Microsoft\Windows\SoftwareProtectionPlatform.

So I moved this file out to a different folder, renaming it SvcRestartTask.xml.

Time to edit it.

Among other lines in the file, these stood out:

    <RestartOnFailure>
      <Priority>3</Priority>
      <Priority>PT1M</Priority>
    </RestartOnFailure>

Odd – two values for Priority, one numeric, one text. So I went hunting in a file from a system that didn’t have that problem. I found these lines in the same place:

    <Priority>7</Priority>
    <RestartOnFailure>
      <Interval>PT1M</Interval>
      <Count>3</Count>
    </RestartOnFailure>

So, clearly something had written to the SvcRestartTask file with incorrect names for these elements. After correcting the names in my XML copy of the file, I reopened the Task Scheduler UI, navigated down to Microsoft / Windows / SoftwareProtectionPlatform, and imported the XML file there. [This is under “Actions”, but you can also right-click the SoftwareProtectionPlatform folder and select “Import”, then “Refresh”.]

Sadly, this wasn’t quite the end of things, because at this point the Task Scheduler UI failed to talk to the Task Scheduler service, and I couldn’t restart the Task Scheduler service directly either.

A restart of Windows took care of that, and sure enough, now that I’ve rebooted, I see no more of these 16385 errors from Security-SPP.

It’s just a shame it took so long to get this answer, and that the Microsoft-supplied answer in the forums is incomplete.

Oh, and of course, one last thing – what does SPP (Software Protection Platform) actually do?

Since this is an element of the Windows Genuine Advantage initiative, with the goal of preventing use of pirated copies of Windows, you might consider you don’t really need / want it around. Either way, you definitely don’t want it clearing your Application event log out every three weeks!

Training developers to write secure code

I’ve done a fair amount of developer training recently, and it seems like there are a number of different kinds of responses to my security message.

[You can safely assume that there’s also something that’s wrong with the message and the messenger, but I want to learn about the thing I likely can’t control or change – the supply of developers]

Here are some unfairly broad descriptions of stereotypes I’ve encountered along the way. The truth, as ever, is more nuanced, but I think if I can reach each of these target personas, I should have just about everyone covered.

Is there anyone I’ve missed?

The previous victim

I’m always happy to have one or more of these people in the room – the sort of developer who has some experience, and has been on a project that was attacked successfully at some point or another.

This kind of developer has likely learned quickly that even his own code is subject to attack – vulnerable and weak to the persistent probes of attackers. Perhaps his experience has also included examples of his own failures in more ordinary ways – mere bugs, with no particular security implications.

Usually, this will be an older developer, because experience is required – and his tales of terror, unrehearsed and true, can sometimes provide the “scared straight” lesson I try to deliver to my students.

The previous attacker

This guy is usually a smart, younger individual. He may have had some previous nefarious activity, or simply researched security issues by attacking systems he owns.

But for my purposes, this guy can be too clever, because he distracts from my talk of ‘least privilege’ and ‘defence in depth’ with questions about race conditions, side-channel attacks, sub-millisecond time deltas across multi-second latency routes, and the like. If those were the worst problems we see in this industry, I’d focus on them – but sadly, sites are still vulnerable to simple attacks, like my favourite – Reflected XSS in the Search field. [Simple exercise – watch a commercial break, and see how many of the sites advertised there have this vulnerability in them.]

But I like this guy for other reasons – he’s a possible future hire for my team, and a probable future assistant in finding, reporting and addressing vulnerabilities. Keeping this guy interested and engaged is key to making sure that he tells me about his findings, rather than sharing them with friends on the outside, or exploiting them himself.

“I did a security class at college”

Unbelievably to me, there are people who’ve “done a project on it”, and therefore know all they want to about security. If what I was about to tell them were important, they’d have been told it by their professor at college, because their professor knew everything of any importance.

I personally wonder if this is going to be the kind of SDE who will join us for a short while, and not progress – because the impression they give to me is that they’ve finished learning right before their last final exam.

Salaryman

Related to the previous category is the developer who only does what it takes to get paid and to receive a good performance review.

I think this is the developer I should work the hardest to try and reach, because this attitude lies at the heart of every developer on their worst days at their desk. When the passion wanes, or the task is uninteresting, the desire to keep your job, continue to get paid, and progress through your career while satisfying your boss is the grinding cog that keeps you moving forward like a wind-up toy.

This is why it is important to keep searching to find ways of measuring code quality, and rewarding people who exhibit it – larger rewards for consistent prolonged improvement, smaller but more frequent rewards to keep the attention of the developer who makes a quick improvement to even a small piece of code.

Sadly, this guy is in my class because his boss told him he ought to attend. So I tell him at the end of my class that he needs to report back to his boss the security lesson that he learned – that all of his development-related goals should have the adverb “securely” appended to them. So “develop feature X” becomes “develop feature X securely”. If that is the one change I can make to this developer’s goals, I believe it will make a difference.

Fanboy

I’ve been doing this for long enough that I see the same faces in the crowd over and over again. I know I used to be a fanboy myself, and so I’m aware that sometimes this is because these folks learn something new each time. That’s why I like to deliver a different talk each time, even if it’s on the same subject as a previous lesson.

Or maybe they just didn’t get it all last time, and need to hear it again to get a deeper understanding. Either way, repeat visitors are definitely welcome – but I won’t get anywhere if that’s all I get in my audience.

Vocational

Some developers do the development thing because they can’t NOT write code. If they were independently wealthy and could do whatever they wanted, they’d be behind a screen coding up some fun little app.

I like the ones with a calling to this job, because I believe I can give them enough passion in security to make it a part of their calling as well. [Yes, I feel I have a calling to do security – I want to save the world from bad code, and would do it if I was independently wealthy.]

Stereotypical / The Surgeon

Sadly, the hardest person to reach – harder even than the Salaryman – is the developer who matches the stereotypical perception of the developer mindset.

Convinced of his own superiority and cleverness, even if he doesn’t express it directly in such conceited terms, this person will see every suggested approach as beneath him, and every example of poor code as yet more proof of his own superiority.

“Sure, you’ve had problems with other developers making stupid security mistakes,” he’ll think to himself, “But I’m not that dumb. I’ve never written code that bad.”

I certainly hope you won’t ever write code as bad as the examples I give in my classes – those are errant samples of code written in haste, and which I wouldn’t include in my class if they didn’t clearly illustrate my point. But my point is that your colleagues – everyone around you – are going to write this bad a piece of code one day, and it is your job to find it. It is also their job to find it in the code you write, so either you had better be truly as good as you think you are, or you had better apply good security practices so they don’t find you at your worst coding moment.

Playing with security blogs

I’ve found a new weekend hobby – it takes only a few minutes, is easily interruptible, and reminds me that the state of web security is such that I will never be out of a job.

I open my favourite search engine (I’m partial to Bing, partly because I get points, but mostly because I’ve met the guys who built it), search for “security blog”, and then pick one at random.

Once I’m at the security blog site – often one I’ve never heard of, despite it being high up in the search results – I find the search box and throw a simple reflected XSS attack at it.

If that doesn’t work, I view the source code for the results page I got back, and use the information I see there to figure out what reflected XSS attack will work. Then I try that.

[Note: I use reflected XSS, because I know I can only hurt myself. I don’t play stored XSS or SQL injection games, which can easily cause actual damage at the server end, unless I have permission and I’m being paid.]

Finally, I try to find who I should contact about the exploitability of the site.

It’s interesting just how many of these sites are exploitable – some of them falling to the simplest of XSS attacks – and even more interesting to see how many sites don’t have a good, responsive contact address (or prefer simply not to engage with vuln discoverers).

So, what do you find?

I clearly wouldn’t dream of disclosing any of the vulnerabilities I’ve found until well after they’re fixed. Of course, after they’re fixed, I’m happy to see a mention that I’ve helped move the world forward a notch on some security scale. [Not sure why I’m not called out on the other version of that changelog.] I might allude to them on my twitter account, but not in any great detail.

From clicking the link, getting to a working exploit takes either under ten minutes or it doesn’t happen at all – and reporting generally takes another ten minutes or so, most of which is hunting for the right address. The longer portion of the game is helping some of these guys figure out what action needs to be taken to fix things.

Try using a WAF – NOT!

You can try using a WAF to solve your XSS problem, but then you’ve got two problems – a vulnerable web site, and a WAF whose settings you now have to manage. If you have a lot of spare time, you can use a WAF to shore up known-vulnerable fields and trap known attack strings. But it doesn’t ever really fix the problem.

Don’t echo my search query

If you can, don’t echo back to me what I sent you, because that’s how these attacks usually start. Don’t even include it in comments, because a good attack will just terminate the comment and start injecting HTML or script.
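For instance, if a hypothetical results page tucks the raw search term into a comment like this:

    <!-- search term: USER_INPUT -->

then a search for --><script>prompt(1)</script> turns that line into:

    <!-- search term: --><script>prompt(1)</script>

The comment closes early, and the script tag is live markup.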

Remove my strange characters

Unless you’re running a source code site, you probably don’t need me to search for angle brackets, or a number of other characters. So take them out of my search – or plainly reject the whole search if I include them.

Encode everything

OK, so you don’t have to encode the basics – what are the basics? I tend to start with alphabetic and numeric characters, maybe also a space. Encode everything else.

Which encoding?

Yeah, that’s always the hard part. Encode it using the right encoding. That’s the short version. The long version is that you figure out what’s going to decode it, and make sure you encode for every layer that will decode. If you’re putting my text into a web page as a part of the page’s content, HTML encode it. If it’s in an attribute string, quote the characters using HTML attribute encoding – and make sure you quote the entire attribute value! If it’s an attribute string that will be used as a URL, you should URL encode it. Then you can HTML encode it, just to be sure.

[Then, of course, check that your encoding hasn’t killed the basic function of the search box!]
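As a sketch of that “every layer that will decode” rule – the htmlEncode and htmlAttributeEncode helpers are invented names standing in for whatever your framework provides:

    // Building <a href="/search?q=TERM">TERM</a> from user input.
    var term = userInput;

    // The URL is decoded innermost (last), so URL-encode first...
    var urlSafe = encodeURIComponent(term);

    // ...then the HTML parser decodes the attribute, so
    // attribute-encode the whole (quoted!) href value.
    var href = htmlAttributeEncode("/search?q=" + urlSafe);

    // The link text passes only through the HTML parser,
    // so it needs just the one layer of HTML encoding.
    var html = '<a href="' + href + '">' + htmlEncode(term) + '</a>';

The rule of thumb: apply the encodings in the reverse of the order in which they will be decoded.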

Respond to security reports

You should definitely respond to security reports. I understand that not everyone can have a 24/7 response team watching their blog (I certainly don’t), but you should try to respond within a couple of days, and anything under a week is probably going to be alright. Some vuln discoverers are upset if they don’t get a response much sooner, and see that as cause to publish their findings.

Me, I send a message first to ask if I’ve found the right place to send a security vulnerability report to, and only when I receive a positive acknowledgement do I send on the actual details of the exploit.

Be like Billy – Mind your XSS Manners!

I’ve said before that I wish programmers would respond to reports of XSS as if I’d told them I’d caught them writing a bubble sort implementation in Cobol – full of embarrassment at being caught out as such a beginner.

Using URL anchors to enliven XSS exploits

I hope this is original; I certainly couldn’t find anything in a quick bit of research on “Internet Explorer”, “anchor” / “fragment id”, and “onfocus” / “focus”. [Click here for the TLDR version.]

Those of you who know me, or have been reading this blog for a while know that I have something of a soft spot for the XSS exploits (See here, here, here and here – oh, and here). One of the reasons I like them is that I can test sites without causing any actual damage to them – a reflected XSS that I launch on myself only really affects me. [Stored XSS, now that’s a different matter] And yet, the issues that XSS brings up are significant and severe.

A quick reminder

XSS issues are significant and severe because:

  • An attacker with a successful XSS is rewriting your web site on its way to the user
  • XSS attacks can be used to deliver the latest Java / ActiveX / Flash / Acrobat exploits
  • Stored XSS can affect all of your customers, and can turn your web site into a worm that infects all of your users, all of the time
  • A reflected XSS can be used to redirect your users to a competitor’s or attacker’s web site
  • A reflected or stored XSS attack can be used to void any CSRF protection you have in place
  • XSS vulnerability is usually a sign that you haven’t done the “fit and finish” checks in your security reviews
  • XSS vulnerability is an embarrassing rookie mistake, made often by seasoned developers

Make it “SEXY”

So, I enjoy reporting XSS issues to web sites and seeing how they fix them.

It’s been said I can’t pass a Search box on a web site without pasting in some kind of script and seeing whether I can exploit it.

So, the other day I decided for fun to go and search for “security blog” and pick some entries at random. The first result that came up – blog.schneier.com – seemed unlikely to yield any fruit, because, well, Bruce Schneier. I tried it anyway; the search box went to an external search engine, which looked pretty solid. No luck there.

A couple of others – and I shan’t say how far down the list, for obvious reasons – turned up trumps. Moderately simple injections into attributes in HTML tags on the search results page.

One only allowed me to inject script into an existing “onfocus” event handler, and the other one allowed me to create the usual array of “onmouseover”, “onclick”, “onerror”, etc handlers – and yes, “onfocus” as well.

I reported them to the right addresses, and got the same reply back each time – this is a “low severity” issue, because the user has to take some action, like wiggling the mouse over the box, clicking in it, etc.

Could I raise the severity, they asked, by making it something that required no user interaction at all, save for loading the link?

Could I make the attack more “sexy”?

Try something stupid

Whenever I’m faced with an intellectual challenge like that, I find that often a good approach is to simply try something stupid. Something so stupid that it can’t possibly work, but in failing it will at least give me insight into what might work.

I want to set the user’s focus to a field, so I want to do something a bit like “go to the field”. And the closest automatic thing that there is to “going to a field” in a URL is the anchor portion, or “fragment id” of the URL.

Anchor? What’s that in a URL?

You’ll have seen them, even if you haven’t really remarked on them very much. A URL consists of a number of parts:

protocol://address:port/path1/path2?query#anchor

The anchor is often called the “hash”, because it comes after the “hash” or “sharp” or “pound” (if you’re not British) character. [The query often consists of sets of paired keys and values, like “key1=value1&key2=value2”, etc]

The purpose of an anchor is to scroll the window to bring a specific portion to the top. So, you can give someone a link not just to a particular page, but to a portion of that page. It’s a really great idea. Usually an anchor in the URL takes you to a named anchor tag in the page – something that reads “<a name=foobar></a>” will, for instance, be scrolled to the top whenever you visit it with a URL that ends with “#foobar”.

[The W3C documentation only states that the anchor or fragment ID is used to “visit” the named tag. The word “visit” is never actually defined. Common behaviour is to load the page if it’s not already loaded, and to scroll the page to bring the visited element to the top.]

This anchor identifier in the URL is also known as a “fragment identifier”, because technically the anchor is the entire URL. That’s not the common usage, though.

XSS fans like myself are already friendly with the anchor identifier, because it has the remarkable property of never being sent to the server by the browser! This means that if your attack depends on something in the anchor identifier, you don’t stand much chance of being detected by the server administrators.

Sneaky.
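You can see this for yourself with a line of script – a trivial sketch:

    <script>
      // Visiting /page#exploitid - the fragment is right there
      // for any script running in the page...
      console.log(window.location.hash);   // "#exploitid"
      // ...but the browser sends only "GET /page" to the server,
      // so the fragment never appears in the server's logs.
    </script>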

The stupid thing

So, the stupid thing that I thought about was: “does this work for any name? And is it the same as setting focus?”

Sure enough, in the W3C documentation for HTML, here it is:

Destination anchors in HTML documents may be specified either by the A element (naming it with the name attribute), or by any other element (naming with the id attribute).

[From http://www.w3.org/TR/html4/struct/links.html#h-12.1]

So, that means any tag with an “id” attribute can be scrolled into view. This effectively applies to any element with a “name” attribute too, because:

This attribute [name] names the current anchor so that it may be the destination of another link. The value of this attribute must be a unique anchor name. The scope of this name is the current document. Note that this attribute shares the same name space as the id attribute. [my emphasis]

[From http://www.w3.org/TR/html4/struct/links.html#adef-name-A]

This is encouraging, because all those text boxes already have to have ids or names to work.

So, we can bring a text box to the top of the browser window by specifying its id or name attribute as a fragment.

That’s the first stupid thing checked off and working.

Bringing it into focus

But moving a named item to the top of the screen isn’t the same as selecting it, clicking on it, or otherwise giving it focus.

Or is it?

Testing in Firefox, Chrome and Safari suggested not.

Testing in Internet Explorer, on the other hand, demonstrated that versions as old as IE8, all the way through IE9 and IE10, cause the focus behaviour – including any “onfocus” handler – to trigger.

The TLDR version:

Internet Explorer has a behaviour different from other browsers which makes it easier to exploit a certain category of XSS vulnerabilities in web sites.

If you are attacking users of a vulnerable site that allows an attacker to inject code into an “onfocus” handler (new or existing), you can force visitors to trigger that “onfocus” event, simply by adding the id or name of the vulnerable HTML tag to the end of the URL as a fragment ID.

You can try it if you like – using the URL http://www.microsoft.com/en-us/default.aspx#ctl00_ctl16_ctl00_ctl00_q

OK, so you clicked it and it didn’t drop down the menu that normally comes when you click in the search field on Microsoft’s front page. That’s because the onfocus handler wasn’t loaded when the browser set the focus. Try reloading it.

You can obviously build any number of test pages to look at this behaviour:

<form>
  <input type="text" name="exploit" id="exploitid" onfocus="alert(1)" />
</form>

Loading that in Internet Explorer with a link to formpage.html#exploit or formpage.html#exploitid will pop up an ‘alert’ dialog box.

So, that’s a security flaw in IE, right?

No, I don’t think it is – I don’t know that it’s necessarily even a flaw.

The documentation I linked to above only talks about the destination anchor being used to “visit” a resource. It doesn’t even say that the named anchor should be brought into view in any way. [Experiment: what happens if the ID in the fragment identifier is a “type=hidden” input field?]

It doesn’t say you should set focus; it also doesn’t say you should not set focus. Setting focus may be simply the most convenient way that Internet Explorer has to bring the named element into view.

And the fact that it makes XSS exploits a little easier doesn’t make it a security flaw either – the site you’re visiting STILL has to have an XSS flaw on it somewhere.

Is it right to publish this?

Finally, the moral question has to be asked and answered.

I start by noting that if I can discover this, it’s likely a few dozen other people have discovered it too – and so far, they’re keeping it to themselves. That seems like the less-right behaviour, because now those people are going to be using this against sites that are unaware of it. Even if the XSS injection is detected by the web site through looking in their logs, those same logs will tell them that the injection requires a user action – setting focus to a field – and that there’s nothing causing that to happen, so it looks like a relatively minor issue.

Except it’s not as minor as that, because the portion of the URL that they CAN’T see is going to trigger the event handler that just got injected.

So I think the benefit far outweighs the risk – now defenders can know that an onfocus handler will be triggered by a fragment ID in a URL, and that the fragment ID will not appear in their log files, because it’s not sent to the server.

I’ve already contacted Microsoft’s Security team and had the response that they don’t think it’s a security problem. They’ve said they’ll put me in touch with the Internet Explorer team for their comments – and while I haven’t heard anything yet, I’ll update this blog when / if they do.

In general, I believe that the right thing to do with security issues is to engage in coordinated disclosure, because the developer or vendor is generally best suited to addressing specific flaws. In this case, the flaw is general, in that it’s every site that is already vulnerable to XSS or HTML injection that allows the creation or modification of an “onfocus” event handler. So I can’t coordinate.

The best I can do is communicate, and this is the best I know how.

That old “cyber offence” canard again

I’m putting this post in the “Programmer Hubris” section, but it’s really not the programmers this time, it’s the managers. And the lawyers, apparently.

Something set me off again

Well, yeah, it always does, and this time what set me off is an NPR article by Tom Gjelten in a series they’re currently doing on “cybersecurity”.

This article probably had a bunch of men talking back to NPR with expressions such as “hell, yeah!” and “it’s about time!”, or even the more balanced “well, the best defence is a good offence”.

Absolute rubbish. Pure codswallop.

But aren’t we being attacked? Shouldn’t we attack back?

Kind of, and no.

We’re certainly not being “attacked” in the means being described by analogy in the article.

"If you’re just standing up taking blows, the adversary will ultimately hit you hard enough that you fall to the ground and lose the match. You need to hit back." [says Dmitri Alperovitch, CrowdStrike’s co-founder.]

Yeah, except we’re not taking blows, and this isn’t boxing, and they’re not hitting us hard.

"What we need to do is get rid of the attackers and take away their tools and learn where their hideouts are and flush them out," [says Greg Hoglund, co-founder of HBGary, another firm known for being hacked by a bunch of anonymous nerds that he bragged about being all over]

That’s far closer to reality, but the people whose job it is to do that are the duly appointed law enforcement operatives, who are actually able to enforce the law.

"It’s [like] the government sees a missile heading for your company’s headquarters, and the government just yells, ‘Incoming!’ " Alperovitch says. "It’s doing nothing to prevent it, nothing to stop it [and] nothing to retaliate against the adversary." [says Alperovitch again]

No, it’s not really like that at all.

There is no missile. There is no boxer. There’s a guy sending you postcards.

What? Excuse me? Postcards?

Yep, pretty much exactly that.

Every packet that comes at you from the Internet is much like a postcard. It’s got a from address (of sorts) and a to address, and all the information inside the packet is readable. [That’s why encryption is applied to all your important transactions]

So how am I under attack?

There’s a number of ways. You might be receiving far more postcards than you can legitimately handle, making it really difficult to assess which are the good postcards, and which are the bad ones. So, you contact the postman, and let him know this, and he tracks down (with the aid of the postal inspectors) who’s sending them, and stops carrying those postcards to you. In the meantime, you learn how to spot the obvious crappy postcards and throw them away – and when you use a machine to do this, it’s a lot less of a problem. That’s a denial of service attack.

Then there’s an attack against your web site. That equates, pretty much, to the postcard sender learning that there’s someone reading the postcards, whose job it is to do what the postcards tell them to do. So he sends postcards that say “punch the nearest person to you really hard in the face”. Obviously, a few successes of this sort lead you to fire the idiot who’s punching his co-workers, and instead train the next guy as to which jobs he’s supposed to do on behalf of the postcard senders.

I’m sure that my smart readers can think up their own postcard-based analogies of other attacks that go on, now that you’ve seen these two examples.

Well if it’s just postcards, why don’t I send some of my own?

Sure, send postcards, but unless you want the postman to be discarding all your outgoing mail, or the law enforcement types to turn up at your doorstep, those postcards had better not be harassing or inappropriate.

Even if you think you’re limiting your behaviour to that which the postman won’t notice as abusive, there’s the other issue with postcards. There’s no guarantee that they were sent from the address stated, and even if they were sent from there, there is no reason to believe that they were official communications.

All it takes is for some hacker to launch an attack from a hospital’s network space, and you’re now responsible for attacking an innocent target where lives could actually be at risk. [Sure, if that were the case, the hospital has shocking security issues of its own, but can you live with that rationalisation if your response to someone attacking your site winds up killing someone?]

I don’t think that counterattack on the Internet is ethical or appropriate.