Alun’s code – Tales from the Crypto

How FTP Data Connections Work Part 2 (OR: Fun With Port 20)

As we mentioned in the first part of this series, FTP is a more complex protocol than many, using one control connection and one data connection.

A recap of the first post…

In typical Stream Mode operation, a new data connection is opened and closed for each data transfer, whether that’s an upload, a download, or a directory listing. To avoid confusion between different data connections, and as a recognition of the fact that networks may have old packets shuttling around for some time, these connections need to be distinguishable from one another.

In the previous article, we noted that two network sockets are distinguished by the five elements of “Local Address”, “Local Port”, “Protocol”, “Remote Address”, and “Remote Port”. For a data connection associated with any particular request, the local and remote addresses are fixed, as the addresses of the client and server. The protocol is TCP, and only the two ports are variable.

For a PASV, or passive data connection, the client-side port is chosen randomly by the client, and the server-side port is similarly chosen randomly by the server. The client connects to the server.

For a PORT, or active data connection, the client-side port is chosen randomly by the client, and the server-side port is set to port 20. The server connects to the client.

All of these work through firewalls and NAT routers, because firewalls and NAT routers contain an Application Layer Gateway (ALG) that watches for PORT and PASV commands, rewrites the addresses and ports they carry (in the case of a NAT), and/or uses the values provided to open up a firewall hole.

Isn’t there a totally predictable data connection?

For the default data connection (what happens if no PORT or PASV command is sent before the first data transfer command), the client-side port is predictable (it’s the same as the source port the client used when connecting the control channel), and the server-side port is 20. Again, the server connects to the client.
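You can watch the default data connection take shape with a quick loopback sketch in Python. This is an illustration, not real FTP: port 2121 stands in for port 21, and the “server” dials back from port 2120 (that is, L-1) to the client’s control source port U. The fiddly part is that the client has to listen for data on the same local port its control connection came from, which is why the sockets here set SO_REUSEADDR:

```python
import socket
import threading

CONTROL_PORT = 2121  # stand-in for FTP's port 21 in this loopback demo

def server():
    # Accept a "control" connection, then dial back to the client's
    # source port from CONTROL_PORT - 1, like a default data connection.
    ctrl = socket.socket()
    ctrl.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    ctrl.bind(("127.0.0.1", CONTROL_PORT))
    ctrl.listen(1)
    conn, (client_ip, client_port) = ctrl.accept()
    conn.recv(16)  # wait until the client says it is listening
    data = socket.socket()
    data.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    data.bind(("127.0.0.1", CONTROL_PORT - 1))  # source port L-1
    data.connect((client_ip, client_port))      # destination port U
    data.sendall(b"pretend directory listing")
    data.close()
    conn.close()
    ctrl.close()

t = threading.Thread(target=server)
t.start()

# Client: open the control connection from an ephemeral port U...
ctrl = socket.socket()
ctrl.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
ctrl.bind(("127.0.0.1", 0))
ctrl.connect(("127.0.0.1", CONTROL_PORT))
U = ctrl.getsockname()[1]

# ...then listen for the data connection on that same port U.
listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", U))
listener.listen(1)
ctrl.sendall(b"ready")

data, (peer_ip, peer_port) = listener.accept()
msg = data.recv(64)
t.join()
print("server connected out from port", peer_port)  # CONTROL_PORT - 1
print(msg)
```

Real FTP clients use exactly this re-binding trick, which is one reason the default data connection is less well-trodden code than PORT and PASV.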

Because firewalls and NATs open up a ‘reverse’ hole for TCP sockets, the default data port works with firewalls and NATs that aren’t running an ALG, or whose ALG cannot scan for PORT and PASV commands.

Why would an ALG stop scanning for PORT and PASV commands?

There are a couple of reasons – the first is that it doesn’t know that the service connected to is running the FTP protocol. This is common if the server is running on a port other than the usual port 21.

The second reason is that the FTP control connection doesn’t look like it contains FTP commands – usually because the connection is encrypted. This can happen because you’re tunneling the FTP control connection through an encrypted tunnel such as SSH (don’t laugh – it does happen!), or hopefully it’s because you’re running FTP over SSL, so that the control and data connections can be encrypted, and you can authenticate the identity of the FTP server.

So how do you get FTP over SSL to work through a firewall?

In the words of Deep Thought: “Hmm… tricky”.

There are a couple of classic solutions:

  1. Allow PASV data connections, select a wide range of ports, and open that range for incoming traffic from all external addresses in your firewall configuration; hope that your FTP server can be configured to use only that range of ports (WFTPD Pro can), and that it has protections against traffic stealing attacks (again, WFTPD Pro has). Still, this option seems really risky.
  2. Block all PASV connections, and make the clients responsible for opening up holes in their firewalls. If you’re convinced the risk is too great to do this on your server, how does it look to convince your users that they should accept that risk?
  3. After you’ve authenticated the server and provided your username and password in the encrypted control connection, issue the “CCC” (Clear Control Channel) command, to switch the control connection back into clear-text. I dislike this as a solution, because it requires the ALG pay attention to a lot of SSL traffic in the hope that there might be clear-text coming up, and because you may want the control channel to remain encrypted.

Awright, clever clogs, you solve the problem.

The astute reader can probably see where I’m going with this.

The default data port is predictable – if the client connects from port U to port L at the server (L is usually 21), then the default data port will be opened from port L-1 at the server to port U at the client.

The default data port doesn’t need the firewall to do anything other than allow reverse connections back along the port that initiated the connection. You don’t need to open huge ranges at the server’s firewall (in fact you should be able to simply open port 21 inbound to your server).

The default data port is required to be supported by FTP servers going back a long way, at least a couple of decades. Yes, really, that long.

If it’s that simple, why isn’t everyone doing it?

Good point, that, and a great sentence to use whenever you wish to halt innovation in its tracks.

Okay, it’s obvious that there are some drawbacks:

  • In stream mode, the data transfer is ended by closing the stream. This means that you have to open a new control connection. Not good, given the number of round-trips you need for a logon, and the work needed to start an SSL connection.
  • Most FTP clients view the default data connection as, at best, a fail-over in case the PORT or PASV commands fail to work. Obviously, that means it’s not likely to be a well-tested or favoured solution on these clients.

Even with those drawbacks, there are still further solutions to apply – the first being to use Block-mode instead of Stream-mode. In Stream-mode, each data transfer requires opening and closing the data connection; in Block-mode, which is a little like HTTP’s chunked mode, blocks of data are sent, and followed by an “EOF” marker (End of File), so that the data connection doesn’t need to be closed. If you can convince your FTP client to request Block-mode with the default data connection, and your FTP server supports it (WFTPD Pro has done so for several years), you can achieve FTP over SSL through NATs and firewalls simply by opening port 21.

For the second problem, it’s worth noting that many FTP client authors implemented default data connections out of a sense of robustness, so default data connections will often work if you can convince the PORT and PASV commands to fail – by, for instance, putting restrictive firewalls or NATs in the way, or perhaps by preventing the FTP server from accepting PORT or PASV commands in some way.

Clearly, since Microsoft’s IIS 7.5 downloadable FTP Server supports FTPS in block mode with the default data port, there has been some consideration given to my whispers to them that this could solve the FTP over SSL through firewall problem.

Other than my own WFTPD Explorer, I am not aware of any particular clients that support the explicit use of FTP over SSL with Block-mode on the default data connection – I’d love to hear of your experiments with this mode of operation, to see if it works as well for you as it does for me.

How FTP Data Connections Work Part 1 (OR: Don’t Open Port 20 in your Firewall!)

This will be the first of a couple of articles on FTP, as I’ve been asked to post this information in an easy-to-read format in a public place where it can be referred to. I think my expertise in developing and supporting WFTPD and WFTPD Pro allow me to be reliable on this topic. Oh, that and the fact that I’ve contributed to a number of RFCs on the subject.

Enough TCP to be dangerous

First, a quick refresher on TCP – every TCP connection can be thought of as being associated with a “socket” at each device along the way – from one computer, through routers, to the other computer. The socket is identified by five individual items – the local IP address, the local port, the remote IP address, the remote port, and the protocol (in this case, the protocol is TCP).
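If you want to see those five values for a live connection, here is a quick loopback sketch in Python; note that the client’s view and the server’s view of the same connection are mirror images of each other:

```python
import socket

# Build a throwaway loopback connection, just to inspect the five values
# (local address, local port, remote address, remote port, protocol).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0 asks the OS to pick a free port
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# The client's view of the connection...
print(cli.getsockname(), cli.getpeername(), "TCP")
# ...and the server's view: the same socket pair, mirrored.
print(conn.getsockname(), conn.getpeername(), "TCP")
```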

Firewalls are essentially a special kind of router, with rules not only for how to forward data, but also rules on connection requests to drop or allow. Once a connection request is allowed, the entire flow of traffic associated with that connection request is allowed, also – any traffic flow not associated with a previously allowed connection request is discarded.

When you set up a firewall to allow access to a server, you have to consider the first segment – the “SYN”, or connection request from the TCP client to the TCP server. The rule can refer to any data that would identify the socket to be created, such as “allow any connection request where the source IP address is 10.1.1.something, and the destination port is 54321”.

Typically, an external-facing firewall will allow all outbound connections, and have rules only for inbound connections. As a result, firewall administrators are used to saying things like “to enable access to the web server, simply open port 80”, whereas what they truly mean is to add a rule that applies to incoming TCP connection requests whose source address and source port could be anything, but whose destination port is 80, and whose destination address is that of the web server. This is usually written in some shorthand, such as “allow tcp 0.0.0.0:0 server:80”, where “0.0.0.0” stands for “any address” and “:0” stands for “any port”.

Firewall rules for FTP

For an FTP server, firewall rules are known to be a little trickier than for most other servers.

Sure, you can set up the rule “allow tcp 0.0.0.0:0 server:21”, because the default port for the control connection of FTP is 21. That only allows the control connection, though.

What other connections are there?

In the default transfer mode of “Stream”, every file transfer gets its own data connection. Of course, it’d be lovely if this data connection was made on port 21 as well, but that’s not the way the protocol was built. Instead, Stream mode data connections are opened either as “Active” or “Passive” connections.

Active and Passive Data Connections

The terms “Active” and “Passive” describe the FTP server’s role in making the data connection. The choice of connection method is initiated by the client, although the server can choose to refuse whatever the client asked for, at which point the client should fail over to using the other method.

In the Active method, the FTP server connects to the client (the server is the “active” participant, the client just lies back and thinks of England), on a random port chosen by the client. Obviously, that will work if the client’s firewall is configured to allow the connection to that port, and doesn’t depend on the firewall at the server to do anything but allow connections outbound. The Active method is chosen by the client sending a “PORT” command, containing the IP address and port to which the server should connect.

In the Passive method, the FTP client connects to the server (the server is now the “passive” participant), on a random port chosen by the server. This requires the server’s firewall to allow the incoming connection, and depends on the client’s firewall only to allow outbound connections. The Passive method is chosen by the client sending a “PASV” command, to which the server responds with a message containing the IP address and port at the server that the client should connect to.
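The addresses and ports in these commands travel as six comma-separated byte values, with the port split into a high byte and a low byte. As a sketch (the reply text is modelled on RFC 959’s examples), here is how a client might pick apart a PASV response, and how it would format a PORT command:

```python
import re

def parse_pasv_reply(reply):
    """Pull the server address and port out of a PASV reply such as
    '227 Entering Passive Mode (192,168,0,1,19,137)'.
    The port travels as two bytes: high * 256 + low."""
    h1, h2, h3, h4, p_hi, p_lo = map(
        int, re.search(r"\(([\d,]+)\)", reply).group(1).split(","))
    return "%d.%d.%d.%d" % (h1, h2, h3, h4), p_hi * 256 + p_lo

def build_port_command(ip, port):
    """Format the PORT command a client sends for an Active connection."""
    return "PORT %s,%d,%d" % (ip.replace(".", ","), port // 256, port % 256)

addr, port = parse_pasv_reply("227 Entering Passive Mode (192,168,0,1,19,137)")
print(addr, port)                      # 192.168.0.1 5001
print(build_port_command(addr, port))  # PORT 192,168,0,1,19,137
```

It is exactly these six numbers that an ALG looks for and, in the case of a NAT, rewrites.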

The ALG comes to the rescue!

So in theory, your firewall now needs to know what ports are going to be requested by the PORT and PASV commands. For some situations, this is true, and you need to consider this – we’ll talk about that in part 2. For now, let’s assume everything is “normal”, and talk about how the firewall helps the FTP user or administrator.

If you use port 21 for your FTP server, and the firewall is able to read the control connection, just about every firewall in existence will recognise the PORT and PASV commands, and open up the appropriate holes. This is because those firewalls have an Application Layer Gateway, or ALG, which monitors port 21 traffic for FTP commands, and opens up the appropriate holes in the firewall. We’ve discussed the FTP ALG in the Windows Vista firewall before.

So why port 20?

Where does port 20 come in? A rather simplistic view is that administrators read the “Services” file, and see the line that tells them that port 20 is “ftp-data”. They assume that this means that opening port 20 as a destination port on the firewall will allow FTP data connections to flow. By the “elephant repellent” theory, this is proved “true” when their firewalls allow FTP data connections after they open ports 21 and 20. Nobody bothers to check that it also works if they only open port 21, because of the ALG.

OK, so if port 20 isn’t needed, why is it associated with “ftp-data”? For that, you’ll have to remember what I said early on in the article – that every socket has five values associated with it – two addresses, two ports, and a protocol. When the data connection is made from the server to the client (remember, that’s an Active data connection, in response to a PORT command), the source port at the server is port 20. It’s totally that simple, and since nobody makes firewall rules that look at source port values, it’s relatively unimportant. That “ftp-data” in the Services file is simply so that the output from “netstat” shows a meaningful service name, instead of just “20”, as the source port.
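You can check that mapping yourself through the services database (assuming your system ships the standard entries):

```python
import socket

# "ftp-data" in the services database is just a friendly label for port 20,
# which is how netstat names the source port of Active data connections.
print(socket.getservbyname("ftp-data", "tcp"))  # 20
print(socket.getservbyname("ftp", "tcp"))       # 21
print(socket.getservbyport(20, "tcp"))          # ftp-data
```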

Coming up in part 2…

Next time, we’ll expand on this topic, to go into the inability of the ALG to process encrypted FTP control traffic, and the resultant issues and solutions that face encrypted FTP.

The CWE Top 25 Programming Mistakes

I’ve read some debate about the top 25 programming mistakes as documented by the CWE (Common Weakness Enumeration) project, in collaboration with the SANS Institute and MITRE: that the list isn’t complete, that there are some items that aren’t in the list but should be, or vice versa.

I think we should look at the CWE top-25 as something like the PCI Data Security Standard – it’s not the be-all and end-all of security, it’s not universally applicable, it’s not even a “gold standard”. It’s just the very bare minimum that you should be paying attention to, if you’ve got nowhere else to start in securing your application.

As noted by the SANS Institute, the top 25 list will allow schools and colleges to more confidently teach secure development as a part of their classes.

I personally would like to see a more rigorous taxonomy, although in this field, it’s really hard to do that, because in large part it’s a field that feeds off publicity – and you just can’t get publicity when you use phrases like “rigorous taxonomy”. Here’s my take on the top 25 mistakes, in the order presented:

Insecure Interaction Between Components

“These weaknesses are related to insecure ways in which data is sent and received between separate components, modules, programs, processes, threads, or systems.”

  • CWE-20: Improper Input Validation
    • What’s proper input validation? Consider the thought that there is no input, no output, only throughput. A string is received at the browser, and turned into a byte encoding; this byte encoding is sent to the web server, and possibly re-encoded, before being held in storage, or passed to a processing unit. For every input, there is an output, even if it’s only to local in-memory storage.
    • Validating the input portion falls broadly into two categories – validating for length, and validating for content. Validating for length seems simple – is it longer than the output medium is expecting? You should, however, check your assumptions about an encoding – sometimes encodings will add, and sometimes they will remove, counts of the members of the sequence – and sometimes they may do both.
    • Validating for content can similarly be broken into two groups – validating for correctness against the encoding expected, and then validating for content as to “business logic” (have you supplied a telephone number with a square-root sign or an apostrophe in it, say). Decide whether to strip invalid codes, or simply to reject the entire transaction. Usually, it is best (safest) to reject the entire transaction.
  • CWE-116: Improper Encoding or Escaping of Output
    • The other part of “throughput validation” – and while we constantly tell programmers that they should refuse to trust input, that should not be held as an excuse to produce untrustworthy output. There are many times when your code is trusted to produce good quality output. Some examples:
      • When you write a web application visited by a user, that user trusts you not to forward other people’s code on to them. Just your own, and that of your business partners. [See Cross-Site Scripting, below]
      • When your application is used internally [See SQL Injection, below]
    • Be conservative in what you send – make sure it rigorously follows whatever protocol or design-time contract has been agreed to. And above all, when sending data that isn’t code, make sure to encode it so that it can’t be read as code!
  • CWE-89: Failure to Preserve SQL Query Structure (aka ‘SQL Injection’)
    • SQL Injection is a throughput validation issue. In its essence, it involves an attacker who feeds SQL command codes into an interface, and that interface passes them on to a SQL database server.
    • This is almost an inexcusable error, as it is relatively easy to fix. The fix is usually hampered somewhat in that the SQL database server is required to trust the web server interface code, but that means only that the web server interface code must either encode, or remove, elements of the data that is being passed in the SQL command sequence being sent to the server. The most reliable way to do this is to use parameterised queries or stored procedures. Avoid building SQL commands through concatenation at almost any cost.
  • CWE-79: Failure to Preserve Web Page Structure (aka ‘Cross-site Scripting’)
    • I hate the term “cross-site scripting”. It’s far easier to understand if you just call it “HTML injection”. Like SQL injection, it’s about an attacker injecting HTML code into a web page (or other HTML page) by including it as data, in such a way that it is provided to the user as code.
    • Again, this is a throughput content validation issue: anything that came in as data and needs to go out as part of an HTML page should be HTML encoded, ideally so that only the alphanumerics are unencoded.
  • CWE-78: Failure to Preserve OS Command Structure (aka ‘OS Command Injection’)
    • Like SQL injection, this is about generating code and including data. Don’t use your data as part of the generation of code.
    • There are many ways to fix this kind of an issue – my favourite is to save the data to a file, and make the code read the file. Don’t derive the name or location of the file from the user-supplied data.
  • CWE-319: Cleartext Transmission of Sensitive Information
    • What’s sensitive information? You decide, based on an analysis of the data you hold, and a reading of appropriate laws and contractual regulations. For example, with PCI DSS, sensitive information would include the credit card number, magnetic track data, and personal information included with that data. Depending on your state, personal contact information is generally sensitive, and you may also decide that certain business information is also sensitive.
    • Seriously, SSL and IPsec are not significant performance drains – if your system is already so overburdened that it cannot handle the overhead of encrypting sensitive data, you are ALREADY too slow, and only providence has saved you from problems.
    • Especially where the data is not your own, make an informed decision as to whether you will be communicating in clear text.
  • CWE-352: Cross-Site Request Forgery (CSRF)
    • Another confusing term, CSRF refers to the ability of one web page to send you HTML code that your browser will execute against another web page. This really is cross-site, and forges requests that look to come from the user, but really come from a web page being viewed in the user’s browser.
    • The fix for this is that every time you display a form (or even a solitary button, if that button’s effects should be unforgeable), you should include a hidden value that contains a random number. Then, when the “submit” (or equivalent) button is pressed, this hidden value will be sent back with the other contents of the form. Your server must, of course, validate this number is correct, and must not allow the number to be long-lived, or be used a second time. A simple fix, but one that you have to apply to each form.
    • This really falls under a category of guaranteeing that you are talking to the user (or the user’s trusted agent), and not someone pretending to be the user. Related to non-repudiation.
  • CWE-362: Race Condition
    • Race conditions refer to any situation in which the execution of two parallel threads or processes behaves differently when the order of execution is altered. If I tell my wife and son to go get a bowl and some flour, and to pour the flour into the bowl, there’s going to be a mess if my wife doesn’t get the bowl as quickly as my son gets the flour. Similarly, programs are full of occasions where a precedence is expected or assumed by the designer or programmer, but where that precedence is not guaranteed by the system.
    • There are books written on the topic of thread synchronisation and resource locking, so I won’t attempt to address fixing this class of issues.
  • CWE-209: Error Message Information Leak
    • Be helpful, but not too helpful. Give the user enough information to fix his side of the error, but not so much that he has the ability to learn sensitive information from the error message.
    • “Incorrect user name or password” is so much better than “Incorrect password for that user name”.
    • “Internal error, please call technical support, or wait a few minutes and try again” is better than “Buffer length exceeded at line 543 in file c:\dev\web\creditapp\cardcruncher.c”
    • Internal information like that should be logged in a file that is accessible to you when fixing your system, but not accessible to the general end users.
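To make the SQL injection point above concrete, here is a small sketch using Python’s built-in sqlite3 module as a stand-in for a real database server: the same attacker-supplied string is harmless as a bound parameter and disastrous when concatenated into the query text.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attack = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: concatenation lets the payload rewrite the query's structure,
# so the WHERE clause matches every row.
vulnerable = db.execute(
    "SELECT secret FROM users WHERE name = '%s'" % attack).fetchall()

# Safe: a parameterised query treats the payload as one opaque string value.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (attack,)).fetchall()

print(vulnerable)  # [('hunter2',)] -- the attacker reads every secret
print(safe)        # [] -- nobody is literally named "nobody' OR '1'='1"
```

The same discipline (keep data out of the code channel) is the fix for the HTML injection and OS command injection items as well.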
Risky Resource Management

“The weaknesses in this category are related to ways in which software does not properly manage the creation, usage, transfer, or destruction of important system resources.”

  • CWE-119: Failure to Constrain Operations within the Bounds of a Memory Buffer
    • The old “buffer overflow” – a throughput length validation issue.  Any time you take data from one source and place it into another destination, you have to reliably predict whether the destination is large enough to hold it, and you also have to decide what you will do if it is not.
    • Don’t rely solely on .NET or Java “protecting you from buffer overruns” – when you try to access an element outside of a buffer’s limits, they will simply throw an exception, crashing your program dead in its tracks. This in itself could cause half-complete files or other communications, which could feed into and damage other processes. [And simply catching all exceptions and continuing blindly is something I’ve complained about before]
  • CWE-642: External Control of Critical State Data
    • By “Critical State Data”, this refers to information about where in the processing your user is. The obvious example of bad external control of critical state data is sending the price to the user, and then reading it back from the user. It obviously isn’t too hard for an attacker to simply modify the value before sending it to the server.
    • Other examples of poorly chosen state being passed include the use of customer ID numbers in URLs, in such a way that it is obvious how to select a different customer’s number.
    • State data such as this should generally be held at the server, and a ‘reference’ value exchanged to allow the server to regain state when a user responds. If these reference values are spread sparsely enough across users, it’s close to impossible for an attacker to steal someone else’s state.
  • CWE-73: External Control of File Name or Path
    • This is related to forced-browsing, path-traversal, and other attacks. The idea is that any time you have external paths (such as URLs) with a direct 1:1 relationship to internal paths (directories and paths), it is usually possible to pass path control from the external representation into the internal representation.
    • Make sure that all files requested can only come from a known set of files; disable path representations (such as “..”, for ‘parent directory’) that your code doesn’t actually make use of.
    • Instead of trying to parse the strings yourself to guess what file name the operating system will use, always use the operating system to tell you what file name it’s going to access. Where possible, open the file and then query the handle to see what file it really represents.
  • CWE-426: Untrusted Search Path
    • Windows’ LoadLibrary is the classic example of this flaw in design – although the implicit inclusion of the current directory in the PATH that Windows searches when executing programs is another.
    • When writing programs, you can only trust the code that you load or call if you can verify where you are loading or calling it from.
    • A favourite trick at college was to place ‘.’ at the front of your path, add a malicious shell file called ‘rm’, and invite a system administrator to show you how to kill a print job. The “lprm” command he’d run would call “rm”, and would run the local version, rather than the real command. Bingo, instant credentials!
    • Don’t search for code that you trust – know where it is, and if it isn’t there, fail.
  • CWE-94: Failure to Control Generation of Code (aka ‘Code Injection’)
    • I find it hard to imagine the situation that makes it safe to generate code in any way based off user input.
    • Perhaps you could argue that this is what you do when you generate HTML that contains, as part of its display, user input. OK then, the answer here is to properly encode that which you embed, so that the code processor cannot become confused as to what is code and what is data.
  • CWE-494: Download of Code Without Integrity Check
    • Either review the code that you download, or insist that it is digitally signed by a party with whom you have contracted for that purpose. Otherwise you don’t know what you are downloading or what you are executing.
  • CWE-404: Improper Resource Shutdown or Release
    • This covers a large range of issues:
      • Don’t “double-free” resources. Make sure you meticulously enforce one free / delete for every allocation you make. Otherwise, you wind up releasing a resource that you wanted to hang onto, or you may crash your program.
      • If the memory you’re about to release (or file you’re about to close) contained sensitive information, make sure it is wiped before release. Verify in the release build that the optimiser hasn’t optimised away this wiping!
      • Make sure you release resources when they are no longer in use, so that there are no memory leaks or other resource overuse problems that will lead to your application becoming bloated and fragile.
  • CWE-665: Improper Initialization
    • Lazy languages like JavaScript, where a mistyped name becomes an instant variable assignment, should be avoided.
    • Define all variables’ types – no “IMPLICIT INTEGER*4 (I-N)” (Am I showing my age?)
    • Put something into your variables, so that you know what’s there. Don’t rely on the compiler unless the compiler is documented to guarantee initialisation.
    • By “variable”, I mean anything that might act like a variable – stretches of memory, file contents, etc.
  • CWE-682: Incorrect Calculation
    • Again, a multitude of sins:
      • “should have used sin, but we actually used cos”
      • divide by zero – or some similar operation – that causes the program to halt
      • length validation / numeric overflow – in a single byte, 128 + 128 = 0
    • As you can see, a denial of service can definitely occur, as can remote execution (usually a result of calculating too short a buffer, as a result of numeric overflow, and then overflowing the buffer itself)
    • Don’t underestimate the possible results of just plain getting the answer wrong – cryptographic implementations have been brought to their knees (and resulted in approving untrustworthy access) because they couldn’t add up properly.
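Here is a small sketch of that single-byte sum (Python’s integers don’t wrap, so the 8-bit truncation has to be modelled explicitly), together with the kind of check a length calculation should make before any buffer is sized from it. The names `u8` and `safe_total` are mine, made up for the illustration:

```python
# Python's integers don't wrap, so model an 8-bit length field explicitly.
def u8(n):
    return n & 0xFF

# Two attacker-supplied chunk lengths that each fit in a byte...
a, b = 128, 128
total = u8(a + b)
print(total)  # 0 -- the "total length" wrapped around to nothing

# A naive allocator trusting `total` would size a zero-byte buffer and then
# copy 256 bytes into it. Check the untruncated sum before it is narrowed:
def safe_total(a, b, limit=255):
    if a + b > limit:
        raise ValueError("length calculation would overflow")
    return a + b

print(safe_total(100, 100))  # 200
```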
Porous Defenses

“The weaknesses in this category are related to defensive techniques that are often misused, abused, or just plain ignored.”

  • CWE-285: Improper Access Control (Authorization)
    • This one pretty much speaks for itself. There are public parts of your application, and there are non-public parts. Make sure that you have to provide authentication before crossing that boundary, and make sure that the user account verified in authentication is the one that’s used for authorisation to access resources.
    • Carry user authentication information around carefully, without letting it be exposed to other forms of attack, while also making sure that the information is available the next time you need to authorise access to resources.
  • CWE-327: Use of a Broken or Risky Cryptographic Algorithm
    • Translation – get a crypto expert to manage your crypto. [Note – this is why I recommend using CryptoAPI rather than OpenSSL, because you have to be your own expert to use OpenSSL.]
    • New algorithms arise, and old ones become obsolete. In the case of cryptographic algorithms, obsolete means “no longer effectively cryptographic”. In other words, if you use an old algorithm, or a broken algorithm, or don’t use an existing algorithm the right way, your data isn’t as protected as you thought it was.
    • Where possible, use a cryptographic framework such as SSL, where the choice of cryptographic algorithms available can be adjusted over time to deal with changing realities.
  • CWE-259: Hard-Coded Password
    • If there’s a hard-coded password, it will be discovered. And when discovered, it will be disseminated, and then you have to figure out how to get the message out to all of your users that they can now be owned because of your application. Not an easy conversation to have, at a guess.
    • This is a “just don’t do it” recommendation, not a “do it this way” or “do it that way”.
  • CWE-732: Insecure Permission Assignment for Critical Resource
    • If a low-privilege user can lock, or corrupt, a resource that is required for high-importance transactions, you’ve created an easy denial-of-service.
    • If a low-privilege user can modify something that is used as a basis for trust assignments, there’s an elevation of privilege attack.
    • And if a low-privilege user can write to your code base, you’re owned.
  • CWE-330: Use of Insufficiently Random Values
    • Give me a random number. 7. Give me another random number. 7. And another? 7.
    • How do you tell if a number is random enough? You hire a mathematician to do a statistical analysis to see if the next number is predictable if you know any or all of the previous numbers.
    • This mostly ties into CWE-327, don’t do your own crypto if you’re not a crypto expert (and by the way, you’re not a crypto expert). However, if you’re hosting a poker web site, it’s pretty important to be able to shuffle cards in an unpredictable manner!
    • Remember that the recent Kaminsky DNS attack, as well as the MD5 collision issues, could have been avoided entirely by the use of unpredictable numbers.
  • CWE-250: Execution with Unnecessary Privileges
    • Define “unnecessary”? No, define “necessary”. That which is required to do the job. Start your development and testing process as a restricted user. When you run into a function that fails because of lack of privileges, ask yourself “is this because I need this privilege, or can I continue without?”
    • Too many applications have been written that ask for “All” access to a file, when they only need “Read”.
    • Too many applications demand administrator access when they don’t really need it. I’m talking to you, Sansa Media Converter.
  • CWE-602: Client-Side Enforcement of Server-Side Security
    • I’ve seen this one hundreds of times. “We prompt the user for their birth date, and we reject invalid day numbers”; “Where do you reject those?”; “In the user interface so it’s nice and quick”. Great, so I can go in and make a copy of your web page, delete the checks, and input any number I like. Don’t consider it impossible that an attacker has written his own copy of the web browser, or can interfere with the information passing through the network.

What’s missing?

Glaringly absent, as usual, is any mention of logging or auditing.

Protections will fail, always, or they will be evaded. When this happens, it’s vital to have some idea of what might have happened – that’s impossible if you’re not logging information, if your logs are wiped over, or if you simply can’t trust the information in your logs.

Maybe I say this because my own “2ndAuth” tool is designed to add useful auditing around shared accounts that are traditionally untraceable – or maybe it’s the other way around, that I wrote 2ndAuth, because I couldn’t deal with the fact that shared accounts are essentially unaudited without it?

Of course, that leads to other subtleties – the logs should not provide interesting information to an attacker, for instance, and you can achieve this either by secreting them away (which makes them less handy), or by limiting the information in the logs (which makes them less useful).

Another missing issue is that of writing software to serve the user (all users) – and not to frustrate the attacker. [Some software reverses the two, frustrating the user and serving the attacker.] We developers are all trained to write code that does stuff – we don’t tend to get a lot of instruction on how to write code that doesn’t do stuff.

Another mistake, though it isn’t a coding mistake as such, is the absence of code review. You really can’t find all issues with code review alone, or with code analysis tools alone, or with testing alone, or with penetration testing alone, etc. You have to do as many of them as you can afford, and if you can’t afford enough to protect your application, perhaps there are other applications you’d be better off producing.

Other mistakes that I’d like to face head-on? Trusting the ‘silver bullet’ promises of languages and frameworks that protect you; releasing prototypes as production, or using prototype languages (hello, Perl, PHP!) to develop production software; feature creep; design by coding (the design is whatever you can get the code to do); undocumented deployment; fear/lack of dead code removal (“someone might be using that”); deploy first, secure later; lack of security training.

FAQ on 2nd Auth

I’ve already received a number of questions about my secondary authentication tool, 2ndAuth. Here’s a few answers:

  • You only show it working for Windows Server 2003 and Windows XP – does it work on other platforms?
    Currently, we only support using it for Windows Server 2003 and Windows XP, although it’s possible that it might work in Windows 2000 Server. The technique used certainly won’t work in Windows Vista or Windows Server 2008, but I have plans to make a different version of the same idea to work there.
  • Is this a custom GINA? Does it work with other custom GINAs?
    This is definitely not a custom GINA, but it ties in to the WinLogon process that the GINA is required to call. As a result, on some custom GINAs, it’s possible that it might not work correctly, if the custom GINA does not call the WinLogon functions in the correct sequence or with the correct desktop visible. So, if you’re finding that it has issues with your custom GINA solution, try it without the GINA to see how it’s supposed to work.
  • Does the secondary authentication prompt occur on all logons?
    The prompt only occurs on interactive logons – these are logons that go through the GINA and WinLogon UI process. That means when you logon using Ctrl-Alt-Del at the desktop, or when you logon from a remote terminal session using Remote Desktop Protocol / Remote Desktop Connection. The prompt does not occur for service logons, batch logons, network logons, or any other non-interactive logons.
    This is a good thing, as it means that you can use 2ndAuth to provide auditing on service account accesses, such that all interactive logons using the service account can be audited – you will finally know who is using that service account to illicitly get domain admin privileges!
  • What are the plans for developing this in the future?
    As I mentioned earlier, a Windows Vista version is definitely on the way. I’m thinking also that we would do well to have a little bit of User Interface to configure the shared accounts, and maybe a help file.
    What do you want to see in the next version of this tool?
    Oh, and of course the other thing we’ll be adding is a fee for its use.
    One other feature I’m thinking of is to expand where the 2nd auth dialog pops up – perhaps there is reason to have it appear when unlocking a workstation.
  • Couldn’t an administrator just disable the 2ndAuth DLL?
    Absolutely. The whole point of this, however, is to keep people honest by making it easy for them to record who’s accessing a shared account. Your administrator could very easily abuse shared accounts with or without this tool, so it’s serving its purpose of making it less likely that a shared account will be used without some form of tracking.
    And there are other tools that will alert you if a critical system file is removed or altered – you can make those tools watch the configuration and DLL for 2ndAuth to make sure that they are not changed.

I was very pleased to see Larry Seltzer at the PC Magazine Security Watch Blogs pick the original posting up – thanks, Larry!

HTML Help in MFC

I recently got around to converting an old MFC project from WinHelp format to HTML Help. Mostly this was to satisfy customers who are using Windows Vista or Windows Server 2008, but who don’t want to install WinHlp32 from Microsoft. (If you do want to install WinHlp32, you can find it for Windows Vista or Windows Server 2008 at Microsoft’s download site.)

Here’s a quick round trip of how I did it:

1. Convert the help file – yeah, this is the hard part, but there are plenty of tools, including Microsoft’s HTML Help Editor, that will do the job for you. Editing the help file in HTML format can be a little bit of a challenge, too, but many times your favourite HTML editor can be made to do the job for you.

2. Call EnableHtmlHelp() from the CWinApp-derived class’ constructor.

3. Remove the line ON_COMMAND(ID_HELP_USING, CWinApp::OnHelpUsing), if you have it – there is no HELP_HELPONHELP topic in HTML Help.

4. Add the following function:

void CWftpdApp::HelpKeyWord(LPCSTR sKeyword)
{
    CString sMsg = CString("Failed to find information in the help file on ") + sKeyword;
    switch (GetHelpMode()) {
    case afxHTMLHelp: {
        HH_AKLINK akLink = {0};
        akLink.cbStruct = sizeof(HH_AKLINK);
        akLink.pszKeywords = sKeyword;
        akLink.fIndexOnFail = TRUE;
        akLink.pszMsgText = sMsg;
        akLink.pszMsgTitle = "HTML Help Error";
        HtmlHelp((DWORD_PTR)&akLink, HH_KEYWORD_LOOKUP);
        break; }
    case afxWinHelp:
        AfxGetApp()->WinHelp((DWORD_PTR)(char *)sKeyword, HELP_KEY);
        break;
    }
}

5. Change your keyword help calls to call this new function:

((CWftpdApp *)AfxGetApp())->HelpKeyWord("Registering");



6. If you want to trace calls to the WinHelp function to watch what contexts are being created, trap WinHelpInternal:

void CWftpdApp::WinHelpInternal(DWORD_PTR dwData, UINT nCmd)
{
    TRACE("Executing WinHelp with Cmd=%d, dwData=%d (%x)\r\n", nCmd, dwData, dwData);
    CWinApp::WinHelpInternal(dwData, nCmd); // then carry on with the default behaviour
}

This trace comes in really, really (and I mean REALLY) handy when you are trying to debug “Failed to load help” errors. It will tell you what numeric ID is being used, and you can compare that to your ALIAS file.

7. If your code gives a dialog box that reads:

HTML Help Author Message
HH_HELP_CONTEXT called without a [MAP] section.



What it means is that the HTML Help API could not find the [MAP] or the [ALIAS] section – note that even with a [MAP] section present, this message will still appear if the [ALIAS] section is missing.

8. Don’t edit the ALIAS or MAP sections of your help file in HTML Help Editor – Microsoft has a long-standing bug here that makes it crash (losing much of your unsaved work, of course) unpredictably when editing these sections. Edit the HHP file by hand to work on these sections.

9. Most of your MAP section entries are automatically generated by the compiler, as .HM files, which hold macros appropriate for the specific control in the right dialog. Simply include the right HM file, and all you will need to do is create the right ALIAS mappings.
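For reference, a hand-edited HHP fragment covering those two sections might look something like this – the topic names and the .HM file name here are made up for illustration:

```ini
[ALIAS]
HIDD_REGISTER=registering.htm
HIDD_OPTIONS=options.htm

[MAP]
#include wftpd.hm
```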

10. The MFC calls to HtmlHelp discard error returns from the function, so there’s really no good troubleshooting to go on when debugging access to help file entries.

Let me know if any of these helpful hints prove to be of use to you, or if you need any further clarification.

Shared accounts got you down?

Here’s a description of a tool I’ve been itching to release for some time now – “2ndAuth”, short for “secondary authentication”.

This is how it works:

1. The user logs on using a shared account – an account that is known to be shared by a number of different people. Often this is a service account, or an account specific to a particular application.

Logon as a shared user

2. The user is prompted to identify their true account, by entering their username and password. At this point, a “known shared” account is not accepted. A timeout, or a repeated failure to logon, will result in the logon attempt being aborted.

Prompt for the individual's username


Error when the user tries to use a shared account


3. The 2ndAuth tool logs to the event log that it is allowing a shared account logon, and lets the user in.

And now he's allowed in.


I figure this tool would be great for allowing auditing of access to shared accounts, because if you can track down where and when a shared account was used maliciously (or accidentally), you could then find out exactly which individual was responsible for the misuse.




Currently, I have it available for Windows XP and Windows 2003, and I’m looking for beta testers. Drop me a line if you’re interested in testing this.

FTP – Untrustworthy? I Don’t Think So!

Lately, as if writers all draw from the same shrinking paddling-pool of ideas, I’ve noticed a batch of stories about how unsafe, insecure and untrustworthy FTP is.

SC Magazine says so.

First it was an article in the print version of SC Magazine, sadly not repeated online, titled “2 Minutes On… FTP integrity challenged”, by Jim Carr. I tried to reach Jim by email, but his bounce message tells me he doesn’t work for SC Magazine any more.

This article was full of interesting quotes.

“8,700 FTP server credentials were being used to access and infect more than 2,000 legitimate websites in the US”. The article goes on to quote Finjan’s director of security research who says they were “most likely hijacked by malware” – since most malware can do keystroke logging for passwords, there’s not much can be done at the protocol level to protect against this, so this isn’t really an indictment of FTP so much as it is an indication of the value and ubiquity of FTP.

Then we get to a solid criticism of FTP: “The problem with FTP is it transfers data, including authorization credentials, in plain text rather than in encrypted form, says Jeff Debrosse, senior research analyst at security vendor ESET”. Okay, that’s true – but in much the same vein as saying that the same problems all apply to HTTP.

Towards the end of the article, we return to Finjan’s assertion that malware can steal credentials for FTP sites – and as I’ve mentioned before, malware can get pretty much any user secret, so again, that’s not a problem that a protocol such as FTP – or SFTP, HTTP, SSH, SCP, etc – can fix. There’s a password or a secret key, and once malware is inside the system, it can get those credentials.

Fortunately, the article closes with a quote from Trent Henry, who says “That means FTP is not the real issue as much as it is a server-protection issue.”

OK, But a ZDNet blogger says so, too.

Well, yeah, an article in a recent ZDNet blog entry – on storage, not networking or security (rather like getting security advice from Steve Gibson, a hard-drive expert) – rants on about how his web site got hacked into (through WordPress, not FTP), and as a result, he’s taken to heart a suggestion not to use FTP.

Such a non-sequitur just leaves me breathless. So here’s my take:

FTP Has Been Secure for Years

But some people have just been too busy, or too devoted to other solutions, to take notice.

FTP first gained secure credentials with the addition of support for SASL and SKey. These are mechanisms for authenticating users without passing a password or password-equivalent (and by “password-equivalent”, I’m including schemes where the hash is passed as proof that you have the password – an attacker can simply copy the hash instead of the password). These additional authentication methods give FTP the ability to check identity without jeopardising the security of the identified party. [Of course, prior to this, there were IPsec and SOCKS solutions that work outside of the protocol.]

OK, you might say, but that only protects the authentication – what about the data?

FTP under GSSAPI was defined in RFC 2228, which was published in October 1997 (the earliest draft copy I can find is from March 1995), from a draft developed over the preceding couple of years. What’s GSSAPI? As far as anyone really needs to know, it’s Kerberos.

This inspired the development of FTP over SSL in 1996, which became FTP over TLS, and which finally became RFC 4217. From 1997 to 2003, those of us in the FTPExt Working Group were wondering why the standard wasn’t yet an RFC, as draft after draft was submitted with small changes, and then apparently sat on by the RFC editor – during this time, several compatible FTP clients, servers and proxies were produced that supported FTP over TLS (and/or SSL).

Why so long from draft to publication?

One theory that was raised is that the IETF were trying to get SSH-based protocols such as SFTP out before FTP over TLS (which has become known as “FTPS”, for FTP over SSL).

SFTP was abandoned after draft 13, which was made available in July 2006; RFC 4217 was published in October 2005. So it seems a little unlikely that this is the case.

The more likely theory is simply that the RFC Editor was overworked – the former RFC Editor, Jon Postel, died in 1998, and it’s likely that it took some time for the new RFC Editor to sort all the competing drafts out, and give them his attention.

What did the FTPExt Working Group do while waiting?

While we were waiting for the RFC, we all built compatible implementations of the FTP over TLS standard.

One or two of us even tried to implement SFTP, but with the draft mutating rapidly, and internal discussion on the SFTP mailing list indicating that no-one yet knew quite what they wanted SFTP to be when it grew up, it was like nailing the proverbial jelly to a tree. Then the SFTP standardisation process ground to a halt, as everyone lost interest. This is why getting SFTP implementations to interoperate is sometimes so frustrating an experience.

FTPS, however – that was solidly defined, and remains a very compatible protocol with few relevant drawbacks. Sadly, even FTP under GSSAPI turned out to have some reliability issues (the data transfer and the control connection, though over different asynchronous channels, share the same encryption context, which means that the receiver must synchronise the two asynchronous channels exactly as the sender did, or face a loss of connection) – but FTP over TLS remains strong and reliable.

So, why does no-one know about FTPS?

Actually, there’s lots of people that do – and many clients and servers, proxies and tunnels, exist in real life implementations. Compatibility issues are few, and generally revolve around how strict servers are about observing the niceties of the secure transaction.

Even a ZDNet blogger or two has come across FTPS, and recommends it, although of course he recommends the wrong server.

My recommendation?

WFTPD Pro. Unequivocally. Because I know who wrote it, and I know what went into it. It’s all good stuff.

Vistafy Me.

I have a little time over the next couple of weeks to devote to developing WFTPD a little further.

This is a good thing, as it’s way past time that I brought it into Vista’s world.

I’ve been very proud that over the last several years, I have never had to re-write my code in order to make it work on a new version of Windows. Unlike other developers, when a new version of Windows comes along, I can run my software on that new version without changes, and get the same functionality.

The same is not true of developers who like to use undocumented features, because those are generally the features that die in new releases and service packs. After all, since they’re undocumented, nobody should be using them, right? No, seriously, you shouldn’t be using those undocumented features.

So, WFTPD and WFTPD Pro run in Windows Vista and Windows Server 2008.

But that’s not enough. With each new version of Windows, there are better ways of doing things and new features to exploit. With Windows Vista and Windows Server 2008, there are also a few deprecated older behaviours that I can see are holding WFTPD and WFTPD Pro down.

I’m creating a plan to “Vistafy” these programs, so that they’ll continue to be relevant and current.

Here’s my list of significant changes to make over the next couple of weeks:

  1. Convert the Help file from WinHelp to HTML Help.
    • WinHelp is not supported in Vista – you can download a WinHelp version, but it’s far better to support the one format of Help file that Windows uses. So, I’m converting from WinHelp to HTML Help.
  2. Changing the Control Panel Applet for WFTPD Pro.
    • CPL files still work in Windows Vista, but they’re considered ‘old’, and there’s an ugly user experience when it comes to making them elevate – run as administrator.
    • There are two or three ways to go here –
      1. one is to create an EXE wrapper that calls the old CPL file. That’s fairly cheap, and will probably be the first version.
      2. Another is to write an MMC plugin. That’s a fair amount of work, and requires some thought and design. That’s going to take more than a couple of weeks.
      3. A third option is to create some form of web-based interface. I don’t want to go that way, because I don’t want to require my users to install IIS or some other web server.
    • So, first blush it seems will be to wrap the existing interface, and secondly I’ll be investigating what an MMC should look like.
  3. Support for IPv6.
    • I already have this implemented in a trial version, but have yet to fully wire it up to a user interface that I’m willing to unleash on the world. So that’s on the cards for the next release.
  4. Multiple languages
    • There are two elements to support for multiple languages in FTP:
      1. File names in non-Latin character sets
      2. Text messages in languages other than English
    • The first, file names in different character sets, will be achieved sooner than the second. If the second ever occurs, it will be because customers are sufficiently interested to ask me specifically to do it.
  5. SSL Client Certificate authentication
    • SSL Client Certificate Auth has been in place for years – it’s a secret feature. The IIS guys warned me off developing it, saying “that’s really hard, don’t try and do anything with client certs”.
    • I didn’t have the heart to tell them I had the feature working already (but without an interface), and that it simply required a little patience.
  6. Install under Local Service and Network Service accounts
  7. Build in Visual Studio 2008, to get maximum protection using new compiler features.
    • /analyze, Address Space Layout Randomisation, SAL – all designed to catch my occasional mistakes.

As I work on each of these items, I’ll be sure to document any interesting behaviours I find along the way. My first article will be on converting your WinHelp-using MFC project to using HTML Help, with minimal changes to your code, and in such a way that you can back-pedal if you have to.

Of course, I also have a couple of side projects – because I’ve been downloading a lot from BBC 7, I’ve been writing a program to store the program titles and descriptions with the MP3 files, so that they show up properly on the MP3 player. ID3Edit – an inspired name – allows me to add descriptions to these files.

Another side-project of mine is an EFS tool. I may use some time to work on that.

Searching for Weak Debian / Ubuntu SSL Certificates

I’ve seen a number of people promote packages that have shipped for Debian and Ubuntu, which allow users to scan their collected keys – OpenSSH or OpenSSL or OpenVPN, to discover whether they’re too weak to be of any functional use. [See my earlier story on Debian and the OpenSSL PRNG]

These tools all have one problem.

They run on the Linux systems in question, and they scan the certificates in place.

Given that the keys in question could be as old as 2 years, it seems likely that many of them have migrated off the Linux platforms on which they have started, and onto web sites outside of the Linux platform.

Or, there may simply be a requirement for a Windows-centric security team to be able to scan existing sites for those Linux systems that have been running for a couple of years without receiving maintenance (don’t nod like that’s a good thing).

So, I’ve updated my SSLScan program. I’m attaching a copy of the tool to this blog post, (along with a copy of the Ubuntu OpenSSL blacklists for 1024-bit and 2048-bit keys if I can get approval), though of course I would suggest keeping up with your own copies of these blacklists. It took a little research to find out how to calculate the quantity being used for the fingerprint by Debian, but I figure that it’s best to go with the most authoritative source to begin with.

Please let me know if there are other, non-authoritative blacklists that you’d like to see the code work with – for now, the tool will simply search for “blacklist.RSA-1024” and “blacklist.RSA-2048” in the current directory to build a list of weak key fingerprints.

I’ve found a number of surprising certificates that haven’t been reissued yet, and I’ll let you know about them after the site owners have been informed.

[Sadly, I didn’t find before it was changed – its certificate is shared with, of all places, – yes, the White House, home of the President of America, is hosted from the same server as the Chinese government. The certificate was changed yesterday, 2008/5/21.’s certificate was issued two days ago, 2008/5/20 – coincidence?]

My examples are from the web, but the tool will work on any TCP service that responds immediately with an attempt to set up an SSL connection – so LDAP over SSL will work, but FTP over SSL will not. It won’t work with SSH, because that apparently uses a different key format.

Simply run SSLScan, and enter the name of a web site you’d like to test, such as– don’t enter “http://” at the beginning, but remember that you can test a host at a non-standard port (which you will need to do for LDAP over SSL!) by including the port in the usual manner, such as

If you’re scanning a larger number of sites, simply put the list of addresses into a file, and supply the file’s name as the argument to SSLScan.

Let me know if you think of any useful additions to the tool.

Here is some slightly modified output from a sample run of the tool (the names have been changed to protect the innocent):

The text to look for here is “>>>This Key Is A Weak Debian Key<<<”.

Wireless PC Lock – part 2

Over the last several days, I’ve been getting more and more requests for my updated Wireless PC Lock software that I described way back last year.

Possibly, it’s because of stories like this one:

At New York-based Big Four accounting firm Ernst & Young, the security department confiscates laptops if they are unlocked when not in use, say employees (who wish to remain anonymous). To reclaim the confiscated PCs, workers must explain why they forgot to lock their machines and then they get a quick refresher course in security. These employees say they dread that walk to IT, so many have gotten better at remembering to lock them.

Well, that’s a really amusing story, and I will confess that at my workplace, any workstation found unlocked tends to be used to invite the rest of the team out for lunch – you don’t forget to lock your workstation too often [whether that’s because lunch for a whole team is expensive, or because you just don’t want to have to spend an hour with your colleagues, is beyond me].

I work in a physically-secured building, where RFID cards have to be used to get in and out, but the problem of locked workstations is still an important one to us – the data that I can access is quite different from the data that can be accessed by the people across the hall, or by the people in other buildings. And if any inappropriate data access occurs from my workstation under my account, it’ll be my job that’s on the line – nobody’s going to try dusting for fingerprints to check that it wasn’t me.

So, I like to have an ‘insurance policy’ against forgetting that simple Windows-L keystroke. My insurance policy is the Wireless PC Lock, which detects when I get up and walk out of range, locking my computer if I haven’t already done so.

The crap software that comes with the Wireless PC Lock is a problem, though. It has to be installed, which I don’t want (because I’m a restricted user); it doesn’t really lock the workstation (it puts up a full-screen bitmap of dolphins); it unlocks the workstation when you get back in range (even when you’re on the other side of a wall); etc, etc.

So, I decided it would be handy to have some replacement software that could be installed / used on a per-user basis. For the first release, this is strictly personal software – there’s no install. You copy the EXE into place, and run it from startup.

Insert the USB stick into your system and away we go. Right-click the new icon in your system tray (it looks a little like the transmitter fob on my unit – yours may be different), and choose to register with your fob.

The program will ask you to turn the fob off and then on again, so that it knows whose fob to lock against; once you have this set, that may be all the configuration you need to do – but of course, I have added configuration for the timeouts.

And, if you go and visit your Windows sound schemes, you’ll find there are additional sounds for the Wireless PC Lock, allowing you to hear when you’re about to get locked out by an absence of wireless fob.

Obviously, this is a real lock of your workstation that’s going to happen, so you will, yes, have to type in your password every time you come back to your workstation – your fob carries a two-byte code, which is not nearly difficult enough to hack to make it a valid logon protector. Sorry.

If you lose your fob, or your fob loses batteries, don’t worry – you can use your password to unlock, as usual, and then once you’re unlocked, the Wireless PC Lock software won’t activate again until it registers the presence of your fob again. Just remember that the Wireless PC Lock is a convenience measure, and is a “backup” against you forgetting to press Windows-L to lock up your machine when you’re walking away from it.

I’ve attached a zip file containing the Wireless PC Lock application – please let me know what you think of it!