As we mentioned in the first part of this series, FTP is a more complex protocol than many, using one control connection and one data connection.
In typical Stream Mode operation, a new data connection is opened and closed for each data transfer, whether that's an upload, a download, or a directory listing. To avoid confusion between different data connections, and as a recognition of the fact that networks may have old packets shuttling around for some time, these connections need to be distinguishable from one another.
In the previous article, we noted that two network sockets are distinguished by the five elements of "Local Address", "Local Port", "Protocol", "Remote Address", and "Remote Port". For a data connection associated with any particular request, the local and remote addresses are fixed, as the addresses of the client and server. The protocol is TCP, and only the two ports are variable.
For a PASV, or passive data connection, the client-side port is chosen randomly by the client, and the server-side port is similarly chosen randomly by the server. The client connects to the server.
For a PORT, or active data connection, the client-side port is chosen randomly by the client, and the server-side port is set to port 20. The server connects to the client.
All of these work through firewalls and NAT routers, because firewalls and NAT routers contain an Application Layer Gateway (ALG) that watches for PORT and PASV commands, and modifies the control traffic (in the case of a NAT) and/or uses the values provided to open up a firewall hole.
For the default data connection (what happens if no PORT or PASV command is sent before the first data transfer command), the client-side port is predictable (it's the same as the source port the client used when connecting the control channel), and the server-side port is 20. Again, the server connects to the client.
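To make those three cases concrete, here's an illustration with invented addresses and ports – nothing below is prescribed by the protocol except port 20 and the reuse of the control connection's source port:

PASV:    client 192.0.2.10:4125 connects to server 203.0.113.5:50211  (server chose 50211 at random)
PORT:    server 203.0.113.5:20  connects to client 192.0.2.10:4126    (client chose 4126 at random)
Default: server 203.0.113.5:20  connects to client 192.0.2.10:4100    (the client's control connection came from port 4100)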
Because firewalls and NATs open up a "reverse" hole for TCP sockets, the default data port works with firewalls and NATs that aren't running an ALG, or whose ALG cannot scan for PORT and PASV commands.
So why wouldn't an ALG be able to do its job? There are a couple of reasons – the first is that it doesn't know that the service connected to is running the FTP protocol. This is common if the server is running on a port other than the usual port 21.
The second reason is that the FTP control connection doesn't look like it contains FTP commands – usually because the connection is encrypted. This can happen because you're tunneling the FTP control connection through an encrypted tunnel such as SSH (don't laugh – it does happen!), or, hopefully, it's because you're running FTP over SSL, so that the control and data connections can be encrypted, and you can authenticate the identity of the FTP server.
In the words of Deep Thought: "Hmm… tricky".
There are a couple of classic solutions – typically, either opening a wide range of high-numbered ports at the server's firewall for passive data connections, or pinning the server to a small fixed range of passive ports and opening just that range.
The astute reader can probably see where Iâm going with this.
The default data port is predictable – if the client connects from port U to port L at the server (L is usually 21), then the default data connection will be opened from port L-1 at the server to port U at the client.
The default data port doesn't need the firewall to do anything other than allow reverse connections back along the port that initiated the connection. You don't need to open huge ranges at the server's firewall (in fact, you should be able to simply open port 21 inbound to your server).
The default data port has been required to be supported by FTP servers going back a long way – at least a couple of decades. Yes, really, that long.
Good point, that, and a great sentence to use whenever you wish to halt innovation in its tracks.
Okay, it's obvious that there are some drawbacks: in Stream mode, closing the data connection is what marks the end of a transfer, so the default data connection can't simply be reused for the next transfer; and most clients won't use the default data connection unless PORT and PASV have failed.
Even with those drawbacks, there are still further solutions to apply – the first being to use Block-mode instead of Stream-mode. In Stream-mode, each data transfer requires opening and closing the data connection; in Block-mode, which is a little like HTTP's chunked mode, blocks of data are sent, followed by an "EOF" (End of File) marker, so that the data connection doesn't need to be closed. If you can convince your FTP client to request Block-mode with the default data connection, and your FTP server supports it (WFTPD Pro has done so for several years), you can achieve FTP over SSL through NATs and firewalls simply by opening port 21.
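As a sketch of what that looks like on the wire, here's a hypothetical control-connection exchange – the file names and response texts are invented, but MODE B is the standard way to request Block-mode:

MODE B
200 Mode set to B.
RETR first.txt
150 Opening data connection.
226 Transfer complete.            (the file ended with an EOF block; the data connection stays open)
RETR second.txt
150 Reusing data connection.
226 Transfer complete.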
For the second problem, it's worth noting that many FTP client authors implemented default data connections out of a sense of robustness, so default data connections will often work if you can convince the PORT and PASV commands to fail – by, for instance, putting restrictive firewalls or NATs in the way, or perhaps by preventing the FTP server from accepting PORT or PASV commands in some way.
Clearly, since Microsoft's IIS 7.5 downloadable FTP Server supports FTPS in block mode with the default data port, there has been some consideration given to my whispers to them that this could solve the FTP over SSL through firewall problem.
Other than my own WFTPD Explorer, I am not aware of any particular clients that support the explicit use of FTP over SSL with Block-mode on the default data connection – I'd love to hear of your experiments with this mode of operation, to see if it works as well for you as it does for me.
This will be the first of a couple of articles on FTP, as I've been asked to post this information in an easy-to-read format in a public place where it can be referred to. I think my expertise in developing and supporting WFTPD and WFTPD Pro allows me to be reliable on this topic. Oh, that and the fact that I've contributed to a number of RFCs on the subject.
First, a quick refresher on TCP – every TCP connection can be thought of as being associated with a "socket" at each device along the way – from one computer, through routers, to the other computer. The socket is identified by five individual items – the local IP address, the local port, the remote IP address, the remote port, and the protocol (in this case, the protocol is TCP).
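If it helps to see that written down, here's a minimal sketch of the five-tuple as a C++ structure – the type and field names are mine for illustration, not from any particular API:

// Illustrative only: the five values that distinguish one TCP connection from another.
struct SocketIdentifier {
    unsigned char  localAddress[4];   // local IPv4 address, e.g. 10.1.1.5
    unsigned short localPort;         // local port, e.g. 4106
    unsigned char  remoteAddress[4];  // remote IPv4 address, e.g. 10.1.2.3
    unsigned short remotePort;        // remote port, e.g. 21
    int            protocol;          // IPPROTO_TCP for every connection in this article
};

Change any one of those five values and you have a different socket – which is exactly the property FTP's data connections rely on.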
Firewalls are essentially a special kind of router, with rules not only for how to forward data, but also rules on which connection requests to drop or allow. Once a connection request is allowed, the entire flow of traffic associated with that connection is allowed as well – any traffic flow not associated with a previously allowed connection request is discarded.
When you set up a firewall to allow access to a server, you have to consider the first segment – the "SYN", or connection request from the TCP client to the TCP server. The rule can refer to any data that would identify the socket to be created, such as "allow any connection request where the source IP address is 10.1.1.something, and the destination port is 54321".
Typically, an external-facing firewall will allow all outbound connections, and have rules only for inbound connections. As a result, firewall administrators are used to saying things like "to enable access to the web server, simply open port 80", whereas what they truly mean is to add a rule that applies to incoming TCP connection requests whose source address and source port could be anything, but whose destination port is 80, and whose destination address is that of the web server. This is usually written in some shorthand, such as "allow tcp 0.0.0.0:0 10.1.2.3:80", where "0.0.0.0" stands for "any address" and ":0" stands for "any port".
For an FTP server, firewall rules are known to be a little trickier than for most other servers.
Sure, you can set up the rule "allow tcp 0.0.0.0:0 10.1.2.3:21", because the default port for the control connection of FTP is 21. That only allows the control connection, though.
What other connections are there?
In the default transfer mode of "Stream", every file transfer gets its own data connection. Of course, it'd be lovely if this data connection were made on port 21 as well, but that's not the way the protocol was built. Instead, Stream mode data connections are opened either as "Active" or "Passive" connections.
The terms "Active" and "Passive" refer to the server's role in making the data connection. The choice of method is initiated by the client, although the server can choose to refuse whatever the client asked for, at which point the client should fail over to using the other method.
In the Active method, the FTP server connects to the client (the server is the "active" participant, the client just lies back and thinks of England), on a random port chosen by the client. Obviously, that will only work if the client's firewall is configured to allow the connection to that port; it doesn't depend on the firewall at the server to do anything but allow connections outbound. The Active method is chosen by the client sending a "PORT" command, containing the IP address and port to which the server should connect.
In the Passive method, the FTP client connects to the server (the server is now the "passive" participant), on a random port chosen by the server. This requires the server's firewall to allow the incoming connection, and depends on the client's firewall only to allow outbound connections. The Passive method is chosen by the client sending a "PASV" command, to which the server responds with a message containing the IP address and port at the server that the client should connect to.
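Here's roughly what the two exchanges look like on the control connection – the addresses, ports and response texts are invented for the example. In both commands, the six comma-separated values encode the four bytes of the IP address followed by the port as high byte, low byte:

Active (PORT):
  client> PORT 192,0,2,10,16,10          (client 192.0.2.10, port 16*256+10 = 4106)
  server> 200 PORT command successful.
  client> RETR example.txt
  server> 150 Opening data connection.   (server connects out from port 20 to 192.0.2.10:4106)

Passive (PASV):
  client> PASV
  server> 227 Entering Passive Mode (203,0,113,5,195,80).
  client> RETR example.txt               (client connects to 203.0.113.5, port 195*256+80 = 50000)
  server> 150 Opening data connection.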
So in theory, your firewall now needs to know what ports are going to be requested by the PORT and PASV commands. For some situations, this is true, and you need to consider this – we'll talk about that in part 2. For now, let's assume everything is "normal", and talk about how the firewall helps the FTP user or administrator.
If you use port 21 for your FTP server, and the firewall is able to read the control connection, just about every firewall in existence will recognise the PORT and PASV commands, and open up the appropriate holes. This is because those firewalls have an Application Layer Gateway, or ALG, which monitors port 21 traffic for FTP commands, and opens up the appropriate holes in the firewall. We've discussed the FTP ALG in the Windows Vista firewall before.
Where does port 20 come in? A rather simplistic view is that administrators read the "Services" file, and see the line that tells them that port 20 is "ftp-data". They assume that this means that opening port 20 as a destination port on the firewall will allow FTP data connections to flow. By the "elephant repellant" theory, this is proved "true" when their firewalls allow FTP data connections after they open ports 21 and 20. Nobody bothers to check that it also works if they only open port 21, because of the ALG.
OK, so if port 20 isn't needed, why is it associated with "ftp-data"? For that, you'll have to remember what I said early on in the article – that every socket has five values associated with it – two addresses, two ports, and a protocol. When the data connection is made from the server to the client (remember, that's an Active data connection, in response to a PORT command), the source port at the server is port 20. It's totally that simple, and since nobody makes firewall rules that look at source port values, it's relatively unimportant. That "ftp-data" in the Services file is simply there so that the output from "netstat" has a meaningful service name instead of ":20" as a source port.
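For example, during an Active-mode transfer, a netstat listing at the server might show something like this (addresses invented, and the exact formatting varies by platform) – note that port 20 appears only as the source port:

Proto  Local Address          Foreign Address    State
TCP    203.0.113.5:ftp-data   192.0.2.10:4106    ESTABLISHED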
Next time, we'll expand on this topic, to go into the inability of the ALG to process encrypted FTP control traffic, and the resultant issues and solutions that face encrypted FTP.
I've read some debate about the top 25 programming mistakes as documented by the CWE (Common Weakness Enumeration) project, in collaboration with the SANS Institute and MITRE – debate that the list isn't complete, that there are some items that aren't in the list but should be, or vice-versa.
I think we should look at the CWE top 25 as something like the PCI Data Security Standard – it's not the be-all and end-all of security, it's not universally applicable, it's not even a "gold standard". It's just the very bare minimum that you should be paying attention to, if you've got nowhere else to start in securing your application.
As noted by the SANS Institute, the top 25 list will allow schools and colleges to more confidently teach secure development as a part of their classes.
I personally would like to see a more rigorous taxonomy, although in this field it's really hard to do that, because in large part it's a field that feeds off publicity – and you just can't get publicity when you use phrases like "rigorous taxonomy". Here's my take on the top 25 mistakes, in the order presented:
Insecure Interaction Between Components: "These weaknesses are related to insecure ways in which data is sent and received between separate components, modules, programs, processes, threads, or systems."
Risky Resource Management: "The weaknesses in this category are related to ways in which software does not properly manage the creation, usage, transfer, or destruction of important system resources."
Porous Defenses: "The weaknesses in this category are related to defensive techniques that are often misused, abused, or just plain ignored."
Glaringly absent, as usual, is any mention of logging or auditing.
Protections will fail, always, or they will be evaded. When this happens, it's vital to have some idea of what might have happened – that's impossible if you're not logging information, if your logs are wiped over, or if you simply can't trust the information in your logs.
Maybe I say this because my own "2ndAuth" tool is designed to add useful auditing around shared accounts that are traditionally untraceable – or maybe it's the other way around, that I wrote 2ndAuth because I couldn't deal with the fact that shared accounts are essentially unaudited without it?
Of course, that leads to other subtleties – the logs should not provide interesting information to an attacker, for instance, and you can achieve this either by secreting them away (which makes them less handy), or by limiting the information in the logs (which makes them less useful).
Another missing issue is that of writing software to serve the user (all users) – and not to frustrate the attacker. [Some software reverses the two, frustrating the user and serving the attacker.] We developers are all trained to write code that does stuff – we don't tend to get a lot of instruction on how to write code that doesn't do stuff.
Another mistake, though it isn't a coding mistake as such, is the absence of code review. You really can't find all issues with code review alone, or with code analysis tools alone, or with testing alone, or with penetration testing alone, etc. You have to do as many of them as you can afford, and if you can't afford enough to protect your application, perhaps there are other applications you'd be better off producing.
Other mistakes that I'd like to face head-on? Trusting the "silver bullet" promises of languages and frameworks that protect you; releasing prototypes as production, or using prototype languages (hello, Perl, PHP!) to develop production software; feature creep; design by coding (the design is whatever you can get the code to do); undocumented deployment; fear/lack of dead code removal ("someone might be using that"); deploy first, secure later; lack of security training.
I've already received a number of questions about my secondary authentication tool, 2ndAuth. Here are a few answers:
I was very pleased to see Larry Seltzer at the PC Magazine Security Watch Blogs pick the original posting up â thanks, Larry!
I recently got around to converting an old MFC project from WinHelp format to HTML Help. Mostly this was to satisfy customers who are using Windows Vista or Windows Server 2008, but who don't want to install WinHlp32 from Microsoft. (If you do want to install WinHlp32, you can find it for Windows Vista or Windows Server 2008 at Microsoft's download site.)
Here's a quick run-through of how I did it:
1. Convert the help file – yeah, this is the hard part, but there are plenty of tools, including Microsoft's HTML Help Editor, that will do the job for you. Editing the help file in HTML format can be a little bit of a challenge, too, but many times your favourite HTML editor can be made to do the job for you.
2. Call EnableHtmlHelp() from the CWinApp-derived class's constructor.
3. Remove the line ON_COMMAND(ID_HELP_USING, CWinApp::OnHelpUsing), if you have it – there is no HELP_HELPONHELP topic in HTML Help.
4. Add the following function:
void CWftpdApp::HelpKeyWord(LPCSTR sKeyword)
{
    HH_AKLINK akLink = { 0 };             // zero the members we don't set (pszUrl, pszWindow, etc.)
    akLink.cbStruct = sizeof(HH_AKLINK);
    akLink.pszKeywords = sKeyword;
    CString sMsg = CString("Failed to find information in the help file on ") + sKeyword;
    akLink.pszMsgText = sMsg;             // keep the CString alive until after the HtmlHelp call
    akLink.pszMsgTitle = "HTML Help Error";
    HtmlHelp((DWORD_PTR)&akLink, HH_KEYWORD_LOOKUP);
}
5. Change your keyword help calls to call this new function:
((CWftpdApp *)AfxGetApp())->HelpKeyWord("Registering");
6. If you want to trace calls to the WinHelp function to watch what contexts are being created, trap WinHelpInternal:
void CWftpdApp::WinHelpInternal(DWORD_PTR dwData, UINT nCmd)
{
    TRACE("Executing WinHelp with Cmd=%d, dwData=%d (%x)\r\n", nCmd, dwData, dwData);
    CWinApp::WinHelpInternal(dwData, nCmd);   // pass through to MFC's default handling
}
This trace comes in really, really (and I mean REALLY) handy when you are trying to debug "Failed to load help" errors. It will tell you what numeric ID is being used, and you can compare that to your ALIAS file.
7. If your code gives a dialog box that reads:
HTML Help Author Message
HH_HELP_CONTEXT called without a [MAP] section.
What it means is that the HTML Help API could not find a usable [MAP] or [ALIAS] section – note that even when a [MAP] section is present, this message will still appear if the [ALIAS] section is missing.
8. Don't edit the ALIAS or MAP sections of your help file in HTML Help Editor – Microsoft has a long-standing bug here that makes it crash unpredictably (losing much of your unsaved work, of course) when editing these sections. Edit the HHP file by hand to work on these sections.
9. Most of your MAP section entries are automatically generated by the compiler as .HM files, which hold the help-ID mappings for the controls in each dialog. Simply include the right HM file, and all you need to create yourself are the ALIAS mappings (see the sketch after this list).
10. The MFC calls to HtmlHelp discard error returns from the function, so there's really no good troubleshooting information to go on when debugging access to help file entries.
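For step 9, here's a sketch of what the relevant sections of the HHP file might end up looking like – the file names and help IDs are invented for the example:

[MAP]
#include <helpids.hm>            ; compiler-generated help IDs for dialog controls

[ALIAS]
HIDD_REGISTER = registering.htm  ; map each numeric help ID to an HTML topic
HIDD_ABOUTBOX = about.htm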
Let me know if any of these helpful hints prove to be of use to you, or if you need any further clarification.
Here's a description of a tool I've been itching to release for some time now – "2ndAuth", short for "secondary authentication".
This is how it works:
1. The user logs on using a shared account – an account that is known to be shared by a number of different people. Often this is a service account, or an account specific to a particular application.
2. The user is prompted to identify their true account, by entering their username and password. At this point, a "known shared" account is not accepted. A timeout, or a repeated failure to log on, will result in the logon attempt being aborted.
3. The 2ndAuth tool logs to the event log that it is allowing a shared account logon, and lets the user in.
I figure this tool would be great for allowing auditing of access to shared accounts, because if you can track down where and when a shared account was used maliciously (or accidentally), you could then find out exactly which individual was responsible for the misuse.
Currently, I have it available for Windows XP and Windows 2003, and I'm looking for beta testers. Drop me a line if you're interested in testing this.
Lately, as if writers all draw from the same shrinking paddling-pool of ideas, I've noticed a batch of stories about how unsafe, insecure and untrustworthy FTP is.
First it was an article in the print version of SC Magazine, sadly not repeated online, titled “2 Minutes On… FTP integrity challenged”, by Jim Carr. I tried to reach Jim by email, but his bounce message tells me he doesn’t work for SC Magazine any more.
This article was full of interesting quotes.
"8,700 FTP server credentials were being used to access and infect more than 2,000 legitimate websites in the US". The article goes on to quote Finjan's director of security research, who says they were "most likely hijacked by malware" – since most malware can do keystroke logging for passwords, there's not much that can be done at the protocol level to protect against this, so this isn't really an indictment of FTP so much as it is an indication of the value and ubiquity of FTP.
Then we get to a solid criticism of FTP: “The problem with FTP is it transfers data, including authorization credentials, in plain text rather than in encrypted form, says Jeff Debrosse, senior research analyst at security vendor ESET”. Okay, that’s true – but in much the same vein as saying that the same problems all apply to HTTP.
Towards the end of the article, we return to Finjan’s assertion that malware can steal credentials for FTP sites – and as I’ve mentioned before, malware can get pretty much any user secret, so again, that’s not a problem that a protocol such as FTP – or SFTP, HTTP, SSH, SCP, etc – can fix. There’s a password or a secret key, and once malware is inside the system, it can get those credentials.
Fortunately, the article closes with a quote from Trent Henry, who says “That means FTP is not the real issue as much as it is a server-protection issue.”
Well, yeah – a recent ZDNet blog entry on storage, not networking or security (rather like getting security advice from Steve Gibson, a hard-drive expert), rants on about how the author's web site got hacked into (through WordPress, not FTP), and how, as a result, he's taken to heart a suggestion not to use FTP.
Such a non-sequitur just leaves me breathless. So here’s my take:
But some people have just been too busy, or too devoted to other solutions, to take notice.
FTP first gained secure credentials with the addition of support for SASL and S/Key. These are mechanisms for authenticating users without passing a password or password-equivalent (and by "password-equivalent", I'm including schemes where the hash is passed as proof that you have the password – an attacker can simply copy the hash instead of the password). These additional authentication methods give FTP the ability to check identity without jeopardising the security of the identified party. [Of course, prior to this, there were IPsec and SOCKS solutions that work outside of the protocol.]
OK, you might say, but that only protects the authentication – what about the data?
FTP under GSSAPI was defined in RFC 2228, which was published in October 1997 (the earliest draft copy I can find is from March 1995), from a draft developed over the preceding couple of years. What’s GSSAPI? As far as anyone really needs to know, it’s Kerberos.
This inspired the development of FTP over SSL in 1996, which became FTP over TLS, and which finally became RFC 4217. From 1997 to 2003, those of us in the FTPExt Working Group were wondering why the standard wasn't yet an RFC, as draft after draft was submitted with small changes, and then apparently sat on by the RFC Editor – during this time, several FTP clients, servers and proxies were produced that compatibly supported FTP over TLS (and/or SSL).
One theory that was raised is that the IETF were trying to get SSH-based protocols such as SFTP out before FTP over TLS (which has become known as “FTPS”, for FTP over SSL).
SFTP was abandoned after draft 13, which was made available in July 2006; RFC 4217 was published in October 2005. So it seems a little unlikely that this is the case.
The more likely theory is simply that the RFC Editor was overworked – the former RFC Editor, Jon Postel, died in 1998, and it’s likely that it took some time for the new RFC Editor to sort all the competing drafts out, and give them his attention.
While we were waiting for the RFC, we all built compatible implementations of the FTP over TLS standard.
One or two of us even tried to implement SFTP, but with the draft mutating rapidly, and internal discussion on the SFTP mailing list indicating that no-one yet knew quite what they wanted SFTP to be when it grew up, it was like nailing the proverbial jelly to a tree. Then the SFTP standardisation process ground to a halt, as everyone lost interest. This is why getting SFTP implementations to interoperate is sometimes so frustrating an experience.
FTPS, however – that was solidly defined, and remains a very compatible protocol with few relevant drawbacks. Sadly, even FTP under GSSAPI turned out to have some reliability issues (the data transfer and the control connection, though over different asynchronous channels, share the same encryption context, which means that the receiver must synchronise the two asynchronous channels exactly as the sender did, or face a loss of connection) – but FTP over TLS remains strong and reliable.
Actually, there are lots of people that do – and many clients and servers, proxies and tunnels exist in real-life implementations. Compatibility issues are few, and generally revolve around how strict servers are about observing the niceties of the secure transaction.
Even a ZDNet blogger or two has come across FTPS, and recommends it, although of course he recommends the wrong server.
WFTPD Pro. Unequivocally. Because I know who wrote it, and I know what went into it. It’s all good stuff.
I have a little time over the next couple of weeks to devote to developing WFTPD a little further.
This is a good thing, as it’s way past time that I brought it into Vista’s world.
I’ve been very proud that over the last several years, I have never had to re-write my code in order to make it work on a new version of Windows. Unlike other developers, when a new version of Windows comes along, I can run my software on that new version without changes, and get the same functionality.
The same is not true of developers who like to use undocumented features, because those are generally the features that die in new releases and service packs. After all, since they’re undocumented, nobody should be using them, right? No, seriously, you shouldn’t be using those undocumented features.
But that’s not enough. With each new version of Windows, there are better ways of doing things and new features to exploit. With Windows Vista and Windows Server 2008, there are also a few deprecated older behaviours that I can see are holding WFTPD and WFTPD Pro down.
I’m creating a plan to “Vistafy” these programs, so that they’ll continue to be relevant and current.
Here’s my list of significant changes to make over the next couple of weeks:
As I work on each of these items, I’ll be sure to document any interesting behaviours I find along the way. My first article will be on converting your WinHelp-using MFC project to using HTML Help, with minimal changes to your code, and in such a way that you can back-pedal if you have to.
Of course, I also have a couple of side projects – because I’ve been downloading a lot from BBC 7, I’ve been writing a program to store the program titles and descriptions with the MP3 files, so that they show up properly on the MP3 player. ID3Edit – an inspired name – allows me to add descriptions to these files.
Another side-project of mine is an EFS tool. I may use some time to work on that.
I've seen a number of people promote packages that have shipped for Debian and Ubuntu, which allow users to scan their collected keys – OpenSSH, OpenSSL or OpenVPN – to discover whether they're too weak to be of any functional use. [See my earlier story on Debian and the OpenSSL PRNG]
These tools all have one problem.
They run on the Linux systems in question, and they scan the certificates in place.
Given that the keys in question could be as old as two years, it seems likely that many of them have migrated off the Linux platforms on which they started, and onto web sites hosted outside the Linux platform.
Or, there may simply be a requirement for a Windows-centric security team to be able to scan existing sites for those Linux systems that have been running for a couple of years without receiving maintenance (don’t nod like that’s a good thing).
So, I’ve updated my SSLScan program. I’m attaching a copy of the tool to this blog post, (along with a copy of the Ubuntu OpenSSL blacklists for 1024-bit and 2048-bit keys if I can get approval), though of course I would suggest keeping up with your own copies of these blacklists. It took a little research to find out how to calculate the quantity being used for the fingerprint by Debian, but I figure that it’s best to go with the most authoritative source to begin with.
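For the curious, here's a minimal sketch of that calculation as I understand it, using OpenSSL's SHA-1 routines. The assumption (flagged in the comments, and worth verifying against the openssl-blacklist package itself) is that each blacklist entry is the trailing 20 hex digits of the SHA-1 of the line "Modulus=<HEX>\n", where <HEX> is the uppercase modulus as printed by "openssl x509 -noout -modulus":

// Sketch only – assumes a Debian blacklist entry is the last 20 hex digits
// of SHA1("Modulus=<UPPERCASE-HEX-MODULUS>\n").
#include <openssl/sha.h>
#include <cstdio>
#include <string>

std::string DebianKeyFingerprint(const std::string &modulusHex)
{
    std::string line = "Modulus=" + modulusHex + "\n";
    unsigned char digest[SHA_DIGEST_LENGTH];
    SHA1(reinterpret_cast<const unsigned char *>(line.data()), line.size(), digest);
    char hex[2 * SHA_DIGEST_LENGTH + 1];               // 40 hex digits plus terminator
    for (int i = 0; i < SHA_DIGEST_LENGTH; ++i)
        std::sprintf(hex + 2 * i, "%02x", digest[i]);
    return std::string(hex + 20);                      // keep only the trailing 20 digits
}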
Please let me know if there are other, non-authoritative blacklists that you’d like to see the code work with – for now, the tool will simply search for “blacklist.RSA-1024” and “blacklist.RSA-2048” in the current directory to build a list of weak key fingerprints.
I’ve found a number of surprising certificates that haven’t been reissued yet, and I’ll let you know about them after the site owners have been informed.
[Sadly, I didn’t find https://whitehouse.gov before it was changed – its certificate is shared with, of all places, https://www.gov.cn – yes, the White House, home of the President of America, is hosted from the same server as the Chinese government. The certificate was changed yesterday, 2008/5/21. https://www.cacert.org’s certificate was issued two days ago, 2008/5/20 – coincidence?]
My examples are from the web, but the tool will work on any TCP service that responds immediately with an attempt to set up an SSL connection – so LDAP over SSL will work, but FTP over SSL will not. It won’t work with SSH, because that apparently uses a different key format.
Simply run SSLScan, and enter the name of a web site you'd like to test, such as www.example.com – don't enter "http://" at the beginning, but remember that you can test a host at a non-standard port (which you will need to do for LDAP over SSL!) by including the port in the usual manner, such as www.example.com:636.
If you're scanning a larger number of sites, simply put the list of addresses into a file, and supply the file's name as the argument to SSLScan.
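So a batch run might look like this – the file name and host names are invented for the example:

C:\> type sites.txt
www.example.com
ldap.example.com:636
C:\> SSLScan sites.txt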
Let me know if you think of any useful additions to the tool.
The text to look for here is “>>>This Key Is A Weak Debian Key<<<“.
Over the last several days, I’ve been getting more and more requests for my updated Wireless PC Lock software that I described way back last year.
Possibly, it’s because of stories like this one:
At New York-based Big Four accounting firm Ernst & Young, the security department confiscates laptops if they are unlocked when not in use, say employees (who wish to remain anonymous). To reclaim the confiscated PCs, workers must explain why they forgot to lock their machines and then they get a quick refresher course in security. These employees say they dread that walk to IT, so many have gotten better at remembering to lock them.
Well, that’s a really amusing story, and I will confess that at my workplace, any workstation found unlocked tends to be used to invite the rest of the team out for lunch – you don’t forget to lock your workstation too often [whether that’s because lunch for a whole team is expensive, or because you just don’t want to have to spend an hour with your colleagues, is beyond me].
I work in a physically-secured building, where RFID cards have to be used to get in and out, but the problem of locked workstations is still an important one to us – the data that I can access is quite different from the data that can be accessed by the people across the hall, or by the people in other buildings. And if any inappropriate data access occurs from my workstation under my account, it'll be my job that's on the line – nobody's going to try dusting for fingerprints to check that it wasn't me.
So, I like to have an ‘insurance policy’ against forgetting that simple Windows-L keystroke. My insurance policy is the Wireless PC Lock, which detects when I get up and walk out of range, locking my computer if I haven’t already done so.
The crap software that comes with the Wireless PC Lock is a problem, though. It has to be installed, which I don't want (because I'm a restricted user); it doesn't really lock the workstation (it puts up a full-screen bitmap of dolphins); it unlocks the workstation when you get back in range (even when it's on the other side of a wall); etc., etc.
So, I decided it would be handy to have some replacement software that could be installed / used on a per-user basis. For the first release, this is strictly personal software – there’s no install. You copy the EXE into place, and run it from startup.
Insert the USB stick into your system and away we go. Right-click the new icon in your system tray (it looks a little like the transmitter fob on my unit – yours may be different), and choose to register with your fob.
The program will ask you to turn the fob off and then on again, so that it knows whose fob to lock against; once you have this set, that may be all the configuration you need to do – but of course, I have added configuration for the timeouts.
And, if you go and visit your Windows sound schemes, you’ll find there are additional sounds for the Wireless PC Lock, allowing you to hear when you’re about to get locked out by an absence of wireless fob.
Obviously, this is a real lock of your workstation that's going to happen, so you will, yes, have to type in your password every time you come back to your workstation – your fob carries a two-byte code (only 65,536 possible values), which is not nearly difficult enough to hack to make it a valid logon protector. Sorry.
If you lose your fob, or your fob loses batteries, don’t worry – you can use your password to unlock, as usual, and then once you’re unlocked, the Wireless PC Lock software won’t activate again until it registers the presence of your fob again. Just remember that the Wireless PC Lock is a convenience measure, and is a “backup” against you forgetting to press Windows-L to lock up your machine when you’re walking away from it.
I’ve attached a zip file containing the Wireless PC Lock application – please let me know what you think of it!