A tribute to a network service

The early days of the Internet produced many great ideas. Some of them transformed the way we live, others didn’t quite make it. The Finger user information protocol and service (RFC 742, RFC 1288) is one of those virtually unknown to the new generation.

It was a good idea. You could look up a user database on the server (using a login name or partial real name as the search criteria) and receive information like contact details, time of last logon and the device used to connect. It was an early attempt to provide information about users of a system to other users (of the Internet!) – featuring elements of presence and location! And that’s not all. Quoting from RFC 1288:

 Vending machines

   Vending machines SHOULD respond to a {C} request with a list of all
   items currently available for purchase and possible consumption.
   Vending machines SHOULD respond to a {U}{C} request with a detailed
   count or list of the particular product or product slot.  Vending
   machines should NEVER NEVER EVER eat money.

Even today an IP-connected vending machine is a rarity. Some people are visionaries indeed!
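
The protocol itself is trivially simple: the client opens a TCP connection to port 79, sends the query (a user name, or nothing at all) terminated by CRLF, and reads the response until the server closes the connection. A minimal sketch in Python, just to show how little there is to it (the host name you query is, of course, up to you):

```python
import socket


def build_request(query: str) -> bytes:
    """Per RFC 1288, the request is the query string followed by CRLF.
    An empty query asks for a list of currently logged-in users."""
    return query.encode("ascii") + b"\r\n"


def finger(query: str, host: str, port: int = 79, timeout: float = 10.0) -> str:
    """Send a Finger query and return the server's free-form text response."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(build_request(query))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # the server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")
```

That is the whole protocol – which is exactly why `finger.exe` could survive in Windows for so long.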

Why did Finger become history? Many reasons:

  • The name. The World Wide Web is cool, but can you imagine enterprise finger?
  • Command line interface;
  • Bad reputation. The finger daemon had a buffer overflow exploited by the Morris worm;
  • Secrecy that is sometimes confused with (and used in place of) security. Marcus Ranum explains why that is wrong.

Finger.exe is still in Windows. I treasure the relic.

Degradation: a new generation of computer worms

Suddenly the definition of a computer worm has changed. It used to be something that doesn’t require any action from a system user or administrator to install and propagate. From memory, the Morris worm was a multiplatform one: it compiled itself upon arrival and used a vulnerability in the finger daemon for propagation. Later, SQL Slammer used a vulnerability in Microsoft SQL Server’s TDS network protocol implementation to propagate, and Sasser exploited a vulnerability in LSASS, a core system service, making pretty much all Windows systems vulnerable. These are examples of sophisticated analysis and engineering.

That’s a rarity today. More recently we have a new crop of viruses – they have only a propagation mechanism, and rely solely on the human factor for installation. An exploit may not be required. Take Cabir, a well-publicised worm for Symbian OS. It actually requires two user actions: accepting the file transfer and agreeing to install the software. And there is Stration, a Skype worm that has now been modified to propagate over MSN Messenger and ICQ connections. It also needs the download accepted, followed by a double-click on the executable file (and perhaps ignoring Vista UAC warnings). Why no exploit? Perhaps the worm creators lack the skills, but they also face a shortage of bugs to exploit.

Which means that software is getting better. Are computer users getting better? I think so. Meanwhile, the worms and viruses are definitely not getting better. I think economics has something to do with it.

The yardstick of a democracy

Received via Arvin Meyer, a fellow MVP:

The yardstick of a democracy is the degree of unorthodoxy permitted


I couldn’t agree more. Full consensus is a bad starting point for decision making in any system – be it political, judicial or business. Orthodoxy thrives where there’s no democracy. Democracy may not be the best solution for everything (Down With Internet Democracy – Why you don’t want anonymous volunteers powering your search engine). But dogmatism is a bigger danger – resulting in anything from useless security best practices to the Taliban.

Single authority principle

One of the biggest issues in today’s IT architectures is overengineering. Excessively complicated solutions are bound to be less reliable and secure. Every additional component in a solution is a potential point of failure. Well, not necessarily in terms of reliability (clusters, you know) – but certainly from a security point of view, as it adds a potential vulnerability and attack point.

And it creates some interesting issues. An example is a popular approach to identity management: using the HR database as a “source of truth” about company staff. The corporate directory (your AD, or NDS) is populated from the HR database, using middleware. If you’re using the same corporate directory for controlling access to the HR database, that will result in an access management chicken-and-egg problem: who is the authority? Besides, attackers now have two targets for taking control over the entire enterprise infrastructure (and the middleware, the identity management system, is a third, equally important one). The approach that avoids that situation was developed centuries ago by the world’s militaries: use a single authority.
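
The single-authority principle boils down to one-way provisioning: HR feeds the directory, and nothing ever flows back. A minimal sketch of what that middleware does (the record fields and the dict-based “directory” are assumptions for illustration, not any real product’s API):

```python
from dataclasses import dataclass


@dataclass
class HRRecord:
    """One row from the authoritative HR database (fields are hypothetical)."""
    employee_id: str
    name: str
    active: bool


def sync_directory(hr_records: list[HRRecord], directory: dict[str, dict]) -> dict[str, dict]:
    """One-way provisioning: HR is the single authority.

    The directory never writes back into HR, which avoids the
    chicken-and-egg problem of two competing authorities.
    """
    hr_ids = {r.employee_id for r in hr_records}
    for record in hr_records:
        entry = directory.setdefault(record.employee_id, {})
        entry["name"] = record.name
        entry["enabled"] = record.active
    # Directory accounts with no matching HR record are disabled, not
    # deleted, so an audit trail survives.
    for emp_id, entry in directory.items():
        if emp_id not in hr_ids:
            entry["enabled"] = False
    return directory
```

Note the asymmetry: the directory is entirely derived state. Whoever controls HR controls everything – which is exactly the point about having two (or three) high-value targets.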

Using the HR database as the source of truth does make sense, as it contains information about those who are paid by the company. However, it so happens that the incidents that are most difficult to investigate are not caused by paid staff using their own accounts. A few years ago the security industry was shifting its focus towards the malicious insider. But recent events prove that classic intrusions without apparent access abuse are still a big threat. Take TJX and their insecure wireless network. Nevertheless, we should soon see closer integration between business systems like HR, identity management solutions and the corporate directory. Oracle is making steps in that direction already. I like Microsoft’s Active Directory and would like to see some effort in that ecosystem as well.

But that is a move towards identity and access management based on a military principle. It would be very interesting to see a system based on democratic principles. Nothing like that exists so far, but it may well be an interesting change in the enterprise space.

Tracing phone communications: mission expensive and impossible

Herald Sun, a local tabloid, reports:

VICTORIANS will be surprised to learn that the major telecommunications companies, including Telstra, charge the police when they check on calls by criminals.

This year Victoria Police’s total bill will be about $800,000.

The service is provided at cost, but Chief Commissioner Christine Nixon wants it to be free.

Telstra said it received more than 300,000 requests a year nationally from police.

I’m not surprised. But these are interesting details. It looks like every call list costs the police tens of dollars – while the same information is provided for free to the criminals in question (as they are Telstra’s, Vodafone’s and Optus’ customers for the telephone service). Which is not fair.

And while the number of requests and their cost grow every year, criminals are getting smarter:

Police are increasingly worried crooks are using false identification to buy bulk pre-paid SIM cards so their calls stay anonymous.

Opportunities for anonymous communications today are endless. The premise is free connection to the Internet, which is available in many locations in Australia and elsewhere in the world. You can then sign up for any of the services that give you free calls (Live Messenger, Skype, Wengo, you name it). The one bit that is a little difficult is anonymous payment. Opportunities lie in prepaid/gift credit cards as well as alternative payment systems. But payment is only required for interfacing with the legacy telephone system. It will be interesting to see how the availability of free and anonymous communications transforms crime – but there’s little doubt that it will.

How to prevent 1% of cybercrime?

An interesting picture appears on the PBS Shop Web site:

HACKER SAFE certified sites prevent over 99.9% of hacker crime.

Because of what it says, I felt an urge to click on it. The first attempt (a right-click) resulted in the following message box:

Prohibited by Law

I don’t think a law that prohibits copying the picture exists. Otherwise my Web browser would be breaking the law by caching the picture, for example. And trademark law, at least in Australia, the USA and other Western countries, actually allows nominative fair use (as well as parody).

But I don’t need to do any copying anyway. The “HACKER SAFE” picture above is provided to you directly from its source, controlscan.com (and “certifies” sites other than this weblog). Clicking on it will show a page that says, among other things:

Research indicates sites remotely scanned for known vulnerabilities on a daily basis, such as those earning HACKER SAFE certification, can prevent over 99% of hacker crime. 

I would be really interested in the methodology of that research. Why 99% and not 99.9%? But the mention of research is just weasel words here.

The company that brings you the “HACKER SAFE” picture provides many services related to Web security and privacy protection. Every single one comes with its own picture (they are called “trust seals”):

Internet Security By ControlScan

That, as I wrote, gives a false sense of security. Looking at the service offerings reveals more interesting facts:

  • The company provides vulnerability scanning for those who need to be compliant with the flawed and largely useless Payment Card Industry Data Security Standard;
  • The company offers vulnerability scanning bundled together with EV SSL certificates – overpriced ones, supposedly more secure and with questionable benefits;
  • EV SSL certificates are positioned to secure e-mail applications, among other things. Internet email standards generally don’t require a browser, and current EV certificates’ main distinction is the green address bar in IE7. You can encrypt SMTP using SSL, but the fact that the SSL certificate is Extended Validation will make exactly zero difference compared to any other SSL certificate. I won’t be surprised, though, if an EV flavour of mail-signing certificates emerges;
  • And the certificates are positioned as those giving the Highest Level of Digital Encryption available in the industry – even though the level of encryption doesn’t really have much to do with the type, or issuer, of the certificate.
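
That last point is easy to demonstrate: in SSL/TLS the cipher strength is negotiated from the cipher suites configured on the two endpoints, not taken from the certificate. Python’s standard library shows this directly – a TLS context has its full cipher list before any certificate is ever loaded into it:

```python
import ssl

# Build a TLS context. No certificate has been loaded at this point,
# yet the set of cipher suites (and hence the achievable "level of
# encryption") is already fully determined by the configuration.
ctx = ssl.create_default_context()

ciphers = ctx.get_ciphers()  # list of dicts describing the enabled suites
print(len(ciphers))                       # plenty of suites, zero certificates
print(sorted(ciphers[0].keys()))          # per-suite details, e.g. name, protocol
```

Any certificate – EV or the cheapest domain-validated one – merely authenticates the server; the symmetric encryption that follows is the same either way.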

Vulnerability scanning has its value. It’s a very basic security control mechanism that allows you to identify trivial system administration mistakes independently of their process. But it doesn’t prevent 99% of security exposures. If it does, what about the remaining 1%? Is one attack out of a hundred successful? One attacker out of a hundred? That doesn’t make sense.

The example above shows how aggressive marketing can be misleading, even deceptive, and thereby diminish the value of an otherwise useful service.


Measuring efficiency of systems management

Have you ever wondered how efficient your systems management is? Here are some questions that will allow you to create some metrics:

  • How many network interfaces are currently connected to your IP network?
  • How many hosts are there, and what OS are they running?
  • For each OS, how many systems are up to date with the latest patches?
  • How long did it take to complete the latest patch cycle (that successfully updated 100% of the OS population)?
  • For the systems that run an antivirus/malware protection, how many are up to date with the latest configurations?
  • How many users are currently connected to the network?

Except for the last one, you should have a way of answering those questions. If you don’t, then you can stop pretending that the systems connecting to your network are actually managed.
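
Once you have an inventory, most of these metrics are trivial to compute. A sketch, assuming a hypothetical inventory where each host record carries OS, patch and antivirus status (the field names are mine, not any product’s):

```python
from collections import Counter

# Hypothetical inventory records; a real one would come from your
# systems management tooling.
inventory = [
    {"host": "ws01", "os": "Windows", "patched": True,  "av_current": True},
    {"host": "ws02", "os": "Windows", "patched": False, "av_current": True},
    {"host": "srv1", "os": "Linux",   "patched": True,  "av_current": False},
]


def hosts_per_os(hosts):
    """How many hosts are there, and what OS are they running?"""
    return Counter(h["os"] for h in hosts)


def patch_compliance(hosts):
    """For each OS, the fraction of systems up to date with patches."""
    totals, patched = Counter(), Counter()
    for h in hosts:
        totals[h["os"]] += 1
        if h["patched"]:
            patched[h["os"]] += 1
    return {os_name: patched[os_name] / totals[os_name] for os_name in totals}
```

The hard part is not the arithmetic – it is having an inventory you can trust in the first place, which is the whole point of the questions above.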

You can also remove the network from the picture (maybe because endpoint security is not there yet) and consider the entire fleet of computer systems that belong to your organisation. But I reckon that won’t help much in answering the questions. How do you know what’s on a system that hasn’t connected back to your network for three months? Information about its patching and malware protection state, application environment and user is not available. So perhaps we’ll have to wait for the system to connect back to the network where it can be managed (which brings us to question 1)?

No. The Internet is also your network. And there is a good model to follow for systems management: the BlackBerry Enterprise Solution. It allows you to buy a device off the shelf and build it to become a part of your enterprise – securely. It is location- and connectivity-independent. And it allows you to configure the devices so that they self-destruct after not calling home for an extended period of time. So at every point in time you have a reasonable idea of the current status of your systems – and the answers to the questions above. Restrictions do apply here, but this is where it’s going. I’d like to see a similar approach implemented for Windows and UNIX/Linux workstations.
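
The call-home policy itself is simple to state: a device that hasn’t checked in within the allowed window triggers its local wipe. A sketch of the decision logic, with an assumed 30-day threshold (the threshold and function names are illustrative, not from any actual product):

```python
from datetime import datetime, timedelta

# Policy threshold: how long a device may stay silent before its
# local self-destruct kicks in. Thirty days is an assumption.
MAX_OFFLINE = timedelta(days=30)


def should_wipe(last_checkin: datetime, now: datetime,
                max_offline: timedelta = MAX_OFFLINE) -> bool:
    """True if the device has exceeded its allowed silent period."""
    return now - last_checkin > max_offline
```

The interesting property is that the decision runs on the device, against a policy pushed earlier – so management reach no longer depends on the device ever touching your network again.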

The alternative is to give up on the workstations and concentrate on server room/datacentre security. It should be easy to provide the metrics for server-only environments. In this case, document control becomes a real issue. Perhaps thin client access will help? Maybe, but I’m not overly enthusiastic about taking away the sense of “my computer” from the users. And thin client environments can be spectacular disasters.

Good systems management has everything to do with security – they go hand in hand. It is the security administrators’ responsibility, too, to make sure that the systems are properly managed. Another responsibility is to model the situations where systems management will fail (due to intrusion, or negligence), and to have a plan for the response. But if security management excludes some systems and users that have access to your information, however insignificant they are, that becomes the weakest link where the entire organisation’s information security fails.