News: Web is dangerous

VoIP is scary, if you remember. Now, there’s something else that is scary: WWW, the World-Wide Web. And thanks to Tim O’Reilly and his invention of Web 2.0, it’s scarier than ever.

As in: there’s much more to FUD about. Here’s a perfect example: Web 2.0 Threats and Risks for Financial Services (by Shreeraj Shah). It’s full of dung, like pretty much any other FUD. But being targeted at the financial industry (people with your money), it excels at that. Let’s analyse:

The financial industry estimates that 95% of information exists in non-RSS formats and could become a key strategic advantage if it can be converted into RSS format.

RSS is just a way of delivering dynamic content (not quite a format), and not much financial information can really use RSS. Market news (think of the Reuters and Bloomberg services), and that is pretty much all. And the model is simple: authenticate and deliver content securely. RSS has no security implications here. And where did the figure of 95% come from?
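And the model is trivial to illustrate. Here’s a rough sketch of authenticated feed delivery over SSL – the URL and credentials are, of course, made up:

```python
# Sketch: the simple model for financial feeds -- authenticate, then
# deliver over an encrypted channel. URL and credentials are made up.
import base64
import urllib.request

def build_feed_request(url, username, password):
    """Build an HTTPS request carrying HTTP Basic credentials."""
    request = urllib.request.Request(url)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    return request

req = build_feed_request("https://news.example.com/markets.rss", "alice", "s3cret")
# The feed would then be fetched with urllib.request.urlopen(req).
```

Nothing here is specific to RSS: it’s plain HTTPS with authentication, exactly the model any other content delivery uses.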

Ajax, Flash (RIA) and Web Services deployment is critical for Web 2.0 applications. Financial services are putting these technologies in place; most without adequate threat assessment exercises.

Of all corporations, the financial industry is one of the most conservative. Every technology that is used undergoes rigorous assessment. And adequate (to the organisation’s risk management and regulatory requirements) security is one of the top priorities there. The process of the evaluation may not be the most efficient, but that’s a different issue – nothing to do with the Web. Besides, Flash belongs more to the entertainment industry: it’s neither critical nor required by financial institutions for business-critical applications.

In the last few months, several cross-site scripting attacks have been observed, where malicious JavaScript code from a particular Web site gets executed on the victim’s browser thereby compromising information on the victim’s system. Poorly written Ajax routines can be exploited in financial systems. Ajax uses DOM manipulation and JavaScript to leverage a browser’s interface. It is possible to exploit document.write and eval() calls to execute malicious code in the current browser context. This can lead to identity theft by compromising cookies. Browser session exploitation is becoming popular with worms and viruses too. Infected sessions in financial services can be a major threat. The attacker is only required to craft a malicious link to coax unsuspecting users to visit a certain page from their Web browsers. This vulnerability existed in traditional applications as well but AJAX has added a new dimension to it.

AJAX doesn’t add any new dimension to the XSS attacks: both the attack techniques and the ways to prevent cross-site scripting haven’t changed.
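The prevention is the same as it always was: escape untrusted data before writing it into the page. A minimal sketch (the names are illustrative):

```python
# Sketch: the classic XSS defence, unchanged by AJAX -- escape untrusted
# input before it is written into HTML. Function and payload are illustrative.
import html

def render_greeting(user_supplied_name):
    """Build an HTML fragment with the untrusted value escaped."""
    return f"<p>Hello, {html.escape(user_supplied_name)}!</p>"

payload = '<script>document.write(document.cookie)</script>'
safe = render_greeting(payload)
# The markup is rendered inert: angle brackets become &lt; and &gt;.
```

Whether the fragment is produced by a full page reload or by an Ajax callback makes no difference to either the attack or the defence.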

One of the key elements of Web 2.0 application is its flexibility to talk with several data sources from a single application or page. This is a great feature but from a security perspective, it can be deadly.

And maybe not. The decision to use multiple data sources is driven by functional requirements. And it can be well secured.

Web 2.0 based financial applications use Ajax routines to do a lot of work on the client-side, such as client-side validation for data types, content-checking, date fields, etc. Normally client-side checks must be backed up by server-side checks as well. Most developers fail to do so; their reasoning being the assumption that validation is taken care of in Ajax routines.

At this point, an example is necessary. Abstract applications and developers aren’t good enough. In the past couple of years developers have actually learnt server-side data validation and use it more often than not. And the risk is of a stupid developer, not of AJAX – if anything, AJAX is raising the bar for developers.
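For the record, here is what the server-side half of the validation looks like – a minimal sketch for a hypothetical payment form, with made-up field names:

```python
# Sketch: client-side (Ajax) checks must be repeated on the server.
# A minimal server-side validator for a hypothetical payment form.
import re
from datetime import datetime

def validate_payment(form):
    """Re-check on the server what the Ajax routines checked in the browser."""
    errors = []
    if not re.fullmatch(r"\d{1,7}(\.\d{1,2})?", form.get("amount", "")):
        errors.append("amount must be a positive decimal number")
    try:
        datetime.strptime(form.get("value_date", ""), "%Y-%m-%d")
    except ValueError:
        errors.append("value_date must be YYYY-MM-DD")
    return errors

# A request that bypassed the browser checks entirely:
bad = validate_payment({"amount": "100; DROP TABLE", "value_date": "tomorrow"})
```

The point is that the attacker talks to the server directly, never running your JavaScript; the server must assume every request skipped the browser checks.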

Web Services are picking up in the financial services sector and are becoming part of trading and banking applications. Service-oriented architecture is a key component of Web 2.0 applications. WSDL (Web Services Definition Language) is an interface to Web services. This file provides sensitive information about technologies, exposed methods, invocation patterns, etc. that can aid in defining exploitation methods. Unnecessary functions or methods kept open can spell potential disaster for Web services. Web Services must follow WS-security standards to counter the threat of information leakage from the WSDL file. WSDL enumeration helps attacker to build an exploit. Web Services WSDL file access to unauthorized users can lead to private data access.

Mr. Shah seriously suggests that security through obscurity is essential. That’s rubbish.

A lot more analysis needs to be done before financial applications can be integrated with their core businesses using Web 2.0.

If we need analysis, that must be nothing like Mr. Shah’s.

Come on, let’s FUD again!

I’m having an almost religious moment now: I’m ready to call the spirit of Mark Russinovich, Microsoft fellow and celebrity kernel hacker.

This is why: two Indian guys, Nitin and Vipin Kumar, invented the ultimate rootkit. It’s only 1500 bytes, it lives in the master boot record on the disk (or in the BIOS, if it’s too big for the MBR), and it patches the Vista kernel. It’s called Vbootkit. Bruce Schneier, the world’s FUDmaster, and Symantec’s Security Focus duly spread the word. So it’s famous – before anyone has actually seen it.

It may be a very clever piece of code – a kind of tiny virtual machine hypervisor that can access the guest memory in situ. But I don’t think it is. I believe that the Vbootkit requires something special (conveniently omitted from the product description) to work. I believe that the claims about its powers, let alone impact, are largely exaggerated. The key word here is – believe – because I cannot substantiate my claim with any analysis or evidence.

So I call the spirit of Mark Russinovich to come up from the depths of the Windows core and consider the reality of Vbootkit. Meanwhile I’m waiting for Black Hat Europe and the rootkit code release, and finding comfort in BitLocker.

Picture authentication: threat modeling

In an article colourfully named The Two-Way Peephole, Forbes, a pro-Giuliani business magazine, describes advancements in Web security and new techniques that companies are starting to implement to reduce fraud. The article makes interesting reading.

Here’s the part that triggered some research on my side:

The new higher security levels now work in both directions, with you and your bank proving legitimacy to one another. Phishing, the spammer tactic of duping e-mail recipients into logging in to a phony site and then nabbing their personal info, has everyone confused. Phishers sent out a billion and a half e-mails last year, up 19% from 2005, according to Internet security firm Symantec. At Zions Bank, headquartered in Salt Lake City, and at Yahoo, for that matter, customers know they’re safely logged in after they’ve seen a prearranged picture of, say, pansies or Labrador puppies.

I decided to go to Yahoo! and see how the picture is proving Yahoo’s legitimacy to me. Here’s the initial Yahoo! Mail logon box:

No picture

After clicking on Prevent Password Theft, you see the setup page, where you choose the logon box colour and either a text or a picture to display. This is my logon box after I made it orange and featuring a photo of Albion Alley in Melbourne:


This is it. From now on, after logging on using my profile on this computer, I will receive this custom logon box. That is done using a permanent cookie. The cookie contains cryptic strings like:


When I request the login page, the cookie gets automatically sent to the server, which uses it to locate my picture on the Yahoo! file system. The URL of the Albion Alley image above includes some kind of session ID, which is necessary to access the image. Like the login page, the image store is only available through an SSL-encrypted connection.

What can go wrong? Can this login box be forged by a malicious site? Let’s do some threat modeling. Below is the list of attacks and comments with regards to picture authentication:

  • Traffic interception. Since SSL is required, a non-issue;

  • Tampering with the network environment (traffic redirection, DNS poisoning, DNS spoofing and so on). The cookie will still be sent, even though the destination is tampered with and malicious. A purpose-built, server-side HTML processor is required to extract the login box elements from the genuine site and place them on the malicious page, which is quite complicated but possible;

  • Cross-site scripting (XSS) attack. Cookies are designed to be read only by the site that provides them, not by other sites. XSS gets around this restriction. Very dependent on the browser and the site design, the attack should also incorporate the HTML processor;

  • Password-grabbing trojans. The picture doesn’t protect from those;

  • User negligence. The mother of all phishing attacks. It is required in the redirection attack (ignoring warnings about SSL or the lack thereof). Some users will happily ignore the fact that the picture has changed or disappeared.

The picture is never shown to the client that doesn’t have the cookie. To create the cookie, you need the picture. It takes compromising the client and stealing the cookie (or the picture) to misrepresent the server.
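The mechanism boils down to a simple lookup; here’s a sketch (the cookie name and the store are my inventions, not Yahoo!’s actual implementation):

```python
# Sketch of the cookie-to-picture lookup. The cookie name and the store
# are my inventions, not Yahoo!'s actual implementation.
picture_store = {"opaque-token-123": "albion_alley.jpg"}

def login_box_picture(cookies):
    """Return the custom picture only if the browser presented the cookie."""
    token = cookies.get("sign_in_seal")
    return picture_store.get(token)  # None -> show the plain logon box

assert login_box_picture({"sign_in_seal": "opaque-token-123"}) == "albion_alley.jpg"
assert login_box_picture({}) is None  # no cookie, no picture to imitate
```

A phisher’s site never receives the cookie, so it has nothing to look up – which is exactly what makes the forgery hard.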

I was very sceptical about the idea of picture authentication, but apparently Yahoo!’s implementation is raising the bar for man-in-the-middle attacks significantly. Way above the capabilities of an average stupid phisher. However, it’s not clear what percentage of the $13 billion fraud economy (according to Forbes and Javelin Strategy and Research – where do they get those numbers from?) depends on simplistic phishing. And whether technologies like PassMark – which do almost the same as Yahoo!’s (but worse) and cost their customers, financial institutions, $1 per customer per year – are worth it is anyone’s guess. Either way, phishing is doomed.

False sense of security

Has anyone noticed the security seals on Web sites? If not, here’s what they look like:

Verisign Globalsign Entrust 

This is how they work: you click on the seal, and a pop-up window opens telling you that the bearer of the seal is indeed who they claim to be. Plus some marketing material and sometimes a link to an abuse report form. Please go to the web sites of the SSL certificate vendors to see this amazing functionality yourself. Moreover, according to Verisign:

Displaying the seal on your Web site can increase visitor-to-sales conversions, lower shopping cart abandonment, and result in larger average purchases.

They also call it a trust mark. Never mind that the real trust mark is the padlock displayed by the browser. Well, there’s one problem with that: not too many people pay attention to the padlock. So someone in the marketing department came up with the seal idea.

In reality the seals closely resemble Web page ads. And they have a similar role: the seals allow vendors of SSL certificates to collect information about visitors to the Web sites that use those SSL certificates. Thawte even displays a convenient invisible image, the type often used for user tracking, to those who click their seal.

Meanwhile users tend to ignore picture ads – especially those saying “click me”. So the primary, advertised function isn’t achieved. Not that the picture, or the pop-up window, proves anything. Spoofing is trivial.

Commercial certification authorities must end this practice. As something that gives a false sense of security, the secure seal is bad for security.

Crack the PIN

Security of the PINs (Personal Identification Numbers) used in your debit and credit cards is an interesting topic. Behind the scenes, the way PINs are handled has evolved together with science, technology, and business. And secure operation has always been the number one priority here.

For example, take PIN entry devices. While the IT industry struggles with the concept of endpoint security for computer systems, financial institutions have had vast networks of secure PIN pads for ages. They are at least tamper-evident, initialised in secure environments and rendered unusable if someone tries to change them.

Attacks on PINs evolve too. The rapid increase in computing capacity has made PIN brute forcing possible. Here’s the attack against VISA PVV DES encryption. Further cryptanalysis gave us the decimalisation table attacks – which also require quite a high level of access to the systems dealing with PINs.

Along comes The Unbearable Lightness of PIN Cracking. This “attack” not only requires something like full ownership of an ATM processing network, but also uses certain APIs to the hardware security modules that generally don’t exist. Yes, that’s an illustration of the unbearable lightness of sensationalist bulldust.

Which got me thinking – are PINs really so secure? And I came to the conclusion that one trivial attack – namely, distributed manual brute forcing – is largely overlooked. The idea is simple: as most cards have four-digit PINs and the card’s magnetic stripe is easily copied, massively parallel brute forcing yields certain success. Either the scenario is thought to be too hard to implement (ATMs were quite rare just a few years back), or the risk is considered low for another reason – I don’t know. Still, the attack doesn’t seem to be publicly discussed anywhere – so I have published an article about it in 2600 – The Hacker Quarterly. I also think that fraud monitoring systems may not be much of a help in certain situations – namely, if the PIN is verified against the PIN verification value stored on the card before the transaction is sent to the issuer for authorisation (funds available checks, etc). If that is the case, unsuccessful PIN tries aren’t visible to the bank – and the whole distributed PIN brute forcing attempt will be virtually undetectable.
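A back-of-the-envelope calculation shows why the numbers favour the attacker – assuming uniformly random four-digit PINs and the usual three tries before an ATM swallows the card:

```python
# Back-of-the-envelope yield of distributed manual PIN brute forcing,
# assuming uniformly random four-digit PINs and three ATM tries per card.
def expected_cracked(cloned_cards, tries_per_card=3, pin_space=10_000):
    """Expected number of cards whose PIN is guessed."""
    return cloned_cards * tries_per_card / pin_space

# With 100,000 copied magnetic stripes, about 30 PINs fall:
print(expected_cracked(100_000))  # -> 30.0
```

Real PINs aren’t uniformly distributed (people pick birthdays and 1234), so the actual yield would only be higher.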

Similarly to Windows security, backwards compatibility is going to be risky for the banks for a long while.

Network QoS: a losing game

Achieving a reliable quality of service (QoS) mechanism on networks is a long and unfulfilled dream of network engineers. As circuit-switched networks become a rarity, the concern of bandwidth starvation doesn’t go away, and the QoS topic is as lively as ever.

Looking at the history of the subject, we can make an interesting observation: all attempts at network QoS are some kind of a failure. It all started when IP, the Internet Protocol, was in its infancy and SNA (IBM’s Systems Network Architecture) was looking like a respectable candidate for a business-oriented universal network protocol stack. SNA introduced a concept of class of service back in the nineteen-seventies. Search for TERMPRIORITY for details. For example, this one:

Transaction processing priority is equal to the sum of the terminal priority, transaction priority, and operator priority, not exceeding 255.

Amazing idea. It went nowhere.
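For the curious, the quoted dispatch rule is as simple as it sounds when expressed as code:

```python
# The quoted dispatch rule as code: sum the three priorities, cap at 255.
def transaction_priority(terminal, transaction, operator):
    return min(terminal + transaction + operator, 255)

print(transaction_priority(100, 100, 100))  # -> 255 (capped)
print(transaction_priority(10, 20, 30))     # -> 60
```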

The Internet Protocol was born and raised without QoS. Every now and then people asked – why are my downloads so slow? Can we really talk online? And (this is really one of the FAQs) – how do I make sure that the boss doesn’t notice that hundreds of other people are using the same channel to the Internet?

Implementing QoS was one of the suggested answers. Early on, a byte in the IP header was allocated for TOS, the Type of Service – but its definition is ever-changing. We had a protocol with the cool name RSVP. We have diffserv. Microsoft incorporates QoS features in Winsock – this actually helps to solve the boss problem… But network guys aren’t Windows guys, so identity awareness is out of the question, and application awareness is rather limited: protocols that use dynamic port ranges and those tunnelled through HTTP (and perhaps SSL) are both unsupported by router-based traffic shaping – the favourite QoS solution. Which is only manageable in point-to-point scenarios and quickly becomes a nightmare as an enterprise network grows.
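That TOS byte can still be set per socket, for what it’s worth. A sketch (Linux-specific; whether any router along the path honours the marking is another question entirely):

```python
# Sketch: marking traffic via the IP TOS byte mentioned above (Linux).
# Whether any router along the path honours the marking is another matter.
import socket

DSCP_EF = 0x2E             # diffserv "expedited forwarding" class
TOS_VALUE = DSCP_EF << 2   # DSCP sits in the top six bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
# Every datagram sent on this socket now carries the marking.
marked = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

Which neatly illustrates the problem: the application has to cooperate, and most don’t.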

Meanwhile the growth of demand for bandwidth doesn’t seem to slow, and a kilobyte a second gets cheaper every year. So the real solution to bandwidth shortage is increasing the capacity of communication channels. Same as it always was. And those dreaming of network QoS may as well use avian carriers for data transmission.

Let there be ping!

It’s amazing how many system administrators prefer to block ICMP pings. Many don’t even remember the classic justification for it – preventing the Ping of Death attack: that was a concern some 10 years ago. So perhaps they are following the least privilege principle? Well, the principle is to take away unneeded access.

And this is where paranoia fails the admins. Ping is not a necessity but it’s bloody useful, for many reasons:

  • It’s a very convenient way of checking connectivity – one that you can talk through over the phone, with an average user on the other end;

  • ICMP ping with increasing buffer sizes is actually the best way to troubleshoot MTU issues, which still occur a lot (especially in organisations that use excessive arrays of redundant firewalls);

  • The protocol doesn’t create much load on the system;

  • Ping monitors are a good complement to application-aware availability monitoring systems;

  • And allowing ICMP ping to reach your system/network and monitoring its use is a very good basic honeypot. Every intrusion starts with exploration, and the first step of active exploration is usually a ping (as an initial stage of nmap). On the other hand, only sysadmins and other support personnel have a legitimate need for using ping. So exceptions should raise questions.
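The MTU troubleshooting trick above is essentially a binary search for the largest payload that passes unfragmented. A sketch with the ping probe abstracted away (on a real network the probe would shell out to `ping -M do -s <size>` on Linux or `ping -f -l <size>` on Windows):

```python
# Sketch: MTU troubleshooting via pings of increasing size is a binary
# search for the largest unfragmented payload. The probe is abstracted;
# on a real network it would run ping with the don't-fragment flag set.
def find_max_payload(probe, low=0, high=1472):
    """Largest payload size for which probe(size) succeeds."""
    best = low
    while low <= high:
        mid = (low + high) // 2
        if probe(mid):
            best = mid
            low = mid + 1
        else:
            high = mid - 1
    return best

# A fake link with a 1400-byte ceiling, standing in for a lossy VPN path:
print(find_max_payload(lambda size: size <= 1400))  # -> 1400
# Add 28 bytes of IP + ICMP headers to get the path MTU: 1428.
```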

Allowing ping is easy. This is how you do that in Windows Firewall:

Allowing ICMP echo requests in ICF

In enterprise firewalls, that’s not much harder. So I suggest – change your defaults to allow ping!

Alliances of incapable

Does anyone remember United Linux? An attempt by a few Linux distro makers to take on Red Hat, the market leader, by creating a common product core, it became a spectacular failure.

Many didn’t learn the lesson. There are two other industry alliances, both working in the information security space, that look very much like the abovementioned failure.

The first one is called the Liberty Alliance. The stated goal is to create open standards for federated identity management, as well as business and deployment guidelines and best practices for managing privacy. The real goal was to respond to Microsoft’s Hailstorm (or .Net My Services). Microsoft’s initiative never materialised, but the Liberty Alliance drags on, without focus and despite really good and viable alternatives. They even release specifications – as useful as the Microsoft® .NET My Services Specification, also available (from $0.01).

The other alliance is OATH – the Initiative for Open Authentication. The stated goal is to address issues like theft of information and unauthorised access with a set of open standards. OATH is taking an all-encompassing approach, delivering solutions that allow for strong authentication of all users on all devices, across all networks. The real goal is to counter the market advances of RSA Security (and its really good proprietary one-time password solution).

These are the issues with the alliances: they are created based on marketing considerations; they attempt all-encompassing solutions and position themselves as best practice from the beginning, before gaining any credibility outside of the alliance members and their customers; and their strategy is dictated by their competition.

Grassroots movements with no obvious corporate alignment produce much more valuable outcomes. 

Decision making too hard

Amazing news from the US:

The Federal Communications Commission has officially grounded the idea of allowing airline passengers to use cellular telephones while in flight.

Existing rules require cellular phones to be turned off once an aircraft leaves the ground in order to avoid interfering with cellular network systems on the ground. The agency began examining the issue in December 2004.

In an order released Tuesday, the FCC noted that there was “insufficient technical information” available on whether airborne cell phone calls would jam networks on the ground.

It takes more than two years for a bunch of government employees (with employment benefits many Americans can only dream about) to make the decision that they cannot make a decision. And therefore to leave the restrictions in place. The restrictions that are wrongly presented to us, the airline customers, as a safety measure (flight safety is the responsibility of another authority, the Federal Aviation Administration).

It probably would be cheaper for American taxpayers to finance full-scale testing, with cell networks and airplanes stuffed with hundreds of active mobile phones flying above them somewhere in the Arizona desert. But apparently the government bureaucrats aren’t interested in making decisions based on facts. That’s sad.

Who needs standards like this?

The Payment Card Industry (PCI) Data Security Standard is interesting light reading. It incorporates all the best practices of running IT infrastructure securely, in condensed form.

And it’s a perfect illustration of what’s wrong with following the best practices blindly. Let’s see:

Firewalls are a key protection mechanism for any computer network. 

No they aren’t. Not the current generation of firewalls anyway. Lacking both identity and application awareness, and increasingly helpless in preventing security exposures, firewalls are more or less useless. Identity and access management systems are the key protection mechanism.

For wireless environments, change wireless vendor defaults, including but not limited to, wired equivalent privacy (WEP) keys, default service set identifier (SSID), passwords, and SNMP community strings. Disable SSID broadcasts.

SSID is a basic network identification mechanism and has nothing to do with security. You cannot hide SSID if the network is actually used. Security through obscurity is not real security – and in this case even obscurity isn’t achieved.

Implement only one primary function per server (for example, web servers, database servers, and DNS should be implemented on separate servers).

This comes down to an old argument: “If my web server is compromised, my database server is still safe”. Never mind that all users of the Web server are compromised as well. Never mind virtualisation. You cannot avoid a point of security failure – however many servers you use for your setup, the threat models and overall risk will be roughly the same. Besides, as history shows us, running everything on a single mainframe is a valid approach to building systems.

Ensure that anti-virus programs are capable of detecting, removing, and protecting against other forms of malicious software, including spyware and adware.


And this is my favourite bit:

For wireless networks transmitting cardholder data, encrypt the transmissions by using WiFi protected access (WPA or WPA2) technology, IPSEC VPN, or SSL/TLS. Never rely exclusively on wired equivalent privacy (WEP) to protect confidentiality and access to a wireless LAN. If WEP is used, do the following:
• Use with a minimum 104-bit encryption key and 24 bit-initialization value
• Use ONLY in conjunction with WiFi protected access (WPA or WPA2) technology, VPN, or SSL/TLS
• Rotate shared WEP keys quarterly (or automatically if the technology permits)
• Rotate shared WEP keys whenever there are changes in personnel with access to keys
• Restrict access based on media access code (MAC) address.

MAC address restriction is useless, as MAC information is easily intercepted and forged; you cannot use WEP in conjunction with WPA or WPA2 (a factual error); and using an IPsec VPN or SSL makes any WEP configuration redundant – so the whole paragraph isn’t needed.

I can go on. The PCI DSS is full of clichés, unjustified requirements, and unjustifiable requirements. It also lacks detail where it might help. I’m pretty sure that TJX, the company that lost 45 million customers’ card details, has successfully passed its PCI DSS audits – as required of a Level 1 merchant. Which highlights issues with both the audits and the standard. It’s much better not to have a standard than to have a useless one.