Spyware Sucks
“There is no magic fairy dust protecting Macs" – Dai Zovi, author of “The Mac Hacker’s Handbook"

Ransomware and lazy coders

April 29th 2006 in Uncategorized

I’ve just been reading this article about the latest “ransomware” to hit the streets:

I’m sure Kaspersky will forgive me for quoting the sections pertinent to this post:

“I think we have an interesting development going on here, I think there are two different types of ransomware.   Real ransomware, which encrypts your data or does other nasty stuff.  And malware which claims to do all sorts of nasty stuff but actually doesn’t. It’s bluffing, like bluff poker.

Ransomware has gotten quite some media attention and now criminals are trying to simply bluff people into giving up their money, instead of having to write difficult code.”

Writing difficult code… it’s a good point.  It’s amazing how much stuff out there nowadays is being created by script kiddies using various tools to generate their wares.  There was a virus generator around for a while (not sure if it still is) and a rootkit generator as well.  But, when push comes to shove, those script kiddies ain’t that good – without the generators they use they wouldn’t be able to do what they’re doing.

The capabilities of malware, and of malware writers, have been a focus of mine lately.  It’s been said that if we lock things down in one way, the bad guys will simply find a way around our defences.  But when I read things like the Kaspersky article, it reminds me that a lot of the stuff out there won’t adapt to new defences.

The quick money, the easy money – that’s what the vast majority of bad guys are after.  Sure, there are “professionals” out there (popular sentiment placing them in Russia and other eastern bloc countries) who write very sophisticated malware that can be extremely difficult to remove, and a small percentage of such malware is able to get through our firewalls, but what percentage of the bad guys out there have such abilities?

It has been said that if we introduce a particular security feature, the bad guys will see it and bypass it anyway.  I’ve been thinking about that sentiment over the past few days.  I’ve come to realise it’s a pervasive mindset, but it’s one I’m finding hard to accept.  Are we correct to *not* block 95% of the bad stuff via outbound filtering simply because 5% may get through anyway?  If we do block that 95%, how long will it take before the malware adapts and neutralises our measures?  Will it adapt at all?
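To make that 95%/5% question concrete, here is a purely illustrative sketch (not from the original post – the port list and function name are invented for the example) of the decision a default-deny outbound filter makes: traffic to an explicitly allowed destination gets out, and everything else, such as a trojan phoning home on an odd port, is dropped.

```python
# Toy model of default-deny outbound filtering. Real firewalls match on much
# more than the destination port, but the principle is the same: the 95% of
# lazy malware that never adapts simply fails to connect out.

ALLOWED_OUTBOUND_PORTS = {80, 443, 25}  # web and mail, for the sake of example

def outbound_allowed(dest_port: int) -> bool:
    """Return True if an outbound connection to dest_port is permitted."""
    return dest_port in ALLOWED_OUTBOUND_PORTS

# A browser fetching a web page gets out...
print(outbound_allowed(443))
# ...while a bot trying to reach its IRC control channel does not.
print(outbound_allowed(6667))
```

The open question in the post is the one this sketch can’t answer: whether the blocked malware ever bothers to adapt, for instance by tunnelling over port 80 like legitimate traffic.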

I can understand how forcing the bad guys to increase their level of sophistication could be a bad thing – as they get better at what they do, and bypass more and more of our security measures, things get harder and harder for us.  But, at the same time, without that crossing of swords we wouldn’t have the security improvements we now benefit from – a lot of software either would never have come to be, or would never have been improved.
