Monthly Archives: March 2012

USPS Click-N-Ship abused in malware spam

This campaign begins with an email that looks like this:

The email claims that you have been charged some amount of money to have a shipping label created. In this case, we were charged $47.44. Because we never actually ordered a shipping label, we might be upset about the charge and click the “USPS Click-N-Ship” link, which APPEARS to take you to “”.

In reality, more than eight hundred destination webpages on more than one hundred sixty (160) websites were advertised in emails seen in the UAB Spam Data Mine that use this template, and none of them go to the United States Postal Service.

A single destination site would often have many subdirectories, all created by the hacker, that contained the link. For example, this Czech website:

1 | | /1xmg2qrr/index.html
11 | | /9hEetc63/index.html
5 | | /CgeknEwU/index.html
14 | | /FP817PwV/index.html
9 | | /hQLv8GxT/index.html
1 | | /LRt1KuAY/index.html
13 | | /qedwZQiv/index.html
1 | | /rSqvJdhP/index.html

The spam messages use a variety of subjects. The ones we saw yesterday were:

count | subject | sender_domain
479 | USPS postage labels order confirmation. |
433 | Your USPS postage charge. |
428 | USPS postage labels receipt. |
403 | Your USPS postage labels charge. |
384 | Your USPS shipment postage labels receipt. |
346 | USPS postage labels invoice. |
322 | Your USPS delivery. |
319 | USPS postage invoice. |
(8 rows)

This was a very light campaign compared to many that we have seen recently. We received more than half of these emails in a single 15-minute span ending at 7:15 AM our time, which would be 8:15 AM on the US East Coast. Our theory is that a new spam campaign, with a never-before-seen malware sample, is sent at the beginning of the East Coast workday as a way to get maximum infections in places like New York City and Washington, DC.

The most common websites, each with its own “random-looking” subdirectories, were:
count | machine
598 |
208 |
150 |
143 |
139 |
138 |
127 |
126 |
126 |
118 |
113 |
112 |
112 |
102 |

(The rest of the list is at the end of this article…)

A Sample Run

Each day in the UAB Computer Forensics Research Laboratory, students in the MS/CFSM program produce a report shared with the government called the “Emerging Threats By Email” report. They take a prevalent “new threat” in that day’s email and document its actions, in part by infecting themselves with the malware! Here’s a sample run-through I did this morning using the techniques followed in our daily report.

We begin by visiting a website advertised in the spam. In this case, I chose: ( /BSg1hNCZ/index.html (400 bytes)

These “email-advertised links” each call javascript files from a variety of other sites. In this example run, visiting the site caused us to load Javascript from the URL below. ( / xTnfi7mG/js.js (81 bytes)

This JavaScript file sets the “document.location” for the current browser
window to be “” with a path of showthreat.php?t=73a07bcb51f4be71. This is a Black Hole Exploit Kit server, which carries out the rest of the infection.

This is the location my run gave this morning . . . yesterday morning’s run used a different Black Hole Exploit Kit location: (20,110 bytes) (14,740 bytes) (dropped calc.exe 151,593 bytes)
MD5 = 44226029540cd2ad401c4051f8dac610
VirusTotal (16/42)

The next two files are dropped because of the Java execution of “Pol.jar”.

At the time of the UAB Emerging Threats by Email report on Friday morning, March 29th, the VirusTotal detections for this malware were “2 of 42”. More than 20 hours later the detection is still only “19 of 42”. ( /WBoTANuY/hBhT7.exe (323,624 bytes)
MD5 = 276dbbb4ae33e9e202249b462eaeb01e
VirusTotal (19/42) ( /sNxQTzEK/bHk6KE.exe (323,624 bytes)
MD5 = 276dbbb4ae33e9e202249b462eaeb01e
VirusTotal (19/42)
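For readers triaging their own machines, indicators like the MD5s listed above can be checked against local files with a few lines of Python. This is a generic sketch (not part of the UAB report process); the `KNOWN_BAD` set simply reuses the hash published above.

```python
import hashlib

# MD5s taken from the dropped files listed above
KNOWN_BAD = {"276dbbb4ae33e9e202249b462eaeb01e"}

def md5_of(path):
    """Compute the MD5 of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def looks_known_bad(path):
    """True if the file's MD5 matches a published indicator."""
    return md5_of(path) in KNOWN_BAD
```

Remember that the Zeus dropper re-packs itself, so an MD5 match is only useful against the exact sample described here.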

The “Zeus file” (the 323,624-byte one) copies itself into a newly created, randomly named directory within the current user’s “Application Data” directory. In the current run, it disguised itself with a “Notepad” icon, claiming to be “Notepad / Microsoft Corporation” in its properties. The file was named peix.exe (but that’s random also). The file does an “in-place update”, so its MD5 changed without the filename changing. The new MD5 this morning was:


Which gives a current VirusTotal detection of (14/42):

AntiVir = TR/Crypt.XPACK.Gen
Avast = Win32:Spyware-gen [Spy]
AVG = Zbot.CO
BitDefender = Gen:Variant.Kazy.64187
DrWeb = Trojan.PWS.Panda.1947
F-Secure = Gen:Variant.Kazy.64187
GData = Gen:Variant.Kazy.64174
Kaspersky = Trojan-Dropper.Win32.Injector.dxrh
McAfee = PWS-FADB!98202808DEA5
Microsoft = PWS:WIn32/Zbot.gen!AF
NOD32 = Win32/Spy.Zbot.AAN
Norman = W32/Kryptik.BKR
Rising = Trojan.Win32.Generic.12BDDB90
VIPRE = Trojan.Win32.Generic.pak!cobra

Most of those definitions just mean “Hey! This is Bad! Don’t Run It!”

Antivirus companies don’t use the same names for most of this stuff as cybercrime investigators do. So, for instance, in the Microsoft lawsuit last week, the filings described criminals involved with three malware families: Zeus, SpyEye, and IceIX. All of these would show a “Zbot” or “Kazy” detection in the group above. PWS means “Password Stealer.” “pak”, “XPACK”, and “Kryptik” just mean that the malware is compressed in a way that implies it is probably malicious.

The bottom line is that this very successful malware distribution campaign has tricked people into installing something from the broader Zeus family (whether Zeus, SpyEye, or IceIX doesn’t really matter to the consumer). Once compromised, that computer is going to begin sharing personal financial information with criminals, and allowing remote control access to the computer from anywhere in the world to allow further malicious activity to occur.

This is the kind of malware that was featured on NBC’s Rock Center with Brian Williams recently, and that was at the heart of the civil action taken by Microsoft, FS-ISAC, and NACHA that led to the seizure of many domain names and some servers controlled by Zeus criminals.

Click to learn more about UAB’s Center for Information Assurance and Joint Forensics Research or to learn about UAB’s Masters Degree in Computer Forensics & Security Management.

other destinations

98 |
96 |
88 |
85 |
84 |
82 |
78 |
77 |
74 |
70 |
67 |
67 |
65 |
62 |
58 |
52 |
52 |
45 |
44 |
44 |
44 |
41 |
41 |
40 |
39 |
38 |
37 |
37 |
34 |
33 |
33 |
30 |
21 |
20 |
16 |
16 |
12 |
11 |
11 |
10 |
10 |
10 |
10 |
10 |

More community TFS build extensions documentation

As part of the ongoing documentation effort, I have recently published more documentation for the following TFS build extension project activities:

  • AssemblyInfo
  • CodeMetric (updated) and CodeMetricHistory
  • File
  • Twitter


Visual Studio Live @ Las Vegas Presentations – Tips and Tricks on Architecting Windows Azure for Costs

Unfortunately I wasn’t able to speak at Visual Studio Live @ Las Vegas as scheduled, due to an illness that made it impossible for me to travel and kept me in bed for a few days.

But even though I wasn’t there, I would like to share with you some of the key points on the topic “Tips and Tricks on Architecting Windows Azure for Costs”.

Tips & Tricks On Architecting Windows Azure For Costs
View more presentations from Nuno Godinho
The key points to achieve this are:
  • Cloud pricing isn’t more complex than on-premises pricing, it’s just different
  • Every component has its own characteristics; adjust them to your needs
  • Always remember that requirements impact costs, so choose the ones that are really important
  • Always remember that developers, and the way things are developed, impact costs, so plan, learn, and then code
  • The Windows Azure pricing model can improve code quality, because you pay for what you use and can discover very early where things are going off plan
  • But don’t over-analyze! Don’t freeze just because things have cost impacts; the same things impact you today, the difference is that normally you don’t see them as quickly and transparently. So “GO FOR IT”, you’ll find it’s really worth it.

In upcoming posts I’ll go in-depth into each one of those.

Special thanks to Maarten Balliauw for providing a presentation he did previously that I could build on.

Visual Studio Live @ Las Vegas Presentations – Architecture Best Practices in Windows Azure

Unfortunately I wasn’t able to speak at Visual Studio Live @ Las Vegas as scheduled, due to an illness that made it impossible for me to travel and kept me in bed for a few days.

But even though I wasn’t there, I would like to share with you some of the key points on the topic “Architecture Best Practices in Windows Azure”.

Here are 10 key Architecture Best Practices in Windows Azure:

  1. Architect for Scale
  2. Plan for Disaster Recovery
  3. Secure your Communications
  4. Pick the right Compute size
  5. Partition your Data
  6. Instrument your Solution
  7. Federate your Identity
  8. Use Asynchronous and Reduce Coupling
  9. Reduce Latency
  10. Make Internal Communication Secure

In upcoming entries I’ll go in-depth into each one of those.

Windows To Go

One of my favourite enterprise features that Microsoft is adding to Windows 8 is Windows To Go, which lets you provision a desktop on a USB flash drive and take it with you to boot on any hardware that meets the usual Windows 8 requirements. An IT department can build a desktop image, with applications installed (perhaps some of the intranet apps that you wouldn’t let your staff install on their home PC), and even domain join it before passing it to someone who needs to travel light, or who wants to be able to do some sensitive work on their personal laptop (the one that’s full of spyware and crap because their kids have had the ability to install anything – you know the one – it’s got so many browser toolbars that any web page is only an inch or two tall!). You can even secure it with BitLocker, without requiring a TPM chip in the hardware that’s going to host it.

Speaking of that host hardware, as I said, so long as it would support Windows 8 and will boot from USB, then you’re good to go. You won’t have access to any internal drives in that hardware (unless you’re also the administrator of that machine), but you will be able to use additional devices that you’ve plugged into its other USB ports, for example. When you use Windows To Go on a host PC for the first time, it’s going to do some plug’n’play detection (which may take a few minutes), then continue to boot. Every new bit of hardware is going to be stored in a profile, so the next time you use the same host it’s going to boot much faster (about as fast as you would expect from an internal drive).

Windows To Go isn’t, as a recent TechTarget mailing so cleverly pointed out, the answer to all your “Consumerisation of IT” dreams – they astutely observed that Windows To Go won’t run on an iPad. Running Windows from a USB flash drive on a device that has no USB port is apparently beyond Microsoft – shame on them! ;-)

As an additional security measure, if you need to exit in a hurry (I like to imagine myself using Windows To Go behind enemy lines while I’m on some kind of secret mission – I don’t know why!), then you can just pull the drive out and the machine will freeze. If you don’t push it back into the same USB port within 60 seconds then the machine will reboot. If you knocked it out by accident (because the guy entering the internet cafe wasn’t actually a SPECTRE assassin hot on your heels), then you can plug it back in and carry on – if you were playing a video at the time, for example, it’ll take under a second to continue playback.

So to recap, as the IT guy, you can give somebody a Windows 8 instance (which you trust) that they can boot on their own hardware (which you don’t trust!), and you can continue to manage that instance like you would any other domain computer. You can give them software that you wouldn’t let them install on an untrusted computer without all the expense of giving them a trusted computer that you’ve configured. Just as importantly, your user can do important work stuff on the shiny new laptop that they bought for themselves without having to give it to you so that you can configure it and take away their admin rights. It’s a fantastic step in the right direction where “Bring Your Own Device/Computer” (BYOD/BYOC) is concerned.

With Windows 8 just in Consumer Preview (and Windows Server 8 in Beta) at present, all the details about this feature haven’t been fully released yet, so some of this may not be 100% accurate at the time you read this:

You need at least a 32GB drive (my test image has Windows 8, Office 2010, Windows Live Essentials and a bunch of files on it, and it still has 15GB free). The drive should be USB 3.0, although it will work when plugged into a USB 2.0 port. These flash drives aren’t especially cheap at the moment, and they don’t all work as you’d hope…

When OEMs build drives, their firmware includes (among other things) a Removable Media Bit. The RMB is the thing that tells Windows whether the drive is “fixed” or “removable” (it defines the separation in Windows Explorer). The trouble is that if you get one where the RMB is set to “removable”, Windows won’t do certain things with it. It won’t let you partition the drive, so you can’t use BitLocker; it won’t run Windows Update (including standalone WU packages); it won’t let you download apps from the Microsoft Store; and I dare say there are other things that I haven’t come up against yet. With some drives you can flip the value of the RMB, but on the Kingston DT Ultimate G2 32GB that I have, you can’t (I asked Kingston about this and told them why it was an issue; they’re going to bear it in mind for future products).

The upshot is that while you may be able to get Windows To Go to work today, you might not be able to do everything with it, and you might want to exercise caution before buying a load of drives, even if someone says that it works with a particular model.

All that said, if you want to give it a go, there are step-by-step instructions on the TechNet wiki, and a very informative video from the 2011 BUILD conference. Also, Ars Technica has a step-by-step guide with a slightly different method, using the WAIK and a single partition, so you can do it on a “removable” drive (although you can tweak the TechNet steps to do that too).

Before I forget (and because this is one of the things that I was asked at the TechDays UK IT Camp this week), you are going to be activating Windows via AD or a key management server, hence my pointing out right at the start of this post that this is an enterprise feature.

DNS Changer: Countdown clock reset, but still ticking

Operation Ghost Click

Last November, the FBI’s main website headline was “DNS Malware: Is Your Computer Infected?”. The story detailed the arrest of six Estonian criminals who had infected more than 4 million computers with malware that changed the Domain Name Server settings on those computers. As a result, when a user typed an address into their web browser, or even followed a link on a web page, instead of asking their Internet Service Provider’s DNS server how to reach the computer with that name, they would ask a DNS server run by the criminals.

Most of the time, the traffic still went to the correct address. But at any time of the criminals’ choosing, they could replace any website with content created or provided by the criminals. This allowed them to do things like place an advertisement for an illegal pharmaceutical website selling Viagra on a website that should have been showing an advertisement paid for by a legitimate advertiser.

The case, called “Operation Ghost Click” was the result of many security professionals and researchers working together with law enforcement to build a coordinated view of the threat. The University of Alabama at Birmingham was among those thanked on the FBI website.

DNS Servers and ISC

This case had one HUGE technical problem. If the criminals’ computers were seized and turned off, all of the four million computers relying on them to “find things” on the Internet, by resolving domain names to numeric IP addresses, would fail. They wouldn’t just “default back” to some pre-infection DNS setting; they would simply stop being able to use the Internet at all until someone with some tech savvy fixed the DNS settings on those computers.

Because of this, the court order did something unprecedented. Paul Vixie, from the Internet Systems Consortium, a tiny non-profit in California that helps keep name services working right for the entire world, was contracted to REPLACE the criminals’ DNS servers with ISC DNS servers that would give the right answer to any DNS queries they received. Vixie wrote about his experience with this operation in the CircleID blog on Internet infrastructure on March 27th.

The problem, as Vixie and other security researchers such as Brian Krebs have related, is that the court order was supposed to be a temporary measure, lasting just until the Department of Justice managed to get everyone’s DNS settings back the way they were supposed to be. Back in November, the court decided March 9th would be a good day to turn off the ISC DNS servers.

But are you STILL infected?

Unfortunately, the vast majority of the 4 million compromised computers have not been fixed. On March 8th the court agreed to grant an extension until July 9th. (Krebs has a copy of the court order here.)

But how do you know if YOU are still infected?


When I visit the website “DNS-OK.US” I get a green background on the image (shown above) which tells me that my computer is not using a DNS server address that formerly belonged to an Estonian cybercriminal. (The website is available in several other languages as well.)

The tech behind this is that the website is checking to see if you resolve your DNS by using an IP address in the following ranges: – – – – – –

If you ARE, then you need to assign a NEW DNS SERVER ADDRESS.

The DNS Changer Working Group has a CHECKUP page and a DNS CLEANUP page to explain this process to technical people. Any “computer savvy” person should be able to follow their guidelines to get the job done.
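For the “computer savvy”, the check the DNS-OK.US site performs can also be scripted: compare your configured resolver address against the rogue ranges. A minimal Python sketch follows; note that the ranges below are HYPOTHETICAL placeholders (TEST-NET addresses), standing in for the real list elided above, which should be taken from the DNS Changer Working Group.

```python
import ipaddress

# PLACEHOLDER ranges only. Substitute the rogue ranges published
# by the DNS Changer Working Group before using this for real.
ROGUE_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),     # placeholder (TEST-NET-1)
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder (TEST-NET-2)
]

def is_rogue_dns(server_ip: str) -> bool:
    """True if the resolver address falls inside one of the rogue ranges."""
    addr = ipaddress.ip_address(server_ip)
    return any(addr in net for net in ROGUE_RANGES)
```

On Windows the resolver address comes from `ipconfig /all`; on most Unix systems it is in /etc/resolv.conf.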

Good luck!

Gary Warner
Center for Information Assurance and Joint Forensics Research at the University of Alabama at Birmingham.
Learn more about our Masters Degree in Computer Forensics and Security Management.

Open Source Microsoft–Build MVC, WebAPI, Razor, and WebPages

Scott Guthrie has announced on his blog that, as of this very moment, ASP.NET MVC, ASP.NET WebAPI, and WebPages with Razor syntax have all been open-sourced on CodePlex. That’s huge news. Oh, and the ASP.NET Web Stack repository can be cloned using TFS, Subversion, Mercurial, and the newly added Git support.

So, you may be thinking, “This sounds cool, but what does it mean for me?” It means you’re awesome. It means that you can now take your favorite features and patches for their framework and submit them back to the team for review. It means you can use the framework when it is eventually ported over to Mono and other open-source platforms. It means you’ll eventually be able to run ASP.NET wherever you’d like.

Be sure to check it out and provide feedback to the team. If you’re not sure what type of feedback to provide, choose from the following:

  • “The ASP.NET team just knocked it out of the park with this: Go OSS!”
  • “ScottGu and his team delivered yet again.”
  • “Who said that Microsoft can’t release software using an open source license?”
  • “Congrats to the ASP.NET team for, yet again, exceeding expectations!”

Your choice. In the meantime, great job Microsoft!

SQL Server # Storing Hierarchical Data – Parent Child n’th level # TSQL


Today, I would like to explain one way in which we can store HIERARCHICAL data in SQL tables. A general table structure which people come up with to store this kind of data is:


Where EmployeeID is the unique ID allotted to every new employee record inserted into the table, and ManagerID is the EmployeeID of the employee’s immediate manager. Keep in mind that a manager is also an employee.

Problem Statement

This table structure serves the purpose very well as long as we have a 1-level hierarchy. However, if the hierarchy is n levels deep, the SELECT statement to fetch the records becomes much more complex with this kind of table structure. Suppose we want to fetch the complete TREE of a particular employee, i.e. the list of all employees who are directly or indirectly managed by that employee. How do we do it?

Thanks to CTEs for making life a bit easier: by using them recursively, we can get the work done. Please follow this MSDN link to see an implementation using a recursive CTE.
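The recursive idea behind that CTE can be sketched outside of SQL as well. The following Python sketch walks the same parent/child pairs (the sample IDs match the data used later in this post) the way a recursive CTE would:

```python
from collections import defaultdict

# (EmployeeID, ManagerID) pairs, matching the sample data in this post.
employees = [(1, None), (2, 1), (3, 1), (4, 2), (5, 4)]

def subtree(root_id, rows):
    """Collect root_id and everyone directly or indirectly under it,
    recursing on the parent-child relation like a recursive CTE."""
    children = defaultdict(list)
    for emp, mgr in rows:
        children[mgr].append(emp)
    found = [root_id]
    for child in children[root_id]:
        found.extend(subtree(child, rows))
    return found

print(sorted(subtree(2, employees)))  # [2, 4, 5]
```

Each recursive step plays the role of one iteration of the CTE’s recursive member.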

Suggested Table Structure


Here, I have just included a new column, [Path], of type VARCHAR(MAX). I have taken VARCHAR(MAX) just to make sure the field is long enough to store the complete path, but one can assign an appropriate size per their system’s requirements.

The basic idea of the [Path] column is to store the complete hierarchical path of every employee, separated by a delimiter, as below:


Calculating the new path is very simple. It’s just, {New Path} = {Parent Path} + {Self ID} + {Delimiter}
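That path calculation, and the level formula used later in this post, can be sketched in a few lines of Python. The “.” delimiter here is an assumption for illustration (the post’s own delimiter character did not survive formatting):

```python
DELIM = "."  # assumed delimiter, chosen for illustration

def new_path(parent_path, self_id):
    # {New Path} = {Parent Path} + {Self ID} + {Delimiter}
    return parent_path + str(self_id) + DELIM

root = new_path(DELIM, 1)   # '.1.'  -> a Level-0 path holds two delimiters
emp2 = new_path(root, 2)    # '.1.2.'
emp4 = new_path(emp2, 4)    # '.1.2.4.'

def level(path):
    # Same arithmetic as the SQL level formula: delimiter count minus 2
    return path.count(DELIM) - 2

print(emp4, level(emp4))  # .1.2.4. 2
```

Because every ID is wrapped in delimiters, searching for '.2.' in a path cannot accidentally match an EmployeeID such as 12 or 20.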

Now suppose I want to fetch all the employees who directly or indirectly work under EmployeeID = 2 (the examples below use “.” as the delimiter); I can use the T-SQL below:

SELECT EmployeeID, ManagerID, [Path]
FROM
(
    SELECT 1 EmployeeID, NULL ManagerID, '.1.' [Path] UNION ALL
    SELECT 2, 1, '.1.2.' UNION ALL
    SELECT 3, 1, '.1.3.' UNION ALL
    SELECT 4, 2, '.1.2.4.' UNION ALL
    SELECT 5, 4, '.1.2.4.5.'
) Employee
WHERE [Path] LIKE '%.2.%'

We can use a simple piece of logic to even find out the level of an employee:

SELECT EmployeeID, ManagerID,
       (LEN([Path]) - LEN(REPLACE([Path], '.', ''))) - 2 [Level]
FROM Employee
WHERE [Path] LIKE '%.2.%'


2 is subtracted from the formula because a Level-0 path already contains two delimiters (e.g. “.1.”).


I hope this simple trick saves a lot of time for those who find themselves lost playing with hierarchical data.

Unit testing in VS11Beta and getting your tests to run on the new TFSPreview build service

One of my favourite new features in VS11 is that the unit testing is pluggable. You don’t have to use MSTest; you can use any test framework for which an adapter is available (at the release of the beta this meant the list of frameworks on Peter Provost’s blog, but I am sure this will grow).

So what does this mean and how do you use it?

Add some tests

First it is worth noting that you no longer need to use a test project to contain your MSTest tests; you can if you want, but you don’t need to. So you can:

  1. Add a new class library to your solution
  2. Add a reference to Microsoft.VisualStudio.TestTools.UnitTesting and create an MSTest test
  3. Add a reference to xUnit (I used NuGet to add the reference) and create an xUnit test
  4. Add a reference to the xUnit extensions (NuGet again) and add a row-based xUnit test
  5. Add a reference to NUnit (you guessed it, via NuGet) and create an NUnit test

All these test frameworks can live in the same assembly.

Add extra frameworks to the test runner

By default the VS11 test runner will only run the MSTest tests, but by installing the runner for Visual Studio 11 Beta and the NUnit Test Adapter (Beta), either from the Visual Studio Gallery or via Tools –> Extension Manager (and restarting VS), you can see that all the tests are run.


You can, if you want, set it so that the test runner triggers every time you compile (Unit Testing –> Unit Test Settings –> Run Test After Build). All very nice.


Running the tests in an automated build

However, what happens when you want to run these tests as part of your automated build?

The build box needs to have a reference to the extensions. This can be done in three ways. However, if you are using the new TFSPreview hosted build service, as announced at VS Live, only one method, the third, is open to you, as you do not have access to the VM running the build to upload files other than by source control.

By default, if you create a build and run it on the hosted build service you will see that it all compiles, but only the MSTest test is run.


The fix is actually simple.

  1. First you need to download the runner for Visual Studio 11 Beta and NUnit Test Adapter (Beta) .VSIX packages from the Visual Studio Gallery.
  2. Rename the downloaded files to .ZIP files and unpack them
  3. In TFSPreview source control, create a folder under BuildProcessTemplates for your team project. I called mine CustomActivities (the same folder can be used for custom build extensions, hence the name; see Custom Build Extensions for more details)
  4. Copy the .DLLs from the renamed .VSIX files into this folder and check them in. You should have a list as below

  5. In the Team Explorer –> Build hub, select the Actions menu option –> Manage Build Controllers, and set the version control path for custom assemblies to the new folder.
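Steps 2 and 4 above can be scripted if you have many adapters to stage: a .VSIX package is an ordinary zip archive, so the rename is just a convenience for Windows Explorer. The following Python sketch (function and folder names are my own, not part of TFS) pulls the .DLLs straight out of a downloaded package:

```python
import zipfile
from pathlib import Path

def extract_adapter_dlls(vsix_path, dest_dir):
    """Extract the .DLLs from a .vsix package (a .vsix is a plain zip
    archive, so no rename is needed when scripting this)."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    extracted = []
    with zipfile.ZipFile(vsix_path) as vsix:
        for name in vsix.namelist():
            if name.lower().endswith(".dll"):
                target = dest / Path(name).name
                target.write_bytes(vsix.read(name))
                extracted.append(target.name)
    return extracted
```

The extracted DLLs would then be checked in to the CustomActivities folder as in step 4.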


You do not need to add any extra files to enable xUnit or NUnit tests, as long as you checked in the runtime xUnit and NUnit assemblies from the NuGet packages at the solution level. This should have been the default behaviour with NuGet in VS11 (i.e. there should be a packages folder structure in source control, as shown in the Source Control Explorer graphic above).

You can now queue a build and you should see that all the tests are run (in my case MSTest, xUnit and NUnit). The only difference from a local run is that the xUnit row-based tests appear as separate lines in the report.


So now you can run tests of any type on a standard TFSPreview hosted build box, a great solution for the many projects where just a build and test run is all that is required.
