
Western Digital Green vs Black Drive Comparison

In a recent post I described my new blistering fast Windows Server build, including a parts list.  This server features a 120GB SATA III 6.0Gb/s SSD for the operating system and a single 2TB Western Digital Green SATA III 6.0Gb/s drive (WD20EARX-00PASB0) for VM and data storage.




Some of my readers have suggested that the WDC Green drive will not provide suitable performance compared to a WDC Black SATA III drive.  They also wondered what the true power savings are between the Green and the Black drives.  The Green drive uses less power by spinning at a slower, variable speed (~5400 RPM vs. a constant 7200 RPM for the Black).




I purchased a Western Digital Caviar Black SATA III 6.0Gb/s drive (WD2002FAEX-007BA) so I could benchmark the two drives side by side using HD Tune Pro 5.00 and Microsoft Exchange Server Jetstress 2010 (64-bit).




I ran each set of tests with the Green drive installed in my new server, then replaced it with the Black drive and ran the same set of tests again.  I also ran the tests with the server plugged into a P3 Kill A Watt Electricity Load Meter and Monitor to accurately measure power consumption in kilowatt-hours for comparison.




HD Tune Pro Benchmarks

The following are the benchmark test results for both drives.  The Green drive is on the left and the Black is on the right.




Benchmark Results

The Black drive delivers 17.9% better average transfer speed.  The access time was 17.6ms for the Green vs. 12.0ms for the Black.  I was surprised to see that CPU usage was much higher on the Green (6.0%) vs the Black (2.4%).







File Benchmark Results

The File Benchmark test measures read/write transfer speed using a 500MB file in 4KB blocks.  The Black drive achieved 11.5% better performance using 4KB sequential access and 28.2% better using 4KB random access.







Random Access Results

The Random Access test measures the performance of random read or write operations with varying data sizes (512 bytes – 1MB).  Again, the Black drive performed better across the board with an average 31.2% improved performance.  It also offers much better access times.




It’s notable that the Green drive ran this test nearly silently, while the Black drive sounded like a Geiger counter at Fukushima.  Neither of these drives features AAM (Automatic Acoustic Management), so acoustics cannot be adjusted and do not affect the results.







Other Test Results

This benchmark runs a variety of tests which determine the most important performance parameters of the hard drive.  The Black drive offers 35.3% better random seek and 18.3% better sequential read performance.  It also has better transfer speeds from its cache.  Both drives feature a 64MB cache.







Exchange JetStress

I ran Exchange 2010 JetStress on each drive to get an accurate IOPS profile for Exchange 2010 SP2 use.  JetStress was configured for a two-hour test using a single 1TB database and one thread.



  • The Green drive achieved 47.396 IOPS with 10.751ms latency.
  • The Black drive achieved 64.57 IOPS with 15.180ms latency.



I’m not sure why the Black drive’s latency was higher than the Green’s, given the benchmark tests above, but I ran that test twice and got the same results each time.  Even so, the Black drive delivered 36.2% more IOPS than the Green (put another way, the Green delivered 26.6% fewer).







Power Analysis

Green Drive: 1.10 kWh over 27.5 hours

Energy use per hour = (1.10 kWh)/(27.5 hours) = 0.04 kWh per hour of use
Energy use per day = (0.04 kWh/hour)(24 hours/day) = 0.96 kWh over a full day
Cost per day = (0.96 kWh)(18.5 cents/kWh) = 17.8 cents per day

Energy use per year = (0.96 kWh/day)(365 days/year) = 350 kWh/year
Cost per year = (350 kWh/year)(18.5 cents/kWh) = $64.82 per year.



350 kWh/year equates to roughly 700 lbs of greenhouse gas emitted to the atmosphere per year.


Black Drive: 0.72 kWh over 14.75 hours

Energy use per hour = (0.72 kWh)/(14.75 hours) = 0.049 kWh per hour of use
Energy use per day = (0.049 kWh/hour)(24 hours/day) = 1.18 kWh over a full day
Cost per day = (1.18 kWh)(18.5 cents/kWh) = 21.83 cents per day

Energy use per year = (1.18 kWh/day)(365 days/year) = 431 kWh/year
Cost per year = (431 kWh/year)(18.5 cents/kWh) = $79.74 per year.



431 kWh/year equates to roughly 860 lbs of greenhouse gas emitted to the atmosphere per year.



Result: With the WDC Green drive installed, the server used 18.8% less energy than with the Black drive.
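
In case you want to plug in your own meter readings and electric rate, the general formula behind both calculations above is:

Annual cost = (measured kWh ÷ hours measured) × (24 hours/day) × (365 days/year) × (rate per kWh)

For example, for the Green drive: (1.10 ÷ 27.5) × 24 × 365 × $0.185 ≈ $64.82 per year.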







Conclusion

It’s obvious from the test results above that the Western Digital Caviar Black drive outperforms the Green drive.  At the time of this writing the Green drive costs $139 and the Black costs $249.  That’s an extra $110 (a 79% price premium) for a drive that performs on average about 24% better.



In real-life use I don’t see that much difference in performance between the two at this time.  However, this Hyper-V server has twice as much RAM as my last server, so it will potentially host many more VMs (and carry a higher IO load).  For this reason I decided to keep the Black drive, even though it costs more, is a bit noisier when working hard, and uses more energy.  I like muscle cars, too.  :)



If you plan to do RAID, I would most definitely recommend the Black drive because it spins at a consistent 7200 RPM.  Reports say that the variable RPMs on the Green drive can cause read/write errors.



I hope you find this information useful.


Blistering Fast Windows Server – Parts List and Video

Walk with me now, as we take a stroll down Geek lane.  :)








I decided it’s time to replace my old Hyper-V server at home with a new one that’s faster and can run more VMs.  I’ve decided again to build it myself from OEM parts so I can get exactly what I want at the right price.  This article contains my parts list and my reasons for choosing what I did.  Hopefully, it will help you with your own home lab.
I host my private cloud network on a Windows Server 2008 R2 Hyper-V host server.  Hyper-V is perfect for my environment because it allows me to run workgroup applications (Exchange Edge Transport and IIS) directly on the host, as well as host my virtual domain servers.

My current Hyper-V server is an AMD x64 dual-core rig with 16GB RAM and two SATA drives, one for the OS and another for VMs.  I built it about 3 years ago when I was on the Windows Server 2008 TAP and it has served me well.  But with Windows Server 8 and Exchange 15 right around the corner, I wanted to be sure I could run these new versions.

My Design Requirements
As with most customers, I have competing requirements for this new server:
  • Minimum of 4 cores
  • Windows Server 8 capable.  Hyper-V for Windows 8 requires hypervisor-ready processors with Second Level Address Translation (SLAT), as reported by Microsoft at BUILD.
  • 32GB of fast DDR3 RAM
  • Must support SATA III 6Gb/s drives
  • Must have USB 3.0 ports for future portable devices
  • Must be quiet.  This server is sitting next to me in my office (aka, the sunroom) and I don’t want to hear it at all.
  • Low power requirements
  • Small form factor
  • Budget: ~$1,000 USD
My RAM requirements drove most of this design.  Since this would be based on a desktop motherboard (server mobos are too big and ECC RAM is too expensive), I first looked for 4x8GB (32GB) DDR3 RAM.  Then I looked for a small mobo that would accept that much RAM, then a processor for that mobo.
 
Here’s my parts list, including links to where I purchased each item and the price I paid:
  • Intel Core i5-2400S Sandy Bridge 2.5GHz (3.3GHz Turbo Boost) LGA 1155 65W Quad-Core Desktop Processor, Intel HD Graphics 2000 (BX80623I52400S) – $193.00 – Amazon
  • Intel BOXDH67BLB3 LGA 1155 Intel H67 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard – $85.99 – NewEgg
  • Komputerbay 32GB DDR3 (4x 8GB) PC3-10600 10666 1333MHz DIMM 240-Pin RAM Desktop Memory 9-9-9-25 – $225.00 – Amazon
  • OCZ Agility 3 AGT3-25SAT3-120G 2.5″ 120GB SATA III MLC Internal Solid State Drive (SSD) – $129.99 – NewEgg
  • Western Digital Caviar Green WD20EARX 2TB 64MB Cache SATA III 6.0Gb/s 3.5″ Internal Hard Drive – $114.99 – NewEgg
  • ASUS 24X DL-DVD Burner SATA II – $19.99 – NewEgg
  • AeroCool M40 Cube Computer Case (Micro ATX, LCD Display, 2x 5.25″ Bays, 3x 3.5″ Bays, 4x Fan Ports, Black) – $79.99 – TigerDirect
  • Antec EA-380D Green 80 PLUS BRONZE Power Supply – $44.99 – NewEgg
  • ENERMAX UC-8EB 80mm Case Fan – $9.99 – NewEgg
  • nMEDIAPC ZE-C268 3.5″ All-in-one USB Card Reader with USB 3.0 Port – $16.99 – NewEgg
  • Rosewill RX-C200P 2.5″ SSD / HDD Plastic Mounting Kit for 3.5″ Drive Bay – $4.99 – NewEgg

Total:  $925.91


I was a little worried about the Komputerbay RAM.  I’d never heard of them before, but they offer a lifetime warranty, and 32GB of DDR3 1333 (PC3 10666) RAM was $54 cheaper than what I could find at NewEgg.  In the end I’m very pleased with my decision.
I chose different sources for the best price.  NewEgg is my go-to vendor for most items.  They charge sales tax in California, but I have a ShopRunner account that gives me free 2-day shipping on all these items.  Amazon was the smart choice for the bigger ticket items since they don’t charge tax and I could get them delivered with a 30 day free trial of Prime 2-day shipping.  Not to mention the fact that I had a $500 Amazon gift card that I won at TechEd 2011 from my good friends at Vision Solutions!  TigerDirect was the only source for this great AeroCool micro ATX cube computer case.
All the items were delivered the same day, and I started putting the server together that night.  Careful assembly took about 90 minutes and everything went together perfectly.
It’s a Geek Christmas!

All the parts freed from their cardboard prisons

The only other item I added was a dual port Intel PRO/1000 MT Server Adapter that I already had.  I also used L-bend right angle SATA cables instead of the two that came with the Intel motherboard, due to the short clearance between the PSU and the back of the drives (I knew this going in).
The innovative AeroCool M40 micro ATX case opens up like a book for easy access.  The power supply, hard drives and DVD drive(s) are in the top half and everything else is down below.  It includes a nearly silent 120mm front fan and has room for one more on the top rear section and two 80mm fans on the bottom rear section.  I added a single silent 80mm fan on the bottom to push warm air out.  The case temperature has never gone above 26.4C and it’s completely silent.
View from above showing the Antec PSU, the 3.5″ and 5.25″ drive cages and the unused PSU cabling

View from the hinged side, showing motherboard placement

I’m using the OCZ 120GB SATA III SSD drive for the operating system and pagefile, Windows Server 2008 R2 Enterprise for now.  I’ll upgrade the server to Windows Server 8 when it goes RTM.  In the meantime, I’ll build and test beta versions as VMs.  I have to say that this SSD drive was one of the best choices for my new system.  It’s blistering fast!  Windows Server 2008 R2 SP1 installed in just 6 minutes!!  Take a look at the video below to see that it takes only 20 seconds to get to a logon screen from a cold start, and half of that time is for the BIOS POST!

The Intel Core i5 quad-core Sandy Bridge processor has amazing graphics built in.  I’m able to run Windows Server 2008 R2 with the Aero theme at 1920×1080 HD resolution with no difference in performance.  It’s possible to overclock this system, but it’s plenty fast for me and I value stability over speed.  I love the fact that it draws only 65W!  This not only saves electricity, it keeps the case cool, which lowers the cooling requirements.
The bottom half with the case split open. The I5-2400s CPU came with this huge low profile CPU cooler.

Because the Intel DH67BL is a desktop motherboard, it came with drivers that did not work out of the box with Windows Server 2008 R2.  I downloaded the latest drivers from Intel and most installed fine.  The only items I had trouble with were the built-in Intel 82579V Gigabit network adapter and the integrated Intel HD Graphics drivers.  Intel “crippled” the NIC driver installer so that it won’t install on a server platform.  See this article which explains how to re-enable it.  The video driver installed most of the way, but the installer crashed when trying to register a DLL.  It installed fine after a restart.
I also used a Western Digital Green 2TB SATA III drive for storage of my Hyper-V VMs.  I’ve always used Western Digital drives and I’ve never had a problem with them.  The WD Green line saves power, runs cool and quiet, and delivers 6 Gb/s performance.
Photo of the completed server.  I placed a DVD on top for scale.

This is by far the fastest server I’ve ever worked on.  I’m extremely happy with it.  I haven’t bothered running any benchmarks* on it – I just know that it’s fast enough for my needs and has plenty of RAM so I can run more VMs.
I hope this article helps you to build your own home lab server.   Please let me know if you have any questions.

* There are lies, damn lies, and benchmarks.

Fix for DCOM 10009 Errors in Exchange 2010 SP1

You may notice DistributedCOM 10009 errors in the Windows Server 2008 R2 System Event Log whenever you run any of the following Exchange 2010 SP1 cmdlets:

  • Get-OWAVirtualDirectory
  • Get-WebServicesVirtualDirectory
  • Get-ActiveSyncVirtualDirectory




The DCOM 10009 error reads as follows:



Log Name:      System
Source:        Microsoft-Windows-DistributedCOM
Date:          7/1/2011 10:16:11 AM
Event ID:      10009
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      CAS01.domain.com
Description:
DCOM was unable to communicate with the computer CAS02.domain.com using any of the configured protocols.

This happens because of a security context error when invoking an RPC call to the remote CAS server.  The fix is to direct the RPC Runtime to ignore delegation failures.  This can be done by configuring the registry on both the source and target machines, but it is more easily done using Group Policy.



To configure IgnoreDelegationFailure manually (a one-line reg.exe equivalent follows these steps):

  • Run REGEDIT on the source computer
  • Navigate to HKLM\Software\Policies\Microsoft\Windows NT\Rpc
  • Create a new DWORD value called IgnoreDelegationFailure with a value of 1
  • Restart the computer
  • Repeat for each Exchange 2010 SP1 Client Access Server
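
If you’d rather script it than click through REGEDIT, the same value can be set with a single reg.exe command from an elevated prompt (a sketch of the manual steps above; a restart is still required afterward):

reg add "HKLM\Software\Policies\Microsoft\Windows NT\Rpc" /v IgnoreDelegationFailure /t REG_DWORD /d 1 /f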

 To configure this setting using Group Policy:

  • Open the Group Policy Management Console
  • Edit the Group Policy Object (GPO) that applies to the Exchange 2010 SP1 servers.  I usually use the Default Domain Policy.
  • Navigate to Computer Configuration > Policies > Administrative Templates > System > Remote Procedure Call
  • Double-click Ignore Delegation Failure.
  • Enable the policy and set Ignore Delegation Failure to ON.
  • Restart the Exchange 2010 SP1 Client Access Servers

This DCOM 10009 error does not seem to affect Windows Server 2008 servers, only Windows Server 2008 R2.

Turn your server into an iSCSI SAN with Microsoft iSCSI Software Target 3.3

Microsoft has released Microsoft iSCSI Software Target 3.3, which turns your Windows Server 2008 R2 server into an iSCSI target.  This free component provides storage (disks) over a TCP/IP network to clients running iSCSI initiator software, such as the Microsoft iSCSI Software Initiator Version 2.08 (also free) for Windows computers.  There’s also an iSCSI client inside the target package.

iSCSI targets provide centralized, software-based and hardware-independent iSCSI disk subsystems in storage area networks (SANs).



iSCSI Software Target has been around for several years as part of Microsoft Windows Storage Server.  Now Microsoft has made it available for Windows Server 2008 R2.



Here’s how to use it:

  • Download Microsoft iSCSI Software Target 3.3 to your Windows Server 2008 R2 server, and double-click it to expand the package and open the installer page.
  • Click iSCSI Software Target (x64) in the installer to run the installation wizard.
  • Run the Microsoft iSCSI Software Target application from the Administrative Tools menu.
  • Right-click iSCSI Targets and select Create iSCSI Target.
  • Click Next and enter a name and description for the target (for example, VHDTarget1).  Then click Next.
  • Enter the iSCSI Qualified Name (IQN) for the target.  The IQN usually takes the form iqn.<year-month>.<reversed domain name>:<target name>.  For example:
iqn.2011-06.com.expta.server1:VHDTarget1
  • Supply a description, then click Next and Finish.
  • Right-click Devices and select Create Virtual Disk.
  • Click Next and enter the path for the new VHD, then click Next again.
  • Enter a size for the new VHD in MB and click Next.
  • Enter a description and click Next.
  • On the Access screen, click Add and select the target name you created (i.e., VHDTarget1).
  • Click Next and Finish.
  • Right-click the virtual disk and select Disk Access > Mount Read/Write.
You can now connect your iSCSI clients to the new target.
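
On the client side, you can connect with the built-in iscsicli.exe utility instead of the iSCSI Initiator control panel.  Here’s a minimal sketch, assuming a hypothetical target server address of 192.168.1.10 (substitute your own address and your target’s IQN):

iscsicli QAddTargetPortal 192.168.1.10
iscsicli ListTargets
iscsicli QLoginTarget iqn.2011-06.com.expta.server1:VHDTarget1

Once logged in, the new disk appears in Disk Management on the client, where you can bring it online, initialize it, and format it.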

Lync Server 2010 Installation Fails: Prerequisite installation failed: Wmf2008R2

Ran into a new issue today installing Lync Server 2010 on a freshly baked Windows Server 2008 R2 SP1 server.  During the Setup Lync Server Components step of the installation, it failed with the error, “Prerequisite installation failed: Wmf2008R2”.




WMF is the Windows Media Format Runtime.  All Front End Servers and Standard Edition servers where conferencing will be deployed must have the Windows Media Format Runtime installed.  WMF is required to play the Windows Media Audio (.wma) files that the Call Park, Announcement, and Response Group applications use for announcements and music.

The issue here is that the WMF package version has changed for Windows Server 2008 R2 SP1.  The workaround is to run the following command from an elevated CMD prompt, as shown: 
%systemroot%\system32\dism.exe /online /add-package /packagepath:%windir%\servicing\Packages\Microsoft-Windows-Media-Format-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.mum /ignorecheck



Once the package is updated, restart the computer when prompted and continue with the installation.  The Lync setup check will detect that a proper version is installed and will proceed.
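
If you want to verify the package yourself, you can list the installed Media Format package with DISM (a quick diagnostic check, not part of the official fix; adjust the findstr pattern as needed):

%systemroot%\system32\dism.exe /online /get-packages | findstr /i "Media-Format"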

More Alphabet Soup – MCITP: Virtualization Administrator

Last Friday I passed the 70-699 TS: Windows Server 2008 R2, Desktop Virtualization exam.  I don’t know why it took me so long to get around to this one, but with this exam I now hold the MCITP: Windows Server 2008 R2, Virtualization Administrator credential.





I add this to my other MCITP certifications: MCITP: Enterprise Administrator, MCITP: Enterprise Messaging Administrator, and MCITP: Enterprise Messaging Administrator 2010.  All this makes for a very busy looking business card, but it’s worth the hard work!

How to Remove Windows 7 / Server 2008 R2 Service Pack 1 Backup Files

So you’ve updated your Windows 7 and Windows Server 2008 R2 computers with Service Pack 1 (SP1), and everything is running great.  You’ve tested your applications and haven’t run into any compatibility issues, so now you’d like to delete the Service Pack 1 backup files.  Here’s how to do it.
Note: The Service Pack Backup Files allow you to uninstall SP1, rolling the operating system back to RTM.  Once the backup files are deleted, you can no longer roll the system back.  Make sure you have given enough time to ensure that the system is behaving properly with SP1 before deleting the backup files.
The process of deleting the Service Pack backup files is the same in both Windows 7 and Windows Server 2008 R2.  Deleting the SP1 backup files will reclaim about 540MB on the system drive.
  • Click the Start button and type cleanup in the search bar to run the Disk Cleanup utility.
  • Scroll through the list of Files to Delete, and select Service Pack Backup Files, as shown below:

  • Click OK to delete the Service Pack 1 backup files.  This will take a few moments.
I typically run a disk defragmentation cycle after the SP1 backup files have been removed, since this is a fairly large amount of data to remove.
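
If you prefer the command line, or Disk Cleanup isn’t handy on your server, DISM can remove the superseded service pack files directly.  Run the following from an elevated prompt; the effect is the same, and SP1 can no longer be uninstalled afterward:

dism /online /cleanup-image /spsuperseded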

Windows Server 2008 R2 and Windows 7 SP1 Releases to Manufacturing Today


The Microsoft Windows Server Team announced that Service Pack 1 for Windows Server 2008 R2 and Windows 7 was released to manufacturing (RTM) today.  Along with numerous bug fixes and security improvements, SP1 offers two significant new features: Dynamic Memory and RemoteFX.

Dynamic Memory pools all the memory available on a physical host and then dynamically distributes available memory, as it is needed, to virtual machines running on that host.  With Dynamic Memory Balancing, virtual machines will be able to receive new memory allocations, based on changes in workload, without a service interruption.  This is particularly useful in VDI implementations.

RemoteFX lets you virtualize the Graphics Processing Unit (GPU) on the server side and deliver rich media and 3D user experiences to VDI clients.

Service Pack 1 for Windows Server 2008 R2 and Windows 7 will be available to current customers of the Windows Volume Licensing program, as well as MSDN and TechNet subscribers, on February 16, 2011.  On February 22, both will be available to all customers through Windows Update and will come preinstalled on newly ordered servers.

Problems installing UcmaRedist.msi on Windows Server 2008 or R2


You may have problems installing UcmaRedist.msi (the Microsoft Office Communications Server 2007 R2, Microsoft Unified Communications Managed API 2.0 Core Redist 64-bit) on Windows Server 2008 or 2008 R2.  I ran across this myself when installing the Microsoft Office Communications 2007 R2 Web Service Provider for Lync Server 2010.

You receive the following error:
Microsoft Office Communications Server 2007 R2, Microsoft Unified Communications Managed API 2.0 Core Redist 64-bit installation requires Microsoft .NET Framework 3.5. Installation can not continue.
This happens if you have the .NET Framework 4.0 installed.  Uninstall both of the .NET Framework 4.0 components in Programs and Features, restart the server, and install UcmaRedist.msi.
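
If you’re not sure which .NET Framework versions are installed, a quick way to check is to query the registry from a command prompt (just a diagnostic sketch; the v4 subkeys correspond to the 4.0 Client and Full components you’d uninstall):

reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP" /s /v Version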

Dynamic Memory and RemoteFX in Windows Server 2008 R2 SP1

When I was at TechEd in New Orleans I got a chance to talk with Vijay Tewari, Principal Program Manager for the Microsoft Virtualization Team, about Dynamic Memory in the upcoming Service Pack 1 for Windows Server 2008 R2. 

In case you’re not familiar with Dynamic Memory, it allows you to specify a minimum and maximum amount of RAM that a Hyper-V guest can use.  The VM starts with the minimum amount of RAM you specify, and the host server automatically allocates additional RAM to the VM as needed, up to the maximum you have specified.  Dynamic Memory will also automatically reclaim RAM when it is no longer needed.  Pretty sweet!  This provides higher density of VMs on a Hyper-V host since memory can be oversubscribed.  Keep in mind, though, that memory oversubscription can have a big performance impact if Hyper-V is forced to page RAM out to the pagefile.  Still, this has big advantages, especially for VDI deployments.

The other “big thing” in Windows Server 2008 R2 SP1 is RemoteFX.  This technology came into being when Microsoft purchased Calista Technologies in 2008.  RemoteFX allows the VMs on a Hyper-V host to access the host’s Graphics Processing Unit (GPU) for superior video output in the guest.  This allows remote workers to enjoy the same rich user experience over a network as with a locally executing desktop.  Remote clients only need to support the color depth required to view the output, so you can provide advanced GPU capabilities to all your remote clients using a single GPU on the Hyper-V host.

RemoteFX is a feature that you enable on the Hyper-V host, not on the VMs.  Once the RemoteFX feature has been installed, a new option to enable RemoteFX is available within the settings of each guest VM.  This means that even though you’ve enabled RemoteFX on the host, resources are only allocated for the guests you choose.

RemoteFX will require a new RDP client that supports the new capabilities, which should be available in the same release timeframe.  RemoteFX will also work with Remote Desktop Gateway deployments.  Microsoft recommends 200MB of graphics RAM per VM that uses RemoteFX.

The public beta for Windows Server 2008 R2 SP1 is expected to be released by the end of July 2010.  The same service pack is used for both Windows Server 2008 R2 and Windows 7, simplifying deployment.