
Poll: Which new Hyper-V lab server build would you be more likely to buy?

I am preparing to create my 5th generation Super-Fast Hyper-V Lab Server build. As usual, I will create a parts list, photos, videos, and tips about the build on this blog, but I need your help.




I normally stick to a small Micro ATX form factor, which currently supports a maximum of 32GB RAM. I run this build at home and I'm happy that it takes up little room and uses little power. 32GB of RAM is enough to run 6-7 medium/large servers at once, 24×7.



Some IT Pros have asked for a build that supports 64GB RAM so they can run more or larger VMs. A 64GB build requires me to use a traditional ATX form factor motherboard with more DIMM slots. This will use more power and will cost about $900 more.



I realize cost is more of a factor than size for most folks, but this website shows a comparison of ATX vs. Micro ATX case sizes if you're not aware of the difference. The Micro ATX case I usually go with is the same form factor as the “barebones” case shown on the website.



I created the poll below so I can determine which build you would like me to go with for my 5th generation server. I really appreciate your input.





Which new Hyper-V server build would you be more likely to buy?













I will be speaking at the IT/Dev Connections conference, September 15-19 in Las Vegas. There, I will be hosting two sessions: “Build Your Own Super-Fast Exchange Lab for Under $2,000!” and an open-mic forum entitled “Ask the Exchange Experts,” a Q&A about Exchange and Office 365 migration tips and tricks with fellow MVP Tony Redmond.



I will be bringing my latest Hyper-V lab server build to the lab session and will provide tips on how to build, manage, and use the server to advance your IT career. I hope to see you there!


4th Generation Hyper-V 2012 R2 Server for Around $1,200 USD – Parts List and Video!

In honor of the release of Windows Server 2012 R2, I’ve updated my latest server build using the latest components. You can use this home Hyper-V server to create your own private cloud, prototype design solutions, test new software, or run your own network like I do. Nothing provides a better learning tool than hands-on experience!



My last build used a third-generation Intel I5-3470S Ivy Bridge Quad-Core CPU. My G4 build uses a fourth-generation Intel I5-4570S Haswell Quad-Core CPU and a larger, faster 360GB SSD to run active Hyper-V virtual machines. The new components result in a super-fast 7.5-second boot time!



My Design Requirements

This design is a little less cost-focused so I can use the latest Intel processor, faster SSD drives, and a sleek high-performance micro-ATX case. These new components currently add about $200 to the base $1,000 price, but as usual for high-end technology, those costs will go down.  You can probably build it for less even now.

  • Minimum of 4 cores
  • Windows Server 2012 R2 capable. Hyper-V for Windows Server 2012 R2 requires hypervisor-ready processors with Second Level Address Translation (SLAT).
  • 32GB of fast DDR3 RAM
  • Must support SATA III 6Gb/s drives
  • Must have USB 3.0 ports for future portable devices
  • Low power requirements
  • Small form factor
  • Budget: Around $1,200 USD

The processor I chose is the new Intel I5-4570S Haswell Quad-Core CPU. Even though all four cores run at a quick 2.9 GHz, it only uses 65W. The beautiful aluminum heatsink and fan included with the processor keep the CPU running at a cool 25° Celsius (77° F) at room temperature.



As in my previous builds, RAM requirements drove most of this design. Memory is the single most important component in a Hyper-V host. Pairing a super-fast processor with quick, reliable RAM is the key to a good design.



Gigabyte Motherboard – Durable enough to cut a steak on it! :)

Overclocking is no longer only for gearheads and has moved into the mainstream. Most desktop motherboards include self-tuning overclocking to squeeze every last bit of performance out of a rig. I don’t use any of these features, even though they’re available. I prefer stability over speed – and this server is plenty fast enough!



I’ve also found that while all SSDs are fast, some are faster. Drives with high IOPS make for a noticeably faster computer, especially during bootup and long drive operations like copying ISOs and VHDXs.



This build is more stylish than previous builds, using a sleek, high-quality Rosewill Slim MicroATX case. Most µATX cases are designed for desktops and, as such, usually have small 250W-300W power supplies. The included Rosewill 300W µATX power supply works just fine for my build since all the components have low power requirements. Peak power draw for this build is only 186W, giving me plenty of power to spare. This PSU is also designed to keep the case cool by exhausting warm air at the back, along with another built-in 80mm fan on top of the case.



I ordered everything from Amazon because they had the lowest prices. And with Amazon Prime it was all delivered in just two days. Gotta love that! You can even join Prime for free for 30 days and cancel if you want after you get your gear.



Here’s the entire parts list for this server:



Quantity   Item   Description

1   Intel Core i5-4570S Quad-Core Desktop Processor 2.9 GHZ 6MB Cache- BX80646I54570S

This is a 4th generation Haswell Intel processor. It includes the newest Intel HD graphics and runs at a very low 65W. 3 year limited warranty.

1   Gigabyte GA-B85M-D3H LGA 1150 Intel B85 HDMI SATA 6Gbps USB 3.0 Micro ATX DDR3 1600 Intel Motherboards GA-B85M-D3H

I chose this LGA 1150 Micro ATX motherboard over Intel because it has 4x SATA 6Gb/s and 2x SATA 3Gb/s connectors. It also uses the Intel B85 Express chipset, has a UEFI BIOS, has 2x PCI and 2x PCI-Express slots, and USB 3.0 ports. 3 year limited warranty.

2   Corsair Vengeance 16GB (2x8GB) DDR3 1600 MHz (PC3 12800) Desktop Memory (CMZ16GX3M2A1600C10)

1.5V 240-pin dual channel 1600MHz DDR3 RAM with built-in heat spreaders. Lifetime warranty. 10-10-10-27 CAS latency. Great RAM at a great price. Each package contains 2x 8GB DIMMs (16GB). Be sure to buy two packages.

1   Kingston Digital 120GB SSDNow V300 SATA 3 2.5 (7mm height) with Adapter Solid State Drive 2.5-Inch SV300S37A/120G

120GB SATA 6Gb/s (SATA 3) SSD used for the Windows Server 2012 R2 operating system. 85,000 IOPS 4KB random read / 55,000 IOPS 4KB random write. 3 year warranty.

1   Corsair Force Series GS Red 360GB (6Gb/s) SATA 3 SF2200 controller Toggle SSD (CSSD-F360GBGS-BK)

360GB SATA 6Gb/s (SATA 3) SSD used for active VMs (the VMs I normally have running, like a domain controller, Exchange servers, Lync servers, etc.). Toggle NAND for up to 90K IOPS random write speed. 3 year limited warranty.

1   2.5-inch SSD/Hard Drive to 3.5-inch Bay Plastic Tray Mount Adapter Kit

Plastic mounting kit for 2.5″ SSD drives. Holds two SSD drives, stacked on top of each other in the left drive bay.

1   WD Green 2 TB Desktop Hard Drive: 3.5 Inch, SATA III, 64 MB Cache – WD20EZRX

2TB Western Digital Green (low power) SATA 6Gb/s (SATA 3) drive. Used for storing ISOs, seldom used VMs, base images, etc. I usually configure this drive to sleep after one hour to save even more power. 2 year warranty.

1   Lite-On Super AllWrite 24X SATA DVD+/-RW Dual Layer Drive – Bulk – IHAS124-04 (Black)

Great quality DVD burner. It’s cheap, too. I connect this to one of the SATA2 ports on the motherboard. 1 year limited warranty.

1   TRENDnet 32-Bit Gigabit Low Profile PCI Adapter, Retail (TEG-PCITXRL)

The Gigabyte motherboard includes one gigabit NIC. It’s a best practice to add another gigabit NIC for Hyper-V so you can separate host and VM traffic.

1   C&E CNE11445 SATA Data Cable (2pk.)

I need 4x SATA cables for this build. The Gigabyte motherboard comes with two black 18″ SATA cables. Flat (not L-shaped) connectors work best for this build. FYI, there’s no technical difference between SATA2 and SATA3 cables.

2   StarTech 6in 4 Pin Molex to SATA Power Cable Adapter (SATAPOWADAP)

The micro ATX PSU in the Rosewill case has four power connectors for drives, which is just enough — 2x SATA and 2x Molex connectors. Use these adapters to convert the two Molex connectors to SATA. Be sure to buy two.

1   Rosewill Slim MicroATX Computer Case with ATX12V Flex 300W Power Supply, Black/Silver R379-M

Sleek mirror-finished micro ATX case with removable drive bay cage for easy access. Includes quiet 300W PSU, 80mm cooling fan on top, 2x front USB 2.0, and audio ports. Excellent quality.



It took about 90 minutes to assemble everything and take these pictures. The following slideshow shows how I put it all together. Click the slideshow to open the hi-res slideshow in a new page.






The first thing you’ll need to do after building your server is install the Windows Server 2012 R2 operating system. This will take a total of about 8 minutes from DVD. Amazing!

Windows Server 2012 R2 will install default drivers for all the server components. Next, you’ll want to update the BIOS to the latest version and install the optimized drivers available for some components. The Gigabyte GA-B85M-D3H motherboard includes a utilities and drivers disk. Pop the disk in and run setup.exe in <DVD Drive>:\Utility\GIGABYTE\AppCenter.  This will install the Gigabyte AppCenter utility on Windows Server 2012 R2.

Use AppCenter to download and install the latest drivers and utilities. AppCenter can be accessed using the icon in the notification area near the clock. Select Live Update and choose the following updates:

First half of the utilities and updates to install.

Second half of the updates to install.

It will take a few minutes to download and install the software and updates. You may need to restart a couple of times to complete the installation. Live Update in AppCenter makes it a lot easier to install the necessary utilities and drivers to keep your hardware up to date.

Installing utilities and updates.
My motherboard shipped with version F4 of the BIOS. At the time of this article, the latest BIOS version is F7. The @BIOS utility in AppCenter was unable to download the latest version for some reason, so I went to http://www.gigabyte.com/products/product-page.aspx?pid=4567#bios and downloaded the F7 BIOS manually, then used the @BIOS utility to install it from the file.

Updating and flashing the BIOS.
Now you can run Windows Disk Management to initialize, format, and label your Corsair 360GB SSD and Western Digital 2TB drives. Be sure to check my article about Windows Server 2012 deduplication to increase your Hyper-V server density. Now you’re ready to install the Hyper-V role and start making VMs!
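If you prefer PowerShell over Server Manager, the Hyper-V role install and the recommended second-NIC virtual switch can be sketched like this (the adapter name "Ethernet 2" is an assumption; check yours with Get-NetAdapter first):

```powershell
# Install the Hyper-V role and management tools (this restarts the host)
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# After the restart, bind an external virtual switch to the second NIC
# so host and VM traffic stay separated. "Ethernet 2" is an assumed
# adapter name; list yours with Get-NetAdapter.
New-VMSwitch -Name "VM-External" -NetAdapterName "Ethernet 2" -AllowManagementOS $false
```

Setting -AllowManagementOS to $false keeps the host off the VM switch entirely, leaving the onboard NIC dedicated to host management.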

Here’s a short video of the beast in action!




I’ll be doing a demo of this home Hyper-V server at the MVP Showcase at the MVP Summit, November 17th, 2013.  If you’re an MVP and will be going to the Summit, please drop by the MVP Showcase to see the server in action.

As usual, if you have any questions or comments please leave them below. I hope you enjoy reading about these server builds and take the opportunity to make this investment in your career.



Cisco Offers Free Nexus 1000V Integrated Switch for Hyper-V

Hyper-V 3.0 on Windows Server 2012 offers a new feature called an extensible virtual switch.  This feature allows you to replace the Windows integrated virtual switch in Hyper-V with a third-party switch, such as the Cisco 1000V.  You can get a quick overview of Hyper-V extensible virtual switches here.

The Cisco 1000V virtual switch provides many advanced capabilities to Hyper-V VMs such as advanced switching (private VLANs, ACLs, PortSecurity, and Cisco vPath), security, monitoring, and manageability.  Best of all it’s free to download here!


The following information comes from Cisco’s Cisco Nexus 1000V Switch for Microsoft Hyper-V website:

Features and Capabilities

The Cisco Nexus 1000V Switch for Microsoft Hyper-V:
  • Offers consistent operational experience across physical, virtual, and mixed hypervisor environments
  • Reduces operational complexity through dynamic policy provisioning and mobility-aware network policies
  • Improves security through integrated virtual services and advanced Cisco NX-OS features


The following table summarizes the capabilities and benefits of the Cisco Nexus 1000V Switch for Microsoft Hyper-V.
Capability | Features | Operational Benefits
Advanced Switching | Private VLANs, Quality of Service (QoS), access control lists (ACLs), port security, and Cisco vPath | Get granular control of virtual machine-to-virtual machine interaction.
Security | Dynamic Host Configuration Protocol (DHCP) Snooping, Dynamic Address Resolution Protocol Inspection, and IP Source Guard | Reduce common security threats in data center environments.
Monitoring | NetFlow, packet statistics, Switched Port Analyzer (SPAN), and Encapsulated Remote SPAN | Gain visibility into virtual machine-to-virtual machine traffic to reduce troubleshooting time.
Manageability | Simple Network Management Protocol, NetConf, syslog, and other troubleshooting command-line interfaces | Use existing network management tools to manage physical and virtual environments.

The Cisco Nexus 1000V won the Best of Microsoft TechEd 2013 award in the Virtualization category.



If you’re interested in learning more about the Nexus 1000V extensible switch, I encourage you to view the following two-hour session on CiscoLive365: BRKVIR-2017 – The Nexus 1000V on Microsoft Hyper-V: Expanding the Virtual Edge (2013 London).  Free registration is required.  Bennial also posted the PowerPoint slide deck for this session on Scribd here.



UPDATED Blistering Fast Hyper-V 2012 Server – Parts List and Video!

Over a year ago I wrote an article detailing how to build a Blistering Fast Windows Server for about $1,000 USD.  At that time “Windows Server 8” hadn’t even been released yet, but I wanted to build a server that would work with “future generations” of Hyper-V.  The article proved to be extremely popular and paved the way for many fellow technologists to build their own lab servers.



Now that Windows Server 2012 has been out for a while I wanted to update that article to incorporate newer technologies, like 3rd generation Intel processors and faster DDR3 RAM.  I also made some tweaks to my initial server over the year, adding another SSD drive for active VMs and enabling sleep mode on my physical storage hard drive to save more power.  I’m including those items in this build, while maintaining the same price point as over a year ago.



Lessons Learned

I modified a few things since I built the original lab server I documented in January 2012.  Here are the lessons I learned:

  • If RAM is king, IO is queen.  The two most important things for a Hyper-V 2012 server are RAM (VM capacity) and IO (VM performance).  IO becomes even more important as you add more concurrently running VMs, which you can easily do with 32GB of RAM!
  • SSD = IO. My original design used a single SSD for the operating system and binaries.  I soon learned that VM performance was pretty poor running off a traditional mechanical hard drive, even though I was using a fast SATA III 6Gbps drive.  I ended up buying another 250GB SSD drive to host my active VMs.
  • CPU isn’t as important as I thought.  It’s important to have enough cores to share with your VMs, but most of the time my CPU is idling at 10% utilization even with 8 VMs running simultaneously.
  • Deduplication is amazing! You can increase the VM density on an SSD drive using Windows Server 2012’s built-in deduplication feature.
  • You can never have enough SATA III ports.  My first build used an Intel motherboard with two SATA III 6Gbps and two SATA II 3Gbps ports.  I ended up having to buy another SATA III controller when I added the other SSD drive.  Better to have at least 4 SATA III ports to begin with.



My Design Requirements

This build has an emphasis on cost.  Even though my budget is the same as the earlier build, I have to make it work with two SSD drives instead of one.

  • Minimum of 4 cores
  • Windows Server 2012 capable.  Hyper-V for Windows 8 requires hypervisor-ready processors with Second Level Address Translation (SLAT).
  • 32GB of fast DDR3 RAM
  • Must support SATA III 6Gb/s drives
  • Must have USB 3.0 ports for future portable devices
  • Low power requirements
  • Small form factor
  • Budget: Under $1,000 USD

As before, the RAM requirements drove most of this design.  Interestingly, I found that the newer technologies (3rd generation Intel Core I5 Ivy Bridge and DDR3 1600 RAM) actually cost less than the 2nd gen I5 and DDR3 1066 RAM in my last build.

Unlike last year’s build, I discovered that Amazon usually has the lowest price for everything.  This makes it a  lot easier to order and receive since all the components come from one place.  This should also make it easier for my European friends since they can source it all from Amazon, as well.  Another big bonus is that I have Amazon Prime which gives me free 2-day shipping on all the components.  I could even choose to spend $3.99 more to get it next day!  I love this service!

Here’s the entire parts list for this server:



Quantity Item Description
1   Intel Core i5-3470S Quad-Core Processor 2.9 GHz 6 MB Cache LGA 1155 – BX80637I53470S

This is a 3rd generation Ivy Bridge Intel processor. It includes Intel HD 2500 graphics and runs at a low 77W. 3 year limited warranty.
1   AS Rock PRO4-M LGA1155 Intel H77 Quad CrossFireX SATA3 USB3.0 A V GbE MATX Motherboard H77

I chose this LGA 1155 Micro ATX motherboard over Intel because it has 4x SATA3 and 2x SATA2 connectors. It also uses the Intel H77 chipset, supports RAID 1, 5 and 10, has 4 PCI-Express slots, USB 3.0, and has a great BIOS. See the video below. 3 year limited warranty.
2   Corsair Vengeance 16GB (2x8GB) DDR3 1600 MHz (PC3 12800) Desktop Memory (CMZ16GX3M2A1600C10)

240 pin dual channel RAM with built-in heat spreaders.  Lifetime warranty.  Latency is 10-10-10-27.  Each package contains 2x 8GB sticks (16GB).  Be sure to buy two packages.
1   Kingston SSDNow V200 128GB Bundle SV200S3B7A/128G

SATA3 SSD used for the Windows Server 2012 operating system. The package includes the drive and SATA3 cable, an external enclosure, and cables. 3 year warranty.
1   Samsung MZ-7TD250BW 840 Series Solid State Drive (SSD) 250 GB Sata 2.5-Inch

SATA3 SSD used for active VMs (the VMs I normally have running, like a domain controller, Exchange servers, Lync servers, etc.). Super-fast drive. 3 year limited warranty.
1 Kingwin 2.5 Inch to 3.5 Inch Internal Hard Disk Drive Mounting Kit

Metal mounting kit for 2.5″ SSD drives. Holds two SSD drives, stacked on top of each other.


1   WD Green 2 TB Desktop Hard Drive: 3.5 Inch, SATA III, 64 MB Cache – WD20EARX

2TB Western Digital Green (low power) SATA3 drive. Used for storing ISOs, seldom used VMs, base images, etc. I usually configure this drive to sleep after one hour to save even more power. 2 year warranty.
1   Lite-On Super AllWrite 24X SATA DVD+/-RW Dual Layer Drive – Bulk – IHAS124-04 (Black)

Great quality DVD burner. It’s cheap, too. I connect this to one of the SATA2 ports on the motherboard. 1 year limited warranty.
1   SATA Data Cable (2pk.)

I need 4x SATA3 cables for this build. The ASRock motherboard comes with a black one and the Kingston 128GB SSD comes with another red one.
1   Rosewill 40-In-1 USB 2.0 3.5-Inch Internal Card Reader with USB Port / Extra Silver Face Plate (RCR-IC001)

This is just a handy cheap addition. It slides into the floppy drive tray of the case and adds another USB 2.0 connector, SD card reader, and lots of other reader slots to the front of the computer.
1   APEX TX-381-C Black Steel Micro ATX Tower Computer Case USB/Audio/Fan

Mini ATX tower case for Micro ATX motherboards, like the ASRock. It includes a carrying handle and 2x USB 2.0 ports and audio jacks under a small door on top of the case. It comes with a fairly quiet 80mm rear case fan and clear instructions.
1   Rosewill Stallion Series 400W ATX 12V v2.2 Power Supply RD400-2-SB

Dual 12V rails. Nearly silent 120mm fan and mesh cable sleeving. Includes 4x SATA power connectors and 1x PCI-Express. 1 year limited warranty



Click the video below to hear a description of the parts I ordered for this beast:








It took about 90 minutes to assemble everything and take these pictures. The following slideshow shows how I put it all together:








Once assembled, I updated the BIOS online (very cool – see the video below) and installed Windows Server 2012 Datacenter Edition.  Installation took only 4 minutes, 50 seconds!  Amazing.



Windows Server 2012 recognized all but two of the computer’s components, but some required updating so Windows Server can use their advanced capabilities.  Do NOT install the drivers using the setup program on the included ASRock H77 Pro-4M DVD.  The ASRock setup programs will BSOD the server since they are written for a different OS.  Instead, open Device Manager, right-click the following devices, and update the driver software using the ASRock DVD.



Here are the devices that need to be updated, in this order:


System devices
  • Xeon(R) processor E3-1200 v2/3rd Gen Core processor DRAM Controller – 0150
  • PCI Express Root Complex (Becomes “PCI bus”. Requires a restart)
  • Intel(R) H77 Express Chipset LPC Controller – 1E4A (Requires a restart)
  • Intel(R) 7 Series/C216 Chipset Family SMBus Host Controller – 1E22
  • Intel(R) 7 Series/C216 Chipset Family PCI Express Root Port 8 – 1E1E (Requires a restart)
  • Intel(R) 7 Series/C216 Chipset Family PCI Express Root Port 6 – 1E1A
  • Intel(R) 7 Series/C216 Chipset Family PCI Express Root Port 1 – 1E10

Universal Serial Bus controllers
  • Standard Enhanced PCI to USB Host Controller (Becomes “Intel(R) 7 Series/C216 Chipset Family USB Enhanced Host Controller – 1E26”)
  • Standard Enhanced PCI to USB Host Controller (Becomes “Intel(R) 7 Series/C216 Chipset Family USB Enhanced Host Controller – 1E2D”)

Other devices
  • Unknown device  (Becomes “Intel(R) Smart Connect Technology Service”)

Sound controllers
  • High Definition Audio Device (Becomes “Realtek High Definition Audio”)
  • High Definition Audio Device (Becomes “Intel(R) Display Audio”)

Network adapters
  • Realtek PCIe GBE Family Controller

IDE ATA/ATAPI controllers
  • Standard SATA AHCI Controller (Becomes “Intel(R) 7 Series/C216 Chipset Family SATA AHCI Controller”. The DVD drive will probably change drive letters after this update.)
  • Standard SATA AHCI Controller (Becomes “Asmedia 106x SATA Controller”.  This one is tricky.  Restart and press F8 to boot in Safe Mode. Restart again into normal mode. You will now see new “ATA Channel 0” and “ATA Channel 1” controllers.)

Display adapters
  • Microsoft Basic Display Adapter (Becomes “Intel(R) HD Graphics”.  The screen flashes during installation.)
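If you prefer the command line, an alternative to clicking through each device is to stage all of the DVD’s driver packages into the Windows driver store first; Device Manager’s “Search automatically” option can then find them. A sketch, assuming D: is the DVD drive (adjust to yours):

```powershell
# Stage every driver package from the ASRock DVD into the driver store.
# This only stages the INF packages -- it does NOT run the ASRock setup
# programs, which is exactly what we want to avoid.
# D:\Drivers is an assumed path; check your DVD drive letter first.
Get-ChildItem -Path 'D:\Drivers' -Filter *.inf -Recurse |
    ForEach-Object { pnputil.exe -i -a $_.FullName }
```

You would still update each device from Device Manager in the order listed above, but the wizard can now locate the staged drivers without browsing to the DVD each time.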

Install Intel Management Engine Components from the ASRock DVD
  • Run <DVD Drive>:\Drivers\ME\Intel\(v8.1.2.1318_1.5M)\Setup.exe
  • Accept the Intel Manageability Engine Firmware Recovery Agent license agreement
  • Check for updates. This takes a few minutes.
  • This will fix the unknown PCI Simple Communications Controller device.

I also recommend that you update the Samsung SSD 840 firmware, which includes better TRIM support:
  • Download and install the Samsung Magician 4 software.
  • Click Firmware Update and Update. Reboot to finish the firmware upgrade.



Finally, run Windows Disk Management to initialize, format and label your Samsung 250GB SSD and Western Digital 2TB drives.
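The same initialization can be done with the Storage cmdlets instead of Disk Management; a sketch, assuming the Samsung SSD and the 2TB drive show up as disks 1 and 2 (confirm with Get-Disk first):

```powershell
# List the disks so you can confirm which numbers to use
Get-Disk

# Initialize, partition, format, and label the VM SSD (assumed disk 1)
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMs" -Confirm:$false

# Same for the 2TB storage drive (assumed disk 2)
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Storage" -Confirm:$false
```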




Here’s a video of the Windows Server 2012 Hyper-V server in action:








I hope this article, slideshow and videos are helpful to you in your quest to build the perfect Hyper-V lab server.  This is a great investment in your IT career!



Special thanks to my ExtraTeam colleague, Aman Ayaz.  It was his need for a new Hyper-V lab server (and his Visa card) that made this article possible.  :)






Windows Server 2012 Deduplication is Amazing!

The following article describes how to use Windows Server data deduplication on a Solid State Drive (SSD) that holds active Hyper-V virtual machines.



Coloring Outside the Lines Statement:

This configuration is not supported by Microsoft.  See Plan to Deploy Data Deduplication for more information.  Use these procedures at your own risk. That said, it works great for me.  Your mileage may vary.



A while back I decided to add another 224GB SATA III SSD to my blistering Windows Server 2012 Hyper-V server for my active VMs.  The performance is outstanding and it makes the server dead silent.  I moved my primary always-on Hyper-V VM workloads to this new SSD:

  • Domain Controller on WS2012
  • Exchange 2010 multi-role server on WS2012
  • TMG server on WS2008 R2

These VMs took 134GB, or 60%, of the capacity of the drive which was fine at the time.  Later, I added a multi-role Exchange 2013 server which took up another 60GB of space.  That left me with only 13% free space, which didn’t leave much room for VHD expansion and certainly not enough to host any other VMs.  Rather than buy another larger and more expensive SSD, I decided to see how data deduplication performs in Windows Server 2012.



Add the Data Deduplication Feature

Data Deduplication is a feature of the File and Storage Services role in Windows Server 2012.  It’s not installed by default, so you need to install it using the Add Roles and Features Wizard or the following PowerShell commands:


PS C:\> Import-Module ServerManager
PS C:\> Add-WindowsFeature -Name FS-Data-Deduplication
PS C:\> Import-Module Deduplication


Next, you need to enable data deduplication on the volume.  Use the File and Storage Services node of Server Manager and click Volumes.  Then right-click the drive you want to configure for deduplication and select Configure Data Deduplication, as shown below:

Configuring Data Deduplication on Volume X:
So far, this is how you would normally configure deduplication for a volume: set it to run on files older than X days, enable background optimization, and schedule throughput optimization to run at specified days and times.  It’s pretty much a “set it and forget it” configuration.

From here on I’m going to customize deduplication for my Hyper-V SSD.

In the Configure Data Deduplication Settings for the SSD, select Enable data deduplication and configure it to deduplicate files older than 0 days. Click the Set Deduplication Schedule button and uncheck Enable background optimization, Enable throughput optimization, and Create a second schedule for throughput optimization.

Enable Data Deduplication for Files Older Than 0 Days

Disable Background Optimization and Throughput Optimization Schedules
Click OK twice to finish the configuration.  What we’ve done is enabled data deduplication for all files on the volume, but deduplication will not run in real-time or on a schedule.  Note that these deduplication schedule settings are global and affect all drives configured for deduplication on the server.

You can also configure these data deduplication settings from PowerShell using the following commands:
PS C:\> Enable-DedupVolume X:
PS C:\> Set-DedupVolume X: -MinimumFileAgeDays 0
PS C:\> Set-DedupSchedule -Name "BackgroundOptimization", "ThroughputOptimization", "ThroughputOptimization-2" -Enabled $false
This configuration mitigates the reason why Microsoft does not support data deduplication on drives that host Hyper-V VMs.  Mounted VMs are always open for writing and have a fairly large change rate.  This is the reason Microsoft says, “Deduplication is not supported for files that are open and constantly changing for extended periods of time or that have high I/O requirements.”

In order to deduplicate the files and recover substantial disk space you need to shut down the VMs hosted on the volume and then run deduplication manually with this command:
PS C:\> Start-DedupJob -Volume X: -Type Optimization
This manual deduplication job can take some time to run depending on the amount of data and the speed of your drive.  In my environment it took about 90 minutes to deduplicate a 224GB SATA III SSD that was 87% full.  You can monitor the progress of the deduplication job at any time using the Get-DedupJob cmdlet.  The cmdlet shows the percentage of progress, but does not return any output once the job finishes.
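Since Get-DedupJob returns nothing once the job finishes, a simple polling loop works well for tracking a long run; a sketch, assuming the deduplicated volume is X::

```powershell
# Report progress every 30 seconds until Get-DedupJob stops returning the job
while ($job = Get-DedupJob -Volume X: -ErrorAction SilentlyContinue) {
    "{0} job: {1}% complete" -f $job.Type, $job.Progress
    Start-Sleep -Seconds 30
}
"Deduplication job finished."
```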

You can also monitor the job using Resource Monitor, as shown below:

Resource Monitor During Deduplication
Here you can see that the Microsoft File Server Data Management Host process (fsdmhost.exe) is processing the X: volume.  When the deduplication process completes, the X: volume queue length will return to 0.

Once deduplication completes you can restart your VMs, check the level of deduplication, and how much data has been recovered.  From the File and Storage Services console, right-click the volume and select Properties:

Properties of Deduplicated SSD Volume
Here we can see that 256GB of raw data has been deduplicated to 61.5GB on this 224GB SSD disk – a savings of 75%!!!  That leaves 162GB of raw disk storage free.  I could easily create or move additional VMs to this disk and run the deduplication job again.
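The same savings figures are also exposed through PowerShell, which is handy for checking several volumes at once:

```powershell
# Show capacity, free space, and space saved by deduplication for volume X:
Get-DedupStatus -Volume X: |
    Format-List Volume, Capacity, FreeSpace, SavedSpace, OptimizedFilesCount
```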

The drive above now actually holds more reconstituted data than the capacity of the drive itself with no noticeable degradation in performance.  It currently hosts the following active Hyper-V VMs:

  • Domain Controller on WS2012
  • Exchange 2010 multi-role server on WS2012
  • TMG server on WS2008 R2
  • Exchange 2013 multi-role server on WS2012
  • Exchange 2013 CAS on WS2012
  • Exchange 2013 Mailbox Server on WS2012
Caveats:
  • Because real-time optimization is not being performed, the VMs will grow over time as changes are made and data is added. The manual deduplication job would need to be run as needed to recover space.
  • Since the SSD actually contains more raw duplicated data than the drive can hold, I’m unable to disable deduplication without moving some data off the volume first.
  • Even though more VMs can be added to this volume, you have to be sure that there is sufficient free space on the volume to perform deduplication.
For even more information about Windows Server 2012 data deduplication, I encourage you to read Step-by-Step: Reduce Storage Costs with Data Deduplication in Windows Server 2012!

I hope you find this article useful in your own deployments and I’m interested to know what your experience is.  Please leave a comment below!


How to Convert Hyper-V VHD Disks to VHDX

Windows Server 2012 Hyper-V offers a new virtual disk type called VHDX.  VHDX virtual disks have many benefits, including larger maximum disks up to 64TB, protection against data corruption, and improved alignment of the virtual hard disk format to work well on large sector disks.  See http://technet.microsoft.com/en-us/library/hh831446.aspx for more information about the VHDX disk type.



You can convert existing older format VHD disks to the new VHDX format using the Hyper-V Manager console.  This process will create a new VHDX disk and copy the data from the existing VHD to the new disk.  At the end of the procedure you will have two disks, the original VHD disk and a new VHDX disk with the same contents.  You can safely delete the original VHD disk once you have confirmed that the new VHDX disk is fully functional.



Here are the steps to convert an existing VHD disk to a VHDX disk:

  • Shut down the VM that is accessing the disk, if necessary.  You cannot convert a disk that is in use.
  • Open the Hyper-V VM settings, navigate to the hard drive you wish to convert, and click the Edit button, as shown below:


  • The Edit Virtual Hard Disk Wizard will start.  Select Convert from the Choose Action page and click Next.


  • Select the VHDX disk format and click Next.


  • Choose whether the new disk should be fixed size or dynamically expanding.  Note that this gives you the opportunity to change disk types from the previous disk type.  Click Next.
  • Select the name and location for the new VHDX disk and click Next.
  • Review the summary and click Finish to create the new disk.  This may take a few minutes depending on the size of the VHD and the speed of your hard drive(s).  A 30GB VHD converted in less than two minutes on my SSD drive.  The size of the new VHDX disk will be slightly larger than the original VHD disk.




  • The last step is to mount the new VHDX disk to the Hyper-V VM.  Note the new VHDX extension.




Once you have started up your VM with the new VHDX disk you can safely delete the old VHD disk.  There are no other configurations necessary.
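If you have many disks to convert, the wizard steps above can also be scripted with Hyper-V's Convert-VHD PowerShell cmdlet.  Here's a minimal Python sketch that simply builds the command line for a given disk (the path shown is hypothetical); like the wizard, Convert-VHD creates a new disk and copies the data, leaving the source VHD in place:

```python
from pathlib import Path

def build_convert_command(vhd_path: str, vhd_type: str = "Dynamic") -> str:
    """Build a PowerShell Convert-VHD command line for one .vhd file.

    Like the Edit Virtual Hard Disk Wizard, Convert-VHD creates a new
    disk and copies the data, so the source .vhd remains until you
    delete it yourself.
    """
    src = Path(vhd_path)
    dest = src.with_suffix(".vhdx")  # same name and location, new extension
    return (f'Convert-VHD -Path "{src}" '
            f'-DestinationPath "{dest}" -VHDType {vhd_type}')

# Hypothetical example path:
print(build_convert_command(r"D:\VMs\web01.vhd"))
```

Run the emitted command from an elevated PowerShell prompt on the Hyper-V host, after shutting down the VM that uses the disk.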

Western Digital Green vs Black Drive Comparison

In a recent post I described my new blistering fast Windows 8 Server, which includes a parts list.  This server features a 120GB SSD SATA III 6.0Gb/s drive for the operating system and uses a single 2TB Western Digital Green SATA III 6.0Gb/s drive (WD20EARX-00PASB0) for VM and data storage.




Some of my readers have suggested that the WDC Green drive will not provide suitable performance compared to a WDC Black SATA III drive.  They also wondered what the true power savings are between the Green and the Black drives.  The Green drive uses less power by spinning at slower RPMs (a variable ~5400 RPM vs. a constant 7200 RPM for the Black).




I decided to purchase a Western Digital Caviar Black SATA III 6.0Gb/s drive (WD2002FAEX-007BA) to run benchmarks against and compare the two drives side-by-side using HD Tune Pro 5.00 and Microsoft Exchange Server Jetstress 2010 (64 bit).




I ran each set of tests on the Green drive, then replaced it with the Black drive and ran the same set of tests on my new server.  I also ran the tests while the server was plugged into a P3 Kill A Watt Electricity Load Meter and Monitor to accurately measure power consumption by the kilowatt-hour for comparison.




HD Tune Pro Benchmarks

The following are the benchmark test results for both drives.  The Green drive is on the left and the Black is on the right.




Benchmark Results

The Black drive delivers 17.9% better average transfer speed.  The access time was 17.6ms for the Green vs. 12.0ms for the Black.  I was surprised to see that CPU usage was much higher on the Green (6.0%) vs the Black (2.4%).







File Benchmark Results

The File Benchmark test measures read/write transfer speed using a 500MB file in 4KB blocks.  The Black drive achieved 11.5% better performance using 4KB sequential access and 28.2% better using 4KB random access.







Random Access Results

The Random Access test measures the performance of random read or write operations with varying data sizes (512 bytes – 1MB).  Again, the Black drive performed better across the board with an average 31.2% improved performance.  It also offers much better access times.




It’s notable that the Green drive performed this test nearly silently, while the Black drive sounded like a Geiger counter at Fukushima.  Neither of these drives features AAM (Automatic Acoustic Management), so the acoustics cannot be adjusted, but they do not affect the results.







Other Test Results

This benchmark runs a variety of tests which determine the most important performance parameters of the hard drive.  The Black drive offers 35.3% better random seek and 18.3% better sequential read performance.  It also has better transfer speeds from its cache.  Both drives feature a 64MB cache.







Exchange JetStress

I ran Exchange 2010 JetStress on each drive to get an accurate IOPS profile for Exchange 2010 SP2 use.  JetStress was configured for a two-hour test using a single 1TB database and one thread.



  • The Green drive achieved 47.396 IOPS with 10.751ms latency.
  • The Black drive achieved 64.57 IOPS with 15.180ms latency.



I’m not sure why the Black drive’s latency was higher than the Green’s, given the benchmark tests above, but I ran that test twice and got the same results each time.  Even so, the Black drive delivered about 36% more IOPS than the Green (put another way, the Green falls 26.6% short of the Black).
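A note on the percentages: which drive you treat as the baseline changes the number you quote.  A quick sketch using the JetStress figures above:

```python
def pct_more(result: float, baseline: float) -> float:
    """How much larger `result` is than `baseline`, as a percent of baseline."""
    return (result - baseline) / baseline * 100.0

green_iops, black_iops = 47.396, 64.57
# Relative to the Green drive, the Black delivers ~36% more IOPS;
# relative to the Black, the Green falls ~26.6% short.
print(f"Black vs. Green: +{pct_more(black_iops, green_iops):.1f}%")
print(f"Green vs. Black: {pct_more(green_iops, black_iops):.1f}%")
```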







Power Analysis

Green Drive: 1.10 KWH at 27.5 hours

Energy use per hour = (1.1 KWH)/(27.5 hours) = 0.04 KWH per hour of use
Energy use per day = (0.04 KWH/hour)(24 hours/day) = 0.96 KWH over a full day
Cost per day = (0.96 KWH)(18.5 cents/KWH) =  17.8 cents per day

Energy use per year = (0.96 KWH/day)(365 days/year) = 350 KWH/year
Cost per year = (350 KWH/year)(18.5 cents/KWH) = $64.82 per year.



350 KWH = ~700 lbs of greenhouse gas to the atmosphere per year.


Black Drive: 0.72 KWH at 14.75 hours

Energy use per hour = (0.72 KWH)/(14.75 hours) = 0.049 KWH per hour of use
Energy use per day = (0.049 KWH/hour)(24 hours/day) = 1.18 KWH over a full day
Cost per day = (1.18 KWH)(18.5 cents/KWH) =  21.83 cents per day

Energy use per year = (1.18 KWH/day)(365 days/year) = 431 KWH/year
Cost per year = (431 KWH/year)(18.5 cents/KWH) = $79.74 per year.



431 KWH = ~860 lbs of greenhouse gas to the atmosphere per year.



Result: The WDC Green drive uses 18.8% less energy than the Black drive.
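The per-drive figures above can be reproduced directly from the Kill A Watt readings.  A small script (18.5 cents/KWH is my local electricity rate; note that the figures above round the hourly rate at each step, so exact arithmetic puts the Black drive at ~428 KWH/year rather than 431, and the savings at ~18%):

```python
def annual_energy(kwh_metered: float, hours_metered: float,
                  cents_per_kwh: float = 18.5):
    """Extrapolate a metered KWH reading to annual energy use and cost."""
    kwh_per_day = kwh_metered / hours_metered * 24
    kwh_per_year = kwh_per_day * 365
    dollars_per_year = kwh_per_year * cents_per_kwh / 100
    return kwh_per_year, dollars_per_year

green_kwh, green_cost = annual_energy(1.10, 27.5)    # ~350 KWH, ~$64.82/year
black_kwh, black_cost = annual_energy(0.72, 14.75)   # ~428 KWH (exact; ~431 with per-step rounding)
savings = (black_kwh - green_kwh) / black_kwh * 100
print(f"Green uses {savings:.1f}% less energy per year than Black")
```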







Conclusion

It’s obvious from the test results above that the Western Digital Caviar Black drive performs better than the Green drive.  At the time of this writing the Green drive costs $139 and the Black is $249.  That’s a 79% price premium over the Green (the $110 difference relative to the Green’s price) for a drive that performs, on average, about 24% better.
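The same baseline caveat from the benchmark percentages applies to the price comparison: the $110 difference is a 79% premium over the Green's price, or equivalently a 44% discount off the Black's.

```python
green_price, black_price = 139.0, 249.0
diff = black_price - green_price

premium_over_green = diff / green_price * 100   # ~79%
discount_off_black = diff / black_price * 100   # ~44%
print(f"Black costs {premium_over_green:.0f}% more than Green")
print(f"Green costs {discount_off_black:.0f}% less than Black")
```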



In real-life observations I don’t really see that much difference in performance between the two at this time.  However, this Hyper-V server has twice as much RAM as my last server, so it will potentially be hosting many more VMs (and will have a higher IO load).  For this reason I decided to keep the Black drive, even though it costs more, is a bit noisier when it’s working hard, and uses more energy.  I like muscle cars, too.  :)



If you plan to do RAID, I would most definitely recommend the Black drive because it spins at a consistent 7200 RPM.  Reports say that the variable RPMs on the Green drive can cause read/write errors.



I hope you find this information useful.


Blistering Fast Windows Server – Parts List and Video

Walk with me now, as we take a stroll down Geek lane.  :)








I decided it’s time to replace my old Hyper-V server at home with a new one that’s faster and can run more VMs.  I’ve decided again to build it myself from OEM parts so I can get exactly what I want at the right price.  This article contains my parts list and my reasons for choosing what I did.  Hopefully, this will help you with your own home lab.
I host my private cloud network on a Windows Server 2008 R2 Hyper-V host server.  Hyper-V is perfect for my environment because it allows me to run workgroup applications (Exchange Edge Transport and IIS) directly on the host, as well as host my virtual domain servers.

My current Hyper-V server is an AMD x64 dual core rig with 16GB RAM and two SATA drives, one for the OS and another for VMs.  I built it about 3 years ago when I was on the Windows Server 2008 TAP and it has served me well.  But with Windows Server 8 and Exchange 15 right around the corner, I wanted to be sure I had the capability to run these new versions.

My Design Requirements
As with most customers, I have competing requirements for this new server:
  • Minimum of 4 cores
  • Windows Server 8 capable.  Hyper-V for Windows 8 requires hypervisor-ready processors with Second Level Address Translation (SLAT), as reported by Microsoft at BUILD.
  • 32GB of fast DDR3 RAM
  • Must support SATA III 6Gb/s drives
  • Must have USB 3.0 ports for future portable devices
  • Must be quiet.  This server is sitting next to me in my office (aka, the sunroom) and I don’t want to hear it at all.
  • Low power requirements
  • Small form factor
  • Budget: ~$1,000 USD
My RAM requirements drove most of this design.  Since this would be based on a desktop motherboard (server mobos are too big and ECC RAM is too expensive), I first looked for 4x8GB (32GB) DDR3 RAM.  Then I looked for a small mobo that would accept that much RAM, then a processor for that mobo.
 
Here’s my parts list, including links to where I purchased each item and the price I paid:
  • Intel Core i5-2400S Sandy Bridge 2.5GHz (3.3GHz Turbo Boost) LGA 1155 65W Quad-Core Desktop Processor with Intel HD Graphics 2000, BX80623I52400S - $193.00 (Amazon)
  • Intel BOXDH67BLB3 LGA 1155 Intel H67 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard - $85.99 (NewEgg)
  • Komputerbay 32GB DDR3 (4x 8GB) PC3-10600 10666 1333MHz DIMM 240-Pin RAM Desktop Memory 9-9-9-25 - $225.00 (Amazon)
  • OCZ Agility 3 AGT3-25SAT3-120G 2.5″ 120GB SATA III MLC Internal Solid State Drive (SSD) - $129.99 (NewEgg)
  • Western Digital Caviar Green WD20EARX 2TB 64MB Cache SATA III 6.0Gb/s 3.5″ Internal Hard Drive - $114.99 (NewEgg)
  • ASUS 24X DL-DVD Burner SATA II - $19.99 (NewEgg)
  • AeroCool M40 Cube Computer Case – Micro ATX, LCD Display, 2x 5.25 Bays, 3x 3.5 Bays, 4x Fan Ports, Black - $79.99 (TigerDirect)
  • Antec EA-380D Green 80 PLUS BRONZE Power Supply - $44.99 (NewEgg)
  • ENERMAX UC-8EB 80mm Case Fan - $9.99 (NewEgg)
  • nMEDIAPC ZE-C268 3.5″ All-in-one USB Card Reader with USB 3.0 Port - $16.99 (NewEgg)
  • Rosewill RX-C200P 2.5″ SSD / HDD Plastic Mounting Kit for 3.5″ Drive Bay - $4.99 (NewEgg)

Total:  $925.91


I was a little worried about the Komputerbay RAM.  I’ve never heard of them before, but they offer a lifetime warranty and 32GB DDR3 1333 (PC3 10666) RAM was $54 cheaper than what I could find at NewEgg.  In the end I’m very pleased with my decision.
I chose different sources for the best price.  NewEgg is my go-to vendor for most items.  They charge sales tax in California, but I have a ShopRunner account that gives me free 2-day shipping on all these items.  Amazon was the smart choice for the bigger ticket items since they don’t charge tax and I could get them delivered with a 30 day free trial of Prime 2-day shipping.  Not to mention the fact that I had a $500 Amazon gift card that I won at TechEd 2011 from my good friends at Vision Solutions!  TigerDirect was the only source for this great AeroCool micro ATX cube computer case.
All the items were delivered the same day, and I started putting it together that night.  Careful assembly took about 90 minutes and everything went together perfectly.
It’s a Geek Christmas!

All the parts freed from their cardboard prisons

The only other item I added was a dual port Intel PRO/1000 MT Server Adapter that I already had.  I also used L-bend right angle SATA cables instead of the two that came with the Intel motherboard, due to the short clearance between the PSU and the back of the drives (I knew this going in).
The innovative AeroCool M40 micro ATX case opens up like a book for easy access.  The power supply, hard drives and DVD drive(s) are in the top half and everything else is down below.  It includes a nearly silent 120mm front fan and has room for one more on the top rear section and two 80mm fans on the bottom rear section.  I added a single silent 80mm fan on the bottom to push warm air out.  The case temperature has never gone above 26.4C and it’s completely silent.
View from above showing the Antec PSU, the 3.5″ and 5.25″ drive cages and the unused PSU cabling

View from the hinged side, showing motherboard placement

I’m using the OCZ 120GB SATA III SSD drive for the operating system and pagefile, running Windows Server 2008 R2 Enterprise for now.  I’ll upgrade the server to Windows Server 8 when it goes RTM.  In the meantime, I’ll build and test beta versions as VMs.  I have to say that this SSD drive was one of the best choices for my new system.  It’s blistering fast!  Windows Server 2008 R2 SP1 installed in just 6 minutes!!  Take a look at the video below to see that it takes only 20 seconds to get to a logon screen from a cold start, and half of that time is for the BIOS POST!

The Intel i5 quad-core Sandy Bridge processor has amazing graphics built in.  I’m able to run Windows Server 2008 R2 with the Aero theme at 1920×1080 HD resolution with no difference in performance.  It’s possible to overclock this system, but it’s plenty fast for me and I value stability over speed.  I love the fact that it draws only 65W!  This not only saves electricity, it keeps the case cool, which lowers the cooling requirements.
The bottom half with the case split open. The I5-2400s CPU came with this huge low profile CPU cooler.

Because the Intel DH67BL is a desktop motherboard, its drivers did not work out of the box with Windows Server 2008 R2.  I downloaded the latest drivers from Intel and most installed fine.  The only items I had trouble with were the built-in Intel 82579V Gigabit network adapter and the integrated Intel HD Graphics drivers.  Intel “crippled” the NIC driver installer so that it won’t install on a server platform.  See this article which explains how to re-enable it.  The video driver installed most of the way, but the installer crashed when trying to register a DLL.  It installed fine after a restart.
I also used a Western Digital Green 2TB SATA III drive for storage of my Hyper-V VMs.  I’ve always used Western Digital drives and I’ve never had a problem with them.  The WD Green line saves power, runs cool and quiet, and delivers 6 Gb/s performance.
Photo of the completed server.  I placed a DVD on top for scale.

This is by far the fastest server I’ve ever worked on, bar none.  I’m extremely happy with it.  I haven’t bothered running any benchmarks* on it – I just know that it’s fast enough for my needs and has plenty of RAM so I can run more VMs.
I hope this article helps you to build your own home lab server.   Please let me know if you have any questions.

* There are lies, damn lies, and benchmarks.

Exchange 2010 support for host-based failover clustering and migration




Some Exchange-supported virtualization platforms, such as Hyper-V and VMware include features that support the clustering or portability of guest virtual machines across multiple physical root machines.  Examples of host-based failover clustering and migration include Hyper-V Live Migration and VMware ESX vMotion.



Microsoft support for host-based failover clustering and migration virtualization with Database Availability Groups (DAGs) depends on the Exchange 2010 service pack level.  Per the Exchange 2010 System Requirements:



With Exchange 2010 RTM:

Microsoft doesn’t support combining Exchange high availability solutions (such as DAGs) with hypervisor-based clustering, high availability, or migration solutions that will move or automatically failover mailbox servers that are members of a DAG between clustered root servers. DAGs are supported in hardware virtualization environments, provided the virtualization environment doesn’t employ clustered root servers, or the clustered root servers have been configured to never failover or automatically move mailbox servers that are members of a DAG to another root server.

With Exchange 2010 SP1 (or later) deployed:

Exchange server virtual machines (including Exchange Mailbox virtual machines that are part of a DAG), may be combined with host-based failover clustering and migration technology, as long as the virtual machines are configured such that they will not save and restore state on disk when moved, or taken offline. All failover activity must result in a cold boot when the virtual machine is activated on the target node. All planned migration must either result in shutdown and cold boot, or an online migration that makes use of a technology like Hyper-V Live Migration. Hypervisor migration of virtual machines is supported by the hypervisor vendor; therefore, you must ensure that your hypervisor vendor has tested and supports migration of Exchange virtual machines. Microsoft supports Hyper-V Live Migration of these virtual machines.


In summary, Exchange 2010 SP1 or later supports hypervisor migrations such as Hyper-V Live Migration and VMware ESX vMotion for DAG member servers.  Host-based failover cluster migrations, such as Hyper-V Quick Migration, are supported only if the virtual Exchange DAG server is restarted immediately after the quick migration completes.  Exchange 2010 RTM is not supported with either migration technology; RTM supports only the native Exchange high availability features present in DAGs.



Other Exchange Server 2010 roles (CAS, Hub Transport, Edge Transport, and Unified Messaging) fully support host-based failover clustering and migration because they do not employ native Exchange high-availability solutions.



For a list of the virtualization platforms supported by Exchange, visit the Windows Server Virtualization Validation Program website.

Fixing Time Errors on VMware vSphere and ESX Hosts

Time synchronization across a Windows domain is very important.  If a member server’s clock varies more than 5 minutes from other domain servers, Kerberos tickets will fail.  This causes random authentication errors for users and/or applications which are sometimes difficult to troubleshoot.
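The 5-minute limit is the Kerberos default maximum tolerated clock skew.  The check itself is trivial; a sketch:

```python
from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=5)  # Kerberos default maximum tolerated clock skew

def within_kerberos_skew(client: datetime, server: datetime,
                         max_skew: timedelta = MAX_SKEW) -> bool:
    """True if two clocks are close enough for Kerberos tickets to validate."""
    return abs(client - server) <= max_skew

noon = datetime(2012, 6, 1, 12, 0, 0)
print(within_kerberos_skew(noon, noon + timedelta(minutes=4)))  # True
print(within_kerberos_skew(noon, noon + timedelta(minutes=6)))  # False
```

Note that the 5-minute default is adjustable via domain policy, but leaving it alone and fixing the clocks is the right answer.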



Normally, time is synchronized in a Windows domain using the domain hierarchy.  The domain controller holding the PDC Emulator FSMO role is normally configured to get time from an authoritative NTP time source, and all the other DCs in the domain sync time from it.  The domain clients in each site sync time from the DCs in their local site, maintaining a relatively close synchronization of time across the domain.



Virtual machines are no different than physical computers and normally sync time using the same domain hierarchy.  Lately, however, I’ve seen VMs running on VMware vSphere boot up with random time differences from the domain.  I’ve seen this problem with three different clients lately, so I figured this might be a pervasive enough issue to blog about.



The trouble happens when the VMware vSphere, ESX or ESXi host does not have an accurate source of time, or time “drifts” due to an inaccurate system clock module.  vSphere and ESX hosts run a proprietary operating system and are not domain member servers, therefore they do not participate in domain hierarchy time synchronization. 



Most companies that use VMware hosts use vCenter to manage those hosts and their VMs.  Often, the servers that run vCenter are domain members, and administrators assume that since vCenter syncs time with the domain, the hosts and VMs do, too.  Not true.  You need to configure the vSphere or ESX hosts to sync time from an accurate time source, otherwise the VM guests may start up with the wrong time – this can happen even if VMware Tools time synchronization between the virtual machine and the ESX host is disabled.
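For reference, the offset an NTP client computes comes from four timestamps exchanged with the server (per RFC 5905); this is what the host's NTP daemon does once you point it at a good time source.  A sketch of the calculation:

```python
def ntp_offset(t0: float, t1: float, t2: float, t3: float) -> float:
    """Estimated clock offset per the standard NTP calculation.

    t0 = client transmit time, t1 = server receive time,
    t2 = server transmit time, t3 = client receive time (seconds).
    A positive result means the server's clock is ahead of the client's.
    """
    return ((t1 - t0) + (t2 - t3)) / 2.0

# Example: server ten seconds ahead, ~0.05 s one-way network delay.
print(ntp_offset(100.00, 110.05, 110.06, 100.11))  # ~10.0
```

Averaging the two one-way measurements cancels out the network delay, which is why NTP can stay accurate even over slow links.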







Here’s how to configure your vSphere or ESX hosts to get time from an authoritative source.

  • Log on to vCenter and select your vSphere or ESX host.
  • Click the Configuration tab and then Time Configuration under the Software heading.  Notice that the time on the vSphere host does not match the domain time shown on the Windows client running vCenter.




  • Click Properties in the top left of the Configuration tab.  This opens the Time Configuration window.




  • Click the Options button and add a new NTP server that is the accurate source of time.  I recommend using the PDC emulator, since it should already be configured as an authoritative time source. 




  • Select the checkbox to Restart NTP service to apply changes and click OK twice to close the Time Configuration window.  You will see that the vSphere/ESX host now has the correct time and is configured to use dc01.companyabc.com as its time server.





You may need to restart the VM guests running on that VMware host to have them sync time with the domain.  The Windows Time service will not correct the time on the VMs if it varies too much from domain time.  All domain computers sync time when they start up on the domain, regardless of how far out of sync they were.



I have not seen this type of behavior with Hyper-V, only vSphere, ESX and ESXi hosts.