All posts by expta

New Remote Desktop Connection Manager 2.7 Released

Microsoft released a new version of Remote Desktop Connection Manager (RDCMan) 2.7 to the public today.



RDCMan is a central place where you can organize, group, and manage your various Remote Desktop connections. This is particularly useful for system administrators, developers, testers, and lab managers who maintain groups of computers and connect to them frequently. I probably spend more time in RDC Manager than any other application during the day.



The previous version 2.2 was last released in May 2010, so this is a very welcome update. Previous versions lacked some functions and caused excessive CPU utilization on some computers, especially those with Nvidia GPUs. RDCMan was written by Julian Burger, one of the principal developers on the Windows Live Experiences team.



RDCMan 2.7 is a major feature release. New features include:



  • Virtual machine connect-to-console support.
  • Smart groups.
  • Support for credential encryption with certificates.
  • Windows 8 remote action support (charms, app commands, switch tasks, etc).
  • Support for Windows 8 / Windows Server 2012 and Windows 8.1 / Windows Server 2012 R2.
  • Log Off Server now works properly on all versions.

Important Upgrade Note: When you upgrade, RDCMan 2.7 will be unable to read passwords that were encrypted by the previous version. You will need to re-enter your saved passwords after installation.


The workaround is to set the “Store password as clear text” checkbox in RDCMan 2.2 for preexisting groups and/or servers before upgrading. When you upgrade to version 2.7, RDCMan will read the existing passwords and encrypt them. Storing passwords as clear text is no longer an option in version 2.7.

Source: Expta

How to Enable RelayState in ADFS 2.0 and ADFS 3.0

RelayState is a parameter of the SAML protocol that is used to identify the specific resource the user will access after they are signed in and directed to the relying party’s federation server. It is used by Google Apps and other SAML 2.0 resource providers.



If RelayState is not enabled in AD FS, users will see something similar to this error after they authenticate to resource providers that require it:



The Required Response Parameter RelayState Was Missing



For ADFS 2.0, you must install update KB2681584 (Update Rollup 2) or KB2790338 (Update Rollup 3) to provide RelayState support. ADFS 3.0 has RelayState support built in. In both cases RelayState still needs to be enabled.



Use the following steps to enable the RelayState parameter on your AD FS servers:



  • For ADFS 2.0, open the following file in Notepad: 

%systemroot%\inetpub\adfs\ls\web.config

  • For ADFS 3.0, open the following file in Notepad:

%systemroot%\ADFS\Microsoft.IdentityServer.Servicehost.exe.config



  • In the microsoft.identityServer.web section, add a line for useRelayStateForIdpInitiatedSignOn as follows, and save the change:

<microsoft.identityServer.web>
    ...
    <useRelayStateForIdpInitiatedSignOn enabled="true" />
    ...
</microsoft.identityServer.web>

  • For ADFS 2.0, run IISReset to restart IIS.

  • For both platforms, restart the Active Directory Federation Services (adfssrv) service.
If you’re using ADFS 3.0 you only need to do the above on your ADFS 3.0 servers, not the WAP servers.
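
Once RelayState is enabled, the IdP-initiated sign-on URL has to be URL-encoded twice: encode the RPID and the target URL individually, combine them, then encode the whole string again. Here's a minimal PowerShell sketch of that double-encoding scheme – the STS host, RP identifier, and target URL below are placeholder values, not anything from a real deployment:

# Placeholder values - substitute your own STS host, RP identifier, and target URL
$sts    = "https://sts.contoso.com/adfs/ls/idpinitiatedsignon.aspx"
$rpid   = [uri]::EscapeDataString("google.com/a/contoso.com")
$target = [uri]::EscapeDataString("https://mail.google.com/a/contoso.com")
# Combine the individually encoded values, then encode the entire string once more
$inner  = [uri]::EscapeDataString("RPID=$rpid&RelayState=$target")
"${sts}?RelayState=$inner"

The resulting URL can be handed out as a bookmark that signs the user in and lands them on the correct resource.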

Source: Expta

Turning a Disaster Recovery Test into a Disaster

I recently assisted a customer with a disaster recovery test for Exchange 2013 that went very wrong. I’m sharing what happened here in case the same unfortunate series of events happens to you, so you know how to recover from it – or, better yet, prevent it in the first place.



The customer’s Exchange 2013 environment consists of a three-node DAG: two nodes in the primary datacenter and one in the DR datacenter. The DAG is configured for DAC mode. The customer wisely wanted to test the DR failover procedures so they would know what to expect if the primary datacenter ever goes offline.

The failover process went smoothly. The SMTP gateways and Exchange 2013 servers in the primary datacenter were turned off and the DAG was forced online in the DR datacenter. Internal and external DNS records were then updated to point to the DR site. CAS connectivity and mail flow were tested successfully from all endpoints – life was good. The customer wanted to leave it failed over to the DR site for a few hours to confirm there were no issues.

Now it was time to fail back. The documentation says to confirm that the primary datacenter is back online and that there’s full network connectivity between the Exchange servers in both sites. Then log in to each DAG member in the primary site and run “cluster node /forcecleanup” to ensure the servers are ready to be rejoined to the DAG.

But the customer scrolled past the part about where to run the command and ran it on the only node in the DR site. This essentially wiped the cluster configuration from the only node that held it. Instantly, the cluster failed and all the databases went offline. Since no other cluster nodes were online there was nothing to fail back to.

We fixed it by turning on the two DAG members in the primary site and starting the DAG in that site. That brought the databases online, but they were not up to date. We used the Windows Failover Cluster Manager console to evict the DR node and then add it back in. After AD replicated we saw that replication between all three nodes was working and the databases came up to date from Safety Net. We didn’t even need to reseed any of the database copies. Disaster averted.
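
After a recovery like this it's wise to verify DAG replication health before calling it done. Here's a quick sketch using the standard Exchange 2013 cmdlets – the server name EX01 is a placeholder:

# Check every database copy on a DAG member for health and queue lengths
Get-MailboxDatabaseCopyStatus -Server EX01 | Format-Table Name,Status,CopyQueueLength,ReplayQueueLength
# Run the built-in replication health checks against the same member
Test-ReplicationHealth -Identity EX01

Repeat for each DAG member; healthy copies show a Status of Mounted or Healthy with low queue lengths.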

So how did this happen and what can be done to prevent it?

Human nature is to skip large blocks of text and scan for the steps that need to be done. This is especially true when you’re fairly comfortable with the steps or you’re under pressure. For this reason, I keep my procedures concise, with no more than a sentence or two explaining why each step is being done.

In this case, the customer scrolled past the text explaining where to run the command and just ran it from the wrong server.

Here are my suggestions for creating disaster recovery documentation.

  • Know your audience. You need to make an assumption about who will be reading the DR documentation. Will it be the same people who manage the infrastructure in the primary site? Maybe not, if this is a true disaster. Make sure you write the documentation for the right audience. Avoid acronyms that unfamiliar readers may not know, or at least spell each term out and add the acronym the first time you use it. For example, Client Access Server (CAS).
  • Keep your DR procedures concise. People skip walls of text. Murphy’s Law says that DRs happen at the worst times and people don’t want to read a bunch of background information that’s not pertinent to the task at hand. In a real disaster there will probably be a lot of other things going on and management asking for status. You might want to write your procedures like a cookie recipe. You don’t need to be a chef to follow a recipe, but you do need to know how to fix it if something in the recipe goes wrong. Provide links in the documentation that reference TechNet concepts, as needed.
  • Highlight important steps. Use highlighting to call out important steps in the procedures, but don’t overdo it. Too much highlighting will make it difficult to read. You can highlight using color or simple blocks of text, such as:
Important: The following procedures should be run from SERVER1.
  • Make sure the steps read top to bottom. Don’t bounce around in the document or refer to previous steps unless it’s something like, “Repeat for all other client access servers.” Avoid procedures like, “Cut the blue wire after cutting the red wire.” Try not to allow page breaks between important steps, if possible.
  • Use targeted commands, when possible. If a command can be targeted to a specific object it won’t run if the object is unavailable. For example, the command “cluster node SERVER1 /forcecleanup” will run only if SERVER1 is up, rather than assuming the user is running it from the correct server, as illustrated below. This particular suggestion would have prevented the unexpected outage in my example.
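
To illustrate that last point, here are the two forms side by side – SERVER1 is a placeholder for a primary-site DAG member:

# Untargeted - cleans up whichever node you happen to run it on
cluster node /forcecleanup
# Targeted - acts only on SERVER1 and fails harmlessly if SERVER1 is down
cluster node SERVER1 /forcecleanup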



Source: Expta

Microsoft Ignite – One Conference to Rule Them All!




Yesterday morning, The Official Microsoft Blog announced the name of the company’s new enterprise technology conference – Microsoft Ignite. This conference, called MUTEE (Microsoft Unified Technology Event for Enterprises) by some folks, promises to be everything to everyone. It replaces TechEd North America, as well as all the specialty conferences held by the product teams throughout the year – MEC, the Lync Conference, the SharePoint Conference, MMS, etc.



Plus Office 365, of course.

It’s finally here — One enterprise conference with infinite possibilities.

For the first time ever, Microsoft Ignite brings together our best and brightest for a single, remarkable enterprise tech conference. Meet the minds that make it happen. For the first time under one roof, Microsoft Ignite gives you unprecedented access to hundreds of Microsoft technology and business leaders. Join us in Chicago.

In October I was invited to join the Microsoft Roundtable to provide feedback on this new conference. Microsoft was there to listen, not to be heard. They were particularly interested in hearing our feedback on MEC (the Microsoft Exchange Conference), which is highly regarded both by attendees and within Microsoft. MEC brought everything together in a perfect balance – mid-level and deep-dive sessions on Exchange (and Office 365), a tremendous sense of community, and attendees and product group members who are very passionate about the product.



My feedback was primarily about community, the depth of the sessions, and the level of participation that small sessions provide. I really think this is where MEC shines and I hope that Microsoft is able to pull off the same sort of vibe at Ignite.



By combining all these conferences into a single event, Microsoft expects 20,000(!) attendees at Ignite in Chicago. The expectation is to have 300-400 attendees per session, which is far too large to be “intimate”. Microsoft is planning to have a lot of gathering areas for impromptu “chalk talks” and collaboration.



The conference center in Chicago is HUGE and should easily accommodate that many attendees. I hope that the sessions for each product are close to each other. It would be difficult to navigate long distances, both vertically and horizontally, if the sessions are spread out.



I have attended every TechEd since 2004 and I’ve always gotten great value out of these conferences. My take-aways and participation have changed over the years, but I still get a ton of information and collaboration from the community here. I look forward to the same thing going forward. I sit on The Krewe board of directors as Vice-President and know a lot about the value of community that conferences like this bring. The Krewe Facebook page continues to be a resource for TechEd and Krewe alumni, where members exchange questions, advice, and their views on our industry. I encourage you to check it out.



Overall, I’m hopeful that Microsoft Ignite will be able to pull off their ambitious goal of combining the dedicated technology conferences and TechEd into one mega conference, while maintaining the community and collaboration that smaller conferences like MEC and the SharePoint Conference were able to attain.



Microsoft Ignite – Come for the technology. Stay for the community.



#IWasMEC  —  #IamIgnite

Source: Expta

Best Practices for Configuring Time in a Virtualized Environment


I frequently work with customers who are having trouble with time synchronization in their virtualized environment (whether they know it or not). Accurate time is immensely important in a Windows domain, since the primary authentication protocol is Kerberos. Kerberos uses time-based ticketing, and if the time is off by 5 minutes or more between computers, random authentication errors and other problems occur.



Time synchronization normally occurs automatically in a Windows domain, but things can get pretty screwed up in a virtualized environment when the VMs are configured to sync from a host with inaccurate time.



The following are my best practices for configuring and managing time in a virtualized environment:

  • Configure the Domain Controller holding the PDC Emulator FSMO role to synchronize time from an accurate time source. Run the following two commands from an elevated CMD prompt:

w32tm /config /manualpeerlist:pool.ntp.org /syncfromflags:manual /reliable:yes /update

net stop w32time && net start w32time

  • Use pool.ntp.org as your external time source, as shown above. This is a load-balanced set of time servers located around the world that returns the best servers for your geographic location. You may instead want to get time from an internal source; in that case, change the w32tm command as required. You can specify multiple peers by enclosing them in quotes, separated by spaces (e.g., /manualpeerlist:"source1 10.0.0.1"). Your PDC Emulator needs User Datagram Protocol (UDP) port 123 access to get time from the target, so configure your firewall accordingly. Verification commands are shown at the end of this section.

  • Disable time synchronization for all domain-joined VMs. How you do this depends on your virtualization platform. In VMware ESX it depends on the version you’re running. In Hyper-V you do this by disabling Time Synchronization in Hyper-V Integration Services of the VM, as shown below.

    Note that while I have always advised doing this, Microsoft has recently updated their guidance to match (at least for domain controllers). See TechNet article, Running Domain Controllers in Hyper-V. I recommend doing this for all VMs.
  • Ensure your VM host is configured to get accurate time. If you run VMware vSphere or ESX you must configure the host to get time from an external time source. VMware has a nasty habit of syncing time to VMs even though you’ve told it not to. See my article, Fixing Time Errors on VMware vSphere and ESX Hosts. If you’re running Hyper-V you should also configure the host to get accurate time. If the host is a member of the domain it should sync with the domain hierarchy, so you’re set. If the host is in a workgroup, configure it to get Internet Time from pool.ntp.org, as shown below. Note that domain-joined computers do not have the Internet Time tab.
  • Restart the Windows Time service on all domain computers to synchronize time with the domain hierarchy. The Windows Time service is responsible for syncing time in the network. The computer’s time should automatically update to match the Domain Controller time a few seconds after restarting the service. Use the following command to reset the service:
net stop w32time && net start w32time
    If the time difference is more than 5 minutes, you may find that the computer will not update its time. You may need to set the time manually, then restart the Windows Time service to get it into sync.
Please refer to the excellent TechNet article, How the Windows Time Service Works, for more detail about how time synchronization works in a computer network.
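
To confirm which source a computer is actually syncing from, you can query the Windows Time service directly:

# Show the current time source, stratum, and last successful sync
w32tm /query /status
# Show just the configured time source
w32tm /query /source
# Compare this computer's clock against the domain's time servers
w32tm /monitor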



Source: Expta

Making the Case for Documentation


The cloud moves fast, and it’s sometimes difficult to keep up with changes in functionality and new features. Customers rely on accurate documentation about how things work to make important business decisions. This is made all the more difficult when documentation is confusing or, worse, flat-out wrong.



Case in point is the documentation around Shared Mailboxes. The Exchange Online Limits service description around shared mailboxes is very precise:



Shared mailboxes have a 10GB limit for all plans, except Office 365 Enterprise K1 and Office 365 Government K1, which do not support shared mailboxes. The service description also specifies the following caveats regarding shared mailboxes:

A user must have an Exchange Online license in order to access a shared mailbox. Shared mailboxes don’t require a separate license. However, if you want to enable In-Place Archive for a shared mailbox, you must assign an Exchange Online Plan 1 or Exchange Online Plan 2 license to the mailbox. If you want to enable In-Place Hold for a shared mailbox, you must assign an Exchange Online Plan 2 license to the mailbox. After a license is assigned to a shared mailbox, the mailbox size will increase to that of the licensed plan.

In-Place Archive can only be used to archive mail for a single user or entity for which a license has been applied. Using an In-Place Archive as a means to store mail from multiple users or entities is prohibited. For example, IT administrators can’t create shared mailboxes and have users copy (through the Cc or Bcc field, or through a transport rule) a shared mailbox for the explicit purpose of archiving.

The purpose of imposing these limits is to prevent a customer from abusing shared mailboxes, such as licensing one mailbox and then giving a “free” shared mailbox to everyone else in the company to save licensing costs. There are other limitations on shared mailboxes, such as the inability to access them using ActiveSync, that also make them unsuitable as regular mailboxes.



As my ExtraTeam colleague, Chris Lehr, documents in his article, “Exchange Online Shared Mailboxes – Licensing, quota and compliance,” the reality is quite different. Here’s a summary of his findings:

  1. Shared mailboxes have a 50GB limit, not 10GB as per the documentation.
  2. You can put shared mailboxes on Litigation Hold or In-Place hold without licensing them, contrary to the documentation.
  3. If you put a shared mailbox on In-Place Hold, the Admin Console shows it’s configured, but the Management Shell says it’s not. In-Place hold does work, however.
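
You can check these values in your own tenant from Exchange Online PowerShell. A minimal sketch – the mailbox name is a placeholder:

# Inspect the actual quota and hold settings on a shared mailbox
Get-Mailbox -Identity "shared-mbx" | Format-List RecipientTypeDetails,ProhibitSendReceiveQuota,LitigationHoldEnabled,InPlaceHolds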

With this in mind, why would you burn a license on a shared mailbox? Clearly the documentation is wrong or something is screwed up in the service.



All of this illustrates the need for clear, concise, and above all accurate documentation.


Unfortunately, Microsoft decided to lay off the Exchange technical writers in the latest round of cuts last month. Read Tony Redmond’s article, “Microsoft layoffs impact Exchange technical writers – where now for documentation?” for his take on this. The presumption is that all Exchange documentation can be done cheaper in China, where most Office 365 development is done. While you’re at it, check out “The best Exchange documentation update ever?”


This is another sad loss for the Exchange community but an even bigger loss for customers. How can they make good business decisions based on bad documentation?

My recommendation for shared mailboxes is to follow the official documentation for planning. You never know when Office 365 may actually enforce those limits or features.




Source: Expta

Scheduled Task to Update Your Federation Trust

Microsoft published an article this morning about keeping your federation trust up-to-date. This is really important if you are in a hybrid configuration or if you are sharing free/busy information between two different on-premises organizations using the Microsoft Federation Gateway as a trust broker. Microsoft periodically updates the certificates used by the Microsoft Federation Gateway and updating your federation trust keeps these certs up-to-date.



Exchange 2013 SP1 and later automatically updates the federation trust. If you’re running at least this version of Exchange 2013 (and you should), you’re good to go. If you’re an Exchange 2013 RTM/CU1/CU2/CU3 customer who hasn’t upgraded yet, read on…



In the article, Microsoft provides a command to run on one of your Exchange 2010 servers that creates a Scheduled Task to update the federation trust daily. This script only works on Exchange 2010. If you have a pure Exchange 2013 pre-SP1 environment, you can use this command to create a scheduled task:

Schtasks /create /sc Daily /tn FedRefresh /tr "%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\powershell.exe -command ". $ENV:ExchangeInstallPath\bin\RemoteExchange.ps1; Connect-ExchangeServer -auto -ClientApplication:ManagementShell; $fedTrust = Get-FederationTrust; Set-FederationTrust -Identity $fedTrust.Name -RefreshMetadata" /ru System

Note that this version also works on Exchange 2010 servers, and even in the rare case where PowerShell is not located on the C: volume.
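
Whichever version of the task you create, you can confirm the trust is healthy afterward from the Exchange Management Shell. A quick check – the user is a placeholder and must be a mailbox in your organization:

# Validate certificates and token exchange with the Microsoft Federation Gateway
Test-FederationTrust -UserIdentity user@contoso.com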



Source: Expta

How to Perform an Extended Message Trace in Office 365

You can use Message Trace from the Exchange Admin Center in the Office 365 Portal to trace emails through Exchange Online. You can trace messages based upon a number of criteria including email address, date range, delivery status, or message ID.



To perform a Message Trace, click Mail Flow in the EAC and select Message Trace, then enter the trace criteria. The high-level results will output to a new browser window.



High-Level Message Trace Output

Click the “pencil” icon to see more details on the selected item.



Detailed Message Trace Output

A standard message trace is useful for basic message tracing. It answers the question, “Did the message get delivered?”, but that’s about it. If you want to see all the real details of message transport you need to perform extended message tracing.



The trick to performing an extended message trace in the EAC is that you have to choose a Custom date range of 8 days or more. You will then see additional options for the trace at the bottom of the form. Note that Exchange Online keeps logs for the last 90 days.



Extended Message Trace Options


Click the checkbox for Include message events and routing details with report; otherwise, the report will include only a few more details than a regular trace: origin_timestamp, sender_address, recipient_status, message_subject, total_bytes, message_id, network_message_id, original_client_ip, directionality, connector_id, and delivery_priority. It also won’t show each hop through Exchange Online.



Note that including message events and routing details will result in a larger report that takes longer to process, so you will probably want to scope the message trace down to a particular sender or recipient. The following details will be included in the report: date_time, client_ip, client_hostname, server_ip, server_hostname, source_context, connector_id, source, event_id, internal_message_id, message_id, network_message_id, recipient_address, recipient_status, total_bytes, recipient_count, related_recipient_address, reference, message_subject, sender_address, return_path, message_info, directionality, tenant_id, original_client_ip, original_server_ip, and custom_data.



You have the option to choose the message direction (Inbound, Outbound, or All) and the original client IP address, if desired. You can also specify the report title and a notification email address. Note that the email address must be one for an accepted domain in your tenant. The mailbox does not have to be in the cloud.



The search will take some time, depending on the search criteria you entered and the volume of email. You can click View pending or completed traces at the top of the Message Trace form to view the status of the extended trace. When it completes you can click the link to Download this report or, if you configured the search to send a notification, click the report link in the notification email.
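
If you prefer scripting it, the same extended trace can be queued from Exchange Online PowerShell as a historical search. A minimal sketch, with placeholder dates and addresses:

# Queue an extended (historical) message trace and email a notification when done
Start-HistoricalSearch -ReportTitle "Extended trace" -ReportType MessageTraceDetail -StartDate "11/01/2014" -EndDate "11/08/2014" -SenderAddress user@contoso.com -NotifyAddress admin@contoso.com
# Check on pending and completed searches
Get-HistoricalSearch | Format-Table ReportTitle,Status,Rows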

The extended message trace output is a CSV file that you can save and open in Excel. Here’s the best way to view it in Excel:

  • Select cell A1 and press Shift-Ctrl-End to highlight all the cells.
  • Click Insert > Table and click OK.
  • Click View > Freeze Panes > Freeze Top Row.
  • Select the entire worksheet and then double-click the line between columns A and B to autosize all the columns in the table.
Auto size the columns in Excel
You will then have an extended trace report showing all the transport details of the messages that match your search criteria. This report can be filtered by clicking the drop down arrows on the title row.

If you plan to save the report, be sure to save it as an Excel Workbook (*.xlsx) or you will lose the formatting.
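
If you'd rather skip Excel entirely, the same CSV can be filtered with PowerShell using the column names listed earlier. The file name and filter values below are examples:

# Load the extended trace report and filter on any of its columns
$trace = Import-Csv .\ExtendedTrace.csv
$trace | Where-Object { $_.recipient_address -eq "user@contoso.com" } | Select-Object date_time,event_id,source,recipient_status | Format-Table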




Source: Expta

EXPTA Gen5 Windows 2012 R2 Hyper-V Server for Around $1,000 USD – Parts Lists and Videos!


I’m very pleased to announce the release of my 5th generation Windows Server 2012 R2 Hyper-V lab server, the Gen5!




You can use this home server to create your own private cloud, prototype design solutions, test new software, and run your own network like I do. Nothing provides a better learning tool than hands-on experience!



This is faster and more powerful than my 4th generation server and costs about $200 less!

My Design Requirements


This design is the best of all worlds – super-fast performance with higher SSD capacity at less cost. My core design criteria:

  • Windows Server 2012 R2 Hyper-V capable. Hyper-V for Windows Server 2012 R2 requires hypervisor-ready processors with Second Level Address Translation (SLAT).
  • Minimum of 4 cores
  • 32GB of fast DDR3 RAM
  • Must support fast SATA III 6Gb/s drives
  • Must have USB 3.0 ports for future portable devices
  • Low power requirements
  • Must be quiet
  • Small form factor
  • Budget: Around $1,000 USD

In the land of virtual machines, I/O is king. SSDs provide the biggest performance gains by far. You can invest in the fastest processor and RAM available, but if you’re waiting on the disk subsystem you won’t notice much of a performance gain. That’s why I focus on hyper-fast, high-capacity SSDs in this build. Thankfully, SSDs have gotten bigger, faster, and cheaper over time. I’m going with brand new Crucial MX100 SATA3 SSDs in the Gen5 – one 256GB SSD for the OS and another 512GB SSD for active VMs. The 512GB drive delivers up to 90,000 IOPS for random reads and up to 85,000 IOPS for random writes.


The second most important factor in Hyper-V server design is capacity. Memory, and to a smaller degree CPU, determines how many VMs you can run at once. Because I want a small form factor, I need to go with a MicroATX motherboard, and the maximum amount of memory that can be installed on these Intel-based motherboards is 32GB. I chose 32GB of Corsair XMS3 DDR3 RAM for this build. This is 1.5V DDR3-1333 (PC3-10666) RAM with a low CAS 9 latency and 9-9-9-24 timing. The single package includes four matched 8GB 240-pin dual-channel DIMMs.



The processor I chose is the new Intel Core i5-4590S Haswell Refresh quad-core CPU. Even though all four cores run at a quick 3.0 GHz, it uses only 65W. It can Turbo Boost up to 3.7 GHz, but it’s already plenty fast enough. The beautiful Intel aluminum heatsink and fan included with the processor keep the CPU running cool and quiet without the need for exotic liquid cooling or extra fans. This processor includes integrated Intel HD Graphics 4600, so there’s no need for a discrete video adapter.



I chose the ASRock B85M PRO4 Micro-ATX motherboard for the Gen5. I’ve used ASRock for previous builds and I think they produce some of the best motherboards available. This LGA 1150 mobo provides 4x SATA3 6Gbps ports (enough for all the drives in the Gen5) plus 2x SATA2 3Gbps ports. It also features the Intel B85 chipset, USB 3.0 and USB 2.0 headers, HDMI/DVI/VGA outputs, and an Intel I217V Gigabit NIC (which requires some tweaking – see my build notes below).



For mass storage I chose the tried-and-true Western Digital WD Blue 1TB SATA3 hard disk and a Samsung SH-224DB/RSBS 24X SATA DVD±RW drive. I use the WD Caviar Blue drive to store ISOs and VM base images. You can get a larger 2TB or 3TB version of the same drive for a few bucks more, but 1TB is plenty for most needs. Even so, I enable Windows Server 2012 R2 disk deduplication on all my drives to reduce the storage footprint. To save power, I configure Windows power settings to turn off the drive after 10 minutes of non-use.
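
For reference, turning on deduplication for a data volume takes only a few commands. A sketch, assuming D: is the data drive:

# Install the deduplication feature and enable it on the data volume
Install-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume D:
# Start an optimization pass now instead of waiting for the background schedule
Start-DedupJob -Volume D: -Type Optimization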




All these components reside in a cool IN-WIN BL631.300TBL MicroATX Slim Desktop case. This is a new chassis for me and I’m quite impressed. It’s smaller and lighter than the Rosewill Gen4 case and the build quality is great: heavy-gauge steel and no sharp edges. It includes a 300W power supply, which is more than enough – the total estimated power required for the Gen5 is normally 171W, or 191W with all drives running at the same time. The front panel has 4x USB 2.0 ports, audio outputs, and a cool blue power light. I only wish the front USB ports were USB 3.0. I’ve actually found that it’s a lot more convenient to use a 6.5′ USB 3.0 A-Male to A-Female extension cable, which I route up to my workspace, anyway.

Parts List


Here’s the complete parts list for the Gen5 including the necessary drive bay converter, cables, and adapters. As usual, I link to Amazon because they nearly always have everything in stock, their prices are very competitive, and Amazon Prime gets you free two-day shipping! If you don’t have Amazon Prime you can sign up here for a free 30-day trial and cancel after you’ve ordered the parts, if you want.



This time I’m including a handy “Buy from Amazon.com” button which allows you to put all the items into your cart with one click. That makes it easy to see the current price of all the items at once. Note that Amazon’s prices do change depending on inventory, promotions, etc. At the time I purchased these parts, the total came out to $1045.89 USD with free two-day shipping.

  • In-Win Case BL631.300TBL MicroATX Slim Desktop Black 300W 1x5.25 External Bays USB HD Audio
    Sleek Micro ATX case with removable drive bay cage for easy access. 1x external 5.25" drive bay and 2x internal 3.5" drive bays. Includes quiet 300W PSU, 4x front USB 2.0 and audio ports. Great build quality and smooth folded edges. 3 year limited warranty.
  • Intel Core i5-4590S Processor (6M Cache, 3.70 GHz) BX80646I54590S
    This is a 4th generation LGA 1150 Haswell Refresh Intel processor and includes Intel HD Graphics 4600. Runs at 3.0 GHz, but can Turbo Boost up to 3.70 GHz. Requires only 65W! Includes Intel aluminum heat sink and silent fan. 3 year limited warranty.
  • Corsair XMS3 32GB (4x8GB) DDR3 1333 MHz (PC3 10666) Desktop Memory (CMX32GX3M4A1333C9)
    1.5V 240-pin dual-channel 1333MHz DDR3 SDRAM with built-in heat spreaders. Low 9-9-9-24 CAS latency. Great RAM at a great price. Package contains 4x 8GB DIMMs (32GB). Lifetime warranty.
  • ASRock LGA1150/Intel B85/DDR3/Quad CrossFireX/SATA3 and USB 3.0/A&GbE/MicroATX Motherboard B85M PRO4
    I chose this LGA 1150 Micro ATX motherboard because it has 4x SATA 6Gb/s and 2x SATA 3Gb/s connectors. It uses the Intel B85 Express chipset, has 1x PCI-E 3.0 slot, 1x PCI-E 2.0 slot, 2x PCI slots, HDMI/DVI/VGA outputs, USB 3.0 and 2.0 ports, and an Intel I217V Gigabit NIC (see below). It also has a great UEFI BIOS (see video). 3 year limited warranty.
  • Crucial MX100 256GB SATA 2.5" 7mm (with 9.5mm adapter) Internal Solid State Drive CT256MX100SSD1
    256GB SATA 6Gb/s (SATA III) SSD used for the Windows Server 2012 R2 operating system. New Marvell 88SS9189 controller with Micron custom firmware. MLC delivers up to 85,000 IOPS 4KB random read / 70,000 IOPS 4KB random write. 3 year warranty.
  • Crucial MX100 512GB SATA 2.5" 7mm (with 9.5mm adapter) Internal Solid State Drive CT512MX100SSD1
    512GB SATA 6Gb/s (SATA III) SSD used for active VMs (the VMs I normally have running, like a Domain Controller, Exchange servers, Lync servers, etc.). MLC delivers up to 90K IOPS 4KB random read / 85K IOPS 4KB random write. Mwahaha! 3 year limited warranty.
  • WD Blue 1 TB Desktop Hard Drive: 3.5 Inch, 7200 RPM, SATA 6 Gb/s, 64 MB Cache – WD10EZEX
    Best-selling 1TB Western Digital Caviar Blue SATA 6Gb/s (SATA III) drive. Used for storing ISOs, seldom-used VMs, base images, etc. I usually configure this drive to sleep after 10 minutes to save even more power. 2 year warranty.
  • Samsung SH-224DB/RSBS 24X SATA DVD±RW Internal Drive
    Great quality 24x ±RW DVD burner. It’s cheap, too. Even though it’s SATA2, I connect it to one of the SATA3 ports on the motherboard for no particular reason. 1 year limited warranty.
  • SABRENT 3.5-Inch to SSD / 2.5-Inch HDD Bay Drives Converter (BK-HDDH)
    Metal mounting kit for 2.5" SSD drives. One mounting kit holds up to two SSDs, stacked on top of each other.
  • StarTech 6in 4 Pin Molex to SATA Power Cable Adapter (SATAPOWADAP)
    The IN-WIN’s 300W power supply has three SATA power connectors for drives, which is one short of what we need. Use this adapter to convert one of the two Molex connectors to SATA.
  • C&E CNE11445 SATA Data Cable (2pk.)
    We need 4x SATA cables for this build. The ASRock motherboard comes with two black SATA cables and the Samsung DVD burner comes with a red one, so I need one more. This two-pack is cheaper than some single cables, and who doesn’t need an extra SATA cable anyway? Flat (not L-shaped) connectors work best for this build. FYI, there’s no technical difference between SATA2 and SATA3 cables.



Click the video below for a description of my 5th Generation Hyper-V Lab server. Sorry Apple device users, the videos and slideshow below use Flash. :(

Here’s a video demonstrating the blistering fast boot speed of this server:

Build Notes


Pictures speak louder than words. Here’s a slideshow showing how I assembled the Gen5 server with detailed photos where needed.

Once the components are put together you need to configure the UEFI BIOS before you can install Windows Server 2012 R2. Here’s a helpful video showing how to update and configure the ASRock’s UEFI BIOS:

Sweet! Now it’s time to install Windows Server 2012 R2, which takes about 8 minutes from DVD. Amazing!




How to install the Intel I217V NIC Driver




After you install the OS we need to update the drivers, but there’s a problem. Intel doesn’t want you to use their desktop-class I217-V gigabit network adapter in Windows Server, so they cripple the drivers so they won’t install on anything better than Windows 8.1. This is chicken poop, as far as I’m concerned, and shame on them! Lucky for you, I’ve done the hard work to remove this obstacle.

  • Run the following from an elevated CMD prompt:

bcdedit -set loadoptions DISABLE_INTEGRITY_CHECKS
bcdedit -set TESTSIGNING ON

  • Reboot the server.
  • Download the latest network driver from the Intel Download Center. You’ll want the PROWinx64.exe file for Windows 8.1 x64.
  • Download the updated e1d64x64.inf driver file from my website.
  • Run the PROWinx64.exe file to extract the drivers and run the Intel(R) Network Connections Install Wizard. Do not click Next yet.
  • Right-click the Windows icon in the Taskbar, click Run, and enter %TEMP%. This will open File Explorer to the Temp folder used by Windows.
  • Open the RarSFX0 folder and drill down into the PRO1000\Winx64\NDIS64 folder.
  • Copy the e1d64x64.inf file you downloaded from my website to this folder, overwriting the existing file.
  • Now continue the Intel Network Connections Install Wizard to complete the installation of the new driver.
  • You will see a security warning that the updated INF file is not digitally signed. Click Install this driver software anyway.



  • The driver will install and the Intel adapter will be enabled.
  • Run the following from an elevated CMD prompt:

bcdedit -set loadoptions ENABLE_INTEGRITY_CHECKS
bcdedit -set TESTSIGNING OFF

  • Reboot the server and you’re done. Whew! Thanks a lot, Intel!!

Now you can install the other software and utilities from the ASRock motherboard DVD. The installer itself won’t work because it’s written for Windows 8, so just drill into the Drivers folder using File Explorer. I recommend installing the following software:

  • Intel Chipset Device Software (Drivers\INF\Intel\v9.4.0.1026)
  • Intel Management Engine Components (Drivers\ME\Intel\v9.5.14.1724_5M)
  • Intel Graphics Driver (Drivers\VGA\Intel\v15.33.1.64.3277)
  • Intel Rapid Storage Technology (Drivers\Rapid Storage Technology\Intel\v12.8.0.1016)
  • RealTek Audio Drivers (Drivers\Audio\REALTEK\7004)
  • Marvell MSU V4 (Drivers\SATA3\Marvell\v4.1.0.2013)
  • ASRock Restart to UEFI (Utilities\RestartToUEFI\ASRock)
  • ASRock A-Tuning Utility (Utilities\A-Tuning\ASRock)
After you’ve installed the configuration utilities you should see that there are no unknown devices in Device Manager. It’s time to install the Hyper-V role and start building out your home lab!
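
Installing the role itself is a one-liner from an elevated PowerShell prompt; the server reboots to finish:

# Install Hyper-V with the management tools and restart automatically
Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart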



I’ll be presenting a session on building and managing this Hyper-V server at IT/Dev Connections in Las Vegas on September 17, 2014. There will be lots of great content delivered by MCMs, MVPs, and other independent experts. I really hope you can make it! Please contact me for a special discount code.



As always, if you have any questions or comments please leave them below. I hope you enjoy reading about these server builds and take the opportunity to make this investment in your career.



Source: Expta