Windows Server 2003 R2 and Clustering

Rod Fournier and I had one of our geek talks recently, and the same question came up in our clustering class in Denver and again in a conference call last week. What does R2 offer when it comes to clustering?


The answer is, nothing.


Let me expand on this, because it really isn’t true. While R2 offers nothing new for the server clustering feature set itself, it does offer many benefits that can improve the performance and reliability of clustered services, and it also adds a new resource type.


So, let’s try this again. What does R2 offer when it comes to clustering?


Improved DFS. The DFS improvements allow for scheduling and throttling of replication traffic and use compression across WAN links. DFS also offers the ability to store and forward changes in response to WAN failures. Since it is possible to run DFS roots on a server cluster, this can impact your current environment.


Improved Print Management Console. The new console provides a better view of the overall printer topology (yes, you can see all of the printers in the org from a single interface), and improvements in the MMC (now version 2.1) provide increased support for remote resources like printers. One feature, which I have not played with yet, is that administrators are supposed to be able to kill the spooler for a single printer without impacting the currently spooled print jobs for all other printers on the same server. This is great news for organizations that cluster critical printers.


NFS. With Services for UNIX built into R2, a new clustering resource type is now available: NFS.


File Server Resource Manager. The new File Server Resource Manager is going to make my life easier; I can see that now. With FSRM, administrators get more granular quota capabilities, managing per volume, per folder, or per share. Also added is a new file screening tool, which allows administrators to disallow certain file types from being stored, such as mp3 files. To make the deal even better, FSRM has new reporting capabilities built in. Not exactly a great deal for clustering overall, but when using R2 to cluster file servers, these benefits are pretty nice ones to have at your fingertips.
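

To get a feel for what file screening does, here is a toy sketch in Python (purely conceptual; FSRM enforces screening inside Windows through its own snap-in and tools, not through a script like this). The idea is the same, though: reject files whose extensions are on a block list.

    # Toy illustration of file screening: disallow certain file types.
    # FSRM does this enforcement itself; this just models the concept.
    import os

    BLOCKED_EXTENSIONS = {".mp3", ".avi", ".wmv"}   # example screen list

    def screen_file(filename):
        # Return True if the file type is allowed on the share.
        ext = os.path.splitext(filename)[1].lower()
        return ext not in BLOCKED_EXTENSIONS

    for name in ("report.doc", "mixtape.mp3"):
        print(name, "->", "allowed" if screen_file(name) else "blocked")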


Storage Manager. The new Storage Manager allows administrators of R2 servers to manage and administer SANs. For example, if the storage vendor supports Microsoft’s APIs, an administrator can use this tool to perform discovery on devices, provision storage, allocate storage, and manage multipathing configurations. Yes! Yes! Yes! No more calling the storage team for every little thing when it comes to configuring my LUNs for clustering. How nice this dream is…


OK, so, nothing in R2 directly impacts Windows Server 2003 server clustering, but the changes do make life better for those services and resources that are on the server cluster.

Longhorn and Exchange 12 – 64 bit or is 32 bit out there?

This topic kind of caught me off guard a couple of weeks ago. I remember hearing very clearly (OK, I read it from several very reputable sources) that Exchange 12 would only be offered in 64 bit. I also remember hearing that Vista would be available in both 32 bit and 64 bit. The one that was fuzzy to me was Longhorn Server. I had heard that it would only be offered in 64 bit, but that was semi-wrong.


OK, so, to set it straight as of today:


  1. Longhorn Server will be offered in 32 bit and 64 bit when it first releases. The R2 version will be 64 bit only.
  2. Exchange 12 will be offered in 32 bit for demonstration, evaluation, and educational purposes. It will be fully featured.
  3. Exchange 12 will be offered in 64 bit for production environments and will be the only version fully supported for production.

OK, that is very odd to me. Why would Microsoft bring out 32 bit versions of Longhorn and Exchange 12 when it would mean an extra investment in maintaining code bases and extra development effort? The issue seems to come down to virtual machines.


Virtual Server 2005 and Virtual Server 2005 R2 can be run on 64 bit hardware, but they can only emulate 32 bit guest environments. So, after pushing organizations to use virtualization for testing, and after a significant investment in virtualization for training purposes, we have a problem. Well, Microsoft has a problem.


Until Virtual Server is able to run 64 bit guests, there appears to be a big roadblock.


Stay tuned for the next change. :)

LeftHand Networks – 256 TB LUN

What an awesome way for a geek to spend an evening.


Rod Fournier and I met with LeftHand Networks in Boulder, Colorado last night, spending about 3 hours reviewing their iSCSI technology. It is easy to manage, and with its multipathing capabilities it is very fast (it will be incredible once 10GigE is out).


I left there a very happy camper, and I didn’t even get any schwag.


Why was I so happy? Well, I am glad you asked. We found that LeftHand Networks can create, and will support, a single LUN (yep, one LUN) up to 256 terabytes in size. What else made me so happy? Their storage modules include a 2U unit that holds 6 TB of raw storage. Yep, 6 TB in 2U. Yes, really!
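

Just for fun, here is some back-of-the-envelope math on what it would take to back a full 256 TB LUN with those 2U modules (raw capacity only, ignoring RAID and replication overhead):

    # Back-of-the-envelope: how many 6 TB / 2U modules behind a 256 TB LUN?
    # Raw capacity only; ignores RAID and replication overhead.
    import math

    lun_tb = 256        # the supported single-LUN ceiling
    module_tb = 6       # raw TB per storage module
    module_u = 2        # rack units per module

    modules = math.ceil(lun_tb / module_tb)
    print(modules, "modules =", modules * module_u, "U of rack space")
    # -> 43 modules = 86 U, just over two full 42U racks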


So, we asked them to demo the ability to create really large partitions. They created a 40 TB LUN for us, and it took about 5 minutes to format and become available. So, I am still happy.


OK, let’s cluster it. Whoa, wait… it’s over 2 TB, so it is a GPT disk, not an MBR disk. While it says it is a basic disk (which it is), it is GPT, so it can’t be used. In Windows Server 2003 Enterprise (and Datacenter), you can’t use GPT drives for clustering. You can use them in single server implementations, but not as shared (yeah, they are not really shared – it is a shared-nothing model) disks in a cluster. Crap. So what next?
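

That 2 TB ceiling is baked into the MBR partition scheme itself: partition sizes are stored as 32-bit sector counts, and with the traditional 512-byte sector that tops out at exactly 2 TiB. A quick sanity check:

    # Why MBR stops near 2 TB: partition sizes are 32-bit sector counts,
    # and a sector is traditionally 512 bytes.
    max_sectors = 2 ** 32          # largest value a 32-bit field can hold
    sector_bytes = 512             # classic sector size
    print(max_sectors * sector_bytes / 2 ** 40, "TiB")   # -> 2.0 TiB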


Brainstorm time… As we bounced ideas off of each other, I came up with an idea. I am not sure if it will really work, but it makes some sense. We did a quick test by creating a single node cluster using the demo guy’s notebook (he was running 2003 Enterprise), and it worked there. The idea is to mount the large drive as a folder on a smaller (and supported for clustering) drive. The cluster service will control access to the smaller drive and thus control access to the larger drive that is mounted on it. So, we tested it by doing this:


  1. Create a small 1 GB disk for the cluster, just to have something that can be controlled and managed by the cluster service, and expose it to both nodes of the cluster.
  2. Create a drive over 2 TB using GPT (we did 6 TB) and expose it to both nodes of the cluster.
  3. Mount the 6 TB drive as a folder on the 1 GB drive (a rough sketch of this step follows the list).
  4. Configure the 1 GB drive in clustering as a physical disk resource.
  5. The result? It worked. You could easily access the 6 TB drive through the new physical disk resource via the cluster service.
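
For the mount in step 3, Disk Management works fine, but it can also be scripted with Windows’ built-in mountvol utility. Here is a rough sketch (driven from Python purely for illustration; the folder path and volume GUID are placeholders you would replace with your own, which a bare mountvol command will list):

    # Rough sketch of step 3: mount the big GPT volume as a folder on the
    # small clustered drive via the built-in mountvol utility.
    # The folder path and volume GUID below are placeholders.
    import os
    import subprocess

    mount_folder = r"S:\BigDisk"   # empty folder on the 1 GB clustered disk
    volume_guid = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"

    os.makedirs(mount_folder, exist_ok=True)   # mount point must be an empty NTFS folder
    subprocess.run(["mountvol", mount_folder, volume_guid], check=True)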

The next questions are:


  1. Will it really work with multiple nodes?
  2. Will Microsoft support it?

I can’t see Microsoft supporting this, but we shall see what happens.




Update – an answer: Thanks to some great responses, we have decided that this just won’t reliably work. After all, on failover to another node, there is nothing to make sure the large drive gets its cache properly flushed. So, the new focus will have to be on using a 3rd party app, like Veritas, to provide dynamic disks to Microsoft clustering.