Adding disks into a cluster using PowerShell

With Windows 2008 R2, we now have the option of using PowerShell alongside the cluster.exe command when working with the cluster from a command line. If you want to add a disk to a cluster using PowerShell, there are several different options.

In my previous article, I explained how to add a disk in Windows 2008. That still applies to 2008 R2, but if you’re more comfortable using PowerShell, here are a couple of ways to do the same thing. I am a novice PowerShell user, so my examples are just some of the ways of accomplishing this task; feel free to leave comments with your own.

If you’ve got a disk that shows up as an available disk, adding it through PowerShell is very straightforward. Here’s how we can check whether there are any available disks for the cluster:

PS C:\> Get-ClusterAvailableDisk

Cluster    : MyCluster
Name       : Cluster Disk 2
Number     : 6
Size       : 17425367040
Partitions : {X:}
Id         : 0xB6F579CA

For this disk, the easiest way to add it to the cluster would be to use the following command:

PS C:\> Get-ClusterAvailableDisk | Add-ClusterDisk

Name                          State                         Group                         ResourceType
----                          -----                         -----                         ------------
Cluster Disk 2                OnlinePending                 Available Storage             Physical Disk

This command would add every disk from the Get-ClusterAvailableDisk output into the Available Storage group in your cluster, using the default (terrible) naming convention for cluster disks. It's a nice little command for quickly adding disks to the cluster, and you can always rename the resources afterwards, as shown in the sketch below.
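
As a quick example, here's a minimal sketch that adds just one specific available disk and then renames the resulting resource. The disk number 6 and the default name "Cluster Disk 2" come from the example output above, and I believe the Name property on the resource object is writable, so adjust for your environment:

PS C:\> Get-ClusterAvailableDisk | Where-Object { $_.Number -eq 6 } | Add-ClusterDisk
PS C:\> (Get-ClusterResource "Cluster Disk 2").Name = "Disk X:"

However, if you're in a situation where the disk is NOT showing up in the Get-ClusterAvailableDisk output, like in a multi-site cluster, we'll need to work a little harder to add the disk into the cluster. Previously, I showed how this was done using cluster.exe, so we can apply those same concepts to PowerShell. First, we'll create the empty resource: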

PS C:\> Add-ClusterResource -Group "Available Storage"

cmdlet Add-ClusterResource at command pipeline position 1
Supply values for the following parameters:
Name: Disk X:
ResourceType: Physical Disk

Name                          State                         Group                         ResourceType
----                          -----                         -----                         ------------
Disk X:                       Offline                       Available Storage             Physical Disk

In this example, the Add-ClusterResource command prompted me for the missing parameters. I manually specified the Disk X: name and the Physical Disk resource type. I can avoid these prompts by specifying the -Name and -ResourceType values in the command:

PS C:\> Add-ClusterResource -Name "Disk X:" -ResourceType "Physical Disk" -Group "Available Storage"
	
Name                          State                         Group                         ResourceType
----                          -----                         -----                         ------------
Disk X:                       Offline                       Available Storage             Physical Disk

So at this point, I have an empty disk resource with no parameters to identify the disk:

PS C:\> Get-ClusterResource "Disk X:" | Get-ClusterParameter

Object                        Name                          Value                         Type
------                        ----                          -----                         ----
Disk X:                       DiskIdType                    5000                          UInt32
Disk X:                       DiskSignature                 0x0                           UInt32
Disk X:                       DiskIdGuid                                                  String
Disk X:                       DiskRunChkDsk                 0                             UInt32
Disk X:                       DiskUniqueIds                 {}                            ByteArray
Disk X:                       DiskVolumeInfo                {}                            ByteArray
Disk X:                       DiskArbInterval               3                             UInt32
Disk X:                       DiskPath                                                    String
Disk X:                       DiskReload                    0                             UInt32
Disk X:                       MaintenanceMode               0                             UInt32
Disk X:                       MaxIoLatency                  1000                          UInt32
Disk X:                       CsvEnforceWriteThrough        0                             UInt32
Disk X:                       DiskPnpUpdate                 {}                            ByteArray


I would then issue the following command to set the DiskPath value, then query the parameters again to see the result:

PS C:\> Get-ClusterResource "Disk X:" | Set-ClusterParameter DiskPath "X:"

PS C:\> Get-ClusterResource "Disk X:" | Get-ClusterParameter

Object                        Name                          Value                         Type
------                        ----                          -----                         ----
Disk X:                       DiskIdType                    5000                          UInt32
Disk X:                       DiskSignature                 0x0                           UInt32
Disk X:                       DiskIdGuid                                                  String
Disk X:                       DiskRunChkDsk                 0                             UInt32
Disk X:                       DiskUniqueIds                 {}                            ByteArray
Disk X:                       DiskVolumeInfo                {}                            ByteArray
Disk X:                       DiskArbInterval               3                             UInt32
Disk X:                       DiskPath                      X:                            String
Disk X:                       DiskReload                    0                             UInt32
Disk X:                       MaintenanceMode               0                             UInt32
Disk X:                       MaxIoLatency                  1000                          UInt32
Disk X:                       CsvEnforceWriteThrough        0                             UInt32
Disk X:                       DiskPnpUpdate                 {}                            ByteArray

At this point, I would bring the disk online, and the cluster then performs its magic to translate the DiskPath into the DiskSignature and other properties of the disk. Much like with cluster.exe, I can bring the disk online in PowerShell using the Start-ClusterResource command:

PS C:\> Start-ClusterResource "Disk X:"

Name                          State                         Group                         ResourceType
----                          -----                         -----                         ------------
Disk X:                       Online                        Available Storage             Physical Disk

PS C:\> Get-ClusterResource "Disk X:" | Get-ClusterParameter

Object                        Name                          Value                         Type
------                        ----                          -----                         ----
Disk X:                       DiskIdType                    0                             UInt32
Disk X:                       DiskSignature                 0xB6F579CA                    UInt32
Disk X:                       DiskIdGuid                                                  String
Disk X:                       DiskRunChkDsk                 0                             UInt32
Disk X:                       DiskUniqueIds                 {16, 0, 0, 0...}              ByteArray
Disk X:                       DiskVolumeInfo                {1, 0, 0, 0...}               ByteArray
Disk X:                       DiskArbInterval               3                             UInt32
Disk X:                       DiskPath                                                    String
Disk X:                       DiskReload                    0                             UInt32
Disk X:                       MaintenanceMode               0                             UInt32
Disk X:                       MaxIoLatency                  1000                          UInt32
Disk X:                       CsvEnforceWriteThrough        0                             UInt32
Disk X:                       DiskPnpUpdate                 {0, 0, 0, 0...}               ByteArray

Much like using the DiskPath value with cluster.exe, the cluster identifies the mount point specified in the DiskPath property value and then updates the cluster disk resource properties.
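
As a side note, if you only care about one of these properties, Get-ClusterParameter also takes a parameter name, so you can check a single value without wading through the whole list:

PS C:\> Get-ClusterResource "Disk X:" | Get-ClusterParameter DiskSignature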

As this is PowerShell, we can combine creating the resource, setting the private property, and bringing the resource online into one big, ugly command:

PS C:\> Add-ClusterResource -Name "Disk X:" -ResourceType "Physical Disk" -Group "Available Storage" | Set-ClusterParameter DiskPath "X:" ; Start-ClusterResource "Disk X:"

Name                          State                         Group                         ResourceType
----                          -----                         -----                         ------------
Disk X:                       Online                        Available Storage             Physical Disk
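
If you find yourself doing this a lot, you could wrap the whole sequence in a small helper function. Here's a rough sketch (Add-ClusterDiskByPath is my own name, not a built-in cmdlet, and the "Disk Y:" / Y: values in the usage line are hypothetical):

Import-Module FailoverClusters

function Add-ClusterDiskByPath {
    param(
        [string]$Name,
        [string]$Path,
        [string]$Group = "Available Storage"
    )
    # Create the empty Physical Disk resource in the target group
    $res = Add-ClusterResource -Name $Name -ResourceType "Physical Disk" -Group $Group
    # Point the resource at the mount point; the cluster resolves the
    # signature/IDs and volume info when the resource comes online
    $res | Set-ClusterParameter DiskPath $Path
    Start-ClusterResource -Name $Name
}

# Example usage:
Add-ClusterDiskByPath -Name "Disk Y:" -Path "Y:"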

If you want more info on the other PowerShell commands available for Failover Clustering, I’d recommend reviewing this article, which maps cluster.exe commands to their equivalent PowerShell commands.

RecoverPoint Cluster Enabler

One of these days, I do plan on writing about other non-EMC geographically dispersed solutions…I swear ;). I just need to find the time to give these other products a proper test run so I can assess them fairly, though finding the time to “play” is becoming increasingly difficult lately.

Anyway, reading Storagezilla’s blog reminded me that I left a little cliffhanger in one of my previous posts. There’s a new addition to the EMC Cluster Enabler family: RecoverPoint Cluster Enabler (RP/CE), introduced with the release of the RecoverPoint 3.1 software. This product integrates Microsoft Failover Clusters with RecoverPoint continuous remote replication (CRR) technology.

This works in pretty much the same fashion as SRDF/CE and MV/CE. We add a “Cluster Enabler (CeCluRes)” resource to each of the application groups and make the cluster’s physical disk resources in that group depend upon this resource. This prevents a disk from attempting to come online until the CE resource performs its magic under the covers to enable image access on the remote array.
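
CE creates this dependency for you, but just to illustrate the structure, here's roughly what setting it up by hand would look like with the 2008 R2 PowerShell cmdlets (both resource names here are hypothetical):

PS C:\> Add-ClusterResourceDependency -Resource "Disk X:" -Provider "EMC Cluster Enabler"

The dependent disk resource won't attempt to come online until its provider, the CE resource, is online.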

Much like MV/CE, RP/CE supports only non-disk-based quorum models, so you can only use the MNS or MNS+FSW (majority node set, with or without a file share witness) quorum models with RP/CE.

If you want more details about this solution, EMC has published the following whitepaper regarding RP/CE:

http://www.emc.com/collateral/software/white-papers/h5936-recoverpoint-cluster-enabler-wp.pdf

Adding a disk to a Windows 2008 Failover Cluster using cluster.exe commands

This isn’t specific to multi-site clustering, but I’ve certainly had to use it many times when adding devices to my multi-site clusters. Adding disks to a multi-site Windows 2008 cluster is not as easy as it should be. Microsoft added some new “logic” for adding disk resources in Windows 2008: when you attempt to “Add a disk” through the cluster administrator GUI, the cluster does a quick check on the available disks to ensure they are present on all nodes of the cluster before presenting them as available disks in the GUI. This is bad for geo-clusters, where the disks are unlikely to be read/write enabled at all sites, causing the cluster GUI to display an error message:

No disk suitable for cluster disks were found

You may also experience this same behavior when adding a disk resource to a 2008 cluster that you only want available to a single node or a subset of nodes. It can also occur if you deleted a cluster disk resource from your multi-site cluster and attempted to add it back in through the cluster GUI. Because of this behavior, we need to work a little harder to add a disk into a cluster in these situations. To work around the issue, you have a couple of options. The first option is to evict the offending node(s) from the cluster and then add the storage using the cluster administrator GUI. Yes, this might be a bit painful for some, but if your environment can handle evicting/adding nodes without impact, this is probably the easiest way to get these disks into the cluster.

After evicting the remote nodes, the cluster only checks the disks from your local storage system on the local node, and sees that the disks are viable for cluster use. Now when you attempt to add a disk in the cluster GUI, the error message no longer appears and you will be presented with the option to add the disks into the cluster. Once you’ve added the disks, you would then re-join the other nodes to the cluster.
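
If you prefer the command line, the evict itself can also be done with cluster.exe; a quick sketch, with a hypothetical node name:

C:\>cluster node NODE3 /evict

Once the disks are in the cluster, the evicted node can be re-added through the Add Node wizard.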

If evicting a node isn’t an option, you can manually add the disk into the cluster using cluster.exe commands. I wrote a little KB article on how to do this for Windows 2000/2003 (MSKB 555312), but there are some slight differences in Windows 2008: Microsoft renamed just about all of the cluster’s physical disk private properties for Longhorn, so my KB isn’t quite accurate for 2008. To manually add a disk using cluster.exe in Windows 2008, you would do the following.

First, we create the empty resource with no private properties…this is the same first step as documented in 555312:

C:\>cluster res "Disk Z:" /create /type:"Physical Disk" /group:"Available Storage"

This creates a resource of the Physical Disk type in the group named "Available Storage" with no private properties. Next: my favorite secret hidden private property from 2000/2003, Drive, has been renamed to DiskPath in Windows 2008, and it is no longer a hidden property, so it isn’t top secret anymore. If you look at the private properties of a physical disk resource you’ll see:

C:\>cluster res "Disk Z:" /priv

Listing private properties for 'Disk Z:':

T  Resource             Name                           Value
-- -------------------- ------------------------------ -----------------------
D  Disk Z:              DiskIdType                     5000 (0x1388)
D  Disk Z:              DiskSignature                  0 (0x0)
S  Disk Z:              DiskIdGuid
D  Disk Z:              DiskRunChkDsk                  0 (0x0)
B  Disk Z:              DiskUniqueIds                  ... (0 bytes)
B  Disk Z:              DiskVolumeInfo                 ... (0 bytes)
D  Disk Z:              DiskArbInterval                3 (0x3)
S  Disk Z:              DiskPath
D  Disk Z:              DiskReload                     0 (0x0)
D  Disk Z:              MaintenanceMode                0 (0x0)
D  Disk Z:              MaxIoLatency                   1000 (0x3e8)

So now I can use this DiskPath value, and Windows will magically figure out all of the other gory private properties for my disk based on the mount point I specify. Notice in the above output that the DiskSignature, DiskUniqueIds and DiskVolumeInfo fields are empty after creating the "empty" physical disk resource. I’ve mounted this disk as my Z: drive, so here’s my command using the DiskPath parameter:

C:\>cluster res "Disk Z:" /priv DiskPath="Z:"
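
With the DiskPath value set, the disk can be brought online straight from cluster.exe as well:

C:\>cluster res "Disk Z:" /online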

Bringing the disk online fills out the rest of the private property values for the disk. After the disk comes online, the resource’s private properties show:

C:\>cluster res "Disk Z:" /priv

Listing private properties for 'Disk Z:':

T  Resource             Name                           Value
-- -------------------- ------------------------------ -----------------------
D  Disk Z:              DiskIdType                     0 (0x0)
D  Disk Z:              DiskSignature                  4198681706 (0xfa42cc6a)
S  Disk Z:              DiskIdGuid
D  Disk Z:              DiskRunChkDsk                  0 (0x0)
B  Disk Z:              DiskUniqueIds                  10 00 00 00 ... (132 bytes)
B  Disk Z:              DiskVolumeInfo                 01 00 00 00 ... (48 bytes)
D  Disk Z:              DiskArbInterval                3 (0x3)
S  Disk Z:              DiskPath
D  Disk Z:              DiskReload                     0 (0x0)
D  Disk Z:              MaintenanceMode                0 (0x0)
D  Disk Z:              MaxIoLatency                   1000 (0x3e8)

Notice that the DiskSignature, DiskUniqueIds and DiskVolumeInfo are now filled in for this disk. You’ll also notice that the DiskPath value has automatically been cleared. I’m not sure why this occurs, but it seems that once the DiskPath value has been resolved into the other properties, it is cleared: if you check the resource properties before bringing the disk online, you’ll see the DiskPath value set, but after the resource comes online, the DiskPath value is cleared and the signature, ID and volume fields are populated.

I’ve also found that the DiskPath value improves on the previous Drive parameter when it comes to mount point volumes. In 2000/2003, to add a mount point volume to the cluster using the Drive parameter, you would have needed to specify the volume GUID, which was just ugly. It was hard enough for people to find a disk’s signature…no one other than us storage geeks would know how to find a volume GUID, so it was just easier to specify the Signature parameter for mount points.

In 2008, if I’ve got a disk mounted to my W:\Mount folder, instead of using the volume GUID or a signature, I can just use the absolute path with DiskPath. For example:

C:\>cluster res "Disk W:\Mount" /create /type:"Physical Disk" /group:"Available Storage"

So I just created an empty Physical Disk resource named "Disk W:\Mount" in my "Available Storage" group. Now, I add the absolute path value using DiskPath:

C:\>cluster res "Disk W:\Mount" /priv DiskPath="W:\Mount"
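
Then bring it online, the same as before:

C:\>cluster res "Disk W:\Mount" /online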

Once the resource is online, the cluster successfully fills in the rest of the private properties for this volume:

C:\>cluster res "Disk W:\Mount" /priv

Listing private properties for 'Disk W:\Mount':

T  Resource             Name                           Value
-- -------------------- ------------------------------ -----------------------
D  Disk W:\Mount        DiskIdType                     0 (0x0)
D  Disk W:\Mount        DiskSignature                  2460703213 (0x92ab59ed)
S  Disk W:\Mount        DiskIdGuid
D  Disk W:\Mount        DiskRunChkDsk                  0 (0x0)
B  Disk W:\Mount        DiskUniqueIds                  10 00 00 00 ... (72 bytes)
B  Disk W:\Mount        DiskVolumeInfo                 01 00 00 00 ... (48 bytes)
D  Disk W:\Mount        DiskArbInterval                3 (0x3)
S  Disk W:\Mount        DiskPath
D  Disk W:\Mount        DiskReload                     0 (0x0)
D  Disk W:\Mount        MaintenanceMode                0 (0x0)
D  Disk W:\Mount        MaxIoLatency                   1000 (0x3e8)

This is much easier than finding a signature value or volume GUID. If you prefer the old way of using the disk signature, that is still possible in 2008, but the Signature private property has been renamed to DiskSignature. For example, if you wanted to add the W:\Mount drive using its signature value, you would use a command similar to the following:

C:\>cluster res "Disk W:\Mount" /priv DiskSignature=0x92ab59ed

Now, if this disk were a GPT disk instead of an MBR disk, you wouldn’t use the DiskSignature value, since GPT disks do not have or rely on a disk signature. For GPT disks, you would use the DiskIdGuid property instead. For example:

C:\>cluster res "Disk W:\Mount" /priv DiskIdGuid={FD6DB7FC-AC1B-4EC3-B1B2-21D7F008A52E}

Yeah, it’s getting ugly again, so DiskPath is certainly the more attractive option, especially for GPT disks.

Using cluster.exe, we can successfully add disks into the cluster without having to verify that the disk is available on all nodes of the cluster.

New Feature in 2008 R2 – Cluster Shared Volumes

This week, Microsoft is announcing that Windows 2008 R2 will add a new feature to Failover Clusters called “Cluster Shared Volumes (CSV)”. This feature is being introduced so that they can support the Live Migration feature for Hyper-V. You can get more details about CSV and other 2008 R2 features in the following document:

Windows Server 2008 R2 (Beta) Reviewers Guide – http://download.microsoft.com/download/F/2/1/F2146213-4AC0-4C50-B69A-12428FF0B077/Windows_Server_2008_R2_Reviewers_Guide_(BETA).doc

They are giving sessions about this at WinHEC and TechEd EMEA this week. Unfortunately, I won’t be attending these, but it’ll be really interesting to see where this goes in the future. It certainly gives you an idea of where Microsoft might be heading with future releases of Windows. It seems to me that they’re heading towards a “shared everything” model, perhaps stacking NLB and even Compute Cluster all in one massive cluster solution.

In talking with the cluster team, they don’t currently plan to support this feature in a multi-site cluster environment…it’s not yet clear why. Perhaps it has something to do with possible network latency between the nodes. I’ll post more about this as I get the details.

EMC Cluster Enabler Updates

It’s been a really busy month for us. This past month, EMC released new versions of SRDF/CE and MV/CE and internally announced a new addition to the Cluster Enabler family. I don’t think the third item has been publicly announced yet, so I’ll have to refrain from posting any details on it at this time.

SRDF/CE and MV/CE version 3.1 are now available for download on EMC’s Powerlink website. Here are the new features in these releases:

SRDF/CE v3.1 

  • Hyper-V “host clustering” support. Not really a new feature…this is more of a qualification effort to ensure that it works.
  • VMware support. SRDF/CE 3.1 supports ESX 3.0.2 and 3.5 update 1 or higher.
  • Re-namable sites. In 3.1, you can now customize the site name values in the Cluster Enabler GUI.
  • Thin provisioning support. 3.1 supports using thin R1/R2 devices.

MV/CE 3.1 

  • CLARiiON AX4-5 array support. Support added for the CLARiiON AX4-5 arrays running FLARE 2.23.
  • MirrorView/Asynchronous support. Keep in mind that MV/A currently supports a maximum of 50 consistency groups.
  • Hyper-V “host clustering” support. Again, not really a new feature, but now fully tested/qualified.
  • Re-namable sites. It’s not documented, but this is also a new feature in MV/CE 3.1.

More details about these new features can be found in the product guides and release notes.

SQL Strikes Again

I learned today that SQL 2008 will actually not install by default in many multi-site cluster solutions. Why? Well, during installation, SQL 2008 runs through a Configuration Checker, where one of the tests checks the environment to ensure that the cluster is configured properly. In a geographically dispersed cluster (specifically, in an SRDF/CE cluster), this check fails with the error:

The cluster on this computer does not have a shared disk available. To continue, at least one shared disk must be available.

Doing a little research on this might lead you to MSKB article 955780, which states the following:

When you install failover clustering in SQL Server 2008, the node on which failover clustering is installed must own the resource group and the shared disks in that group. If the disk resource is not owned by the local node, if the disk resource is a cluster quorum disk, or if the disk resource has dependencies, the failover clustering installation will fail.

What? If the disk has dependencies it will fail??? Why on earth would you have this limitation, Microsoft? This is certainly a valid cluster configuration, so why would SQL care about such granular details of the cluster configuration? Besides affecting just about all multi-site clustered solutions, this also affects any installation that wants to use mount points.

Microsoft’s solution for mount points is to just not set up your cluster properly and let SQL handle setting up the disk dependencies for you. The same workaround applies if you want to set up SQL 2008 in a multi-site cluster, but it is mind-boggling that they would force you to do something like this.

I’ve submitted feedback to Microsoft regarding this issue. Feel free to rate/comment about this issue here:

http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=366673

It’s not surprising that someone has already submitted similar feedback regarding mount points, where they were given the lame workaround of removing the dependencies:

http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=331910

Multiple Subnets with Windows 2008 Clusters

I’ve been playing around with Microsoft Windows 2008 clusters for a while now, trying out some of the new features and how they will affect multi-site clusters. One of the best new features (in my opinion) in 2008 clusters is the ability to have cluster nodes on different subnets. Microsoft enables this by introducing a new OR dependency option for resources. In 2008 clusters, you have two options when making a resource depend on more than one resource:

  • AND. This indicates that both the resource in this line and one or more previously listed resources must be online before the dependent resource is brought online.
  • OR. This indicates that either the resource listed in this line or another previously listed resource must be online before the dependent resource is brought online.

So with the OR dependency, you can set up your cluster Network Name resources to depend on IP addresses from multiple subnets. Let’s take a look at this in action. Here’s my configuration:

Network Configuration

I’m using a subnet mask of 255.255.255.0 for all subnets. When you create your cluster or an application group, you are prompted to provide an IP address for each subnet. For example:

IP Setup

Here, you would enter a valid IP address for each subnet. Once this is complete, cluster automatically sets up these dependencies properly for you. Looking in the cluster GUI, you’ll see that only one of the IP resources is online at a time, as the other IP address is not valid for the node that currently owns the group:

IP Address Offline

Looking at the dependencies, cluster automatically sets up this OR relationship for you:

OR Dependencies
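
The GUI builds this expression for you, but if you ever need to set it by hand, the 2008 R2 PowerShell module can express the same OR relationship directly. A sketch, with hypothetical resource names and addresses:

PS C:\> Set-ClusterResourceDependency -Resource "Cluster Name" -Dependency "[IP Address 10.0.0.50] or [IP Address 10.1.1.50]"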

You might also notice that the other cluster IP address actually FAILS while coming online. This will generate an error in the system event log every time the group is moved.

Cluster Events

This is mildly annoying, but well worth it.

One issue that you will likely run into with multiple subnets is DNS replication. Upon failover to the other node, the DNS record for the network name is updated to point to the new IP address. While this update propagates, clients may not be able to connect to the cluster workload even though it is online. I’ve heard reports that this replication has taken up to an hour at some sites (yuck), so the cluster is effectively offline to clients during this time.
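
One knob worth knowing about here: the Network Name resource has a HostRecordTTL private property that controls the TTL on the DNS record the cluster registers, so lowering it helps clients discard the stale address sooner. It does nothing for the replication between DNS servers themselves. A quick sketch using cluster.exe, with a hypothetical resource name (the resource needs to be cycled offline/online for the change to take effect):

C:\>cluster res "SQL Network Name (SQLCLUS)" /priv HostRecordTTL=300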

If you’re setting this up in your environment, please leave me a comment and let me know how long this replication takes in your environment.

SQL 2008 Team – Please Add Multi-Subnet Support!

I was informed that feedback for Microsoft product groups should be submitted through the http://connect.microsoft.com page. So I’ve submitted a request to add support for clusters running on multiple subnets here:


https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=353894


Please feel free to join in and rate/add your comments to my submission. Let the SQL team know that this is something that they need to address.