Adding a disk to a Windows 2008 Failover Cluster using cluster.exe commands

This isn't specific to multi-site clustering, but I've certainly had to use this many times when adding devices to my multi-site clusters. Adding disks to a multi-site Windows 2008 cluster is not as easy as it should be. In Windows 2008, Microsoft added some new "logic" around adding disk resources to a cluster: when you attempt to "Add a disk" through the cluster administrator GUI, the cluster does a quick check on the available disks to ensure that they are present on all nodes of the cluster before presenting them as available disks in the GUI. This can be bad for geo-clusters, as the disks are unlikely to be read/write enabled on all sites, causing the cluster GUI to display an error message:


No disks suitable for cluster disks were found


You may also experience this same behavior when adding a disk resource to a 2008 cluster that you only want available to a single node or a subset of nodes. This issue can also occur if you deleted a cluster disk resource from your multi-site cluster and attempted to add it back in through the cluster GUI. Because of this behavior, we need to work a little harder to add a disk into the cluster in these situations. To work around this issue, you have a couple of options. The first option would be to evict the offending node(s) from the cluster and then add the storage using the cluster administrator GUI. Yes, this might be a bit painful for some, but if your environment can handle evicting/adding nodes without impact, this is probably the easiest way to get these disks into the cluster.


After evicting the remote nodes, the cluster only checks the disks on the local node against your local storage system and sees that the disks are viable for cluster use. Now, using the cluster GUI, when you attempt to add a disk, the error message no longer displays and you will be presented with the option to add the disks into the cluster. Once you've added the disks into the cluster, you would then re-join the other nodes back into the cluster.
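For those comfortable with the command line, the evict step can also be done with cluster.exe; NODE2 below is a hypothetical node name, so substitute your own (re-joining afterwards is easiest through the Add Node wizard in the GUI):

C:\>cluster node NODE2 /evict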


If evicting a node isn't an option, you can manually add the disk into the cluster using cluster.exe commands. I wrote a little article about how to do this for Windows 2000/2003 in MSKB 555312, but there are some slight differences in Windows 2008. Microsoft renamed just about all of the cluster's physical disk private properties for Longhorn, so my KB isn't quite accurate for 2008. To manually add a disk using cluster.exe in Windows 2008, you would do the following:


First, we create the empty resource with no private properties. This is the same first step as documented in 555312:

C:\>cluster res "Disk Z:" /create /type:"Physical Disk" /group:"Available Storage"


This creates a resource of the Physical Disk type in the group named "Available Storage" with no private properties. Next: my favorite secret hidden private property from 2000/2003, Drive, has been renamed in Windows 2008. It is now called DiskPath, and it is no longer a hidden property, so it isn't top secret anymore. If you look at the private properties of a physical disk resource you'll see:

C:\>cluster res "Disk Z:" /priv

Listing private properties for 'Disk Z:':

T  Resource             Name                           Value
-- -------------------- ------------------------------ -----------------------
D  Disk Z:              DiskIdType                     5000 (0x1388)
D  Disk Z:              DiskSignature                  0 (0x0)
S  Disk Z:              DiskIdGuid
D  Disk Z:              DiskRunChkDsk                  0 (0x0)
B  Disk Z:              DiskUniqueIds                  ... (0 bytes)
B  Disk Z:              DiskVolumeInfo                 ... (0 bytes)
D  Disk Z:              DiskArbInterval                3 (0x3)
S  Disk Z:              DiskPath
D  Disk Z:              DiskReload                     0 (0x0)
D  Disk Z:              MaintenanceMode                0 (0x0)
D  Disk Z:              MaxIoLatency                   1000 (0x3e8)


 


So now I can use this DiskPath value, and Windows will magically figure out all of the other gory private properties for my disk from the mount point I specify in the DiskPath parameter. Notice in the output above that the DiskSignature, DiskUniqueIds and DiskVolumeInfo fields are empty after creating the "empty" physical disk resource. I've mounted this disk as my Z: drive, so here's my command using the DiskPath parameter:




C:\>cluster res "Disk Z:" /priv DiskPath="Z:"
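Once DiskPath is set, the resource can be brought online either from the cluster GUI or with cluster.exe's /online switch (the short form /on also works):

C:\>cluster res "Disk Z:" /online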


 


At this point, you would bring the disk online in the cluster, and it fills out the rest of the private property values for the disk. Once the disk is online, the resource's private properties show:


 


C:\>cluster res "Disk Z:" /priv

Listing private properties for 'Disk Z:':

T  Resource             Name                           Value
-- -------------------- ------------------------------ -----------------------
D  Disk Z:              DiskIdType                     0 (0x0)
D  Disk Z:              DiskSignature                  4198681706 (0xfa42cc6a)
S  Disk Z:              DiskIdGuid
D  Disk Z:              DiskRunChkDsk                  0 (0x0)
B  Disk Z:              DiskUniqueIds                  10 00 00 00 ... (132 bytes)
B  Disk Z:              DiskVolumeInfo                 01 00 00 00 ... (48 bytes)
D  Disk Z:              DiskArbInterval                3 (0x3)
S  Disk Z:              DiskPath
D  Disk Z:              DiskReload                     0 (0x0)
D  Disk Z:              MaintenanceMode                0 (0x0)
D  Disk Z:              MaxIoLatency                   1000 (0x3e8)


 


Notice that the DiskSignature, DiskUniqueIds and DiskVolumeInfo are now filled in for this disk. You'll also notice that the DiskPath value has automatically been cleared. I'm not sure why this occurs, but it seems that once DiskPath has resolved the other properties, the cluster clears it. If you check the resource's properties before bringing the disk online, you'll see the DiskPath value set; after bringing the resource online, DiskPath is cleared and the signature, ID and volume fields are populated.


I’ve also found that the DiskPath value has been improved upon over the previous Drive parameter regarding mount point volumes. In 2000/2003 when adding mount point volumes, you would’ve needed to specify the volume GUID in order to add a disk to the cluster using the Drive parameter, which was just ugly. It was hard enough for people to find a disk’s signature…no one other than us storage geeks would know how to find a volume GUID. So it was just easier to specify the Signature parameter for mount points.


In 2008, if I’ve got a disk mounted to my W:\Mount folder, instead of using the volume GUID or a signature, I can just use the absolute path using DiskPath. For example:


 


C:\>cluster res "Disk W:\Mount" /create /type:"Physical Disk" /group:"Available Storage"

So I just created an empty Physical Disk resource named "Disk W:\Mount" in my "Available Storage" group. Now, I add the absolute path value using DiskPath:


 


C:\>cluster res "Disk W:\Mount" /priv DiskPath="W:\Mount"




Now, when I bring this resource online, the cluster will successfully fill in the rest of the private properties for this volume:



C:\>cluster res "Disk W:\Mount" /priv

Listing private properties for 'Disk W:\Mount':

T  Resource             Name                           Value
-- -------------------- ------------------------------ -----------------------
D  Disk W:\Mount        DiskIdType                     0 (0x0)
D  Disk W:\Mount        DiskSignature                  2460703213 (0x92ab59ed)
S  Disk W:\Mount        DiskIdGuid
D  Disk W:\Mount        DiskRunChkDsk                  0 (0x0)
B  Disk W:\Mount        DiskUniqueIds                  10 00 00 00 ... (72 bytes)
B  Disk W:\Mount        DiskVolumeInfo                 01 00 00 00 ... (48 bytes)
D  Disk W:\Mount        DiskArbInterval                3 (0x3)
S  Disk W:\Mount        DiskPath
D  Disk W:\Mount        DiskReload                     0 (0x0)
D  Disk W:\Mount        MaintenanceMode                0 (0x0)
D  Disk W:\Mount        MaxIoLatency                   1000 (0x3e8)




This is much easier than finding a signature value or volume GUID. If you prefer the old way of using the disk signature, this is still possible with 2008, but the Signature private property has been renamed to DiskSignature. For example, if you wanted to add the W:\Mount drive using its signature value, you would use a command similar to the following:

 

C:\>cluster res "Disk W:\Mount" /priv DiskSignature=0x92ab59ed


Now, if this disk were a GPT disk instead of an MBR disk, you wouldn't use the DiskSignature value, since GPT disks do not need or rely on a disk signature. For GPT disks, you would use the DiskIdGuid property instead. For example:


C:\>cluster res "Disk W:\Mount" /priv DiskIdGuid={FD6DB7FC-AC1B-4EC3-B1B2-21D7F008A52E}


Yeah, it's getting ugly again, so DiskPath is certainly the more attractive option, especially for GPT disks.


Using cluster.exe, we can successfully add disks into the cluster without having to verify that the disk is available on all nodes of the cluster.
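To recap, the whole manual sequence for a drive-letter disk boils down to three commands, pulled together from the examples above (/online brings the resource online; its short form /on works too):

C:\>cluster res "Disk Z:" /create /type:"Physical Disk" /group:"Available Storage"
C:\>cluster res "Disk Z:" /priv DiskPath="Z:"
C:\>cluster res "Disk Z:" /online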


 

15 thoughts on “Adding a disk to a Windows 2008 Failover Cluster using cluster.exe commands”

  1. Just bring it online in the cluster GUI. Alternatively, you could also bring it online using cluster.exe command:

    cluster res "Disk X:" /on

  2. Hello,

    When I try to bring my disks online in the cluster GUI or by using the cluster.exe command:
    cluster res "Disk X:" /on

    I get an error and disks are still offline.

    Any help please ?

    Thanks

  3. Hi John,

    I followed the steps above but when I run the command to modify the DiskPath it does not populate the DiskVolumeInfo or DiskUniqueIds properties and therefore will not come online.

    Here is the output:
    C:\Windows\system32>cluster res "DiskS" /priv

    Listing private properties for 'DiskS':

    T  Resource             Name                           Value
    -- -------------------- ------------------------------ -----------------------
    D  DiskS                DiskIdType                     0 (0x0)
    D  DiskS                DiskSignature                  547616101 (0x20a3f565)
    S  DiskS                DiskIdGuid
    D  DiskS                DiskRunChkDsk                  0 (0x0)
    B  DiskS                DiskUniqueIds                  ... (0 bytes)
    B  DiskS                DiskVolumeInfo                 ... (0 bytes)
    D  DiskS                DiskArbInterval                3 (0x3)
    S  DiskS                DiskPath                       S:
    D  DiskS                DiskReload                     0 (0x0)
    D  DiskS                MaintenanceMode                0 (0x0)
    D  DiskS                MaxIoLatency                   1000 (0x3e8)

    Any ideas?

    Regards,
    Dave

  4. The DiskVolumeInfo and DiskUniqueIds values would only be populated once the resource comes online. If the disk resource doesn’t go online, these fields wouldn’t get populated.

    Your disk’s private properties show a DiskSignature value…was this something you added? If so, the cluster will use the signature value first before looking at the DiskPath setting. If you enter the wrong signature, you’d need to adjust this.

    I’d recommend deleting this resource and try adding it again, and only specify the DiskPath value to see if this works.

    If you’re still having issues, I’d recommend bringing this conversation to the Cluster forums to continue this discussion:

    http://social.technet.microsoft.com/forums/en-US/winserverClustering/threads/
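    For reference, deleting the resource from the command line (using the "DiskS" name from your output) would look like:

    cluster res "DiskS" /delete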

  5. But how do you create the mount point for the new disk on a mount-point drive that is currently in the cluster?
    Example:
    4-node geo cluster.
    The new disk is only available on the 2 local nodes.
    The mount point drive is O:\
    I see no way to create the mount point on O: without taking O: out of the cluster.

  6. Bart,

    Microsoft made this as difficult as possible when using mount points. There are two ways around this. One, you can take the root disk out of the cluster as you suggest as this will certainly work.

    The other way is to add the mount point disk as a drive letter into the cluster and then add the disk to your group containing Disk O:. Once the disk is in the same cluster group, you will then be able to change the disk to a mount point on Disk O:. It's silly, but it does work.

    If anyone comes up with a better way, please let me know.

  7. Hi,

    Can anyone let me know how to exclude some of the shared disks other than the quorum? During the validation part, cluster testing takes all available shared disks online/offline.

  8. John, very good article. It helped me out. Just wanted to add that once you have added the disk (Node 1), when you fail over to Node 2, the initial failover fails to bring the newly attached disk online. This is due to the behind-the-scenes initialisation of drive letters. Manually changing the drive letter of the newly attached disk on Node 2 to the same drive letter and bringing it online in Cluster Manager allows the failover to occur.

  9. Hi all,

    we have a four-node geocluster.
    The "Available Storage" group is in error.
    All nodes were rebooted recently.
    What can I do?

    best regards

  10. Before you do any of this check to make sure the disk you are trying to add has taken over the pagefile.

    I was able to add my disk then noticed a large chunk of missing space. This was most likely the culprit.
