Cluster Enabler 3.0 Released

EMC recently released the latest version of SRDF/CE for MSCS. This new release has the following changes: 




  • New Name – The overall product line has been renamed to “Cluster Enabler” and the first product released in this line is SRDF/CE for MSCS. In the near future, MV/CE will be added to this family of products.


  • Redesigned GUI – The SRDF/CE GUI has been completely rewritten for this release. The GUI in 3.0 has been simplified and the wizards have been streamlined, making it easier to configure. The SRDF/CE GUI is more tightly integrated with Cluster Administrator, and functions like changing the quorum model in SRDF/CE will also make those changes in MSCS (previous versions forced you to make this change manually).
    CE 3.0 GUI


  • Multiple CE Cluster Management – The CE GUI lets you manage remote clusters and multiple clusters, similar to Cluster Administrator.


  • Windows 2008 support added – Version 3.0 adds support for Windows 2008 clusters.



    • All quorum models supported


    • Multiple subnets supported


  • Windows 2000 support dropped – Good riddance


  • Support for 5773 code added – Minimum microcode for this release is 5x70.


  • Support for multiple Symms per cluster – You can now have multiple Symmetrix pairs in a single SRDF/CE cluster. Concurrent SRDF is now tolerated per the product guide.


  • Site mode changes – The default behavior is now called “Restrict Group Movement,” which is essentially a combination of the old “No New Onlines” and “Local Override” settings from previous releases. Groups are allowed to come online where the disk is RW while the RDF link is down, but if the disk is WD, the CE resource will fail to come online (see the sketch after this list). The other option, “Automatic Failover,” is essentially the same as the old “SRDF Override” setting. The “Failstop” option is no longer listed, but it is still configurable via CLI. The “Forced Failover” and “Local Override” values have been removed.


  • Site Mode now set at the GROUP level – Previously, this setting was a cluster wide setting. Now, this value can be adjusted on the individual group level. This gives you greater flexibility on which groups you might consider putting at risk by enabling automatic failover during a site outage.


  • SRDF/CE resource/registry changes – Most settings for SRDF/CE are no longer stored in the SRDF/CE\Config hive in the registry. Instead, these settings are now stored as private properties of the SRDF/CE resources in the cluster (a quick way to inspect them is sketched after this list).


  • Installation changes – CE installation now requires that MSCS be installed before you attempt to configure the cluster for CE…this is similar to the old “Convert MSCS to SRDF/CE” wizard. MSCS must be installed on at least one of the cluster nodes prior to configuring CE.
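
To make the site-mode behavior above a bit more concrete, here is a minimal Python sketch of the online decision described in the “Site mode changes” bullet. The function and state names are my own illustration, not anything from the CE product or its CLI; since site mode is now set per group, the mode is passed in per group.

# Illustrative sketch only -- not SRDF/CE code. It models the behavior
# described above: with the RDF link down, "Restrict Group Movement" lets a
# group come online where its disk is read-write (RW) but fails the CE
# resource where the disk is write-disabled (WD), while "Automatic Failover"
# (like the old "SRDF Override") allows the group online anyway.

RESTRICT_GROUP_MOVEMENT = "restrict_group_movement"   # new default
AUTOMATIC_FAILOVER = "automatic_failover"             # like the old "SRDF Override"

def can_bring_group_online(site_mode, disk_state, rdf_link_up):
    """Return True if the CE resource for a group should come online."""
    if rdf_link_up:
        # With the link healthy, normal failover rules apply.
        return True
    if site_mode == AUTOMATIC_FAILOVER:
        # Allow the group online even though the remote mirror is unreachable.
        return True
    # Default "Restrict Group Movement": only the RW side may come online.
    return disk_state == "RW"

if __name__ == "__main__":
    # Link down, disk write-disabled: the CE resource fails to come online.
    print(can_bring_group_online(RESTRICT_GROUP_MOVEMENT, "WD", rdf_link_up=False))  # False
    # Link down, disk read-write: the group is allowed online locally.
    print(can_bring_group_online(RESTRICT_GROUP_MOVEMENT, "RW", rdf_link_up=False))  # True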
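
And for anyone with scripts that used to read CE settings out of the registry, here is a rough sketch of how you might check both locations after upgrading. The registry path and resource name below are placeholders of my own (the post only mentions an SRDF/CE\Config hive, not its full path); “cluster res <name> /priv” is the stock cluster.exe way to dump a resource’s private properties.

# Rough sketch for checking where CE settings live after the 3.0 change.
# The registry path and resource name are assumptions/placeholders, not
# documented values.
import subprocess
import winreg

OLD_CONFIG_KEY = r"SOFTWARE\EMC\SRDF/CE\Config"   # assumed pre-3.0 hive location
CE_RESOURCE_NAME = "SRDF/CE Resource for Group1"  # placeholder resource name

# Pre-3.0: most settings lived under an SRDF/CE\Config hive in the registry.
try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, OLD_CONFIG_KEY):
        print("Old-style registry config hive is still present")
except FileNotFoundError:
    print("No registry config hive -- settings now live on the cluster resources")

# 3.0: settings are private properties of the SRDF/CE cluster resources, so
# the standard cluster.exe tool can display them.
subprocess.run(["cluster", "res", CE_RESOURCE_NAME, "/priv"], check=False)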

For more information, you can get a copy of the software, product guide and release notes on EMC’s Powerlink website.

6 thoughts on “Cluster Enabler 3.0 Released”

  1. Wow!

    I have just spent a large amount of time searching the Microsoft site and forums trying to find answers to Windows Server 2008 cluster questions. Specifically, doing fail-over between sites that have separate EMC SANs.

    I saw your answer to someone else’s post and quite out-of-character I decided to click on your footer link to this blog.

    As an existing EMC customer, with no prior knowledge of this product, I’m blown away. It seems to be the exact thing I’m after. I want cluster nodes, for file services, connected to separate CX3-80s to have HA when the SAN or whole site “goes away”.

    thanks!

  2. I’m trying to find the answer on PowerLink, but just quickly, does CE 3.0 work with Clariion (CX3-80) or just Symmetrix EMC hardware?

  3. Sorry to fill up your comments with 3 posts in a row (feel free to roll them into 1) ..

    Disappointed to report I have now found your other posts and seen that I can’t use this wonderful product with Clariion hardware. Add my company to the list that would like a MirrorView equivalent.

    Now I’m back to looking at a host-based mirroring solution for the cluster, similar to the comments in this post: http://msmvps.com/blogs/jtoner/archive/2007/10/07/quorum-arbitration-in-a-geographically-dispersed-cluster.aspx

  4. Kevin,

    MirrorView/CE is coming in the near future. It’s currently in the final stages of beta and should be released in the next month or two.

    -John

  5. >> MirrorView/CE is coming in the next month or two.

    Great news! Good timing for me.

    I’m going to subscribe to the blog so that I can get a notification about the release, assuming you create a blog entry for this momentous occasion.

    Some questions I had, while thinking about how to write the CLI commands for MirrorView myself, relate to the suggestion in another comment about making the cluster resource dependent on a MirrorView CLI script you create, such that the commands to break the mirror and promote the secondary to primary run before the second cluster node tries to mount the drive.

    1 – Will MirrorView/CE handle fail-over of a cluster without changing the mirroring? In other words, if there is nothing wrong with the SANs and you just want to work on the host, can the failover work such that the mirror is not broken? {should make for a much faster fail-over compared to the whole “break-the-mirror-during-every-fail-over” process}

    2 – Will MirrorView/CE [assuming it breaks the mirror to present the secondary to the second host] recreate the mirror the other way so that the cluster can be failed back? Similar to question 1, if the cluster is just being failed so the primary host can be worked on, you don’t want the LUN mirror to get out of synch but you still want the host and SAN to be in the same site. What about if the SAN isn’t available to recreate that mirror? No more Cluster until the mirror is recreated and synched??

    Thanks
    Kevin

  6. Kevin,
    To answer your questions above. Q1… as long as the MV link is up and stays up, failover and failback are very simple. When you fail over, MV basically does a swap at the array level, and changes made on the DR site will be sent back to the primary site so you can fail back. This assumes the MV link is always available. If your link goes down and you want to bring the cluster up on the DR site, there are some extra steps required to “force” the DR node up. In this scenario your failback process requires more intervention. (All documented in the help files.)

    Q2… you could just stop the cluster service on the DR host so it won’t be an option for your primary server to fail over to. Again, as long as your links are up, MV will continue replicating at the array level.
