Fail-Over Clustering in Longhorn


This came across my desk the other day. It looks to be pretty good stuff. More information can be had at http://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?EventID=1032271683&Culture=en-US


What you can expect:


Improved Cluster Setup:
Setup is streamlined and simplified; you can create an entire cluster in one seamless step.
Thorough cluster testing ensures your cluster will function properly. All the power of a full cluster test suite is in your hands, to guarantee the actual cluster you are setting up will provide rock-solid stability.
Fully scriptable for automated deployments.


A Cluster Migration Tool will assist in migrating a cluster configuration from one cluster to another.
Rolling upgrade from Windows 2003 to a Longhorn cluster will follow a “Roll Forward” model. Under the hood, migration from Windows 2003 to a Longhorn cluster is not as simple as it was from Windows 2000 to Windows 2003.


An all-new Cluster Administrator tool! Designed to be task-based and easy to use, with fewer dials-n-knobs to worry about.


Expanded tool functionality for better manageability: the Cluster Administrator graphical tool; the command line (cluster.exe); full scriptability via WMI, with enhanced WMI functionality over Windows 2003; and migration from legacy cluster debug logging (cluster.log) to Event Tracing for Windows (ETW).


The name “Virtual Server” is being replaced by “Virtual Instance” to avoid confusion with an actual Microsoft product, Virtual Server 2005.

Virtual Instance Share Scoping:
Users see only the shares available through that Virtual Instance, which removes confusion when browsing clusters.
Ability to modify resource dependencies while resources are online, which facilitates scaling up disks while applications stay online.
Cluster VSS Writer for backup and restore.


The Network Name resource stays up if either IP Address resource A or B is up. Today, both resources A and B have to be online for the Network Name to be available to users. This allows redundant resources and scopes the impact to dependent services and applications.
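The change above is essentially a move from AND-dependencies to OR-dependencies. Here's a minimal sketch of that idea (my own illustration, not the actual cluster service logic):

```python
# Hypothetical sketch of OR-dependency evaluation: in Longhorn, a Network
# Name resource can stay online as long as ANY of its IP Address
# dependencies is online, instead of requiring ALL of them (Windows 2003).

def network_name_online(ip_states, mode="OR"):
    """ip_states: list of booleans, one per IP Address resource."""
    if mode == "AND":          # Windows 2003 behavior: all must be up
        return all(ip_states)
    return any(ip_states)      # Longhorn behavior: any one suffices

# Example: IP resource A is down, B is up.
states = [False, True]
print(network_name_online(states, mode="AND"))  # Windows 2003: False (name offline)
print(network_name_online(states, mode="OR"))   # Longhorn: True (name stays up)
```

The practical upshot: you can take one IP Address resource offline for maintenance without dropping the Network Name that clients connect to.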


Support for GUID Partition Table (GPT) disks, allowing partitions larger than 2 TB. GPT provides improved redundancy and recoverability.
Support for all platforms:  x86, x64, and Itanium
Support for Hardware Snapshot restores of Clustered Disks
Improved disk Maintenance Mode will allow giving other applications temporary exclusive access to online clustered disks.
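Where does the 2 TB limit in the list above come from? MBR partition tables store sizes as a 32-bit sector count, and with traditional 512-byte sectors that caps out at 2 TiB, while GPT uses 64-bit sector counts. The arithmetic:

```python
# Why MBR tops out at 2 TiB: partition sizes are stored as a 32-bit
# sector count, and disks traditionally use 512-byte sectors.
SECTOR = 512

mbr_max_bytes = (2**32) * SECTOR     # 32-bit LBA field
print(mbr_max_bytes)                 # 2199023255552 bytes
print(mbr_max_bytes / 1024**4)       # 2.0 TiB

# GPT stores 64-bit sector counts, so the ceiling is astronomically higher.
gpt_max_bytes = (2**64) * SECTOR
print(gpt_max_bytes // 1024**4)      # 8589934592 TiB
```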


Quorum enhancements: a new best-of-both-worlds quorum model, a hybrid of Majority Node Set (MNS) logic and the Shared Disk Quorum model. This model will replace both of the existing models.
It scales from small to large node clusters, with or without shared disks, including geographically dispersed clusters.
It can achieve the current “Classic” quorum or MNS quorum functionality. The shared quorum disk is optional, so there is NO single point of failure: the cluster can survive loss of the quorum disk.
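A rough sketch of the hybrid quorum idea (assumptions mine, not the shipping algorithm): every node gets a vote, an optional witness disk gets one more, and the cluster stays up while a strict majority of votes is reachable, so no single vote, including the quorum disk itself, is a single point of failure.

```python
# Illustrative majority-vote quorum: nodes each contribute one vote,
# plus an optional vote from a shared witness (quorum) disk.

def has_quorum(nodes_up, total_nodes, witness_present=False, witness_up=False):
    total_votes = total_nodes + (1 if witness_present else 0)
    votes_up = nodes_up + (1 if witness_present and witness_up else 0)
    return votes_up > total_votes // 2   # strict majority required

# Pure MNS-style: 3 nodes, no witness disk. Losing 1 node is fine.
print(has_quorum(nodes_up=2, total_nodes=3))                     # True
# 2-node cluster + witness disk: one node + the disk form a majority.
print(has_quorum(1, 2, witness_present=True, witness_up=True))   # True
# Same cluster survives losing the witness disk if both nodes are up.
print(has_quorum(2, 2, witness_present=True, witness_up=False))  # True
```

Note how the last two cases capture the “best of both worlds” claim: the witness disk breaks ties like a classic quorum disk, but losing it doesn't take the cluster down.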


No More Single-Subnet Limitation!!!!
Allow cluster nodes to communicate across network routers. No more having to connect nodes with VLANs!
Configurable heartbeat timeouts: increase them to extend geographically dispersed clusters over greater distances, or decrease them to detect failures faster and take recovery actions for quicker failover. Using the new quorum model with 3 sites, the cluster can make “wiser decisions” about automatic failover.
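The timeout trade-off above can be sketched in a few lines. This is illustrative only; the parameter names are my assumptions, not the real cluster settings:

```python
# Sketch of a configurable heartbeat failure detector: a node is declared
# failed once `threshold` heartbeat intervals pass without a message.

def is_node_failed(last_heartbeat, now, interval=1.0, threshold=5):
    """A larger interval*threshold budget tolerates the latency of
    geographically dispersed links; a smaller one detects failures
    and triggers failover sooner."""
    return (now - last_heartbeat) > interval * threshold

# Local cluster, aggressive detection: a 2.5 s silence exceeds the 2 s budget.
print(is_node_failed(last_heartbeat=10.0, now=12.5, interval=0.5, threshold=4))   # True
# Stretched cluster, relaxed detection: the same 2.5 s gap is tolerated.
print(is_node_failed(last_heartbeat=10.0, now=12.5, interval=1.0, threshold=10))  # False
```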


Integrated with the new Longhorn TCP/IP stack. Full IPv6 support: client access via IPv6, tunneled IPv6 address resources for IPv4 compatibility, and inter-node communication over IPv6.
No more legacy dependencies on NetBIOS, Ready for NetBIOS-less environments!!
Simplifying the transport of SMB traffic.
Removes WINS and NetBIOS name resolution broadcasts, standardizing name resolution on DNS!!


Pure Kerberos-based authentication, no more legacy NTLM!!
Secure mutual authentication
Enhanced encryption, Better performance
Moved from datagram (UDP) protocols to secure TCP session oriented protocols
Auditing of cluster access: “Who failed over this group…?”
Logged to the Security event log; can be bubbled up through security tools or remote event management such as MOM.
