This is becoming a pretty common question in my Exchange classes. Which should I use? Why one over the other?
My current recommendation is to use CCR over SCC whenever possible. Why? I am glad you asked that question.
High availability (see my definition here) is all about risk mitigation: identify the risks to your important or critical applications, then eliminate them, or at least mitigate them, wherever it is economically feasible.
One of the major risks I see with Exchange Server 2007, as with previous versions of Exchange, is losing the production database to a disk failure or to database corruption. After a disk failure I would normally restore the database, but that takes time, and very few people want to run a dial tone database while they recover. Two Exchange Server 2007 technologies provide protection against a lost database drive or a corrupted database. The first is Local Continuous Replication (LCR). LCR, however, is a single-server technology and does not mitigate the loss of an entire server the way a cluster can. The second is Cluster Continuous Replication (CCR). CCR provides the one extra piece that a Single Copy Cluster (SCC) does not: it protects against loss of the database disk or corruption of the database.
Since CCR does not do a block-by-block copy the way a SAN replication utility might, the likelihood of corruption passing from the production database to the passive copy is extremely low. Remember, the passive node receives closed transaction log files and replays them into its own copy of the database, much as the active node applies them to the production database. Physical corruption is not copied in such an environment.
Of course, we can’t forget that by using CCR, we also can eliminate the need for a SAN, which is a huge cost savings.
So, add the increased risk mitigation to the elimination of the SAN requirement for high availability, and you can see that CCR is a vast improvement over SCC.
A fairly common scenario for a cluster administrator is to move a cluster from one SAN to another as SAN equipment is replaced with newer/faster SANs or the old SAN’s lease is up and a new one is being brought in.
The easiest way that I have found to do this is to use these steps (this is from memory, let me know if I missed one or two):
Super High Level Steps:
- Put the new array in the same fabric as the existing array
- Create new LUNs on the new array and make sure they are visible to the nodes
- Map the new LUNs to the old drive letters
- Copy data from the old drive to the new drive
- Move quorum and MSDTC
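Several of the disk steps in the detailed list below can be driven from the command line instead of the GUI. Here is a minimal sketch using diskpart and format.exe; the disk number, drive letter, and volume label are examples only, so substitute your own values from `list disk`:

```shell
rem Run on the active node once LUN masking is in place.
rem Rescan the bus so Windows sees the newly presented LUNs.
diskpart
DISKPART> rescan
DISKPART> list disk
rem Example only: disk 4 happens to be the new LUN here
DISKPART> select disk 4
DISKPART> create partition primary
rem Temporary letter; the final letters are assigned later in the process
DISKPART> assign letter=M
DISKPART> exit

rem Quick-format the new partition as NTFS from a command prompt
format M: /FS:NTFS /Q /V:ExchData
```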
Slightly More Detailed Steps:
- Carve the new LUNs on the new array
- Add the new array and its LUNs to the same switch as the existing array
- Configure the LUN masking on the switch to expose the new LUNs to NodeA and NodeB
- Use the disk management tools in Windows to rescan the drives
- Use the active node to partition and format the disks
- Use Cluster Administrator to create the new physical disk resources and put them into their proper cluster groups
- Move the quorum to a temporary location using the GUI:
  - In Cluster Administrator, right-click the cluster name
  - Select Properties
  - Select the Quorum tab
  - Use the drop-down box to select a temporary location for the quorum
- Delete the existing MSDTC folder (if any)
- Stop the MSDTC resource
- Copy the MSDTC folder from Q: to the final quorum disk target location
- Stop the Q: resource (remember, the quorum isn’t there anymore)
- Delete the MSDTC resource
- Move the quorum to its final location
- Go into disk management and change the Q: name to another letter
- Use disk management and name the final quorum drive to Q:
- Repeat the quorum-move sub-steps above (Cluster Administrator, Properties, Quorum tab) to move the quorum to its final destination
- Recreate the MSDTC resource:
  - Create a new MSDTC resource with the cluster's network name resource and the new Q: disk as dependencies
  - Bring the MSDTC resource online
- Stop the cluster service and the application cluster groups (you can stop just the application resources if you want to move application data one app at a time)
- Move the data from the old disks to the new ones
- Re-letter the old disks to something outside the current range, but do not remove them yet – you might need to use them in your back out plan
- Re-letter the new disks to the same drive letters as the old ones (no, you do not need to worry about disk signatures; applications don't understand disk signatures and don't care about anything other than drive letters)
- Verify that all dependent resources are pointing to the proper physical disk resource.
- Restart the cluster service
- Make sure the new drive letters and disk resources are showing up properly in cluster administrator
- Bring everything back online
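Most of the cluster-side steps above can also be scripted with cluster.exe rather than clicked through in Cluster Administrator. A rough sketch follows — like the steps themselves, this is from memory, and the resource and group names ("Disk M:", "Disk Q:", "Cluster Name", "Cluster Group") are placeholders for whatever your cluster actually uses:

```shell
rem Point the quorum at a different physical disk resource (temporary or final)
cluster /quorumresource:"Disk M:"

rem Recreate MSDTC, add its dependencies, and bring it online
cluster res "MSDTC" /create /group:"Cluster Group" /type:"Distributed Transaction Coordinator"
cluster res "MSDTC" /adddep:"Cluster Name"
cluster res "MSDTC" /adddep:"Disk Q:"
cluster res "MSDTC" /online

rem Final check: list every resource and its state before bringing the apps online
cluster res
```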
Again, these are basic steps. Some of the individual steps will require lots of work. I have done this now several times and am very happy with the results.