
Exchange Server 2007 CCR on Windows Server 2008 Failover Cluster

I did a one-day workshop on March 15 in Orlando at the Exchange Connections conference. As usual, it was a great deal of fun. However, for some reason, I never posted this blog entry until today.
 
Anyways, I decided to put some of the key bits of information out here for others to enjoy. I hope it helps everyone.
 
First off, we need to understand that putting up a CCR cluster requires several steps, which can be grouped into four categories:
  1. Configure the Hardware
  2. Install and Configure the Operating System
  3. Install and Configure the Failover Cluster Feature
  4. Install and Configure Exchange Server 2007 on the Cluster
Configuring the hardware really isn’t difficult since we are talking about CCR. There is no need for a Storage Area Network (SAN) with all of the issues around creating, presenting, and securing Logical Unit Numbers (LUNs) for cluster storage. All we need to do here is purchase our servers with two Network Interface Cards (NICs) and two internal disks.
 
Installing and configuring the operating system is also pretty straightforward. We need to use either the Enterprise or Datacenter edition of Windows Server 2008 on each node. Once the OS is installed, each node needs to be joined to the domain.
 
One of the most important steps is to configure the operating system on each node with the proper role and features that are prerequisites for clustering and supporting Exchange Server 2007.
 
The prerequisites include:
  • Web Server (IIS) and its Required Features
  • Web Server (IIS) Role Services which includes:
    • ISAPI Extensions
    • Basic Authentication
    • Windows Authentication
    • IIS 6 Management Compatibility
  • Windows PowerShell

These prerequisites can be installed through the GUI or from the command line. For the command line, run the following commands:

  • ServerManagerCMD -i Web-Server
  • ServerManagerCMD -i Web-ISAPI-Ext
  • ServerManagerCMD -i Web-Metabase
  • ServerManagerCMD -i Web-Lgcy-Mgmt-Console
  • ServerManagerCMD -i Web-Basic-Auth
  • ServerManagerCMD -i Web-Windows-Auth
  • ServerManagerCMD -i PowerShell

The Web Server prerequisites are demonstrated in this IISPrerequisites recording, while the GUI installation of Windows PowerShell is shown in this PowerShell recording.

The next step in the operating system configuration is setting up the networks. The public network, also referred to as the client access point (CAP), is configured just like any other server’s network. The network used for intracluster communications should be configured so that each NIC (one per node) uses an address from a private IP range and has no default gateway. It is a good practice to rename the networks so there is no confusion regarding their use.
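
If you prefer to script the private network addressing instead of using the GUI, something like the following works on Windows Server 2008. This is just a sketch; the connection name "Private" and the 10.10.10.x addresses are made-up lab values, so substitute your own:

  • netsh interface ipv4 set address name="Private" source=static address=10.10.10.1 mask=255.255.255.0 (node 1; note that no gateway is specified)
  • netsh interface ipv4 set address name="Private" source=static address=10.10.10.2 mask=255.255.255.0 (node 2)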

Many cluster administrators will also tune the intracluster communication network (also known as the private network or heartbeat network) so it is not configured with unnecessary services. For example, the private network should be configured as follows and as shown in this network clip (a command-line sketch follows the list):

  • Clear the checkbox for Client for Microsoft Networks
  • Clear the checkbox for QoS Packet Scheduler
  • Clear the checkbox for File and Printer Sharing for Microsoft Networks
  • Clear the checkboxes for the Link-Layer Topology options
  • Clear the checkbox for Register this connection’s address in DNS
  • Clear the checkbox for Enable LMHOSTS Lookup
  • Select the radio button for Disable NetBIOS over TCP/IP
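
If you would rather script the last few of those settings, WMI can handle the NetBIOS and DNS registration pieces from the command line (the protocol bindings themselves are easiest to change in the GUI). This is only a sketch; the index value of 7 is made up, so use whatever index wmic reports for your private NIC:

  • wmic nicconfig get index,description,ipaddress (find the index of the private NIC)
  • wmic nicconfig where index=7 call SetTcpipNetbios 2 (a value of 2 disables NetBIOS over TCP/IP)
  • wmic nicconfig where index=7 call SetDynamicDNSRegistration FALSE (stops the connection from registering in DNS)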

Installing and configuring the Failover Clustering feature is the third major step in configuring our CCR cluster. This step is pretty easy. All we need to do here is add the Failover Clustering feature to each of our nodes so that they can be part of a cluster. The clip shows the steps for installing this feature. Actually, the feature is already installed in the clip, but it is easy to see how it would be installed on each node. You can also install the feature from the command line by running:

  • ServerManagerCMD -i Failover-Clustering

Now that the feature is installed, we can take the next step and actually create our cluster. The Create Cluster link is available in a couple of different locations and can be used to create and configure the cluster, as shown in this Failover Cluster clip.

Once we have created the cluster, we need to change the quorum configuration to support CCR. The recommended quorum model is Node and File Share Majority, which uses a file share witness. This clip shows the process of configuring the File Share Witness.

Installing Exchange Server 2007 on the cluster is the second-to-last step. In this step, we run the setup program from the Exchange Server 2007 installation media. During the installation, we choose the custom installation option and select the Active Clustered Mailbox Role. The option to select either Cluster Continuous Replication or Single Copy Cluster comes next. The process is seen here in this CCR Installation clip.

The last step is to run the setup program from the Exchange Server 2007 installation media on the other node and select the Passive Clustered Mailbox Role. The steps are the same for the passive node as for the active node with the exception of selecting the passive installation option.
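
For those that prefer the command line, the same installs can be done with setup.com. Treat this as a sketch rather than a recipe; EXCCR01, the IP address, and the node layout are made-up values, and you should confirm the switches against setup.com /? for your build:

  • Setup.com /mode:Install /roles:Mailbox (run on each node that is already a member of the failover cluster)
  • Setup.com /NewCms /CmsName:EXCCR01 /CmsIPAddress:192.168.1.25 (run once, on the node that will host the active clustered mailbox server)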

Exchange Server 2007 Disaster Planning

During my one-day pre-conference session at Exchange Connections, I heard a desire for something that doesn’t exist today. Later in the week, I sat in the back during a Harold Wong session, and the same topic/request/demand/whatever came up.

The basic scenario is this:

  • Customer sets up CCR in site1.
  • Customer sets up CCR in site1 as an SCR source to a member server in site2.
  • Customer wants to activate SCR destination because of a problem with bandwidth from site1 to site2 and from site1 to the Internet. (expected outage of less than 4 hours)
  • Customer activates SCR target. Customer does not want to take down CCR in site1 as employees in site1 still need to access the email server.
  • Bandwidth issue is fixed.

Customer then wants to sync the SCR target (which is now active) back with the CCR cluster in site1.
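
For reference, the forward path in that scenario maps to a couple of SP1 cmdlets in the Exchange Management Shell. This is just a sketch with made-up names (CCRMBX01 for the clustered mailbox server, SCRTARGET for the site2 member server, and the default storage group name), and activating the target also involves additional recovery steps (database portability or setup /recoverCMS, depending on the topology):

  • Enable-StorageGroupCopy -Identity "CCRMBX01\First Storage Group" -StandbyMachine SCRTARGET (creates the SCR relationship from site1 to site2)
  • Restore-StorageGroupCopy -Identity "CCRMBX01\First Storage Group" -StandbyMachine SCRTARGET (activates the copy on the SCR target during the outage)

It is the trip back, getting the now-active SCR target and the CCR cluster in site1 synchronized again, that has no equivalent cmdlet today.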

So, to summarize, customer wants something like Active Directory’s multi-master, multi-write database copies with backfill capabilities, but for Exchange mailboxes.

hmmmm…. 🙂

Exchange Server 2007 and Virtualization

While working the Failover Clustering booth, I must have heard questions regarding this topic about once per hour, if not more.


The official stance: Microsoft does not support the virtualization of Exchange Server 2007 roles at this time. Why not? Well, Microsoft does not currently have a virtualization platform capable of supporting 64-bit virtual machines. Hyper-V is not an RTM product. Whether Microsoft will change the stance once Hyper-V RTMs is another question, and I don’t have an answer. Also, keep in mind that Microsoft is not about to support a third party’s virtualization platform because they don’t have control over it to properly support it and fix problems that might be discovered.


My point of view: Why would you ever want to do that anyways? Exchange and SQL are two services that really do require top-notch resources, and sharing a host with other virtualized servers just seems counterproductive to providing the best performance possible for two key business services.


OK, now that I am off my soap box, can you virtualize Exchange Server 2007? Yes, you can. It makes perfect sense to me for development and testing environments. It makes perfect sense for a proof of concept, too. It even makes perfect sense in small organizations that won’t push their Exchange implementation very hard.


Recently, I worked with a client that has a nice virtualization platform running Hyper-V RC1. They hosted mailbox servers, hub transport servers, and client access servers for their test environment. It ran wonderfully. They are considering doing it when Hyper-V RTMs because their expected load for 35 users isn’t very large. 


UPDATED: Scott Schnoll posted the official stance in his blog post, Exchange Server 2007 and Hyper-V.

Which Exchange Server 2007 Server Cluster Type Should I Use, CCR or SCC?

This is becoming a pretty common question in my Exchange classes. Which should I use? Why one over the other?


My current recommendation is to use CCR over SCC whenever possible. Why? I am glad you asked that question.


High Availability (see my definition here) is all about risk mitigation. What we should be doing is identifying risks to our important/critical applications and finding ways to eliminate, or at least mitigate, those risks where economically feasible.


One of the major risks that I see with Exchange Server 2007, as well as previous versions of Exchange, is losing my production database because of a disk failure or my database becoming corrupted. In the case of a disk failure, I would normally restore my database, but that takes time, and very few people want to run a dial tone database while they recover. Two Exchange Server 2007 technologies provide some protection against a lost database drive or a corrupted database. One is Local Continuous Replication (LCR). LCR, however, is a single-server technology and does not provide the risk mitigation against an entire server loss that a cluster can provide. The second technology is Cluster Continuous Replication (CCR). CCR provides the one extra piece that a Single Copy Cluster (SCC) does not: it protects against loss of the database disk or corruption of the database.


Since CCR does not do a block-by-block copy the way a SAN replication utility might, the likelihood of corruption passing from the production database to the passive copy is extremely low. Remember, the passive copy receives transaction logs and replays them into its own copy of the database, just as the production database applies them. Corruption of the active database is not copied in such an environment.
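
If you want to watch that log shipping in action, the Exchange Management Shell makes it easy to check on the passive copy. A quick example, where EXCCR01 is a made-up clustered mailbox server name:

  • Get-StorageGroupCopyStatus -Server EXCCR01

The output shows the copy status for each storage group along with the copy queue and replay queue lengths, which is a nice sanity check that the passive copy is keeping up.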


Of course, we can’t forget that by using CCR, we can also eliminate the need for a SAN, which is a huge cost savings.


So, add the increased risk mitigation to the elimination of the SAN requirement for high availability, and you can see that CCR is a vast improvement over SCC.

Wonderful Changes in Exchange Server 2007

I want to end the work day on a positive note. Yes, there are a couple of things about Exchange Server 2007 that tick me off, but overall, I love the product.


I just wanted to take a couple of minutes to mention some of my favorite features.




  • Databases – The change to a single database file is a big plus in my mind. Also, I love that we can now have up to 50 Storage Groups and up to 50 Databases when using Enterprise Edition. With the larger number of databases, we can now have smaller and faster databases. We can also have an extremely large number of spindles to provide even more disk I/O. Of course, being able to have a single transaction log disk per database is also a nice change, which will lead to better performance as well as easier database recovery.


  • OWA – Wow, lots of great changes here (except for the issue of Public Folder support, which will be added back in SP1). I love that it is so much easier to select recipients without having to do a search. The vast number of new options is also a big plus.


  • Mobile – Being able to allow users to wipe their own lost or stolen mobile devices is nice, as is being able to allow users to set up their own devices. The updated Exchange ActiveSync is fantastic.


  • OOF – Out of office messages are now much more granular. It is possible to configure them in a number of ways so that we can handle OOFs differently for internal senders versus external senders, and even control OOFs to partners where we have SMTP connections set up directly with them.


  • Transport Rules – I can’t say enough about all of the fantastic things we can now do using transport rules. I could go on for hours about the many new options we have as administrators, including simple things like being able to add disclaimers to all outbound email (a quick shell sketch follows this list).


  • UM – Unified messaging with the Outlook Voice Access capabilities is fantastic. I am having a blast playing with this new functionality.
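
Since I mentioned disclaimers above, here is roughly what that looks like in the Exchange Management Shell. Treat this strictly as a sketch from memory: the predicate and action names (SentToScope, ApplyDisclaimer) and their properties should be verified against the output of Get-TransportRulePredicate and Get-TransportRuleAction on your own Hub Transport server.

  $condition = Get-TransportRulePredicate | Where-Object { $_.Name -eq "SentToScope" }
  $condition.Scope = "NotInOrganization"
  $action = Get-TransportRuleAction | Where-Object { $_.Name -eq "ApplyDisclaimer" }
  $action.Text = "This message and any attachments are intended only for the named recipient."
  New-TransportRule -Name "Outbound disclaimer" -Conditions @($condition) -Actions @($action)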

If you haven’t looked into Exchange Server 2007 yet, I highly recommend downloading an evaluation version, taking a class, and seeing just how it can improve messaging in your company. The changes I have listed above are just the tip of the iceberg. There are many more new features that I am sure your company can use today.

More on Managing Exchange Server 2007 CCR

I have been thinking about this a great deal lately. As I said in my previous blog post, I am pretty concerned about the way a CCR implementation is supposed to be moved using the Exchange Management Shell (EMS).


Scott Schnoll, somebody I respect greatly, posted on the Exchange Team blog that it is recommended to always use EMS to move the clustered mailbox server from one node to another. He says that you can use the Cluster Administrator tool, but that using Cluster Administrator is not recommended because: 


  • These methods do not validate the health or state of the passive copy. Thus, their use can result in an extended outage while the node performs the operations necessary to make the database mountable.
  • These methods may also leave a database offline indefinitely because the replication is in a broken condition.

What does this mean? Well, I have to say it since nobody else will. It means that:



    • Messages may be lost because they were not properly replicated before the move of the clustered mailbox server.

    • The database may not be mountable.

    • Replication might be broken.

You know what that sounds like to me when you say a database can’t be mounted? It sounds like the database might be corrupted. At a minimum, it might not be complete because of lost messages that didn’t get replicated. In either case, this is bad (how is that for a technical term?) and should be something that is discussed in your organization.


With this in mind, all I can say is that you should never use Cluster Administrator to move a CCR clustered mailbox server, because it could cause ugly things to happen.
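
For reference, the supported move from the EMS looks like this; EXCCR01 and NODE2 are made-up names for the clustered mailbox server and the target node:

  • Move-ClusteredMailboxServer -Identity EXCCR01 -TargetMachine NODE2 -MoveComment "Planned move for maintenance"

The cmdlet validates the health of the passive copy before it hands the clustered mailbox server over, which is exactly the check the cluster tools skip.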

Issue: Managing an Exchange Server 2007 Cluster

Scott Schnoll posted on the Microsoft Exchange Team Blog the other day regarding the proper tools to use when managing an Exchange Server 2007 cluster.


Yes, there is some confusion. If you research the topic on Microsoft’s website, the documentation clearly says to use the Move-ClusteredMailboxServer cmdlet. Most of us in the industry just took that and ran with it. But there is a problem here. Every other application that runs on Windows Server 2003 server clustering can be fully managed using the Cluster Administrator tool or the cluster.exe command line. Exchange Server 2007 is a bit different when dealing with Cluster Continuous Replication (CCR).


Using the Cluster Administrator MMC, it is easy to properly delegate permissions so that operators and other non-administrators can perform basic tasks when managing clustered applications. The Exchange Management Shell (EMS) does not provide that ease of delegation. I can honestly say that I sure don’t want to be giving Exchange Administrator permissions to non-Exchange administrators. That is opening up a can of worms, and it would be impossible to get them all back in again.


Please take a few minutes and read Scott’s blog. It is extremely helpful, but at the same time, it really underlines a serious management problem. He states, “One of the reasons we recommend using Exchange tools to manage clustered mailbox servers is that, while Exchange is cluster-aware, the cluster tools are not Exchange-aware.” I see a potential problem here for many organizations. There really isn’t a good way to delegate permissions to an operations type of team to allow them to do Exchange Server 2007 clustered mailbox moves without giving them too many other permissions in the Exchange environment.


Right now, I have to say that I am a bit peeved. OK, maybe that is too strong, but I am extremely concerned about how this should best be handled.

Exchange Server 2007 Hub Transport (HT) and Client Access Server (CAS) on the Same NLB Cluster – Updated Jan 9, 2008

In order to keep the number of servers down in a high availability environment, administrators have been looking at using Network Load Balancing (NLB) for CAS and then co-locating the HT role on each node of the NLB cluster to also provide high availability for the HT role.


This configuration can work, and it really is not too difficult to configure. It is extremely important to note that using NLB to load balance the default SMTP receive connectors (using port 25) is not supported, and it is completely unnecessary since intra-Exchange communications, such as HT to HT traffic, are already load balanced. However, using NLB to provide redundancy and load balancing for connections to HTs that are hosting Client SMTP receive connectors (using port 587) is fully supported and may be desirable if you have a large number of external SMTP/POP and SMTP/IMAP clients that need to connect to this receive connector.


The steps you need to follow are:




    1. Set up two servers running Windows Server 2003 with two NICs in each server


    2. Install Exchange Server 2007 Hub Transport and Client Access Server (CAS) on each server


    3. Configure one NIC for the Network Load Balancing cluster and set up the other NIC on a separate network so the server can be managed through that IP address


    4. Configure NLB with Unicast and even load balancing


    5. Set up the port rules:



      • Port 25 to 25 for both TCP and UDP and select the radio button to disable this port range (this excludes port 25 from being listened to on the virtual IP address of the NLB cluster, but still allows the individual server IPs to listen on port 25)


      • Port 465 to 465 for both TCP and UDP and select the radio button to disable this port range


      • Port 80 to 80 for both TCP and UDP and set affinity to none (I recommend “none” so you can easily test and verify that it works)


      • Port 587 to 587 for both TCP and UDP, affinity none (this is for the client SMTP receive connector)


      • Port 443 to 443 for both TCP and UDP, affinity none


      • Port 110 to 110 for both TCP and UDP, affinity none


      • Port 993 to 993 for both TCP and UDP, affinity none


      • Port 143 to 143 for both TCP and UDP, affinity none


      • Port 995 to 995 for both TCP and UDP, affinity none


    6. With affinity set to none, you can more readily test the CAS (after updating the web pages to show which server is actually responding) and verify that the load is being shared. You can also test to make sure the NLB cluster does not respond to SMTP on port 25, which it shouldn’t if you set it up right, and verify that each server still responds to SMTP as an individual server.


    7. You can configure protocol logging for the other protocols and telnet to the ports using the NLB IP address to see if they are load balancing like they should. You can also use the NLB IP for testing by sending and receiving messages and checking the message tracking logs to confirm that the traffic is being balanced. In my testing, it all worked.
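
A quick way to check the SMTP behavior is a couple of telnet tests. The addresses below are examples only; use your own NLB virtual IP and each server’s dedicated IP:

  • telnet 192.168.1.50 25 (the NLB virtual IP; this should not respond, since port 25 is disabled in the port rules)
  • telnet 192.168.1.11 25 (a node’s dedicated IP; this should answer with the server’s SMTP banner)
  • telnet 192.168.1.50 587 (the NLB virtual IP; the client receive connector should answer here)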

NOTE: You may want to change affinity to either Single (especially if it is being used internally) or Class C (especially if it is accessible from the Internet) once your testing is done.


Good luck, and have lots of fun!
