Note: David asked me to explain why I did not say anything about RAID 10 or 0+1. I told him that I can't see throwing that much storage at a transaction log unless you really have a transaction volume so huge that the log space itself needs to scale up to match.
I got brave and decided to fix the problem. The issue, for those who missed the first show, was that I needed to reconfigure the MSDTC on a SQL cluster. The MSDTC needed to be moved to a different drive because we are retiring one of the EMC frames.
The more I read on this, the more complex it seemed. I read through Q301600 and Q294209, and reviewed several other sources. These articles made it sound like I was going to have to rip out DTC on each node, rebuild it on each node, and restart SQL on the cluster. I just refused to believe it was that complex.
The MSDTC resource was configured as part of the initial cluster group with the cluster name and Q: as dependencies. Previously, the quorum was moved from Q: to I:, but the old Q: could not be removed until the MSDTC was reconfigured.
The more I thought about it, the more a simpler approach made sense, so I:
- Stopped the MSDTC resource
- Copied the MSDTC folder from Q: to I:
- Stopped the Q: resource
- Deleted the MSDTC resource
- Created a new MSDTC resource with the cluster name and the new I: as dependencies
- Brought the MSDTC resource online
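For anyone who prefers the command line, the steps above can be sketched with cluster.exe. This is a sketch under assumptions, not a tested recipe: the resource and group names ("MSDTC", "Disk Q:", "Disk I:", "Cluster Name", "Cluster Group") are placeholders for whatever your cluster actually uses, and the file copy still happens outside the tool.

```bat
REM Sketch only -- resource and group names are assumptions; substitute your own.
cluster res "MSDTC" /offline

REM (copy the MSDTC folder from Q: to I: by hand at this point)

cluster res "Disk Q:" /offline
cluster res "MSDTC" /delete
cluster res "MSDTC" /create /group:"Cluster Group" /type:"Distributed Transaction Coordinator"
cluster res "MSDTC" /adddep:"Cluster Name"
cluster res "MSDTC" /adddep:"Disk I:"
cluster res "MSDTC" /online
```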
I don’t see any problems with it so far.
More like, it is semi-broken. OK, it isn't broken at all, but it isn't configured the way I want it configured, so I want to fix it.
It all started last Friday night. We are migrating from one EMC frame to another one. The storage guys (they really do a great job here) added new LUNs to the SQL cluster. We then added the drives as resources in Cluster Administrator and put the drive resources into the proper SQL cluster groups. We shut down the SQL services, did a quick file copy to the new drives, changed the drive letters to match the old ones, and restarted the SQL services. We also pointed the quorum to its new drive. Everything works.
Problem: the quorum is now the I: and not the Q:. Yes, I know it is a minor issue, but the company wants all quorums to be Q: drives. So, we try to make the change. Nope, the old Q: won't let us change it. Why? Because the MSDTC is using that resource. Time to go to bed.
Today, we revisit the issue of the Q:. We have to do something because this one drive is holding up the retirement of the old EMC frame. The MSDTC is in the cluster group along with the quorum, the cluster IP, and the cluster name. So, there are two physical disks in the cluster group, one being the Q: (old quorum) and the other being the I: (new quorum). We tried stopping the MSDTC, copying all the MSDTC folder info from Q: to I:, adding the I: as a dependency, and removing the Q: as a dependency. As you can guess, this just doesn't work. Yes, I know the solution is to uninstall MSDTC and reinstall it. No, I don't want to do that. I want a better way.
Back to thinking… I will probably just do it the right way, but I have a nagging feeling I am missing something really easy.
Whenever I teach Exchange Server 2003 classes, I get to the module that discusses clustering and I want to scream. There just isn't enough material to discuss Exchange clustering properly. Anyway, I started talking more and more about clustering, as there seems to be a great deal of interest in clustering Exchange in many organizations. So, here are some of the more common questions I get when discussing Exchange Server clustering.
Q1. If I have two nodes in the cluster, do the mailboxes exist on both nodes?
A1. Microsoft Server Clustering uses a shared nothing architecture. In this architecture, resources are created for a virtual server (they include any needed Physical Disk resources, Network Name, IP Address, and services). In the case of Exchange, the cluster virtual server is built and all the resources run on the active node. If the virtual server fails over or is moved to the passive node, the second node in the cluster then takes control of all of those resources. So, short answer: The mailboxes exist in the storage group associated with the physical disk resource and this disk resource is passed back and forth between the nodes. Only one copy of each mailbox exists.
Q2. If I build a two node cluster, do the computers have to be exactly the same?
A2. No, they don’t need to be exactly the same, but they need to be very close in order to be supported. See KB 814607 and read the section on Server Cluster Qualification for more information.
Q3. I read the book and I also heard you say that you will often need additional single machine Exchange servers when using Exchange Server Clusters. Why do I need to have Exchange servers that are not part of a cluster?
A3. Several different services are not properly supported in a cluster and others just simply do not work. These services include:
- Active Directory Connector
- Intelligent Message Filter
- Site Replication Service
- Internet Mail Wizard
- /DisasterRecovery setup switch
- Lotus Notes Connector
- Novell GroupWise Connector
- Exchange Events
To top it off, because of the SRS and ADC issues, an Exchange Server 2003 cluster can’t be the first Exchange Server 2003 server in an Exchange 5.5 site. Thanks to David Elfassy for helping me with this list. http://spaces.msn.com/members/elfassy/Blog/cns!1pvwhiXzZoTl_cUJCU1PSHfw!185.entry
Q4. MSDTC is required as part of the cluster install and there are conflicting articles on the Microsoft site about whether it needs its own cluster group with its own IP resource, network name resource, and physical disk resource. What is the right answer?
A4. MSDTC does not require its own physical disk resource, and it can be included in the default cluster group. You can get more info on my blog under the Microsoft Clustering category.
Q5. What is wrong with using Active/Active for Exchange clustering vs. Active/Passive?
A5. Auggghhh. Read my blog here for the answer (summary: don't use Active/Active): http://spaces.msn.com/members/russkaufmann/Blog/cns!1pwuGkyvTDx37q1_Y3JQ_E6g!137.entry
Q6. How do I add the IMAP4 and POP3 services to my Exchange cluster after it is installed?
A6. It is covered here: http://www.microsoft.com/technet/prodtechnol/exchange/guides/E2k3AdminGuide/47c09fa5-09cc-4fe6-a748-d45f0d3b5ded.mspx but to boil it down to the basics, the steps (shown for IMAP4 only) are:
- Right-click the Cluster Group for the Exchange Virtual Server (EVS), select New, Resource, and then enter a name (e.g. EVS1 IMAP4)
- Select Microsoft Exchange IMAP4 Server Instance from the Resource Type list.
- Add all nodes as possible owners
- Add the System Attendant as a dependency
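The same steps can be sketched with cluster.exe. Treat this as a sketch: "EVS1", "NODEA", and "NODEB" are placeholder names, and the exact name of the System Attendant resource varies from cluster to cluster, so check yours before adding the dependency.

```bat
REM Sketch only -- "EVS1", "NODEA", "NODEB", and the System Attendant
REM resource name are placeholders; substitute your own.
cluster res "EVS1 IMAP4" /create /group:"EVS1" /type:"Microsoft Exchange IMAP4 Server Instance"
cluster res "EVS1 IMAP4" /addowner:NODEA
cluster res "EVS1 IMAP4" /addowner:NODEB
cluster res "EVS1 IMAP4" /adddep:"EVS1 System Attendant"
cluster res "EVS1 IMAP4" /online
```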
Q7. Why do I need MSDTC to be installed in order to build an Exchange cluster?
A7. Because. It is really only needed during the installation of the cluster, because the Exchange setup program needs cdowfevt.dll, which is part of the COM+ installation. MSDTC is used for workflow applications in Exchange, but other than that, it isn't used at all after the install. Oops, I take it back: it is used for upgrades as well.
Q8. How many physical disk resources should I plan for an Exchange cluster?
A8. At a minimum, you should have 4 physical disk resources per EVS, or 5 if the MTA is heavily used, since a heavily used MTA should have its own physical disk resource.
- One for the quorum and MSDTC (yes, you can put the MSDTC on the same disk without any trouble)
- One for the store (one for each storage group at a minimum, I prefer one for each store)
- One for the transaction logs (one for each storage group)
- One for SMTP
- One for the MTA (possibly… it depends how much the MTA will be used in your environment)
Keep in mind that each of these disks should be a LUN on a SAN. If you are carving them up yourself, I highly recommend using RAID 1 sets for the transaction logs, SMTP, and MTA (if you use it heavily) and RAID 5 for the mailbox stores. Do not create physical disk resources that are partitions on the same physical drives. When it comes to disk sizing, I highly recommend reading Nicole Allen’s blog entry at http://blogs.technet.com/exchange/archive/2004/10/11/240868.aspx. She does a fantastic job of explaining how to size disks for Exchange. You can also see similar information on storage optimization at http://www.microsoft.com/technet/prodtechnol/exchange/2003/library/optimizestorage.mspx.
Q9. Why do you recommend MSCS for the mailbox servers but not for the OWA servers?
A9. The OWA (also known as the Front End or FE) servers do not have a requirement for shared disk storage. You can achieve server redundancy and horizontal scaling using NLB or hardware load balancers with multiple FE servers since there is no requirement for a database or information stores on an FE.
Q10. Windows Server 2003, Enterprise Edition, supports eight nodes in a cluster. Can I have eight virtual Exchange servers?
A10. While you can have up to eight nodes in a cluster, you can't have that many Exchange Virtual Servers in a single cluster. Once you go to three or more nodes, Exchange forces you to have at least one passive node. So, for eight nodes, you can have at most seven active nodes and one passive node. There are a couple of concerns to be aware of when creating Exchange clusters larger than two nodes.
- If you have three or more nodes, each node can only host a single EVS. If, for example, you have three nodes with two active nodes and one passive node (Active/Active/Passive), and one of the active nodes fails, its EVS will fail over to the one passive node. If the other EVS then failed, it would not fail over to another node; it would just fail. While you can potentially have two EVSs on the same node in an Active/Active two-node cluster, you can't have two EVSs on the same node in larger clusters.
- In a large cluster, it makes sense to have two or more passive nodes so that you can support more than one failure at a time.
- My personal recommendation is to never go beyond 4 nodes (Active/Active/Active/Passive), as you will be fighting drive letter issues (think about 5 or more physical disk resources per EVS and then do the math), and it becomes very complicated to monitor and manage. With three EVSs, the number of drive letters gets pretty high, which makes it difficult to add new physical disk resources and to do things like disk migrations in the future. Yes, you can use mount points, but using drive letters makes it easier to manage.
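To actually do the drive-letter math, here is a quick sketch. The numbers are assumptions for illustration: 5 physical disk resources per EVS, one quorum disk, and 4 letters reserved per node (A:, B:, C:, D:).

```python
# Rough drive-letter budget for an Exchange cluster.
# The per-EVS disk count and reserved letters are assumptions for illustration.

def drive_letters_needed(evs_count, disks_per_evs=5, quorum_disks=1):
    """Physical disk resources (each consuming a drive letter) in the cluster."""
    return evs_count * disks_per_evs + quorum_disks

# 26 letters minus A: and B: (floppy), C: (system), D: (CD-ROM) on each node.
AVAILABLE_LETTERS = 26 - 4

for evs in range(1, 4):
    needed = drive_letters_needed(evs)
    print(f"{evs} EVS: {needed} letters used, {AVAILABLE_LETTERS - needed} left for growth")
```

With three EVSs you are already at 16 letters used out of 22 available, which is why adding disks or doing a disk migration later gets painful.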
Q11. How many IP addresses do I need for an Exchange cluster?
A11. You need IP addresses for:
- NodeA Public Interface
- NodeA Private Interface (for heartbeat)
- NodeB Public Interface
- NodeB Private Interface (for heartbeat)
- Default Cluster Group (IP is needed as dependency for network name resource)
- Exchange Virtual Server (IP is needed as dependency for network name resource for your EVS). You will need one IP for each EVS in your cluster.
- MSDTC cluster group (if you break it out into its own cluster group, it will need an IP resource, but this is not required)
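Tallying the list above for a two-node cluster (the separate MSDTC group defaults to off, since it is optional):

```python
# Tally the IP addresses from the list above.

def cluster_ip_count(nodes=2, evs_count=1, separate_msdtc_group=False):
    public = nodes          # one public interface per node
    heartbeat = nodes       # one private (heartbeat) interface per node
    cluster_group = 1       # IP resource for the default cluster group's network name
    evs_ips = evs_count     # one IP resource per Exchange Virtual Server
    msdtc = 1 if separate_msdtc_group else 0
    return public + heartbeat + cluster_group + evs_ips + msdtc

print(cluster_ip_count())             # two nodes, one EVS -> 6
print(cluster_ip_count(evs_count=2))  # add a second EVS -> 7
```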
Remember a couple of important things regarding your heartbeat network:
- It should not be routable
- It should have NetBIOS disabled
- It should not register its IP with DNS
- It should have Microsoft Networking disabled
- More info on the heartbeat network can be found here: http://spaces.msn.com/members/russkaufmann/Blog/cns!1pwuGkyvTDx37q1_Y3JQ_E6g!146.entry
Note: I will return and update this entry as I think of the more common questions that I get in my Exchange classes.