Recently Microsoft released a white paper (Using WSRM and Scripts to Manage Clusters – http://www.microsoft.com/downloads/details.aspx?familyid=ba2559e6-dd23-41a6-9efb-1d90f8f1fc17&displaylang=en) on how to configure and use Windows System Resource Manager to manage clusters.
For those of you not familiar with WSRM, it’s a free product included with Windows Server 2003, Enterprise Edition and Datacenter Edition – both of which you can run a cluster on today. http://www.microsoft.com/windowsserver2003/techinfo/overview/wsrmfastfacts.mspx.
Here are a few features of WSRM:
- Set CPU and memory allocation policies on applications. This includes selecting processes to be managed, and setting resource usage targets or limits.
- Manage CPU utilization (percent CPU in use).
- Limit the process working set size (physical resident pages in use).
- Manage committed memory (pagefile usage).
- Apply policies to users or groups on a Terminal Services application server.
- Apply policies on a date/time schedule.
- Generate, store, view, and export resource utilization accounting records for management, service level agreement (SLA) tracking, and charge-back purposes.
Basically, WSRM lets you ensure that your clustered application gets the resources it needs while the base OS keeps the resources it needs. That way Exchange or SQL Server can take everything that's available without impacting normal operations.
The article correctly states that WSRM is not cluster-aware; it monitors each computer in a cluster individually. I would follow the best practice of configuring every cluster node with identical WSRM resource allocation policies, process matching criteria, and other settings. Scripting the process is an excellent way to keep the nodes consistent, as the article’s title suggests.
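As a rough illustration of that scripted approach, the sketch below builds one import command per cluster node so a single exported WSRM configuration file can be pushed everywhere. The node names and the config path are example values, and the `wsrmc /import:` switch is a placeholder I have not verified against the WSRM command-line documentation – only the PsExec-style `psexec \\node command` remoting pattern is real. Treat this as a shape for your own script, not a copy-paste solution.

```python
# Hypothetical sketch: apply one exported WSRM configuration to every
# node in a cluster, so all nodes end up with identical policies.
# NOTE: the "wsrmc /import:" syntax below is an illustrative placeholder,
# not verified WSRM CLI syntax -- check the WSRM docs for the real switch.

NODES = ["NODE1", "NODE2"]                # example cluster node names
CONFIG = r"C:\wsrm\cluster-policy.xml"    # example exported WSRM config file


def build_commands(nodes, config):
    """Return one remote-import command per node.

    Uses the PsExec remoting pattern (psexec \\node <command>); the
    WSRM import command itself is a placeholder.
    """
    return [f'psexec \\\\{node} wsrmc /import:"{config}"' for node in nodes]


# Print the commands so you can review them before running anything.
for cmd in build_commands(NODES, CONFIG):
    print(cmd)
```

The point of generating the commands from a single node list and a single config file is exactly the best practice above: every node gets the same policies, and adding a node to the cluster is a one-line change to the script.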