Monthly Archives: January 2010

Server Health Checks

I’d like to share some of the things I look at while doing a health check on a server.  It’s funny how few resources there are out there on the Internet.  I believe people keep this kind of stuff to themselves because they are scared they are going to miss something and will never live it down.  My response to that is: so what!  Heck, I don’t claim to know it all, but why not share what I do know? Maybe others can share via the comments!

When I’m troubleshooting I like to compartmentalize what I’m looking for, so my health checks are set up the same way.  I also believe health checks are quick snapshots of the health of a server.  Sure, there are tools you can use to analyze systems further, but in this case we are doing a quick health check.  Not all of these need to be done, but some should; you get to decide.


Occasional high CPU spikes are OK as long as you are aware of the process causing them. A server should not maintain 80% CPU utilization for an extended period of time; if it does, it may be time to upgrade.  It’s a good idea to keep Task Manager open for the duration of your troubleshooting so you can see trends.
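The “extended period” rule can be made concrete. Here’s a minimal sketch (a hypothetical Python helper for illustration; the post itself is about eyeballing Task Manager) that flags sustained high CPU from a series of utilization samples while letting occasional spikes pass:

```python
def sustained_high_cpu(samples, threshold=80.0, min_consecutive=12):
    """Return True if `min_consecutive` samples in a row meet or exceed
    `threshold` percent CPU (e.g. 12 five-second samples = one minute).
    Occasional spikes reset the counter and are not flagged."""
    run = 0
    for pct in samples:
        run = run + 1 if pct >= threshold else 0
        if run >= min_consecutive:
            return True
    return False
```

Feed it whatever sampling interval you trend with; the point is that a single spike resets the streak, matching the “occasional spikes are OK” rule above.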

Check CPU Usage

  1. Open Task Manager

  2. Check the Processes tab, ensure there are no processes consuming excessive CPU

  3. Check the Performance tab, ensure no single CPU has excessive usage

Check CPU HW

  1. Open Device Manager (right click computer –> Manage)

  2. Ensure no CPUs have a red X or yellow ! under Processors


This is one area that you may not want to do for quick health checks but is something you should be familiar with.  Task Manager only gives you basic info on processes, and you will find that you may need to dig a bit deeper.  For that I recommend Process Monitor from the great Sysinternals tools.  Process Explorer can also be used.  In fact, download and play with all these tools…they will save your bacon, I guarantee it.

In-Depth Check

Copy Process Monitor locally, then launch it.

  1. Analyze each process and watch what operations open registry keys, files, etc.

Copy Process Explorer locally, then launch it.

  1. Analyze each process based upon the number of threads, handles, loaded DLLs, etc.

Two great webcasts can be viewed here to see these types of tools in action.


A general rule of thumb is to make sure overall memory utilization does not exceed 80% for a sustained period of time.

Check Memory Availability

  1. Open Task Manager

  2. Select the Performance tab

  3. Look at the Physical Memory box, and multiply the total memory by .2

  4. If the total available memory is less than this number, then the box is currently utilizing more than 80 percent of its memory.
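Steps 3–4 are just arithmetic; here is the same check as a tiny Python sketch (a hypothetical helper, not part of any tool mentioned here):

```python
def over_80_percent_memory(total_mb, available_mb):
    """Mirror the Task Manager check: 20% of total memory is the floor
    for available memory; below that, utilization exceeds 80 percent."""
    floor = total_mb * 0.2
    return available_mb < floor
```

For a box with 4096 MB of RAM, anything under roughly 819 MB available means you’re past the 80% mark.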

Current utilization by process

  1. Select the Process tab

  2. Check the ‘show processes from all users’ box in the bottom left corner

  3. Click the column header ‘Mem Usage’ to sort the processes by memory utilization, highest to lowest. This will help you determine what processes are currently utilizing the memory on the box and can help you narrow your search for memory intensive processes.
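Clicking the memory column header is just a descending sort; as a throwaway illustration (process names here are made up), the same triage in Python looks like:

```python
def top_memory_processes(procs, n=5):
    """Sort (name, mem_kb) pairs highest-first, like clicking the
    'Mem Usage' column header, and keep the top n offenders."""
    return sorted(procs, key=lambda p: p[1], reverse=True)[:n]
```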


Check NIC HW

  1. Verify both ends of the network cable are securely seated in the port

  2. On the back of the server verify you have a green blinking link light on the NIC port

  3. Verify NIC HW is working properly by using Device Manager and ensure the active NICs are showing green

  4. Verify gateway, IP, subnet mask, DNS, DNS suffixes, etc. are properly configured.

  5. If everything is properly configured and HW is working, you should be able to get a ping response from the gateway.

Check Network Connections
Here are some other checks you should perform to ensure proper network connectivity:

  1. ipconfig /all will display all your TCP/IP settings, including your MAC address

  2. ipconfig /flushdns will flush your dns resolver cache

  3. ipconfig /displaydns will display what is in your DNS resolver cache

  4. Netstat -an command will show all the connections & ports from a machine

  5. Nbtstat command will show NetBIOS over TCP/IP connection stats

  6. Tracert <IP or DNS Name> command will show you the path the packet takes, the routers, and the response time for each hop.

  7. pathping <IP or DNS Name> command combines ping and tracert to the 100th degree.  It pings each hop 100 times and is great for testing WAN connectivity

Disk Space

All kinds of bad stuff can happen when your disk space is filling up.  The best way to alleviate this is to write a script to notify you when you reach a certain threshold. In a future post I’ll share a method for you to do just that.  However, if there is a problem and you need to perform a health check, here is how you check the space the old-fashioned way.

To check disk space manually:

  1. Right Click on My Computer

  2. Select Manage

  3. Select Disk Management

  4. Validate each disk has more than 10 percent free space

Event Logs

Event logs can reveal a more historical perspective on what is going on with the system and applications. When troubleshooting, query either the System or Application log and look for events with a timestamp near the time of the issue you are troubleshooting.

Events have 3 categories in the event viewer:

  • Informational: Noted with a white icon and letter ‘i’. Successful operations are logged as informational. Usually not used in troubleshooting problems or failures

  • Warning: Noted with a yellow icon and exclamation point. These are worth reviewing, as they serve as predictors of future failure, such as disk space running low or DHCP IP address lease renewal failures.

  • Error: Noted with a red circle icon and ‘x’. These are indications that something has failed outright and are a good starting point for troubleshooting.

When looking at event logs, use the information to determine the following:

  • Is the incident tied to a particular time or outage incident?

  • Is this a one-off, or has this particular error occurred multiple times in the past?

  • Does this error appear on other systems or is it unique to the system that has failed?
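The triage questions above lend themselves to a simple filter. A hypothetical sketch (event records here are plain dicts for illustration, not real Event Log API objects):

```python
from datetime import datetime, timedelta

def events_near_incident(events, incident_time, window=timedelta(minutes=15)):
    """Keep Warning/Error events whose timestamps fall within `window`
    of the incident, sorted oldest-first; Informational noise is dropped."""
    hits = [e for e in events
            if e["level"] in ("Warning", "Error")
            and abs(e["time"] - incident_time) <= window]
    return sorted(hits, key=lambda e: e["time"])
```

Run it once per affected system and compare the resulting lists to answer the “one-off or recurring, unique or widespread” questions.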

Also make sure you take a look at eventcombmt from Microsoft.  This tool allows you to search the logs of multiple machines.  The benefit to this is to see if a specific error or warning message is also occurring on other systems.  This can help rule out issues.


Troubleshooting services should be limited to the specific services affected by the problem being investigated. Each server will have different services depending on the types of applications running. You should document how your servers’ services are configured and compare that to the server in question to see if anything is misconfigured.


Servers that host applications and services that require high availability should be clustered so that if one node fails the other can pick up the workload.  Clustered servers need the same type of health checks as stand-alone systems except you will want to check on the health of the cluster.

Check Cluster Resource Status

  1. Open Cluster Administrator: Log onto server, select Start –> Run –> cluadmin

  2. Check the Resources and ensure all are Online

  3. If Cluster Administrator does not open, ensure that the Cluster Service is running on the node.

  4. Cluster resource status can also be checked from a remote server. From a command prompt, just type – cluster res <cluster name>

Client Side Health

  1. Right click on My Computer, select Manage

  2. Open Device Manager

  3. Drill down to SCSI and RAID Controllers, verify that the HBA HW is visible and does not show any errors

  4. If it does not show up in Device Manager, you may need to re-scan for the HW, re-seat the fiber card, or re-install the driver.

  5. If the HBA is showing healthy in Device Manager, open the tool that you use to view configuration and settings for the fiber card and verify there aren’t any transmit/receive errors on link statistics or counters

Switch Health

  1. Make sure fiber is properly connected to each switch

  2. Make sure switch has no errors

  3. If you’re using zoning verify it is properly configured

Check Fiber and SAN Connectivity

  1. Log onto the SAN appliance and verify that the SAN is in generally good health and that no major errors are present for the controllers, loops, switches, or ports.

  2. Ensure that the LUNs are presented to the servers in the cluster


Some applications will require you to spread the load across multiple servers.  Web servers are a very popular candidate for network load balancing.  As with clusters, we will need to check the status of the load balancing.

Check NLBS Status CMD Line

  1. From a command prompt on the local system, run ‘wlbs query’. This will give you the convergence status of the local node with the nlbs cluster.

  2. Other useful NLBS commands: wlbs stop (stops nlbs), wlbs start (starts nlbs), wlbs drainstop (drains node)

Check NLBS Configurations

  1. Open up the network properties –> Network Load Balancing, right click & select Properties

  2. On the Cluster Parameters tab, verify that the IP address is configured for the shared NLBS IP and that the subnet mask, domain, and operation mode are configured correctly.

  3. On the Host Parameters tab, make sure each node of the cluster has a unique host identifier. Also verify the IP and subnet mask are configured for the local values.

  4. Also make sure that your switch has a static ARP entry if using multi-cast NLBS. The entry should be that of the virtual MAC of the cluster. To get the virtual MAC of the cluster, you can run the following command: WLBS IP2MAC <virtual IP address>

Name Resolution

To health-check name resolution, open a command prompt and enter the following:

  • nslookup <servername>

Verify that the servername is correctly entered in DNS

If a record does not show up in the DNS query, or maps to a different name, perform a reverse lookup by IP address to see what name is associated with the IP address:

  • nslookup <IP address>
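The same forward-then-reverse check can be scripted. A Python sketch using the standard socket module (the helper name is made up for illustration):

```python
import socket

def check_name(hostname):
    """Forward-resolve a name, then reverse-resolve the resulting IP,
    mirroring the two nslookup checks above. Failed lookups come back
    as None rather than raising."""
    try:
        ip = socket.gethostbyname(hostname)
    except socket.gaierror:
        return {"name": hostname, "ip": None, "reverse": None}
    try:
        reverse = socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        reverse = None
    return {"name": hostname, "ip": ip, "reverse": reverse}
```

A missing `ip` means the forward record is absent; a mismatched `reverse` points at a stale or wrong PTR record.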

If no name shows up associated with the IP address, log into the domain controller and check the DNS records for this particular name/ip address

  1. From a Domain Controller go to start–>run–>dnsmgmt.msc

  2. Expand the Forward Lookup Zones

  3. Expand your primary zone that holds the records for the system(s) you are troubleshooting

Validate that the record exists. If it does not, manually enter the record name and IP address by right-clicking on this same zone:

  1. Select New Host (A)

  2. Enter the name and IP address

  3. Check the box next to Create associated pointer (PTR) record

  4. Click Add Host

Additionally, log back into the node that you manually entered the record for and ensure that it is registering in DNS:

  1. Right click on the My Network Places icon on the desktop and select Properties

  2. Double click on the primary adapter

  3. Select properties

  4. Highlight Internet Protocol (TCP/IP) and select Properties

  5. Validate the IP addresses of the DNS servers are correct

  6. Select Advanced

  7. Select DNS tab

  8. Make sure the box is checked next to Register this connection’s address in DNS

As I wrap this up, I realize there is so much more that can be done.  Each type of application server needs its own set of health checks; for example, web servers, terminal servers, and database servers.  Remember, this is just the baseline for each server, and other components can and should be layered on top of it.  Again, I would love to hear from others, so please feel free to add your comments below.

The new auto-start feature

Version 4.0 of ASP.NET introduces a new feature called auto-start. The idea is simple: to improve the performance of the web app by allowing apps to run some expensive code before the first request comes. I must say that this is really an interesting concept (and no, I won’t be going into details here because the white paper already explains most of what you should know).

I’m only mentioning this here because it will only work on Windows 7 and Windows Server 2008 R2 onwards. This makes me really sad, because people running Windows Server 2008 won’t be able to use this feature. I really believe it’s time for MS to change the interaction between IIS and the OS…I mean, am I the only one who is fed up with having the IIS version tied to the OS version???

Routed events in Silverlight

Routed events were introduced by WPF and they’re responsible for enabling several advanced scenarios in that platform:

  • tunneling: in this case, the event is first raised in the root and goes “down the tree” until the source element that generated the event is reached;
  • bubbling: in this case, the event bubbles from the source element to the root element;
  • direct: the event is only raised in the source element.

Once again, the use of routed events in Silverlight is limited. By default, it only exposes a couple of routed events and it only supports bubbling (i.e., there’s no tunneling for routed events in Silverlight). In order to illustrate the bubbling feature, we’ll start by running the following example:

<UserControl x:Class="Tests.test"
    xmlns=""
    xmlns:x=""
    xmlns:my="clr-namespace:Tests">
  <Canvas Width="150" Height="200" x:Name="cv">
    <StackPanel x:Name="sp">
      <TextBlock Text="Click me" x:Name="bt" />
    </StackPanel>
  </Canvas>
</UserControl>

And here’s the code-behind:

public partial class test : UserControl {
    public test() {
        InitializeComponent();
        bt.MouseLeftButtonDown += PrintInfo;
        sp.MouseLeftButtonDown += PrintInfo;
        cv.MouseLeftButtonDown += PrintInfo;
        MouseLeftButtonDown += PrintInfo;
    }

    private void PrintInfo(Object sender, MouseButtonEventArgs e) {
        var info = String.Format(
            "original element: {0} - current element: {1}\n",
            ((FrameworkElement)e.OriginalSource).Name,
            ((FrameworkElement)sender).Name);
        MessageBox.Show(info);
    }
}

If you run the previous sample, you’ll notice that the event bubbles from the button until it reaches the root user control. The MouseButtonEventArgs class ends up inheriting from the RoutedEventArgs class. Due to that, we can access the OriginalSource property and find out which object is responsible for the event that is bubbling. Notice that MouseButtonEventArgs also adds the read/write Handled property: when you set it to true, the event won’t be propagated beyond the current element that is handling the event being fired.

Unfortunately, you can’t really create custom routed events in Silverlight. The reason is simple: there isn’t a public API for letting you do that (if you dig into the code of the MouseLeftButtonDown event instantiation, you’ll notice that routed events are created through the RoutedEvent constructor, which is internal). What this means is that you’re limited to creating “normal” events in your custom classes. I’m not sure if this limitation will ever be removed from Silverlight. And I guess this sums it up quite nicely. Stay tuned for more on Silverlight.

Gira Up To Secure 2010 – The photos :-)

I’ve been extremely busy the last few days and haven’t been able to publish anything until now. Sorry :-D


The event went very well, except for a small problem with the Internet connection of one of the speakers. In the end we had to fall back on a 3G –> WiFi adapter that Carles from Quest Software had brought (thanks!), who then got hit with a massive bill for the data usage across countries… poor guy :-)

Here are some of the photos from the event. Thank you all very much for attending; I hope you had as good a time as I did.

[Event photo gallery]

Regards, and see you at the next AndorraDotNet event (next February) on what’s new in VS2010.


Happy hacking! ;-)

** Cross-posted from Lluís Franco’s blog **

Cannot See \\Server when VPN into an SBS 2008 Server


Getting WINS-like computer name resolution over VPN in SBS 2008
by Nicholas Piasecki on June 13th, 2009
So this week concluded several sleepless nights and much heartburn as I migrated Skiviez’s SBS 2003 machine (running as our domain controller and our mail server) to SBS 2008. As far as these things go, it went relatively smoothly, and the remainder of the week was spent dealing with lots of small niceties that I had forgotten I had set up on the 2003 server and now needed to set up once again.

One of these was something that I used for my convenience over a VPN connection from home. You see, the internal order processing application that I wrote uses some shared folders to store some temporary data, such as e-mails that are generated but not yet released to Exchange, or a local copy of images that are available on the Web site. This software, and our users, are used to referring to Windows file shares as \\COMPUTER-NAME\SHARE-NAME; for example, \\CYRUS\Pickup Holding, because for some reason some of the older servers are named after my boss’s dead cats.

When connecting through VPN to SBS 2008, however, that “suffix-less” name resolution was not working. So while \\CYRUS\Pickup Holding failed to resolve to anything, \\\Pickup Holding would work fine. This was super annoying.

The reason this worked previously with our SBS 2003 installation is that it was acting as a WINS server, which provided this type of computer name resolution for us. SBS 2008 finally retires this ancient technology by default, however, so I had two choices: I could either install the WINS server role on SBS 2008, or I could just figure out how to get the 015 DNS Domain Name option from DHCP to relay through the VPN connection.

I chose the latter option, since it’s certainly less confusing to be able to say to someone in the future “we don’t use WINS, DNS does everything.” So here’s how to do it:

  1. On the SBS 2008 server, click Start > Administrative Tools > Routing and Remote Access.

  2. In the tree view, drill down past the server name to IPV4 > General. Right-click the General option, choose “New Routing Protocol”, and choose DHCP Relay Agent.

  3. Now right-click the newly appended “DHCP Relay Agent” node and choose Properties. Add the IP address of your DHCP server (which is probably your SBS server itself), and click OK. Then right-click it again, choose “New Interface”, and add the “Internal” interface.

  4. Now if you connect through VPN, an ipconfig /all should show your domain name as a “Connection-specific DNS suffix”, and pinging machines by their suffix-less computer names should work. (If it doesn’t, make sure your DHCP server is using that 015 DNS Domain Name option, which the SBS 2008 wizards set up by default.)

Vacations vs work time

I’ve just finished reading an excellent post by Scott Berkun on this topic. I couldn’t agree more, but unfortunately, things don’t work like that (and the problem is not limited to America only!)

SBS 2008 Outlook Pop Ups and Continuous Logon Prompts


SYMPTOM: for client workstations, when Outlook is launched, all users are continuously prompted for logon credentials in order to authenticate with Exchange even though they are properly authenticated on the domain. Authentication is in the Intranet Zone. Active Directory is fully functional and integrated with Exchange.

Exchange 2007 SP 1 Update Rollup 9
The resolution for restoring Exchange authentication communication with SBS 2008 Active Directory has been, in all cases, to download and install Exchange 2007 SP1 Update Rollup 9.

Here are issues we’re aware of when installing the Update Rollup 9 for Exchange.

  1. When installing Exchange 2007 SP1 Update Rollup 9 on Small Business Server 2008, it is advisable to open the command prompt and install the update using RunAs and Administrator credentials. Update Rollup 9 has failed on every SBS installation WIGITAL has attempted unless initially launched in this manner. Navigating to the path where the update package is, and launching the installer using RunAs from the command line, seems to solve some UAC (User Account Control) issues.

  2. You may see the following Event IDs when installing this Update Rollup: Event ID 1024, Event ID 1603, Event ID 11321. Details follow….

Log Name: Application
Source: MsiInstaller
Date: 1/11/2010 1:22:21 PM
Event ID: 1024
Task Category: None
Level: Error
Keywords: Classic
User: DOMAIN\administrator
Computer: SBS2008-SERVER.DOMAIN.local
Product: Microsoft Exchange Server – Update ‘Update Rollup 9 for Exchange Server 2007 Service Pack 1 (KB970162) 8.1.393.1’ could not be installed. Error code 1603. Windows Installer can create logs to help troubleshoot issues with installing software packages. Use the following link for instructions on turning on logging support:
as well as Event ID 11321

Log Name: Application
Source: MsiInstaller
Date: 1/11/2010 1:13:04 PM
Event ID: 11321
Task Category: None
Level: Error
Keywords: Classic
User: DOMAIN\Administrator
Computer: SBS2008-SERVER.DOMAIN.local
Product: Microsoft Exchange Server — Error 1321. The Installer has insufficient privileges to modify this file: C:\Program Files\Microsoft\Exchange Server\RelNotes.htm.

NOTE: For more information about errors, Exchange Server 2007 SP1 Update Rollup 9 creates a log file here:


Review the LOG to determine a course of action if you experience errors during your install.

To remedy the 11321 error (which we’ve seen every time we’ve installed Update Rollup 9):

  • Open Explorer

  • Go to C:\Program Files\Microsoft\Exchange Server\RelNotes.htm

  • Change the permissions on this file to allow the current Administrator FULL CONTROL

  • Apply the changes

  • Restart Update Rollup 9 using RunAs from the command line

Oddly enough, the installer’s insufficient access to RelNotes.htm has ended more than one installation prematurely.

You receive a ".pst is not compatible" error message when you open an Outlook 2003 or Outlook 2007 .pst file in earlier versions of Outlook


When you use a version of Outlook that is earlier than Microsoft Office Outlook 2003, you may experience the following problem. If you try to open or import a personal folder file (.pst) that contains information exported from Microsoft Office Outlook 2003 or Microsoft Office Outlook 2007, you may receive the following error message:
The file file_name.pst is not compatible with this version of the Personal Folders information service.
Contact your Administrator.

This problem occurs when the information in the .pst file was exported from Outlook 2003 or from Outlook 2007 by using the Import and Export Wizard. Outlook 2003 and Outlook 2007 use the Unicode format to export information to a .pst file. Earlier versions of Outlook cannot open or import Unicode-formatted .pst files; they can open only .pst files that are formatted in American National Standards Institute (ANSI) format.

To work around this problem, copy the contents of the Outlook 2003 or Outlook 2007 .pst file to a .pst file that has not been converted to the Outlook 2003 or Outlook 2007 Unicode format.

To copy information from a Unicode-formatted .pst file to an ANSI-formatted .pst file in Outlook 2003 or in Outlook 2007, follow these steps.

Note: To follow these steps, you must use a .pst file from an earlier version of Outlook.

  1. Start Outlook.

  2. On the File menu, click Data File Management, and then click Add.

  3. Click Outlook 97-2002 Personal Folders File (PST), and then click OK.

  4. Click OK to accept the default name, and then click OK again. Outlook 2003 now creates a new .pst file that is based on the earlier .pst file and maintains the ANSI formatting for that .pst file.

  5. Click Close.

  6. At the bottom of the navigation pane, click Folder List. In the navigation pane, you now see the new .pst file.

  7. Drag the information from your existing Outlook 2003 or Outlook 2007 folders to the new .pst file. You may also use the Import and Export Wizard on the File menu to move the information.

  8. In the navigation pane, right-click the new .pst file, and then click Close “file_name”.

Note: E-mail messages or other items that contain Unicode characters will not be copied to the new .pst file.

Folder Sync Still Grabbing Old Sync Folder

If you have an issue where the PC is still grabbing an old offline folder sync and the offline path doesn’t exist anymore, here is how to re-initialize the cache.

How to re-initialize the offline files cache and database

The Offline Files (CSC or Client Side Caching) cache and database has a built-in capability to restart if its contents are suspected of being corrupted. If corruption is suspected, the Synchronization Wizard may return the following error message:
Unable to merge offline changes on \\server_name\share_name. The parameter is incorrect.

Method 1
The Offline Files cache is a folder structure located in the %SystemRoot%\CSC folder, which is hidden by default. The CSC folder, and any files and subfolders it contains, should not be modified directly; doing so can result in data loss and a complete breakdown of Offline Files functionality.

If you suspect corruption in the database, then the files should be deleted using the Offline Files viewer. After the files are deleted out of the Offline Files viewer, a synchronization of files may then be forced using Synchronization Manager. If the cache still does not appear to function correctly, an Offline Files reset can be performed using the following procedure:

1. In Folder Options, on the Offline Files tab, press CTRL+SHIFT, and then click Delete Files. The following message appears:
The Offline Files cache on the local computer will be re-initialized. Any changes that have not been synchronized with computers on the network will be lost. Any files or folders made available offline will no longer be available offline. A computer restart is required.

Do you wish to re-initialize the cache?
2. Click Yes two times to restart the computer.

Method 2
Use Registry Editor
If you cannot access the Offline Files tab, use this method to reinitialize the Offline Files (CSC) cache on the system by modifying the registry. Use this method also to reinitialize the offline files database/client-side cache on multiple systems. Add the following registry subkey:
Value Name: FormatDatabase
Value Type: REG_DWORD
Value: 1
Note The actual value of the registry key is ignored. This registry change requires a restart. When the computer is restarting, the shell will reinitialize the CSC cache, and then delete the registry key if the registry entry exists.

Warning: All cache files are deleted and unsynchronized data is lost.
Use Reg.exe
You can also automate setting this registry value by using the Reg.exe command-line tool. To do this, type the following command at a command prompt:
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\NetCache" /v FormatDatabase /t REG_DWORD /d 1 /f

Note For specific steps to re-initialize the offline files cache and database in Windows Vista, click the following article number to view the article in the Microsoft Knowledge Base:

On a Windows Vista-based client computer, you can still access offline files even though the file server is removed from the network


Focusing on Testability – An Example

I am working on finishing an open source project and releasing a V1.0 of my Nucleo project.  I came across an example that I can use to illustrate testability.  Take a look at the following component:

 public class WebTraceLogManager : ILogManager
 {
     private Page GetPage()
     {
         if (HttpContext.Current.Handler is Page)
             return (Page)HttpContext.Current.Handler;
         return null;
     }

     public void LogError(Exception ex, string source)
     {
         Page page = this.GetPage();
         if (page != null)
             page.Trace.Warn(source, ex.Message, ex);
     }

     public void LogMessage(string message, string source)
     {
         Page page = this.GetPage();
         if (page != null)
             page.Trace.Write(source, message);
     }
 }

Notice that this component is self-contained; the Page reference is obtained from a static member.  This static member must be faked at testing time if I want the test to run outside of a web request (unless I’m using a web testing product like WatiN or WebAii by Telerik).  With a general mocking product like Moq, static classes typically can’t be mocked.  One product that I know of, TypeMock, does have the ability to write a unit test for this class, and it would look something like:

public void TracingErrorsWorksOK()
{
    var page = Isolate.Fake.Instance<Page>();

    var contextFake = Isolate.Fake.Instance<HttpContext>();
    Isolate.WhenCalled(() => HttpContext.Current).WillReturn(contextFake);
    Isolate.WhenCalled(() => contextFake.Handler).WillReturn(page);

    var traceContext = Isolate.Fake.Instance<TraceContext>();
    Isolate.WhenCalled(() => page.Trace).WillReturn(traceContext);

    var manager = new WebTraceLogManager();
    manager.LogError(new Exception(), "Test");

    Isolate.Verify.WasCalledWithAnyArguments(() => { traceContext.Warn(null, null, null); });
}

TypeMock has the ability to mock static method calls using Isolate.WhenCalled; even so, you can see that it takes a good bit of work to fake the ability to write to the trace output.  We have to create a mock HttpContext, and ensure that HttpContext.Current returns our faked implementation of HttpContext.  Furthermore, we have to ensure the Handler property returns a reference to a faked Page class, which we also have to create a fake for.  On top of that, the Page class has to use the Trace property (of type TraceContext), which, if manually constructed, would also use HttpContext.  Because I created a fake, no actual methods get called (including the constructor), which is good: if I were to write my own fake, who knows how many features I would have to implement to ensure writing to the TraceContext doesn’t cause an error?  TraceContext could use a property on HttpContext that I would have to guarantee is there, which may also use another property that I have to handle, and so on.  Writing fakes in TypeMock makes that a lot easier.

But there is an alternative way, which requires a little restructuring.  While it doesn’t solve all of the problems of testability, it makes it a little more testable by adding a constructor:

 public class WebTraceLogManager : ILogManager
 {
     private Page _page = null;

     public WebTraceLogManager(Page page) { _page = page; }

     public void LogError(Exception ex, string source)
     {
         if (_page != null)
             _page.Trace.Warn(source, ex.Message, ex);
     }

     public void LogMessage(string message, string source)
     {
         if (_page != null)
             _page.Trace.Write(source, message);
     }
 }

Now, anything that’s passed in can be used, so in the unit test we can do something a little simpler, like:

public void TracingErrorsWorksOK()
{
    var page = Isolate.Fake.Instance<Page>();

    var traceContext = Isolate.Fake.Instance<TraceContext>();
    Isolate.WhenCalled(() => page.Trace).WillReturn(traceContext);

    var manager = new WebTraceLogManager(page);
    manager.LogError(new Exception(), "Test");

    Isolate.Verify.WasCalledWithAnyArguments(() => { traceContext.Warn(null, null, null); });
}

Now we can reduce the number of fakes we need: just a fake Page implementation (though, depending on which properties you need, we could instantiate Page directly, as it requires no special setup).  The Trace property, though, needs a reference to HttpContext in its constructor, so any mocking framework needs to be able to mock HttpContext in order to create the fake TraceContext.  But you can see the reduced number of dependencies.

To take this one step further, we can do this to add another level of testability, and make the faking process even easier:

 public class WebTraceLogManager : ILogManager
 {
     private ITracer _t = null;

     public WebTraceLogManager(ITracer t) { _t = t; }

     public void LogError(Exception ex, string source)
     {
         if (_t != null)
             _t.Warn(source, ex.Message, ex);
     }

     public void LogMessage(string message, string source)
     {
         if (_t != null)
             _t.Write(source, message);
     }
 }

Now we have an interface to deal with.  Anything implementing this interface can be used.  To use the page implementation in the web site, we can create an additional wrapper class like:

public class PageTracing : ITracer
{
    private Page _page = null;

    public PageTracing(Page page) { _page = page; }

    // ITracer members
    public void Write(..) { _page.Trace.Write(..); }
    public void Warn(..) { _page.Trace.Warn(..); }
}

And then another fake implementation that does:

public class FakeTracing : ITracer
{
    private List<Message> _traces = new List<Message>();

    public List<Message> Messages { get { return _traces; } }

    public void Write(..) { _traces.Add(new Message(..)); }
    public void Warn(..) { _traces.Add(new Message(..)); }
}

So the fake implementation exposes the collection of messages; you can pass in a reference to the test as in:

public void TracingErrorsWorksOK()
{
    FakeTracing tracing = new FakeTracing();
    var manager = new WebTraceLogManager(tracing);
    manager.LogError(new Exception(), "Test");

    Assert.AreEqual(1, tracing.Messages.Count);
    Assert.AreEqual("Test", tracing.Messages[0].Message);
}

So you can see the immediate benefit: the interface buys us the ability to use a fake that exposes its inner contents directly for testing purposes, while the PageTracing class in the production web environment continues to use the Page approach we need.  The benefits are many, including the ability to use testing frameworks other than TypeMock, since we can pass in an interface implementation.
