IMightBeDisposable

While disposition stuff is on the brain, there’s another disposition topic I’ve been wanting to write about for a while now: handling disposition of objects of nebulous ownership.

The issue of nebulous ownership creeps in whenever you use any sort of complex creational pattern. If you instantiate an object by invoking its constructor directly, it’s pretty clear that you are its owner (unless, of course, you’re deliberately handing it off to other code). If that object happens to be disposable, then you can (and should) safely dispose it when you’re done working with it.

But what if you obtained an object instance via a method call? How can you tell if you’re the owner? If you’re lucky, that method has a well-documented contract that specifies whether you’re getting your own instance or not. Otherwise, you’re pretty much going to have to guess or go code spelunking (and hope that the behaviour doesn’t change in later versions). Luckily, we can often make some pretty decent educated guesses about who owns an object returned from a method call. For example, if the method is called CreateFoo, we can be pretty sure we’re getting our own Foo instance. On the other hand, if we’re calling the getter of a static Instance property, chances are that we’re getting a shared singleton.
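
To make those conventions a bit more concrete, here's a hedged illustration (the factory and locator names below are made up; it's the naming conventions that matter):

IFoo myFoo = fooFactory.CreateFoo();	// "Create..." strongly suggests we own this instance and should dispose it.
IBar shared = BarProvider.Instance;	// A static Instance property usually means a shared singleton: hands off.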

There are, however, cases where one is explicitly not supposed to care whether one is using a shared instance or not. Use of the service locator pattern is probably the most obvious example, and it’s the one I’ll focus on for the rest of this post. When you retrieve an instance from a service locator, you can’t tell if it’s a shared instance. Even more importantly, you’re really not supposed to care. You’re getting an object that’s castable to the type you requested, and you should use it as that type, and that’s it. Pretty simple, right? And it is, unless the object happens to be disposable.

Unfortunately, when you retrieve a disposable instance from a service locator, you have no way to tell if you’re getting a shared instance or not. This means that you can’t safely dispose it. I’ve seen a few different approaches to trying to deal with this:

  1. Don’t ever dispose anything received from a service locator.
  2. Use a separate lookup table keyed on service registration data (usually type and an optional registration name) to indicate whether instances of any given retrieved service should be disposed.
  3. Offload instance disposition to the container.
  4. Build disposition-prevention logic into registered service instances so that they will not be disposed unless the service container is being disposed.

#1 has the obvious disadvantage that some things that ought to be disposed won't be. This will usually lead to at least poorer performance, but it could also cause some more obvious functional bugs if resource locks are not released in a timely fashion.

#2 is kludgy. It means that you can’t simply call Dispose() on a serviced object. Instead, you need to either perform the disposability lookup first or invoke a helper method that takes care of both the lookup and the Dispose() invocation. Either way, you can’t use using statements, which is massively annoying. You’re also very likely to run into some nasty little bugs if you switch from using a locally newed instance to a serviced instance some time in the future. For example, if you make the following change, how likely are you to notice that you also need to change the disposition code?

Original code:

using (IFoo foo = new Foo())
{
	foo.DoSomething();
}

New code:

using (IFoo foo = ServiceLocator.GetInstance<IFoo>())
{
	foo.DoSomething();
}
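
Coming back to approach #2 for a moment: under that scheme, the "new code" above can't actually stay that simple. Here's a hedged sketch of the sort of helper that approach tends to produce (the registry and helper names are hypothetical):

public static class ServiceDisposalHelper
{
	// Keyed on service type; a real implementation would probably also key on the optional registration name.
	private static readonly Dictionary<Type, bool> _disposalAllowed = new Dictionary<Type, bool>();

	public static void SetDisposalAllowed(Type serviceType, bool allowed)
	{
		_disposalAllowed[serviceType] = allowed;
	}

	public static void DisposeIfAllowed<TService>(TService instance)
	{
		bool allowed;
		IDisposable disposable = instance as IDisposable;
		if (disposable != null && _disposalAllowed.TryGetValue(typeof(TService), out allowed) && allowed)
		{
			disposable.Dispose();
		}
	}
}

which forces the caller to replace the using block with something like this:

IFoo foo = ServiceLocator.GetInstance<IFoo>();
try
{
	foo.DoSomething();
}
finally
{
	ServiceDisposalHelper.DisposeIfAllowed(foo);
}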

#3 is the approach used by Autofac and, as far as I can tell, only by Autofac in the .NET space. When I first saw it, I was pretty darn excited – for about 20 seconds. And then that inconvenient little voice in my head started raising objections:

  1. Disposition is meant to allow “expensive” state to be cleaned up as soon as possible. By deferring disposition of container-sourced objects until the end of a predefined lifetime scope, we will be delaying disposition of individual objects in at least some, if not most, cases.
  2. Unless the tracking mechanism uses weak references (and I do have to admit that I haven’t examined the Autofac implementation in any depth, so I don’t know whether this is the case for it or not), it is very likely preventing non-shared disposable instances from being garbage collected until the end of their declared lifetime scope. This means that we could end up taking an unexpected memory hit unless we use fairly granular lifetime scopes.
  3. We shouldn’t dispose any container-sourced objects, which means some potentially difficult-to-diagnose problems if we move from a newed instance to a container-sourced instance (as in the previous example).
  4. Using this mechanism ties us into using the Autofac container. We can’t swap out a different container without making some pretty major code modifications.

So that leaves approach #4… From the point of view of the instance retriever, it’s very simple: you can safely dispose of any instance you retrieve from the container, just as if it had been newed. That’s really quite elegant, and it’s what I want to be able to do in my own code. However, getting to the point where you can do this is anything but simple. The only reference I’ve been able to find to a similar approach leaks implementation details up to the instance users, which is inconvenient in practical terms and, well, more than a little icky from a design point-of-view. So what can be done to avoid this?

Well, whatever we do, we’re going to need a wrapper type that actually controls the disposition. It has three key features:

  1. It wraps an instance of an interface, with that instance being provided to the wrapper constructor.
  2. It implements that focal interface, directly forwarding all interface member invocations to the wrapped instance, except for IDisposable.Dispose().
  3. For IDisposable.Dispose(), it checks whether disposition is allowed before invoking the Dispose() method of the wrapped instance.

On the surface, that looks pretty simple but, as is so often the case, the devil is lurking in the details… Let’s start with the question of how the wrapper class knows whether disposition of the wrapped instance should be allowed. Regardless of how we implement this, we’ll need to modify our service container to toggle some “disposition allowed” flag. However, I’d prefer to avoid forcing the container type to interact directly with our disposition-prevention mechanism. Instead, let’s make the container implement a more general-purpose interface:

public interface ITrackingDisposable : IDisposable
{
	DispositionState DispositionState { get; }
}

where DispositionState is an enum with the following definition:

public enum DispositionState
{
	Undisposed = 0,
	Disposing = 1,
	Disposed = 2
}

(There are a few potentially interesting enhancements to both the interface and the enum, but they’re not relevant to the disposition-prevention mechanism, so we’ll leave them out of the current discussion.)

In order to implement ITrackingDisposable, we’ll need to make a couple of changes to the service container. Adding the DispositionState property is pretty trivial:

public DispositionState DispositionState { get; private set; }

Changing the Dispose implementation is pretty trivial too, as long as you’re not trying to make any distinctions between successful and unsuccessful disposition:

public void Dispose()
{
	this.DispositionState = DispositionState.Disposing;
	try
	{
		this.Dispose(true);
		GC.SuppressFinalize(this);
	}
	finally
	{
		this.DispositionState = DispositionState.Disposed;
	}
}

Once our service container implements the ITrackingDisposable interface, we can pass a reference to the container to our wrapper class, and let it decide whether the wrapped instance should be disposed based on the current DispositionState of the container.

That just leaves us with one other picky detail: how do we end up with a wrapper type that forwards the focal interface implementation to the wrapped instance? First, we’ll create an abstract base class for our wrappers:

public abstract class DispositionPreventer<TComponent> : IDisposable
	where TComponent : IDisposable
{
	private readonly TComponent _component;
	private ITrackingDisposable _owner;

	protected DispositionPreventer(TComponent component, ITrackingDisposable owner)
	{
		Contract.Requires(component != null);
		Contract.Requires(owner != null);

		this._component = component;
		this._owner = owner;
	}

	protected TComponent Component
	{
		get
		{
			return this._component;
		}
	}

	public void Dispose()
	{
		switch (this._owner.DispositionState)
		{
			case DispositionState.Disposing:
			case DispositionState.Disposed:
				this._component.Dispose();
				break;
		}
	}
}

Now, we’ll need to create a concrete subclass for each interface for which we wish to register an instance with the service container. For example, if we have an interface IFoo defined as follows:

public interface IFoo : IDisposable
{
	event EventHandler Fooing;

	bool IsFooed { get; }

	void DoSomething();
}

we’ll need to create a disposition-preventer implementation like the following:

public sealed class FooDispositionPreventer : DispositionPreventer<IFoo>, IFoo
{
	public FooDispositionPreventer(IFoo component, ITrackingDisposable owner)
		: base(component, owner)
	{
	}

	public event EventHandler Fooing
	{
		add
		{
			this.Component.Fooing += value;
		}

		remove
		{
			this.Component.Fooing -= value;
		}
	}

	public bool IsFooed
	{
		get
		{
			return this.Component.IsFooed;
		}
	}

	public void DoSomething()
	{
		this.Component.DoSomething();
	}
}

That’s not so bad for a little interface like IFoo, but it can turn into hours of drudgery for larger interfaces. Unfortunately, given the lack of support for easy design by composition in .NET, you’re either going to have to write this code manually or build your own code generation tool to create the wrapper classes. (I recently created a T4 template for this at the day job and, while authoring the template itself was a somewhat annoying exercise, at least I won’t have to write (or, even worse, code review) the wrapper classes themselves.)

Once you’ve got the wrapper classes set up, all that’s left is to actually use them. This is done simply by wrapping our focal instance immediately before registering it with the service container. e.g.:

container.RegisterInstance<IFoo>(new FooDispositionPreventer(new Foo(), container));

To help detect a failure to wrap a disposable instance before registration, I use a custom FxCop rule that looks for RegisterInstance calls where the registered instance is disposable but does not inherit from DispositionPreventer<>.

And that’s pretty much it. The only really hard part of this is implementing the individual disposition wrapper classes. However, if you’re willing to invest a bit in that part of the puzzle, you’ll end up with a container-independent disposition-prevention mechanism that won’t intrude in any way into your container-consuming code. And, in case you missed it, the above mechanism can be made to work with pretty much any nebulous ownership scenario where interfaces are used for implementation decoupling since all you need to do is have the real “owner” pass around disposition-preventing wrappers instead of “naked” instances. If interface-based decoupling is not being used, a subclassing approach is also possible as long as the focal class is unsealed.
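
To make that subclassing variant a bit more concrete, here's a minimal sketch. It assumes an unsealed Foo whose Dispose method is virtual, which is obviously not a given for an arbitrary focal class:

public class SharedFoo : Foo
{
	private readonly ITrackingDisposable _owner;

	public SharedFoo(ITrackingDisposable owner)
	{
		this._owner = owner;
	}

	public override void Dispose()
	{
		// Only allow disposition once the owner itself is being (or has been) disposed.
		if (this._owner.DispositionState != DispositionState.Undisposed)
		{
			base.Dispose();
		}
	}
}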

But it would still be nice if implementing the wrapper classes wasn't quite so much work…

Introducing the Bordecal tools for FxCop

I’ve had publication of a little FxCop rule-building toolkit on my todo list for at least the past three years. I finally got the beastie out the door yesterday evening…

The Bordecal tools for FxCop include two main pieces:

  1. A rule development framework with two main pieces:
    • A testing framework for custom rules, and
    • Rule base classes that allow configuration via code rather than an embedded XML file.
  2. A set of custom rules.

The testing framework is probably the single most important part. For those of you who are already familiar with Roy Osherove’s FxCopUnit, it is important to understand that this is a rather different approach to rule testing. It is focused on targeting entire sample target assemblies with the goal of detecting both false negatives and false positives. It’s also very picky about matching expected to actual results, largely because I believe that the violation details exposed to the rule user are almost as important as detection of the violation itself. That said, I could probably be convinced to make this behaviour configurable in some future version if enough folks vote for the feature.

The rule base classes that form the other part of the development toolkit are motivated primarily by my personal annoyance with the embedded XML configuration approach. The fact that so many beginner rule developers seem to have problems with generating an appropriate embedded XML resource was my main reason for including it in the published toolkit. However, there’s no need to use these base classes in order to use the testing framework since the testing framework uses fxcopcmd.exe and will be able to use any rule that FxCop itself can find.

The custom rules are meant primarily as an example of how to use the development toolkit, as is the rule test project that can be found in the source code. However, I’ve chosen rules that I tend to write and re-write at each day job, so I’m guessing that at least some folks are likely to find them interesting in their own right.

If you wish to report a bug or request a feature for either the development framework or the custom rule set, please use the project issue tracker. Similarly, if you need help using the toolkit, please use the project discussion list so that other users with similar problems may see the answers to your questions even if they do not read this blog.

And now there’s one more disposition post to finish up before I can finally give Bordecal.ImportsSorter some much-needed attention…

Dispose(ohNotThisNonsenseYetAgain)

For some odd reason, I've been authoring quite a few disposable types lately. Under normal circumstances, this would hardly be worth mentioning, but the recommended disposition pattern has been grating on a raw nerve for quite some time now. At best, it feels like a slightly nasty code smell to me. Due to the volume of recent implementations, I'm starting to feel like I'm hanging out next to a sewer. I suspect that I'm starting to slip toward ignoring the guideline in favour of what I would consider a better practice, and I'm hoping that a little bit of venting might postpone that, so here it goes…

Just in case you’re not already familiar with it, here’s an outline of the implementation recommended in the .NET Design Guidelines:

public void Dispose()
{
	this.Dispose(true);
	GC.SuppressFinalize(this);
}

protected virtual void Dispose(bool disposing)	// or private in a sealed class.
{
	if (disposing)
	{	
		// Clean up managed state here.
	}

	// Clean up unmanaged state here.
}

Seems a little odd, given that we don’t ever enter the Dispose(bool) method with a false parameter value, doesn’t it? So why bother? Well, once upon a time, we used finalizers to help clean up unmanaged state. A finalizable type ought to implement IDisposable, and the “Clean up unmanaged state here” bit in the above example should be run from both the finalizer and the Dispose() method. Ideally, we’d like to be able to write something like the following in a finalizable and disposable class:

~TheClass()	// In case you're not familiar with the syntax, this is the finalizer.
{
	// Clean up unmanaged state here.
}

public void Dispose()
{
	// Clean up managed state here.

	this.Finalize();
	GC.SuppressFinalize(this);
}

Unfortunately, our code can’t invoke the finalizer directly, so we can’t call the finalizer from the Dispose() method in order to clean up the unmanaged state. To work around this, we use the Dispose(bool) method shown in the first example above, and we invoke it with a false value from the finalizer:

~TheClass()
{
	this.Dispose(false);
} 

So far, so good. However (very, very big “however”), starting in .NET 2.0, we’re not supposed to write finalizers anymore (or at least not for typical unmanaged resource clean-up). This means that we’ll never call Dispose(bool) with a false argument in the vast majority of our new disposable type implementations. In other words, there’s no practical benefit in implementing the recommended disposition pattern anymore, unless you happen to be working on one of the very rare cases where a finalizer is still required. For the vast majority of cases, where a finalizer is not required, the following would be functionally equivalent:

public virtual void Dispose()	// or non-virtual in a sealed class
{
	// Clean up managed resources here.

	GC.SuppressFinalize(this);
}

Now, let's assume for the moment that you've never heard of the recommended disposition pattern. What might your naïve implementation of IDisposable.Dispose() look like? Probably an awful lot like that last example, except that you would probably omit the GC.SuppressFinalize call (which wouldn't actually matter if your type isn't finalizable, which it shouldn't be anyway). In other words, if the recommended disposition pattern didn't involve a bunch of unnecessary code, naïve implementers would automatically do the right thing. This is known as the "pit of success", and it is generally considered a good thing.

But let’s come back to you, the very well informed disposition author. You’re not naïve. You know how to implement the disposition pattern. Heck, you even have a lovely little snippet. Why should it bother you that the recommended pattern is more verbose than strictly necessary? It’s unfortunate, but every little bit of code we write is an opportunity for error. Every bit of extra complexity (including little things like a senseless redirection or one extra nested if block) increases the probability of introducing a bug and decreases the readability and maintainability of the code. It’s bad enough when things like this appear in one-off code. How much worse when we’re asked to systematically add unnecessary code as part of a recommended implementation pattern?

Would you accept the following in a code review, if the DoSomething(bool) method were never invoked with a false parameter value?

public void DoSomething()
{
	this.DoSomething(true);
}

private void DoSomething(bool allowDoing)
{
	if (allowDoing)
	{
		...
	}
}

If not, why accept a guideline that recommends the same thing? Well, obviously because it's a guideline, and you're a well-mannered developer who cares about exposing APIs that are consistent with what consumers have come to expect from their interactions with core .NET Framework code. But if the fact that it's a guideline is the only reason you're still following the recommended disposition pattern (or if you've already given up on following it), you might want to consider letting the folks responsible for the guideline know that you're not too fond of it…

A statistics collection tool for FxCop backlogs

For the past week or so, I’ve been performing last-minute documentation tasks at the day job in preparation for the impending maternity leave. If you’ve been reading this blog because of my FxCop posts, then you might be interested in my latest post on the FinRad blog, which covers the statistics collection tool that we developed internally to help us track our FxCop backlog cleanup efforts. A copy of the tool is also available on CodePlex if you would like to try it out for yourself.

Absence makes the heart grow fonder?

So… Umm… It’s been a while. A little over six months, to be specific. That would be three months of unrelenting nausea and surprisingly heavy fatigue, followed by three months of desperately trying to catch up on all the “real world” stuff that didn’t get done during those first three months. But now it’s winter, and I’m looking forward to three months of not skiing, so there should be plenty of time for ‘putering (at least once the pesky December holiday season is over), and care and feeding of this blog thing will slowly resume…


For today, I figured I’d address some of the correspondence I’ve shamefully ignored since June. Comments have been moderated and, where appropriate, answered. For those of you who sent me e-mails, here are some replies:



  1. You’re quite correct: a single-line textbox is actually attribute-encoded. Why I didn’t pick up that error in the table while I was working on the double-encoding example is a bit of a mystery…


  2. I do indeed have a solution for automatic HTML-encoding that avoids the double-encoding problem. I actually started working on a framework and series of blogs posts about a week after writing the HTML-encoding post, but I got busy with other stuff and never ended up publishing anything on the topic. Now that I know that at least some folks are interested, I’ll try to find some time to update the framework and explain how it works and how to use it.


  3. A version of Bordecal.ImportsSorter that works with Visual Studio 2008 will be coming soon-ish. It ranks right below work, holiday preparations, and future offspring concerns on my to-do list. (Hmm… That probably doesn’t sound like I’m likely to find much time to work on it any time soon. However, please rest assured that I want this new version at least as much as you do, and I’m highly motivated to work on it whenever I have the time and energy.)

My apologies to those of you who have been waiting far too long for those answers. I’ll try to do better over the next little while (or at least until I drop out of sight again in about three months)…



P.S.: In case you haven't figured it out yet and are worried about my health, please don't be. I do have an abdominal parasite, but it's mammalian and quite benign. 😉 For those of you who are curious, it's a girl, and she's due on February 29th.

Some FxCop rules for VB

We’ve got a rather large VB code base at FinRad, at least some of which we would like to migrate to C# in the nearish future. Because of this, we have created several custom rules that are intended to detect issues in VB code that would cause problems when migrating the code to C#. However, while attempting to assign categories for those custom rules, we quickly realized that each of them had merits besides portability of the code base, and that we wanted our assemblies to abide by these rules regardless of whether or not they might eventually be ported to C#.


I hadn't been planning on writing a post about these rules for some time to come, but then I saw the announcement last week that Microsoft is using VB to build the Silverlight JavaScript compiler and VBx, which seems to have boosted the topic's priority on my internal TODO list enough that I can't quite seem to ignore it…



Rule #1: Interfaces should not contain nested types (Design)


By definition, interfaces should not contain any implementation, and a nested type would certainly qualify as implementation. Unfortunately, the VB compiler generates no warnings or errors when any type whatsoever is nested within an interface. For example, the following interface will compile just fine (and FxCop won’t flag it as an “empty” interface either):

Public Interface ISomething

	Enum Animal
		Cat
		Dog
		Fish
	End Enum

	Class SomeClass
		Public Sub DoSomething()
			Console.WriteLine("something")
		End Sub
	End Class

End Interface


To make matters worse, the VB compiler will also generate a type within an interface under certain conditions. Consider, for example, the following interface:

Public Interface ISomethingElse

	Event SomethingDone(ByVal sender As Object, ByVal e As EventArgs)

End Interface


When an event is declared using the above syntax, the VB compiler will generate a delegate for the event signature within the same parent type. The resulting compiled assembly actually ends up containing the following interface definition, which includes a nested type:

Public Interface ISomethingElse

	Delegate Sub SomethingDoneEventHandler(ByVal sender As Object, ByVal e As EventArgs)

	Event SomethingDone As SomethingDoneEventHandler

End Interface



Rule #2: Properties should not have arguments (Design)


In order to be easily invokable from C#, a property should never have more than one argument, and only indexers (i.e.: properties named "Item" in VB) should have even one argument. If either of these conventions is ignored, C# code is going to have to use syntax like get_SomeProperty(a, b, c) to use your properties, which is considerably less readable than some of the alternatives. Do you really want to force your API's callers to write ugly code?
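
As a hedged illustration (the report type and its members below are made up), here's what the two cases look like from the C# side:

// Parameterized VB property that isn't the default property: C# callers are stuck with the raw accessor methods.
decimal q1Sales = report.get_Sales("Q1", 2007);
report.set_Sales("Q1", 2007, 1234.56m);

// Default property named Item: C# callers get ordinary indexer syntax instead.
decimal q1SalesAgain = report["Q1", 2007];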



Rule #3: Member names should not match type names (Naming)


C# will generate compiler error CS0542 if you try to create a member with the same name as its parent type, but VB won't complain about this at all. The use of a member with the same name as the type to signify a constructor in C# presumably explains this difference in compiler behaviour. However, there's an API usability problem here as well since the significance of a member (usually a property) with the same name as its parent class is rarely immediately obvious. For example, imagine a class named "Label" with a property named "Label". What exactly does "Label.Label" represent? In such a case, either the class or the member probably ought to be renamed to convey more information about its role.



Rule #4: Review call scope forcing (Usage)


VB allows for forcing the scope of invocation of a virtual member to the current class via the MyClass keyword. Both VB and C# allow for forcing the scope of invocation of a virtual member to the base class (via MyBase and base, respectively). Since call scope forcing isn’t really a VB-specific capability, this isn’t really a VB-specific rule. However, eliminating MyClass use was what provoked its development at FinRad, so I’m including it here anyway.


In case you’re wondering why call scope forcing should be reviewed, consider why a member would be declared as virtual in the first place: it’s meant to be overrideable. If a subclass has overridden a virtual method, why should a base class be deliberately avoiding invoking that override? Obviously, there are cases where this behaviour is desirable, which is why this is a “review” rule rather than a “do not” rule, but there are also plenty of cases where scope forcing is likely to be used inappropriately, particularly when developers aren’t aware of the consequences.


If you want to implement this rule, there are a few things to watch out for:



  • Within the implementation of an overriding, overloading, or hiding member, it is quite common to invoke the base class implementation explicitly. In order to avoid undesirable noise in the rule analysis results, these cases should not be flagged as problems by the rule.

  • Because of VB interface member implementation renaming, the name of an overload will not necessarily be the same as the name of the member it is overloading.

  • The C# compiler will force a non-virtual call when invoking a non-virtual base class member even if your code uses this rather than base to scope the call. The VB compiler will retain the virtual call.

  • FxCop 1.35 and Visual Studio 2005 Static Analysis both have problems detecting whether a property is overriding and/or hiding, although the exact behaviour differs between the two engines, as well as between .NET 1.1 and 2.0 assemblies for FxCop 1.35. For more details on this issue (which appears to have been resolved in Orcas), see this bug report.

FxCop backlog tools: FxCop

If you’re considering tackling an FxCop backlog, you’re going to need a few tools. Obviously, the most important of these is FxCop itself, but that still leaves you with a potential choice to make between stand-alone FxCop and Visual Studio Static Analysis. If your target code base is built against .NET 1.1 or if you’re not willing/able to shell out for Team Edition for Software Developers or Team Suite, the decision is pretty trivial. However, what if all your developers already have Visual Studio 2005 editions that include static analysis? You’ve paid for it, so your first instinct will probably be to use it. Unfortunately, it’s probably not the best tool for this particular job.


While VStudio Static Analysis is great for dealing with a small volume of problems that should be addressed immediately, it kind of sucks the big wazoo with respect to handling the much larger volume of violations that one needs to deal with while tackling a backlog. This is partly a UI issue, and it shouldn't come as much of a surprise given that the backlog scenario probably wasn't exactly a major priority during the development of the VStudio integration piece. However, there are also some issues that go beyond the limited capabilities of the Visual Studio Error List window, and it's important to be aware of these as you make your tooling decisions.


In order to help understand why the stand-alone tool will likely be a better choice, let’s take a closer look at some of the tasks in the backlog winnowing process…



Fix this


You’ve decided to activate a given rule. Your code base has 400 existing violations of the rule, and you want your team to start fixing those right away. However, you certainly don’t want to have to specifically assign each and every fix to a given developer. How do you create a “pool” of violations from which developers can draw?


If you're using Static Analysis and you also happen to be using Team Foundation Server, you might want to create work items for the violations, then let developers assign the work items to themselves before starting to work on the fixes. However, from the VStudio Error List, you're going to be able to generate work items in only one of two ways:



  • Select all the violations, then generate a single work item, or

  • Generate a work item for each violation individually.

Generating the individual work items would be ridiculously time consuming and tedious, so a single work item would usually be the only practical choice. However, in order to ensure that more than one developer isn't working on any given violation at the same time, there's still going to have to be some manual work in terms of copying and editing work items before a developer will even start fixing a violation. Unfortunately, this procedural overhead doesn't really have much added value besides preventing overlapping work, and the additional effort may end up accounting for a very large proportion of the time spent on any given fix. Wouldn't you rather use that time for fixing other violations instead? (Of course, you could create a tool that automatically creates individual work items for you based on an FxCop output report, but that's again time that might be better spent doing other things.)


Another problem with adding backlog work items to TFS from VStudio Static Analysis results is that they're going to end up in the same TFS project as all your "real" work items for the same code base. Assuming your Static Analysis backlog is non-trivial, these backlog work items will start to swamp out your other feature and bug fix work items pretty quickly. Sure, you can isolate them in a category of their own, but sooner or later someone is going to include those beasties in a report where they shouldn't belong, and your stats are going to be skewed halfway to heck.


OK, so if those are problems with using Static Analysis for violation reservation, what can the cheapskate’s standalone edition do better? For starters, we can mark all the violations for fixing in one step without requiring later splitting. After activating a new rule and executing an analysis, simply select all the newly active violations of the rule and exclude them with a “TODO” note. If you’re feeling fancy, you can even add a priority level (e.g.: “TODO 1” or “TODO 2”). After excluding the pre-existing violations, simply check the FxCop project into source control, and all the developers on your team can instantly access the items.


Under the “TODO” exclusion scheme, it’s equally easy for any given developer to grab himself a chunk of new backlog work. He simply selects a bunch of violations from those that still have “TODO” notes, adds a new note (in one step) to claim all of them, then checks the modified FxCop project back into source control. What’s this new note that gets added for the claiming? At FinRad, we simply add our initials to the note. For example, I claim violations by adding the note “TODO – NC” to an unclaimed TODO exclusion. If you’ve got a bigger team, you might need to identify developers by somewhat longer strings than their initials, but the underlying technique should still work.


Using stand-alone FxCop and TODO exclusions, I can both create and reserve violation “work items” in seconds, so I can spend more time on actually fixing violations rather than on managing the workload itself.



Don’t break that


Identifying the current workload is only part of managing the violation backlog. Another important part is ensuring that the backlog violations that have not yet been fixed don’t break your builds (assuming, of course, that you’re using automated builds and that you’ve set FxCop rule violations to break those builds). If you’re using VStudio Static Analysis, you’ve got two basic options for doing this:



  • Set the rule to not break the build until after all the backlog violations are fixed or

  • Use SuppressMessageAttribute to exclude the pre-existing violations.

If you don’t allow violations of the newly activated rule to break the build, you won’t start detecting new violations of the rule until after you’ve cleaned up the entire backlog, and that’s not pretty. On the other hand, if you add a SuppressMessageAttribute to any given pre-existing violation, that violation will no longer show up in the Error List, so it’s going to start being difficult for your team members to find and fix the problem.


Another problem with the SuppressMessageAttribute exclusions approach is that VStudio Static Analysis doesn’t let you know when you don’t need a suppression anymore.1 This means that if you fix the original violation but forget to remove the attribute, you could later create another violation of the same rule in the same code element, and you’ll never notice it because the old placeholder exclusion will hide the new problem. That’s a nasty enough problem even when you have a small number of “real” exclusions, but it’s asking for big trouble when you consider the very large number of suppressions that you would need to create for your backlog violations.
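
For reference, the suppressions in question look something like the following (the rule and method shown here are just examples). Remember that System.Diagnostics.CodeAnalysis.SuppressMessageAttribute is conditional on the CODE_ANALYSIS compilation symbol, so the suppressions only make it into builds that define it:

[SuppressMessage("Microsoft.Performance", "CA1804:RemoveUnusedLocals",
	Justification = "TODO: pre-existing backlog violation.")]
private void SomeMethodWithABacklogViolation()
{
	// ...
}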


This is another problem for which a much cleaner, simpler solution is possible when using the FxCop stand-alone UI. In fact, we've already covered the approach above. Since we create a "TODO" exclusion for every violation in the immediate backlog, we can keep the rule activated and build-breaking while still readily identifying the backlog violations. The stand-alone UI also allows us to detect exclusions that are no longer needed, so we can remove the unnecessary exclusions in batches even if individual developers forget to remove them when fixing a violation.


On a bit of a side note, if you happen to be running a backlog cleanup of a .NET 2.0 application, you will probably want to store only your TODO exclusions in the FxCop project file. If you happen to resolve a violation by creating a “real” exclusion, that exclusion should probably be created with SuppressMessageAttribute rather than in the FxCop project file. After all, one of these days, you’ll actually finish tackling the backlog, and you may want to switch over to using VStudio Static Analysis for finding and addressing the piddly few rule violations you’ll be generating in your day-to-day work.



Sorting, filtering, and other fancy-pants advanced techniques


So let's say you're a developer who is ready to give an hour or so to fixing some of the backlog problems. How do you select which ones to work on? If you simply grab a random assortment, you'll probably end up working on violations of a variety of different rules that are widely separated with respect to location in the source code. You'll very likely end up wasting time on context switching that could have been put to much better use actually fixing violations.


As a more concrete example, let's say you decide that you have time to fix about 30 violations of the Performance.RemoveUnusedLocals rule. Trivial as these might seem, chances are that you'll find that somewhere between 10% and 20% of the violations actually require some decisions and maybe even a bit of code spelunking. This is why you won't be tackling 3 or 4 violations per minute, but it also has another consequence. If you're going to be encountering "weird" violations, wouldn't it be nice if they were mostly of the same style? One way of trying to increase the probability of this would be to draw the violations from a smallish area of the code base, where you'll at least stand a reasonable chance of encountering mostly code authored by a single developer.


While selecting violations using the above pattern is certainly possible from the VStudio Error List, it’s not terribly convenient, and it’s certainly not something that most of your developers are likely to enjoy doing several times a week. Developer patience with the selection process is likely to fray even faster if you’re using TFS to store the backlog work items. To make things worse, you’ll again be wasting time that could be better spent on actually fixing violations.


On the other hand, making the same issue selection in the stand-alone FxCop UI is trivial. First, I select the Performance.RemoveUnusedLocals rule in the rule treeview to hide violations of any other rule. Then I sort the “Excluded In Project” list by the item column. Since I can see availability of items in the “Last Note” column, all I have to do is select a bunch of violations that are clustered close together by item path in order to increase the chances of similar types of weirdness in the problems that I’m going to address.2



But I don’t like the extra step involved in running FxCop…


Let's step back a moment and look at the usual reason for preferring VStudio Static Analysis: it runs seamlessly as part of the build process. Even ignoring the backlog problem, there's another nasty gotcha here: the duration of a static analysis run. It takes over 3 minutes to analyze the assemblies in the product on which I work day-to-day. Given this, I'm certainly not going to run an analysis as part of my usual debug builds, whether it's via the integrated Static Analysis tool or by launching FxCop via a post-build event. If I really want the analysis to be associated with a build, I'm going to have to create an alternate build configuration at the very least.


This means that, even if I’m targeting the integrated tool, I’m still going to be launching analyses via a conscious, at least partially manual step. With one little extra step (clicking the “Analyze” button within the FxCop UI), I can run the analysis in the richer, otherwise more convenient stand-alone UI instead. To make this rapid launch possible, I simply add my FxCop project file to my solution as a solution item, then configure Visual Studio to open FxCop files using FxCop.exe. Once this initial setup is done, all I need to do to run my FxCop analysis is open the FxCop project from within Visual Studio then click the “Analyze” button once the FxCop UI has opened (to watch a 700K demo video, click here). That’s amply convenient for me, although I guess your mileage may vary (but only if you’re a complete nut <gdr>).



But that Static Analysis thingy cost me 5000$ (per seat) !!!


If you’ve bought one of the Team System editions that includes static analysis, I very much hope that you’re taking full advantage of it for your new development work. I also hope that static analysis isn’t the only “extra” feature that you’re using, or perhaps you did spend rather a lot for one feature. 😉 Either way, the fact that you happen to have the integrated static analysis tool available to you shouldn’t be an important factor in making a decision regarding which FxCop tool to use for your backlog cleanup project.


On the other hand, the integrated tool ships with some rules that are not available in the free version of FxCop, and you might be swayed toward using that tool in order to take advantage of the additional rules, particularly given that FxCop 1.35 and Visual Studio 2005 Static Analysis cannot consume each other's rules. However, what you may not know is that FxCop 1.35 also shipped with some rules that aren't present in the integrated static analysis rule set3. The following table lists the rule set differences:

Category         Only in FxCop 1.35                                  Only in Visual Studio 2005
Design           ExceptionsShouldBePublic
Maintainability                                                      AvoidExcessiveComplexity
                                                                     AvoidExcessiveInheritance
                                                                     VariableNamesShouldNotMatchFieldNames
Naming           CompoundWordsShouldBeCasedCorrectly
                 FlagsEnumsShouldHavePluralNames
                 IdentifiersShouldBeSpelledCorrectly
                 OnlyFlagsEnumsShouldHavePluralNames
                 ResourceStringCompoundWordsShouldBeCasedCorrectly
                 ResourceStringsShouldBeSpelledCorrectly
Performance      RemoveEmptyFinalizers
Reliability                                                          DisposeObjectsBeforeLosingScope
                                                                     DoNotLockOnObjectsWithWeakIdentity
                                                                     DoNotTreatFibersAsThreads
                                                                     RemoveCallsToGCKeepAlive
                                                                     UseSafeHandleToEncapsulateNativeResources
Security         CatchNonClsCompliantExceptionsInGeneralHandlers
                 SecurityTransparentAssembliesShouldNotContainSecurityCriticalCode
                 SecurityTransparentCodeShouldNotAssert
                 SecurityTransparentCodeShouldNotReferenceNonpublicSecurityCriticalCode
Usage            AttributeStringLiteralsShouldParseCorrectly
                 LiteralsShouldBeSpelledCorrectly
                 TestForNaNCorrectly

If you’re seeking to maximize your rule coverage, jumping on static analysis because it supposedly contains more rules isn’t exactly looking like an optimal approach… If you want to cover the combined rule set offered by the two tools, I’d recommend using FxCop 1.35 for your backlog cleanup until you’ve exhausted all of its rules. Once that’s done, you would potentially be faced with a decision of switching over to the integrated tool for the remaining rules or coding custom rules for FxCop 1.35 that check for the same problems as the additional rules shipped with Visual Studio.


Then again, is that really a decision you’re likely to end up facing? Assuming you start a backlog cleanup now, when do you think you’re likely to be almost done? Unless your backlog is trivial, chances are excellent that both Orcas and FxCop vNext will have shipped by the time you’re even half way done. Personally, I’m hoping that Orcas static analysis and FxCop vNext will be able to consume each other’s rules, thereby making it possible to use the richer UI when addressing a backlog of the Visual Studio rules, as well as easing some of the pain around custom rule authoring. There’s no technical reason why they shouldn’t be able to consume the same rules, but that doesn’t mean that a difference couldn’t be introduced explicitly in order to prevent using the expensive rules from the free tool. I’m keeping my fingers crossed that this won’t happen, although I won’t start counting any chickens until Orcas and FxCop vNext have both RTMed…





1If you wish VStudio Static Analysis and/or FxCopCmd.exe would let you know when you've got unnecessary suppressions, vote here.


2If FxCop wouldn't "lose" the rule selection when switching between the rules and targets treeviews, it would be possible to make an even more convenient selection by "climbing" the target treeview from the target associated with any given violation. Unfortunately, the rule selection is lost when switching to the other treeview tab in FxCop 1.35. I'm keeping my fingers crossed that this won't be the case in the vNext UI, but there seems little point in creating a bug report until I've actually tried the beastie. I'm also hoping that the vNext UI will allow filtering by column values within the violation lists, which would also make working with large backlog lists more convenient, but I won't be holding my breath for that change.


3In case you’re not already aware of this, the Visual Studio rule set is supposed to be a superset of the stand-alone FxCop rule set. FxCop 1.35 only contains rules that aren’t present in Visual Studio 2005 because of time constraints during the preparation of Visual Studio 2005. (Sorry, but the best reference I seem to be able to find regarding this at the moment is a post on the FxCop forum.) The extra FxCop 1.35 rules are present in Orcas, with the exception of Usage.LiteralsShouldBeSpelledCorrectly, which is a control flow rule.

The magically disappearing start page

Off on a bit of a tangent this morning…


I have a bit of a love/hate relationship with the Visual Studio start page. I do find it useful enough when I first open VStudio. However, once I start working on anything, the mere presence of the start page tab in the IDE main window is really, really irritating to me for some reason. Of course, given that even one little using directive out of place also drives me to distraction, this probably isn’t too surprising.


At any rate, the other day, I decided that it was high time to do a little something about that start page annoyance. At some point in the medium-ish future, I might turn Bordecal.ImportsSorter into Bordecal.ToolsForObsessiveCompulsiveNitPickers and throw in a start page closer. In the meantime, here's a VStudio 2005 macro to close the start page as soon as any document is loaded in the IDE.


For those of you who are interested in using the macro, here are a couple of picky details:



  1. The start page will be closed when any document is opened, regardless of how it was opened. If you open a solution that has open documents, that’s good enough to trigger closing of the start page.

  2. The start page will only be closed once. If you reload the start page manually, the macro won’t close it on you.

The above are completely intentional features–that’s exactly how I personally prefer the start page to be closed. If you would prefer some alternate behaviour, please feel free to ask. I’m quite willing to consider adding customization of the behaviour when I eventually get around to adding this stuff to an add-in…

FxCop backlog themes: Disposition and finalization

I skipped ahead a while back with my post on the exceptions theme, and it’s time to get back on track with stuff that would usually precede that rather involved topic during an FxCop backlog cleanup project.


The good news with the disposition and finalization topic is that its scope is quite a bit narrower than the exceptions topic. The bad news is that you’ll probably see almost as many preconceptions regarding how it works, particularly if you have quite a few developers who are accustomed to deterministic finalization. As with the exceptions topic, it’s probably a pretty good idea to start the topic from scratch, making the assumption that most of the developers on your team will get at least something from coverage of even the fundamentals of the subject matter.



Resources


This time around, Chris Brumme’s post on finalization is absolutely required reading, at least for the topic trainer. If you don’t understand at least 90% of that article, you probably don’t know enough about finalization to be teaching the topic.


You'll also need some resources that are more on the "how to" side of things. The MSDN articles Implementing Finalize and Dispose to Clean Up Unmanaged Resources and Implementing a Dispose Method provide a reasonable starting point around guidelines for finalization and disposition. By the time you complete your training on finalization and disposition, all the developers on your team should be able to read and understand these articles.



Some gotchas


By and large, the documentation available is pretty clear, and the recommended practices are relatively unambiguous. However, we did run into a few picky detail issues while tackling the disposition and finalization theme at FinRad:



  1. In a disposable class that contains a collection of disposable items, how does one deal with exceptions that might be thrown during the disposition of any individual item?
    The issue here is that one wants to continue disposing of any remaining items even if an exception is thrown while disposing of any given item. However, one still wants to throw an exception out of the parent class Dispose method. The question here is exactly what should one end up throwing? One answer is to store any exceptions into some custom exception that contains an exception collection, then throw the resulting beastie at the end of the process. Unfortunately, there are several problems with this approach, not least of which is that the original Dispose method caller isn't likely to want to iterate over the exception collection trying to decide what to do about each and every one of the member exceptions.

    What we ended up deciding to do instead was to treat disposition of each individual collection item as if we were disposing a field on the parent class. In other words, we wanted to expose the same exception scenario that would be seen if we were able to code the parent Dispose method like this:
    try
    {
    	this._someDisposableField.Dispose();
    }
    finally
    {
    	try
    	{
    		this._someOtherDisposableField.Dispose();
    	}
    	finally
    	{
    		try
    		{
    			this._collectionOfDisposables[0].Dispose();
    		}
    		finally
    		{
    			try
    			{
    				this._collectionOfDisposables[1].Dispose();
    			}
    			finally
    			{

    			}
    		}
    	}
    }
    If one were able to do this, the only exception the caller of the parent class Dispose method would see is the last exception thrown in any of the individual try blocks. This is quite easy to simulate even while disposing in a loop. e.g.:
    try
    {
    	this._someDisposableField.Dispose();
    }
    finally
    {
    	try
    	{
    		this._someOtherDisposableField.Dispose();
    	}
    	finally
    	{
    		try
    		{
    			Exception lastException = null;
    			foreach (IDisposable item in this._collectionOfDisposables)
    			{
    				try
    				{
    					item.Dispose();
    				}
    				catch (Exception ex)
    				{
    					lastException = ex;
    				}
    			}

    			if (lastException != null) throw lastException;
    		}
    		finally
    		{
    			GC.SuppressFinalize(this);
    		}
    	}
    }

    Is this ideal? Obviously not, but I happen to believe that it presents fewer problems than any of the alternatives. If you think you have a better approach, I'd love to hear about it.

  2. What exactly is an unmanaged resource?
    The basic rules for implementing disposition and finalization are the following:

    • If a class has a disposable field, it should be disposed from the Dispose method of the parent class.

    • If a class has a field that represents an unmanaged resource, this should be released from both the Dispose method and the finalizer of the parent class.
    The first rule is pretty unambiguous, but what exactly qualifies as an "unmanaged resource" that should be released both on disposition and finalization of the parent class? Most of the articles on disposition and finalization make some airy-fairy mention of things like database connections and file handles, but does that mean a SqlConnection should be disposed in the parent class finalizer? The simple answer is "no". The barely more complicated answer is that pretty much the only things you should be cleaning up during finalization are IntPtr fields. If you've gone beyond IntPtr to start using SafeHandle in .NET 2.0, you don't even need to worry about those since the unmanaged handle is already wrapped in a disposable, finalizable managed object. (For a concrete illustration of the IntPtr case, see the sketch after this list.)

  3. There’s no framework design guideline with respect to interface inheritance from IDisposable, but there probably ought to be.
    Interfaces that are likely to be implemented by types that will need to be disposable should themselves be disposable. This is because you don’t want to end up having to write code that looks like this:
    IShouldBeDisposableButIsNot thingy = Factory.CreateThingammy();
    try
    {
    	//…
    }
    finally
    {
    	IDisposable disposableThingy = thingy as IDisposable;
    	if (disposableThingy != null) disposableThingy.Dispose();
    }
    Instead, you want to be able to simply write code like this:
    using (IShouldBeDisposableAndIs thingy = Factory.CreateThingammy())
    {
    	//…
    }
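
To put some flesh on gotcha #2, here's a minimal sketch of a type that genuinely needs a finalizer because it directly owns an IntPtr-based handle (the usual System and System.Runtime.InteropServices using directives are assumed). In .NET 2.0, wrapping the handle in a SafeHandle subclass instead would make both the finalizer and the IntPtr bookkeeping unnecessary:

internal sealed class NativeEventHolder : IDisposable
{
	private IntPtr _handle;	// Unmanaged state: must be released from both Dispose() and the finalizer.

	public NativeEventHolder()
	{
		this._handle = NativeMethods.CreateEvent(IntPtr.Zero, true, false, null);
	}

	~NativeEventHolder()
	{
		this.ReleaseHandle();
	}

	public void Dispose()
	{
		this.ReleaseHandle();
		GC.SuppressFinalize(this);
	}

	private void ReleaseHandle()
	{
		if (this._handle != IntPtr.Zero)
		{
			NativeMethods.CloseHandle(this._handle);
			this._handle = IntPtr.Zero;
		}
	}

	private static class NativeMethods
	{
		[DllImport("kernel32.dll", SetLastError = true)]
		internal static extern IntPtr CreateEvent(IntPtr lpEventAttributes, bool bManualReset, bool bInitialState, string lpName);

		[DllImport("kernel32.dll", SetLastError = true)]
		[return: MarshalAs(UnmanagedType.Bool)]
		internal static extern bool CloseHandle(IntPtr hObject);
	}
}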


The built-in rules


Here are the built-in rules that you'll want to start activating during the disposition and finalization theme:

Category     Rule                                            In FxCop 1.35?   In VStudio 2005?
Design       ImplementIDisposableCorrectly                   x                x
             TypesThatOwnDisposableFieldsShouldBeDisposable  x                x
             TypesThatOwnNativeResourcesShouldBeDisposable   x
Performance  DisposeMethodsShouldCallSuppressFinalize        x                x
             RemoveEmptyFinalizers                           x
Reliability  DisposeObjectsBeforeLosingScope                                  x
Usage        DisposableFieldsShouldBeDisposed                x                x
             DisposableTypesShouldDeclareFinalizer           x                x
             DisposeMethodsShouldCallBaseClassDispose        x                x
             DoNotDisposeObjectsMultipleTimes                x
             FinalizersShouldBeProtected                     x                x
             FinalizersShouldCallBaseClassFinalizer          x                x


Some custom rule ideas


We haven’t created any custom rules yet around finalization and disposition at FinRad, but there are at least a couple that I would like to implement eventually:



  • IDisposable.Dispose should be called from within a finally block (and the only other code in the finally block should be a cast to IDisposable)

  • An interface with at least one disposable implementation should itself be disposable

That last one is bound to be interesting to try to author, but I’ve got loads of time to figure out how to implement it since there are quite a few more important rules to tackle first…

FxCop is case-sensitive. VB is annoying.

With profound apologies to VB lovers, there are a few features of the VB compiler that occasionally make me want to start drinking at a very early hour of the day. Perhaps the most troublesome of these is its failure to start screaming bloody murder upon detection of namespaces that differ only by case, even while compiling an assembly that is marked as CLS-compliant. VB 2005 is at least kind enough to preserve the casing from the source code. However, VB 2003 seems to more or less randomly pick one case variant and apply it throughout an assembly, and that leads to all sorts of fun and games.


If you've developed mixed VB and C# solutions, chances are pretty good that this isn't exactly news to you since C# tends to get mighty unhappy about the resulting changes in namespace casing. However, you might be blissfully unaware that FxCop is also case-sensitive, and that has its own special set of consequences…


At FinRad, we’re still supporting applications that run against .NET 1.1 (and will be for at least a couple more years), and a very large part of that code base is written in VB 2003. Given that this is .NET 1.1 code, we’re using the stand-alone version of FxCop for static analysis, and exclusions are stored in the FxCop project file, not as SuppressMessageAttribute annotations in the source code.


Given that this also happens to be the same code base covered by the bulk of our backlog cleanup efforts, the FxCop project files contain an unusually large number of exclusions. (I’ll write about the reasons for this soon. For now, just trust me: lots more violation exclusions than you might normally expect.) Every now and then, we would end up with a situation where a sizeable chunk of our exclusions would suspiciously disappear, and all the violations they covered would auto-magically become active again. Needless to say, this was terribly annoying, but we chalked it up to gremlins, manually re-created the exclusions, and forged bravely ahead.


A few weeks ago, we hit a really weird problem where an automated build running on an integration server was detecting active FxCop rule violations that couldn’t be duplicated on any developer machines. Several of us tried clean check outs, rebuilding, etc., but without any success at reproducing the problems seen in the automated build. In retrospect, it seems painfully obvious, but the namespace casing problem (which also tended to raise its ugly head in a Visual Studio vs. automated build disjunction with respect to C# callers) didn’t occur to me until I was stuck in traffic that evening. It was the first thing I checked the next morning and, sure enough, the exclusions were registered under the “Something” namespace, but the classes compiled in the automated build now fell under the “SomeThing” namespace.


Some lessons learned:



  • FxCop is case-sensitive.

  • The gremlins actually lived in the VB compiler. (Luckily, they’ve been evicted, but the nasty smells are lingering.)

  • Giving an “aha!” smack to one’s forehead while stuck in traffic will earn some very concerned looks from drivers of neighbouring vehicles.