A few months ago, I gave a presentation on using FxCop at the Montreal Visual Studio Users Group. The material was divided into two main topics: (a) the mechanics of using FxCop and (b) integrating FxCop use into a development process. During the first part of the talk, some members of the peanut gallery kept piping up with questions about what one can do to handle the huge number of FxCop rule violations that an existing code base will have when one first runs FxCop against it. Lucky for me, most of the second part of the talk covered exactly that problem, and I managed to finish the evening without having any rotten produce lobbed in my direction. However, their questions did confirm my suspicion that dealing with the backlog problem is a topic of potential interest to folks, so it’s one I might as well spend a bit of time writing about…
No backlog? No problem.
Before tackling the hard part of the backlog problem, let’s look at the easy part. If you’re starting a new project, the easiest way to handle the backlog issue is to never develop a backlog in the first place. The way to do this is to start running FxCop with all rules activated from the moment you first set up your project(s) and to fix problems (or flag true exclusions) before each and every check-in.
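To make "flag true exclusions" concrete: recent FxCop versions support in-source exclusions via the SuppressMessage attribute, which lets the justification travel with the code it covers. Here's a sketch; the class, method, and justification are invented for illustration, but CA1704 is a real FxCop naming rule.

```csharp
// Build with the CODE_ANALYSIS symbol defined so the (conditional)
// SuppressMessage attribute is actually emitted into the assembly.
#define CODE_ANALYSIS
using System.Diagnostics.CodeAnalysis;

public static class RuleLoader
{
    // A "true" exclusion: the violation is deliberate, and the
    // justification is recorded right where the reviewer will see it.
    [SuppressMessage("Microsoft.Naming",
        "CA1704:IdentifiersShouldBeSpelledCorrectly",
        MessageId = "Fx",
        Justification = "'Fx' is an accepted abbreviation in this product.")]
    public static void LoadFxRules()
    {
        // ... (body omitted; only the exclusion pattern matters here) ...
    }
}
```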
This may seem like a lot of work and, to be honest, it will almost certainly add noticeable overhead at the very beginning, but the additional effort required will probably taper off more quickly than you might expect. Within a week or two, most individual developers should be over the learning curve “hump” with respect to the FxCop tool itself. What might surprise you is that it probably won’t take much longer for most developers to become familiar with the rules that touch their own work and to stop introducing new violations of those rules. This will, of course, vary depending on the spectrum of tasks that any given developer performs, but most developers in anything but the very smallest shops are likely working on fairly similar tasks from one week to the next.
Another issue on the “no backlog” side of things is that you might need to occasionally create temporary exclusions (e.g.: when you’ve added the first class to a namespace, but the others that would eventually prevent Design.AvoidNamespacesWithFewTypes from firing don’t yet exist), and allowing these to accumulate could also create a backlog problem. The way to prevent this is to avoid them where possible and clean them up as soon as possible when their creation can’t be avoided. I would recommend creating a project work item for removing any given temporary exclusion immediately before adding the exclusion itself. A final check that they have all been removed should be part of a pre-release checklist, and the release in question should be the earliest you can isolate (e.g.: internal release to QA).
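As a sketch of what such a temporary exclusion might look like at module scope (the namespace name and work item number below are invented, though CA1020 is the real check ID for Design.AvoidNamespacesWithFewTypes):

```csharp
// Build with CODE_ANALYSIS defined so the attribute is emitted.
#define CODE_ANALYSIS
using System.Diagnostics.CodeAnalysis;

// Temporary exclusion while the hypothetical "MyCompany.Reporting"
// namespace contains only its first class. A matching work item for
// removing this exclusion should already exist before it is added.
[module: SuppressMessage("Microsoft.Design",
    "CA1020:AvoidNamespacesWithFewTypes",
    Scope = "namespace",
    Target = "MyCompany.Reporting",
    Justification = "TODO: remove once the remaining Reporting types exist (work item #123, hypothetical).")]
```

Keeping the "TODO" marker in the justification also makes these temporary exclusions easy to find when running the pre-release check.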
And now for the hard stuff…
If, like many folks with a medium-to-large existing code base, you’ve run FxCop and been overwhelmed by the thousands or tens of thousands of violations that it spits out, what can you possibly do? The most common reaction seems to be essentially giving up on the idea of using FxCop for that product. That’s unfortunate, partly because the existing problems will go unaddressed until they manifest as user-noticeable bugs, but also because the product will continue to accumulate new problems as development progresses.
Another path is to disregard the existing backlog and follow the “no backlog” approach described above for new development. Since the FxCop team has produced a tool that facilitates this approach, I presume that it is a relatively common choice. Unfortunately, while it does avoid the problem of accumulating new violations, it doesn’t address the existing backlog particularly well.
So… Can one do something to both prevent new problems and clean up the old problems? At FinRad, we started a process in late September to do just that. One of the things that was immediately obvious was that “hiding” the existing backlog wasn’t likely to be terribly helpful if our goal was to eventually clean it all up. Part of the problem is that folks would probably become rapidly inured to the large volume of existing exclusions. Another, potentially more important, issue is that a big backlog is just plain too depressing.
Rather than activating all rules at the beginning and excluding all existing rule violations (even temporarily), we decided to adopt a process whereby rules are activated one-by-one. Once a rule is activated, its existing violations are added to the immediate backlog and nobody is allowed to introduce new violations. In addition, we decided to provide training on the newly activated rules rather than expecting individual developers to research the reasons behind each rule and the approaches to fixing its violations on their own. The basic outline of this process is:
- Every week, a new set of rules to activate is selected.
- We have a one-hour training session on the new rules. This training includes:
  - any background information that is required to understand the rules,
  - the specific problems that the rules are meant to address,
  - recommended approaches to fixing rule violations, and
  - reasons for which it is acceptable to create exclusions for violations.
- Immediately after the training session, the new rules are activated, and existing violations are added to the backlog by creating “TODO” exclusions.
- As of the activation of a rule, no new violations are accepted.
- Each developer is supposed to spend two hours per week fixing rule violations from the backlog.
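To give a flavour of what those two weekly hours look like, here’s an invented before/after for one of the simpler rules, Design.DoNotCatchGeneralExceptionTypes (CA1031); once a fix like this is checked in, the corresponding “TODO” exclusion gets deleted.

```csharp
using System;
using System.IO;

// A hypothetical class used only to illustrate a backlog fix.
public static class ConfigReader
{
    // Before the fix, this method caught System.Exception, which
    // violates CA1031 (DoNotCatchGeneralExceptionTypes). Catching
    // the specific IOException clears the violation, so the "TODO"
    // exclusion that was covering it can be removed.
    public static string ReadOrDefault(string path)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (IOException)
        {
            return string.Empty;
        }
    }
}
```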
In practice, this exact process isn’t followed to the letter every week. Besides the obvious impossibility of grabbing three hours of each developer’s time every single week, there are also all sorts of picky details around which rules can/should be activated when, and what one should do to keep the momentum going when the size of the backlog jumps because of the activation of a single rule. There’s also the question of what sort of hit we’re taking with respect to new violations against the rules that haven’t yet been activated. However, I think I’ll keep all of that as meat for further blog posts for now…