FxCop and the big, bad backlog

A few months ago, I gave a presentation on using FxCop at the Montreal Visual Studio Users Group. The material was divided into two main topics: (a) the mechanics of using FxCop and (b) integrating FxCop use into a development process. During the first part of the talk, some members of the peanut gallery kept piping up with questions about what one can do to handle the huge number of FxCop rule violations that an existing code base will have when one first runs FxCop against it. Lucky for me, most of the second part of the talk covered exactly that problem, and I managed to finish the evening without having any rotten produce lobbed in my direction. However, their questions did confirm my suspicion that dealing with the backlog problem is a topic of potential interest to folks, so it’s one I might as well spend a bit of time writing about…



No backlog?  No problem.


Before tackling the hard part of the backlog problem, let’s look at the easy part. If you’re starting a new project, the easiest way to handle the backlog issue is to never develop a backlog in the first place. The way to do this is to start running FxCop with all rules activated from the moment you first set up your project(s) and to fix problems (or flag true exclusions) before each and every check-in.
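
To make the “fix before every check-in” part concrete, here’s a rough sketch of the sort of gate each developer (or the build server) could run. The FxCopCmd switches, the file names, and the <Issue> element in the report are all assumptions to be adjusted to your own setup:

    # fxcop_gate.py: run FxCop and refuse the check-in if any violations remain.
    # The project file is assumed to already list the target assemblies and the
    # active rules; unexcluded violations show up as <Issue> elements in the report.
    import subprocess
    import sys
    import xml.etree.ElementTree as ET

    REPORT = "fxcop-report.xml"

    # FxCopCmd's exit code mixes several conditions, so inspect the report instead.
    subprocess.run(
        ["FxCopCmd.exe", "/project:Product.FxCop", f"/out:{REPORT}", "/summary"],
        check=False,
    )

    try:
        issues = ET.parse(REPORT).getroot().findall(".//Issue")
    except FileNotFoundError:
        # No report at all means the run itself went wrong; make someone look at it.
        print("No FxCop report was produced; treating this as a failure.")
        sys.exit(1)

    if issues:
        print(f"{len(issues)} FxCop violation(s) found; fix or exclude them before checking in.")
        sys.exit(1)

    print("No FxCop violations found; OK to check in.")

Wired into a continuous-integration build, the same sort of script catches anything that a missed local run lets through.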


This may seem like a lot of work and, to be honest, it will almost certainly add noticeable overhead at the very beginning, but the additional effort required will probably taper off more quickly than you might expect. Within a week or two, most individual developers should be over the learning curve “hump” with respect to the FxCop tool itself. What might surprise you is that it probably won’t take much longer for most developers to become familiar with the rules that touch their own work and to stop introducing new violations of those rules. This will, of course, vary depending on the spectrum of tasks that any given developer performs, but most developers in anything but the very smallest shops are likely working on fairly similar tasks from one week to the next.


Another issue on the “no backlog” side of things is that you might occasionally need to create temporary exclusions (e.g.: when you’ve added the first class to a namespace, but the others that would eventually prevent Design.AvoidNamespacesWithFewTypes from firing don’t yet exist), and allowing these to accumulate could also create a backlog problem. The way to prevent this is to avoid creating temporary exclusions where possible and to clean them up promptly when they can’t be avoided. I would recommend creating a project work item for removing any given temporary exclusion immediately before adding the exclusion itself. A final check that they have all been removed should be part of a pre-release checklist, and the release in question should be the earliest one you can identify (e.g.: an internal release to QA).



And now for the hard stuff…


If, like many folks with a medium-to-large existing code base, you’ve run FxCop and been overwhelmed at the thousands or tens of thousands of violations that it spits out, what can you possibly do? The most common reaction seems to be essentially giving up on the idea of using FxCop for that product. That’s unfortunate, partly because the existing problems will go unaddressed until they manifest as user-noticeable bugs, but also because the product will continue to accumulate new problems as development progresses.


Another path is to disregard the existing backlog and follow the “no backlog” approach described above for new development. Since the FxCop team has produced a tool to facilitate this approach, I presume that it is a relatively common choice. Unfortunately, while it does prevent new violations from accumulating, it doesn’t address the existing backlog particularly well.


So… Can one do something to both prevent new problems and clean up the old ones? At FinRad, we started a process in late September to do just that. One of the things that was immediately obvious was that “hiding” the existing backlog wasn’t likely to be terribly helpful if our goal was to eventually clean it all up. Part of the problem is that folks would probably become inured rather quickly to the large volume of existing exclusions. Another, potentially more important, issue is that a big backlog is just plain too depressing.


Rather than activating all rules at the beginning and excluding all existing rule violations (even temporarily), we decided to adopt a process whereby rules are activated one by one. Once a rule is activated, its existing violations are added to the immediate backlog and nobody is allowed to introduce new violations. In addition, we decided to provide training on the newly activated rules rather than expecting individual developers to research the reasons behind each rule and the approaches for fixing its violations on their own. The basic outline of this process is:


  1. Every week, a new set of rules to activate is selected.
  2. We have a one-hour training session on the new rules. This training includes:
    • any background information that is required to understand the rules,
    • the specific problems that the rules are meant to address,
    • recommended approaches to fixing rule violations, and
    • reasons for which it is acceptable to create exclusions for violations.
    In addition, progress statistics are presented at each training session so that everyone is aware of where we stand with respect to the existing backlog (a rough sketch of how those numbers can be gathered appears just after this list).
  3. Immediately after the training session, the new rules are activated, and existing violations are added to the backlog by creating “TODO” exclusions.
  4. As of the activation of a rule, no new violations are accepted.
  5. Each developer is supposed to spend two hours per week fixing rule violations from the backlog.

In practice, this exact process isn’t followed to the letter every week. Besides the obvious impossibility of grabbing three hours of each developer’s time every single week, there are also all sorts of picky details around which rules can/should be activated when, and what one should do to keep the momentum going when the size of the backlog jumps because of the activation of a single rule. There’s also the question of what sort of hit we’re taking with respect to new violations against the rules that haven’t yet been activated. However, I think I’ll keep all of that as meat for further blog posts for now…

4 thoughts on “FxCop and the big, bad backlog”

  1. We took care of the backlog via a different approach.

    We wrote a simple tool to count the number of errors in the output. If the count goes up, we fail the build. If the count goes down, we pass the build and lower the maximum count allowed.

    This allowed us to turn on ALL errors and not let new errors into the system while we try to remove existing errors. It worked great.

    We also use the same tool to compare the # of projects found in the solution file and the FxCop project file. If they differ, we fail the build. This way, if someone adds a project and forgets to add it to FxCop, the build fails.
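
    (Roughly, a sketch of the two checks described above might look like the following; the report, threshold, solution, and project file names and the XML element names are all hypothetical:)

      # fxcop_ratchet.py: fail the build if the FxCop violation count goes up,
      # and lower the allowed ceiling whenever it goes down.  Also check that
      # every project in the solution is covered by the FxCop project.
      import sys
      import xml.etree.ElementTree as ET

      REPORT = "fxcop-report.xml"        # FxCopCmd XML output
      THRESHOLD_FILE = "fxcop-max.txt"   # last accepted violation count
      SOLUTION = "Product.sln"
      FXCOP_PROJECT = "Product.FxCop"

      # Check 1: ratchet the violation count downward.
      count = len(ET.parse(REPORT).getroot().findall(".//Issue"))
      try:
          with open(THRESHOLD_FILE) as f:
              allowed = int(f.read().strip())
      except FileNotFoundError:
          allowed = count  # first run: the current count becomes the starting ceiling

      if count > allowed:
          print(f"FxCop violations went up ({count} > {allowed}); failing the build.")
          sys.exit(1)
      with open(THRESHOLD_FILE, "w") as f:
          f.write(str(count))  # pass, and drop the ceiling to the new count

      # Check 2: every project in the solution should also be in the FxCop project.
      # (Solution folders get counted too; close enough for a sketch.)
      with open(SOLUTION, encoding="utf-8-sig") as f:
          solution_projects = sum(1 for line in f if line.startswith("Project("))
      fxcop_targets = len(ET.parse(FXCOP_PROJECT).getroot().findall(".//Target"))

      if solution_projects != fxcop_targets:
          print(f"{solution_projects} project(s) in the solution, {fxcop_targets} analysed by FxCop; failing the build.")
          sys.exit(1)

      print(f"FxCop gate passed: {count} violation(s) (new ceiling {count}), {solution_projects} project(s) covered.")
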

    I posted this info on the FxCop forum but nobody seemed to even notice. To me, this made it feasible to use FxCop in the middle of a modest-sized project.

  2. George: We briefly considered a similar approach but ended up rejecting it due to the sheer volume of our existing backlog*. I’m guessing that you probably started with a considerably smaller backlog than ours, particularly since you mention that it was a “modest-sized project”, and I’m curious as to the initial size of your backlog and the rate of clean-up that you saw once the problem tracker was activated.

    *The desire to provide training on any given rule before bulk clean-up of problems for that rule begins was also a factor in this decision, but it didn’t carry as much weight as the backlog size issue.

  3. Some interesting ideas here. I recently started using FxCop on a project of mine, and have a good-sized initial backlog to deal with – 835 items; not sure how that compares with your situation… I recently wrote a post on my own blog on the basics of FxCop and using it with NAnt, so I am getting on my way.

    I have been reading up on the various violations I keep getting, to learn how to deal with them. It will be a process, all right.

    Something I am curious about: do you necessarily write code to FxCop’s standards, as far as naming goes? I know there are many preferences out there, and wondered if anyone tailored their code just to satisfy FxCop.

  4. Grant: We do apply the FxCop naming rules, but not “just to satisfy FxCop”.  The naming rules are based on the naming conventions from the .NET API design guidelines (msdn2.microsoft.com/…/ms229002.aspx), and following those guidelines can go a long way toward helping consumers of an API feel comfortable with its public interface.
