FxCop backlog themes: Exceptions

Since I started monitoring traffic on this blog a little more closely about a week ago, I was surprised to find that the posts on HTML encoding and server vs. client cultures were getting far more hits than I expected. I had been planning on starting a series of “how to” posts on those topics this weekend, but that was before David Kean from the FxCop team was kind enough to direct a bunch of folks my way with a post about my recent FxCop posts. Since it would seem that I’ve now got quite a few new subscribers who are interested in FxCop, I figured I might as well continue on that theme for a while…



Jumping the gun


I’ve got exception stuff on the brain this morning, so that’s what I’m going to write about. However, if you have decided to tackle an FxCop backlog, exceptions are probably not the first major theme you should cover. Then again, even assuming that my recent posts have inspired you to immediately start a backlog winnowing project, you should still be at least a month away from covering any major theme. I’m going to try to write about the much easier disposition and finalization topic within the next couple of weeks. Heck, it might even happen tomorrow morning if I don’t get coerced into some sort of housework. In the meantime, just keep in mind that the sequence in which I end up covering the topics here isn’t necessarily the sequence in which you’ll want to cover them in your backlog process.



Starting from scratch


Unless your team already has very specific guidelines in place around the generation and handling of exceptions (along with training and code reviews to reinforce those guidelines), it’s pretty safe to assume that there is a very wide variety of exception treatment in your code base. In addition, the developers on your team likely have similarly disparate understandings of when and how exceptions ought to be generated and handled. Instead of jumping right into covering the existing FxCop rules around exceptions, it would probably be a much better idea to start with some training around the basics of exceptions. Chances are excellent that individual developers will be carrying quite a bit of baggage based on their experiences with exception treatment in other languages or platforms, and you will most likely need to devote quite a bit of time to dispelling inappropriate preconceptions.


Here are a few resources that you may wish to consult during your exception training:



If you’re concerned about globalization and localization issues, you might also want to have a look at my post on client vs. server cultures. The disparity between application user and administrator languages can present challenges when emitting localized exception messages for any application, and it’s something you may wish to consider as you develop your internal exception generation guidelines, even if you don’t happen to be writing web applications.


The built-in rules


As part of the exceptions theme, you’ll want to start activating the FxCop rules that touch on exception generation and handling. The following table lists the rules available in FxCop 1.35 and Visual Studio 2005 static analysis:



Category   Rule
Design     DoNotCatchGeneralExceptionTypes
           ExceptionsShouldBePublic¹
Security   CatchNonClsCompliantExceptionsInGeneralHandlers¹
           WrapVulnerableFinallyClausesInOuterTry
Usage      DoNotRaiseExceptionsInFilterBlocks
           DoNotRaiseReservedExceptionTypes
           InstantiateArgumentExceptionsCorrectly
           RethrowToPreserveStackDetails


I suppose it could be argued that Usage.DoNotIgnoreMethodResults might also be worth including while activating exception rules. If your team is resistant to the idea of throwing exceptions instead of using failure return codes, then a bit of time spent cleaning up violations of this rule might be just the thing to convince them. Otherwise, it’s probably safe to activate this rule whenever it seems reasonable during your backlog process. If you’re following the design guidelines, you’ll probably want to fix most violations of this rule by modifying the invoked method to throw exceptions rather than return a status code. That’s a breaking change for public methods, and scheduling breaking changes can be a wee bit tricky.
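
As a rough illustration of what such a fix tends to look like (the class, method, and file-handling details here are purely hypothetical, not something from the FxCop documentation):

using System.IO;

public static class LogFileMaintenance
{
	// Before: callers are free to ignore the failure code, which is the pattern
	// that Usage.DoNotIgnoreMethodResults flags at ignored call sites.
	public static bool TryDeleteLogFile(string path)
	{
		if (!File.Exists(path)) return false;
		File.Delete(path);
		return true;
	}

	// After: failures surface as exceptions instead, but the changed signature
	// is a breaking change for any existing public callers.
	public static void DeleteLogFile(string path)
	{
		if (!File.Exists(path))
			throw new FileNotFoundException("Log file could not be found.", path);
		File.Delete(path);
	}
}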



Custom rules


If you’ve read the design guidelines concerning exceptions, it should be pretty obvious that the above eight rules don’t exactly cover all the potential problems for which you might want to screen. Here are a few custom rules for exceptions that we’re either already using at FinRad or will be introducing once we get a little further along in the backlog cleanup:



  • Do not throw System.NotImplementedException
    Pretty obvious, although you’ll occasionally need to create temporary exclusions, particularly when someone checks in a class skeleton to be fleshed out by somebody else. Also, as a bit of a side note, when you introduce this guideline, you might want to stress that the exception signifies that the code is not implemented yet, not that it’s not implemented here. For “not implemented here”, you should be using System.NotSupportedException instead.

  • Methods that throw System.NotSupportedException should not include other instructions
    Simple as it might seem, this rule is a bit tricky to author since the VB compiler has the nasty habit of adding extra instructions (e.g.: return statements) to methods. Also, one needs to keep in mind that a custom message might be provided for the exception, and that message might be retrieved via a method call.

  • Throw System.ComponentModel.InvalidEnumArgumentException for an invalid enum argument value
    Not really relevant to the rule itself, but it’s really annoying that this particular exception lives in the System.ComponentModel namespace rather than System.

  • An argument exception should be thrown directly from the method whose argument is being validated
    I had hoped that the parameter name verification on Usage.InstantiateArgumentExceptionsCorrectly would cover this issue but, after looking at the rule logic via Reflector, I’ve decided that a custom rule is going to be needed for this one. I’m planning on activating it at the same time as Design.ValidateArgumentsOfPublicMethods (which will definitely be happening before we move to Orcas <gdr>).

  • Use recommended exception logging methods
    In order to make logged exceptions more noticeable, we decided to create a new helper method that is capable of writing exception details to both a central repository and whatever target location was previously used. The decentralized logging problem was the consequence of an application architecture that you may not share, so this might not be particularly relevant for your team. However, if you do have similarly decentralized logging, you may want to consider a similar approach. If you do decide to add a special exception logging method, then you’ll probably also want to add a custom rule to verify that it’s being used.

  • Review use of System.Diagnostics.Debug methods
    Some folks have a tendency to use Debug.Assert or Debug.Fail where they really ought to be throwing exceptions.

  • Review catch without throw
    Last but definitely not least, this is our “do not eat exceptions” rule. Unfortunately, fixing an undesirable violation of this rule can be very time-consuming, but discovering places where an application is hiding exceptions has a tremendous impact on reliability, and the short-term cost is well worth the long-term benefit. (A small illustration of the kind of code this rule is meant to catch follows this list.)
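
Here’s a minimal sketch of the exception-eating pattern and one way to fix it; the settings-loading scenario and the use of Console.Error as a stand-in logger are illustrative assumptions only:

using System;
using System.IO;

public static class SettingsLoader
{
	// Eats the exception: callers have no idea that the settings were never loaded.
	public static string LoadSettingsQuietly(string path)
	{
		try
		{
			return File.ReadAllText(path);
		}
		catch (IOException)
		{
			return string.Empty;
		}
	}

	// Logs and rethrows, so the failure stays visible to callers.
	public static string LoadSettings(string path)
	{
		try
		{
			return File.ReadAllText(path);
		}
		catch (IOException ex)
		{
			Console.Error.WriteLine("Failed to load settings: " + ex);
			throw;
		}
	}
}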

Personally, I’d love to see at least the more generally useful rules from the above list make it into the rule set shipped by Microsoft. I seem to remember seeing something from the code analysis team that promised some new rules around exceptions for Orcas, but I can’t find it anymore, and I haven’t seen any new exception rules either. It’s a pity, because I was really hoping to get rid of my custom rules, particularly the less than ideal verification for other instructions in methods that throw System.NotSupportedException…






¹ This rule is not available in Visual Studio 2005, but it is shipped with both FxCop 1.35 and Orcas beta 1.

FxCop backlogs: Some rules for rule activation

If you’ve decided to try to tackle an FxCop violation backlog, one of the first issues you’re going to face is deciding which rules to activate when. Here are some general guidelines…



Starting out


When you first begin the backlog clean-up process, you’re going to need to introduce the FxCop tool to your team (assuming, of course, that you’re not already using FxCop for new projects). In order to focus on mastering the tool and the cleanup process before diving into “difficult” rules, you’ll want to pick a few rules to activate that meet the following criteria:



  1. The rule itself should be easy to understand with minimal background explanation.

  2. Violations of the rule should be easy to fix.

  3. Your code should contain very few violations of the rule.

My favourite introductory rule is DoNotConcatenateStringsInsideLoops. Unfortunately, it’s slated to disappear in Orcas, so you might want to get a start on that backlog cleanup soon if you want to use it. 😉 For finding other candidate “early” rules, the best approach would generally be to count your violations per rule, then look for any rules with a violation count of between approximately 5 and 20 that also meet the “easy to understand” and “easy to fix” criteria.



If it ain’t broke, it don’t need fixin’…


Another thing you’ll want to do fairly early in the process is activate rules for which you have no existing violations. Even though this lock down might seem very important with respect to preventing additional backlog problems, it should probably wait until you’re a few weeks into the cleanup process, and the team is starting to get reasonably comfortable with the whole thing. Don’t worry too much about this delay causing new rule breaking — if you’ve gone this long without breaking those rules, chances are pretty slim that you’ll break them in the next three weeks or so. 😉


Simple as it may seem, there are still a few things to consider as you select “no violation” rules to activate:



  1. Some of these rules will have no violations because your team has been actively avoiding some problem. Take this opportunity to highlight that they’ve been doing the right thing and confirm what that “right thing” is since newer team members may not be aware of it.

  2. Some of the rules will have no violations because your project doesn’t touch on the area covered by the rule. For example, if you don’t interact with native APIs, there are several rules around p/invoke calls that you won’t have broken. Given that your team is unlikely to start breaking these “irrelevant” rules, you can safely activate them without too much explanation. However, you should at least mention that you are activating the rule, give a quick description of its significance, and explain why it’s not likely to be violated in the target code base. This approach will ensure that the team is aware of the rule should it eventually become more relevant due to changes in the project scope.

  3. Some rules (but not very many) will have no violations almost by chance alone and will require more in-depth explanation than you may wish to allot this early in your process. You might want to consider delaying activating such rules until you have a more appropriate opportunity to provide more in-depth training on the rule.


Mix and match


As you start activating rules, you’ll soon realize that some rule violations are a lot quicker and easier to fix than others. For example, fixing a violation of RethrowToPreserveStackDetails usually takes seconds, while there are a few rules for which a fix may take hours. It’s important that your immediate fix backlog contain a mix of problems, both with respect to difficulty and fix duration. The quick and easy fixes will provide small tasks that any developer will be able to perform, and they’re nice little “fillers” for those five minute blocks when you’re waiting on something like a pre-commit test run for another task. They’ll also help keep the momentum going when you’ve got a backlog of “big” problems to address. If you only have big problems in the backlog at any given time, the overall fix rate will be slow, and that’s just plain depressing for everyone.
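
For reference, the usual fix for that particular rule is simply replacing a rethrow that resets the stack trace with a bare rethrow; a minimal sketch:

using System;

public static class RethrowExample
{
	public static void Run()
	{
		try
		{
			throw new InvalidOperationException("Something went wrong.");
		}
		catch (InvalidOperationException ex)
		{
			Console.Error.WriteLine(ex.Message);
			// throw ex;   // would reset the stack trace and violate the rule
			throw;         // rethrows while preserving the original stack details
		}
	}
}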


Some fixes will require considerable knowledge of a particular area of your product, and one approach to handling these would be to have developers fix such problems in their own areas of responsibility. However, you may also want to view such fixes as an opportunity for a bit of cross-training. Either way, you should make sure that your immediate backlog doesn’t only contain violations against a limited subset of your product. Any given developer on your team should be able to draw problems from the backlog pool at any given time.



Slow down, you crazy child


Now and then, your immediate fix backlog is going to grow just a bit too big. This usually happens because you’ve activated a rule with a large number of violations and/or for which violations are expensive to fix. In order to prevent the backlog from growing even more, you’re going to have to stop activating new rules for a while. There are basically two things you can do to avoid losing momentum because of this hiatus. The first is pretty trivial, and simply involves having developers fix violations during the time that would normally be allotted to introducing new rules. If you’re concerned that the fix rate has dropped off because developers have fallen out of the habit of working on the backlog, then having everyone work on fixing problems at the same time like this can be one way of “rebooting” the process. However, if developers are already spending the time they ought to be on the backlog, a fix session like this isn’t necessarily a very effective use of anyone’s time.


The other “delaying tactic” is to treat the increased between-rule activation delay as an opportunity to provide more in-depth training, particularly for rules where a lot of background information may be required. It can help quite a bit to divide such rules into themes, with background training on any given theme spanning several weeks. A few obvious themes are:



  • finalization and disposition,

  • exceptions, and

  • code access security.


Custom rules


As you tackle your backlog, you’ll probably start to identify some less than desirable usage patterns that aren’t covered by existing rules. While it can be quite tempting to immediately activate a new custom rule based on such a discovery, don’t forget that you have plenty of built-in rules that aren’t yet active either. Your custom rule should go into the same backlog “pot” as the built-in rules, and it should be activated when most appropriate to your backlog process, which may or may not be when you first discover the problem that the rule covers.



How far, how fast?


Obviously, the rate at which you will be able to activate rules will depend quite a bit on the size of the backlog and the rate at which your team is able to fix violations. However, you should also expect the activation rate to drop off over time as you start to activate rules that are difficult to fix and rules with large numbers of violations. Given that you may also start adding custom rules, the proportion of rules activated may drop even though you’re not deactivating any rules. Nothing exactly surprising in any of that, but it does mean that you shouldn’t use the rate of rule activation as an important indicator of your backlog clearing progress.


Control flow engine, 200?-2007, RIP

Surprise! (not the good kind)


If you use FxCop or Visual Studio Static Analysis and haven’t yet started playing with Orcas, you may be in for a bit of an unpleasant surprise. While the code analysis team is doing all sorts of interesting things for Orcas, one somewhat less desirable change you probably haven’t heard about yet is removal of the control flow engine and, consequently, the following rules which depend upon it:



Category       Rule
Design         ValidateArgumentsOfPublicMethods
Globalization  DoNotPassLiteralsAsLocalizedParameters
Performance    AvoidUnnecessaryStringCreation
               DoNotCallPropertiesThatCloneValuesInLoops
               DoNotConcatenateStringsInsideLoops
Reliability    DisposeObjectsBeforeLosingScope
Security       ReviewSqlQueriesForSecurityVulnerabilities
Usage          AttributeStringLiteralsShouldParseCorrectly
               DisposeMethodsShouldCallBaseClassDispose
               DoNotDisposeObjectsMultipleTimes
               LiteralsShouldBeSpelledCorrectly
               ProvideCorrectArgumentsToFormattingMethods

The reasons given for removing the engine are basically that it’s slow, buggy, and difficult to maintain. Well, we’ve all had to deal with that sort of problem before, but most of us haven’t had the luxury of removing the nasty, old implementation without providing a replacement. I can certainly understand that the code analysis team might be under pressure to rapidly address performance problems introduced by an engine that slows any use of their product by a factor of 2 or more, but this strikes me as a case of throwing the baby out with the bath water.


The one piece of good news is that there seem to be plans to eventually replace the engine. However, we supposedly won’t be seeing the new implementation until at least the post-Orcas release of Visual Studio, which very likely means waiting two years or more. A couple of things to consider:



  1. How big a backlog of violations against the missing control flow rules do you think your team will likely create in two years?

  2. Given that the code analysis team apparently considers it acceptable to remove this feature set without providing an immediate replacement, how confident are you in their ability to keep development of the replacement at a high enough priority level that it will actually ship in the post-Orcas release of Visual Studio?


On again, off again…


Let’s assume for the moment that the performance hit caused by the control flow engine really does prevent some folks from using FxCop. Even if it doesn’t, performance improvements are probably pretty high on many folks’ wish lists for FxCop, and it would seem likely that the code analysis team has to put up with a lot of complaining on that front. In addition, there are known bugs in the control flow engine and rules, and those problems may cause both some minor inconvenience and false confidence that rules are being applied where they’re not. Are these really sufficient justifications for eliminating the control flow engine entirely in Orcas?


Rather than removing the engine, I would much rather see it become an optional, disabled-by-default component. This scenario would remove the potential performance obstacle for new users while still allowing existing customers to at least keep up the same level of screening that was previously applied. If there is concern on the part of the code analysis team that users aren’t aware of the buggy, partial screening against control flow rules, the feature that allows enabling of the control flow engine could include whatever warnings they deem necessary.


Given that Orcas is already in beta, it seems likely that re-introducing the control flow engine might be too big a change to support. If so, another possibility might be to make it available only in the stand-alone FxCop product. Obviously, this would be a little less convenient for users of Visual Studio Code Analysis. However, it would at least allow such users to make their own performance/convenience/quality trade-off decisions rather than having a potentially unsuitable decision imposed upon them by the complete absence of the engine.


I’ve already submitted a suggestion regarding re-introduction of the control flow engine as an optional component on the Microsoft Connect site. If you’re concerned about the removal of the engine, perhaps it’s time to consider voicing your opinion there as well…

FxCop and the big, bad backlog

A few months ago, I gave a presentation on using FxCop at the Montreal Visual Studio Users Group. The material was divided into two main topics: (a) the mechanics of using FxCop and (b) integrating FxCop use into a development process. During the first part of the talk, some members of the peanut gallery kept piping up with questions about what one can do to handle the huge number of FxCop rule violations that an existing code base will have when one first runs FxCop against it. Lucky for me, most of the second part of the talk covered exactly that problem, and I managed to finish the evening without having any rotten produce lobbed in my direction. However, their questions did confirm my suspicion that dealing with the backlog problem is a topic of potential interest to folks, so it’s one I might as well spend a bit of time writing about…



No backlog?  No problem.


Before tackling the hard part of the backlog problem, let’s look at the easy part. If you’re starting a new project, the easiest way to handle the backlog issue is to never develop a backlog in the first place. The way to do this is to start running FxCop with all rules activated from the moment you first set up your project(s) and to fix problems (or flag true exclusions) before each and every check-in.


This may seem like a lot of work and, to be honest, it will almost certainly add noticeable overhead at the very beginning, but the additional effort required will probably taper off more quickly than you might expect. Within a week or two, most individual developers should be over the learning curve “hump” with respect to the FxCop tool itself. What might surprise you is that it probably won’t take much longer for most developers to become familiar with the rules that touch their own work and to stop introducing new violations of those rules. This will, of course, vary depending on the spectrum of tasks that any given developer performs, but most developers in anything but the very smallest shops are likely working on fairly similar tasks from one week to the next.


Another issue on the “no backlog” side of things is that you might need to occasionally create temporary exclusions (e.g.: when you’ve added the first class to a namespace, but the others that would eventually prevent Design.AvoidNamespacesWithFewTypes from firing don’t yet exist), and allowing these to accumulate could also create a backlog problem. The way to prevent this is to avoid them where possible and clean them up as soon as possible when their creation can’t be avoided. I would recommend creating a project work item for removing any given temporary exclusion immediately before adding the exclusion itself. A final check that they have all been removed should be part of a pre-release checklist, and the release in question should be the earliest you can isolate (e.g.: internal release to QA).
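
If you’re using in-source suppression (supported by both FxCop 1.35 and Visual Studio 2005 Code Analysis when the CODE_ANALYSIS symbol is defined), a temporary exclusion with a “TODO”-style justification might look something like the following sketch; the rule, work item number, and justification text are all illustrative assumptions:

using System.Diagnostics.CodeAnalysis;

namespace Bordecal.Samples
{
	public static class ReportFormatter
	{
		// Temporary exclusion: tracked by a project work item so that it gets
		// removed before the next internal release to QA.
		[SuppressMessage("Microsoft.Globalization", "CA1305:SpecifyIFormatProvider",
			Justification = "TODO (work item 1234): fix along with the rest of the culture clean-up.")]
		public static string FormatTotal(double total)
		{
			return string.Format("Total: {0:C}", total);
		}
	}
}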



And now for the hard stuff…


If, like many folks with a medium-to-large existing code base, you’ve run FxCop and been overwhelmed at the thousands or tens of thousands of violations that it spits out, what can you possibly do? The most common reaction seems to be essentially giving up on the idea of using FxCop for that product. That’s unfortunate, partly because the existing problems will go unaddressed until they manifest as user-noticeable bugs, but also because the product will continue to accumulate new problems as development progresses.


Another path is to disregard the existing backlog and follow the “no backlog” approach described above for new development. Since the FxCop team has produced a tool to help facilitate this approach, I presume that it is a relatively common choice. Unfortunately, while it does avoid the problem of accumulating new violations, it doesn’t address the existing backlog particularly well.


So… Can one do something to both prevent new problems and clean up the old problems? At FinRad, we started a process in late September to do just that. One of the things that was immediately obvious was that “hiding” the existing backlog wasn’t likely to be terribly helpful if our goal was to eventually clean it all up. Part of the problem is that folks would probably become rapidly inured to the large volume of existing exclusions. Another, potentially more important, issue is that a big backlog is just plain too depressing.


Rather than activating all rules at the beginning and excluding all existing rule violations (even temporarily), we decided to adopt a process whereby rules are activated one-by-one. Once a rule is activated, its existing violations are added to the immediate backlog and nobody is allowed to introduce new violations. In addition, we decided to provide training on the newly activated rules rather than expecting individual developers to research the reasons behind each rule and the approaches to fixing it on their own. The basic outline of this process is:



  1. Every week, a new set of rules to activate is selected.

  2. We have a one hour training session on the new rules. This training includes:

    • any background information that is required to understand the rules,

    • the specific problems that the rules are meant to address,

    • recommended approaches to fixing rule violations, and

    • reasons for which it is acceptable to create exclusions for violations.
    In addition, progress statistics are presented at each training session so that everyone is aware of where we stand with respect to the existing backlog.

  3. Immediately after the training session, the new rules are activated, and existing violations are added to the backlog by creating “TODO” exclusions.

  4. As of the activation of a rule, no new violations are accepted.

  5. Each developer is supposed to spend two hours per week fixing rule violations from the backlog.

In practice, this exact process isn’t followed to the letter every week. Besides the obvious impossibility of grabbing three hours of each developer’s time every single week, there are also all sorts of picky details around which rules can/should be activated when, and what one should do to keep the momentum going when the size of the backlog jumps because of the activation of a single rule. There’s also the question of what sort of hit we’re taking with respect to new violations against the rules that haven’t yet been activated. However, I think I’ll keep all of that as meat for further blog posts for now…

No rules in your FxCop rule assembly?

Since I posted an FxCop rule sample over at bordecal.mvps.org, it’s rapidly become the most popular content on the site. Not something I expected but, given that I have trouble coming up with blogging topics and there seems to be some interest in FxCop, I figured I might as well spend a lovely Saturday morning writing about it… (Actually, I’m pretty much just killing time while waiting for a tire change, so please feel free to keep that “get a life” comment to yourself. <g>)

There’s all sorts of stuff I could (and will try to) write about FxCop use, but one thing that jumped out at me this week was quite a few searches for “no FxCop rules could be found” landing on my rule sample page, so I guess I’ll start with that…

There are several reasons why FxCop might have trouble finding rules in a custom rule assembly:

  1. There are no rule classes.

    If you haven’t added any rule classes to your assembly, it shouldn’t be too much of a problem to figure out why FxCop can’t find them. 😉

    Another problem might be that your rule classes don’t implement the necessary rule interfaces. If you’ve done the usual thing and created your rules as subclasses of Microsoft.FxCop.Sdk.Introspection.BaseIntrospectionRule, then you’ve got the interface problem covered. However, if you’re rolling your own rules from scratch, you’re probably going to have to do quite a bit more work… (Which I’ve never done, so I have no idea what the minimum requirement might be. My first guess would be IRule, but that seems rather useless since there are no problem collections exposed until one gets to IControlFlowRule or IIntrospectionRule.)

  2. You’ve forgotten to add an embedded rules resource file.

    This is probably the most common problem for beginners. If you’re not sure what the rule resource file should contain, grab yourself a copy of Reflector, and take a look at the Microsoft.FxCop.Rules.Design.DesignRules.xml embedded resource in the DesignRules.dll file that ships with FxCop or Visual Studio Static Analysis.

    Also, remember that this file must be set to be an embedded resource. If the file doesn’t end up in your compiled assembly, FxCop won’t be able to find it.

  3. You’ve incorrectly identified the embedded rules resource file in which the data for your rule class can be found.

    Once you’ve created your embedded rules resource file, this may be the next problem you hit. If you’re inheriting from BaseIntrospectionRule, you must pass the full resource file name (minus the .xml extension) when invoking the protected BaseIntrospectionRule constructor. Since all the custom rules in a given assembly will typically use the same rules resource file, one would usually create a base class from which they would all inherit. The rule base class might look something like this:

    using Microsoft.FxCop.Sdk.Introspection;

    namespace Bordecal.FxCop.Rules
    {
    	internal abstract class IntrospectionRule : BaseIntrospectionRule
    	{
    		protected IntrospectionRule(string name)
    			: base(name, "Bordecal.FxCop.Rules.Rules", typeof(IntrospectionRule).Assembly)
    		{
    		}
    	}
    }
    

    This avoids requiring all rules to provide the resource file name, but it doesn’t help with figuring out what the file name ought to be in the first place.

    The bad news is that inferring the resource file name can be a bit tricky. It will depend on your project’s default namespace, as well as on where the rule file resides within your project’s directory hierarchy. The good news is that you don’t need to infer the name at all. Instead, simply open your rule assembly in Reflector, and copy the full name of your resource file (remember to omit the .xml extension). Once the resource hookup is working, a concrete rule class built on the base class above might look something like the sketch that follows this list.
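
Purely as an illustration (this is not part of the original sample), here’s a rough skeleton of a concrete rule built on the IntrospectionRule base class shown above. The SDK namespaces and the Check overload reflect my reading of FxCop 1.35 and may differ in other versions, so treat them as assumptions to verify against the shipped rule assemblies in Reflector:

using Microsoft.Cci;
using Microsoft.FxCop.Sdk;
using Microsoft.FxCop.Sdk.Introspection;

namespace Bordecal.FxCop.Rules
{
	// The name passed to the base constructor must match a Rule entry in the
	// embedded rules resource file discussed above.
	internal sealed class DoNotThrowNotImplementedException : IntrospectionRule
	{
		internal DoNotThrowNotImplementedException()
			: base("DoNotThrowNotImplementedException")
		{
		}

		public override ProblemCollection Check(Member member)
		{
			Method method = member as Method;
			if (method != null)
			{
				// Walk the method body here and add problems as they're found, e.g.:
				// this.Problems.Add(new Problem(this.GetResolution(method.Name.Name)));
			}
			return this.Problems;
		}
	}
}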

CodePlex project for Bordecal.ImportsSorter

I’ve been getting a small but steady trickle of requests for new ImportsSorter features and source code availability, so I created a CodePlex project for it a while back at http://www.codeplex.com/ImportsSorter.  Given that it took me almost two months to get around to writing a teensy little announcement about it to post here, there’s probably good reason to expect that it might take me a while to actually implement any new features. 😉  However, if you want to request one, http://www.codeplex.com/ImportsSorter/WorkItem/List.aspx is the place to do it.


If you can’t be bothered to wait around for me to add your feature, you may wish to grab the source code from http://www.codeplex.com/ImportsSorter/SourceControl/ListDownloadableCommits.aspx.  Just a little warning: it’s in need of a fairly major clean-up, and you’re pretty much on your own until I find some time to do that…

Why I won’t be kissing that TOOD

I’ve frequently wondered why it is that folks that very heartily embrace a particular software quality attribute seem to lose all interest in any others. Now, if you’ve read any of my other postings, you’re probably wondering what right a security wonk like me has to be casting aspersions on the hue of anyone else’s kettle. Well, the truth is that, while I happen to be very interested in security, that interest doesn’t mean that I necessarily value security over other quality factors. It simply means that I want to know how to evaluate and implement security aspects of software appropriately, not that I think that it’s always of high importance on any given project.


Unfortunately, I’ve been seeing quite a bit of tendency recently toward systematic promotion of a given quality factor over all others, and Roy Osherove‘s recent promotion of “Testable Object Oriented Design” (or “TOOD”) struck a bit of a nerve for me. Personally, I’m a great fan of testability, but I’m less than thrilled with seeing it promoted at the expense of other quality factors, and Roy’s post is far from the first time that I’ve seen this occur.


But isn’t it a cute little TOOD?


So what’s wrong with what Roy proposes anyway? If making everything public, virtual, and injectable/swappable by default increases testability, why not just go ahead and do it? Unfortunately, there are three fairly large problems that seem to escape most folks who are focused on testing:



  1. Increasing the public (including virtual) portions of your API has a negative impact on its maintainability.

    Once something is public, changing it becomes a breaking change with potential impact on existing consumers. If someone uses my private classes and/or methods, and I make a change to the private interface, and their code breaks, that’s their problem. However, if I make something public or virtual, I become responsible for ensuring that the interface does not change since anyone might be using any part of it. Maintaining that public behemoth could become a very big and ugly problem for me over the long term, and that’s a pretty nasty hit to take for the sake of testability.


  2. Increasing the public portions of your API has a negative impact on its supportability.

    If you publish an API, you’ve pretty much got to document every public corner of it. Otherwise, customer inquiries (and often complaints) are going to start piling up regarding the lack of documentation around bits and pieces that folks want to use. You’ll eat team time either creating documentation or dealing with customer issues around lack of documentation, and I’d rather spend that time tackling the backlog.


  3. You need to do a lot more testing if you want to increase your public and virtual APIs without a deleterious effect on correctness/reliability.

    Roy hits this point on a tangent when he mentions that one might sometimes want to seal a class for security reasons. However, the problem isn’t just restricted to potential security risks. For example, when one extracts a private method to decrease the size or complexity of another method, one generally doesn’t bother validating the inputs to the new private method. (By the way, this isn’t just laziness – quite often adding the validation would have a negative impact on performance, particularly if the extracted method is invoked from within a loop. And guess what? Performance just happens to be another of those pesky quality factors that compete for our attention on any given project.) However, if one makes the method public, one becomes responsible both for validating its inputs and for testing not only the “happy path” functioning of the method, but also its response to bad inputs and bad state.


    The testing story gets even more complex when one considers potential inheritance. If you don’t seal a class, you really ought to be testing that it behaves as expected regardless of whatever bits of oddness a subclass might implement. Some subclass misbehaviour may be intentional (causing the potential security problems to which Roy refers), but most will likely be unintentional issues introduced by subclass developers who try to do things that you never expected. Unfortunately, designing tests to cover such scenarios is quite difficult and time-consuming, and the chances of achieving decent coverage are pretty slim for anyone without a truly devious mind.


    To be honest, I’ve never seen anyone do extensive testing around subclassing scenarios, and I certainly haven’t seen much mention of it in the TDD literature (not that I’ve been reading quite as much as I probably ought to in that space). My guess would be that there’s very little such testing going on out there, which makes the recommendation to make everything virtual by default for testability reasons strike me as a wee bit hypocritical…


Back to the tadpole


In case you missed it, Roy’s TOOD post was provoked by news that the next release of FxCop restricts the visibility of, and removes, functionality upon which his FxCop rule testing approach depends. Roy’s opinion is that the FxCop team has “decided to remove the little tiny shred of testability and automation” from their product. Them strike me as fighting words, and I feel that the FxCop team deserves a bit of support on this one.


For starters, restriction of visibility may be an inconvenience, but it’s certainly not a complete impediment to testing. With a bit of help from reflection, one can happily invoke those newly private methods to one’s heart’s content. Sure, it’s a bit more work and there’s a performance hit to take, but that’s a far cry from leaving not a “little tiny shred” of testability.
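
For example, something along these lines will do the trick (a minimal sketch only; the helper and its names are hypothetical, not FxCop internals):

using System;
using System.Reflection;

public static class PrivateInvoker
{
	// Invokes a non-public instance method by name and returns its result.
	public static object InvokeNonPublic(object target, string methodName, params object[] args)
	{
		MethodInfo method = target.GetType().GetMethod(
			methodName, BindingFlags.Instance | BindingFlags.NonPublic);
		if (method == null)
		{
			throw new MissingMethodException(target.GetType().FullName, methodName);
		}
		return method.Invoke(target, args);
	}
}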


More importantly, there’s actually a very effective means of testing FxCop rules that remains available regardless of what the FxCop team might do to the innards of their assemblies. Instead of attempting to evaluate individual rules against individual targets for each test, why not process an FxCop output report and test its data against expected results? This has a few potential benefits over Roy’s proposed approach:



  1. It’s resistant to changes within the FxCop assemblies since it only deals with the output. (It’s even reasonably resistant to changes in output format since, as long as the format doesn’t change to the extent of modifying the contained data, it can be fairly easily transformed to one’s preferred processing format.)

  2. Its overall performance should be better since the FxCop engine doesn’t need to be restarted for each rule by target combination.

  3. It’s more thorough since each rule is tested against all available targets, thereby ensuring that violations aren’t created where they’re not expected.

  4. Adding new tests is easier since one only needs to specify expected violations, not select and specify rule by target combinations for test runs.

Now, this might not be quite the way that Roy would prefer to test his FxCop rules, but it works quite well, and it’s not particularly difficult to implement. Given this, doesn’t it seem more than a little harsh to accuse the FxCop team of making it next to impossible to test custom rules? Besides which, if there were one thing you could ask the FxCop team to work on, would it really be enhanced testability, or might it be the fabled SDK?

What’s wrong with ASP.NET? Cultures

The problem


The ability offered by .NET to set a thread-level culture then automatically format and select localizable resources using that culture’s settings is wonderful stuff. Unfortunately, it’s an approach that plays out quite a bit better in a client-side application than in a server-based application. The reason for this lies in the nature of the work one performs in a server-based application: some formatting and/or rendering is intended for consumption by client applications, but some (e.g.: log entries) is intended for consumption on the server.


Things tend to muddle along just fine as long as both the client and the server use the same or similar cultures. This is helped along by the fact that most servers are unlikely to have additional .NET language packs installed, so any exceptions logged on the server are likely to contain English error messages. However, there are plenty of cases in which it might be necessary to install additional language packs on the server (e.g.: server and/or application administrators don’t speak English), so it might be a wee bit naïve for a software vendor to assume that the only .NET language supported on a server is one that will be understood by folks who read text generated for consumption on the server.


In case this all sounds a bit odd, let’s take a look at some concrete examples. Our base scenario is a localizable ASP.NET application that, as is typical, sets the current thread’s CurrentCulture and CurrentUICulture properties to match the preferred culture of the client user. Amongst other things, this application happens to create a log entry every time an invoice is created via its UI. e.g.:


private void LogOrder(int invoiceId, double price)
{
	string logText = string.Format("Invoice {0:N0} for {1:C} created at {2:G}.",
		invoiceId, price, DateTime.Now);

	// Text written to log here…
}


If the application has set the current thread’s CurrentCulture property to the client’s preferred culture, here is what will be written to the server-side log for various values of the client culture:


Client culture   Logged text
en-US            Invoice 21,456 for $5,000.00 created at 5/6/2006 9:30:00 AM.
en-CA            Invoice 21,456 for $5,000.00 created at 06/05/2006 9:30:00 AM.
en-GB            Invoice 21,456 for £5,000.00 created at 06/05/2006 09:30:00.
fr-FR            Invoice 21 456 for 5 000,00 € created at 06/05/2006 09:30:00.
fr-CA            Invoice 21 456 for 5 000,00 $ created at 2006-05-06 09:30:00.
es-ES            Invoice 21.456 for 5.000,00 € created at 06/05/2006 9:30:00.
es-MX            Invoice 21,456 for $5,000.00 created at 06/05/2006 09:30:00 a.m..
ja-JP            Invoice 21,456 for ¥5,000 created at 2006/05/06 9:30:00.

The above table shows some of the sorts of trouble that one can get into applying client-side culture options to formatting of server-side texts, and things will only get worse when/if digit substitution is fully implemented. The problem is compounded by the fact that the client-specified culture used to apply formatting is no longer known at the time one reads the server-side text. Of course, formatting as in the above example is a mistake, and there are ways to avoid the problem, but more on that after we look at something a little more difficult to work around.


Like many applications built upon it, the .NET framework emits a variety of texts based on localized resources. The examples with which many developers will likely be most familiar are exception messages. In fact, in most applications, a very large majority of exception messages will have been generated within the .NET Framework base class libraries. Let’s say that your application happens to be running on a server with the Hungarian language pack, and an exception is logged while a Hungarian client is using the application. Would you have any idea what the exception message “Nem megfelelő a bemeneti karakterlánc formátuma” means? (It happens to be the Hungarian version of “Input string was not in a correct format”, but BabelFish won’t exactly be helping you figure that one out…)


To add to the fun, some Framework-generated texts (including exception messages) are automatically formatted from resource strings containing placeholders. Since you cannot control the formatting applied in these instances, you may run into interpretation issues even if you don’t have any additional language packs installed. For example, if you see an exception message containing the number “2,345”, how can you tell if this was a 2345 written under an American culture option or a 2.345 written under a French Canadian setting? Sometimes context may help, but other times you’ll just plain have to guess.



The workaround


So what can we do to avoid the problem? The easy answer is to not use the thread culture as a store for the client’s preferred culture. This, of course, means that we would then need to perform explicit formatting of any non-string values before passing them to UI elements for rendering. While somewhat burdensome, particularly for folks who like living the RAD life, this isn’t exactly the end of the world. There’s also a bit of good news: FxCop can help you identify at least those cases of explicit formatting that are defaulting to using the thread culture.
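
Continuing with the LogOrder example from above, the explicit formatting might look something like this sketch (where the client culture comes from whatever alternate store you’ve chosen, rather than from the thread):

using System;
using System.Globalization;

public static class InvoiceLogging
{
	// Server-side text: format with a fixed culture so the log is always readable.
	public static string BuildLogText(int invoiceId, double price)
	{
		return string.Format(CultureInfo.InvariantCulture,
			"Invoice {0:N0} for {1:C} created at {2:G}.",
			invoiceId, price, DateTime.Now);
	}

	// Client-facing text: format explicitly with the client's preferred culture.
	public static string FormatPriceForClient(double price, CultureInfo clientCulture)
	{
		return price.ToString("C", clientCulture);
	}
}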


The first bit of rather bad news with this approach is that you can’t use ASP.NET’s auto-magical client culture detection since it pops the detected culture into the current thread properties. Things might be a bit different if the ASP.NET team had chosen to make the HttpApplication.SetCulture method virtual, but it’s not, so we’re stuck implementing our own mechanism from scratch if we want to use an alternate store for the client culture.


The second bit of bad news is that there are several ASP.NET controls that are rather aggressive about using the thread culture internally. For example, when using the CompareValidator with ValidationDataType.Double, it’s possible to use culture-specific parsing¹. However, there’s no way to specify which culture one wants to use for parsing. Unless you choose to use culture-invariant parsing, the thread culture will be used. That means that a French Canadian user’s 5,123 would theoretically be interpreted as 5123 rather than 5.123 if the thread culture is set to en-US because that’s the appropriate setting for server-side use. However, things are even worse in practice than in theory, and our 5,123 will fail validation since the CompareValidator does not permit use of thousands group separators and will reject the comma if the thread culture is en-US.


For a fully functional workaround, you’ll need to implement at least four sets of changes (a rough sketch of the first two follows the list):



  1. Use an alternate store for the client culture,
  2. Create a mechanism for populating this new store with the client-preferred culture (rather than setting Culture and UICulture attributes to “auto”),
  3. Where possible, pre-format non-string values that will be rendered by controls, and
  4. Implement custom versions of the controls that take the rendering culture as a property rather than using the thread culture exclusively.
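
As a minimal sketch of the first two items (the choice of HttpContext.Items as the store and the Accept-Language header for detection are assumptions for illustration, not the only options):

using System;
using System.Globalization;
using System.Web;

public static class ClientCulture
{
	private const string ContextKey = "ClientCulture";

	// Item 2: populate the store early in the request (e.g. from Application_BeginRequest
	// or a custom HttpModule) based on the browser's Accept-Language header.
	public static void Detect(HttpContext context)
	{
		CultureInfo culture = CultureInfo.InvariantCulture;
		string[] languages = context.Request.UserLanguages;
		if (languages != null && languages.Length > 0)
		{
			try
			{
				culture = CultureInfo.CreateSpecificCulture(languages[0].Split(';')[0]);
			}
			catch (ArgumentException)
			{
				// Unrecognized language tag: fall back to the invariant culture.
			}
		}
		context.Items[ContextKey] = culture;
	}

	// Item 1: the alternate store, used instead of the thread culture.
	public static CultureInfo Current
	{
		get
		{
			return (HttpContext.Current.Items[ContextKey] as CultureInfo)
				?? CultureInfo.InvariantCulture;
		}
	}
}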


The solution


The workarounds are a pack of trouble, and far more work than most folks would be willing to put in unless they’ve already been bitten by client vs. server culture bugs. The real solution here is for the ASP.NET platform to properly accommodate separate tracking of client and server cultures. Obviously, items 1 and 2 from the workarounds list above would be a start. In addition, it would probably be a great idea if controls defaulted to using the client culture but allowed easy overriding to use either the server culture or a custom assigned culture.




¹ For some bizarre reason, CompareValidator is strictly culture-invariant for some data types (e.g.: ValidationDataType.Integer), which presents its own set of potential problems. However, this isn’t directly relevant to the client vs. server culture issue, so I’ll drop it for now. If you want to complain, connect.microsoft.com is the place.

What’s wrong with ASP.NET? HTML encoding

The problem


Back when ASP.NET was first introduced, I had pretty high hopes that the new controls would offer support for automatic HTML encoding. Unfortunately, there was very little of this, and most of it was more than a bit lukewarm (more on this later). In some ways, things have improved a bit in v. 2.0, but they’re considerably worse in others.


Before you read any further, you might want to ask yourself which ASP.NET controls perform HTML encoding for you and under what circumstances this is done. If the answer doesn’t leap to mind, you’ve perhaps got a first inkling that there might be a little problem with API consistency and/or the documentation. Then again, maybe you’ve never worried about HTML encoding in your web applications, in which case I’d strongly recommend that you read up on HTML injection and cross-site scripting. A good starting point might be CERT Advisory CA-2000-02.


We’ll look at which controls perform HTML encoding soon. First, we’re going to need to nail down some conceptual stuff because not all encoding is created equal. You may already be aware that HTML, URLs, and client-side script use different encodings. For the sake of simplicity, the remainder of this post will refer mainly to HTML encoding, although the other two forms of encoding do merit consideration as well.


There is more than one flavour of HTML encoding, even within ASP.NET. The first is exposed via the System.Web.HttpUtility.HtmlEncode methods. These encode the characters <, >, &, and ", as well as any characters with codes between 160 and 255, inclusive. The other main encoding flavour used by ASP.NET is “attribute” encoding, which is exposed via the System.Web.HttpUtility.HtmlAttributeEncode methods. In ASP.NET 1.1, these encode the & and " characters only. In ASP.NET 2.0, they encode the characters <, &, and ".
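
To make the difference concrete, here’s a tiny demo; the commented output reflects the ASP.NET 2.0 behaviour described above (it will differ slightly under 1.1):

using System;
using System.Web;   // requires a reference to System.Web.dll

public static class EncodingDemo
{
	public static void Main()
	{
		string raw = "<b>\"Fish & Chips\"</b>";

		// HtmlEncode: <, >, &, and " are all encoded.
		Console.WriteLine(HttpUtility.HtmlEncode(raw));
		// &lt;b&gt;&quot;Fish &amp; Chips&quot;&lt;/b&gt;

		// HtmlAttributeEncode: only <, &, and " are encoded; > is left alone.
		Console.WriteLine(HttpUtility.HtmlAttributeEncode(raw));
		// &lt;b>&quot;Fish &amp; Chips&quot;&lt;/b>
	}
}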


Attribute encoding ought to be a superset of full HTML encoding that also encodes the single quote character in case that’s what happens to be wrapping the attribute. However, as you may have noticed from the above, the ASP.NET version of attribute encoding is even wimpier than its full encoding brother. To make matters worse, the full HTML encoding implemented by ASP.NET is no great shakes in the first place. Security isn’t the only reason for HTML encoding, and failure to encode everything outside the low ASCII range can impact page readability when client browsers don’t apply the correct code page (which happens more often than you might think, whether it’s the client’s or the server’s fault).


Now that we know what kinds of HTML encoding are available in ASP.NET, let’s take a look at the encoding support offered by the built-in ASP.NET controls.  The following table covers some of the more commonly used controls and properties.  (There are, of course, many other controls and properties that one might wish to see encoded, but I’ve tried to keep the list down to things that most folks are likely to use reasonably frequently.)



Literal
    ASP.NET 1.1: No encoding.
    ASP.NET 2.0: None by default; HTML encoded if the Mode property is set to LiteralMode.Encode.

Label
    ASP.NET 1.1 and 2.0: No encoding.

Button
    ASP.NET 1.1 and 2.0: Text is attribute encoded.

LinkButton
    ASP.NET 1.1 and 2.0: No encoding.

ImageButton
    ASP.NET 1.1: Image URL is attribute encoded.
    ASP.NET 2.0: Image URL is URL path encoded, then attribute encoded.

HyperLink
    ASP.NET 1.1: Text is not encoded. NavigateUrl is attribute encoded.
    ASP.NET 2.0: Text is not encoded. NavigateUrl is URL path encoded (unless it uses the javascript: protocol), then attribute encoded.

TextBox
    ASP.NET 1.1 and 2.0: Single-line text box (input type=”text”) is attribute encoded. Multi-line text box (textarea) is HTML encoded.

DropDownList and ListBox
    ASP.NET 1.1 and 2.0: Option values are attribute encoded. Option display texts are HTML encoded.

CheckBox and CheckBoxList
    ASP.NET 1.1 and 2.0: Value is not used. Display text is not encoded.

RadioButton and RadioButtonList
    ASP.NET 1.1 and 2.0: Value is attribute encoded. Display text is not encoded.

Table
    ASP.NET 1.1 and 2.0: No encoding.

DataGrid
    ASP.NET 1.1 and 2.0: None for text columns. Hyperlink columns follow the pattern for HyperLink controls.

Validators (BaseValidator subclasses) and ValidationSummary
    ASP.NET 1.1: Validator display text is not encoded. For client script, the validator error message and validation summary header text are attribute encoded. When rendering “populated” validators and the validation summary controls from the server, no encoding is applied.
    ASP.NET 2.0: Validator display text is not encoded. For client script, the validator error message and validation summary header text are javascript encoded (blacklisting approach). When rendering “populated” validators and the validation summary controls from the server, no encoding is applied.

HiddenField
    ASP.NET 1.1: N/A.
    ASP.NET 2.0: Value is attribute encoded.

GridView and DetailsView
    ASP.NET 1.1: N/A.
    ASP.NET 2.0: Text fields HTML encode if their HtmlEncode property is set to true (this is the default, which is also used for auto-generated columns). However, the null display text for text fields is not encoded even if the field’s HtmlEncode property is set to true. Hyperlink fields follow the pattern for HyperLink controls.


Assuming you’ve actually taken the time to read the above, you might have noticed that there are five basic patterns of encoding usage:



  1. No encoding ever applied.
  2. HTML and/or attribute encoding, as appropriate (with a bit of additional URL and/or javascript encoding applied when appropriate), applied all the time.
  3. Attribute encoding applied for attributes, but no encoding applied for other text.
  4. Optional encoding set via a boolean property, defaulting to applying the encoding.
  5. Optional encoding set via an enumerated property, defaulting to not applying the encoding.

If this strikes you as perhaps a wee bit inconsistent, you wouldn’t be alone. Wouldn’t it be great to see a consistent approach that telegraphs well and acts as a pit of success? If all the controls performed HTML encoding by default but allowed overriding when necessary (preferably via a single approach), the vast majority of developers writing for ASP.NET would end up generating more secure, more reliable applications with considerably less effort.


Workarounds


While we’re all waiting around for the ASP.NET team to eventually provide reasonable built-in support for HTML encoding, what can we do to ensure that our apps are protected from both HTML injection and character mis-rendering? A good starting point would be to fully encode all data (i.e.: anything not 100% known at compile time, and even some stuff that is) that will be pushed to the client browser. Unfortunately, as was already mentioned above, the built-in encoding scheme leaves a little something to be desired. Luckily, the ACE team folks at Microsoft have been working on a couple of tools that take a more robust approach to HTML (and URL and script) encoding. Rather than blacklisting a fixed set of potentially problematic characters for encoding, they whitelist a set of known safe characters (low ASCII a-z, A-Z, 0-9, space, period, comma, dash, and underscore for HTML encoding) and encode everything else. This quite nicely takes care of both security and appearance issues, and you may wish to seriously consider using this approach rather than calling System.Web.HttpUtility.HtmlEncode to perform your HTML encoding.
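
To illustrate the whitelisting idea, here’s a rough sketch (my own simplification, not the ACE team’s implementation; if their library fits your project, use it instead):

using System.Text;

public static class WhitelistHtmlEncoder
{
	// Encodes everything except a small set of known-safe characters.
	public static string Encode(string input)
	{
		if (input == null)
		{
			return null;
		}

		StringBuilder output = new StringBuilder(input.Length * 2);
		foreach (char c in input)
		{
			bool safe = (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
				(c >= '0' && c <= '9') ||
				c == ' ' || c == '.' || c == ',' || c == '-' || c == '_';
			if (safe)
			{
				output.Append(c);
			}
			else
			{
				// Numeric character reference for anything not on the whitelist.
				output.Append("&#").Append((int)c).Append(';');
			}
		}
		return output.ToString();
	}
}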


Regardless of which HTML encoding approach you select, you’re quickly going to run into a bit of trouble with double encoding if you simply start assigning pre-encoded text to control properties (e.g.: someTextBox.Text = HttpUtility.HtmlEncode(someString)). When dealing with malicious input, this is pretty much a non-issue. However, not all data that ought to be encoded is malicious, and you usually wouldn’t want users seeing stuff like a &gt; b rather than a > b. Unfortunately, if we want to avoid double encoding in the set of controls that perform non-overrideable encoding (including attribute encoding), we need to use custom controls. To make matters worse, it can require rather a lot of work to subclass most of the controls in order to override the encoding behaviour. In quite a few cases, simply starting from scratch would probably make more sense than trying to subclass the built-in controls. Also, even for those controls where double encoding wouldn’t be an issue (e.g.: Label, CheckBox), it’s probably worth considering using custom controls anyway since the pain of authoring the custom control isn’t likely to outweigh the cumulative effort of all the manual encoding calls you might make across all your projects.


Don’t like these workarounds? Maybe it’s time to start complaining