I’ve frequently wondered why folks who heartily embrace a particular software quality attribute seem to lose all interest in any others. Now, if you’ve read any of my other postings, you’re probably wondering what right a security wonk like me has to be casting aspersions on the hue of anyone else’s kettle. Well, the truth is that, while I happen to be very interested in security, that interest doesn’t mean that I necessarily value security over other quality factors. It simply means that I want to know how to evaluate and implement security aspects of software appropriately, not that I think it’s always of high importance on any given project.
Unfortunately, I’ve been seeing quite a strong tendency recently toward systematic promotion of a given quality factor over all others, and Roy Osherove’s recent promotion of “Testable Object Oriented Design” (or “TOOD”) struck a bit of a nerve for me. Personally, I’m a great fan of testability, but I’m less than thrilled to see it promoted at the expense of other quality factors, and Roy’s post is far from the first time that I’ve seen this occur.
But isn’t it a cute little TOOD?
So what’s wrong with what Roy proposes anyway? If making everything public, virtual, and injectable/swappable by default increases testability, why not just go ahead and do it? Unfortunately, there are three fairly large problems that seem to escape most folks who are focused on testing:
- Increasing the public (including virtual) portions of your API has a negative impact on its maintainability.
Once something is public, changing it becomes a breaking change with potential impact on existing consumers. If someone uses my private classes and/or methods, and I make a change to the private interface, and their code breaks, that’s their problem. However, if I make something public or virtual, I become responsible for ensuring that the interface does not change since anyone might be using any part of it. Maintaining that public behemoth could become a very big and ugly problem for me over the long term, and that’s a pretty nasty hit to take for the sake of testability.
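To make the maintenance trade-off concrete, here’s a contrived sketch (the class and member names are invented for illustration, not taken from any real API):

```csharp
public class ReportGenerator
{
    // Private helper: I can rename it, change its parameters, or delete
    // it outright in the next release, because no external code can
    // legitimately depend on it.
    private string FormatLine(string name, int count)
    {
        return string.Format("{0}: {1}", name, count);
    }

    // The "testable" version of the same helper: public and virtual.
    // From the moment this ships, renaming it, changing a parameter,
    // or sealing it is a breaking change for every consumer and every
    // subclass out there.
    public virtual string FormatLineForCallers(string name, int count)
    {
        return string.Format("{0}: {1}", name, count);
    }
}
```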
- Increasing the public portions of your API has a negative impact on its supportability.
If you publish an API, you’ve pretty much got to document every public corner of it. Otherwise, customer inquiries (and often complaints) are going to start piling up regarding the lack of documentation around bits and pieces that folks want to use. You’ll eat team time either creating documentation or dealing with customer issues around lack of documentation, and I’d rather spend that time tackling the backlog.
- You need to do a lot more testing if you want to increase your public and virtual APIs without a deleterious effect on correctness/reliability.
Roy touches on this point tangentially when he mentions that one might sometimes want to seal a class for security reasons. However, the problem isn’t restricted to potential security risks. For example, when one extracts a private method to decrease the size or complexity of another method, one generally doesn’t bother validating the inputs to the new private method. (By the way, this isn’t just laziness – quite often adding the validation would have a negative impact on performance, particularly if the extracted method is invoked from within a loop. And guess what? Performance just happens to be another of those pesky quality factors that compete for our attention on any given project.) However, if one makes the method public, one becomes responsible both for validating its inputs and for testing not only the “happy path” functioning of the method but also its response to bad inputs and bad state.
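As a quick sketch of what I mean (the types and the discount logic here are hypothetical, invented for illustration):

```csharp
using System;
using System.Collections.Generic;

public class Order
{
    public decimal Total;
}

public class OrderProcessor
{
    public void Process(IList<Order> orders)
    {
        if (orders == null)
            throw new ArgumentNullException("orders");

        foreach (Order order in orders)
        {
            // The extracted helper can trust its caller, so there is no
            // per-iteration validation cost inside the loop.
            ApplyDiscount(order);
        }
    }

    // Private: the only caller has already validated the input.
    private void ApplyDiscount(Order order)
    {
        order.Total *= 0.9m;
    }
}
```

Make ApplyDiscount public, and it must start validating its own inputs – and its behaviour on null arguments and bad state must be tested alongside the happy path.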
The testing story gets even more complex when one considers potential inheritance. If you don’t seal a class, you really ought to be testing that it behaves as expected regardless of whatever bits of oddness a subclass might implement. Some subclass misbehaviour may be intentional (causing the potential security problems to which Roy refers), but most will likely be unintentional issues introduced by subclass developers who try to do things that you never expected. Unfortunately, designing tests to cover such scenarios is quite difficult and time-consuming, and the chances of achieving decent coverage are pretty slim for anyone without a truly devious mind.
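For what it’s worth, here’s roughly what one such test might look like (all types here are hypothetical, and this covers exactly one flavour of misbehaviour for exactly one virtual member):

```csharp
using System;

public class PriceCalculator
{
    // Virtual "for testability" - which also means any subclass can
    // break the invariants the base class relies on.
    protected virtual decimal GetDiscountRate()
    {
        return 0.1m;
    }

    public decimal Calculate(decimal amount)
    {
        decimal rate = GetDiscountRate();

        // Defensive check the unsealed base class now needs, since a
        // subclass might return something absurd.
        if (rate < 0m || rate > 1m)
            throw new InvalidOperationException("Discount rate out of range.");

        return amount * (1m - rate);
    }
}

// The sort of deliberately devious subclass a thorough test suite
// would have to anticipate.
class MisbehavingCalculator : PriceCalculator
{
    protected override decimal GetDiscountRate()
    {
        return -5m;
    }
}

class SubclassingTests
{
    // Verifies that the base class defends itself against subclass
    // abuse rather than silently producing a nonsense result.
    public static void CalculateRejectsBogusSubclassRate()
    {
        PriceCalculator sut = new MisbehavingCalculator();
        try
        {
            sut.Calculate(100m);
            throw new Exception("Expected InvalidOperationException.");
        }
        catch (InvalidOperationException)
        {
            // Expected: the base class caught the bogus rate.
        }
    }
}
```

Multiply that by every override point and every invariant, and the cost of decent coverage becomes apparent.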
To be honest, I’ve never seen anyone do extensive testing around subclassing scenarios, and I certainly haven’t seen much mention of it in the TDD literature (not that I’ve been reading quite as much as I probably ought to in that space). My guess would be that there’s very little such testing going on out there, which makes the recommendation to make everything virtual by default for testability reasons strike me as a wee bit hypocritical…
Back to the tadpole
In case you missed it, Roy’s TOOD post was provoked by news that the next release of FxCop restricts the visibility of, and removes, functionality upon which his FxCop rule testing approach depends. Roy’s opinion is that the FxCop team has “decided to remove the little tiny shred of testability and automation” from their product. Those strike me as fighting words, and I feel that the FxCop team deserves a bit of support on this one.
For starters, restriction of visibility may be an inconvenience, but it’s certainly not a complete impediment to testing. With a bit of help from reflection, one can happily invoke those newly private methods to one’s heart’s content. Sure, it’s a bit more work and there’s a performance hit to take, but that’s a far cry from leaving not a “little tiny shred” of testability.
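As a sketch, a little reflection helper along these lines does the trick (the helper is mine; which method you’d actually invoke depends on the FxCop internals in question):

```csharp
using System;
using System.Reflection;

static class PrivateInvoker
{
    // Invokes a non-public instance method by name on the given target.
    public static object Invoke(object target, string methodName,
        params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(
            methodName,
            BindingFlags.Instance | BindingFlags.NonPublic);

        if (method == null)
            throw new MissingMethodException(
                target.GetType().FullName, methodName);

        return method.Invoke(target, args);
    }
}
```

A call would then look something like PrivateInvoker.Invoke(rule, "Check", target), where the rule instance and method name are whatever the newly private members happen to be.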
More importantly, there’s actually a very effective means of testing FxCop rules that remains available regardless of what the FxCop team might do to the innards of their assemblies. Instead of attempting to evaluate individual rules against individual targets for each test, why not process an FxCop output report and test its data against expected results? (A sketch follows the list below.) This has a few potential benefits over Roy’s proposed approach:
- It’s resistant to changes within the FxCop assemblies since it only deals with the output. (It’s even reasonably resistant to changes in output format since, as long as the format doesn’t change to the extent of modifying the contained data, it can be fairly easily transformed to one’s preferred processing format.)
- Its overall performance should be better since the FxCop engine doesn’t need to be restarted for each rule by target combination.
- It’s more thorough since each rule is tested against all available targets, thereby ensuring that violations aren’t created where they’re not expected.
- Adding new tests is easier since one only needs to specify expected violations, not select and specify rule by target combinations for test runs.
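Here’s a rough sketch of the report-processing idea. FxCop writes its report as XML; the element and attribute names below (Message, CheckId, Issue) are from my recollection of that format, so verify them against a real report before leaning on this:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml;

class FxCopReportTest
{
    // Extracts one "CheckId: issue text" string per reported issue.
    static List<string> ReadIssues(string reportPath)
    {
        XmlDocument report = new XmlDocument();
        report.Load(reportPath);

        List<string> issues = new List<string>();
        foreach (XmlNode message in report.SelectNodes("//Message"))
        {
            string checkId = message.Attributes["CheckId"].Value;
            foreach (XmlNode issue in message.SelectNodes("Issue"))
            {
                issues.Add(checkId + ": " + issue.InnerText);
            }
        }
        return issues;
    }

    static void Main(string[] args)
    {
        // args[0]: FxCop XML report; args[1]: text file of expected
        // violations, one per line. Any difference fails the run.
        List<string> actual = ReadIssues(args[0]);
        List<string> expected = new List<string>(File.ReadAllLines(args[1]));

        actual.Sort();
        expected.Sort();

        bool pass = actual.Count == expected.Count;
        for (int i = 0; pass && i < actual.Count; i++)
        {
            pass = actual[i] == expected[i];
        }

        Console.WriteLine(pass ? "PASS" : "FAIL");
        Environment.Exit(pass ? 0 : 1);
    }
}
```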
Now, this might not be quite the way that Roy would prefer to test his FxCop rules, but it works quite well, and it’s not particularly difficult to implement. Given this, doesn’t it seem more than a little harsh to accuse the FxCop team of making it next to impossible to test custom rules? Besides which, if there were one thing you could ask the FxCop team to work on, would it really be enhanced testability, or might it be the fabled SDK?
I completely agree with you regarding the arguments against opening up a particular API only for testability purposes. That’s why I wrote in my response to Roy’s post (http://blogs.msdn.com/ploeh/archive/2007/02/28/ToodToop.aspx) that TOOD should embrace BOTH testability AND object-orientation.
While I can’t look into Roy’s mind, I think he meant something other than what you seem to imply. At least my stance on TOOD is that it must embrace object-oriented principles as well as testability principles. If it sacrifices OOD to become testable, it’s not a TOOD – in that case, it’s only testable.
I think Roy’s list of TDD versus OO design principles was meant as an illustration of the opposing forces in these two paradigms. I don’t think he wanted to promote one over the other.
While I agree with your points about maintainability and supportability, I’ve also been embracing TOOD for years (that is, since before it had a name). The point is never to sacrifice one quality for the other (I know compromises sometimes need to be made, but these should be carefully balanced in each case).
From what you write, that sounds pretty close to the way you already develop software 🙂