There’s quite possibly only one person in the world reading this blog who doesn’t think
it’s got anything to do with Vista. The windows in the title have nothing to do with
Microsoft, and I’m making no assertions whatsoever about how much unit testing gets done
there.
The one person who understands the title without reading the article is Stuart,
who lent me The Tipping Point
before callously leaving for ThoughtWorks,
a move which has significantly reduced my fun at work, with the slight compensation
that my fashionable stripy linen trousers don’t get mocked quite as much. The Tipping
Point is a marvellous book, particularly relevant for anyone interested in cultural
change and how to bring it about. I’m not going to go into too much detail about the
main premises of the book, but there are two examples which are fascinating in and
of themselves and show a possible path for anyone battling with introducing
agile development practices (and unit testing in particular) into an existing
environment and codebase.
The first example is of a very straightforward study: look at unused buildings, and
how the number of broken windows varies over time, depending on what is done with
them. It turns out that a building with no broken windows stays “pristine” for a
long time, but that when just a few windows have been broken, many more are likely
to be broken in a short space of time, as if the actions of the initial vandals
give permission to other people to break more windows.
The second example is of subway trains in New York, and how an appalling level
of graffiti on them in the 80s was vastly reduced in the 90s. Rather than trying
to tackle the whole problem in one go by throwing vast resources at the system,
or by making all the trains moderately clean, just a few trains were selected
to start with. Once they had been cleaned up, they were never allowed to run
if they had graffiti on them. Furthermore, the train operators noticed a pattern
in terms of how long it would take the “artists” in question to apply the graffiti,
and they waited until three nights’ work had been put in before cleaning the
paint off. Once one set of trains had been transformed, they were easier to keep
clean, thanks to the “broken windows” effect above and the demotivating effect
of the cleaning on the artists. It was then possible to move on to the next set,
get them clean and “stable”, then move on again.
I’m sure my readership (pretentious, eh?) is bright enough to see where this is
leading in terms of unit testing, but this would be a fairly pointless post if I
stopped there. Here are some guidelines I’ve found to be helpful in “test infecting” code,
encouraging good practice from those who might otherwise be sloppy (including myself)
and keeping code clean once it’s been straightened out in the first place. None of
them are original, but I believe the examples from The Tipping Point cast them in
a slightly different light.
Test what you work with
If you need to make a change in legacy code (i.e. code without tests), write
tests for the existing functionality first. You don’t need to test all of it,
but do your best to test any code near the points you’ll be changing. If you
can’t test what’s already there because it’s a
Big Ball of Mud
then refactor it very carefully until you can test it. Don’t start
adding the features you need until you’ve put the tests in for the refactored
functionality, however tempting it may be.
Anyone who later comes to work on the code should be aware that there are
unit tests around it, and they’re much more likely to add their own for whatever
they’re doing than they would be if they were having to put it under test
for the first time themselves.
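To make that a little more concrete, here’s a minimal sketch of the kind of test I mean, assuming JUnit 4 is available. PriceFormatter is an invented stand-in for whatever legacy class you’re about to change; the test simply pins down what it currently does:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Invented stand-in for the legacy class you're about to work near.
    class PriceFormatter {
        String format(int pounds) {
            return pounds + ".00";
        }
    }

    public class PriceFormatterTest {
        // Characterisation test: records what the existing code actually does,
        // so later changes nearby can't silently alter it.
        @Test
        public void formatsWholePoundsWithTwoDecimalPlaces() {
            assertEquals("12.00", new PriceFormatter().format(12));
        }
    }

Note that the test isn’t claiming the current behaviour is right – only that it shouldn’t change by accident while you’re adding the feature you actually came for.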
Refactor aggressively
Once code is under test, even the nastiest messes can usually be brought under
control gradually. If that weren’t the case, refactoring wouldn’t be much use,
as we tend to write terrible code when we first try. (At least, I do. I haven’t
seen much evidence of developers whose sense of design is so natural that
elegance flows from their fingers straight into the computer without at
least a certain amount of faffing. Even if they got it right for the current
situation, the solution isn’t likely to look nearly as elegant in a month’s
time when the requirements have changed.)
If people have to modify code which is hard to work with, they’ll tend to
add just enough code to do what they want, holding their nose while they do it.
That’s likely to just add to the problem in the long run. If you’ve refactored
to a sane design to start with, contributing new elegant code (after a couple
of attempts) is not too daunting a task.
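As a hedged illustration (all the names and logic here are invented), here’s the same behaviour before and after a small refactoring of the kind that only feels safe once tests are guarding it:

    import java.util.List;

    // Order exists purely for this sketch.
    class Order {
        private final boolean paid;
        private final List<String> items;

        Order(boolean paid, List<String> items) {
            this.paid = paid;
            this.items = items;
        }

        boolean isPaid() { return paid; }
        List<String> getItems() { return items; }
    }

    class ShippingRules {
        // Before: nested conditionals that invite "just one more branch in here".
        boolean canShipBefore(Order order) {
            if (order != null) {
                if (order.isPaid()) {
                    if (!order.getItems().isEmpty()) {
                        return true;
                    }
                }
            }
            return false;
        }

        // After: the same behaviour with guard clauses, so the next change is
        // a small, obvious addition rather than another layer of nesting.
        boolean canShipAfter(Order order) {
            if (order == null || !order.isPaid()) {
                return false;
            }
            return !order.getItems().isEmpty();
        }
    }

The specific shape matters less than the fact that the tests let you change the shape at all without holding your breath.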
Don’t tinker with no purpose
This almost goes against the point above, but not quite. If you don’t need
to work in an area, it’s not worth tinkering with it. Unless someone (preferably
you) will actually benefit from the refactoring, you’re only likely to provoke
negative feelings from colleagues if you start messing around. I had a situation
like this recently, where I could see a redundant class. It would have taken maybe
half an hour to remove it, and the change would have been very safe. However,
I wasn’t really using the class directly. Not performing the refactoring didn’t
hurt the testing or implementation of the classes I was actually changing, nor was it
likely to do so in the short term. I was quite ready to start tinkering anyway,
until a colleague pointed out the futility of it. Instead, I added a comment suggesting
that the class could go away, so that whoever really does end up in that area
next at least has something to think about right from the start. This is as much
about community as any technical merit – instead of giving the impression that
I had my own personal “not invented here” syndrome (and not enough “real work”
to do), the comment will hopefully provoke further thought into the design decisions
involved, which may affect not just that area of code but others that colleagues work
on. Good-will and respect from colleagues can be hard won and easily lost, especially
if you’re as arrogant as I can sometimes be.
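For what it’s worth, the note I mean is the sort of thing sketched below – the class names are invented, but the shape of it is the point:

    /**
     * TODO: this class looks redundant - RequestValidator (if it really is
     * equivalent) appears to cover the same ground. If you end up working in
     * this area, consider removing it and pointing callers at RequestValidator.
     */
    class LegacyRequestChecker {
    }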
Don’t value consistency too highly
The other day I was working on some code which was basically using the wrong naming
convention – the C# convention in Java code. No harm was being done, except everything
looked a bit odd in the Java context. Now, in order to refactor some other code towards
proper encapsulation, I needed to add a method in the class with the badly named methods.
Initially, I decided to be consistent with the rest of the class. I was roundly (and
deservedly) told off by the code reviewer (so much for the idea of me being her mentor –
learning is pretty much always a two-way street). As she pointed out, if I added another
unconventional name, there’d be even less motivation for anyone else to get things right in
the future. Instead of being a tiny part of the solution, I’d be adding to the problem.
Now, if anyone works in that class, I hope they’ll notice the inconsistency and be encouraged
to add any extra methods with the right convention. If they’re changing the use of an existing
method, perhaps they’ll rename it to the right convention. In this way, the problem can gradually
get smaller until someone can bite the bullet and make it all consistent with the correct
convention. In this case, the broken windows story is almost reversed – it’s as if I’ve
broken a window by going against the convention of the class, hoping that all the rest of
the windows will be broken over time too.
This was a tough one for me, because I’ve always been of the view that consistency
of convention is usually more important than the merit of the convention. The key here is that
the class in question was inconsistent already – with the rest of the codebase. It was only
consistent in a very localised way. It took me longer to understand that than it should have
done – thanks Emma!
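To make the naming situation concrete, here’s an invented sketch of roughly what it looks like: the existing methods keep their C#-style names for now, while anything new follows the normal Java convention:

    class AccountStatement {
        private double balance;

        // Existing methods use the C# naming convention (PascalCase). They're
        // left alone for now rather than renamed as part of an unrelated change.
        public double GetBalance() {
            return balance;
        }

        public void Credit(double amount) {
            balance += amount;
        }

        // The new method follows the Java convention (camelCase), nudging the
        // class towards the convention used by the rest of the codebase.
        public double calculateInterest(double rate) {
            return balance * rate;
        }
    }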
Conclusion
Predicting and modifying human behaviour is an important part of software engineering
which is often overlooked. It goes beyond the normal “office politics” of jockeying for
position – a lot of this is just as valid when working solo on personal code. Part of
it is a matter of making the right thing to do the easy thing to do, too. If
we can persuade people that it’s easier to write code test-first, they’ll tend to
do it. Other parts involve making people feel bad when they’re being sloppy – which
follows naturally from working hard to get a particular piece of code absolutely clean just
for one moment in time.
With the right consideration for how future developers may be affected by changes we make
today – not just in terms of functionality or even design, but in attitude – we can
help to build a brighter future for our codebases.
Comments
Fun to see that you’ve applied the Broken Windows theory to software engineering on your own – these are good observations. If you’re interested, Andy Hunt and Dave Thomas also use this theory to underline that you need to care about your code in their excellent book The Pragmatic Programmer. If you’d like an alternate view on whether broken windows matter as much as The Tipping Point claims, you should read Freakonomics by Steven D. Levitt. Apart from being a great read, much in the style of Gladwell’s books, it also gets your mind going about business intelligence.
Regards,
Anders
The book “Working Effectively with Legacy Code” by Michael Feathers is very helpful for getting some tests into those big balls of mud we sometimes have to deal with.
A good, interesting article. I’m all for “test infecting” the legacy code and getting some tests into those big balls of mud that we have to work on!