If You’re Using “#if DEBUG”, You’re Doing it Wrong

I was going through some legacy code the other day, refactoring it all over the place, and I ran into many blocks of code wrapped in “#if DEBUG”.  Of course, after a bit of refactoring in a RELEASE configuration these blocks of code were quickly out of date (and by out of date, I mean no longer compiling).  A huge PITA.

For example, take the following code:

	public class MyCommand
	{
		public DateTime DateAndTimeOfTransaction;
	}

	public class Test
	{
		public void ProcessCommand(MyCommand myCommand)
		{
	#if DEBUG
			if (myCommand.DateAndTimeOfTransaction > DateTime.Now)
				throw new InvalidOperationException("DateTime expected to be in the past");
	#endif
			// do more stuff with myCommand...
		}
	}

If, while my active configuration is RELEASE, I rename-refactor DateAndTimeOfTransaction to TransactionDateTime, then the block of code in the #if DEBUG is now invalid: DateAndTimeOfTransaction will not be renamed within this block.  I will now get compile errors if I switch to the DEBUG configuration (or worse still, if I check in and my continuous integration environment does a debug build).

I would have run into the same problem had I been in a DEBUG configuration with all the “#if RELEASE” blocks.  Yes, I could create a configuration that defines both DEBUG and RELEASE and do my work in there.  Yeah, no, not going down that twisted path (for one thing, try doing it without manually adding a conditional compilation symbol).

I got to thinking about a better way.  It dawned on me that #if blocks are effectively comments in other configurations (assuming DEBUG or RELEASE; but true with any symbol) and that comments are apologies, and it quickly became clear to me how to fix this.

Enter the Extract Method refactoring and conditional methods.  If you’ve been living under a rock or have simply forgotten, there exists a type in the BCL: System.Diagnostics.ConditionalAttribute.  You put this attribute on methods whose calls may or may not be included in the resulting binary (IL).  But, those methods are still compiled (and thus syntax checked).  So, if I followed the basic tenet for code comment smells and performed an Extract Method refactoring and applied the ConditionalAttribute to the resulting method, I’d end up with something like this:

		public void ProcessCommand(MyCommand myCommand)
		{
			CheckMyCommandPreconditions(myCommand);
			// do more stuff with myCommand...
		}

		[Conditional("DEBUG")]
		private static void CheckMyCommandPreconditions(MyCommand myCommand)
		{
			if (myCommand.DateAndTimeOfTransaction > DateTime.Now)
				throw new InvalidOperationException("DateTime expected to be in the past");
		}


Now, if I perform a rename refactoring on MyCommand.DateAndTimeOfTransaction, all usages of DateAndTimeOfTransaction get renamed and I no longer introduce compile errors.

Code Contracts

If you look closely at the name I chose for the extracted method you’ll notice that what this DEBUG code is actually doing is asserting method preconditions.  This sort of thing is directly supported by Code Contracts, which implement a concept called Design by Contract (DbC).  One benefit of DbC is that these checks are effectively proved at compile time, so there’s no reason to perform the check at runtime or even write unit tests for it: code that violates a contract becomes a “compile error” with Code Contracts.  But, for my example I didn’t use DbC, despite it potentially being a better way of implementing this particular code.

Anyone know what we call a refactoring that introduces unwanted side-effects like compile errors?

[UPDATE: small correction to the description of conditional methods and what does/doesn't get generated to IL based on comment from Motti Shaked]

Working with Subversion Part 2 (or Subversion for the Less Subversive)

In my previous Subversion (SVN) post I detailed some basic commands for using SVN for source code control.  If you’re working alone you could get by with Part 1.  But, much of the time you’re working in a team of developers on versioned software that will be deployed to multiple places.  This post details some more advanced commands for systematically using SVN to work on multiple editions and versions of a software project simultaneously (hence the “less subversive”).


The basic concept of working on two editions or two versions of a software project at the same time is called “branching”.  Each stream of simultaneous work is done on its own “branch” (or “fork”) of the project.  A branch can be considered a “copy”; but that’s generally only true at the very start of the branch: as files are added and removed, a branch ceases to be a “copy”.  Semantics aside, SVN doesn’t really have a concept of a “branch”; branching is a practice used by SVN users.  SVN really only knows how to “copy” and “merge”.  The practice is to create a standard location to contain branches, copy existing files and directories to a branch location, and merge changes from a branch to another location (like the trunk).  To support branching, many repositories (repos) have a “trunk” and a “branches” folder in the project root.  Trunk work is obviously done in the “trunk” folder and branches are created in the “branches” folder.

Branching Strategies

Before getting too much farther talking about things like “trunk”, it’s a good idea to briefly talk about branching strategies.  The overwhelmingly most common branching strategy for SVN users has been the “trunk” or “main line” strategy.  This strategy basically assumes there is one “main” software product that evolves over time, and that work may spawn off from this main line every so often to proceed independently and potentially merge back in later.  In this strategy the “current” project is in the “trunk”.

Another strategy is sometimes called Branch per Release, which means there’s a branch that represents the current version of the project, and when that version is “released” most work transitions to a different branch.  Work can always move from one branch to another; but there’s no consistent location for project files over time.  This is a perfectly acceptable strategy and almost all Source Code Control (SCC) systems support it.  But the lack of a consistent location for files makes discovery difficult, and it really forces the concept of branching onto a team.  I’ve never found this strategy to be very successful with the teams I’ve worked on, so I prefer the trunk strategy.


Branching is fairly easy in SVN.  The recommended practice is to perform a copy on the server, then pull down a working copy of that branch to work on.  This can be done with the svn copy command, for example:

svn copy http://svnserver/repos/projectum/trunk http://svnserver/repos/projectum/branches/v1.0 -m "Created v1.0 branch"

…which makes a copy of the trunk into the branches/v1.0 folder.

Now, if you checkout the root of the project (e.g. svn checkout http://svnserver/repos/projectum) you’ll have all the files in the trunk and all the branches.  You can now edit a file in branches/v1.0 in your working copy and svn commit will commit that change to the branch.  If you want to work with just a branch in your working copy, you can checkout just the branch.  For example:

svn checkout http://svnserver/repos/projectum/branches/v1.0 .

…which makes a copy of all the files/directories in v1.0 in the current local directory.  So, if you had …/branches/v1.0/readme.txt in the repo, you’d now have readme.txt in the current local directory.

The same holds true for the trunk: if you want to work on files in the trunk independently of any branches, checkout just the trunk, for example:

svn checkout http://svnserver/repos/projectum/trunk .

It’s useful to work with just the trunk or just a branch because you may accumulate many branches over time, and pulling down the trunk and all the branches becomes increasingly time-consuming.

While SVN doesn’t really have the concept of a “branch”, it does know about copies of files and tracks changes to those copies.  So, if you show a log of the changes to a file you’ll see the commit comments from all the branches too.  For example, edit readme.txt in the branch directory and commit the change (svn commit -m "changed readme.txt", for example), then go back to the trunk directory and show the log of readme.txt:

svn log -v readme.txt

…you’ll see the commit comments for both trunk/readme.txt and branches/v1.0/readme.txt.  For example:

r2 | PRitchie | 2011-11-17 10:23:36 -0500 (Thu, 17 Nov 2011) | 1 line
Changed paths:
   A /trunk/readme.txt
initial commit


Okay, you’ve been working in v1.0 for a few days now, committing changes.  One of those changes was a fix for a bug someone reported in v1.0.  You know that bug is still in the trunk and it’s time to fix it there too.  Rather than perform the same editing steps again in the trunk, you can merge that change from the v1.0 branch into the trunk.  For example, a typo fixed in readme.txt needs to be merged into the trunk, from a clean trunk working copy (no local modifications):

svn merge http://svnserver/repos/projectum/branches/v1.0

This merges the changes from v1.0 into the working copy of the trunk.  You can now review the merges to make sure they’re what you want, then commit them:

svn commit -m "merged changes from v1.0 into trunk"

Merge Conflicts

Of course, sometimes changes are made in the trunk and a branch that conflict with each other (same line was changed in both copies).  If that happens SVN will give you a message saying there’s a conflict:

Conflict discovered in 'C:/dev/projectum/trunk/readme.txt'.
Select: (p) postpone, (df) diff-full, (e) edit,
        (mc) mine-conflict, (tc) theirs-conflict,
        (s) show all options:

You’re presented with several options.  One is to postpone, which means the conflict is recorded and you’ll resolve it later.  You can see the differences with diff-full.  Or you can accept one side wholesale with mine-conflict or theirs-conflict: to accept the local file, use mine-conflict.  There’s also edit, which will let you edit the merged file with the conflicts marked.  For example:

<<<<<<< .working
three
=======
four
>>>>>>> .merge-right.r9

This shows that your working file has a changed second line “three” but the merged-in version has a changed second line “four”.  The lines between <<<<<<< .working and ======= are the local change, which is followed by the remote change between ======= and >>>>>>> .merge-right.r9.  The “.merge-right.r9” is diff syntax telling you which side the change came from and which revision (r9 in this case).  You can edit all that diff syntax out and get the file the way you want to merge it, then save it.  SVN will notice the change and present the options again:

Select: (p) postpone, (df) diff-full, (e) edit, (r) resolved,
        (mc) mine-conflict, (tc) theirs-conflict,
        (s) show all options:

Notice you now have a resolved option.  If your edits fixed the conflict you can choose resolved to tell SVN the conflict is gone.  You can then commit the changes and the merges will be committed into the repo.

Merging Without Branches

Of course branches aren’t the only source for merges.  You might be working on a file in the trunk that a team member is also working on.  If you want to merge any changes they’ve committed into your working copy, you can use the update command.  For example:

svn update

This will merge any changed files with your local files.  Any conflicts will appear the same way they did with svn merge.

It’s important to note that with SVN you can’t commit changes if your working copy is out of date with the repo (e.g. someone committed a change after you performed checkout).  If this happens you’ll be presented with a message similar to:

Transmitting file data .
svn: E155011: Commit failed (details follow):
svn: E155011: File 'C:\dev\projectum\trunk\readme.txt' is out of date
svn: E170004: Item '/trunk/readme.txt' is out of date

This basically just means you need to run svn update before you commit to perform a merge and resolve any conflicts.

Mouse Junkies

If you don’t generally work with command-line applications and don’t care for the speed you gain by not using the mouse, there are some options for working with SVN.


TortoiseSVN is a Windows Explorer extension that shows SVN-controlled directories and files differently within Explorer.  It shows directories and files that have been modified or are untracked with different icons (well, icon overlays).  It also lets you perform almost all SVN commands from within Explorer (via context menu).


VisualSVN provides much of the same functionality as TortoiseSVN but does it within Visual Studio.  It’s not a Source Code Control Visual Studio extension, which is interesting because you can use VisualSVN and another source code control extension (like TFS) at the same time.  VisualSVN requires TortoiseSVN to work correctly, so install that first.

Both TortoiseSVN and VisualSVN make dealing with merge conflicts easier.  I recommend using these for merging instead of the command-line.



“Explicit” Tests with Resharper

NUnit introduced a feature called Explicit tests (a long time ago, I believe) that basically means a test is treated as if tagged Ignore unless the test name is explicitly given to the NUnit runner.

This is useful if you have tests that you don’t want run all the time.  Integration tests or tests highly coupled to infrastructure or circumstance come to mind…  But, it’s difficult to automate these types of tests because you always have to maintain a list of test names to give to the runner.

The ability of NUnit to run explicit tests aside, I don’t generally use the NUnit runner directly; I use other tools that run my tests.  I use tools like Resharper to run my tests within Visual Studio, ContinuousTests for continuous testing, and TeamCity to run my tests for continuous integration.

Continuous integration is one thing: I can configure it to run specific test assemblies with specific configuration, and get it to run whatever unit tests I need for whatever scenario.

Within Visual Studio is another story.  I sometimes want to run tests in a class or an assembly but not all the tests.  At the same time I want the ability of the runner to run tests it wouldn’t normally run without having to edit and re-compile code.

With Resharper there are several ways you can do this.  One way is to use the Ignore attribute on a test.  This is effectively the same as the NUnit Explicit attribute.  If I run the test specifically (like having the cursor within the test and pressing Ctrl+U+Ctrl+R) Resharper will still run the test.  If I run all tests in solution/project (Ctrl+U+Ctrl+L/right-click project, click Run Unit Tests) the test is ignored.  This is great; but now this test is ignored in all of my continuous integration environments.  Sad Panda.

If you’re using NUnit or MSTest (amongst others that I’m not as familiar with) as your testing framework you can tag tests with a category attribute (MS Test is “TestCategory”, NUnit is “Category”).  Once tests are categorized I can then go into Resharper and tell it what category of tests to “ignore” (Resharper/Options, under Tools, select Unit Testing and change the “Don’t run tests from categories…” section in Resharper 6.x).  Now, when I run tests, tests with that category are effectively ignored.  If I explicitly run a test (cursor somewhere in test and I press Ctrl+U+Ctrl+R) with an “ignored” category Resharper will still run it.  I now get the same ability as I did with the Ignore attribute but don’t impact my continuous integration environment.  I’ve effectively switched from an opt-in scenario to an opt-out scenario.

With the advent of ContinuousTests, you might be wondering why bother.  That’s a good question.  With ContinuousTests, only the tests affected by the changes you’ve just saved are run, automatically, in the background.  In fact, having any of your tests run whenever you make a change that affects them is one reason why I make some tests “explicit”.  I tend to use test runners as hosts for experimental code, code that often will become unit tests.  But, while I’m fiddling with that code I need to make sure it’s only run when I explicitly run it; having it run in the background because I changed something that affects it isn’t always what I want.  So, I do the same thing with ContinuousTests: have it ignore certain test categories (ContinuousTests/Configuration (Solution), Various tab, Tests categories to ignore).

Test Categorization Recommended Practices

Of course, there’s nothing out there that really conveys any recommendations about test categorization.  It’s more or less “here’s a gun, don’t shoot yourself”…  And for the most part, that’s fine.  But, here’s how I like to approach categorizing tests:

First principle: don’t go overboard.

Testing frameworks are typically about unit testing—that’s what people think of first with automated testing.  So,  I don’t categorize unit tests.  These are highly decoupled tests that are quick to run and I almost always want to run these tests.  If the tests can’t always run or I don’t want them run at any point in time, they’re probably not unit tests.

Next, I categorize non-unit tests by type.  There are several other types of tests: Integration, Performance, System, UI, Infrastructure, etc.  Not all projects need all these types of tests; but these other tests have specific scenarios where you may or may not want them run.  The most common, from what I’ve noticed, is Integration.  If a test has a large amount of setup, requires lots of mocks, is coupled to more than a couple of modules, or takes a long time to run, it’s likely not a unit test.

Do you categorize your tests?  If so, what’s your pattern?

Working with Subversion, Part 1

Working with multiple client projects and keeping abreast of the industry through browsing and committing to open source and other people’s libraries means working with multiple source code control (SCC) systems.  One of the systems I use is Subversion (SVN).  It’s no longer one of the SCCs I use most often, so I tend to come back to it after long pauses and my SVN fu is no longer what it used to be.  I’m sure my brain is damaged from this form of “task switching”, not to mention the time I spend trying to figure out the less common actions I need to perform on a repository (repo).  I usually spend more than a few minutes digging up the commands I need for the once-in-a-decade actions I need to perform.

I don’t foresee getting away from SVN in the near future; so, I thought I’d aggregate some of these commands in one place.  My blog is the perfect place to do that (because it’s just for me, right? :)

Backing Up

Outside of adding/committing, the most common action is backing up the repository.  Unfortunately for my brain, this is automated and I don’t see it for months at a time.  To back up an SVN repository that is hosted by third-party software (like VisualSVN Server), I like dump/load:

svnadmin dump repo-local-path > repo-bkp-path

This lets you restore to the host, which contains all the configuration data like permissions, users, and hooks.

If the repository is completely autonomous (i.e. just a directory on your hard drive, maybe with an SVN daemon) then hotcopy is better:

svnadmin hotcopy local-path destination-path


If you used the dump method of backing up, you need to use the load command to put the backup into an existing repository.  If you’re not using a hosted repository, you’ll first need to create the repository (svnadmin create repo-local-path) before you can run load (in which case I’d recommend using hotcopy instead).  To load the dump into the existing repository:

svnadmin load repo-local-path < repo-bkp-path

If you’ve used hotcopy then the backup is a fully functional repository; just make it available to users (i.e. put it where you want it :).


Migrating is basically just a backup and restore.  If you’re backing up one repository and putting it into an existing repository, use dump/load.  On System A:

svnadmin dump repo-local-path > repo-bkp-path

On System B, after copying repo-bkp-path from System A:

svnadmin load repo-local-path < repo-bkp-path

Even if you weren’t migrating to an existing repo, you could use this method; just add svnadmin create repo-local-path before svnadmin load.  The dump/load method has the added benefit of upgrading the data from one format to another if both systems don’t have the same version of SVN running.  The drawback of migrating with dump/load is that you’ll have to configure manually (or manually copy from the old repo) to get permissions, hooks, etc…

Now that you’ve migrated your repo to another computer, existing working copies will be referencing the old URL.  To switch them to the new URL, perform the following:

svn switch --relocate old-repo-remote-URL new-repo-remote-URL

Creating Working Copy

If you don’t already have a local copy of the repo to work with, the following command will create one:

svn checkout repo-remote-URL working-copy-path

Committing Changes

I added this section because I’ve become used to Git.  Git has a working directory and staging area model; you stage files from the working directory before committing.  This allows you to selectively commit modifications/additions.  SVN is different in that the working directory is the staging area, so you effectively have to commit all modifications at once (you can stage adds, because you manually tell SVN which files to start controlling).

svn status will tell you what’s modified (M) and what’s untracked (?).  To commit all modified files:

svn commit -m "description of changes included in the commit"

Undoing Add

Sometimes you schedule something for add on commit by mistake, or it’s easier to add by wildcard and then remove the files you don’t want from your next commit.  To remove them from the next commit:

svn revert file-path

Be careful with revert because if the file is already controlled this will revert your local modifications.

Undoing Modifications

To revert the modifications you’ve made locally and restore a file to the current revision in the repo:

svn revert file-path




What’s your favourite SVN workflow?

Getting a Quick Feel for a New Software Team

I deal with many different teams and many different companies.  I see a lot of teams that don’t get the benefit of known techniques and fail to evaluate and improve.  There are some checklists out there that are used to evaluate a company/team before joining it; but I find the lists to be deeply rooted in the past.  They detail such fundamental things that knowing a team meets the criteria on the list really doesn’t tell you much about how your time with the team will be or your role within it.

There are so many low-level practices within a software development team that aid in successfully delivering software.  But these low-level practices are part of higher-level initiatives; if those are absent, the lower-level practices are almost moot.  Take "Do you use source code control?" for example.  It sounds like a fundamental part of being able to deliver quality software; and it is.  But, on its own it doesn’t really do a whole lot to help the team deliver software.  Is the code put into source code control using accepted design practices?  Is the code put into source code control in a timely fashion?  Is the code put into source code control without impeding other people?  Etc., etc.  "Yes" to "do you use source code control" without any of the follow-on initiatives doesn’t give me a warm and fuzzy feeling about getting on the team and focusing on software value rather than spending a lot of time on, or stuck in, process.

Over the years I’ve been on many teams hired for a specific role that changed drastically almost as I began working with the team.  I’ve observed many things about many teams and have come up with some things I like to find out about a team before I start so I can better gauge the type of work that I’ll be doing and how successful the team will be fulfilling their goals.

Does the team:

  • have a known product owner/sponsor,
  • have a cross-functional team 6-9 people in size,
  • use appropriate tools,
  • foster SOLID design principles,
  • use continuous integration,
  • use continuous deployment,
  • foster communications with team members and stakeholders,
  • have a known and managed process and visualize workflow,
  • evaluate and improve process as a team,
  • have new candidates write code in their interviews,
  • have a plan that limits work in progress,
  • have a plan that orders or prioritizes tasks;

and, how easy is it to change to start doing any of the above items?

“Do you use source code control” is covered by “Does the team use continuous integration” as it’s not just about using source code control, it’s about a process that can’t function properly without source code control.

And, for what it’s worth, this list doesn’t tell you whether you should work on a team or not; it just tells you the type of work you’ll be doing.  It’s up to you to dig deeper and decide whether you want to (perhaps unrealistically) limit your work to a specific role or a small set of roles.  I would only use the last question as an acid test of whether I would join a team: if they’re not willing to improve, there’s not much I’m going to be able to do for them.

What do you look for in a team/project?