What Code Comments are Not For.

I deal a lot with other people’s and legacy code.  One of the things I see very often is what I call "misuse of code comments".  Let me provide an exhaustive list of what code comments are for:

  • Describing why code isn’t doing something obvious

There, done.

What code comments are not for (not complete):

The Obvious

// set the value of i
i = value;

It’s obvious that the code is doing that; the comment adds no value here.  The compiler provides no validation that the "i" in the comment really refers to any variable in the line the comment applies to.  Nor does it even know which line the comment applies to!  Comments like this actually introduce technical debt: I can refactor this code and move it around independently of the comment, and the comment would then appear to annotate some other line of code.  Refactoring tools help somewhat with this, but they really only do name matching in comments when you rename a variable.  Do you think renaming "i" to "count" really means replacing all "i"s in all comments with "count"?  Probably not; don’t use refactoring tools as a crutch.

Helping the Reader Learn the Language

You can’t possibly know what the reader of your code does and does not know.  This is especially true of what they do and don’t know of language syntax.  The language syntax is your common ground; you can only assume they know it.  You can’t possibly know whether a comment about language syntax is going to help the reader or not.  If they don’t know the syntax, they should look it up.  Comments that describe syntax are a no-win proposition; you can’t possibly write a comment that helps every reader of the code.

An example:
        /// <summary>
        /// Constructor for class.
        /// </summary>
        public MyClass()

If your reader doesn’t know this is a constructor, they probably don’t even know what a c’tor is—this comment isn’t going to help them much. 

Slightly different example:

/// <summary>
/// Initializes a new instance of the MyClass class
/// </summary>
public MyClass()

If the reader doesn’t know what a c’tor does, does all your code include comments that will help this reader?  Comments like these are a waste of time and add no value.  They carry the same technical debt as the Obvious: it’s not a syntax error to separate the comment from the declaration, so there is a risk the two will become disconnected or out of sync.  If the comment has no value, having to manage it also has no value and just adds work to the project.

Another example verging on Obvious:
public MyClass()
{
  // Empty
}

As this stands, it seems benign.  But one, it should be Obvious.  Two, if it’s not, the reader should be brushing up on language syntax.  Three, it’s not verified.  I can edit this c’tor to make it do something else, and it remains perfectly syntactically correct:

public MyClass()
{
  x = 42;
  // Empty
}

Now the comment is meaningless and potentially confusing.  Reading this for the first time makes you wonder: did the class just have // Empty in it in the past and x = 42 was added later?  Does "empty" mean something different to the author?  Or did the author suffer a stroke and possibly need medical attention?

You can assume the reader of your code doesn’t know anything about the code.  If the language can’t express the concepts in the code properly, comment why the language isn’t describing the concepts.  (If it can, you should be writing it that way; and if you choose not to, comment why.)


Writing comments to aid the reader’s understanding of the language amounts to describing HOW the language works.  That’s not describing the code; it’s describing the language.  Comments should describe WHY the code was written that way, if it’s not obvious.  Again, the language is the common denominator between the reader and the author.  There are many references the reader can turn to to learn the language; let them do that.  You may not be the best person to help someone learn the language; at the very least, you don’t know the degree to which they don’t know it.  Let the code clearly describe HOW.
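To contrast, here is a sketch of the kind of comment that pulls its weight: it records a WHY that the code itself cannot express.  The retry count and the vendor bug are invented for illustration.

```csharp
using System;

// Hypothetical example of a comment earning its keep: it records WHY,
// not WHAT. The retry count and the vendor bug number are made up.
byte[] ReadSensor(Func<byte[]> read)
{
    // Retry up to 3 times: the vendor's USB driver intermittently returns
    // an empty buffer on the first read after a resume (vendor bug #1234),
    // and nothing in the code itself can tell the reader that.
    for (int attempt = 0; attempt < 3; attempt++)
    {
        byte[] data = read();
        if (data.Length > 0)
            return data;
    }
    return Array.Empty<byte>();
}
```

Delete the comment and the loop looks like needless paranoia; that’s the sign the comment is carrying information the language can’t.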

Use of comments is often a form of religion; people are very opinionated about them one way or another.  Robert Martin pulls no punches in Clean Code, saying:

“The proper use of comments is to compensate for our failure to express ourself in code. Note that I used the word failure. I meant it. Comments are always failures.”

Martin has previously described comments as “apologies” for “making the code unmaintainable”…

If you want to use concepts like XMLDOC or tools like JavaDoc to document members and classes, that’s fine; just make sure your comments are meaningful and can stand alone.

For what it’s worth, these are comments that I have encountered; more than once and on more than one project.

Do you have some code comment pet peeves?  We’d love to hear them!


Avoid the volatile Modifier

[Update: 25-Jan-12 5:45 pm; fixed typo]

I was reminded recently of the misconceptions around the volatile modifier in C#, and I thought I’d pass along the recommendations of others, which are tantamount to “avoid the volatile modifier”.  The volatile modifier in C# “indicates that a field might be modified by multiple threads that are executing at the same time” [1].

The first problem is that documentation.  What does it really mean to someone developing code that uses multiple threads?  Does it make the code “thread-safe”?  The answer is “maybe”; the real answer is “rarely”.  Most people just stick volatile on a field because they think that’s what they need to do.

What the volatile modifier does to a field in C# is make all reads of the field use “acquire semantics” and all writes use “release semantics”.  Much clearer, right?  Acquire semantics means the access is “guaranteed to occur prior to any references to memory that occur after it in the instruction sequence”, and release semantics means the access is “guaranteed to happen after any memory references prior to the write instruction in the instruction sequence” [6,2].

One of the problems with modern compilers [3] and processors and multithreaded code is optimization.  Within a block of code with no externally visible side effects, the compiler is free to re-order and remove instructions as long as the same visible side effects result.  “x+=1;x+=1;” can be freely optimized to “x+=2;”, and “x+=1;y+=2;” can be reordered so that the effective order of execution is “y+=2;x+=1;”, for example.  The processor doesn’t optimize multiple instructions into one; but it can re-order instructions, and it can make the results of executed instructions visible to other cores/processors well after the instructions were executed (processor caching).  With fields, the compiler [3] has less freedom to optimize because side effects on fields have more visibility; but the processor doesn’t discern between C# fields and any other bit of memory.

So, what does volatile really do for a field?  Realistically, it prevents processor re-ordering and caching.  It tells the processor that all accesses to the value of that variable should come from, or be made directly to, memory, not the cache.
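To make the caching point concrete, here’s the classic scenario volatile is aimed at, sketched in C# (class and member names are mine): one thread spins on a flag that another thread sets.  Without volatile, the JIT is free to hoist the read of the flag out of the loop, so the worker might never see the write; whether that hoist actually happens depends on the JIT and build settings.

```csharp
using System;
using System.Threading;

class Worker
{
    // volatile forces every iteration of the loop below to re-read the
    // field from memory; without it, the read may be hoisted out of the
    // loop and the worker could spin forever.
    private volatile bool _stop;

    public void Run()
    {
        while (!_stop)
        {
            // do work...
        }
    }

    public void Stop() => _stop = true;
}
```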

Not having a value cached by the processor, so that all other processors (and thus all other threads) can see what happens to the value of a variable, seems like a really good thing, doesn’t it?  In that respect, yes, it is a good thing.  But let’s look at some of the drawbacks of volatile.

The syntax of volatile is that it just annotates a memory location; in reality, it modifies the operations that occur on that memory location.  Code that operates on that memory location looks the same whether the field is marked volatile or not, i.e. it’s unclear that the code has different side effects in multithreaded scenarios.  Another problem is that “volatile” is a very overloaded word.  It’s used in C++ and Java, but it means something different (sometimes slightly) in each; moving back and forth between C++, C#, and Java could lead to using volatile incorrectly.  Volatile also assumes that all accesses to the field need to be protected from re-ordering.  Most of the time there are only specific points at which you want to force side effects on a field to be made “visible” and, effectively, flush the processor’s cache to/from memory.  Paying that cost on every access is not very performant if you don’t need the side effects truly visible until those specific times.  Plus, the volatile modifier means nothing if you pass the field by reference somewhere else [4,5].  There are also limitations on what you can apply volatile to (e.g. you can’t have a volatile long).

What does volatile really mean from a code perspective?  Well, when you read from a volatile field effectively the following code is executed:

Thread.VolatileRead(ref field);

And when you write to a volatile field effectively the following code is executed:

Thread.VolatileWrite(ref field, newValue);

Both of these methods really just translate to a memory access and a call to Thread.MemoryBarrier (before the memory access for a read, after it for a write).  What MemoryBarrier does for us is ensure that all cached writes are flushed to memory at the call to MemoryBarrier (i.e. after MemoryBarrier, any cached writes that occurred prior to it are flushed and any cached reads are abandoned).  So, if you’ve noticed, there’s nothing field-specific about Volatile[Read|Write]; it makes everything flush, not just the field in question.  The performance aspect of the volatile modifier comes into play when you have multiple volatile fields within a class: it’s really only the last one that needs to use VolatileWrite when writing, and the first one that needs to use VolatileRead when reading.  To steal some code from Jeffrey Richter:

m_value = 5;
Thread.VolatileWrite(ref m_flag, 1);

is just as thread-safe as:

m_value = 5;
m_flag = 1;

if m_value and m_flag were volatile.

Same holds true for VolatileRead:

if(Thread.VolatileRead(ref m_flag) == 1)

is just as thread-safe as the following with volatile fields:

if(m_flag == 1)
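The Volatile[Read|Write] calls above can themselves be sketched in terms of Thread.MemoryBarrier; this is an illustration of the behavior just described, not the actual BCL source:

```csharp
using System.Threading;

// Sketch: a volatile read is an ordinary read followed by a full barrier.
int SketchVolatileRead(ref int address)
{
    int value = address;
    Thread.MemoryBarrier(); // nothing after this call may be moved before the read above
    return value;
}

// Sketch: a volatile write is a full barrier followed by an ordinary write.
void SketchVolatileWrite(ref int address, int value)
{
    Thread.MemoryBarrier(); // nothing before this call may be moved after the write below
    address = value;
}
```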

I feel compelled to bring up invariants here.  Sometimes you’re dealing with modifications or accesses of multiple things that can’t be done atomically; i.e. the “validity” of m_value depends on the value of m_flag, making the two fields an “invariant” that cannot be accessed by a single instruction, i.e. not atomically.  While the above example ensures that changes to m_flag and m_value are made visible to other threads “at the same time”, it does nothing to stop another thread from accessing m_value before m_flag has been updated.  This may or may not be “correct”.  If it’s not correct, using lock or Monitor to model atomic access to such an “invariant” that isn’t natively atomic is a better choice.
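A minimal sketch of that better choice, assuming the m_value/m_flag invariant from above (the class and member names are mine): both writes happen inside one critical section, so no thread can observe m_value between the two writes.

```csharp
using System;

class FlaggedValue
{
    private readonly object _guard = new object();
    private int m_value;
    private int m_flag;

    public void Publish(int value)
    {
        lock (_guard)      // Monitor.Enter: acquire semantics
        {
            m_value = value;
            m_flag = 1;
        }                  // Monitor.Exit: release semantics
    }

    // Returns true only once Publish has run; value is then valid.
    public bool TryRead(out int value)
    {
        lock (_guard)
        {
            value = m_value;
            return m_flag == 1;
        }
    }
}
```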

On the topic of lock/Monitor: it’s important to note that the end of a lock, or the call to Monitor.Exit, has release semantics, and the start of the lock, or Monitor.Enter (and variants), has acquire semantics.  So, if all access to a field is guarded with lock or Monitor, there’s no need for volatile or Thread.Volatile[Read|Write].

Using Monitor, lock, volatile, VolatileRead and VolatileWrite correctly shows that you understand what it means to be thread-safe, and that you understand what it means for a change to be externally visible, and when, in the context of your fields and invariants.

There have apparently been some discussions about the usefulness and applicability of volatile in C# as well as VB (and, I assume, C++11); but I was not privy to those discussions and haven’t been able to find much reference to them beyond third-hand information…  Needless to say, VB doesn’t have anything similar to volatile, apparently for a reason.

I’m not saying there aren’t perfectly valid scenarios for volatile; but, look carefully at what you need; you probably could make better use of VolatileRead or VolatileWrite.  Just understand your needs and use what is correct—code on purpose.

[1] volatile (C# Reference)

[2] Acquire and Release Semantics

[3] CSC and JIT compilers.

[4] Sayonara volatile

[5] Aug 2011 Getting Started with Threading webcast with Jeffrey Richter (45MB)

[6] C# Language Specification (ECMA-334)

C#, Async, Limits, oh my!

One of the great sessions at Codemash was a dual-speaker session with Bill Wagner and Jon Skeet—Async from the Inside.

In that session Bill and Jon describe (in great detail) the state machine that the C# compiler generates when it compiles async code involving the await keyword.  When the Async CTP was released, this state machine was one of the first things I noticed when I was reflecting through some generated code.  I too noticed the type of the state variable (int) and wondered, at the time, if that would be an issue.  All the information Bill, Jon and I portray is what we’ve learned from reflection; none of this is “insider information”.

Some background: a state machine is generated for each method that uses the await keyword.  It manages weaving back together the asynchronous calls that you so lovingly describe linearly in an async method.  The state machine not only weaves together the various “callbacks” that it generates, it also (now) manages the state changes of the data being modified between and by asynchronous calls.  Wonderfully complex stuff.

The state variable is basically an enum of the various states between async calls that the generated code can be in.  I won’t get into too much detail about that (you should have gone to Codemash like all the other cool people :) but needless to say, having a “limit” on the number of states a method can be in sounds a bit scary.
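To make that state variable concrete, here is a hand-simplified sketch of the *shape* of such a machine for a method with two awaits; the real compiler output differs considerably in detail (awaiter fields, builder plumbing, exception handling), and the state values here are illustrative.

```csharp
class SketchStateMachine
{
    public int _state = -1;   // the int in question: -1 = not yet at an await
    public int _result;
    public bool _done;

    // Each call plays the role of one resumption of the generated MoveNext.
    public void MoveNext()
    {
        switch (_state)
        {
            case -1:
                // ...code before the first await runs here...
                _state = 0;   // park at the first await
                return;
            case 0:
                // ...code between the two awaits runs here...
                _state = 1;   // park at the second await
                return;
            case 1:
                // ...code after the last await runs here...
                _result = 42;
                _done = true;
                return;
        }
    }
}
```

Each await adds roughly one case, which is why the range of the int state variable puts a ceiling on the number of awaits per method.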

Of course Jon, at one point, brought the same thing up in the session about the “int” state variable.  This reminded me that I wanted to look into it further—not because I wanted to break the compiler (directly) but to know what the limits are, if any.

A couple of days later, I had some time waiting around at airports on my way home, so I thought I’d test my theory.  If you follow me on Twitter you probably saw some of my discoveries in real time.  For those of you who didn’t, here’s an aggregation of what I found.

First of all, the state machine represents the async states a single method can be in.  This is represented by a signed int.  The only negative value that seems [1] to mean anything is –1, leaving 2,147,483,647 states (or, roughly, await invocations) that can occur in an async method.

At first glance, this seems disconcerting.  I quickly wrote some code to generate a method with 2,147,483,648 await invocations in it (that’s Int32.MaxValue + 1, for those without OCD).  Needless to say, that took a few minutes to generate on my laptop, and I have an SSD (which I’ve clocked at increasing my I/O by about 7 times on average).  It generated a 45 gigabyte file (one class, one method).
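Code along these lines can produce such a file; this is a sketch, not the actual generator I used, and the emitted class and method names are illustrative.

```csharp
using System.IO;

// Writes a C# source file containing one class with one async method
// that awaits awaitCount times, one await per line.
void GenerateAsyncFile(string path, long awaitCount)
{
    using (var writer = new StreamWriter(path))
    {
        writer.WriteLine("using System.Threading.Tasks;");
        writer.WriteLine("class Generated {");
        writer.WriteLine("  public async Task BigMethod() {");
        for (long i = 0; i < awaitCount; i++)
            writer.WriteLine("    await Task.Yield();");
        writer.WriteLine("  }");
        writer.WriteLine("}");
    }
}
```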

The idea that the type int for the state variable could be a problem was starting to seem ridiculous.  But I didn’t stop there.

I’m doing everything outside of Visual Studio (VS) from this point on.  I ran the app to generate the code from within VS, but everything else is from the command line.  So, I run CSC on the 45 gig file, honestly expecting an internal compiler error.  But what happens is that I get a CS1504 error: ‘file.cs’ could not be opened (‘Arithmetic result exceeded 32-bits.’).  Now, the state variable is 32 bits, so it sounds like that could be the culprit.  But if you look at the error message, the compiler can’t even open the file.  I tried opening the file in VS and it told me the file couldn’t be found…

Okay, at this point it’s seeming even more unlikely that a type of int for the state variable is even remotely going to be an issue.  But now I’m curious about what 32-bit value has been exceeded.  My theory is now the number of lines in the file…  The compiler has to keep track of each line in the file in case it has to dump an error about it; maybe that’s the 32-bit value?  I modify my code generation to limit the number of await invocations so the number of lines in the file is 2,147,483,647 (kind of a binary search: if this works, then I know it still could be the number of lines in the file).  Same error.

The error isn’t from the number of lines.  Now my theory is that the overflow comes from trying to allocate enough memory to load the file (keep in mind, I’ve got 8 GB of RAM and I’m trying to load a 45 GB file; but I have yet to get an out-of-memory error).  So, I modify my code to generate a file that approaches 2,147,483,647 bytes in size.  Things are much faster now…  I try again.  Now I get the same CS1504 error, but the message is ‘file.cs’ could not be opened (‘Not enough storage available to complete this operation’) (I’ve got 100 GB of free space on the drive…).  Interesting.  I’ve lessened the data requirement, and only now am I effectively getting “out of memory” errors.

Now I’m just looking for a file the compiler will load; I’ve given up on some super-large number of await statements…  Long story short, I kept halving the size of the file until I reached about 270 megabytes, at which point the compiler finally succeeded (meaning ~540 megabytes failed).

At this point, I’ve successfully shown that a type of int for the state variable is not an issue.  Even if the compiler could load the 540 megabyte file, and I somehow used only 8 bytes per await invocation (“await x;”, for example), I could never fit more than about 70,778,880 await calls in a single method.  Of course, I’m way off here; the real number is much lower.  But 70,778,880 is about 3% of 2,147,483,647.  Clearly int is the smallest type that could store anything close to 70,778,880…
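The back-of-envelope arithmetic checks out:

```csharp
using System;

// 540 MB source file, 8 bytes per "await x;" line: the most awaits that
// could possibly fit is far below the range of a signed int.
long fileBytes = 540L * 1024 * 1024;   // ~540 MB, the size that failed to load
long bytesPerAwait = 8;                // "await x;" is 8 characters
long maxAwaits = fileBytes / bytesPerAwait;
Console.WriteLine(maxAwaits);                          // 70778880
Console.WriteLine(100.0 * maxAwaits / int.MaxValue);   // about 3.3 (percent)
```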

Of course, I’m completely ignoring the fact that a 540 MB cs file is completely unmanageable in VS or a project in general; but, why get caught up in silly realities like that.

This state machine is almost identical to those generated by enumerator methods (yield return).  If we assume the async state machine generation code is “inherited” (by pragmatic reuse) from the enumerator method state machine generator, we can assume it has very similar (but even smaller) limits, meaning you’d never get close to overflowing its int state variable.


[1] Again, this is all observed; I’m surmising the compiler never uses any other negative value.
