No "Add Method Stub" When Passing or Assigning Delegates

I finally noticed the other day that the “Add method stub” SmartTag wasn’t appearing for a new method name I had typed in.  I decided to have a closer look…

When you’re practicing Test-Driven Development (TDD) you want to write tests for methods before you write the methods.  This means you write a test method that calls several other methods that don’t exist yet.  The Visual Studio IDE, in an effort to promote TDD, recognizes this: when your caret is over a call to one of these methods, a SmartTag shows up and you can select Generate method stub for ‘SomeMethod’ in ‘SomeNamespace.SomeClass’.  For example, if you have the following:

    static void Main(string[] args)

    {

        SomeMethod();

    }

…if you place the caret somewhere on “SomeMethod” (e.g. click on it) and the method doesn’t exist in the current class, the SmartTag rectangle appears under the ‘S’ in “SomeMethod”.  Hovering the mouse over the word shows an options icon you can click to select Generate method stub for ‘SomeMethod’ in ‘SomeNamespace.SomeClass’, and it will generate a method like the following:

    private static void SomeMethod()

    {

        throw new NotImplementedException();

    }

Well, I figured this would also happen when I tried to assign a non-existent method to a delegate.  For example, if I had the following:

    static void Main(string[] args)

    {

        Action action = SomeOtherMethod;

    }


…I would expect that placing the caret over “SomeOtherMethod” would make the SmartTag show up so I could select Generate method stub for ‘SomeOtherMethod’ in ‘SomeNamespace.SomeClass’ and generate a method like the following:

    private static void SomeOtherMethod()

    {

        throw new NotImplementedException();

    }


Alas, the IDE doesn’t recognize use of an undeclared method when it’s used with delegates.  The SmartTag doesn’t appear in these circumstances either:

    static void ProcessDelegate(Action action)

    {

        //…

    }

    static void Main(string[] args)

    {

        ProcessDelegate(SomeOtherMethod);

        ProcessDelegate(new Action(SomeOtherMethod));

    }

I thought “Add method stub” would be more useful in these circumstances because you’re not explicitly passing arguments to the method, so it’s more likely that you don’t know what signature you need to declare.  So, I logged a suggestion for it: https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=328782


By the way, the non-generic Action delegate (which lives in System.Core.dll) is new to .NET 3.5.

My Wishlist for C# 4

[Edit: fixed the not-ready-for-publication problems] 

There seem to be more than a few people blogging about what they hope C# 4 will do for them.  I haven’t seen one that really synchronizes with my thoughts, so I thought I’d post my own list.

Variance

A good story with regard to variance with generics is vital for C# 4.  You could argue that this should have been done in 3 but, unfortunately, that wasn’t the focus.  I think this really needs to be done for 4; and if Eric Lippert’s blog is any indication, that may come true.
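As a sketch of the problem (the names here are purely illustrative), both of the following conversions are intuitively safe, yet C# 3 rejects them because generic interfaces and delegates are invariant:

```csharp
// Every string is an object, so reading objects from a sequence of
// strings is safe -- but C# 3 generics are invariant, so this fails:
IEnumerable<string> strings = new List<string> { "a", "b" };
IEnumerable<object> objects = strings;   // compile error in C# 3

// Likewise, a delegate that produces strings can safely stand in for
// one that produces objects -- also a compile error in C# 3:
Func<string> makeString = () => "hello";
Func<object> makeObject = makeString;    // compile error in C# 3
```

This is the sort of thing a proper variance story would let the compiler accept where it can prove the conversion safe.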

Design By Contract 

Design by Contract (DbC) means programmers can define verifiable interfaces.  This explicit intention information is then used by the compiler to greatly increase the compile-time checking it can do.  In most cases this means no checking need be done at run-time, because the compiler has verified that the condition cannot occur at run-time, increasing reliability and improving performance.  There’s a hint of this in the framework already with the internal classes ImmutableAttribute and InvariantMethodAttribute.  First-class language support for DbC would go a long way toward being able to write more reliable software.
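To illustrate, here’s a sketch of how explicit contracts could replace today’s manual run-time checks.  The requires/ensures syntax below is invented for illustration; it is not real C#:

```csharp
// Today: preconditions are run-time checks the compiler knows nothing about.
public void Withdraw(decimal amount)
{
    if (amount <= 0)
        throw new ArgumentOutOfRangeException("amount");
    balance -= amount;
}

// Hypothetical DbC syntax (invented, not real C#):
//
// public void Withdraw(decimal amount)
//     requires amount > 0
//     ensures balance >= 0
// {
//     balance -= amount;
// }
//
// The compiler could reject call sites it can prove violate the contract,
// and elide the run-time check wherever it proves the check can't fail.
```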

Concurrency

Various leaders in the industry (Microsoft included) recognize that processor speed improvements will essentially stop being vertical and continue to be horizontal (i.e. instead of increases in processor speed, increases in the number of processors or cores will be the norm).  This means that in order for applications to utilize that type of system processing throughput, concurrency will become more mainstream.  Microsoft has various concurrency initiatives going on (like the Parallel Framework Extensions).  It’s only logical that lessons learned from this project will make their way not only into the BCL and the .NET Framework but also into the respective languages (C# included, I hope).  In this respect I hope that concurrency issues become first-class citizens in the language.  This would include things like immutability.  In the spirit of Agile development, the sooner this gets into the language the sooner it can be embraced and evolve.
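Immutability is a good example of why first-class support matters: an immutable object can be shared freely across threads without locks, but today the pattern has to be hand-rolled and nothing verifies it.  A minimal sketch of what that boilerplate looks like now:

```csharp
// Hand-rolled immutability today: readonly fields, getters only, and
// "mutators" that return a new instance instead of changing this one.
public sealed class Point
{
    private readonly int x;
    private readonly int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public int X { get { return x; } }
    public int Y { get { return y; } }

    // Returns a modified copy; the original instance never changes.
    public Point WithX(int newX) { return new Point(newX, y); }
}
```

With language-level immutability the compiler could both enforce this discipline and exploit it for safe concurrent access.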

Object-Oriented Programming

Much like Jon Skeet, I believe the language designers should recognize that C# is an object-oriented language first and foremost.  That fact should continue to focus what, if any, aspects of other programming paradigms, like aspect-oriented programming and functional programming, are added to the language.

infoof, memberof, propertyof, eventof, methodof Operators

The information these operators require is easy for the compiler to simply dump in the IL stream.  For users to do the same thing requires a string containing the name of the member in question, which can’t be checked at compile time and leads to maintenance nightmares.  Imagine what life would be like without the typeof operator, forcing us to use code like this:

    Type type = Type.GetType("MyNamespace.MyClass");

Instead of:

    Type type = typeof(MyNamespace.MyClass);


…if I rename MyClass to UsefulClass, no refactoring tool I’ve seen will modify the "MyNamespace.MyClass" string; compilation will succeed and lead to a run-time error.
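The same fragility shows up anywhere a member name is passed as a string; INotifyPropertyChanged is a common case.  A hypothetical propertyof operator (invented syntax, not real C#) would make the name compile-time checked:

```csharp
// Today: the property name is a string the compiler never sees, so a
// rename refactoring silently breaks the change notification.
public string Name
{
    get { return name; }
    set
    {
        name = value;
        OnPropertyChanged("Name"); // breaks silently if Name is renamed
    }
}

// Hypothetical (invented syntax): compile-time checked, rename-safe.
// OnPropertyChanged(propertyof(Name));
```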


Clean Up Some Long-standing Issues


The above and variance could be viewed as long-standing issues, but I think they deserve to be called out on their own; they would be huge improvements.  There are smaller ones too: detection of recursive properties, for example.  The C# team has a backlog of a few things like this; now’s the time.


Extensible Compiler


Years back, my original idea for this was to have an IDE that automatically corrects mistakes.  E.g. if an “; expected” error was spit out, the IDE could intercept it, correct the code, and recompile.  But this extensibility idea can be so much more than that.  It could introduce Aspect-Oriented Programming fundamentals or Domain-Specific Language abilities without the language really understanding those concepts at all.  This extensibility would be very powerful for programmers and would give them the ability to evolve their language without being tied to the release schedule of the C# team.


Thinking Out Loud


Class-scoped aliases: Aliases in C# are a bit of a pariah; they exist only at file scope, making them not all that useful.  In C++, for example, it’s quite common to declare type aliases within a class declaration (usually based on a templated type).  It would be nice to be able to create an instance of a type based on an alias within a class.  E.g. MyType.MyAlias o = new MyType.MyAlias();
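A sketch of the limitation (the names are illustrative): today a using alias can appear only at file or namespace scope, and the nearest class-scoped substitute is a nested type:

```csharp
// Legal today: an alias can only live at file (or namespace) scope.
using IdMap = System.Collections.Generic.Dictionary<int, string>;

class Repository
{
    // Hypothetical class-scoped alias (not legal C#):
    // using Ids = System.Collections.Generic.List<int>;

    // The closest legal approximation is a nested type:
    public sealed class Ids : System.Collections.Generic.List<int> { }
}

// ...which would then allow: Repository.Ids ids = new Repository.Ids();
```

The nested-type workaround creates a genuinely new type rather than an alias, which is exactly the sort of friction a class-scoped alias would remove.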


 


Not Knowing Why Something is Better or Worse Means You Believe it’s Magic

In the software development field you often get zealous people evangelizing that certain techniques or methodologies have certain side effects or help produce certain results.  They often don’t explain how the technique or methodology actually causes those results.  Sometimes it comes in the form of “use that for this” or “I use this to do that”; and sometimes it comes in the form of “this helps to do that” or “they didn’t use this so it must be bad”.


There’s nothing wrong with taking the advice of someone you trust; but do yourself a favour and think about the technique or methodology being espoused.  Don’t indulge in magical thinking by using something you don’t understand.  It may very well be that the advice you’re accepting is true; but if you don’t know why something does what it does, you’re much more likely to misuse it.


The converse is also true.  If someone tells you a particular technique or methodology is bad, find out why it’s bad.  If you simply accept that it is bad and avoid using it you essentially believe some magic makes it so.


If the person espousing said technique or methodology can’t back up the claim, then I’d seriously consider simply ignoring that person.

Maxims of Object-Oriented Design – Layers

Good object-oriented design is much more than simply modeling real-world concepts as classes (and when I say classes I mean “types”, which includes structs), methods, and attributes.  Object-oriented design involves the interaction of the entire system; that interaction should also follow sound object-oriented principles and influence the design of the individual parts of the system.


There are various OO design principles for classes: the single responsibility principle, high cohesion, low coupling, the acyclic dependencies principle, separation of concerns, etc.  All of these principles build on and enhance either encapsulation or abstraction.


There are many design patterns for making systems easier to maintain (translating to more reliable).  One such pattern is the layer.  The layer is a logical (usually also physical) grouping of types that perform related tasks.  A layer is a level of abstraction of those related tasks.  Data Access or Persistence Layers deal strictly with the task of reading/writing data to a data store, for example.  A layer is much like a class: its members must relate to a single responsibility (although a more abstract responsibility than a class’s), they must be cohesive, and they must have concerns relating only to the single responsibility of the layer.


Types in a layer, if properly cohesive, should have very little coupling to types outside the layer.  It’s rare that a layer is completely autonomous, and it often makes use of another layer.  Generally layers are implemented as packages (physical groupings of code).  Packages must follow the acyclic dependencies principle, otherwise you get build-ordering nightmares.  Following the acyclic dependencies principle means that a layer that depends upon (uses) another layer should not also be depended upon by that layer.  In other words, dependencies between layers should only be one way.  If you find you are getting a cyclic dependency between two layers, you’re likely mixing concerns and you should think about merging them or refactoring the cyclic dependencies into a third layer.


 

Testing the Units

In OO there are levels of abstraction.  A class, for example, abstracts a real-world concept into an encapsulated bit of code.  A class is autonomous: it lives in a world with other classes and interacts with them, but it remains autonomous.


I believe development testing should account for these abstractions, not just the interactions or behaviour of the system.  One problem I see with Test-Driven Development (TDD) and Behaviour-Driven Development (BDD) is that practitioners center on the interaction of the parts of the system and don’t really do any “unit testing”.  They get caught up in the mantras of TDD and BDD, fail to see the trees for the forest, and fall into testing by rote.  Unit testing tests individual units, the smallest testable parts of an application[1].


Let’s look at the BDD example on Wikipedia, which tests an EratosthenesPrimesCalculator.  The behaviour tested in this example is basically: the first prime number (which should be 2), the first prime number after 100 (which should be 101), the first prime number after 683 (which should be 691), and that the first 11 primes are correct.


The EratosthenesPrimesCalculator constructor accepts (or seems to accept) a signed integer.  The tests detailed cover only 13 of 4,294,967,296 possibilities.  These tests may very well test the expected behaviour of one system, but they don’t really test EratosthenesPrimesCalculator as a unit.  If the system only ever exercises that behaviour, then these tests prove it will work.  But if at some point EratosthenesPrimesCalculator is used outside that behaviour (and that’s really the purpose of encapsulating code into classes: reuse), not much about EratosthenesPrimesCalculator has been validated.  At the very least, the edge cases of EratosthenesPrimesCalculator should be tested.  If there is an explicit contract that EratosthenesPrimesCalculator is to ensure, boundary cases should be included in that “very least”.  If they apply, corner cases should be pivotal to good unit testing.
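As a sketch of what those missing tests might look like (NUnit-style; the EratosthenesPrimesCalculator members are assumed from the Wikipedia sample, so treat the names as illustrative):

```csharp
[Test]
public void FirstPrimeFromMinimumInputIsTwo()
{
    // Edge case: the smallest possible input should still yield 2.
    Assert.AreEqual(2, new EratosthenesPrimesCalculator(int.MinValue).First());
}

[Test]
public void FirstPrimeFromTwoIsTwo()
{
    // Boundary case: 2 itself is prime, not just numbers after it.
    Assert.AreEqual(2, new EratosthenesPrimesCalculator(2).First());
}

[Test]
public void FirstPrimeNearIntMaxValueIsPinnedDown()
{
    // Corner case: what should happen past the largest representable
    // prime?  Whatever the class's contract says, a test should assert it.
}
```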


I believe development testing should also be object-oriented, testing that individual objects work “as advertised”.  Testing the interaction of classes is important, and TDD and BDD do that; but your system must have a solid foundation: its classes.


In relation to TDD and BDD, this testing would be done once the concrete implementations are done.  Depending on how you’ve designed your system, you could write these tests against an interface, then, when concrete implementations are done, throw them at the tests via the interface.


[1] http://en.wikipedia.org/wiki/Unit_Testing

A Time and Place for Code Comments

I’ve dealt with more than one person who believes all code comments are bad.

The first person I encountered who said that also asked me to explain why a particular algorithm was used instead of another because there were no comments explaining it.

But one of my primary principles is that you should get the compiler to do as much work as possible when it’s compiling.  This has to do with preferring compile-time errors over run-time errors, and it has an effect on comments: because the compiler does not check them, comments should be avoided in preference to self-commenting code.

I had the misfortune of working with a fellow once who named his variables starting with “a” and continuing alphabetically, adding a character when he ran out of letters.  His code might look like this:

    protected Boolean SuspendIfNeeded ( )
    {
        Boolean c = this.a.WaitOne(0, true);
 
        if (c)
        {
            Boolean d = Interlocked.Read(ref this.b.suspended) != 0;
            this.a.Reset();
 
            if (d)
            {
                // Suspending…
                if (1 == WaitHandle.WaitAny(new WaitHandle[] { this.f, this.e }))
                {
                    return true;
                }
                // …Waking
            }
        }
 
        return false;
    }

…very painful.


While self-commenting code makes for code that is more maintainable, there are times when the code doesn’t explain higher-level concepts.  Domain-Driven Design helps get you in the habit of making domain-specific design artifacts “explicit”, which goes a long way toward self-commenting code; but it doesn’t address vital information like why certain algorithmic decisions were made.
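For example (the scenario here is invented), a “why” comment captures exactly what self-commenting code cannot:

```csharp
// The identifiers say what this code does; only a comment can say why
// this algorithm was chosen over the obvious alternative.

// Linear scan is deliberate: this list is nearly always under ten items,
// and profiling showed a binary search was slower at that size.
foreach (Customer customer in customers)
{
    if (customer.Id == id)
    {
        return customer;
    }
}
```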


This is one area where refactoring tools don’t help.  They will often deal with XML comments, but inline comments (and comments regarding implementation details don’t belong in XML comments) can get lost unless you’re paying attention; i.e. avoid refactoring by rote.