Unable To Step Into .NET Source

A while ago I began having a problem where I was unable to step into the .NET source in Visual Studio 2008.  It happened suddenly, and I noticed it shortly after installing SP1.  Given that observation it appeared to be due to the SP1 upgrade; but I couldn’t find anyone else having the same problem.  I had another computer where it worked, so I basically put it aside.


Today I had a chance to have a closer look.  I had configured 2008 (RTM, not SP1) to get the .NET source based on Shawn Burke’s blog and had not encountered any problems.  Once I upgraded to SP1 and checked “Enable .NET Framework source stepping”, all I ever got when trying to step through source was a dialog asking me for the location of the CS file.


What I had configured was to place the debug symbols into a folder in the Visual Studio 2008 user directory (c:\Documents and Settings\PRitchie\My Documents\Visual Studio 2008\Symbols, for example).


Despite creating a new subdirectory for SP1, I still could not step through the source.


It wasn’t until I moved the directory down the hierarchy that I finally got some joy.  I first tried it at the root and it worked fine.  I then tried it as a subdirectory within My Documents and it worked fine.  I imagine that the path was longer than the 260-character limit and, rather than present the user with an error detailing that, Visual Studio assumed it couldn’t find the file it was looking for and asked the user for its location.  But, I’m guessing.


Hope this helps someone else.



Pass-through Constructors

“Pass-through constructor” is a term I use to describe a parameterized constructor that has none of its own logic and simply passes its parameters to the base class.  For example:


    public class BaseClass
    {
        private String text;
        public BaseClass(String text)
        {
            this.text = text;
        }
    }

    public class DerivedClass : BaseClass
    {
        public DerivedClass(String text)
            : base(text)
        {
        }
    }



Pontificating Virtual Parameterized Constructors in C#

Tom Hollander recently posted about a change he required to the Enterprise Library for date/time validation.  He had to create a new class (rather than modify the Enterprise Library) that derived from another, defective class.  One of his complaints was that in order to effectively implement the derived class he had to also write matching constructors that simply called the base class.  His suggestion was effectively to add the concept of virtual parameterized constructors to C#.  I say “parameterized constructors” because C# already effectively has virtual default constructors.  In the following example the base constructor (Form()) is automatically called by the derivative:


    public class MyForm : Form
    {
        public MyForm()
        {
        }
    }


Virtual parameterized constructors are not new, and from a mere language standpoint this seems reasonable.  Pragmatically, though, I believe this is another story.  It seems logical to be able to simply inherit the parameterized constructors of the base class; but there are many cases where this isn’t appropriate, and there are generally accepted design principles that would be contravened by a language addition like this.


Let’s first look at the open/closed principle (OCP).  The OCP suggests classes should be open for extension but closed for modification.  Robert Martin suggests [1] that properly designed class hierarchies that obey the OCP implement an abstraction; i.e. they derive from an abstract class or implement an interface.  For example:


public interface IShape
{
    void Draw(Graphics graphics);
}

public class Rectangle : IShape
{
    //…
    public void Draw(Graphics graphics)
    {
        //…
    }
}


Second, let’s look at the “prefer composition over inheritance” principle.  The effect of a language change like this on a design that prefers composition should be fairly obvious.  Here’s an example of this principle:


public interface IPolygon {
    void Draw(Graphics graphics);
}

public sealed class Polygon {
    private readonly Point[] points;
    public Polygon(Point[] points) {
        this.points = points;
    }
    public void Draw(Graphics graphics) {
        for(int i = 1; i < points.Length; i++) {
            graphics.DrawLine(Pens.Black, points[i - 1], points[i]);
        }
    }
}

public class Rectangle : IPolygon {
    private readonly Polygon polygon;
    public Rectangle(Point location, Size size) {
        Point[] points = new Point[5];
        points[4] = points[0] = location;
        points[1] = new Point(location.X + size.Width, location.Y);
        points[2] = new Point(location.X + size.Width, location.Y + size.Height);
        points[3] = new Point(location.X, location.Y + size.Height);
        polygon = new Polygon(points);
    }
    public void Draw(Graphics graphics) {
        polygon.Draw(graphics);
    }
}


Obviously there is no way to use virtual parameterized constructors here.


Clearly, designs that take into account OCP and prefer-composition-over-inheritance would not benefit from a “virtual parameterized constructor” language addition.


Finally, let’s look at why a class might have many constructors, causing such friction for derivatives.  There are many reasons why a class might have many constructors; I believe all are indications of a poorly designed class.  My first thought would be that many constructors are the result of a large class, and that the large-class code smell should be an indication for redesign.  A large class could be an indication of a motherclass; in either case this is likely a single responsibility principle (SRP) violation: the class is doing much more than it should and should be redesigned.  If the class isn’t large but has many constructors, they were likely written not in response to how the class should/would be used but to cover every possible way of constructing the type.  That would be a YAGNI violation, and the number of constructors should simply be pared down.
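
As a sketch (the Report class and its members are hypothetical, not from any particular library), a type that accumulated a constructor for every conceivable combination of arguments can usually be pared down to the one overload callers actually use:

    public class Report
    {
        private readonly string title;
        private readonly DateTime date;

        // Previously: Report(), Report(string), Report(string, DateTime, string), …
        // none of which real callers needed.  Keep only the constructor that is used.
        public Report(string title, DateTime date)
        {
            this.title = title;
            this.date = date;
        }
    }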


But what about when you have to deal with poorly designed hierarchies and don’t have the ability to modify them?  A valid point; but simply to avoid the friction of writing pass-through constructors, I don’t think adding to the language to support poorly designed classes is good for the language or its developers.


While an addition like virtual parameterized constructors seems benign, its limited actual usefulness makes the effort not worth the reward.  Plus, it introduces greater opportunities to create poorly designed types.


[1] http://www.objectmentor.com/resources/articles/ocp.pdf



Visual Studio Jedi 2

Project Naming
When you create a project, that project name is also the name of the default namespace.  If you want a particular namespace, enter it as the project name.  If you don’t want your binary to carry that name, it’s easier to rename the assembly in the project properties than it is to change the default namespace, edit the class files, rename the project, rename folders, etc.

Solution Naming
When creating solutions it’s often best to name the solution differently from the project you’re creating.  When I create a project it will be contained within a solution.  Once that solution is created I will always want to add sibling projects to it (like a test project, a front-end (or back-end) project, a domain/model project, an infrastructure project, etc.).  When I create a project and accept its name as the name of the solution too, it can be a bit confusing.  For example, if I want to create an n-layer invoicing application I may need an invoicing front-end application, in which case I may create a project named PRI.InvoicingFrontEnd.  To support that front-end I will have an infrastructure-layer project (repositories, application services, etc.), a domain-layer project, and a test project.  Housing those as projects within a "PRI.InvoicingFrontEnd" solution doesn’t really make sense.  So, for my solution name I won’t accept the same name as the project; I’ll enter something more sensible that doesn’t include namespaces.  In this case, "Invoicing".

Repetitive tasks
Some repetitive tasks are easy to accomplish in Visual Studio.  For example, if I wanted to replace the string "wish" with "want" I’d simply do a project- or solution-wide search and replace.

Some repetitive tasks are not so easy.  For example, if I want a search and replace that involves replacing multiple lines, search and replace doesn’t work.

Rectangular selection is your friend
Many times you have rows of text that you want to incorporate into code.  Sometimes the text is outside the source code, sometimes it’s not.  An example may be a list of identifier names in a specification.  Sometimes this text is a single column of text that you can just paste into a CS file and search/replace with commas to create an enum.  Sometimes this text is a single column in a multi-column textual table.  This is easy to extract into code without having to come up with a complex regular expression.  Let’s say we have tabular text like this:

Name    Priority   

public enum Keys
{
    A = 65,
    Add = 107,
    Alt = 262144,
    Apps = 93,
    Attn = 246,
    B = 66,
    Back = 8,
//…
    Y = 89,
    Z = 90,
    Zoom = 251
}

Rather than


Dynamic Features in C#

.NET is the evolution of COM.  .NET was rumoured to be originally called COM+ 2.5.

.NET has evolved well beyond COM and while it fulfils many of the goals that COM originally tried to fulfil, .NET removes many of the COM trappings that developers have to deal with.

C# is a .NET language with a C++ heritage.  C++, unlike VB, is not a dynamic language.  C# was only dynamic in that it provided COM interop abilities.  You can program to any COM API you like, as long as you know what you’re talking to.  This is generally not a problem for 99% of COM libraries.  But COM libraries offer the ability to be strongly typed and/or completely dynamic.  This means a method may return a value whose type simply isn’t known until runtime.  In C#, if you know what the type of that value will be at runtime, you can simply cast it to that type and it will work fine at runtime–sort of hiding the fact that it’s really dynamic.  If you debug C# code that deals with COM APIs, you may have noticed the type of some COM objects is __ComObject–encapsulating the fact that type information doesn’t really exist until runtime.
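
For example (a minimal sketch; the comApplication object, its GetActiveDocument method, and the IDocument interop interface are hypothetical stand-ins for a real COM API):

// The interop method is typed to return object (a __ComObject at run-time).
object result = comApplication.GetActiveDocument();
// If we know what the runtime type will be, a cast "hides" the dynamism:
IDocument document = (IDocument)result;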

The problem with dealing with some dynamic COM APIs is that the actual type of some return values simply isn’t known until run-time.  In dynamic languages you can write things like this:

var obj = GetSomeObject();
obj.SomeMethod();

"SomeMethod" simply won’t be resolved until runtime.  The above code effectively means "after you get the object from the GetSomeObject method, query the object for information about the "SomeMethod" method and if it exists, invoke it.  This is like duck typing.

I say 99% of COM APIs because most COM APIs are written to be consumed by strongly-typed languages like C++.  But COM doesn’t require an object to have a strong type; an API can be written that relies completely upon checking for members at runtime.

This has been a problem with C# because it simply hadn’t supported that.  As I said, you can get around methods that return dynamic types through casting; but if that type is never concretely defined anywhere, that cast is impossible.

C# 4 adds dynamic support to the language so it can now support circumstances like this.  It also makes consuming some COM APIs much easier because you don’t have to know specific type information at compile-time.  You can get by with member names and leave it to the runtime to find the physical members for you.
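
With the new dynamic keyword the earlier example can now be written directly in C#; resolution of SomeMethod is deferred until run-time (again, GetSomeObject and SomeMethod are hypothetical):

dynamic obj = GetSomeObject();
// No compile-time check; the runtime binder resolves SomeMethod (or throws).
obj.SomeMethod();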

Oddly, the Visual Studio automation and extensibility API is based on this dynamic ability.  This is mostly legacy: Visual Studio has been around since well before .NET and has supported automation and extensibility since then, and that extensibility was based on COM.  Visual Studio supports .NET now, but the evolution of that extensibility has continued to support COM.  Plus, Visual Studio also supports non-.NET languages and thus must support extensibility with those languages; COM is the generally accepted means of doing that.

For the most part, the extensibility model of Visual Studio is documented in VB.  In VB you have much more freedom to consume dynamic APIs.  Let’s have a look at some examples from the documentation:


DevTeach 2008 Includes over $1,000 In Free Software.

Announced recently: registering for and attending DevTeach Montreal 2008 will land you $1,000 worth of free software, including:


  • Visual Studio 2008 Pro
  • Expression Web 2
  • TechEd Conference DVD set

The sessions have shaped up to be some of the best training money can buy, and now you get a bunch of free software to boot.


For more details on the free software see: http://www.devteach.com/News.aspx


For a look at the sessions see: http://www.devteach.com/Schedule.aspx


To register go to: http://www.devteach.com/Register.aspx

.NET 4.0, Evolving .NET Development

.NET 4.0 is the first release of .NET since 2.0 that evolves .NET for every programmer.  .NET 3.0 was largely new namespaces (like WCF, WPF, WF, etc.) and .NET 3.5 was largely LINQ.

.NET 4.0 evolves programming and design for every programmer.  It offers framework support for parallel processing (PFX will be released), Code Contracts (DbC is now a reality at the framework level, which opens the possibility of it being a reality at the language level post-2010), and variance changes (co- and contra-variance on generic interfaces and delegates are now a reality).

Parallel Processing
Moore’s law in practice has changed from single processors doubling in speed every 18 months to processors doubling in processing power through increased core count every 18 months.  This means that for applications to take advantage of processing-power increases they must increasingly make use of parallel processing and multi-threading.  PFX makes this more of a reality by providing a framework by which application designers can more easily write code to support multi-core processors and multi-processor computers.

With PFX, writing a loop that makes use of multiple processors (while still supporting single processors) will be as easy as:

    uint[] numbers = new uint[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 };
    // Each iteration may run on a different core; CalculateFibonacci is a placeholder.
    Parallel.ForEach(numbers, delegate(uint number) { Trace.WriteLine(CalculateFibonacci(number)); });

Code Contracts
Design by contract (DbC) is a form of writing software with verifiable interface specifications.  These specifications can be used at compile time to find code that breaks the contract, removing the need to check the contract at run-time.  For example:

    [Pure]
    public int Calculate()
    {
        int result = 0;
        foreach (int value in values)
        {
            this.operation(ref result, value);
        }
        return result;
    }

If anything modifies the current object within the Calculate method, the contract is violated and the violation can be flagged.  Compilers and static checkers will eventually be able to perform rudimentary checks at compile-time to ensure these contracts are abided by.  For example:

    [Pure]
    public int Calculate()
    {
        int result = 0;
        foreach (int value in values)
        {
            this.operation(ref result, value);
        }
        // Mutating the object’s state violates the declared purity:
        this.date = DateTime.Now;
        return result;
    }

…may eventually cause a compile error on the assignment to this.date.  The person designing this type intended this method to be pure, meaning it doesn’t change the state of the object to which it belongs.  That design intent can now be guaranteed.

Being able to include more design aspects in code and code definitions is a great step forward in not only writing intention-revealing code but in the ability to write more reliable code.
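
Preconditions and postconditions are the other half of DbC.  Here’s a minimal sketch using the Code Contracts API (Contract.Requires and Contract.Ensures; the Truncate method itself is hypothetical):

    using System.Diagnostics.Contracts;

    public static class StringUtility
    {
        public static string Truncate(string value, int maxLength)
        {
            // Preconditions: callers must supply a non-null string and a non-negative length.
            Contract.Requires(value != null);
            Contract.Requires(maxLength >= 0);
            // Postcondition: the result never exceeds maxLength characters.
            Contract.Ensures(Contract.Result<string>().Length <= maxLength);
            return value.Length <= maxLength ? value : value.Substring(0, maxLength);
        }
    }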

Variance changes

C# has always had intuitive variance when it comes to arrays.  For example, the following is valid code:

    Shape[] shapes = new Triangle[10];

Given:

    class Shape {
        //...
    }
    class Triangle : Shape {
        //...
    }
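
Note that array variance is intuitive but not statically type-safe; the cost is a run-time check on element writes (a quick illustration using the same types):

    Shape[] shapes = new Triangle[10];
    // Compiles, but would throw ArrayTypeMismatchException at run-time:
    // shapes[0] = new Shape();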
 
Generics variance was a different story.  Prior to Visual C# 2010, the following is a compile error:
    Func<Triangle> triangle = () => new Triangle();
    Func<Shape> shape = triangle;
 
...despite Triangle being a type of Shape (otherwise known as "smaller" than Shape).  This is known as invariance.  In Visual C# 2010 you can now create delegates (as well as generic interfaces) that are no longer invariant.  For example, a Func delegate could be created that is covariant in its return type:
        delegate T Func<out T>();
(Note the new use of the out keyword.)  ...which could make our previous code:
    Func<Triangle> triangle = () => new Triangle();
    Func<Shape> shape = triangle;
...compile without error.
 
The same can be done for contravariance with the new use of the in keyword:
        delegate void Action<in T>(T value);
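
...which permits assignment in the opposite direction (my example, reusing the Shape/Triangle types from above):

    Action<Shape> drawShape = shape => Console.WriteLine(shape);
    // A delegate that accepts any Shape can safely stand in where only Triangles are passed:
    Action<Triangle> drawTriangle = drawShape;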
 
For more details on generics variance, please see Eric Lippert's series on generics variance: http://blogs.msdn.com/ericlippert/archive/tags/Covariance+and+Contravariance/default.aspx

Other
Another notable improvement is side-by-side (SxS) support for multiple versions of .NET.  This allows hosting of more than one version of the CLR within a single process.  This makes writing shell extensions in C#, for example, a reality in .NET 4.0.  You shouldn’t need to target .NET 4.0; as long as .NET 4.0 is installed you should be able to write shell extensions against a current version of .NET (like .NET 2.0) and they will be supported.  Prior to .NET 4.0 only one version of the CLR could be loaded into a process, making extending 3rd-party native applications (like the Windows shell) very problematic because the version of the CLR that was loaded depended on the first extension loaded.  If the first extension loaded was a .NET 1.1 assembly, then any other extensions requiring .NET 2.0 would subsequently fail.



Microsoft Techdays 2008

I’ve been lax on posting about my involvement in Microsoft Techdays 2008.  I’ll be doing two sessions in Ottawa.  One is titled “Internet Explorer 8 for Developers – What you need to know”.  The other is “Blackbelt Databinding in WPF”.


The description of Internet Explorer 8 for Developers – What you need to know:


Internet Explorer 8 has plenty of exciting new features for developers, from Web Slices and Activities, to new support for HTML5, CSS2.1, and CSS3. In this session, you’ll learn how to utilize these features in your latest and greatest Web applications. You’ll also learn how features like Compatibility Mode can help preserve the user experience during development.


The description of Blackbelt Databinding in WPF:


Most rich applications present data, often lots of it. To avoid writing lots of code to support the presentation of that data, you should take full advantage of the data binding capabilities of your presentation platform. In this session we discuss and demonstrate the data binding capabilities of WPF. You will find out what data contexts are and how they work with the element hierarchy. We will show you what Bindings are, how to declare them, and what optional capabilities they support. We will demonstrate binding to data sets as well as custom objects and collections, and discuss considerations for implementing those types. In this session we show you how to use data templates to define reusable chunks of UI that can be rendered automatically for each item in a data collection. We also discuss the way you can interact with your bound data programmatically to control the presentation of that data and to make changes to it in code.


For more information on Techdays in general, see http://www.microsoft.com/canada/techdays/default.aspx.  For more information on the sessions in Ottawa, its location, etc., see http://www.microsoft.com/canada/techdays/sessions.aspx?city=Ottawa.


If you’re coming to any of the Ottawa sessions, track me down and say HI.