Odd query expressions

Yesterday, I was proofreading chapter 11 of the book (and chapter 12, and chapter 13 – it was a long night). Reading my own text about how query expressions work led me to wonder just how far I could push the compiler in terms of understanding completely bizarre query expressions.

Background

Query expressions in C# 3 work by translating them into “normal” C# first, then applying normal compilation. So, for instance, a query expression of:

from x in words
where x.Length > 5
select x.ToUpper()

… is translated into this:

words.Where(x => x.Length > 5)
     .Select(x => x.ToUpper())

Now, usually the source expression (words in this case) is something like a variable, or the result of a method or property call. However, the spec doesn’t say it has to be… let’s see what else we could do. We’ll keep the query expression as simple as possible – just from x in source select x.

Types

What if the source expression is a type? The query expression would then compile to TypeName.Select(x => x) which of course works if there’s a static Select method taking an appropriate delegate or expression tree. And indeed it works:

using System;

public class SourceType
{
    public static int Select(Func<string,string> ignored)
    {
        Console.WriteLine("Select called!");
        return 10;
    }
}

class Test
{
    static void Main()
    {
        int query = from x in SourceType
                    select x;
    }
}

That compiles and runs, printing out “Select called!”. query will be 10 after the method has been called.

So, we can call a static method on a type. Anything else? Well, let’s go back to the normal idea of the expression being a value, but this time we’ll change what Select(x => x) actually means. There’s no reason it has to be a method call. It looks like a method call, but then so do delegate invocations…

Delegates via properties

Let’s create a “LINQ proxy” which has a property of a suitable delegate type. Again, we’ll only provide Select but it would be possible to do other types – for instance the Where property could be typed to return another proxy.

using System;
using System.Linq.Expressions;

public class LinqProxy
{
    public Func<Expression<Func<string,string>>,int> Select { get; set; }
}

class Test
{
    static void Main()
    {
        LinqProxy proxy = new LinqProxy();
        
        proxy.Select = exp => 
        { 
            Console.WriteLine ("Select({0})", exp);
            return 15;
        };
        
        int query = from x in proxy
                    select x;
    }
}

Again, it all compiles and runs fine, with an output of “Select(x => x)”. If you wanted, you could combine the two approaches above – the SourceType could have a static delegate property instead of a method. It would also still work if we used a public field instead of a property.
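
Just to make that combination concrete, here's a minimal sketch (the type and member names are mine, purely for illustration) where the source is a type whose Select member is a public static delegate field rather than a method:

```csharp
using System;

public class CombinedSource
{
    // A public static delegate field: CombinedSource.Select(x => x)
    // now means "invoke this delegate", not "call a method".
    public static Func<Func<string, string>, int> Select =
        selector => selector("hello").Length;
}

class Test
{
    static void Main()
    {
        // Translated to CombinedSource.Select(x => x)
        int query = from x in CombinedSource
                    select x;

        Console.WriteLine(query); // 5 - the length of "hello"
    }
}
```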

What else is available?

Looking through the list of potential expressions, there are plenty more we can try to abuse: namespaces, method groups, anonymous functions, indexers, and events. I haven’t found any nasty ways of using these yet – if we could have methods or properties directly in namespaces (instead of in types) then we could use from x in SomeNamespace but until that time, I think sanity is reasonably safe. Of course, if you can find any further examples, please let me know!

Why is this useful?

It’s not. It’s really, really not – at least not in any way I can think of. The “proxy with delegate properties” might just have some weird, twisted use, but I’d have to see it to believe it. Of course, these are useful examples to prove what the compiler’s actually doing, and the extent to which query expressions really are just syntactic sugar (but sooo sweet) – but that doesn’t mean there’s a good use for the technique in real code.

Again, if you can find a genuine use for this, please mail me. I’d love to hear about such oddities.

Implementing deferred execution, and a potential trap to avoid

When talking about LINQ recently, I doodled an implementation of OrderBy on a whiteboard. Now, I know the real OrderBy method has to support ThenBy which makes life slightly tougher, but let’s suppose for a moment that it didn’t. Let’s further suppose that we don’t mind O(n²) efficiency, but we do want to abide by the restriction that the sort should be stable. Here’s one implementation:

public static IEnumerable<TSource> OrderBy<TSource,TKey>
    (this IEnumerable<TSource> source,
     Func<TSource,TKey> keySelector)
{
    IComparer<TKey> comparer = Comparer<TKey>.Default;
    List<TSource> buffer = new List<TSource>();
    foreach (TSource item in source)
    {
        // Find out where to insert the element -
        // immediately before the first element that
        // compares as greater than the new one.           
        TKey key = keySelector(item);
           
        int index = buffer.FindIndex
            (x => comparer.Compare(keySelector(x), key) > 0);
           
        buffer.Insert(index==-1 ? buffer.Count : index, item);
    }
       
    return buffer;
}

What’s wrong with it? Well, it compiles, and seems to run okay – but it doesn’t defer execution. As soon as you call the method, it will suck all the data from the source and sort it. List<T> implements IEnumerable<T> with no problems, so we’re fine to just return buffer… but we can very easily make it defer execution. Look at the end of this code (the actual sorting part is the same):

public static IEnumerable<TSource> OrderBy<TSource,TKey>
    (this IEnumerable<TSource> source,
     Func<TSource,TKey> keySelector)
{
    IComparer<TKey> comparer = Comparer<TKey>.Default;
    List<TSource> buffer = new List<TSource>();
    foreach (TSource item in source)
    {
        // Find out where to insert the element -
        // immediately before the first element that
        // compares as greater than the new one.           
        TKey key = keySelector(item);
           
        int index = buffer.FindIndex
            (x => comparer.Compare(keySelector(x), key) > 0);
           
        buffer.Insert(index==-1 ? buffer.Count : index, item);
    }
       
    foreach (TSource item in buffer)
    {
        yield return item;
    }
}

It seems odd to be manually iterating over buffer instead of just returning it to be iterated over by the client – but suddenly we have deferred execution. None of the above code will run until we actually start asking for data.

The moral of the story? I suspect many developers would change the second form of code into the first, thinking they’re just refactoring – when in fact they’re changing a very significant piece of behaviour. Be careful when implementing iterators!
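
To make the difference concrete, here's a contrived side-by-side sketch (the method names are mine, and the sort is simplified to List&lt;T&gt;.Sort rather than the stable insertion sort above): mutating the source between building the query and iterating it reveals which version deferred.

```csharp
using System;
using System.Collections.Generic;

static class DeferredDemo
{
    // Eager: sorts as soon as the method is called.
    public static IEnumerable<T> OrderByEager<T>(this IEnumerable<T> source)
        where T : IComparable<T>
    {
        List<T> buffer = new List<T>(source);
        buffer.Sort();
        return buffer;
    }

    // Deferred: the iterator block means nothing runs
    // until the caller starts iterating.
    public static IEnumerable<T> OrderByDeferred<T>(this IEnumerable<T> source)
        where T : IComparable<T>
    {
        List<T> buffer = new List<T>(source);
        buffer.Sort();
        foreach (T item in buffer)
        {
            yield return item;
        }
    }

    static void Main()
    {
        List<int> numbers = new List<int> { 3, 1 };

        IEnumerable<int> eager = numbers.OrderByEager();       // sorts [3, 1] right now
        IEnumerable<int> deferred = numbers.OrderByDeferred(); // does nothing yet

        numbers.Add(2); // mutate the source after both calls

        Console.WriteLine(string.Join(",", eager));    // 1,3
        Console.WriteLine(string.Join(",", deferred)); // 1,2,3 - the sort ran just now
    }
}
```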

Data pipelines as a conceptual stepping stone to higher order functions

I was explaining data pipelines in LINQ to Objects to a colleague yesterday, partly as the next step after explaining iterator blocks, and partly because, well, I love talking about C# 3 and LINQ. (Really, it’s becoming a serious problem. Now that I’ve finished writing the main text of my book, my brain is properly processing the stuff I’ve written about. I hadn’t planned to be blogging as actively as I have been recently – it’s just a natural result of thinking about cool things to do with LINQ. It’s great fun, but in some ways I hope it stops soon, because I’m not getting as much sleep as I should.)

When I was describing how methods like Select take an iterator and return another iterator which, when called, will perform appropriate transformations, something clicked. There’s a conceptual similarity between data pipelines using deferred execution, and higher order functions. There’s a big difference though: I can get my head round data pipelines quite easily, whereas I can only understand higher order functions for minutes at a time, and only if I’ve had a fair amount of coffee. I know I’m not the only one who finds this stuff tricky.

So yes, there’s a disconnect in difficulty level – but I believe that the more comfortable one is with data pipelines, the more feasible it is to think of higher order functions. Writing half the implementation of Push LINQ (thanks go to Marc Gravell for the other half) helped my “gut feel” understanding of LINQ itself significantly, and I believe they helped me cope with currying and the like as well.

I have a sneaking suspicion that if everyone who wanted to use LINQ had to write their own implementation of LINQ to Objects (or at least a single overload of each significant method) there’d be a much greater depth of understanding. This isn’t a matter of knowing the fiddly bits – it’s a case of grokking just why OrderBy needs to buffer data, and what deferred execution really means.

(This is moving off the topic slightly, but I really don’t believe it’s inconceivable to suggest that “average” developers implement most of LINQ to Objects themselves. It sounds barmy, but the tricky bit with LINQ to Objects is the design, not the implementation. Suggesting that people implement LINQ to SQL would be a different matter, admittedly. One idea I’ve had for a talk, whether at an MVP open day or a Developer Developer Developer day is to see how much of LINQ to Objects I could implement in a single talk/lecture slot, explaining the code as I went along. I’d take unit tests but no pre-written implementation. I really think that with an experienced audience who didn’t need too much hand-holding around iterator blocks, extension methods etc to start with, we could do most of it. I’m not saying it would be efficient (thinking particularly of sorting in a stable fashion) but it would be feasible, and I think it would encourage people to think about it more themselves. Any thoughts from any of you? Oh, and yes, I did see the excellent video of Luke Hoban doing Select and Where. I’d try to cover grouping, joins etc as well.)

I’m hoping that the more experience I have with writing funky little sequence generators/processors, the more natural functional programming with higher order functions will feel. For those of you who are already comfortable with higher order functions, does this sound like a reasonable hope/expectation? Oh, and is the similarity between the two situations anything to do with monads? I suspect so, but I don’t have a firm enough grasp on monads to say for sure. It’s obviously to do with composition, either way.

Visualising the Mandelbrot set with LINQ – yet again

I’ve been thinking about ranges again, particularly after catching a book error just in time, and looking at Marc’s generic complex type. It struck me that my previous attempts were all very well, and demonstrated parallelisation quite neatly, but weren’t very LINQy. In particular, they certainly didn’t use LINQ for the tricky part in the same way that Luke Hoban’s ray tracer does.

The thing is, working out the “value” of a particular point in the Mandelbrot set visualisation is actually quite well suited to LINQ:

  1. Start with a complex value
  2. Apply a transformation to it (square the current value, add the original value from step 1).
  3. Does the result have a modulus > 2? If so, stop.
  4. Have we performed as many iterations as we’re willing to? If so, stop.
  5. Take our new value, and go back to 2.

We only need two “extra” bits of code to implement this: a Complex type, and a way of applying the same transformation repeatedly.

Well, here’s the Complex type – it contains nothing beyond what we need for this demo, but it’ll do. Obviously Marc’s generic implementation is rather more complete.

public struct Complex
{
    readonly double real;
    readonly double imaginary;

    public Complex(double real, double imaginary)
    {
        this.real = real;
        this.imaginary = imaginary;
    }

    public static Complex operator +(Complex c1, Complex c2)
    {
        return new Complex(c1.real + c2.real, c1.imaginary + c2.imaginary);
    }

    public static Complex operator *(Complex c1, Complex c2)
    {
        return new Complex(c1.real*c2.real - c1.imaginary*c2.imaginary,
                           c1.real*c2.imaginary + c2.real*c1.imaginary);
    }

    public double SquareLength
    {
        get { return real * real + imaginary * imaginary; }
    }
}

Simple stuff, assuming you know anything about complex numbers.

The other new piece of code is even simpler. It’s just a generator: a static method which takes an initial value, and a delegate to apply to one value to get the next. It then lazily returns the generated sequence – forever.

public static IEnumerable<T> Generate<T>(T start, Func<T, T> step)
{
    T value = start;
    while (true)
    {
        yield return value;
        value = step(value);
    }
}

Just as an example of use, remember Enumerable.Range which starts at a particular integer, then adds one repeatedly until it’s yielded as many results as you’ve asked for? Well, here’s a possible implementation, given the Generate method:

public static IEnumerable<int> Range(int start, int count)
{
    return Generate(start, x => x + 1).Take(count);
}

These are all the building blocks we require to build our Mandelbrot visualisation. We want a query which will return a sequence of bytes, one per pixel, where each byte represents the colour to use. Anything which goes beyond our maximum number of iterations ends up black (colour 0) and other values will cycle round the colours. I won’t show the drawing code, but the query is now more self-contained:

var query = from row in Enumerable.Range(0, ImageHeight)
            from col in Enumerable.Range(0, ImageWidth)
            // Work out the initial complex value from the row and column
            let c = new Complex((col * SampleWidth) / ImageWidth + OffsetX,
                                (row * SampleHeight) / ImageHeight + OffsetY)
            // Work out the number of iterations
            select Generate(c, x => x * x + c).TakeWhile(x => x.SquareLength < 4)
                                              .Take(MaxIterations)
                                              .Count() into count
            // Map that to an appropriate byte value
            select (byte)(count == MaxIterations ? 0 : (count % 255) + 1);

(The various constants used in the expression are defined elsewhere.) This works, and puts the Mandelbrot logic directly into the query. However, I have to admit that it’s much slower than my earlier versions. Heck, I’m still proud of it though.

As ever, full source code is available for download, should you so wish.

LambdaExpression – source code would be nice

Just taking a quick break from proof-reading to post a thought I had yesterday. Visual LINQ uses ToString() to convert an expression tree’s body into readable text. In some cases it works brilliantly, reproducing the original source code exactly – but in other cases it’s far from useful. For instance, from this expression tree representation:

(Convert(word.get_Chars(0)) = 113)

I suspect you wouldn’t have guessed that the original code was:

word[0] == 'q'

Personally I don’t find it terrifically obvious, even though I can see how it works. Here’s a thought though – suppose the LambdaExpression class had a Source property which the compiler could optionally provide as a constructor argument, so that you could get at the original source code if you needed it. The use in Visual LINQ is obvious, but would it be useful elsewhere?
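
As a quick check of that behaviour, here's a sketch which builds the same sort of expression tree and prints its body. The output contains the get_Chars call and the numeric constant 113, though the exact formatting varies between framework versions:

```csharp
using System;
using System.Linq.Expressions;

class Test
{
    static void Main()
    {
        // The C# indexer and char literal...
        Expression<Func<string, bool>> exp = word => word[0] == 'q';

        // ...come out as a get_Chars call comparing against the
        // constant 113, wrapped in a Convert node.
        Console.WriteLine(exp.Body);
    }
}
```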

Well, suppose that LINQ to SQL couldn’t translate a particular query expression into SQL. Wouldn’t it be nice if it could report the part of the query expression it was having trouble with in the language it was originally written in? So a VB expression would give source in VB, a C# expression would give source in C# etc.

All of this would have to be optional, and I suspect some people would have intellectual property concerns about their source leaking out (most of which would be silly, due to the logic effectively being available with a call to ToString() anyway). I think it would be quite handy in a few situations though.

Visual LINQ: Watch query expressions as they happen!

Critical link (in case you can’t find it): Source Code Download

Update: Dmitry Lyalin has put together a screencast of Visual LINQ in action – it gives a much better idea of what it’s like than static pictures do. There’s music, but no speech – so you won’t be missing any important information if you mute it. 

I was going to save this until it was rather more polished, but I’ve just started reading the first proofs of C# in Depth, so it’s unlikely I’ll have much time for coding in the near future. I’m too excited about the results of Monday night to keep them until the book’s done :)

After my Human LINQ experiment, I was considering how it might be possible to watch queries occurring on a computer. I had the idea of turning LINQ to Objects into WPF animations… and it just about works. The initial version took about 3 hours on Monday night, and it’s had a few refinements since. It’s very definitely not finished, but I’ll go into that in a minute.

The basic idea is that you write a nearly-normal query expression like this:

VisualSource<string> words = new VisualSource<string>
    (new[] { "the", "quick", "brown", "fox", "jumped",
             "over", "the", "lazy", "dog" },
     canvas, "words");

var query = from word in words
            where word.Length > 3
            select new { Word = word, Length = word.Length};

pipeline = query.End();

… and then watch it run. At the moment, the code is embedded in the constructor for MainWindow, but obviously it needs to be part of the UI at some point in the future. To explain the above code a little bit further, the VisualSource class displays a data source on screen, and calling End() on a query creates a data sink which will start fetching data as soon as you hit the relevant “Go!” button in the UI. Speaking of which, here’s what you see when you start it:

When you tell it to go, words descend from the source, and hit the “where” clause:

As you can see, “the” doesn’t meet the criterion of “word.Length > 3”, so the answer “False” is fading up. Fast forward a few seconds, and we’ll see the first passing result has reached the bottom, with the next word on its way down:

Results accumulate at the bottom so you can see what’s going on:

To be honest, it’s better seen “live” as an animation… but the important thing is that none of the text above is hand-specified (other than the data and the source name). If you change the query and rebuild/run, the diagram will change – so long as I’ve actually implemented the bits you use. So far, you can use:

  • select
  • where
  • group (with trivial or non-trivial elementSelector)
  • multiple “from” clauses (invoking SelectMany)

Select and Where have the most polish applied to them – they’ve got relatively fancy animations. Grouping works, but it appears to be just swallowing data until it spews it all out later – I want to build up what it’s going to return visually as it’s doing it. SelectMany could be a lot prettier than it is.

So, what’s wrong with it? Well:

  • Ordering isn’t implemented
  • Join isn’t implemented
  • GroupJoin isn’t implemented
  • The animation could do with tweaking (a lot!)
  • The code itself is fairly awful
  • The UI is unpolished (but functional) – my WPF skills are sadly lacking
  • It would be nice to lay the query out more sensibly (it gets messy with multiple sources for multiple “from” clauses)
  • Allow the user to enter a query (and their own data sources)

Despite all of that though, I’m really pleased with it. It uses expression trees to create a textual representation of the logic, then compiles them to delegates for the actual projection etc. A bit of jiggery pokery was needed to make anonymous types look nice, and I dare say there’ll be more of that to come.

Interested yet? The source code is available so you can play with it yourself. Let me know if you plan to make any significant improvements, and we could consider starting a SourceForge project for it.

My first screencast – automatic properties

I’ve recently been introduced to Dmitry Lyalin of Better Know a Framework. We’re getting together to do some screencasts, and the first one is now up. I’ll be embedding these on my C# in Depth site as well. The next one will be on object and collection initializers, then anonymous types. After that, we’ll see :)

Anyway, I’d be interested in comments. Things I know I want to do better:

  • Use a headset to get less hum on the narration (and fewer clicks etc)
  • Longer pauses between sections – they were recorded separately, but then put too close together

There’s not a lot I can do about sounding ridiculously posh, unfortunately – but don’t take that as a sign that I am posh.

I’d be particularly interested to know whether or not it’s helpful to hear the typing. I deliberately recorded the visual part with audio on (the narration was recorded before, then rerecorded after) so that you’d get the feeling this was “real” – but I don’t know whether it’s worth it or not.

Anyway, enjoy!

Human LINQ

Last night I gave a talk about C# 3 and LINQ, organised by Iterative Training and NxtGenUG. I attempted to cover all the features of C# 3 and the basics of LINQ in about an hour and a half or so. It’s quite a brutal challenge, and obviously I wasn’t able to go into much detail about anything. It went down reasonably well, but I can’t help feeling there’s a lot of room for improvement. That said, there was one part of the talk which really did go well, and made the appropriate points effectively.

I had demonstrated the following LINQ query in code:

using System;
using System.Linq;

class Test
{
    static void Main(string[] args)
    {
        var words = new[] {"the", "quick", "brown", "fox",
                "jumped", "over", "the", "lazy", "dog"};
       
        var query = from x in words
                    where x.Length > 3
                    where x[0] != 'q'
                    select x.ToUpper();
       
       
        foreach (string word in query)
        {
            Console.WriteLine(word);
        }
    }
}

I know it would have been easy to combine the two “where” clauses, but separating them helped with the second part of the demonstration.

In the pizza break, I had prepared sheets of paper – some with the words on (“the”, “quick” etc), and some with clauses (“from x in words”, “where x.Length > 3” etc). I asked for 5 volunteers, who I arranged in a line, facing the rest of the audience. I stood at “audience left” with the sheets of words, gave the next person the “from” clause, then the first “where” clause etc. The person at the far end didn’t have a sheet of paper, but they were acting as the foreach loop.

I suspect you can see what’s coming – we ran possibly the slowest LINQ query in the world. However, we did it reasonably accurately: when the next piece of information was needed, a person would turn to their neighbour and request it; the request would pass up the line until it got to me, whereupon I’d hand over a sheet of paper with a word on. If a “where” clause filtered out a word, they just dropped the piece of paper. When a word reached the far end, the guy shouted it out.

With this, I was able to illustrate:

  • Deferred execution (nothing starts happening until the foreach loop executes)
  • Streaming (only a single word was ever on the move at once)

Next we added an “orderby” clause in just before the end. Sure enough, we then saw buffering in action – the guy representing the ordering couldn’t return any data to the “select” clause until he’d actually got all the data.

Finally, we removed the “orderby” clause again, but added a call to Count(). We didn’t have time to go into a lot of detail, but I think people understood why that led to immediate execution rather than the deferred execution we had earlier.
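
All three behaviours from the demo fall out of a few lines of code. This sketch uses a deliberately chatty iterator so you can watch the evaluation order in the console output:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Test
{
    static IEnumerable<string> Words()
    {
        foreach (string word in new[] { "the", "quick", "brown", "fox" })
        {
            Console.WriteLine("yielding " + word);
            yield return word;
        }
    }

    static void Main()
    {
        // Deferred execution: building the query prints nothing.
        var query = from x in Words()
                    where x.Length > 3
                    select x.ToUpper();
        Console.WriteLine("query built - nothing yielded yet");

        // Streaming: "yielding" and "received" lines interleave,
        // one word at a time.
        foreach (string word in query)
        {
            Console.WriteLine("received " + word);
        }

        // Buffering: OrderBy can't return anything until it has read
        // the whole sequence, so this time all the "yielding" lines
        // appear before any "received" line.
        foreach (string word in query.OrderBy(x => x))
        {
            Console.WriteLine("received " + word);
        }

        // Immediate execution: Count() iterates everything right away.
        Console.WriteLine(query.Count()); // 2 - only "quick" and "brown" pass
    }
}
```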

I suspect I’m not the first person to do something like this, but I’m still really pleased with it. If you’re ever talking about LINQ and people’s eyes are glazing over, it’s a fun little demo. It wasn’t perfect though; there are things I’d change:

  • Put the upper case version of the word on the back of the paper. We had to imagine the result of the projection.
  • Having two “where” clauses is useful for the first demo, but slows things down after that.
  • Possibly use fewer words – it takes quite a while, and having been through it three times, the audience may grow impatient.
  • Explain deferred execution more in terms of the result type – it’ll make it easier to contrast with immediate execution

Overall, it was a really fun night. I did a little interview with Dave McMahon afterwards, which should go up on the NxtGenUG site at some point. I suspect I was talking rather too quickly for the whole time, but we’ll see how it pans out.

Language design, when is a language "done", and why does it matter?

As per previous posts, I’ve been thinking a fair amount about how much it’s reasonable to keep progressing a language. Not only have thoughts about C# 4 provoked this, but also a few other sources:

The video is well worth watching in its entirety – even though I wouldn’t pretend to understand everything in it. (It’s worth watching rather than just listening to, by the way – Gilad’s body language is very telling.) Here are just a few of the things which particularly caught my attention:

  • Mads, 14:45 on the “Babelification” which could occur if everyone plugs in their own type system into a pluggable language. This is similar to my concern about LISP-like macros in C#, I think.
  • Gilad, 23:40: “We’re the kind of people who love to learn new things. Most people hate to learn new things.” I don’t agree with that – but I’d say that people hate to feel they’re on a treadmill where they spent all their time learning, with no chance to actually use what they’ve learned.
  • Gilad, 28:35: “People never know when to stop [...] What happens is they do too much.”
  • Mads, 50:50: “The perfect language is the one that helps you do your task well, and that varies from task to task.”

So, what does this have to do with C#? Well, I was wondering how different people would respond when asked if C# was “done”. Eric’s certainly remarked that it’s nowhere near done – whereas prior to C# 3, I think I’d have called C# 2 “done”. C# 3 has opened my eyes a little about what might be possible – how radical changes can be made while still keeping a coherent language.

I’ve been worrying publicly about the burden of learning which is being placed on developers. I’m starting to change my thoughts now (yes, even since yesterday). I’ve started to wonder where the burden is coming from, and why it matters if C# changes even more radically in the future.

Who would make you learn or use C# 4?

Suppose the C# team went nuts, and decided that C# 4 would include:

  • x86 inline assembly
  • Optional reverse Polish notation, which could be mixed and matched with the existing syntax
  • Checked exceptions
  • Regular expressions as a language feature, but using a new and slightly different regex dialect
  • User-defined operators (so you could define the “treble clef” operator, should you wish to)
  • Making semi-colons optional, but whitespace significant. (Heck, we could remove braces at the same time – optionally.)
  • A scripting mode, where Console.WriteLine("Hello") would count as a complete program

I’m assuming that most readers wouldn’t want to use or even learn such a language. Would you do it anyway though? Bear it in mind.

Now suppose the C# team worked out ways of including significant pieces of obscure but powerful computer science into C# 4 instead. Lots to learn, but with great rewards. It’s backwardly compatible, but idiomatic C# 4 looks totally different to C# 3.

Here’s the odd thing: I’d be more comfortable with the first scenario than the second. Why? Because the first lets me get on with developing software, guilt-free. There’d be no pressure to learn a lunatic version of C# 4, whereas if it’s reasonably compelling I’ll have to find the time. It’s unlikely (in most companies anyway) that I’ll be given the time by my employers – there might be a training course if I’m lucky, but we all know that’s not really how you learn to use a language productively. You learn it by playing and experimenting in conjunction with the more theoretical training or reading. I like to learn new things, but I’m already several technologies behind.

What’s in a name?

Now consider exactly the same scenario, but where instead of “C# 4” the language is named “Gronk#”. In both cases it’s still backwardly compatible with C# 3.

Logically, the name of the language should make no difference whatsoever. But it does. As a C# developer, I feel an obligation (both personal and from my employer) to keep up with C#. If you’re a C# developer who isn’t at least looking at C# 3 at the moment, you’re likely to find yourself behind the field. Compare that with F#. I’m interested to learn F# properly, and I really will get round to it some time – but I feel no commercial pressure to do so. I’m sure that learning a functional language would benefit many developers – as much (or even more) for the gains in perspective when writing C# 3 as for the likelihood of using the functional language directly in a commercial setting. But hey, it’s not C# so there’s no assumption that it’s on my radar. Indeed, I suspect that if I polled my colleagues, many wouldn’t have even heard of F#. They’re good engineers, but they have a home life which doesn’t involve obsessing over computer languages (yeah, I find it hard to believe too), and at work we’re busy building products.

We could potentially have more “freedom” if every language release came with a completely different name. The new language would happen to be able to build the old code, but that could seem almost incidental. (It would also potentially give more room for breaking changes, but that’s a very different matter.) There’d be another potential outcome – branching.

Consider the changes I’ve proposed for C# 4. They are mere tweaks. They keep the language headed in the same direction, but with a few minor bumps removed. Let’s call this Jon#.

Now consider a language which (say) Erik Meijer might build as the successor to C# 3. I’m sure there are plenty of features from Haskell which C# doesn’t have yet. Let’s suppose Erik decides to bundle them all into Erik#. (For what it’s worth, I don’t for one moment believe that Erik would actually treat C# insensitively. I have a great respect for him, even if I don’t always understand everything he says.)

Jon# and Erik# can be independent. There’s no need for Erik# to contain the changes of Jon# if they don’t fit in with the bigger picture. Conservative developers can learn Jon# and make their lives a bit easier for little investment. Radical free thinkers can learn Erik# in the hope that it can give them really big rewards in the long run. Everyone’s happy. Innovation and pragmatism both win.

Well, sort of.

We’ve then got two language specs, two compilers, two IDE experiences, etc. That hurts. Branching gives some freedom at the cost of maintenance – as much here as in source control.

Where do we go from here?

This has been a meandering post, which is partly due to the circumstances in which I’ve written it, and partly due to the inconclusive nature of my thoughts on the matter. I guess some of the main points are:

  • Names matter – not just in terms of getting attention, but in the burden of expected learning as well.
  • Contrary to impressions I may have given before, I really don’t want to be a curmudgeonly stifler of language innovation. I just worry about unintended effects which are more to do with day to day human reality than technical achievement.
  • There are always options and associated costs – branching being one option which gives freedom at a high price

I really don’t have a good conclusion here – but I hope plenty of people will spare me their thoughts on this slightly non-technical matter as readily as they have about specific C# 4 features.

C# 4, part 4: My manifesto and wishlist

The final part of this little series is the one where I suggest my own ideas for C# 4, beyond those I’ve already indicated my approval for in earlier posts. Before I talk about individual features, however, I’d like to put forward a manifesto which could perhaps help the decision-making process. I hasten to add that I haven’t run all the previous parts through this manifesto to make sure that I’ve been consistent, but all of these thoughts have been running around in my head for a while so I hope I haven’t been wildly out.

Manifesto for C# 4

I would welcome the following goals:

Remember it’s C#

Many suggestions have been trying to turn C# into either Ruby, LISP, or other languages. I welcome diversity in languages, and I believe in using the right tool for the job – but that means languages should stick to their core principles, too. Now, I know that sounds like I might be bashing C# 3, given how much that has borrowed from elsewhere for lambda expressions and the like, and I don’t know exactly how I square that circle internally – but I don’t want C# to become a giant toolbox that every useful feature from every language in existence is dumped into.

There are useful ideas to think about from all kinds of areas – not just existing languages – but I’d be tempted to reject them if they just don’t fit into C# neatly without redefining the whole thing.

Consider how people will learn it

I’ve mentioned this before, but I am truly worried about people learning C# 3 from scratch. One of the reasons I didn’t attempt to write about C# from first principles, instead assuming knowledge of C# 1, is that I’m not sure people can sensibly learn it that way. Now, I don’t think I can sensibly get inside the head of someone who doesn’t know anything about C#, but I suspect that I’d want to cover query expressions right at the very end, preferably after quite a while of experience in C# without them.

That might not go for every new feature – it’s probably worth knowing about automatic properties right from the start, for instance, and introducing lambda expressions at the same time as anonymous methods (if C# 3 is the definite goal) but expression trees would be pretty late in my list of topics.

I learned C# 1 from a background of Java, and it didn’t take long to understand the basics of the syntax. Many of the subtleties took a lot longer of course (and it was a very long time before I really understood the differences between events and delegates, I’m sad to say) but it wasn’t a hard move. For a long time C# 2 just meant generics as far as I was concerned – with occasional nullable types, and some of the simpler features such as differing access for getters and setters. Anonymous methods and iterator blocks didn’t really hit me in terms of usefulness until much later – possibly due to using closures in Groovy and more iterators in LINQ. I suspect for many C# 2 developers this is still the case.

My method of learning C# 3 (from the spec, often in rather more detail than normal for the sake of accuracy in writing) is sufficiently far off the beaten track as to make it irrelevant for general purposes, but I wonder how people will learn it in reality. How will it be taught in universities (assuming there are any that teach C#)? How easily will developers moving to C# from Java cope? How about from other languages?

Interestingly, the move from VB 9 to C# 3 is now probably easier than the move from Java 6 to C# 3. Even with the differences in generics between Java and C#, that probably wasn’t true with C# 2.

To get back to C# 4, I’d like the improvements to somehow blend in so that learning C# 4 from scratch isn’t a significantly different experience to learning C# 3 from scratch. It’s bound to be slightly longer as there will certainly be new features to learn – but if they can be learned alongside existing features, I think that’s a valuable asset. It’s also worth considering how much investment will be required to learn C# 4 from a position of understanding C# 3. Going from C# 2 to C# 3 is a significant task – but it’s one which involves a paradigm shift, and of course the payoffs are massive. I’d be very surprised (and disappointed) to see the same level of change in C# 4, or indeed the same level of payoff. Conservative though this is, I’m after “quick wins” for developers – even if in some cases such as covariance/contravariance the win is far from quick from the C# design/implementation team’s perspective.

Just to put things into perspective: think about how many new technologies developers have been asked to learn in the last few years – WCF, WPF, Workflow Foundation, Cardspace, Silverlight, AJAX, LINQ, ClickOnce and no doubt other things I’ve forgotten. I feel a bit like Joseph II complaining that Mozart had written “too many notes” – and if you asked me to start exorcising any of these technologies from history I’d be horrified at the prospect. That doesn’t actually make it any easier for us though. I doubt that the pace of change is likely to slow down any time soon in terms of technologies – let’s not make developers feel like complete foreigners in a language they were happy with.

Keep the language spec understandable

I know there aren’t many people who look at the language spec, but the C# spec has historically been very, very well written. It’s usually clear (even if it can be hard to find the right section at times) and leaves rigidly defined areas of doubt and uncertainty in appropriate places, while stamping out ambiguity in others. The new unified spec is a great improvement over the “C# 1 … and then the new bits in C# 2” approach from before. However, it’s growing at a somewhat alarming rate. As it grows, I expect it to become harder to read as a natural consequence – unless particular effort is put into countering that potential problem.

Stay technology-neutral…

Okay, this one is aimed fairly squarely at VB9. I know various people love XML literals, but I’m not a fan. It just feels wrong to have such close ties with a particular technology in the actual language, even one so widely used as XML. (Of course, there’s already a link between XML and C# in terms of documentation comments – but that doesn’t feel quite as problematic.)

My first reaction to LINQ (before I understood it) was that C# was being invaded by SQL. Now that I’ve got a much better grasp of query expressions, I have no concern in that area. Perhaps it would be possible to introduce a new hierarchy construct which LINQ to XML understands with ease – or adapt the existing object/collection initializers slightly for this purpose. With some work, it may be possible to do this without restricting it to XML… I’m really just blue-skying though (and this isn’t a feature on my wishlist.)

… but bear concurrency in mind

While I don’t like the idea of tying C# to any particular technology, I think the general theme of concurrency is going to be increasingly important. That’s far from insightful – just about every technology commentator in the world is predicting a massively parallel computing landscape in the future. Developers won’t be able to get away with blissful ignorance of concurrency, even if not everyone will need to know the nuts and bolts.

Make it easier to do the right thing

This is effectively encouraging developers to “fall into the pit of success”. Often best practices are ignored as being inconvenient or impractical at times, and I’m certainly guilty of that myself. C# has a good history of enabling developers to do the right thing more easily as time progresses: automatic properties, iterator blocks and allowing different getter/setter access spring to mind as examples.

In some ways this is the biggest goal in this manifesto. It’s certainly guided me in terms of encouraging mixin and immutability support, ways of simplifying parameter/invariant checking, and potentially IDisposable implementation. I like features which don’t require me to learn whole new ways of approaching problems, but let me do what I already knew I should do, just more easily.

Wishlist of C# 4 features

With all that out of the way, what would I like to see in C# 4? Hopefully from the above you won’t be expecting anything earth-shattering – which is a good job, as all of these are reasonably minor modifications. Perhaps we could call it C# 3.5, shipping with .NET 4.0. That would really make life interesting, as people are already referring to C# 3 as C# 3.5 (and C# 2008)…

Readonly automatic properties

I’ve mentioned this before, but I’ll give more details here. I’d like to be able to specify readonly instead of protected/internal/private for the setter access, which would:

  • Mark the autogenerated backing variable as initonly in IL
  • Prevent code outside the constructor from setting the property

So, for example:

 

class ReadOnlyDemo
{  
    public string Name { get; readonly set; }

    public ReadOnlyDemo(string name)
    {
        // Valid
        Name = name;
    }
   
    public void TryToSetName(string newName)
    {
        // Invalid
        Name = newName;
    }
}

This would make it easier to write genuinely (and verifiably, as per Joe’s post) immutable classes, or just immutable parts of classes. As mentioned in previous comments, there could be interesting challenges around serialization and immutability, but frankly they really need to be addressed anyway – immutability is going to be one part of the toolkit for concurrency, whether it has language support or not. In the generated IL the property would only have a getter – calls to the setter in the constructor would be translated into direct variable sets.

This shouldn’t require a CLR change.
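For comparison, here’s a minimal sketch of the boilerplate the proposed feature would replace – this is plain C# 3 today, with the readonly backing field and getter written out by hand:

```csharp
// What C# 3 requires today for the same guarantee: an explicit
// readonly (initonly in IL) backing field plus a manual getter-only
// property, assigned only in the constructor.
class ReadOnlyDemo
{
    private readonly string name;

    public string Name
    {
        get { return name; }
    }

    public ReadOnlyDemo(string name)
    {
        this.name = name;
    }
}
```

The proposal is essentially asking the compiler to generate exactly this shape from the much shorter automatic-property syntax.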

Property-scoped variables

I’ve been suggesting this (occasionally) for a long time, but it’s worth reiterating. Every so often, you really want to make sure that no code messes around with a variable other than through a property. This can be solved with discipline, of course – but historically we don’t have a good record on sticking to discipline. Why not get the compiler to enforce the discipline? I would consider code like this:

 

public class Person
{
    public int Age
    {
        // Variable is not accessible outside the Age property
        int age;
        get
        {
            return age;
        }
        set
        {
            if (value < 0 || value > SomeMaxAgeConstant)
            {
                throw new ArgumentOutOfRangeException();
            }
            age = value;
        }
    }

    public void SetAgeNicely(int value)
    {
        // Perfectly legal
        Age = value;
    }

    public void SetAgeSneakily(int value)
    {
        // Compilation error!
        age = value;
    }
}

 

Just in case Eric’s reading this: yes, having Age as a property of a person is a generally bad idea. Specifying a date of birth and calculating the age is a better idea. Really, don’t use this code as a model for a Person type. However, treat it as a dumb example of a reasonable idea. I need to find myself a better type to use as my first port of call when finding an example…

The variable name would still have to be unique – it would still be the name generated in the IL, for instance. Multiple variables could be declared if required. The generated code could be exactly the same as that of existing code which happened to only use the property to access the variable.

A couple of potential options:

  • The variables could be directly accessible during the constructor, potentially. This would help with things like serialization.
  • Likewise, potentially an attribute could be applied to other members which needed access to the variables. Bear in mind that we’re only trying to save developers from themselves (and their colleagues). We’re not trying to cope with intruders in a security sense. An active “I know I’m violating my own rules” declaration should cause enough discomfort to prevent the accidental misuse we’re trying to avoid.

This shouldn’t require a CLR change.

Extension properties

This has been broadly talked about, particularly in view of fluent interfaces. It feels to me that there are two very different reasons for extension properties:

  1. Making fluent interfaces prettier, e.g. 19.June(1976) + 8.Hours + 20.Minutes instead of 19.June(1976) + 8.Hours() + 20.Minutes()
  2. Genuine properties, which of course couldn’t add new state to the extended type, but could access it in a different way.

Point 1 feels like a bit of a hack, I must admit. It’s using properties not because the result is “property-like” but because we want to miss the brackets off. It’s been pointed out to me that VB already allows this, and that by allowing brackets to be missed out for parameterless methods we could achieve the same effect – but that just feels wrong. Arguably fluent interfaces already mess around with the normal conventions of what methods do and how they’re named, so using properties like this probably isn’t too bad.

Point 2 is a more natural reason for extension properties. As an example, consider a type which exposes a Size property, but not Width or Height. Changing either dimension individually requires setting the Size to a new one with the same value for the other dimension – this is often much harder to read than judicious use of Height/Width. I suspect that extension properties would actually be used for this reason less often than for fluent interfaces, but there may be any number of interesting uses I haven’t thought of.
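As a rough sketch of why point 1 wants extension properties, here’s how the fluent time helpers have to be written in C# 3 – as extension methods, forcing the brackets at the call site. (TimeExtensions and its members are invented here for illustration; they’re not part of any shipped library.)

```csharp
using System;

static class TimeExtensions
{
    // C# 3 only allows extension *methods*, so the call site has to
    // read 8.Hours() rather than the property-like 8.Hours.
    public static TimeSpan Hours(this int value)
    {
        return TimeSpan.FromHours(value);
    }

    public static TimeSpan Minutes(this int value)
    {
        return TimeSpan.FromMinutes(value);
    }
}

// Today:              someDate + 8.Hours() + 20.Minutes()
// With the proposal:  someDate + 8.Hours + 20.Minutes
```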

This shouldn’t require a CLR change, but framework changes may be required.

Extension method discovery improvements

I’ve made it clear before now that the way extension methods are discovered (i.e. with using directives which import all the extension methods of all the types within the specified namespace) leaves much to be desired. I don’t like trying to reverse bad decisions – it’s pretty hard to do it well – but I really feel strongly about this one. (Interestingly, although I’ve heard many people criticising this choice, I don’t actually remember hearing the C# team defending it. Given that reservations were raised back in 2005, when there was still plenty of time to change stuff, I suspect there are reasons no-one’s thought of. I’d love to hear them some time.)

The goal would be to change from discovering extensions at a namespace level to discovering extensions at a type level. (By which I mean at a “type containing extension methods” level – e.g. System.Linq.Enumerable or System.Linq.Queryable. Admittedly discovery on a basis which explicitly specifies the type to extend would also be interesting.) I don’t mind exactly how the syntax works, but the usual ideas are ones such as:

 

static using System.Linq.Enumerable;
using static System.Linq.Enumerable;
using class System.Linq.Enumerable;

 

That’s the easy part – the harder part would be working out the best way to phase out the “old” syntax. I would suggest a warning if extension methods are found and used without being explicitly mentioned by type. In C# 4 this could be a warning which was disabled by default (but could be enabled with pragmas or command line switches), then enabled by default in C# 5 (but with the same options available, this time to disable it). By C# 6 we could perhaps remove the ability to discover extension methods by namespace altogether, so the methods just wouldn’t be found any more.

The C# team could be more aggressive than this, perhaps skipping the first step and making it an enabled warning from C# 4 – but I’m happy to leave that kind of thing to them, without paying it much more attention. I know how seriously they take breaking changes.

No CLR changes required as far as I can see.

Implicit “select” at end of query expressions

I can’t say I’ve used VB9’s LINQ support, but I’ve heard about one aspect which has some appeal. In C# 3, every query expression ends with either a “select” clause or a “group … by” clause. The compiler is actually smart enough to ignore a redundant select clause (except for degenerate query expressions) and indeed the language spec makes this clear. So why require it in the query expression in the first place? As a concrete example of before/after:

 

// Before
var query = from user in db.Users
            where user.Age > 18
            orderby user.Name
            select user;

// After
var query = from user in db.Users
            where user.Age > 18
            orderby user.Name;

 

This isn’t a massive deal, but it would be quite neat. I worry slightly that there could be significant costs in terms of the specification complexity, however.

Internal members on internal interfaces

Interfaces currently only ever have public members, even if the interfaces themselves are internal. This means that implementing an internal interface in an internal class still means making the implementing method public, or using explicit interface implementation (which imposes other restrictions, particularly in terms of overriding). It would be nice to be able to make members internal when the interface itself is internal – either explicitly or implicitly. Implementing such members publicly would still be allowed, but you could choose to keep the implementation internal if desired.
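A small sketch of the current restriction (types invented for illustration):

```csharp
internal interface IWorker
{
    void DoWork();
}

internal class Worker : IWorker
{
    // Must be public even though both the interface and the class
    // are internal...
    public void DoWork() { }

    // ...the only alternative being explicit interface implementation,
    // which (among other restrictions) can't be marked virtual and so
    // can't be overridden:
    // void IWorker.DoWork() { }
}
```

Under the proposal, DoWork could simply be declared internal here, matching the visibility of the interface itself.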

This may require a CLR change – not sure.

“Namespace+assembly” access restriction

It’s not an uncommon request on the C# newsgroup for the equivalent of C++’s “friend” feature – where two classes have a special relationship. In many ways InternalsVisibleTo is an assembly-wide version of this feature, but I can certainly see how it would be nice to have a slightly finer grained version. Sometimes two classes are naturally tightly coupled, even though they have distinct responsibilities. Although loose coupling is generally accepted to be a good thing, it’s not always practical. At the same time, giving extra access to all the types within the same assembly can be a little bit much.

Instead of specifying particular types to share members with, I’d propose a new access level, which would make appropriately decorated members available to other types which are both within the same assembly and within the same namespace. This would be similar to Java’s “package level” access (the default, for some reason) except without the implicit availability to derived types. (Java’s access levels and defaults are odd to say the least.)
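A purely hypothetical sketch of what this might look like – the modifier spelling below is invented for illustration, and nothing like it exists in C#:

```csharp
namespace Storage
{
    public class Cache
    {
        // Hypothetical access modifier: callable only from types that
        // are BOTH in this assembly AND in the Storage namespace -
        // narrower than internal, without Java's implicit access for
        // derived types.
        internal namespace void Flush() { }
    }
}
```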

(Of course, this wouldn’t help in assemblies which consisted of types within a single namespace.)

This would almost certainly require a CLR change.

InternalsVisibleTo simplification for strongly named assemblies

This one’s just a little niggle. In order to use the InternalsVisibleToAttribute to refer to a strongly named assembly (which you almost always have to do if the declaring assembly is strongly named), you have to specify the public key. Not the public key token as the documentation claims, but the whole public key. Not only that, but you can’t have any whitespace in it – so you can’t use a verbatim string literal to easily put it in a block. Instead, you either have to have the whole thing on one line, or use compile-time string concatenation to make sure the key is still unbroken.
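In practice that means something like the following, using compile-time concatenation of string constants to keep the key readable. (The assembly name is invented and the key is truncated – it’s purely illustrative.)

```csharp
using System.Runtime.CompilerServices;

// The PublicKey value must be the *full* public key with no whitespace
// anywhere in the string, so a verbatim literal spanning several lines
// won't work; constant concatenation is the usual workaround.
[assembly: InternalsVisibleTo(
    "FriendAssembly, PublicKey="
    + "0024000004800000940000000602000000240000525341310004000001000100"
    + "...")]
```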

It’s not often you need to look at the assembly attributes, so it’s far from a major issue – but it’s a mild annoyance which could be fixed with very few downsides.

This may require a CLR change – not sure.

Is that all?

I suspect that soon after posting this, I’ll think of other ideas. Some may be daft, some may be more significant than these, but either way I’ll do a new post for new ideas, rather than adding to this one. I’ll update this one for typos, further explanations etc. I suspect if I don’t post this now I’ll keep tweaking it for hours – which is relatively pointless as I’m really trying to provoke discussion rather than presenting polished specification proposals.