
Creating Strong-typed Metadata Classes

This post is about an aspect of the CodeFirstMetadata library. You can find out more about this library and where to get it here and here.

You can find out more about strong-typed metadata classes in this post.

You can find out about code-first (generalized, not Entity Framework) here.

This post talks about the two existing examples to explain how strong typing works in real code and to show how instances of these examples are created.

At present, in order to create a set of strong typed classes to solve a new problem you need to create a fairly messy set of classes. Feel free to ping me if you think you have a good problem or you want to extend the existing problems and I’ll help guide you. In the long run I want to automate that process, so I probably won’t document it until then.

Because part of it will be automated/generated, the code comes in two parts. I’m currently combining them with inheritance, rather than partial classes, to make this code approachable for non-.NET programmers, and because virtual/override are simpler concepts.

These classes all derive from a common base class – CodeFirstMetadata<T> – to provide common features like naming. Below this are code element specific classes like CodeFirstMetadataClass<T> that help with the conversion. I may later replace this with a shallow hierarchy and interfaces, so don’t get dependent on this implementation.
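As a rough sketch (this is not the actual library code, which has more members and will likely change), the shape of that hierarchy is something like this:

using System.Collections.Generic;

public abstract class CodeFirstMetadata<T>
{
   // Common features, like naming, live at this level.
   public string Name { get; set; }
}

public abstract class CodeFirstMetadataClass<T> : CodeFirstMetadata<T>
{
   // Helpers for metadata that comes from a class. These particular members
   // are assumptions based on the sample further down, which uses Namespace,
   // ClassName and ImplementedInterfaces.
   public string Namespace { get; set; }
   public string ClassName { get; set; }
   public IEnumerable<string> ImplementedInterfaces { get; set; }
}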

For a semantic log, the predictable part of the class (the part I’ll later generate) looks like:

using System.Collections.Generic;
using CodeFirst.Common;

namespace CodeFirstMetadataTest.SemanticLog
{
   // TODO: Generate this base class based on expected attributes
   public abstract class CodeFirstSemanticLogBase : CodeFirstMetadataClass<CodeFirstSemanticLog>
   {
      public CodeFirstSemanticLogBase()
      {
         this.Events = new List<CodeFirstLogEvent>();
      }

      public virtual string UniqueName { get; set; }
      public virtual string LocalizationResources { get; set; }

      public IEnumerable<CodeFirstLogEvent> Events { get; private set; }
   }
}






The manual changes I’ve made, which are by far the most complex I’ve needed so far, are:



using System.Linq;
// TODO: Attempt to remove this line after generating base class

namespace CodeFirstMetadataTest.SemanticLog
{
   public class CodeFirstSemanticLog : CodeFirstSemanticLogBase
   {
      private string _uniqueName;
      public override string UniqueName
      {
         get
         {
            if (string.IsNullOrWhiteSpace(_uniqueName))
            { return Namespace.Replace(".", "-") + "-" + ClassName; }
            return _uniqueName;
         }
         set
         { _uniqueName = value; }
      }

      public bool IncludesInterface
      { get { return this.ImplementedInterfaces.Count() > 0; } }

      public bool IsLocalized
      { get { return !string.IsNullOrWhiteSpace(this.LocalizationResources); } }

      public override bool ValidateAndUpdateCore()
      {
         var isOk = base.ValidateAndUpdateCore();
         if (isOk)
         { return CheckAndUpdateEventIds(); }
         return false;
      }

      /// <summary>
      /// This is a weird algorithm because it numbers implicit events from
      /// the top, regardless of whether other events have event IDs. But
      /// while I wouldn't have chosen this, I think it's important to match
      /// EventSource implicit behavior exactly.
      /// </summary>
      private bool CheckAndUpdateEventIds()
      {
         var i = 0;
         foreach (var evt in this.Events)
         {
            i++;
            if (evt.EventId == 0) evt.EventId = i;
         }
         // PERF: The following is an O(n^2) algorithm, probably a better way
         var dupes = this.Events
             .Where(x => this.Events
                 .Any(y => (y != x) && x.EventId == y.EventId));
         return (dupes.Count() == 0);
      }
   }
}
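As an aside, the duplicate check flagged with the PERF comment could probably be replaced with a GroupBy. A sketch (not part of the library) might look like:

private bool HasNoDuplicateEventIds()
{
   // Group events by EventId; any group with more than one entry is a duplicate.
   return this.Events
       .GroupBy(x => x.EventId)
       .All(g => g.Count() == 1);
}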






EventSource, and presumably any other log system, requires a unique name, and I want to help you create that. Also, whether there is an interface and whether the class is localized have a significant impact on the template, so I simplify access to this information.



Loading strong-typed metadata is an opportunity for validation of the model. I use this to provide unique numeric ids to each of the log events, which are needed by EventSource and potentially other log mechanisms.



Mapping Between Code-first and Strong-typed Metadata



A bunch of ugly Roslyn and reflection code maps between code-first and strong typed metadata. This is the code that drove creation of the RoslynDom library – directly hitting the .NET Compiler Platform/Roslyn API within this code was monstrous.



var root = RDomFactory.GetRootFromFile(fileName);
var cfNamespace = root.Namespaces.First();
var returnType = typeof(T);
var mapping = TargetMapping.DeriveMapping("root", "root", returnType.GetTypeInfo()) as TargetNamespaceMapping;
var mapper = new CodeFirstMapper();
var newObj = mapper.Map(mapping, cfNamespace);



  • cfNamespace is the first namespace in the RoslynDom root
  • T is the type to return – the strong-typed metadata
  • mapping holds derived data about the mapping to the target type – just create it as shown
  • mapper is the class that does the hard work
  • newObj is the new strong-typed metadata object


In the end, you have an object that is the strong-typed metadata for the initial code.
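If it helps, the whole sequence collapses into a small generic helper. This is just a sketch built from the calls shown above (it assumes the usual usings – System.Linq, System.Reflection and the RoslynDom/CodeFirst namespaces), and the exact signatures may differ in the library:

public static T LoadMetadata<T>(string fileName) where T : class
{
   // Parse the file into a RoslynDom tree and grab the first namespace.
   var root = RDomFactory.GetRootFromFile(fileName);
   var cfNamespace = root.Namespaces.First();

   // Derive the mapping for the target type and run the mapper.
   var mapping = TargetMapping.DeriveMapping("root", "root",
       typeof(T).GetTypeInfo()) as TargetNamespaceMapping;
   var mapper = new CodeFirstMapper();
   return mapper.Map(mapping, cfNamespace) as T;
}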



OK, but how does that work?



For metaprogramming:



  • I create a minimal description in a file with a .cfcs extension
  • I lie to Visual Studio and tell it this is a C# file (Tools/Options/Text Editor/File Extensions), so I get nice IntelliSense for most features (more work to be done later)
  • MSBuild doesn’t see it as a C# file, so the .cfcs files are ignored as source in compilation
  • Generation creates .g.cs files that are included in compilation


The intent is to have this automated as part of your normal development pipeline, through one or more mechanisms – build, custom tools, VS extension/PowerShell. The pipeline part is not done yet, but you can grab the necessary pieces from the console application in the example.



Getting CodeFirstMetadata



You can get this project on GitHub. I’ll add this to NuGet when the samples are more accessible from your Visual Studio project.

RoslynDom and Friends – Just the Facts

See this post for the Roadmap of these projects

RoslynDom

A wrapper for the .NET Compiler Platform – the roadmap has further plans

Project on GitHub

See the RoslynDomExampleTests project in the solution for the 20 things you’re most likely to do

Download via Visual Studio NuGet Package Manager if you want to play with that

RoslynDom-Provider

By Jim Christopher

A PowerShell provider for Roslyn Dom

Project on GitHub

CodeFirstMetadata

Strong-typed metadata from code-first (general sense, not Entity Framework sense)

Project on GitHub

See the ConsoleRunT4Example project in the solution along with strong-typed files and T4 usage

Roadmap for RoslynDom, CodeFirst Strong-typed Metadata and ExpansionFirst Templates

I’ve been working on three interleaved projects: RoslynDom, CodeFirst Strong-typed Metadata and ExpansionFirst Templates. Also, Jim Christopher (aka beefarino) built a PowerShell provider. This post is an overview of these projects and a roadmap of how they relate to each other.


You can find the short version here.


[Roadmap diagram]


In the roadmap, blue indicates full (almost) test coverage and that the library has had more than one user, orange indicates preliminary released code, and grey indicates code that’s really not ready to go and not yet available.


I’m working left to right, waiting to complete some features of the RoslynDom library until I have the full set of projects available in preliminary form.


RoslynDom Library


The .NET Compiler Platform, or Roslyn, does exactly what it was intended to do, which is exactly what we want it to do. It’s a very good compiler, now released as open source, and exposing all of its internals. It’s great that we get access to the internal trees, but it’s not happy code for you and me to use – it’s compiler internals.


At the same time, these trees hold a wealth of information we want – it’s more complete information than reflection, holds design information like comments and XML documentation, and it’s available even when the source code doesn’t compile.


When you and I ask questions about our code, we ask simple things – what are the classes in this file? We don’t care about whitespace, or precisely how we defined namespaces. In fact, most of the time, we don’t even care about namespaces at all. And we certainly don’t care whether a piece of information is available in the syntactic or semantic tree or whether attributes were defined with this style or that style.


RoslynDom wraps the Roslyn compiler trees and exposes the information in a programmer friendly way. Goals include


  • Easy access to the tree in the way(s) programmers think about code as a hierarchy
  • Easy access to common information about the code as parameters
  • Access to the applicable SyntaxNode when you need it
  • Access to the applicable Symbol when you need it
  • Planned: Access to the full logical model – solution to smallest code detail
    (Currently, file down to member)
  • Planned: A kludged public annotation/design time attribute system until we get a real one
    (Currently, attribute support only)
  • Planned: Ability to morph and output changes
    (Currently, readonly)

Getting RoslynDom


You can get the source code on GitHub, and there’s a RoslynDomExampleTests project which shows how to do about 20 common things.
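For a flavor of the kind of question RoslynDom is meant to answer ("what are the classes in this file?"), here is a sketch. GetRootFromFile and Namespaces appear elsewhere in this post; the Classes and Name members are my assumptions about the API, so check the example tests for the real shape:

var root = RDomFactory.GetRootFromFile("MyCode.cs");
foreach (var ns in root.Namespaces)
{
   foreach (var cls in ns.Classes)
   {
      Console.WriteLine(ns.Name + "." + cls.Name);
   }
}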


The project is also available via NuGet. It’s preliminary, use cautiously. Download with the Visual Studio NuGet package manager.


RoslynDom-Provider


Jim Christopher created a PowerShell provider for RoslynDom. PowerShell providers allow you to access the underlying tree of information in the same way you access the file system. IOW, you can mount your source code as though it was a drive.


I’m really happy about the RoslynDom-Provider. It shows one way to use a .NET Compiler Platform/library to access the information that’s otherwise locked into the compiler trees. It’s also another way for you to find out about the amazing power of PowerShell providers. If you’re new to PowerShell, and you’re a Pluralsight subscriber, check out “Discovering PowerShell with Mark Minasi”. It uses Active Directory as the underlying problem and a few parts may be slow for a developer, but it will give you the gist of it. Follow up with Jim Christopher’s “Everyday PowerShell for Developers” and “PowerShell Gotchas.” If you’d rather read, there are a boatload of awesome books including PowerShell Deep Dives and Windows PowerShell for Developers, and too many Internet sites for me to keep straight.


Getting RoslynDomProvider


This project is available on GitHub.


Code-first Strong-typed Metadata


You can find out more about strong-typed metadata here and code-first strong-typed metadata here.


As a first step, I have samples in runtime T4. These run from the command line at present. These templates inherit from a generic base class that has a property named Meta. This property is typed to the underlying strong-typed metadata item – in the samples either CodeFirstSemanticLog or CodeFirstClass. The EventSource template and problem are significantly more complex, but avoid the extra mind twisting of a strong-typed metadata class that describes a class. These templates are preliminary and do not handle all scenarios.
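A rough sketch of that base class idea (the real CodeFirstT4CSharpBase may well look different) is:

public abstract class CodeFirstT4CSharpBase<T> where T : CodeFirstMetadata<T>
{
   // The strong-typed metadata item the template renders from. Because it is
   // typed to T (CodeFirstSemanticLog in the sample), the template gets
   // IntelliSense for problem-specific members like IncludesInterface.
   public T Meta { get; set; }

   // Runtime T4 templates expose a method like this to produce the output.
   public abstract string TransformText();
}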



Metaprogramming


While there are a couple of ways to solve a metaprogramming expansion or code-first problem, I’ve settled on an alternate file extension. The code-first minimal description is in a file with a .cfcs extension. Because I lie to Visual Studio and tell it that this is a C# file (Tools/Options/Text Editor/File Extensions), I get nice IntelliSense for most features (more work to be done later). But because MSBuild doesn’t see it as a C# file, the .cfcs file is ignored as a source file in compilation.


Generation produces an actual source code file in a file with a .g.cs extension. This file becomes part of your project. This is the “real” code and you debug in this “real” code because it’s all the compiler and debugger know about. As a result


- You write only the minimal code that only you can write


- You understand your application through either the minimal or expanded code


- You easily recognize expanded code via a .g.cs extension


- You can place the minimal and expanded code side by side to understand the expansion


- You debug in real code


- You protect the generated code by allowing only the build server to check in these files


Again this happens because there are two clearly differentiated files in your project – the .cfcs file and the .g.cs file.


The intent is to have this automated as part of your normal development pipeline, through one or more mechanisms – build, custom tools, VS extension/PowerShell. The pipeline part is not done yet, but you can grab the necessary pieces from the console application in the example.


You can also find more here.



Getting CodeFirstMetadata


You can get this project on GitHub.


I’ll add this to NuGet when the samples are more accessible from your Visual Studio project.


ExpansionFirst Templates


T4 has brought us a very long way. It and CodeSmith have had the lion’s share of code generation templating in the .NET world for about a decade. I have enormous respect for people like Gareth Jones who wrote it and kept it alive and Oleg Sych who taught so many people to use it. But I think it’s time to move on. Look for more upcoming on this – my current bits are so preliminary that I’ll wait to post.


Summary


I look forward to sharing the unfinished pieces of this roadmap in the coming weeks and months.


I’d like to offer a special thanks to the folks in my April DevIntersection workshop. The challenges of explaining the .NET Compiler Platform/Roslyn pieces to you led me to take a step back and isolate those pieces from the rest of the work. While this put me way behind schedule, in the end I think it’s valuable both in simplifying the metaprogramming steps and in offering a wrapper for the .NET Compiler Platform/Roslyn.

Code-first Metadata

This is “code first” in the general sense, not the specific sense of Entity Framework. This has nothing to do with Entity Framework at all, except that team showed us how valuable simple access to code-like metadata is.


Code first is a powerful mechanism for expressing your metadata because code is the most concise way to express many things. There are 60 years of evolution behind today’s computer languages being efficient at expressing explicit concepts with natural contextualization. You can’t get this in JSON, XML or other richer and less-opinionated formats.


Code first is just one approach to getting strong-typed metadata. The keys to the kingdom, the keys to your code, lie in expressing the underlying problems of your code in a strong-typed manner, which you can read about here.


The problem is that the description of the problem is wrapped up with an enormous amount of ceremony about how to do what we’re trying to do. Let’s look at this in relation to metaprogramming where the goal is generally to reduce ceremony and


Only write the code that only you can write



In other words, don’t write any code that isn’t part of the minimum definition of the problem, divorced of all technology artifacts.


For example, you can create a SemanticLog definition that you can later output as an EventSource class, or any other kind of log output – even in a different language or on a different platform.


To do this, describe the SemanticLog in the simplest way possible, devoid of technology artifacts.


namespace ConsoleRunT4Example
{
   [SemanticLog()]
   public class Normal
   {
      public void Message(string Message) { }

      [Event(2)]
      public void AccessByPrimaryKey(int PrimaryKey) { }
   }
}

Instead of the EventSource version:


using System;
using System.Diagnostics.Tracing;

namespace ConsoleRunT4Example
{
   [EventSource(Name = "ConsoleRunT4Example-Normal")]
   public sealed partial class Normal : EventSource
   {
      #region Standard class stuff
      // Private constructor blocks direct instantiation of class
      private Normal() { }

      // Readonly access to cached, lazily created singleton instance
      private static readonly Lazy<Normal> _lazyLog =
          new Lazy<Normal>(() => new Normal());
      public static Normal Log
      {
         get { return _lazyLog.Value; }
      }

      // Readonly access to private cached, lazily created singleton inner class instance
      private static readonly Lazy<Normal> _lazyInnerlog =
          new Lazy<Normal>(() => new Normal());
      private static Normal innerLog
      {
         get { return _lazyInnerlog.Value; }
      }
      #endregion

      #region Your trace event methods
      [Event(1)]
      public void Message(System.String Message)
      {
         if (IsEnabled()) WriteEvent(1, Message);
      }

      [Event(2)]
      public void AccessByPrimaryKey(System.Int32 PrimaryKey)
      {
         if (IsEnabled()) WriteEvent(2, PrimaryKey);
      }
      #endregion
   }
}

Writing less code (10 lines instead of 47) because we are lazy is a noble goal. But the broader benefit here is that the first requires very little effort to understand and very little trust about whether the pattern is followed. The second requires much more effort to read the code and ensure that everything in the class is doing what’s expected. The meaning of the code requires that you know what an EventSource is.


Code-first allows you to just write the code that only you can write, and leave it to the system to create the rest of the code based on your minimal definition.

Strong-typed Metadata

Your code is code and your code is data.

Metaprogramming opens up worlds where you care very much that your code is data. Editor enhancements open up worlds where you care very much that your code is data. Visualizations open up worlds where you care very much that your code is data. And I think that’s only the beginning.

There’s nothing really new about thinking of code as data. Your compiler does it, metaprogramming techniques do it, and delegates and functional programming do it.

So, let’s make your code data. Living breathing strongly-typed data. Strong typing means describing the code in terms of the underlying problem and providing this view as a first class citizen rather than a passing convenience.

Describing the Underlying Problem

I’ll use logging as an example, because the simpler problem of PropertyChanged just happens to have an underlying problem of classes and properties, making it nearly impossible to think about with appropriate abstractions. Class/property/method is only interesting if the underlying problem is about classes, properties and methods.

The logging problem is not class/method – it’s log/log event. When you strongly type the metadata to classes that describe the problem being solved you can reason about code in a much more effective manner. Alternate examples would be classes that express a service, a UI, a stream or an input device like a machine.
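For example, the metadata for a single log event might look roughly like the following. EventId appears in the SemanticLog code earlier in this post; the Name property is an illustrative assumption, and the real CodeFirstLogEvent class certainly has more members:

public class CodeFirstLogEvent
{
   // Illustrative: the name of the log event (the method name in the code-first class).
   public string Name { get; set; }

   // 0 means "no explicit id"; the SemanticLog class assigns one during validation.
   public int EventId { get; set; }
}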

I use EventSource for logging, but my metadata describes the problem in a more generalized way – it describes it as a SemanticLog. A SemanticLog looks like a class, and once you create metadata from it, you can create any logging system you want.

Your application has a handful of conceptual groups like this. Each conceptual group has a finite set of appropriate customizations. Your application problem also has a small number of truly unique classes.

Treating Metadata as a First Class Citizen

In the past, metadata has been a messy affair. The actual metadata description of the underlying patterns of your application has been sufficiently difficult to extract that you’ve had no reason to care. Thus, tools like the compiler that treated your code as data simply created the data view they needed and tossed it out as rubbish when they were done.

The .NET Compiler Platform, Roslyn, stops throwing away its data view. It exposes it for us to play with.

Usage Examples

I’m interested in strongly typed metadata to write templates for metaprogramming. I want these templates to be independent of how you are running them – whether they are part of code generation, metaprogramming, a code refactoring or whatever. I also want these templates to be independent of how the metadata is loaded.

Strongly typed metadata works today in T4 templates. My CodeFirstMetadata project has examples.

I’m starting work on expansion first templates and there are many other ways to use strong-typed metadata – both for other metaprogramming techniques and completely different uses. One of the reasons I’m so excited about this project is to see what interesting things people do, once their code is in a strong-typed form. At the very least, I think it will be an approach to visualizations and ensuring your code follows expected patterns. It will be better at ensuring large scale patterns than code analysis rules. Whew! So much fun work to do!!!

Strong-typed Metadata in a T4 Template

Here’s a sample of strong typing in a T4 template

 
[Screenshot: a T4 template whose generated class uses CodeFirstSemanticLog as its type argument]



There’s some gunk at the top to add some assemblies and some using statements for the template itself. The important piece at the top is that the class created by this template is a generic type with a type argument – CodeFirstSemanticLog – that is a strong-typed metadata class. Thus the Meta property of the CodeFirstT4CSharpBase class is a SemanticLog class and understands concepts specific to the SemanticLog, like IncludesInterface. I’ve removed a few variable declarations that are specific to the included T4 files.

Did No One Count?

This is embarrassing, although I can explain, really officer. I wasn’t drinking, it just looked that way.

I put up a Five Levels of Code Generation and it contained these bullet points:

  • Altering code for the purpose of machines
    The path from human readable source code to machine language
  • DSL (Domain Specific Language)
    Changing human instructions of intent, generally into human readable source code
  • File Code Generation
    Creating human readable source code files from small metadata, or sometimes, altering those files
  • Application code generation or architecture expression
    Creating entire systems from a significant body of metadata

See the problem?

Now, if you read the post, you might see what I intended. You might realize that I was in the left turn lane, realized I needed something at the drugstore on the right, didn’t realize the rental car needed to have its lights turned on (mine doesn’t) on a completely empty road at midnight in a not-great neighborhood in Dallas. Really, officer, I haven’t been drinking, I’m just an idiot.

There’s five because I make a split in the first item: That was partially because that post was inspired by confusion regarding what RyuJIT means to the future of .NET. (It actually means, and only means, that your programs will run better/faster in some scenarios).

The code you write becomes something, and then it becomes native code. That “something” for us has been IL, but might be a different representation. One reason for the distinction is that there are entirely separate teams that think about different kinds of problems working on compilers and native code generation. IL is created by a compiler that specializes in parsing, symbol resolution and widely applicable optimization. Native code is created in a manner specific to the machine where it will be run. In the big picture, this has been true since .NET was released and it’s a great design.

I think language compilation and native code creation are two distinct steps. One is all about capturing the expressive code you write, and the other is all about making a device work based on its individual operating system APIs.

But I might be wrong. I might be wrong because the increasing diversity in our environments means implications of native code API’s on the libraries you use (PCL). I might be wrong because languages like JavaScript don’t use IL (although minification is not entirely different). I might be wrong because it’s only the perspective of the coder that matters, and the coder rarely cares. I might be wrong because I’m too enamored with the amazing things like multi-core background JIT and MPGO (you can see more in the Changes to the .NET Framework module of my What’s New in .NET 4.5 Pluralsight course).

The taxonomy of code generation will shape the upcoming discussions Roslyn will inspire about metaprogramming. Metaprogramming encompasses only the DSL/expansion, file, and architecture levels.

You might be rolling your eyes like the officer handing me back my license in Dallas. Yes, officer. You’re absolutely right. If I titled the post “Five Levels of Code Generation” I should have had FIVE bullet points.

The Sixth Level of Code Generation

I wrote here about the five levels I see in code generation/meta-programming (pick your favorite overarching word for this fantastically complex space).

I missed one level in my earlier post. There are actually (at least) six levels. I missed the sixth because I was thinking vertically about the application – about the process of getting from an idea about a program all the way to a running program. But as a result I missed a really important level, because it is orthogonal.

Side note: I find it fascinating how our language affects our cognition. I think the primary reason I missed this orthogonal set is my use of the word “level” which implied a breakdown in the single dimension of creating the application.

Not only can we generate our application, we can generate the orthogonal supporting tools. This includes design-time deployment (NuGet, etc), runtime deployment, editor support (IntelliSense, classification, coloration, refactorings, etc.), unit tests and even support for code generation itself – although the last might feel a tad too much like a Mobius strip.

Unit tests are perhaps the most interesting. Code coverage is a good indicator of what you are not testing, absolutely. But code coverage does not indicate what you are testing and it certainly does not indicate that you are testing well. KLOC (lines of code) ratios of test code to real code are another indicator, but still a pretty poor one, and they still fail to use basic boundary-condition understanding we’ve had for, what, 50 years? And none of that leverages the information contained in unit tests to write better library code.

Here’s a fully unit tested library method (100% coverage) where I use TDD (I prefer TDD for libraries, and chaos for spiky stuff which I later painfully clean up and unit test):

public static string SubstringAfter(this string input, string delimiter)
{
   var pos = input.IndexOf(delimiter, StringComparison.Ordinal);
   if (pos < 0) return "";
   return input.Substring(pos + 1);
}




There are two bugs in this code.



Imagine for a minute that I had not used today’s TDD, but had instead interacted with, say, a dialog box (for simplicity). And for fun imagine it also allowed easy entry of XML comments; this is a library after all.



Now, imagine that the dialog asked about the parameters. Since they are strings – what happens if they are null or empty, is whitespace legal, is there an expected RegEx pattern, and are there any maximum lengths – a few quick checkboxes. The dialog would have then requested some sample input and output values. Maybe it would even give a reminder to consider failure cases (a delimiter that isn’t found in the sample). The dialog then evaluates your sample input and complains about all the boundary conditions you overlooked that weren’t already covered in your constraints. In the case above, that the delimiter is not limited to a length of one and I didn’t initially test that.



Once the dialog has gathered the data you’re willing to divulge, it looks for all the tests it thinks you should have, and generates them if they don’t exist. Yep, this means you need to be very precise in naming and structure, but you wanted to do that anyway, right?



Not only is this very feasible (I did a spike with my son and a couple of conference talks about eight years ago), but there are also very interesting extensions in creating random sample data – at the least to avoid unexpected exceptions in side cases. Yes, it’s similar to PEX, and blending the two ideas would be awesome, but the difference is your direct up-front guidance on expectations about input and output.



The code I initially wrote for that simple library function is bad. It’s bad code. Bad coder, no cookies.



The first issue is just a simple, stupid bug that the dialog could have told me about in evaluating missing input/output pairs. The code returns the wrong answer if the length of the delimiter is greater than one, and I’d never restricted the length to one. While my unit tests had full code coverage, I didn’t test a delimiter longer than one character and thus had a bug.



The second issue is common, insidious, and easily caught by generated unit tests. What happens if the input string or delimiter is null? Not only can this be caught by unit tests, but it’s a straightforward refactoring to insert the code you want into the actual library method – assertion, exception, or automatic return (I want null returned for null). And just in case you’re not convinced yet, there’s also a fantastic opportunity for documentation – all that stuff in our imagined dialog belongs in your documentation. Eventually I believe the line between your library code, unit tests and documentation should be blurry and dynamic – so don’t get too stuck on that dialog concept (I hate it).
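To make that concrete, here is a sketch (xUnit-style, not generated by any existing tool) of the two tests the imagined dialog would have produced. Both fail against the method as written above:

using Xunit;

public class SubstringAfterTests
{
   [Fact]
   public void Handles_delimiter_longer_than_one_character()
   {
      // The buggy version advances by 1 instead of delimiter.Length and returns ">b".
      Assert.Equal("b", "a=>b".SubstringAfter("=>"));
   }

   [Fact]
   public void Returns_null_for_null_input()
   {
      // The buggy version throws a NullReferenceException instead.
      string input = null;
      Assert.Null(input.SubstringAfter(","));
   }
}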



To straighten one possible misconception in the vision I’m drawing for you, I am passionately opposed to telling programmers the order in which they should do their tasks. If this dialog is only available before you start writing your method – forget it. Whether you do TDD or spike the library method, whether you make the decisions (filling in the imagined dialog) up front or are retrofitting concepts to legacy code, the same process works.



And that’s where Roslyn comes in. As I said, we abandoned the research on this eight years ago because it increased the surface area of what it takes to write an app and required too much work in a specific order (among other reasons). Roslyn changes the story because we can understand the declaration, XML comments, the library method, the unit test name and attributes, and the actual code in the method and unit test without doing our own parsing. This allows the evaluation to be done at any time.



That’s just one of the reasons I’m excited about Roslyn. My brain is not big enough to imagine all the ways that we’re going to change the meaning of programming in the next five years. Roslyn is a small, and for what it’s worth highly imperfect, cog in that process. But it’s the cog we’ve been waiting for.

Five Levels of Code Generation

NOTE 31 Jan, 2014: I discussed a sixth level in this post http://bit.ly/1ih3vL5.
NOTE 8 Feb 2014: I discussed why there are four, not five, bullet points in this post
http://bit.ly/1cf4Pcu


I want to clarify the five levels of code generation because there’s recently been some confusion on this point with the RyuJIT release, and because I want to reference it in another post I’m writing.


Code generation can refer to…


- Altering code for the purpose of machines
The path from human readable source code to machine language



- DSL (Domain Specific Language)
Changing human instructions of intent, generally into human readable source code


- File Code Generation
Creating human readable source code files from small metadata, or sometimes, altering those files


- Application code generation or architecture expression
Creating entire systems from a significant body of metadata


Did I leave anything out?


The increasing size and abstraction of each level means we work with it fundamentally differently.


We want to know as little as possible about the path from human readable to machine readable. Just make it better and don’t bother me. The step we think about here is the compiler, because we see it. The compiler creates MSIL, an intermediate language. There’s another step of going from IL to native code, and there’s an amazing team at Microsoft that does that – it happens to be called the code gen team inside of the BCL/CLR team. That’s not what I mean when I say code generation.


The phrase Domain Specific Language means many things to many people. I’m simplifying it to a way to abstract sets of instructions. This happens close to the point of application development – closer than a language like C#. As such, there is a rather high probability of bugs in the expression of the DSL – the next step on the path to native code. Thus most DSLs express themselves in human readable languages.


File code generation is what you almost certainly mean when you say “code generation”. Give me some stuff and make a useful file from it. This is where tools like T4, Razor, and the Visual Studio Custom Tools feature are aimed. And that’s where my upcoming tool is aimed.


Architecture expression may be in its infancy, but I have no doubt it is what we will all be doing in ten years. There’s been an enormous logjam at the 3rd Generation Language (3GL) phase for some very understandable reasons. It’s an area where you can point to many failures. The problem is not in the expression – the problem is in the metadata. It’s not architecture expression unless you can switch architectures – replace what you’re currently doing with something else entirely. That requires a level of metadata understanding we don’t have. It also requires architectures that better isolate the code that won’t fit into the metadata format – architectures we have and don’t use.


RyuJIT is totally and completely at the first level. It’s a better way to create native code on a 64 bit computer that means compiling your app to 64 bit should no longer condemn it to run slower than its 32 bit friends. That’s a big deal, particularly as we’re shoved into 64 bit because of side cases like security cryptography performance.


RyuJIT is either the sixth or seventh different way your IL becomes native code. I bet you didn’t know that. It’s a huge credit to the team that they’ve integrated improved approaches and you don’t even need to know about them. (Although, if you have startup problems in non-ASP.NET applications, explore background JIT and MPGO, as well as NGen for simple cases).


The confusion when RyuJIT was released was whether it replaced the Roslyn compilers. The simple answer is “no.” Shout it from the rooftops. Roslyn is not dead. But that’s a story for another day.

Plain Old Objects and MEF

After my MEF presentation at the Hampton Roads .NET User Group someone asked me about creating objects like customers and invoices via MEF. I gave an overly quick answer to a really good question.


A lot of the IoC history involves using dependency injection for services. This is great partly because it’s a framework to isolate plain old objects from services, and services from each other. Like many of the techniques we’ve adopted in the agile timeframe, it’s not just what the technique does for us, but what the technique does to us. That’s the quick answer I gave.


But, we can go further to fully composed systems. Fully composed systems have offered mind boggling benefits in some places they’ve been tried, and they haven’t been tried in very many places yet. This is why I have such a high fascination with NetKernel and the work people like Randy Kahle (@RandyKahle) and Brian Sletten (@bsletten) are doing. And that work is similar to work Juval Lowy and I have talked about for a number of years.


However, fully composed systems with MEF, and I assume other DI tools (although I’ll be happy to be proven wrong), are hard. Without the infrastructure of something like NetKernel there’s a fair amount of work to do, and without the caching benefits of NetKernel it’s going to be tough to justify. It’s hard because everything needs an interface. Everything. And even if you generate the interfaces and plain objects, the level of infrastructure ceremony gets very unwieldy. At least that’s my experience from using MEF to wrap everything (yes, everything) in interfaces, in order to create a fully composed MEF 1.0 system.


We could go a slightly different direction. Put everything into the container, but place plain old objects in the container as their own type, rather than via an interface. Plain old objects in this sense are objects where we can’t imagine a scenario in which they’ll be reused, and they have a unique, generally multi-valued interface. A customer or invoice POCO would be examples.
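In MEF attribute terms, that looks roughly like this sketch (Customer and InvoiceProcessor are illustrative names, not code from a real system):

using System.ComponentModel.Composition;

[Export]   // exported under its own concrete type; no ICustomer interface required
public class Customer
{
   public string Name { get; set; }
}

public class InvoiceProcessor
{
   [Import]   // satisfied from the container by the concrete Customer type
   public Customer Customer { get; set; }
}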


Placing these objects into the container offers the immediate benefit of guaranteeing their usage is isolated. We take advantage of what DI does to us, not just for us.


And if we use inference in MEF 2.0 (.NET 4.5), and probably configuration techniques with other IoC containers, we can stick another object in if we have a later reason to do it.
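A sketch of that MEF 2.0 inference, using convention-based registration (RegistrationBuilder, new in .NET 4.5); the naming convention here is made up for illustration:

using System.ComponentModel.Composition.Hosting;
using System.ComponentModel.Composition.Registration;
using System.Reflection;

// Types whose names end in "Poco" are exported as their own type, so a
// different type can be registered later without touching the classes.
var conventions = new RegistrationBuilder();
conventions.ForTypesMatching(t => t.Name.EndsWith("Poco"))
           .Export();

var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly(), conventions);
var container = new CompositionContainer(catalog);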


But here’s where strong typing bites us. Any new class that ever replaces that plain old object (the customer or invoice) has to be assignable to that class. That means it has to be that class or a class that derives from it. I’m still hanging on to the strong typing life boat because I still feel that without it I’m in a North Atlantic storm. For big systems, I think that’s still true, and while I put a lot of thought into making big systems into smaller systems, non-typed based DI is still a jump into icy water for me.


With the plain object in the container under its own type, if I get blindsided with a requirement that just doesn’t fit, I can manage: I just have to write a wrapper for the non-derived object, and the wrapper has to derive from the expected type. Ugly, but workable.


What I want to do is experiment with strongly typed systems with generated interfaces. I’ve already done this with traditional generation, and I want a solution that is cleaner than that. I don’t have the syntax worked out, but imagine that we never create the interface for our plain old object, we just tell the system to do it. The container uses the interface, all using objects request the object by its interface, and we humans can ignore it.


Until the day the plain old object needs attention. On that day, we make the interface explicit and do whatever we need to do.


But with the versions of .NET we have today, we can’t build this.

MEF Assembly Granularity

I’ve been contemplating how to organize MEF assemblies. I think the process I went through establishing the first cut at organization, and the shakedown of that strategy, may be interesting to other people designing MEF systems.


As a quick review, MEF lets you throw parts into a MEF container and sort out how parts work together at runtime. Parts are recognized by a string identifier. I’m almost always using interfaces as the contract and the interface names as the identifiers. Parts reside within assemblies and in the common case assemblies are discovered because they are grouped into anticipated directories.
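Discovery by directory looks roughly like this sketch (the directory names and the ITemplate contract are illustrative only):

using System.ComponentModel.Composition.Hosting;

// Parts are found in assemblies dropped into the anticipated directories.
var catalog = new AggregateCatalog(
    new DirectoryCatalog(@".\MetadataProviders"),
    new DirectoryCatalog(@".\Templates"));

using (var container = new CompositionContainer(catalog))
{
   // Consumers ask for parts by contract; here the contract is the interface type.
   var template = container.GetExportedValue<ITemplate>();
}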


With this approach, only the part implementing the interface and the part that is using the interface need to understand the interface or explicitly reference the interface’s assembly. And since parts are discovered and loaded at an assembly level, the granularity of implementing assemblies also controls the granularity of the load. I care about the assembly granularity of contract/interface assemblies so excess stuff can be avoided and naming conflicts (resolved via namespaces) are minimized. I care about the granularity of implementation assemblies because until I attain a priority system with additional granularity, prioritization/defaults are only as granular as their containing assemblies.


At one extreme, all interfaces reside in one assembly and all implementations reside in another. It doesn’t make sense to put them into the same assembly, because then hard-coded references exist and ensuring isolation is difficult. At the other extreme, every interface and every implementation resides in its own assembly. I think both of these extremes are a terrible solution. That’s because this composable system (and I would think any composable system) has parts with very different roles and lineages/history. In the simplest sense for a generator – metadata providers and templates are fundamentally different and could easily be provided by different teams.


Initially I thought the primary consideration should be the implementation deployment, but Phil Spidey pointed out in the MEF discussions that the interface organization is more important, because once released to the wild it might be hard to fix.


I decided on six contract assemblies:


  • CommonContracts – Interfaces referenced by the template harness itself
  • CommonDatabaseMetadataContracts – Interfaces sharing database structure
  • CommonDomainMetadataContracts – Interfaces sharing business object structure
  • CommonNamingServiceContracts – Interfaces for a naming service
  • CommonOutputServiceContracts – Interfaces for outputting data, including hashing
  • CommonServiceContracts – Miscellaneous interfaces that don’t fit elsewhere



I’ve used a few criteria for this design:


Interfaces that are used by the system and therefore can’t easily be changed reside together in CommonContracts. The template harness also references CommonOutputServiceContracts but this is in a separate assembly because it has a distinct purpose, may evolve on a different time frame and you are far more likely to provide alternate implementations for output than for the core interfaces.


The naming service is also a separate assembly because it has a distinct purpose and some people will certainly supply alternate implementations to manage human languages other than US English. Both the output service and naming service are a few distinct interfaces that work together. I also had a few oddball interfaces and decided to go with a grab bag of miscellaneous interfaces rather than a separate assembly for each interface. Time will tell whether that is a good decision.


I initially put the two metadata interfaces into a single assembly, but I think it’s quite likely that these interfaces will evolve separately and almost certain that they will be implemented independently.


I’d like to note that the first version of the harness, which is almost, almost done (a separate blog post), will be a CTP/alpha-level release. I will take feedback on the interfaces and I do expect them to change. A core part of the composable design is that you can spin off your interfaces/implementations, so while these changes will be breaking, you can take them up at your own pace.