
Semantic Tracing


I talk about semantic tracing in my Pluralsight course Event Tracing for Windows (ETW) in .NET. Here’s a summary.

The Semantic Logging Application Block (SLAB) documentation discusses semantic tracing here. This is great documentation, but obscures the fact that semantic tracing can be done on any platform in any language. It’s just a style of tracing.

You create a class, or a very small number of classes, dedicated to tracing (pseudo-code):

public class Logger
{
    public static Logger Logger = new Logger();
    public void AccessByPrimaryKey(int PrimaryKey, string TableName) { ... }
    public void CalculateGrowthRateStart(int InitialPopulation) { ... }
    public void CalculateGrowthRateEnd() { ... }
    // Numerous methods
}


Now tracing just involves knowing the logger class name and typing:



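A minimal sketch of what such a call site looks like – the class and names here are illustrative, not from the course (the static field is named `Log` for brevity):

```csharp
using System;

// A minimal sketch of the semantic logger from the pseudo-code above.
public class Logger
{
    public static readonly Logger Log = new Logger();

    public void AccessByPrimaryKey(int primaryKey, string tableName) =>
        Console.WriteLine($"AccessByPrimaryKey: {primaryKey} on {tableName}");
}

public static class OrderRepository
{
    public static void Find(int id)
    {
        // The trace call: the logger class name plus IntelliSense gets you here.
        Logger.Log.AccessByPrimaryKey(id, "Orders");
    }
}
```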
With IntelliSense at every step this is about a dozen keystrokes. The only specific knowledge you need is the name of the Logger class – and with Visual Studio IntelliSense, you’ll get that easily if you include “Logger” anywhere in the name. Just imagine how simple that is!


Discoverability and easy access are just two of the benefits of this style of logging.

The call to the trace method states what is happening and helps to document the calling source code. This is especially true if you use named parameters.

The resulting trace output is strongly typed and easily filtered and sorted on specific named parameter values.

Strongly typed parameters improve performance by avoiding string creation and avoiding boxing on any platform that has cheap stacks and expensive heap memory usage (such as .NET).

All access to the underlying tracing technology is isolated. Stuck with TraceSource now, but worried about future performance issues? Move to semantic tracing, and the future change to use EventSource and ETW will involve a single file. Any change that doesn’t involve the signatures, even a technology change, is isolated to a single class.
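A sketch of that isolation, with illustrative names: the TraceSource details live only inside the Logger, so a later switch to EventSource is confined to this one file.

```csharp
using System.Diagnostics;

// Callers know only semantic method names; the trace technology hides in here.
public class Logger
{
    public static readonly Logger Log = new Logger();

    // Today: TraceSource. A future move to EventSource/ETW changes only this class,
    // because no call site mentions the underlying technology.
    private static readonly TraceSource trace = new TraceSource("MyApp");

    public void AccessByPrimaryKey(int primaryKey, string tableName) =>
        trace.TraceEvent(TraceEventType.Information, 1,
            "Access by primary key {0} on table {1}", primaryKey, tableName);
}
```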

Another implication of isolated tracing is that details can be added and removed as they are available via the underlying technology – things like thread and user.

Details about the trace are also consistent. The calling code is unconcerned about issues like level (error, warning, information), channel, and keywords. These details can be changed at any point in the application life cycle.

The result is a trace strategy that is clear and consistent.

ETW, EventSource and SLAB

Both EventSource and SLAB encourage semantic tracing, assuming you avoid creating generic trace methods like “Log” and “Warning.” Semantic logging involves giving a nice name to the method, along with strongly typed parameters.

You might want vague methods to keep you going one afternoon when you encounter something new you want to trace, but use them as a temporary measure.

Whether a particular event is a warning, information, or an error lies in the nature of the action, not in the nature of a particular occurrence of the action – otherwise it’s not very well defined, and it’s not semantic.


One of the most important tools in improving your trace performance is semantic tracing. Simply put, it means that all trace calls are to a common class, or small set of classes, that have methods indicating each specific thing you will trace along with strongly typed parameters. The trace method and each trace parameter has a very clear name allowing IntelliSense to guide correct calls.

The Case of the Terrible, Awful Keyword

In the next version of C# there will be a feature with a name/keyword you will probably hate.

The thread on CodePlex is even named “private protected is an abomination.”

This is the story of that name and what you can do to help get the best possible name.

The feature and why we don’t already have it

C# has a feature called protected internal. Protected internal means that the member is available to any code in the same assembly (internal) and is also available to code in any derived class (protected). In the MSIL (Intermediate Language), this is displayed as famorassem (family or assembly).

MSIL also supports famandassem (family and assembly) which allows access only from code that is in a derived class that is also in the current assembly.

Previously, every time the team has considered adding this feature in C#, they’ve decided against it because no one could think of a good name.

For the next version of C#, the team decided to implement this feature, regardless of whether they could come up with a suitable name. The initial proposal by the team was “private protected.” Everyone seems to hate that name.
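To make the distinction concrete, here is a sketch with hypothetical class names, using the proposed “private protected” spelling (so it only compiles where the feature exists):

```csharp
public class Node
{
    protected internal int Id;        // famorassem: derived classes OR same assembly
    private protected int CacheSlot;  // famandassem: derived classes AND same assembly
}

// In the same assembly, a derived class can see both members.
public class LocalDerived : Node
{
    public int ReadSlot() => Id + CacheSlot;
}

// In another assembly, a derived class could still read Id (protected internal),
// but reading CacheSlot would be a compile error.
```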

The process

One of the great things about this point in language design is that the process is open. It opens to insiders like MVPs a bit earlier – which reduces public chaos – but the conversation is public while there’s still room for change.

In this case, the team decided on a name (private protected) and the outcry caused the issue to be reopened. That was great, because it allowed a lot of discussion. It seems clear that there is no obvious choice.

So the team took all the suggestions and made a survey. Lucian was conservative about removing possible joke keywords – if someone might have intended it seriously, it’s in the survey.

How you can help

Go take the survey! You get five votes, so it’s OK to be a bit uncertain.

If you hate them all, which one annoys you least?

Do you think we need a variation of the IL name familyandassembly?

Do you think we need to include the names internal and/or protected?

Will people confuse the English usage and the bit operation?

Will people confuse whether the specified scope is the access or restriction (inclusion or exclusion)?

Should the tradition of all lower case in C# be broken?

Do we need a new keyword?

Is there value in paralleling VB?

Note: In the VB language design meeting on this topic (VB LDM 2014-03-12), we chose to add two new keywords "ProtectedAndFriend" and "ProtectedOrFriend", as exact transliterations of the CLI keywords. This is easier in VB than in C# because of VB’s tradition of compound keywords and capitalizations, e.g. MustInherit, MustOverride. Lucian Wischik [[ If C# parallels, obviously Friend -> internal ]]

I don’t think there’s a promise that the elected name will be the one chosen, but the top chosen names will weigh heavily in the decision.

Go vote, and along the way, some of the suggestions are likely to bring a smile to your face.

Should the feature even be included

There are two arguments against doing the feature. On this, I’ll give my opinion.

If you can’t name a thing, you don’t understand it. Understand it before including it.

This was a good argument the first time the feature came up. Maybe even the second or third or fourth or fifth. But it’s been nearly fifteen years. It’s a good feature and we aren’t going to find a name that no one hates. Just include it with whatever name.

Restricting the use of protected to the current assembly breaks basic OOP principles

OK, my first response is “huh?”

One of the core tenets of OOP is encapsulation. This generally means making a specific class a black box. There’s always been a balance between encapsulation and inheritance – inheritance breaks through the encapsulation on one boundary (API) while public use breaks through it on another.

Inheritance is a tool for reusing code. This requires refactoring code into different classes in the hierarchy and these classes must communicate internal details to each other. Within the assembly boundary, inheritance is a tool for reuse – to be altered whenever it’s convenient for the programmer.

The set of protected methods that are visible outside the assembly is a public API for the hierarchy. This exposed API cannot be changed.

The new scope – allowing something to be seen only by derived members within the same assembly – allows better use of this first style of sharing. To do this without the new scope requires making members internal; internal is more restrictive than protected. But marking members internal gives the false impression that it’s OK for other classes in the assembly to use them.

Far from breaking OOP, the new scope allows encapsulation of the public inheritance API away from the internal mechanics of code reuse convenience. It can be both clear to programmers and enforced that one set of members is present for programming convenience and another set for extension of class behavior.

Multiple Versions of EventSource and One Version of SLAB

There are several versions of EventSource. .NET 4.5 has the initial version, which is improved in .NET 4.5.1. You want to use the 4.5.1 version if you possibly can; the diagnostics, in particular, are significantly better.

The CLR team has also done a pre-release version of EventSource on NuGet. This version has a couple of important features, including even better diagnostics and Windows Event Viewer support. That’s right, you have to use the latest and greatest NuGet version if you have events you want admins to see in EventViewer.

Switching from EventSource in .NET 4.5.1 to the NuGet version of EventSource just requires changing the references and namespaces.
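Concretely, the swap is little more than adding the package and changing a using directive (package and namespace names as published on NuGet, to the best of my knowledge):

```csharp
// .NET 4.5.1 built-in EventSource:
using System.Diagnostics.Tracing;

// NuGet EventSource (package Microsoft.Diagnostics.Tracing.EventSource):
// the type names stay the same, so the change is the reference plus this directive:
// using Microsoft.Diagnostics.Tracing;
```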

The Patterns and Practices team provides SLAB (Semantic Logging Application Block) that uses EventSource extensibility to offer a more familiar experience. In particular, SLAB offers traditional listeners so you can avoid ETW complexities during development. At runtime, you probably want ETW performance, but the tools still stink and you probably don’t want to use them during development.

The P&P team thinks all this multiple version of EventSource stuff is confusing. Well, they are right, it is confusing.

Rather than offer a version of SLAB for the NuGet version of EventSource, the P&P team decided to stick with a single version of SLAB that targets the .NET 4.5.1 version of EventSource.

If you want a version of SLAB that targets the NuGet version of EventSource, download the SLAB source code and make the same changes you’d make to your own code – change the references and namespaces. Here Grigori Melnik says SLAB 1.1 will improve the test suite to better support this modification.

Some ETW Basics

I wrote a summary of ETW here. I’ll be posting more to coincide with my new Pluralsight course on ETW tracing.

This is just a quick post so I can link back to these basics.

Tracing is what you include in your code so you know what’s happening during development and more importantly in production. Supporting players like .NET and the operating system also have trace output.

ETW stands for Event Tracing for Windows and is the tracing mechanism built into the Windows operating system.

ETW has controllers that turn tracing on and off, providers that create trace event entries, and consumers that let you view traces. Managing ETW tracing requires tools.

.NET provides good ETW support starting with .NET 4.5 with the EventSource class. This creates an ETW provider. You need external tools (not yet available in Visual Studio) to create and view traces.

ETW tracing with the EventSource class is fundamentally different than Trace and TraceSource. It is a different pipeline, configured with ETW tools, and can be turned on and/off without restarting your app.
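A minimal EventSource-based provider might look like this; the provider name and event are illustrative:

```csharp
using System.Diagnostics.Tracing;

// Deriving from EventSource creates an ETW provider named "MyCompany-MyApp".
[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    // Event id 1; the WriteEvent call must pass the same id and parameters.
    [Event(1, Level = EventLevel.Informational)]
    public void RequestReceived(string url) { WriteEvent(1, url); }
}
```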

ETW does the slow part of tracing on a separate thread. Trace, TraceSource, Log4Net and most other trace solutions do not do this, or not with the massive efficiency and blazingly fast performance of ETW.

Semantic tracing logically decouples from trace technology, isolates trace decisions like severity, is strongly typed (at the app), and states what is happening in the app, not details of what the trace should say.

Semantic tracing can be done with any trace technology, allowing you to switch later to ETW if you can’t do it now.

The Semantic Logging Application Block (SLAB from P&P) uses EventSource, avoids ETW tools during development, eases the transition to ETW, and has docs on using a semantic approach to tracing.

MEF and Startup Time

I got this question via email:

I am trying to determine why an application (I didn’t initially design it) that uses MEF and PRISM is slow (example: it takes more than 1 minute to login).  When I looked at the code I noticed the following:

1. Many of the classes that are decorated with the [Export] attribute and have the [ImportingConstructor] attribute on their constructors do significant work in their constructors (large queries that take several seconds to finish, calls to services, etc.).

2. MEF instantiates the above classes when the application starts.

When MEF instantiates these classes at start time, shouldn’t the constructors of these classes (i.e., the ones with the [ImportingConstructor] attribute) be as simple as possible? That is, avoid doing any lengthy operations so as to minimize the time it takes for MEF to finish instantiating all the classes that participate in composition?

Of course you don’t want your application doing all this work at startup and making your user wait! It’s great that you tracked the performance problem to something that is fixable.

There are a couple of parts to the answer.

Simplifying importing constructors

Importing constructors should be as simple as possible. That’s not just for performance. When things go bad in the creation of classes in MEF (or pretty much any DI tool), sorting out the problem can be a nightmare. Put more simply: do what you can to ease debugging object construction, and that means simplifying constructors.

In most cases, the easiest way is to create properties for the result of each query and have the query performed the first time it’s needed – a simple check against null, followed by the query only when needed. Depending on the complexity of your application and the number of times this occurs, this is either an easy or a rather tedious refactoring.
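A sketch of that refactoring, with hypothetical names – the query moves out of the importing constructor into a property that runs it on first access:

```csharp
using System.Collections.Generic;

public class CustomerScreen
{
    private IList<string> customerNames;

    // The constructor stays trivial, which keeps MEF composition fast and debuggable.
    public IList<string> CustomerNames
    {
        get
        {
            if (customerNames == null)
                customerNames = LoadCustomerNames(); // formerly done in the constructor
            return customerNames;
        }
    }

    // Stand-in for the real multi-second query or service call.
    private IList<string> LoadCustomerNames() => new List<string> { "Ada", "Grace" };
}
```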

Delayed instantiation

The second point is that there is no inherent reason for MEF to be instantiating everything at startup. MEF instantiates classes when they are needed and something is requesting an instance of this class. These chains can become rather long and complicated when an instance of one class needs an instance of another, and another, and another. This is a general problem for DI containers.

I don’t know if this is happening inside PRISM or in code that you might have more control over, but this can happen in UI code where one screen or menu requires a reference to another, and another, and another.

It is possible that when you solve the problem with the importing constructors, you will still have a performance problem because of excessive linking – too many instances created. It will still be a design problem, and still not a MEF problem.

If this is deep in PRISM, I don’t know what to say, I don’t know PRISM.

If you encounter this in code you control, look at the Lazy&lt;T&gt; class. Yep, there’s a Lazy class in .NET; I love that. While it’s slightly more difficult to use, it instantiates a simple lightweight object that has the capacity to create the actual instance the first time you need it. This allows you to delay creation until you actually need it. Once the underlying instance is created, all requests for the value of the Lazy return that first instance – no further instances are created.
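A quick sketch (ExpensiveReport is hypothetical):

```csharp
using System;

public class ExpensiveReport
{
    public ExpensiveReport() { /* the slow construction work happens here */ }
}

public static class LazyDemo
{
    public static bool Run()
    {
        // Cheap to create; the real instance isn't built until .Value is read.
        var report = new Lazy<ExpensiveReport>(() => new ExpensiveReport());

        bool before = report.IsValueCreated;               // false: nothing built yet
        var first = report.Value;                          // instance created here
        bool same = ReferenceEquals(first, report.Value);  // true: always that instance

        return !before && report.IsValueCreated && same;
    }
}
```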

MEF has other features for controlling lifetime, but they are more commonly used when you need additional instances in specific scenarios.

Other possible problems

A minute is a really long time in our world. More than a minute to login is a really, really long time.

It’s possible that you have additional issues. I suggest that you also check for pooling issues if you’re directly accessing a database (such as creating excess connections). Several seconds for a query against a local database is fairly long and you may also have indexing issues.

If the queries really are that long, and when you are making calls to services, your users may still feel pain, even if it happens at a different point in the application. You may need to shift to asynchronous calls for some of this work. I know async still feels like a high dive with no water in the pool, but it’s where we’re slowly going.

If this is a WPF application, or another application that loads a lot of assemblies (DLLs) and that has a predictable start-up sequence, explore multi-core JIT. If you have access to Pluralsight, you can find a discussion here in the clip titled “Multi-core background JIT.” It’s almost unknown, but you can get a significant boost in your assembly load time by adding a single line of code. If you don’t have other problems, this can improve overall startup times by about 20%.

It’s almost comical, but that one line of code is the name of a file. No matter how much we want this feature to just work, .NET can’t do something to the target machine without permission. You have to tell it where to put the information it uses to orchestrate JIT. That is, unless you’re using ASP.NET. Since ASP.NET is in the habit of putting files on its server, multi-core background JIT is automatically enabled for ASP.NET.
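The API is System.Runtime.ProfileOptimization; strictly it’s two calls, because .NET first needs a writable folder for the profile file. The folder and file names below are illustrative:

```csharp
using System.IO;
using System.Runtime;

public static class Startup
{
    public static void ConfigureBackgroundJit()
    {
        // Tell .NET where it may write the JIT profile between runs
        // (normally an app-specific folder; temp is used here just for the sketch).
        ProfileOptimization.SetProfileRoot(Path.GetTempPath());

        // The famous "one line": record the startup JIT profile and replay it
        // across multiple cores on subsequent runs.
        ProfileOptimization.StartProfile("MyApp-Startup.profile");
    }
}
```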

Your problem isn’t MEF, but…

I appreciate the question because you aren’t blaming MEF for other design problems. However, in our current server focused world, I want to point out that classic MEF is a little heavy. It is an amazingly powerful tool, but on a server with throughput issues and on a low power device it may not be the best tool.

Microsoft has also provided a different lightweight version of MEF. It’s had at least three names, and is currently on NuGet as MEF 2.

If you read my blog, you may have gathered between the lines that I think the core CLR teams are doing absolutely amazing, astounding things, and that their communication with the community – their ability to toot their own horn – sucks. You didn’t know one line of code could improve your app load time 20%, right? One way that communication sucks is a lack of clarity on the future of MEF. It is inconceivable to me that a tool as important as MEF could not have a long and bright future in both the rich and lightweight versions. But I wish the silence around MEF was not so loud.

I hope you’ll add to the comments what you tried that worked and didn’t work!

The Sixth Level of Code Generation

I wrote here about the five levels I see in code generation/meta-programming (pick your favorite overarching word for this fantastically complex space).

I missed one level in my earlier post. There are actually (at least) six levels. I missed the sixth because I was thinking vertically about the application – about the process of getting from an idea about a program all the way to a running program. But as a result I missed a really important level, because it is orthogonal.

Side note: I find it fascinating how our language affects our cognition. I think the primary reason I missed this orthogonal set is my use of the word “level” which implied a breakdown in the single dimension of creating the application.

Not only can we generate our application, we can generate the orthogonal supporting tools. This includes design-time deployment (NuGet, etc), runtime deployment, editor support (IntelliSense, classification, coloration, refactorings, etc.), unit tests and even support for code generation itself – although the last might feel a tad too much like a Mobius strip.

Unit tests are perhaps the most interesting. Code coverage is a good indicator of what you are not testing, absolutely. But code coverage does not indicate what you are testing, and it certainly does not indicate that you are testing well. KLOC (lines of code) ratios of test code to real code are another indicator, but still a pretty poor one, and they still fail to use basic boundary-condition understanding we’ve had for, what, 50 years? And none of that leverages the information contained in unit tests to write better library code.

Here’s a fully unit tested library method (100% coverage) where I use TDD (I prefer TDD for libraries, and chaos for spiky stuff which I later painfully clean up and unit test):

public static string SubstringAfter(this string input, string delimiter)
{
    var pos = input.IndexOf(delimiter, StringComparison.Ordinal);
    if (pos < 0) return "";
    return input.Substring(pos + 1);
}


There are two bugs in this code.

Imagine for a minute that I had not used today’s TDD, but had instead interacted – with say… a dialog box (for simplicity). And for fun imagine it also allowed easy entry of XML comments; this is a library after all.

Now, imagine that the dialog asked about the parameters. Since they are strings – what happens if they are null or empty, is whitespace legal, is there an expected RegEx pattern, and are there any maximum lengths – a few quick checkboxes. The dialog would have then requested some sample input and output values. Maybe it would even give a reminder to consider failure cases (a delimiter that isn’t found in the sample). The dialog then evaluates your sample input and complains about all the boundary conditions you overlooked that weren’t already covered in your constraints. In the case above, that the delimiter is not limited to a length of one and I didn’t initially test that.

Once the dialog has gathered the data you’re willing to divulge, it looks for all the tests it thinks you should have, and generates them if they don’t exist. Yep, this means you need to be very precise in naming and structure, but you wanted to do that anyway, right?

Not only is this very feasible (I did a spike with my son and a couple of conference talks about eight years ago), but there are also very interesting extensions in creating random sample data – at the least to avoid unexpected exceptions in edge cases. Yes, it’s similar to PEX, and blending the two ideas would be awesome, but the difference is that you’re giving direct up-front guidance on expectations about input and output.

The code I initially wrote for that simple library function is bad. It’s bad code. Bad coder, no cookies.

The first issue is just a simple, stupid bug that the dialog could have told me about in evaluating missing input/output pairs. The code returns the wrong answer if the length of the delimiter is greater than one, and I’d never restricted the length to one. While my unit tests had full code coverage, I didn’t test a delimiter longer than one character and thus had a bug.

The second issue is common, insidious, and easily caught by generated unit tests. What happens if the input string or delimiter is null? Not only can this be caught by unit tests, but it’s a straightforward refactoring to insert the code you want into the actual library method – assertion, exception, or automatic return (I want null returned for null). And just in case you’re not convinced yet, there’s also a fantastic opportunity for documentation – all that stuff in our imagined dialog belongs in your documentation. Eventually I believe the line between your library code, unit tests and documentation should be blurry and dynamic – so don’t get too stuck on that dialog concept (I hate it).
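For the record, a corrected sketch addressing both bugs as described (null in, null out; delimiters longer than one character) – what a null delimiter should do is a decision the original leaves open, so it is left open here too:

```csharp
using System;

public static class StringExtensions
{
    public static string SubstringAfter(this string input, string delimiter)
    {
        if (input == null) return null;  // bug 2: automatic return, null for null

        var pos = input.IndexOf(delimiter, StringComparison.Ordinal);
        if (pos < 0) return "";

        // Bug 1: skip the whole delimiter, not just one character.
        return input.Substring(pos + delimiter.Length);
    }
}
```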

To straighten one possible misconception in the vision I’m drawing for you, I am passionately opposed to telling programmers the order in which they should do their tasks. If this dialog is only available before you start writing your method – forget it. Whether you do TDD or spike the library method, whether you make the decisions (filling in the imagined dialog) up front or are retrofitting concepts to legacy code, the same process works.

And that’s where Roslyn comes in. As I said, we abandoned the research on this eight years ago as increasing the surface area of what it takes to write an app and requiring too much work in a specific order (and other reasons). Roslyn changes the story because we can understand the declaration, XML comments, the library method, the unit test name and attributes, and the actual code in the method and unit test without doing our own parsing. This allows the evaluation to be done at any time.

That’s just one of the reasons I’m excited about Roslyn. My brain is not big enough to imagine all the ways that we’re going to change the meaning of programming in the next five years. Roslyn is a small, and for what it’s worth highly imperfect, cog in that process. But it’s the cog we’ve been waiting for.

How are Event Parameters Best Used to Create an Intuitive Custom EventSource Trace

Jason asked a really, really good question on StackOverflow. I’m answering it here, because it’s a wordy answer. The good news is the answer is also in my about-to-be-released Pluralsight course on ETW (everything is in their hands, just not posted yet – hopefully next Thursday!). I’ve also got some additional blog posts coming, but today, let me just answer Jason’s question.

“it is unclear how some of the properties are best used to create an intuitive custom trace”

Jason goes on to categorize Event attributes as “intuitive” and “non-intuitive”. I’m throwing out that distinction and covering all of them. And the most important advice might be the last on Message.

Channel
ETW supports four basic channels and the potential for custom channels. EventSource does not support custom channels (if you have a user story, contact me or the team). The default channel and the only one currently supporting in-line manifests is the Debug channel.

The Channel parameter exists only in the NuGet version and only for the purpose of accessing the additional channels, primarily the admin channel to access EventViewer for messages to admins. I was one of the people that fought for this capability, but it is for a very limited set of cases. Almost all events logically write to your channel – the default channel – the Debug channel.

To write to EventViewer, you need to write to the Admin channel and install a manifest on the target computer. This is documented in the specification, in my video, and I’m sure a couple of blog posts. Anything written to the admin channel is supposed to be actionable by ETW (Windows) guidelines.

Use Operational and Analytic channels only if it is part of your app requirements or you are supporting a specific tool.

In almost all cases, ignore the Channel parameter on the Event attribute and allow trace events to go to the Debug channel.

Level
For the Admin Channel

If you are writing to the admin channel, it should be actionable. Information is rarely actionable. Use warning when you wish to tell them (not you, not a later dev, but ops) that you want them to be concerned. Perhaps that response times are nearing the tolerances of the SLA. Use error to tell them to do something. Perhaps that someone in the organization is trying to do something they aren’t allowed to do. Tell them only what they need to know. Few messages, but relatively verbose and very clear on what’s happening, probably including response suggestions. This is “Danger, danger Will Robinson” time.

For the Debug Channel

This is your time-traveling mind meld with a future developer or a future version of yourself.

I’m lucky enough to have sat down several times with Vance, Dan and Cosmin and this is one of the issues they had to almost literally beat into my head. The vast majority of the time, your application can, and probably should run with the default information turned on.

If you’re looking at an event that clearly represents a concern you have as a developer – something you want to scare a later developer because it scares you – like a serious failed assert – use warning. If someone is holding a trace file with ten thousand entries, what are the three things or the ten things you think tell them where the problem is? If they are running at the warning (not informational level) what do they really, truly need to know?

If it’s an error, use the error level.

If it’s a massively frequent, rarely interesting event, use verbose. Massively frequent is thousands of times a second.

In most cases, use the default informational level for the Level parameter of the Event attribute. Depending on team philosophy, ignore it or record it.

Keywords
If you have verbose events, they need to be turned on and off in an intelligent fashion. Groups of verbose events need keywords to allow you to do this.

Warnings and Error levels do not need keywords. They should be on, and the reader wants all of them.

The danger of missing an event so vastly outweighs the cost of collecting events that informational events should be turned on without concern for keywords. If keywords aren’t going to be used to filter collection, their only value is filtering the trace output. There are so many other ways to filter the trace, keywords are not that helpful.

In most cases, use the Keywords parameter of the Event attribute only for verbose events and use them to group verbose events that are likely to be needed together. Use Keywords to describe the anticipated debugging task where possible. Events can include several Keywords.
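A sketch of that split, with illustrative names: the verbose event carries a keyword so a controller can enable its group; the warning carries none:

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    // The nested class must be named Keywords; each value is a single bit.
    public static class Keywords
    {
        public const EventKeywords Caching = (EventKeywords)0x1;
        public const EventKeywords Perf = (EventKeywords)0x2;
    }

    public static readonly AppEventSource Log = new AppEventSource();

    // Massively frequent, rarely interesting: verbose, grouped by keyword.
    [Event(1, Level = EventLevel.Verbose, Keywords = Keywords.Caching)]
    public void CacheLookup(string key) { WriteEvent(1, key); }

    // Warnings are always on; no keyword needed.
    [Event(2, Level = EventLevel.Warning)]
    public void ResponseNearingSla(int responseMs) { WriteEvent(2, responseMs); }
}
```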

Task
On the roller coaster of life, we just entered one of the scary tunnels – the murky world of ETW trace event naming. As far as ETW is concerned, your event is identified with a numeric ID. Period.

Consumers of your trace events have a manifest – either because it’s in-line (default for Debug channel, supported by PerfView and gradually being supported by WPR/WPA) or installed on the computer where the trace is consumed. The manifest does not contain an event name that is used by consumers.

Consumers, by convention, make a name from your Task and Opcode.

EventSource exists to hide the weirdness (and elegance) of ETW. So it takes the name of your method and turns it into a task. Unless you specify a task. Then it uses your task as the task and ignores the name of your method. Got it?

In almost all cases, do not specify a Task parameter for the Event attribute, but consider the name of your method to be the Task name (see Opcode for exception).

Opcode
I wish I could stop there, but Jason points out a key problem. The Start and Stop opcodes can be very important to evaluating traces because they allow calculation of elapsed time. When you supply these opcodes, you want to supply the Task to ensure proper naming.

And please consider the humans. They see the name of the method and they think it’s the name displayed in the consumer. For goodness’ sake, make it so. If you specify a task and opcode, ensure that the method name is the concatenation of the two. Please.

This is messy. I’m working on some IDE generation shortcuts to simplify EventSource creation and this is a key reason. I think it will help, but it will require the next public release of Roslyn.

Almost never use an Opcode parameter other than Start/Stop.

When using Start/Stop Opcodes, also supply a Task and ensure the name of the method is the Task concatenated with the Opcode for the sake of the humans.
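A sketch of that pattern (the task name and event IDs are hypothetical; note each method name is the Task concatenated with the Opcode):

```csharp
using System;
using System.Diagnostics.Tracing;

public sealed class GrowthEventSource : EventSource
{
    public static readonly GrowthEventSource Log = new GrowthEventSource();

    // EventSource requires custom tasks in a nested class named "Tasks"
    public static class Tasks
    {
        public const EventTask CalculateGrowthRate = (EventTask)1;
    }

    // Start/Stop pairs let consumers calculate elapsed time.
    // Method name = Task name + Opcode name, for the sake of the humans.
    [Event(1, Task = Tasks.CalculateGrowthRate, Opcode = EventOpcode.Start)]
    public void CalculateGrowthRateStart(int InitialPopulation)
    { WriteEvent(1, InitialPopulation); }

    [Event(2, Task = Tasks.CalculateGrowthRate, Opcode = EventOpcode.Stop)]
    public void CalculateGrowthRateStop()
    { WriteEvent(2); }
}
```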


The version parameter of the Event attribute is available for you and consumers to communicate about whether the right version of the manifest is available. Versioning is not ETW’s strength – events rarely changed before we devs got involved, and now we have in-line manifests (to the Debug channel). You can use it, and the particular consumer you’re using might do smart things with it. Even so, there is no guarantee that the correct manifest is installed on any machine where installed manifests are used.

Overall, I see some pain down this route.

The broad rule for versioning ETW events is don’t. That is, do not change them except to add additional data at the end (parameters to your method and WriteEvent call). In particular, never rearrange in a way that could give different meaning to values. If you must remove a value, force a default or marker value indicating it is missing. If you must otherwise alter the trace output, create a new event. And yes, that advice sucks. New events with “2” at the end suck. As much as possible, do up-front planning (including confidentiality concerns) to avoid later changes to payload structure.

Initially ignore the Version parameter of the Event attribute (use default), but increment as you alter the event payload. But only add payload items at the end unless you can be positive that no installed manifests exist (and I don’t think you can).
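For example (a hypothetical event where a payload item was later appended, under the assumption that nothing already in the payload was reordered or removed):

```csharp
using System;
using System.Diagnostics.Tracing;

public sealed class RequestEventSource : EventSource
{
    public static readonly RequestEventSource Log = new RequestEventSource();

    // Originally: IncomingDataRequest(string Entity, string PrimaryKey)
    // RetryCount was appended at the end of the payload, so the
    // Version parameter was incremented from its default of 0 to 1.
    [Event(1, Version = 1)]
    public void IncomingDataRequest(string Entity, string PrimaryKey, int RetryCount)
    { WriteEvent(1, Entity, PrimaryKey, RetryCount); }
}
```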


Did you notice that so far I’ve said to rarely use any of the parameters on the Event attribute? Almost never use them.

The Message parameter, on the other hand, is your friend.

The most important aspect of EventSource is documenting what the event needs from the caller of the code. It’s the declaration of the Event method. Each item passed should be as small as possible, non-confidential, and have a blazingly clear parameter name.

The developer writing against your event sees an available log method declaration like “IncomingDataRequest(string Entity, string PrimaryKey).” Exactly how long does it take him to get that line of code in place? “IncomingRequest(string msg)” leaves the dev wondering what the message is or whether it’s even the correct method. I’ve got some stuff in my upcoming video on using generics to make it even more specific.

Not only does special attention to Event method parameters pay off by speeding the writing of code that will call the Event method (removing all decision making from the point of the call), but (most) consumers see this data as individual columns. They will lay this out in a very pretty fashion. Most consumers allow sorting and filtering by any column. Sweet!

This is what Strongly Typed Events are all about.

Parameters to your method like “msg” do not cut it. Period.

In addition to the clarity issues, strings are comparatively enormous to be sticking into event payloads. You want to be able to output boatloads of events – you don’t want big event payloads filling your disks. Performance starts sucking pretty quickly if you also use String.Format to prepare a message that might never be output.

Sometimes the meaning of the parameter is obvious from the name of the event. Often it is not. The contents of the Message parameter are included in the manifest and allow consumers to display a friendly text string that contains your literals and whatever parts of the event payload seem interesting. Sort of like String.Format() – the “Message” parameter is actually better described as a “format” parameter. Since it’s in the manifest, it should contain all the repeatable parts. Let the strongly typed data contain only what’s unique about that particular call to the trace event.

The Message parameter uses curly braces so you feel warm and fuzzy. That’s nice. But the actual string you type in the parameter is passed to the consumer, with the curly braces replaced with ETW friendly percent signs. Do not expect the richness of String.Format() to be recognized by consumers. At least not today’s consumers.

By splitting the data into strongly typed chunks and providing a separate Message parameter, the person evaluating your trace can both sort by columns and read your message. The event payload contains only data, the manifest allows your nice wordy message. Having your beer and drinking it too.

Not sold yet? If you’re writing to a channel that uses installed manifests, you can also localize the message. This can be important if you are writing to the admin channel for use in EventViewer.

Almost always use Message so consumers can provide a human friendly view of your strongly typed event payload.
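A minimal sketch (the event ID and message wording are hypothetical), using the IncomingDataRequest declaration discussed above:

```csharp
using System;
using System.Diagnostics.Tracing;

public sealed class DataEventSource : EventSource
{
    public static readonly DataEventSource Log = new DataEventSource();

    // The Message string lives in the manifest; {0} and {1} become
    // ETW-friendly %1 and %2 for consumers. The payload itself stays
    // strongly typed - two columns, sortable and filterable - while
    // the repeatable literal text travels in the manifest, not the event.
    [Event(1, Message = "Incoming data request for entity {0} with key {1}")]
    public void IncomingDataRequest(string Entity, string PrimaryKey)
    { WriteEvent(1, Entity, PrimaryKey); }
}
```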


There are four basic rules for EventSource usage:

  • Give good Event method names
  • Provide strongly typed payload data – consider confidentiality – and work to get payload contents right the first time (small where possible)
  • Use the Message parameter of the event attribute for a nice human friendly message
  • For every other Event attribute parameter – simplify, simplify, simplify. Use the defaults unless you are trying to do something the defaults don’t allow

.NET 4.5.1 and Security Updates

I’d just like to add a quick little opinion piece on this very important announcement from the .NET framework team.

On Oct 16, 2013, the team announced that NuGet would be a release mechanism for the .NET framework. On that day, we should have all started holding our breath, getting prescriptions for Valium, or moving to Washington or Colorado (although technically that would not have helped until Jan 1).

A .NET framework release vehicle that is not tied to a security update mechanism? Excuse me? What did you say? Really? No? Are you serious?

Today’s announcement is that security update mechanism.

Obviously, this mechanism has been in the works, and it was just a matter of getting everything tied together and ready to go, which for various reasons took until .NET 4.5.1.

So, there are now several really important points to make about the .NET framework/CLR and its relation to NuGet:

  • The concept of the “.NET Framework” is now a bit fuzzy. Did you notice I no longer capped the word “framework?” It’s been a couple years coming and a lot of background work (like PCL), but you have control over what “.NET framework” means in your application in a way that could never have been imagined by the framers of the Constitution.
  • The NuGet vehicle for delivering pieces of the framework/CLR supports a special Microsoft feed, PCL, and has already been used for really interesting things like TPL Dataflow and TraceEvent. It looks mature and ready from my vantage.
  • Gone are the days when NuGet was only pre-release play-at-your-peril software. Look for real stuff, important stuff, good stuff, stuff you want from the CLR/.NET framework team to be released on NuGet, as well as ongoing releases from other teams like ASP.NET and EF.
  • They’ve made some very important promises in today’s announcement. They must fulfill these promises (no breaking changes, period). We must hold their feet to the fire. In-place upgrades are a special kind of hell, the alternative is worse, and even if you disagree on that point, this is the game we are playing.
  • Upgrade to .NET 4.5.1 as soon as your team feels it can. Please.

Thank you to the team for the incredible work of the last few years. Thank you. Thank you.

Explanation of Finding All Unique Combinations in a List

I showed an algorithm for finding all unique combinations of items in a list here.

I didn’t know whether the explanation would be interesting, so I simply offered to add it if someone wanted to see it. Someone did, so here goes.


Imagine first the problem on paper. Make a column for each item in the list – four is a good place to start and then you can generalize it upwards. The list will look something like this:


I’ll update this grid to use the number 1 as an x, and add a column for no items, because the empty set is important for the problem I’m solving:


And then assign each column to a bit. This makes each of the numbers from 0 to 15 a bit mask for the items to select to make that entry in the unique set.



The code creates a List of Lists because the problem I was solving was combinations of objects, not strings. I should have used IEnumerable here, as the returned list won’t be changed.

public static List<List<T>> GetAllCombos<T>(this List<T> initialList)
   var ret = new List<List<T>>();

I’m not sure about the mathematical proof for this, but the number of items is always 2^N (two to the power of N), or 2^N – 1 if you’re ignoring the empty set. Intuitively, each item is either in or out of a given subset – two choices per item, multiplied across all N items.

   // The final number of sets will be 2^N (or 2^N - 1 if skipping empty set)
   int setCount = Convert.ToInt32(Math.Pow(2, initialList.Count()));

When I started this post, I realized I’d left the Math.Pow function in this first call, so I’ll explain the difference. Math.Pow takes any value as a double data type and raises it to the power of another double. Doubles are floating point, making this a very general function. But in the special case of 2^N, there’s a much faster way to do this – perhaps two orders of magnitude faster. This doesn’t matter if you are only calling it once, but it is sloppy. Instead, I should have done a bit shift.

   var setCount = 1 << initialList.Count();

This bit shift operator takes the value of the left operand (one) and shifts it to the left. Thus if the initial count is zero, no shift is done, and there will be one resulting item, the empty list. If there are two items, the single bit that is set (initially 1) is shifted twice, and the result is 4:


Since each number is an entry in my set of masks, I iterate over the list (setCount is 16 for four items):

   // Start at 1 if you do not want the empty set
   for (int mask = 0; mask < setCount; mask++)

For my purposes, I’m creating a list – alternatively, you could build a string or create something custom from the objects in the list (a custom projection).

      var nestedList = new List<T>();

I then iterate over the count of initial items (4 for four items) – this corresponds to iterating over the columns of the grid:

      for (int j = 0; j < initialList.Count(); j++)

For each column, I need the value of that bit position – the value above the letter in the grids above. I can calculate this using the bit shift operator. Since this operation will be performed many times, you definitely want to use bit shift instead of Math.Pow here:

          // Each position in the initial list maps to a bit here
          var pos = 1 << j;

For each item in the list, I use a bitwise And operator to determine whether the bit is set. If the mask has the bit in position j set, then that entry in the initial list is added to the new nested list.

         if ((mask & pos) == pos) { nestedList.Add(initialList[j]); }

Finally, I add the new nested list to the lists that will be returned, and finish out the loops.

   return ret;
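Assembled from the fragments above – using the bit-shift form throughout, and with the braces the step-by-step fragments elide – the whole method reads:

```csharp
using System.Collections.Generic;

public static class ComboExtensions
{
    public static List<List<T>> GetAllCombos<T>(this List<T> initialList)
    {
        var ret = new List<List<T>>();

        // The final number of sets will be 2^N (or 2^N - 1 if skipping empty set)
        var setCount = 1 << initialList.Count;

        // Start mask at 1 if you do not want the empty set
        for (int mask = 0; mask < setCount; mask++)
        {
            var nestedList = new List<T>();
            for (int j = 0; j < initialList.Count; j++)
            {
                // Each position in the initial list maps to a bit here
                var pos = 1 << j;
                if ((mask & pos) == pos) { nestedList.Add(initialList[j]); }
            }
            ret.Add(nestedList);
        }
        return ret;
    }
}
```

For four items this returns 16 lists, starting with the empty list (mask 0) and ending with the full list (mask 15).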


Five Levels of Code Generation

NOTE 31 Jan, 2014: I discussed a sixth level in this post
NOTE 8 Feb 2014: I discussed why there are four, not five, bullet points in this post

I want to clarify the five levels of code generation because there’s recently been some confusion on this point with the RyuJIT release, and because I want to reference it in another post I’m writing.

Code generation can refer to…

– Altering code for the purpose of machines
The path from human readable source code to machine language

– DSL (Domain Specific Language)
Changing human instructions of intent, generally into human readable source code

– File Code Generation
Creating human readable source code files from small metadata, or sometimes, altering those files

– Application code generation or architecture expression
Creating entire systems from a significant body of metadata

Did I leave anything out?

The increasing size and abstraction of each level means we work with it fundamentally differently.

We want to know as little as possible about the path from human readable to machine readable. Just make it better and don’t bother me. The step we think about here is the compiler, because we see it. The compiler creates MSIL, an intermediate language. There’s another step of going from IL to native code, and there’s an amazing team at Microsoft that does that – it happens to be called the code gen team inside of the BCL/CLR team. That’s not what I mean when I say code generation.

The phrase Domain Specific Language means many things to many people. I’m simplifying it to a way to abstract sets of instructions. This happens close to the point of application development – closer than a language like C#. As such, there is a rather high probability of bugs in the expression of the DSL – the next step on the path to native code. Thus most DSLs express themselves in human readable languages.

File code generation is what you almost certainly mean when you say “code generation”. Give me some stuff and make a useful file from it. This is where tools like T4, Razor, and the Visual Studio Custom Tools feature are aimed. And that’s where my upcoming tool is aimed.

Architecture expression may be in its infancy, but I have no doubt it is what we will all be doing in ten years. There’s been an enormous logjam at the 3rd Generation Language phase (3GL) for some very understandable reasons. It’s an area where you can point to many failures. The problem is not in the expression – the problem is in the metadata. It’s not architecture expression unless you can switch architectures – replace what you’re currently doing with something else entirely. That requires a level of metadata understanding we don’t have. And architectures that better isolate the code that won’t fit into the metadata format, which we have and don’t use.

RyuJIT is totally and completely at the first level. It’s a better way to create native code on a 64 bit computer, which means compiling your app to 64 bit should no longer condemn it to run slower than its 32 bit friends. That’s a big deal, particularly as we’re shoved into 64 bit because of side cases like security/cryptography performance.

RyuJIT is either the sixth or seventh different way your IL becomes native code. I bet you didn’t know that. It’s a huge credit to the team to have integrated improved approaches without you even needing to know about them. (Although, if you have startup problems in non-ASP.NET applications, explore background-JIT and MPGO, as well as NGen for simple cases.)

The confusion when RyuJIT was released was whether it replaced the Roslyn compilers. The simple answer is “no.” Shout it from the rooftops. Roslyn is not dead. But that’s a story for another day.