
Semantic Tracing


I talk about semantic tracing in my Pluralsight course Event Tracing for Windows (ETW) in .NET. Here’s a summary.

The Semantic Logging Application Block (SLAB) documentation discusses semantic tracing here. This is great documentation, but obscures the fact that semantic tracing can be done on any platform in any language. It’s just a style of tracing.

You create a class, or a very small number of classes, dedicated to tracing (pseudo-code):

public class Logger
{
    public static Logger Log = new Logger();
    public void AccessByPrimaryKey(int PrimaryKey, string TableName) { ... }
    public void CalculateGrowthRateStart(int InitialPopulation) { ... }
    public void CalculateGrowthRateEnd() { ... }
    // Numerous methods
}


Now tracing just involves knowing the logger class name and typing:
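As a sketch, assuming a Logger class like the pseudo-code above with a static Log member (names are illustrative), the entire call site is one line:

```csharp
public class Logger
{
    public static Logger Log = new Logger();
    public void AccessByPrimaryKey(int PrimaryKey, string TableName) { /* trace call */ }
}

public static class Example
{
    public static void Main()
    {
        // The whole call site: the class, the static instance, the semantic method.
        Logger.Log.AccessByPrimaryKey(PrimaryKey: 42, TableName: "Customer");
    }
}
```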



With IntelliSense at every step this is about a dozen keystrokes. The only specific knowledge you need is the name of the Logger class – and with Visual Studio IntelliSense, you’ll get that easily if you include “Logger” anywhere in the name. Just imagine how simple that is!


Discoverability and easy access are just two of the benefits of this style of logging.

The call to the trace method states what is happening and helps to document the calling source code. This is especially true if you use named parameters.

The resulting trace output is strongly typed and easily filtered and sorted on specific named parameter values.

Strongly typed parameters improve performance by avoiding string creation and avoiding boxing on any platform that has cheap stacks and expensive heap memory usage (such as .NET).

All access to the underlying tracing technology is isolated. Stuck with TraceSource now, but intimidated by the potential of future performance issues? Move to semantic tracing, and the future change to use EventSource and ETW will involve a single file. Any change that doesn’t involve the signature, even a technology change, is isolated to a single class.
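As a sketch (the TraceSource name and event id are illustrative), the technology choice lives entirely inside the logger:

```csharp
using System.Diagnostics;

public class Logger
{
    public static Logger Log = new Logger();

    // The only code that knows the underlying technology is TraceSource today.
    // Swap this body for an EventSource/ETW call later; callers never change,
    // because the method signature stays the same.
    private readonly TraceSource trace = new TraceSource("MyApp");

    public void AccessByPrimaryKey(int PrimaryKey, string TableName)
    {
        trace.TraceEvent(TraceEventType.Information, 1,
            "AccessByPrimaryKey: {0}, {1}", PrimaryKey, TableName);
    }
}
```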

Another implication of isolated tracing is that details can be added and removed as they are available via the underlying technology – things like thread and user.

Details about the trace are also consistent. The calling code is unconcerned about issues like level (error, warning, information), channel, and keywords. These details can be changed at any point in the application life cycle.

The result is a trace strategy that is clear and consistent.

ETW, EventSource and SLAB

Both EventSource and SLAB encourage semantic tracing, assuming you avoid creating generic trace methods like “Log” and “Warning.” Semantic logging involves giving a nice name to the method, along with strongly typed parameters.

You might want vague methods to keep you going one afternoon when you encounter something new you want to trace, but use them as a temporary measure.

Whether a particular event is a warning, information, or an error is in the nature of the action, not the nature of this particular occurrence of the action. If it varies by occurrence, either it’s not very well defined or it’s not semantic.


One of the most important tools in improving your trace performance is semantic tracing. Simply put, it means that all trace calls are to a common class, or small set of classes, that have methods indicating each specific thing you will trace, along with strongly typed parameters. The trace method and each trace parameter have very clear names, allowing IntelliSense to guide correct calls.

Multiple Versions of EventSource and One Version of SLAB

There are several versions of EventSource. .NET 4.5 has the initial version, which is improved in .NET 4.5.1. You want to use the 4.5.1 version if you possibly can. The diagnostics, in particular, are significantly better.

The CLR team has also done a pre-release version of EventSource on NuGet. This version has a couple of important features, including even better diagnostics and Windows Event Viewer support. That’s right, you have to use the latest and greatest NuGet version if you have events you want admins to see in EventViewer.

Switching from EventSource in .NET 4.5.1 to the NuGet version of EventSource just requires changing the references and namespaces.
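In code, that’s typically just the using directive (the NuGet package ships the same types under a different namespace; the provider name below is illustrative):

```csharp
// .NET 4.5.1 built-in EventSource:
using System.Diagnostics.Tracing;

// NuGet EventSource (Microsoft.Diagnostics.Tracing.EventSource package) —
// same types, different namespace:
// using Microsoft.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp")]
public sealed class SampleEventSource : EventSource
{
    public static SampleEventSource Log = new SampleEventSource();
}
```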

The Patterns and Practices team provides SLAB (Semantic Logging Application Block) that uses EventSource extensibility to offer a more familiar experience. In particular, SLAB offers traditional listeners so you can avoid ETW complexities during development. At runtime, you probably want ETW performance, but the tools still stink and you probably don’t want to use them during development.

The P&P team thinks all this multiple version of EventSource stuff is confusing. Well, they are right, it is confusing.

Rather than offer a version of SLAB for the NuGet version of EventSource, the P&P team decided to stick with a single version of SLAB that targets the .NET 4.5.1 version of EventSource.

If you want a version of SLAB that targets the NuGet version of EventSource, download the SLAB source code and make the same changes you’d make to your own code – change the references and namespaces. Here Grigori Melnik says SLAB 1.1 will improve the test suite to better support this modification.

What Does Blazingly Fast Tracing Mean?

I wrote a summary of ETW here and some basics here. I’ll be posting more to coincide with my new Pluralsight course on ETW tracing.

ETW tracing is blazingly fast. The operating system offloads the trace work, such as writing trace buffers to disk, onto another thread. It does this at a very low level, allowing very, very fast tracing when tracing is turned on. If tracing is turned off, the performance hit is even smaller. But why worry about tracing turned off!?!?! The goal is creating a trace system you can leave on in production!

OK, that’s a good sound bite, but while I think you should have some tracing that you leave on all the time, you might also want some high frequency event that you have on only some of the time, such as when you’re doing a particular kind of evaluation.

Here are some rough performance metrics I did a while ago – this is on my laptop with a fair amount of open applications.




                 Typed Overload     Object Overload
Event Off        100 million/sec    50 million/sec
Event Watched    2 million/sec      1 million/sec


Don’t get carried away with the details of these numbers, and do your own metrics if you feel the details are important. A nanosecond is one billionth of a second. Dedicating the CPU to tracing (a loop) allows about 2 million trace calls/sec with tracing turned on, 100 million with tracing turned off. That’s really, really fast.

But, Ok, you aren’t going to dedicate 100% of your CPU to tracing.

More or less, if you allow 1% of your system resources for tracing, with it turned on, you can do about twenty thousand trace events per second (20,000!). OK, there’s latency and throughput, but I’ll argue at 1% you just don’t care. Doing a super high throughput system with 10,000 transactions/second? You can probably still afford a few trace calls per request – yes, in production (benchmark that in your specific scenario).

By the way, the specification suggests that the hit of using the fallback overload that takes a parameter array of objects might be 10-20 fold. I didn’t see that. The spec explains how to add more overloads – but it’s ugly, requires unsafe code, and is not something I think you initially need to worry about. The hit is about double, and you can add the required overloads when you need tracing at half a millionth of a second instead of a millionth of a second.

The spec also suggests that you might see a performance increase using a locally cached copy of the logger, rather than repeated hits to a static property on the log class. I have not been able to measure any performance improvement, so this unnecessarily complicates your code (unless you are using DI, which I recommend for many, many things, but not tracing).

When you turn tracing off, the impact drops by nearly two orders of magnitude.

You can’t get anywhere close to this kind of performance out of any other trace technique I know of. Certainly not from Trace.Write, TraceSource, or Log4Net.

Did No One Count?

This is embarrassing, although I can explain, really officer. I wasn’t drinking, it just looked that way.

I put up a post called Five Levels of Code Generation and it contained these bullet points:

  • Altering code for the purpose of machines
    The path from human readable source code to machine language
  • DSL (Domain Specific Language)
    Changing human instructions of intent, generally into human readable source code
  • File Code Generation
    Creating human readable source code files from small metadata, or sometimes, altering those files
  • Application code generation or architecture expression
    Creating entire systems from a significant body of metadata

See the problem?

Now, if you read the post, you might see what I intended. You might realize that I was in the left turn lane, realized I needed something at the drugstore on the right, didn’t realize the rental car needed to have its lights turned on (mine doesn’t) on a completely empty road at midnight in a not-great neighborhood in Dallas. Really, officer, I haven’t been drinking, I’m just an idiot.

There are five because I make a split in the first item. That split was partly because the post was inspired by confusion regarding what RyuJIT means to the future of .NET. (It actually means, and only means, that your programs will run better/faster in some scenarios.)

The code you write becomes something, and then it becomes native code. That “something” for us has been IL, but might be a different representation. One reason for the distinction is that there are entirely separate teams that think about different kinds of problems working on compilers and native code generation. IL is created by a compiler that specializes in parsing, symbol resolution and widely applicable optimization. Native code is created in a manner specific to the machine where it will be run. In the big picture, this has been true since .NET was released and it’s a great design.

I think language compilation and native code creation are two distinct steps. One is all about capturing the expressive code you write, and the other is all about making a device work based on its individual operating system APIs.

But I might be wrong. I might be wrong because the increasing diversity in our environments means native code APIs have implications for the libraries you use (PCL). I might be wrong because languages like JavaScript don’t use IL (although minification is not entirely different). I might be wrong because it’s only the perspective of the coder that matters, and the coder rarely cares. I might be wrong because I’m too enamored with amazing things like multi-core background JIT and MPGO (you can see more in the Changes to the .NET Framework module of my What’s New in .NET 4.5 Pluralsight course).

The taxonomy of code generation will shape the upcoming discussions Roslyn will inspire about metaprogramming. Metaprogramming encompasses only the DSL/expansion, file, and architecture levels.

You might be rolling your eyes like the officer handing me back my license in Dallas. Yes, officer. You’re absolutely right. If I titled the post “Five Levels of Code Generation” I should have had FIVE bullet points.

How are Event Parameters Best Used to Create an Intuitive Custom EventSource Trace?

Jason asked a really, really good question on StackOverflow. I’m answering it here, because it’s a wordy answer. The good news is that the answer is also in my about-to-be-released Pluralsight course on ETW (everything is in their hands, just not posted yet, hopefully next Thursday!). I’ve also got some additional blog posts coming, but today, let me just answer Jason’s question.

“it is unclear how some of the properties are best used to create an intuitive custom trace”

Jason goes on to categorize Event attributes as “intuitive” and “non-intuitive”. I’m throwing out that distinction and covering all of them. And the most important advice might be the last on Message.


Channel

ETW supports four basic channels and the potential for custom channels. EventSource does not support custom channels (if you have a user story, contact me or the team). The default channel, and the only one currently supporting in-line manifests, is the Debug channel.

The Channel parameter exists only in the NuGet version and only for the purpose of accessing the additional channels, primarily the admin channel to access EventViewer for messages to admins. I was one of the people that fought for this capability, but it is for a very limited set of cases. Almost all events logically write to your channel – the default channel – the Debug channel.

To write to EventViewer, you need to write to the Admin channel and install a manifest on the target computer. This is documented in the specification, in my video, and I’m sure a couple of blog posts. Anything written to the admin channel is supposed to be actionable by ETW (Windows) guidelines.

Use Operational and Analytic channels only if it is part of your app requirements or you are supporting a specific tool.

In almost all cases, ignore the Channel parameter on the Event attribute and allow trace events to go to the Debug channel.
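If you do need the admin channel, the NuGet version’s Event attribute accepts a Channel. A sketch (the provider name, event id, and message are illustrative, and this compiles only against the NuGet package):

```csharp
using Microsoft.Diagnostics.Tracing;   // NuGet EventSource; channels need this version

[EventSource(Name = "MyCompany-MyApp-Admin")]
public sealed class AdminEventSource : EventSource
{
    public static AdminEventSource Log = new AdminEventSource();

    // Actionable, admin-facing event; requires an installed manifest to show
    // up in Event Viewer.
    [Event(1, Channel = EventChannel.Admin, Level = EventLevel.Warning,
           Message = "Response time {0}ms is nearing the SLA limit")]
    public void ResponseTimeNearSla(int ResponseTimeMs)
    {
        if (IsEnabled()) WriteEvent(1, ResponseTimeMs);
    }
}
```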


Level

For the Admin Channel

If you are writing to the admin channel, it should be actionable. Information is rarely actionable. Use warning when you wish to tell them (not you, not a later dev, but ops) that you want them to be concerned. Perhaps that response times are nearing the tolerances of the SLA. Use error to tell them to do something. Perhaps that someone in the organization is trying to do something they aren’t allowed to do. Tell them only what they need to know. Few messages, but relatively verbose and very clear on what’s happening, probably including response suggestions. This is “Danger, danger Will Robinson” time.

For the Debug Channel

This is your time-traveling mind meld with a future developer or a future version of yourself.

I’m lucky enough to have sat down several times with Vance, Dan and Cosmin and this is one of the issues they had to almost literally beat into my head. The vast majority of the time, your application can, and probably should run with the default information turned on.

If you’re looking at an event that clearly represents a concern you have as a developer – something you want to scare a later developer because it scares you – like a serious failed assert – use warning. If someone is holding a trace file with ten thousand entries, what are the three things or the ten things you think tell them where the problem is? If they are running at the warning (not informational level) what do they really, truly need to know?

If it’s an error, use the error level.

If it’s a massively frequent, rarely interesting event, use verbose. Massively frequent is thousands of times a second.

In most cases, use the default informational level for the Level parameter of the Event attribute. Depending on team philosophy, ignore it or record it.


Keywords

If you have verbose events, they need to be turned on and off in an intelligent fashion. Groups of verbose events need keywords to allow you to do this.

Warnings and Error levels do not need keywords. They should be on, and the reader wants all of them.

The danger of missing an event so vastly outweighs the cost of collecting events that informational events should be turned on without concern for keywords. If keywords aren’t going to be used to filter collection, their only value is filtering the trace output. There are so many other ways to filter the trace, keywords are not that helpful.

In most cases, use the Keywords parameter of the Event attribute only for verbose events and use them to group verbose events that are likely to be needed together. Use Keywords to describe the anticipated debugging task where possible. Events can include several Keywords.
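As a sketch (provider name, event id, and keyword names are illustrative), keywords are declared in a nested class named Keywords, per EventSource convention, and used here only to group a verbose event:

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp-Cache")]
public sealed class CacheEventSource : EventSource
{
    public static CacheEventSource Log = new CacheEventSource();

    // EventSource convention: keywords live in a nested class named Keywords.
    public class Keywords
    {
        public const EventKeywords Cache = (EventKeywords)1;
        public const EventKeywords Growth = (EventKeywords)2;
    }

    // Massively frequent, rarely interesting: verbose, with a keyword so it
    // can be switched on only for the cache-debugging task.
    [Event(1, Level = EventLevel.Verbose, Keywords = Keywords.Cache)]
    public void CacheLookup(string Key)
    {
        if (IsEnabled(EventLevel.Verbose, Keywords.Cache)) WriteEvent(1, Key);
    }
}
```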


Task

On the roller coaster of life, we just entered one of the scary tunnels – the murky world of ETW trace event naming. As far as ETW is concerned, your event is identified with a numeric ID. Period.

Consumers of your trace events have a manifest – either because it’s in-line (default for Debug channel, supported by PerfView and gradually being supported by WPR/WPA) or installed on the computer where the trace is consumed. The manifest does not contain an event name that is used by consumers.

Consumers, by convention, make a name from your Task and Opcode.

EventSource exists to hide the weirdness (and elegance) of ETW. So it takes the name of your method and turns it into a task. Unless you specify a task. Then it uses your task as the task and ignores the name of your method. Got it?

In almost all cases, do not specify a Task parameter for the Event attribute, but consider the name of your method to be the Task name (see Opcode for exception).


Opcode

I wish I could stop there, but Jason points out a key problem. The Start and Stop opcodes can be very important to evaluating traces because they allow calculation of elapsed time. When you supply these opcodes, you want to supply the Task to ensure proper naming.

And please consider the humans. They see the name of the method and think it’s the name displayed in the consumer. For goodness’ sake, make it so. If you specify a task and opcode, ensure that the method name is the concatenation. Please.

This is messy. I’m working on some IDE generation shortcuts to simplify EventSource creation and this is a key reason. I think it will help, but it will require the next public release of Roslyn.

Almost never use an Opcode parameter other than Start/Stop.

When using Start/Stop Opcodes, also supply a Task and ensure the name of the method is the Task concatenated with the Opcode for the sake of the humans.
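A sketch of that Start/Stop pattern (provider name, event ids, and task name are illustrative), with the method names as the Task concatenated with the Opcode:

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp-Growth")]
public sealed class GrowthEventSource : EventSource
{
    public static GrowthEventSource Log = new GrowthEventSource();

    // EventSource convention: tasks live in a nested class named Tasks.
    public class Tasks
    {
        public const EventTask CalculateGrowthRate = (EventTask)1;
    }

    // Method name = Task name + Opcode name, for the sake of the humans.
    [Event(1, Task = Tasks.CalculateGrowthRate, Opcode = EventOpcode.Start)]
    public void CalculateGrowthRateStart(int InitialPopulation)
    {
        if (IsEnabled()) WriteEvent(1, InitialPopulation);
    }

    [Event(2, Task = Tasks.CalculateGrowthRate, Opcode = EventOpcode.Stop)]
    public void CalculateGrowthRateStop()
    {
        if (IsEnabled()) WriteEvent(2);
    }
}
```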


Version

The version parameter of the Event attribute is available for you and consumers to communicate about whether the right version of the manifest is available. Versioning is not ETW’s strength – events rarely changed before we devs got involved, and now we have in-line manifests (to the Debug channel). You can use it, and the particular consumer you’re using might do smart things with it. Even so, the manifest either is or is not correctly installed on any machine where installed manifests are used.

Overall, I see some pain down this route.

The broad rule for versioning ETW events is: don’t. That is, do not change them except to add additional data at the end (parameters to your method and WriteEvent call). In particular, never rearrange in a way that could give different meaning to values. If you must remove a value, force a default or marker value indicating it’s missing. If you must otherwise alter the trace output, create a new event. And yes, that advice sucks. New events with “2” at the end suck. As much as possible, do up-front planning (including confidentiality concerns) to avoid later changes to payload structure.

Initially ignore the Version parameter of the Event attribute (use default), but increment as you alter the event payload. But only add payload items at the end unless you can be positive that no installed manifests exist (and I don’t think you can).
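A sketch of an in-place payload change done the safe way (provider name, event id, and the appended DatabaseId are illustrative):

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp-Data")]
public sealed class DataEventSource : EventSource
{
    public static DataEventSource Log = new DataEventSource();

    // Version bumped to 1 because DatabaseId was appended at the END of the
    // payload; the original items keep their position and meaning.
    [Event(1, Version = 1)]
    public void AccessByPrimaryKey(int PrimaryKey, string TableName, int DatabaseId)
    {
        if (IsEnabled()) WriteEvent(1, PrimaryKey, TableName, DatabaseId);
    }
}
```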


Message

Did you notice that so far I’ve said to rarely use the parameters on the Event attribute? Almost never use them.

The Message parameter, on the other hand, is your friend.

The most important aspect of EventSource is documenting what the event needs from the caller of the code. It’s the declaration of the Event method. Each item passed should be as small as possible, non-confidential, and have a blazingly clear parameter name.

The guy writing against your event sees an available log method declaration like “IncomingDataRequest(string Entity, string PrimaryKey).” Exactly how long does it take him to get that line of code in place? “IncomingRequest(string msg)” leaves the dev wondering what the message is or whether it’s even the correct method. I’ve got some stuff in my upcoming video on using generics to make it even more specific.

Not only does special attention to Event method parameters pay off by speeding the writing of code that will call the Event method (removing all decision making from the point of the call), but (most) consumers see this data as individual columns. They will lay this out in a very pretty fashion. Most consumers allow sorting and filtering by any column. Sweet!

This is what Strongly Typed Events are all about.

Parameters to your method like “msg” do not cut it. Period.

In addition to the clarity issues, strings are comparatively enormous to be sticking into event payloads. You want to be able to output boatloads of events – you don’t want big event payloads filling your disks. Performance starts sucking pretty quickly if you also use String.Format to prepare a message that might never be output.

Sometimes the meaning of the parameter is obvious from the name of the event. Often it is not. The contents of the Message parameter is included in the manifest and allows consumers to display a friendly text string that contains your literals and whatever parts of the event payload seem interesting. Sort of like String.Format() – the “Message” parameter is actually better described as a “format” parameter. Since it’s in the manifest, it should contain all the repeatable parts. Let the strongly typed data contain only what’s unique about that particular call to the trace event.

The Message parameter uses curly braces so you feel warm and fuzzy. That’s nice. But the actual string you type in the parameter is passed to the consumer, with the curly braces replaced with ETW friendly percent signs. Do not expect the richness of String.Format() to be recognized by consumers. At least not today’s consumers.
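A sketch of the Message parameter (provider name and event id are illustrative), with the numbered placeholders mapping to the payload items in order:

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static AppEventSource Log = new AppEventSource();

    // {0} and {1} refer to the payload items in order. The literal text lives
    // in the manifest, so the event payload carries only the data.
    [Event(1, Message = "Accessed table {1} by primary key {0}")]
    public void AccessByPrimaryKey(int PrimaryKey, string TableName)
    {
        if (IsEnabled()) WriteEvent(1, PrimaryKey, TableName);
    }
}
```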

By splitting the data into strongly typed chunks and providing a separate Message parameter, the person evaluating your trace can both sort by columns and read your message. The event payload contains only data, the manifest allows your nice wordy message. Having your beer and drinking it too.

Not sold yet? If you’re writing to a channel that uses installed manifests, you can also localize the message. This can be important if you are writing to the admin channel for use in EventViewer.

Almost always use Message so consumers can provide a human friendly view of your strongly typed event payload.


There are four basic rules for EventSource usage:

  • Give good Event method names
  • Provide strongly typed payload data – consider confidentiality – and work to get payload contents right the first time (small where possible)
  • Use the Message parameter of the event attribute for a nice human friendly message
  • For every other Event attribute parameter – simplify, simplify, simplify. Use the defaults unless you are trying to do something the defaults don’t allow

.NET 4.5.1 and Security Updates

I’d just like to add a quick little opinion piece on this very important announcement from the .NET framework team.

On Oct 16, 2013, the team announced that NuGet would be a release mechanism for the .NET framework. On that day, we should have all started holding our breath, getting prescriptions for Valium, or moving to Washington or Colorado (although technically that would not have helped until Jan 1).

A .NET framework release vehicle that is not tied to a security update mechanism? Excuse me? What did you say? Really? No? Are you serious?

Today’s announcement is that security update mechanism.

Obviously, this mechanism has been in the works and it was just a matter of getting everything tied and ready to go, which for various reasons took .NET 4.5.1.

So, there are now several really important points to make about the .NET framework/CLR and its relation to NuGet:

  • The concept of the “.NET Framework” is now a bit fuzzy. Did you notice I no longer capped the word “framework?” It’s been a couple years coming and a lot of background work (like PCL), but you have control over what “.NET framework” means in your application in a way that could never have been imagined by the framers of the Constitution.
  • The NuGet vehicle for delivering pieces of the framework/CLR supports a special Microsoft feed, PCL, and has already been used for really interesting things like TPL Dataflow and TraceEvent. It looks mature and ready from my vantage.
  • Gone are the days when NuGet was only pre-release play-at-your-peril software. Look for real stuff, important stuff, good stuff, stuff you want from the CLR/.NET framework team to be released on NuGet, as well as ongoing releases from other teams like ASP.NET and EF.
  • They’ve made some very important promises in today’s announcement. They must fulfill these promises (no breaking changes, period). We must hold their feet to the fire. In place upgrades are a special kind of hell, the alternative is worse, and even if you disagree on that point, this is the game we are playing.
  • Upgrade to .NET 4.5.1 as soon as your team feels it can. Please.

Thank you to the team for the incredible work of the last few years. Thank you. Thank you.

Another Look at Event Source Performance

EDIT: Added/changed things in the section near the end on Optimization #4 on 2013-11-06

In this post, Muhammad Shujaat Siddiqi talks about high volume ETW performance and the following three optimizations.

Optimization # 1: Use IsEnabled() before WriteEvent()

Optimization # 2: Convert static field fetch to local member fetch

Optimization #3: Minimize the number of EventSources in the application

Optimization # 4: Avoid fallback WriteEvent method with Object Array Parameter

He does a great job explaining each of them, but I disagree with the second optimization and wanted to add some comments on the others based on some rough metrics I ran to check performance.

To start with the summary, EventSource plays very nice with ETW and so maintains amazing, absolutely amazing, performance characteristics.

Optimization # 1: Use IsEnabled() before WriteEvent()

If all you’re doing is calling WriteEvent (there’s no preparatory work), then I haven’t seen a significant performance difference. Basically, the first thing WriteEvent does is check whether it’s enabled, and the one extra method call is awfully hard to measure.

However, if you’re doing preparatory work that has any expense, you should absolutely use this. Preparatory work most often happens in complex calls, where a public method marked with the NonEvent attribute calls a private method that is the actual Event definition. In this case, you need to check whether the class is enabled in that public method, before the preparatory work (I realize this might be obvious, but it’s important).
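A sketch of that pattern (the Order type, provider name, and event id are hypothetical):

```csharp
using System.Diagnostics.Tracing;

public class Order
{
    public int Id;
    public string CustomerName;
}

[EventSource(Name = "MyCompany-MyApp-Orders")]
public sealed class OrderEventSource : EventSource
{
    public static OrderEventSource Log = new OrderEventSource();

    // Public wrapper: IsEnabled() runs BEFORE the preparatory work, so pulling
    // the payload apart costs nothing when tracing is off.
    [NonEvent]
    public void ReportOrderReceived(Order order)
    {
        if (IsEnabled()) OrderReceived(order.Id, order.CustomerName);
    }

    // The actual event definition.
    [Event(1)]
    private void OrderReceived(int OrderId, string CustomerName)
    {
        WriteEvent(1, OrderId, CustomerName);
    }
}
```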

As Muhammad pointed out, if you have keywords or levels associated with the event, you can check them. But the overarching principle is simplicity, so I think this applies only in the far rarer verbose/keyword scenario, not in the more common scenario where you use the defaults except for provider name and event id.

That said, if you always use IsEnabled() as soon as you arrive in your EventSource class, you will remember to do it on the rare occasion it matters.

Optimization # 2: Convert static field fetch to local member fetch

Muhammad recommends instantiating an instance of the logger in each class that uses it, instead of accessing a static field in the logger class. I tested instantiating the instance vs. a call to the static method for the reverse reason – because some documentation suggested it was slower to instantiate rather than reuse. I could not measure a difference, so arguably this is a matter of opinion. I stated that I disagree with Muhammad, not that he is wrong.

I believe there are two principles that override performance, since the perf difference is somewhere between small and non-existent.

The first is that I want to minimize, to the greatest possible extent, the code (textual) dependency between my application classes and my logger. Unfortunately, for some technical reasons EventSource does not easily support interfaces, and therefore does not easily support dependency injection (look for an upcoming blog post on that). So, you’re stuck with the dependency of a direct call on the actual class name. Since the direct call is easier to fix later if you choose to introduce the complexity of DI with EventSource, I believe it is preferable.

I also do not want multiple copies of the logger, in case a future way I am logging requires shared state. So, my preferred instantiation pattern blocks creation of multiple instances:

[EventSource(Name = "KadGen-EtwSamples-NormalEventSource")]
public class NormalEventSource : EventSource
{
    private static readonly Lazy<NormalEventSource> Instance =
        new Lazy<NormalEventSource>(() => new NormalEventSource());

    private NormalEventSource() { }

    public static NormalEventSource Log
    {
        get { return Instance.Value; }
    }

    public void AccessByPrimaryKey(int PrimaryKey, string TableName)
    {
        if (IsEnabled()) WriteEvent(1, PrimaryKey, TableName);
    }
}


Please determine the best approach for your team, and be consistent.

Optimization #3: Minimize the number of EventSources in the application

I didn’t actually measure this, because I think performance here is irrelevant.

Each EventSource class maps to a single ETW provider with its name, id and manifest.

Designing your providers is one of the most critical aspects of creating your ETW system. I believe in one per component, with some sharing (and its associated hazards); Muhammad suggests one per type, which is far more than I would use. Regardless, no other aspect – not criticality level, not channels, not opcodes, not keywords – is in the ballpark of provider names (and ids) when it comes to your design.

Later in a consumer, you will almost certainly sort and filter on the provider names. You will collect data based on provider names, and the more you have the more chance that you’ll miss key events.

Create exactly the number of EventSources you need and no more. When in doubt, take the path that results in a simpler strategy.

Optimization #4: Avoid the fallback WriteEvent method with the object array parameter

EDIT: Added/changed things in this section 2013-11-6

This is the place you can change the performance characteristic of your ETW tracing – for the good or for the bad.

As Muhammad explains well, the WriteEvent method has a finite number of overloads. If you pass parameters that do not match one of these overloads, a paramarray of objects is used. Now, I didn’t see anything close to the 10-20 times slowdown proposed in the spec, but it did double the time, which is enough to get my attention.

There are two scenarios where you’ll wind up with the object overload. The simplest to fix is passing enums. Accept the enum into your EventSource method as the corresponding enum so that the manifest correctly manages the enum. And then cast the enum to an int in the WriteEvent call like this:


public void AccessByPrimaryKey(int PrimaryKey, string TableName, ConnectionState ConnectionState)
{
    if (IsEnabled()) WriteEvent(1, PrimaryKey, TableName, (int)ConnectionState);
}

If I check the overloads for WriteEvent, there is an (int, string, int) overload, so this call will not hit the object overload. But it’s quite easy to create a combination of parameters that doesn’t match any of the overloads. I do not believe it is appropriate to rearrange the parameters of your public method just to match an overload – and the parameters in the WriteEvent call must match the order of the parameters passed to your public method.

The fix for this isn’t quite as trivial – it’s easy, but ugly. It’s explained in the EventSource specification; the section “Creating fast WriteEvent overloads” shows how to create the overloads. I always track down the spec as the attachment to this post on Vance Morrison’s blog.
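To make the pattern concrete, here is a sketch of a custom fast overload. The class name, event name, and the int/long parameter combination are illustrative assumptions, not code from the spec or this post, and the file must be compiled with unsafe code enabled:

```csharp
using System;
using System.Diagnostics.Tracing;

public sealed class DemoEventSource : EventSource
{
    public static readonly DemoEventSource Log = new DemoEventSource();

    // A parameter combination (int, long) that would otherwise fall back
    // to the object[] overload of WriteEvent.
    [Event(1)]
    public void WorkItemQueued(int WorkItemId, long QueueTicks)
    {
        if (IsEnabled()) WriteEvent(1, WorkItemId, QueueTicks);
    }

    // Custom fast overload, following the "Creating fast WriteEvent
    // overloads" pattern. [NonEvent] keeps it out of the manifest.
    [NonEvent]
    private unsafe void WriteEvent(int eventId, int arg1, long arg2)
    {
        EventData* data = stackalloc EventData[2];
        data[0].DataPointer = (IntPtr)(&arg1);
        data[0].Size = sizeof(int);
        data[1].DataPointer = (IntPtr)(&arg2);
        data[1].Size = sizeof(long);
        WriteEventCore(eventId, 2, data);
    }
}
```

With the [NonEvent] overload in place, the call inside WorkItemQueued binds to it by exact match, so the object array fallback never runs.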


Muhammad provided a very good, extremely readable, accessible discussion of potential EventSource/ETW performance issues.

· None of these performance tips will be observable in your application if you use a listener other than the blazingly fast ETW.

· Based on the metrics I ran a while back, I believe the only one of these performance tips worth worrying about is avoiding the object overload, and it isn’t as bad as anticipated in the spec.

· There are other considerations, at least consistency, that are important for the other three items.

As you dig deeper into ETW, side issues like this come up. The isolation of all EventSource aspects into the single class – inherent in its design – means these complexities always affect only a single location in your application if you encounter them.

EventSource: A New Way to Trace

Module 3 Highlights

This is the next installment in my series of high points in .NET 4.5, which parallels my Pluralsight course “What’s New in .NET 4.5”. I think the course offers important insight and details into the release, and if you’ve already watched it, this will give a second explanation of the most important features. If you can’t watch it, you can still follow along with the most important features I cover.

The first two installments of the course highlights are here and here.

What’s New?

Tracing becomes more and more critical as our applications move onto remote servers and into the cloud. Even with other tools available, desktop applications still need tracing to determine what the user was doing when a problem appears. Debug mode is rarely an option in production.

Microsoft’s answer is high speed unified tracing within the operating system: Event Tracing for Windows. ETW has been around since Vista, so it’s hardly a news flash. What’s new in .NET 4.5 is that ETW goes from being a royal pain to use to being one of the most simple and elegant features of .NET. This change makes ETW support and the new EventSource class the sleeper feature of .NET 4.5 – the most important feature you never heard of.

So, let me explain, and hopefully inspire you to completely rethink your tracing strategy. Warning: if your fantasies get too carried away, I’ll knock a few over when I cover shortcomings near the end of the post.

While I won’t cover it here, the Semantic Logging Application Block from Patterns and Practices unifies ETW with traditional trace output locations and plugs a couple of holes. So, if I’ve got your attention, you may also want to explore what they’ve done – same underlying tools and syntax, with some important additions.


ETW is a high performance tracing system built into the operating system. It’s a fully composite system with some extreme extensibility options. There are four moving parts – ETW itself, providers, controllers and consumers. Because it’s a completely interchangeable system, not only can you come up with your own providers, controllers and consumers to support ETW, you can replace ETW with something else filling its role – allowing alternate tracing options such as Silverlight support. That’s amazing, but the important parts for .NET 4.5 are providers, consumers, and controllers.

Consumers are important because the OS creates the smallest possible binary output to meet its goal of not hurting performance with tracing. There are a number of task specific consumers, including Visual Studio tooling, such as WCF tools and the Concurrency Visualizer. A good general purpose tool to consider is PerfView. And if you want, or need, to write your own tooling, you can use PerfView’s underlying libraries to access the binary stream. Or, if you need to party on raw trace events, you can get the binary stream transformed into XML output.

Controllers turn ETW on and off. Gone are the days of stopping your server, changing a configuration setting, and restarting the server – only to repeat in order to turn tracing off. Controllers communicate to providers whether they are on or off, and can do this to running applications. Tracing can be turned on before your application starts, as part of startup, or as the application runs. Controllers also communicate filtering information such as severity level to providers.

Providers create trace events. The operating system, .NET and other libraries are already fully on board. Right now you can fire up PerfView and get information on your application, including crashes. The purpose of the new EventSource class is to let you add events to this holistic stream.

Providers create trace events, controllers turn tracing on and off, and consumers do something with the resulting trace. Central to the design is a stream of trace events (actually a few streams, called channels, see more below). It doesn’t matter if the trace event is created by your application, the operating system, the .NET framework, WCF, or anything else.

This creates a fundamentally different approach to tracing because you don’t have much work to do. Almost everything you want to know is already in the trace system.

A New Way of Thinking about Tracing

If you’ve done tracing before, you probably worked to create a reasonably complete picture of what was happening in your application – you took responsibility for providing all of the relevant information. That’s changed. Since other providers are already dumping information into ETW, you can think of the trace stream as a river with all of the information you can imagine flowing past.

Your job is simply to complete the picture. The river has no context without your help. The surface is just a series of waves with no relation to what’s actually happening. Throw in key buoys about what you’re doing, and suddenly the part of the river adjacent to your buoy makes sense.

The markers you throw out can contain information like the current record. But remember the earlier caveat – what you put into the ETW stream with EventSource will be visible to whoever asks for it. You can’t put in sensitive information. Let me say that again – do not put any information into the ETW stream that can’t be seen by people with access to your servers.

Strong Typing and Manifests

One of the important ways ETW differs from some other approaches to tracing is that the information you provide is strongly typed. If you record primary keys and tables, you can find a trace event with a particular primary key from a particular table. You can then watch that thread or process for information on what happens next.

To keep tracing blazingly fast, the operating system creates the trace as a binary stream of goo. Turning goo into strongly typed events requires an event-specific definition, which is called a manifest in ETW.

Prior to .NET 4.5 you could create your own manifest or skip the strongly typed benefits of ETW. It’s icky work to create your own manifest and you have to install it on the machine that interprets the trace. This is really problematic if you version your trace event structure (and trust me, you will) because the machine may need to interpret traces from many other machines.

.NET 4.5 and the EventSource class create the manifest for you and sidestep manifest issues by using an “in-line manifest.” That’s just another event that contains the manifest. As long as the consumer knows how to grab that special manifest event and use the attached manifest (and PerfView does), all your manifest problems are solved.


The ETW goodness in .NET 4.5 is wrapped up in the EventSource class. The second best source of information on ETW (I like my Pluralsight video) is Vance Morrison’s blog posts, such as this.

While Vance’s posts do a great job, and this post is important, that particular post doesn’t mention that you should probably define your event ids and provider name explicitly. Before you add tracing to your own application, be sure to understand those two features.
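Here is a minimal sketch of what pinning both features looks like – the provider name “MyCompany-MyApp” and event id 1 are illustrative assumptions, not values from the post:

```csharp
using System.Diagnostics.Tracing;

// Pin the provider name and event ids explicitly, so renaming the class
// or reordering methods later doesn't silently change your manifest.
[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void RequestStarted(string Url)
    {
        if (IsEnabled()) WriteEvent(1, Url);
    }
}
```

Without the attributes, the provider name defaults to the class name and event ids are inferred, both of which are fragile once traces from multiple versions of your application are in the wild.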

It’s Not All Joy – Bad Tooling

I said I might knock over your fantasies a bit. The biggest problem with ETW is a lack of good tooling. Please help pressure the Visual Studio team to supply an awesome consumer and at least a decent controller. Interpreting traces is hard and we don’t all need to work out the tricks.

PerfView is the best tool I know of now for the things you’ll need to do. But imagine a tool one dev made for his own use that sort of spiraled into general use by those brainiac softies, and then (thank goodness) Microsoft released it for the rest of us. You won’t be too far off the mark. From that you might guess that the important parts are open source, so you can use the underlying library to build your own tool. You’ll find the TraceEvent library on the BCL team’s CodePlex site.

It’s Not Complete – Channels, etc.

The other issue with ETW right now is hard to complain about. Once they got to the edges of what they needed and problems they understood – largely around profiling the .NET framework itself – they slowed down. I don’t think this is laziness, but an honest desire to get it right. I think the most obvious place you’ll see this is the lack of channel support.

ETW supports four well-known channels: Admin, Operational, Analytic and Debug. While you can define your own, I haven’t yet seen a good use case. EventSource tracing aims at the last – events by devs for devs. Your IT folks (and probably you as well) use the Windows Event Viewer to gauge system health. At present, trace events you create with the EventSource class do not show up in Event Viewer because they aren’t admin events.

I was initially appalled by this omission, annoyed that it felt like they ran out of time. But I’ve evolved my thinking. Event Viewer and trace events for system health monitoring – warm fuzzies for your IT staff – are inherently different from profiling and debugging information (which overlap). Specifically, you want bells and whistles for filtering, and you want to filter extra events at the point of consumption (assuming they aren’t so extreme that they affect performance).

Your IT folks probably don’t see it that way. They probably want your apps to tell them what they need to know when they need to know it. “Just the facts, ma’am.” And maybe strong typing doesn’t matter very much. Let’s start a conversation about the best ways to get information into the admin channel.

Semantic Logging Application Block

I’m just getting into this, but I’m excited about what I’m hearing about the Semantic Logging Application Block (other than its name, and its even worse acronym, but I’ll survive). You can get info on the CTP here and here.

I’m really happy that P&P is building on the good work of the BCL team in ways that make sense to business programmers. What most catches my eye is that your EventSource-derived classes can be used to write to ETW or to more traditional trace listeners. I hope to have time to explore the Semantic Logging Application Block in the coming weeks and post a more thorough discussion.

Module 2 High Points: Portable Class Libraries

I’m continuing to channel how excited I am about my Pluralsight course “What’s New in .NET 4.5” into a series of blog posts with the most important thing(s) in each module. Sure, I won’t mind if these posts inspire you to watch my course, but I’m also doing this so folks that can’t watch my course can still get the most important information.

Module 2: Changes to the Core Framework

There is a whole lot of information in this module, and I might come back to the most important thing in each clip. But for right now, I’m going to maintain discipline and focus on the single most important thing, which has two parts.

That would be the changes in how we deliver applications – how libraries are arranged and how MSIL becomes native code on the target machine. I’ll focus on how libraries are arranged, with just a smidgen on native code creation because it deserves a dedicated post.

Portable Class Libraries

You might have heard about Portable Class Libraries. It’s a new type of class library you can create that will run on the platforms you choose. Right, that’s platformS, plural. In the box you’ve got the .NET main framework, Silverlight, Xbox, Windows Store and Phone. Online, you’ve got more, including Azure and updates to other platforms.

In these libraries – these DLLs – you can only do things that work across all the selected platforms – the limited intersection of behavior. You’re not going to write your entire application this way, but you can create key libraries with business rules, calculations, maybe service access, maybe View Models – you get the idea. This, of course, is directly useful. It also causes both you and Microsoft to rethink the concept of a DLL.

Your Libraries

Using Portable Class Libraries will instill new discipline in your designs, and I’d encourage you to begin moving everything possible into Portable Class Libraries for this reason alone.

Isolated designs have been on our whiteboards for years – like View Models without any UI artifacts. But enforcement was by .NET libraries – and they were designed for many reasons other than correctly dissecting your code. No one who created reflection had a clue that it would look fundamentally different in a Windows Store application.

Visual Studio provides the appropriate IntelliSense and the compiler structures the correct libraries. We coders work organically. There are things we learn to do reflexively, and you want the classes and methods you reach for to be the right ones – generally that means the techniques with the broadest reach. As you learn what works within your Portable Class Libraries, you’ll learn what works across multiple platforms.

I’ll warn you that the jury isn’t entirely in on some critical details, such as using async across some of the different platforms, but many core features work flawlessly today, and the others are sorting themselves out.

Microsoft Libraries

Prior to .NET 4.5, the .NET libraries were fairly large units and completely different versions were created for alternate platforms. Just ask a phone developer how painful this pattern was as new sets of redesigned libraries reflected changes in how .NET was interpreted for the platform.

Starting with apps targeting .NET 4.5, there’s a new approach to core assemblies. This reflects platform differences, such as how local storage works, and isolates commonalities. Sometimes, such as the new Zip libraries in .NET 4.5, libraries can be designed with portability in mind.

It doesn’t make sense to restructure legacy assemblies to meet today’s vision, because today’s arrangement will be tomorrow’s legacy. Instead a huge effort behind the scenes created smaller leaner libraries that can be pulled together, and a redirect system that means you no longer care where any piece of functionality resides.

You can see this in Object Browser. Turn on View Containers (from the settings gear) to show the assemblies, and select “Portable Class Subset” (not legacy) from the dropdown. Now open mscorlib. Huh!!! It’s empty. That’s a bit of a surprise.

Now open System.Runtime – which is one of the many libraries that didn’t previously exist. It’s got all that mscorlib stuff. What you need at runtime will be there – but in an assembly location that’s platform dependent.


Nowhere is the importance of PCL more evident than with the newly designed reflection. It needed an overhaul. It sits at a critical junction of many other things and often causes performance problems, or design compromises made to avoid performance problems. For example, if you ask for the members of a type, the assemblies containing all its base types must be loaded. Also, GetMembers and its siblings have horrendous enum-dependent overloads to help you filter the results.

Microsoft redesigned reflection for Windows Store applications. I’ll skip the details, but the GetMembers method is replaced with a DeclaredMembers property that returns an IEnumerable you can party on with LINQ. It doesn’t load any base class assemblies – if you want that, walk the tree so you understand what’s happening. The team also split the type’s name from all other functionality. The name is often enough, and this avoids loading even the class’s assembly.
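A quick sketch of the new surface, assuming a .NET 4.5 project – typeof(Uri) is just an example type, and GetTypeInfo is the bridge from a Type to the new TypeInfo view:

```csharp
using System;
using System.Linq;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        // DeclaredMethods covers this type only - no base types are
        // loaded - and returns a plain IEnumerable, so filtering is
        // ordinary LINQ instead of BindingFlags gymnastics.
        TypeInfo info = typeof(Uri).GetTypeInfo();
        var publicMethodNames = info.DeclaredMethods
            .Where(m => m.IsPublic)
            .Select(m => m.Name)
            .Distinct()
            .ToList();

        Console.WriteLine(publicMethodNames.Count);
    }
}
```

If you do want inherited members, you walk BaseType yourself, which makes the assembly-loading cost explicit rather than hidden.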

This quick sketch of the massive change lets me illustrate why you want to move reflection based code into PCL projects.

.NET main framework projects need to support both the old and new reflection approaches. Instead of ditching the old stuff and creating a new namespace, they extended existing reflection to include the new features. I’ll leave it to the video to explain the inheritance hierarchy sleight of hand that lets this happen. But the resulting classes have all the features of the old and new reflection approaches.

While I agree with this decision to allow smooth transition of existing reflection code with no breaking changes, I think it makes it nearly impossible to learn and use the new reflection approach – unless you create a PCL targeting main framework and Windows Store apps and do as much of your reflection work as possible in this assembly.

You’re already doing all your reflection in utility methods – right? No? Well, then this is also a moment for reflection, er, um, no, um, meditation, contemplation, deliberation, musing, rumination on the nature of an evolving framework and how to structure code to survive this change. That’s more posts, and I’ll get to them, but for now, reflection is your poster child of the constructive change you’ll need to roll with.

JIT and NGen

It’s a separate blog post series, but I just can’t talk about this module without talking about JIT and NGen. Your actual delivery is changing. There are now six, or is it seven, different ways IL can turn into native code on your target machine. It’s a pretty big set of changes, and I promise a blog post that cuts through some of the detail I hit in the video to more simply explain how to choose your approach. But don’t worry: for most people, most of the time, plain old JIT remains the best approach (maybe with a few chocolate sprinkles).

Core Library Changes

Does my focus on the library creation and deployment mean that there weren’t any meaningful changes to the libraries themselves – new classes and methods? No, it doesn’t mean that at all.

There were a lot of changes to the core frameworks, and I hope to come back and give the most important couple of items from each clip in this module – changes to reflection, big arrays, console improvements, and, most exciting of all, real Zip support. These are great improvements, but none of them stand out as changing your development the way PCL does.

Module 1 High Points – Side by Side Implications

I’m really excited about my new “What’s New in .NET 4.5” course on Pluralsight.

There’s more information in the course than I could stuff into my blog in a year, but I want to cover a few high points here. I wrote the course as “What I want every coder and manager to know about this release,” but I know you won’t all get a chance to watch it. So, I’ll cover what I can here, starting with a series of short posts with the most important point, or maybe two, in each of the eight modules.

Module 1 – Introduction

Yes, the introduction is more than an agenda slide 🙂

As an in-place release, there are two versions of the .NET 4 libraries – no, wait, three versions – and probably more in the future. That’s right, I said the .NET 4 libraries – it doesn’t matter whether you’re targeting .NET 4.0 or 4.5. I know that sounds a little crazy, but I’m afraid we’re going to have to get used to the craziness.

You probably don’t control the version of the .NET framework that’s running in production. Another application installation, or the user, or an operating system upgrade (Windows 8 or Server 2012), or at some future point Windows Update could upgrade a target computer to a new version of .NET 4 without you touching your deployed application in any way.

That should scare you, but it’s part of the brave new world. Microsoft is tooling itself for fast incremental releases, led by the fantastic advancements in the Azure space. There’s a good side of this and a bad side of this, and in the end I’m not sure it matters very much what I think.

So, you’ve got to learn about this release, whether or not you intend to target .NET 4.5 in the foreseeable future. And we need to ferret out ways for ongoing validation of our working systems – validated algorithms, in-place production testing, and widespread use of tracing (oh wait! that’s module 3 on ETW support).

I said there were three versions of the .NET 4.0 framework in the wild – there are probably more with hot fixes, but these are the three you need to know about.

– .NET 4.0 released with VS 2010 (4.0.30319.1)

– .NET 4.5 released with VS 2012, regardless of target platform (4.0.30319.17929)

o Please don’t run this pre-January 2013 updates version if you can help it

o You may not be able to ensure your users install this recommended upgrade

o There’s a corresponding security update that is separate and required

– .NET 4.5 reliability (and security) update released January 8, 2013 (4.0.30319.18033)

If you’re not thoughtful, you may be testing against a different version of the framework than the one your users are running.

On a particular machine, you can determine whether the updates are installed through Windows Update history, the registry, or the version number on something like mscorlib.dll in the GAC (oh wait! that’s module 8 on breaking changes). You can also test for the existence of the type System.ReflectionContext if you want to know whether you’re working with a version above the VS 2010 libraries (4.0.30319.1). Test for a type, not a member (reflection considerations are in module 2 on the core framework changes) and not the specific version number, which might be fragile going forward.
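A sketch of that type-existence check – note the post names System.ReflectionContext, and my understanding is that the full name is System.Reflection.ReflectionContext in mscorlib:

```csharp
using System;

class FrameworkVersionCheck
{
    static void Main()
    {
        // ReflectionContext shipped with the in-place .NET 4.5 upgrade,
        // so its presence distinguishes the 4.5 libraries from the
        // original VS 2010 (4.0.30319.1) libraries. Passing false means
        // GetType returns null instead of throwing when the type is absent.
        bool has45Libraries =
            Type.GetType("System.Reflection.ReflectionContext", false) != null;

        Console.WriteLine(has45Libraries
            ? ".NET 4.5 (or later) libraries detected"
            : "Original .NET 4.0 libraries");
    }
}
```

Testing for the type, rather than a specific version number, keeps the check working as future in-place updates bump the build number.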

There are at least five ways the new version of .NET will affect your code:

– When new libraries are installed in production (what I’m talking about in this post)

– When you recompile with the VS 2012 compilers (regardless of your target version)

– When you target .NET 4.5 for its awesome new features

– When you rerun custom tools

– When a computer you exchange data with upgrades libraries

The five point survival guide is test, test, test, test, test. And that’s regardless of when you plan to explicitly target .NET 4.5.

Overall .NET 4.5 is awesome, and the compatibility of the new libraries is good. But if you’re one of the unlucky ones who encounters a problem, the probabilities don’t matter at all.

Whether it’s through my course or other venues – if you’re responsible for production code, you really need to learn about .NET 4.5 and start considering how you’re going to accommodate the new fast moving cycles.