
How are Event Parameters Best Used to Create an Intuitive Custom EventSource Trace

Jason asked a really, really good question on StackOverflow. I’m answering it here, because it’s a wordy answer. The good news is that the answer is also in my about-to-be-released Pluralsight video on ETW (everything is in their hands, just not posted yet, hopefully next Thursday!). I’ve also got some additional blog posts coming, but today, let me just answer Jason’s question.

“it is unclear how some of the properties are best used to create an intuitive custom trace”

Jason goes on to categorize Event attribute parameters as “intuitive” and “non-intuitive”. I’m throwing out that distinction and covering all of them. And the most important advice might be the last, on Message.


Channel

ETW supports four basic channels and the potential for custom channels. EventSource does not support custom channels (if you have a user story, contact me or the team). The default channel, and the only one currently supporting in-line manifests, is the Debug channel.

The Channel parameter exists only in the NuGet version, and only for the purpose of accessing the additional channels, primarily the Admin channel for messages that reach admins through Event Viewer. I was one of the people who fought for this capability, but it serves a very limited set of cases. Almost all events logically belong on the default channel, the Debug channel.

To write to Event Viewer, you need to write to the Admin channel and install a manifest on the target computer. This is documented in the ETW specification, in my video, and I’m sure in a couple of blog posts. Per the Windows ETW guidelines, anything written to the Admin channel is supposed to be actionable.

Use Operational and Analytic channels only if it is part of your app requirements or you are supporting a specific tool.

In almost all cases, ignore the Channel parameter on the Event attribute and allow trace events to go to the Debug channel.
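A minimal sketch of that default case, using the System.Diagnostics.Tracing flavor of EventSource (all names here are illustrative): no Channel parameter appears anywhere, and events flow to the Debug channel.

```csharp
using System.Diagnostics.Tracing;

// No Channel parameter anywhere: events go to the default Debug channel.
[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void AppStarted(string appName) { WriteEvent(1, appName); }
}
```

Callers then write `AppEventSource.Log.AppStarted("Orders");` and never think about channels.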


Level

For the Admin Channel

If you are writing to the Admin channel, it should be actionable. Informational is rarely actionable. Use Warning when you want ops (not you, not a later dev) to be concerned; perhaps response times are nearing the tolerances of the SLA. Use Error to tell them to do something; perhaps someone in the organization is trying to do something they aren’t allowed to do. Tell them only what they need to know: few messages, but relatively verbose and very clear on what’s happening, probably including response suggestions. This is “Danger, danger Will Robinson” time.

For the Debug Channel

This is your time-traveling mind meld with a future developer or a future version of yourself.

I’m lucky enough to have sat down several times with Vance, Dan and Cosmin, and this is one of the issues they had to almost literally beat into my head: the vast majority of the time, your application can, and probably should, run with the default informational level turned on.

If you’re looking at an event that clearly represents a concern you have as a developer – something you want to scare a later developer with because it scares you, like a serious failed assert – use Warning. If someone is holding a trace file with ten thousand entries, what are the three things or the ten things that tell them where the problem is? If they are running at the Warning (not Informational) level, what do they really, truly need to know?

If it’s an error, use the error level.

If it’s a massively frequent, rarely interesting event, use verbose. Massively frequent is thousands of times a second.

In most cases, use the default informational level for the Level parameter of the Event attribute. Depending on team philosophy, ignore it or record it.


Keywords

If you have verbose events, they need to be turned on and off in an intelligent fashion. Groups of verbose events need keywords to allow you to do this.

Warning and Error level events do not need keywords. They should be on, and the reader wants all of them.

The danger of missing an event so vastly outweighs the cost of collecting it that informational events should be turned on without concern for keywords. If keywords aren’t going to be used to filter collection, their only value is filtering the trace output, and there are so many other ways to filter the trace that keywords are not that helpful there.

In most cases, use the Keywords parameter of the Event attribute only for verbose events and use them to group verbose events that are likely to be needed together. Use Keywords to describe the anticipated debugging task where possible. Events can include several Keywords.
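A sketch of that convention (the keyword names are my own invention): keywords are declared as single-bit flags in a nested class named Keywords, and only the verbose, high-frequency events carry them.

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    // Each keyword is a distinct bit so they can be combined in a filter.
    public static class Keywords
    {
        public const EventKeywords Caching    = (EventKeywords)0x1;
        public const EventKeywords DataAccess = (EventKeywords)0x2;
    }

    // Verbose and massively frequent, so it gets a keyword...
    [Event(2, Level = EventLevel.Verbose, Keywords = Keywords.Caching)]
    public void CacheLookup(string key) { WriteEvent(2, key); }

    // ...while a warning carries no keyword: the reader wants all of these.
    [Event(3, Level = EventLevel.Warning)]
    public void ResponseTimeNearingSla(int milliseconds) { WriteEvent(3, milliseconds); }
}
```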


Task

On the roller coaster of life, we just entered one of the scary tunnels – the murky world of ETW trace event naming. As far as ETW is concerned, your event is identified with a numeric ID. Period.

Consumers of your trace events have a manifest – either because it’s in-line (the default for the Debug channel, supported by PerfView and gradually being supported by WPR/WPA) or because it’s installed on the computer where the trace is consumed. The manifest does not contain an event name for consumers to use.

Consumers, by convention, make a name from your Task and Opcode.

EventSource exists to hide the weirdness (and elegance) of ETW. So it takes the name of your method and turns it into a task. Unless you specify a task. Then it uses your task as the task and ignores the name of your method. Got it?

In almost all cases, do not specify a Task parameter for the Event attribute, but consider the name of your method to be the Task name (see Opcode for exception).


Opcode

I wish I could stop there, but Jason points out a key problem. The Start and Stop opcodes can be very important to evaluating traces because they allow calculation of elapsed time. When you supply these opcodes, you want to supply the Task to ensure proper naming.

And please consider the humans. They see the name of the method, and they think it’s the name displayed in the consumer. For goodness’ sake, make it so. If you specify a task and opcode, ensure that the method name is their concatenation. Please.

This is messy. I’m working on some IDE generation shortcuts to simplify EventSource creation and this is a key reason. I think it will help, but it will require the next public release of Roslyn.

Almost never use an Opcode parameter other than Start/Stop.

When using Start/Stop Opcodes, also supply a Task and ensure the name of the method is the Task concatenated with the Opcode for the sake of the humans.
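A sketch of that rule (names illustrative): the Task is declared once, and each method name is the Task concatenated with its Opcode, so the human-visible name and the consumer-built name agree.

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    // Tasks, like Keywords, live in a nested class by convention.
    public static class Tasks
    {
        public const EventTask Request = (EventTask)1;
    }

    // Method name = Task + Opcode, so consumers and humans see the same name.
    [Event(1, Task = Tasks.Request, Opcode = EventOpcode.Start)]
    public void RequestStart(string url) { WriteEvent(1, url); }

    [Event(2, Task = Tasks.Request, Opcode = EventOpcode.Stop)]
    public void RequestStop(string url) { WriteEvent(2, url); }
}
```

Consumers can then pair the Start and Stop events to compute elapsed time for each request.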


Version

The Version parameter of the Event attribute is available for you and consumers to communicate about whether the right version of the manifest is available. Versioning is not ETW’s strength – events rarely changed before we devs got involved, and now we have in-line manifests (to the Debug channel). You can use it, and the particular consumer you’re using might do smart things with it. But even so, you can’t be sure the correct manifest is installed on every machine where installed manifests are used.

Overall, I see some pain down this route.

The broad rule for versioning ETW events is: don’t. That is, do not change them except to add additional data at the end (parameters to your method and WriteEvent call). In particular, never rearrange values in a way that could give them different meanings. If you must remove a value, keep its slot and write a default or marker value indicating it’s missing. If you must otherwise alter the trace output, create a new event. And yes, that advice sucks. New events with “2” at the end suck. As much as possible, do up-front planning (including confidentiality concerns) to avoid later changes to payload structure.

Initially ignore the Version parameter of the Event attribute (use default), but increment as you alter the event payload. But only add payload items at the end unless you can be positive that no installed manifests exist (and I don’t think you can).


Message

Did you notice that so far I said rarely use any of the parameters on the Event attribute? Almost never use them.

The Message parameter, on the other hand, is your friend.

The most important aspect of EventSource is documenting what the event needs from the caller of the code. It’s the declaration of the Event method. Each item passed should be as small as possible, non-confidential, and have a blazingly clear parameter name.

The dev writing against your event sees an available log method declaration like “IncomingDataRequest(string Entity, string PrimaryKey)”. Exactly how long does it take him to get that line of code in place? “IncomingRequest(string msg)” leaves him wondering what the message is, or whether it’s even the correct method. I’ve got some stuff in my upcoming video on using generics to make it even more specific.

Not only does special attention to Event method parameters pay off by speeding the writing of code that will call the Event method (removing all decision making from the point of the call), but (most) consumers see this data as individual columns. They will lay this out in a very pretty fashion. Most consumers allow sorting and filtering by any column. Sweet!

This is what Strongly Typed Events are all about.

Parameters to your method like “msg” do not cut it. Period.

In addition to the clarity issues, strings are comparatively enormous things to be sticking into event payloads. You want to be able to output boatloads of events; you don’t want big event payloads filling your disks. And performance starts sucking pretty quickly if you use String.Format to prepare a message that might never be output.

Sometimes the meaning of a parameter is obvious from the name of the event. Often it is not. The contents of the Message parameter are included in the manifest and allow consumers to display a friendly text string that contains your literals and whatever parts of the event payload seem interesting. It works rather like String.Format(); the “Message” parameter is actually better described as a “format” parameter. Since it’s in the manifest, it should contain all the repeatable parts. Let the strongly typed data contain only what’s unique about that particular call to the trace event.

The Message parameter uses curly braces so you feel warm and fuzzy. That’s nice. But the actual string you type in the parameter is passed to the consumer, with the curly braces replaced with ETW friendly percent signs. Do not expect the richness of String.Format() to be recognized by consumers. At least not today’s consumers.

By splitting the data into strongly typed chunks and providing a separate Message parameter, the person evaluating your trace can both sort by columns and read your message. The event payload contains only data, the manifest allows your nice wordy message. Having your beer and drinking it too.

Not sold yet? If you’re writing to a channel that uses installed manifests, you can also localize the message. This can be important if you are writing to the admin channel for use in EventViewer.

Almost always use Message so consumers can provide a human friendly view of your strongly typed event payload.
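Putting the pieces together with the IncomingDataRequest example from above (the format string is my own invention): the payload stays strongly typed, and the wordy part lives in the manifest.

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    // Only Entity and PrimaryKey travel in the payload, as sortable columns;
    // the format string ships once, in the manifest.
    [Event(1, Message = "Incoming data request for {0} with key {1}")]
    public void IncomingDataRequest(string Entity, string PrimaryKey)
    {
        WriteEvent(1, Entity, PrimaryKey);
    }
}
```

A consumer can render the friendly text while still offering Entity and PrimaryKey as individually sortable, filterable columns.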


There are four basic rules for EventSource usage:

  • Give good Event method names
  • Provide strongly typed payload data – consider confidentiality – and work to get payload contents right the first time (small where possible)
  • Use the Message parameter of the event attribute for a nice human friendly message
  • For every other Event attribute parameter – simplify, simplify, simplify. Use the defaults unless you are trying to do something the defaults don’t allow

A Quiet Conversation about DDD and Data First Design

At the MVP Summit I had the pleasure of sitting down at a party for some one-on-one time with Don Smith. I’m trying to think of my blog as a nice little corner to talk in, rather than a soapbox. I want to share something quietly, not shout it from the rooftops. Eeegads, I don’t want to start another debate on this.

A few months ago, the EF team started a wiki where Ward Bell and I felt quite attacked for suggesting that DDD is not always the best approach. And thus, it is with some trepidation that I touch this topic. But today’s Database Weekly has a column on it and I really feel there’s stuff worth hearing. If you’re here in my nice intimate corner, you can hear it.

The question of whether to start with a database or a domain (business object) model makes no sense. The answer is easy: start with the one most likely to bring you success, and don’t ignore the impedance mismatch problem.

A well structured application has a good domain model and a good (relational) database and a good strategy to cross the impedance mismatch boundary. That boundary exists because neither the domain nor the database should drive the structure of the other.

A database might be a more successful starting point if you have good, stubborn, or available DBAs, or if your DBAs are good analysts. If you’re a small shop: which do you build better, and have you ever tried building it the other way? Database first is also often a good starting point if you have an existing database. Even if the database is bad, it captures the existing business, and it’s my belief we should never close our eyes to a way the business has already expressed itself if we can get a hold of it (it’s not in code). While we should consider available expressions of the business, we should not blindly accept any piece without also exploring its problems.

A domain might be a more successful starting point if you have good, stubborn, or available coders, or if your coders are good analysts. If you’re a small shop: which do you build better, and have you ever tried building it the other way? Domain first (DDD) can also be a good starting point if you have an existing database. If you build a domain model that you constantly validate against the existing database, you can base your thinking on experience without being stuck in that experience. While we should consider available expressions of the business, we should not blindly accept any piece without also exploring its problems.

If it’s an even match, consider DDD. The issues are more subtle and getting them out of the way might be helpful to your project.

The monumental disservice that resulted from the EF wiki (which has thankfully now died a formal death) is that this decision appeared to be a religious one or one that marked you in one camp, or perhaps to some even something about your level of coding. All of that is stupid.

Do DDD or database first based on what makes sense in your specific scenario.

Whichever way you start, attention to the impedance mismatch will minimize negative consequences to the other side of the boundary.

It comes down to the obvious. It’s your team, it’s your project. Make decisions based on your reality, not dogma. Learn from the debates in our industry. Don’t pick sides and follow blindly (even my side.)

So, now we can go back to the rest of the party. If this kicks off another brawl, I suggest slipping out by the side door.

Isolating Metadata

In code generation, metadata is the information about your application – generally about your database and the definitions that express your data as business objects. If you use Entity Framework, your metadata is the edmx file, which is displayed via the designers. If you’re using CodeSmith, the metadata is more subtle. Metadata can also be about the process itself; CodeBreeze in particular has a very rich and extensible set of information about your application.

Since metadata itself is data – information – we can store it many ways. I’ve used XML for years. CodeSmith has used a couple of mechanisms, including XML. Entity Framework uses XML. Metadata can also come directly from a database, although I think this is a remarkably bad idea, and one of my code generation principles is not to do that – you need a level of indirection and isolation surrounding your database.

What I haven’t talked about before is how valuable it is to have another layer of indirection between your metadata storage structure – your XML schema – and your templates. In my XSLT templates I could provide this only through a common schema: you can morph your XML into my schema, so that’s indirection – right?

No, that’s not really indirection. It’s great to be back in .NET classes with real tools for isolation and abstraction. Now I use a set of interfaces for common metadata constructs such as objects, properties and criteria. I can then offer any number of sets of metadata wrappers that implement these interfaces via a factory.




The template programs only against the interfaces. The template couldn’t care less whether I am using Entity Framework, my own metadata tools, or something entirely different. I can write the same template and use it against Entity Framework’s edmx file or any other metadata format. That’s powerful stuff, especially since you already heard that the template will run against C# or VB. That means in my world the only reason to have more than one set of templates against an architecture like CSLA is that they are pushing the boundaries and actually doing different things.
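A sketch of the shape of this isolation (the interface and member names are hypothetical, not from any shipping library):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical interfaces for common metadata constructs.
public interface IPropertyMetadata
{
    string Name { get; }
    string DataTypeName { get; }
}

public interface IObjectMetadata
{
    string Name { get; }
    IEnumerable<IPropertyMetadata> Properties { get; }
}

// The template programs only against the interfaces; it cannot tell
// whether a factory filled them from an edmx file, XML, or anything else.
public static class Template
{
    public static string GenerateProperties(IObjectMetadata obj)
    {
        return string.Join("\n", obj.Properties.Select(p =>
            "public " + p.DataTypeName + " " + p.Name + " { get; set; }"));
    }
}
```

A factory hands the template whichever wrapper set matches the metadata source; the template never changes.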

But if you don’t like this new templating style, you can use classes based on exactly the same interfaces in CodeSmith (at least) and again free your framework and metadata extraction. You’ll still need VB/C# versions there, but your metadata input can use the same interfaces.

The interfaces are implemented by sets of classes that know how to load themselves from a data source. Each set uses a different metadata source – different XML structures or another format.

Isolated metadata removes your templates from caring what the metadata source is – beyond being something that could successfully fill a set of classes that implement the data interfaces. This is a very important step and one we need to work together to get right. What do you think I’ve left out of the current design?

Why You Care About System.AddIn

When I was fighting with AppDomains to support XML Linq code generation in my new Workflow based code generator, Bill McCarthy said “Hey did you look at System.AddIn” and I said “No, silly I’m not writing add-ins.”

Well, a few months later, I’m still trying to make it work, and I have come to think it’s worth the trouble. So: first, what the System.AddIn namespace offers; then why it’s so painful; then what I’m doing to fix your pain.

Simply put, System.AddIn provides abstracted/isolated app domain access. AppDomains are the boundary at which some security stuff happens, and the unit which must be unloaded as a group. You can load individual assemblies into an app domain, but to unload them, you need to unload the entire app domain.

There are a few scenarios where this is important – sandboxing code you’re running as a plug-in to your application being the one the designers had in mind. I want to use it so I can load my code generator and have it recognize changes in the .NET assemblies that are generating code. With my first tool, I never solved this problem, because I didn’t think brute-force code generation prior to XML literals in Visual Basic made very much sense. You had a lot of the problems of XSLT (whitespace) and a nearly complete inability to search your templates (since we cannot search separately in quoted text). XML literal code generation is the best way yet to generate code – as powerful as XSLT and as easy as CodeSmith. Anyway, I can get carried away on that – it’s why I was willing to invest heavily in System.AddIn.

Along the way, I gained great respect for the complex model that supplies isolation/abstraction. If you’ve ever played with plug-ins, you know that the first version of your app and its plug-ins is OK, but keeping things in sync while multiple synergistic applications evolve is nearly impossible. The isolation model means the host only speaks to an adapter, and the add-in only needs to speak to an adapter. The adapter’s functionality and the contract can change in whatever manner is needed. This model, combined with the app domain management, may lead System.AddIn to have an important role in your application if your application needs to provide variants for individual clients.

Hopefully you have a good idea what sorts of things clients are going to want to customize, and you place this into an API you hit via the add-in model. If you got it 75% correct out of the chute, it would be a miracle, so the capacity for change built into the isolation model is what actually makes this work.

Literally, you load code on the fly, with whatever security limitations you want, with the ability to unload at your convenience, and pick the correct code from what’s available in a specific directory location. Cool huh!
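The host side of that is only a few calls. Here’s a hedged sketch (CodeGeneratorHostView is a hypothetical host view, and pipelineRoot must already contain the prescribed AddIns/AddInViews/AddInSideAdapters/Contracts/HostSideAdapters folder layout):

```csharp
using System;
using System.AddIn.Hosting;

// Hypothetical host view of the add-in; the real one would expose
// whatever operations your code generator contract defines.
public abstract class CodeGeneratorHostView
{
    public abstract string Generate(string input);
}

public static class AddInHost
{
    public static void Run(string pipelineRoot)
    {
        // Rebuild the pipeline cache from the directory structure.
        AddInStore.Update(pipelineRoot);

        // Find every add-in reachable through this host view.
        var tokens = AddInStore.FindAddIns(typeof(CodeGeneratorHostView), pipelineRoot);

        foreach (AddInToken token in tokens)
        {
            // Each activation gets its own app domain at the chosen trust
            // level, which is what makes sandboxing and unloading possible.
            CodeGeneratorHostView generator =
                token.Activate<CodeGeneratorHostView>(AddInSecurityLevel.Internet);
            Console.WriteLine(generator.Generate("template"));
        }
    }
}
```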

In WinForms, the WinForms threading model prohibits UI’s in the add-in. I understand this is fixed in WPF, although I haven’t yet written a WPF add-in user interface.

So, now that you have some idea why System.AddIn is worth the trouble, why is it so painful? How could I have possibly spent so long getting it running in a sample (I just output a single quoted string right now)? To provide the isolation there is a minimum of seven projects/assemblies involved. These must be deployed in a very specific directory structure for the AddIn system to find the pieces it needs when it needs them. Then there is the error reporting problem – I’ve blogged about a particularly nasty “The target application domain has been unloaded” error. So, once you hold your mouth just right, and all your code is perfect, it’s cool. But how many of you write perfect code? And what’s this about an easy maintenance model if you have to change SEVEN assemblies to alter the API?

I’m working on an article and tool for my column in Visual Studio Magazine that will take either metadata for the API, or the interface and build the simple pass through model. This gets you started. Later when you have interface changes, the isolation model pays for itself, but at that point you understand what’s happening.

It’s going to be a pretty cool example of the “overwrite until edited” mode that my tool supports. Before, I’ve used this for editable files that were pretty much empty. Now, I want to separate changes due to metadata changes – which could be significant – from those for actual mapping you did in the adapters. With luck, partial methods will lead to a pretty robust set of code you can alter as you need, while still generating the main API stream.

I find it very cool to see so many fragments coming together.


Looking at the List (2 of 6 or 7)

Here’s the next round!


11.  Property dialogs

In addition to designing objects for their actual visual interface, we design certain types of objects for how they will behave in the property dialog – whether visible and what editors they have available.


12.  Designers (Workflow & UI)

In addition to designing objects for their actual runtime behavior, we design them for how they will behave at design time – visual designers, avoiding issues with instantiating base classes (WinForms disallowing abstract/MustInherit base classes), etc.


13.  Design Patterns

Where possible, we design to patterns. Meaning that in addition to the details of our technology, we try to design to the lore of repeatable patterns.


14.  Unit testing

In addition to designing for runtime behavior, we design to test. This is particularly evident with BDD/TDD’s use of MVC patterns because they test well. But it’s also true of other applications. If we are testing them well, we wrote them to allow good testing. Testing also raises scope issues.


15.  Refactoring

The creation of our classes is dynamic in today’s world. The time of CRC cards when we actually thought we should get the properties close to correct first time out are gone. Renaming, switching parameter order, and more complex refactoring are common.


16.  Interfaces (contracts)

This was actually in Booch’s book, and of all the things here perhaps doesn’t belong. However, in my world of the 1990’s we did not think in contracts. Interfaces are contracts in our world and they are arguably more important to get correct because of versioning issues than anything else about our objects.


17.  Multiple assemblies

Assembly boundaries have become a critical point of visibility. Protected scope is more public than internal/Friend scope. Also, we do not have a “protected and internal” scope, only a “protected or internal” scope.


18.  InternalsVisibleTo attribute

Assembly boundary scope visibility can be broken via the InternalsVisibleTo attribute. While not widely used today except in testing, this is an important break to scoping.


19.  Overloads

Overloads mean the same method name can have multiple parameter sets – meaning multiple signatures. This makes it more difficult to define exactly what a specific method does. This is also an area where few programmers understand the details of what happens, and generics alter the impact of the rules.


20.  Perf and virtual table issues

We shouldn’t program for this because the impact is too small. Unfortunately, Microsoft did and we are faced with an inflexible List class and a System.Collections.ObjectModel.Collection with few features. The impact on our code is we have to determine future needs to select the correct class.


Looking at the List (1 of 6 or 7)

I posted a list of ways that development has changed since the days we thought we knew how to design applications. I want to clarify a few things on this. This is about design or approaching architecture – it goes beyond OOD per se. I started out from that perspective because we believed when we were doing OOD that we could get our heads and hands around designing our applications. We can’t anymore, and it’s the changes I’m identifying here that keep us from that holistic approach.


I’m sharing with you a process of getting back to that holistic approach. We first have to understand the problem.


But calling this a problem is itself problematic. The items on this list are the things that make development great. When I asked Bill McCarthy about this list he said “oh you mean your list of the fun stuff about programming”. This really is the stuff to celebrate. But in the meantime, it’s making us a bit insane.


I want to walk through each of the 60 items. To do this, I’m going to split the list into groups of ten so it doesn’t get too overwhelming:


1.  Parallel entities

Instead of creating objects that do their jobs, we create sets of objects that work together to supply required plumbing before we even arrive at the point our business objects can start working. We have many entities and they have identical or nearly identical structures. In addition to a vertical design, we have a horizontal design that is at least as important.


2.  N-Tier

The horizontal structure stretches across many layers, at least potentially. These layers are essential to proper functioning and performance, and they are quite likely to evolve over the lifetime of the application.


3.  Sheer magnitude

Some design aspects, including visual drawings and CRC cards, break down when the number of objects is very high – on the order of hundreds, not dozens.


4.  Application code generation

Generating code means we design certain things at the template, not object level. It also means we can change our design during the application life cycle which significantly changes up front planning.


5.  SOA (Service Oriented Architecture)

A service-oriented architecture means we’re working in terms of tasks, basically designing significant portions of our applications one step higher than business objects. These services interact, not the objects within them.


6.  Semantics and canonical messages

Semantics and canonical messaging become very important in diverse organizations. The concept of an object is tightly coupled to its name in traditional design and must be decoupled to provide a canonical view.


7.  Workflow

Like SOA, Workflow uses objects for task sequencing and works more at a task level of thinking than an object level. However, unlike SOA, these services need to be small grained to allow flexible combinations, instead of large grained for communications.


8.  Rules engines

Rules engines mean that our objects may not know what will happen when they run. They do not even know their own dependencies.


9.  Aspect oriented programming

Aspect oriented programming means we run code in unique ways based on attributes – especially delegates on attributes. .NET has a poor AOP model and this is largely a future looking item.


10.  Impact of libraries

We no longer live and breathe isolated applications, but applications where much of the code is long lived and reused. These libraries have to be planned and maintained for the greater good, not the benefit of individual applications.



New Items for the List

While annotating the existing list, I came up with several new items for the list of things that have changed since the days we thought we knew how to design object-based applications (the last 20 years):

48.  Database demands (normalization/denormalization, primary keys, replication keys)

49.  Serialization

50.  Chatty/non-chatty interfaces

51.  FxCop

52.  Using/Import

53.  Generic constraints

54.  Readonly source code files (an implication of code generation)

55.  Events

56.  Lambda expressions, anonymous methods and closures

57.  Expression trees (lambda expressions)

58.  Attached dependency properties

59.  Data binding

60.  Late binding


Why Your Development is Crazy

Your development is crazy, or at least stressed. More likely it’s downright insane. We are struggling to deliver applications. And we’re really smart, and we work amazingly hard. Seriously – when was the last time you met more than a stray dumb programmer? We all feel dumb, but there are damn good reasons we feel that way.

I’ve written about how hard programming is, but I stumbled across more specifics when I asked a question in a specific way for a speech abstract:

What’s changed about object orientation since we thought we knew how to design objects in the days of Grady Booch’s 1991 design book and Nancy Wilkinson’s book on CRC cards?

I knew things had changed, but what blew me away was how long this list became, and that every time I show it I get a few more items. I added another one this morning, and there is no doubt in my mind that this list will soon exceed 50 items.

Let me clarify how I intend this list. It’s the extra things we have to explicitly think about during design, and/or the things that threaten our applications when we do not think about them. Don’t over-analyze the list right now. Just soak it in. Add to the comments if you think I missed one. Then I’ll come back in later posts to give a sentence or two about why I think each item belongs on the list, since I’ve been the gatekeeper. Finally, I think there are answers emerging from this list about how we need to shift the design of our applications.

First, let’s get the list (in no particular order) on the table:


1.  Parallel entities

2.  N-Tier

3.  Sheer magnitude

4.  Application code generation

5.  SOA (Service Oriented Architecture)

6.  Semantics and canonical messages

7.  Workflow

8.  Rules engines

9.  Aspect oriented programming

10.  Impact of libraries

11.  Property dialogs

12.  Designers (Property & UI)

13.  Design Patterns

14.  Unit testing


16.  Interfaces (contracts)

17.  Multiple assemblies

18.  InternalsVisibleTo attribute


20.  Perf and virtual table issues




24.  Partial classes

25.  Partial methods

26.  Extension methods

27.  Lambda expressions

28.  Anonymous types

29.  Declarative – XAML


31.  Declarative – LINQ


33.  Dynamic languages


35.  Unstructured data

36.  Generative programming

37.  Social networking


39.  Reporting (filtering, authorization)



42.  Attributes during programming

43.  Threading/parallel processing

44.  Data transfer objects

45.  Visual modeling/model-driven design

46.  Design for evolving architectures, maintainability and extensibility

47.  Poorly written/flakey tools (especially designers)


Thanks to the user groups in Mitchell (South Dakota), South Bend (Indiana), and Fort Collins (Colorado) for their support and contributions.