
MEF and Startup Time

I got this question via email:

I am trying to determine why an application (I didn’t initially design it) that uses MEF and PRISM is slow (example: it takes more than 1 minute to login).  When I looked at the code I noticed the following:

1. Many of the classes that are decorated with the [Export] attribute and have the [ImportingConstructor] attribute on their constructors do significant work in their constructors (large queries that take several seconds to finish, calls to services, etc.).

2. MEF instantiates the above classes when the application starts.

When MEF instantiates these classes at start time, shouldn’t the constructors of these classes (i.e., the ones with the [ImportingConstructor] attribute) be as simple as possible? That is, avoid doing any lengthy operations so as to minimize the time it takes for MEF to finish instantiating all the classes that participate in composition?

Of course you don’t want your application doing all this work at startup and making your user wait! It’s great that you tracked the performance problem to something that is fixable.

There are a couple of parts to the answer.

Simplifying importing constructors

Importing constructors should be as simple as possible. That’s not just for performance. When things go bad in the creation of classes in MEF (or pretty much any DI tool), sorting out the problem can be a nightmare. Put more simply: do what you can to ease debugging object construction, and that means simplifying constructors.

In most cases, the easiest way is to create a property for the result of each query and have the query performed the first time it's needed – a simple check against null, followed by the query only when needed. Depending on the complexity of your application and the number of times this occurs, this is either an easy or a rather tedious refactoring.
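A minimal sketch of that refactoring (the class and query here are hypothetical stand-ins for your real code):

```csharp
using System.Collections.Generic;

public class CustomerViewModel
{
    private List<string> _customerNames;   // backing field, stays null until first use

    // The importing constructor stays trivial: no queries, no service calls.
    public CustomerViewModel() { }

    // The expensive query runs the first time the property is read,
    // not at composition time.
    public List<string> CustomerNames
    {
        get
        {
            if (_customerNames == null)
                _customerNames = RunCustomerQuery();
            return _customerNames;
        }
    }

    private List<string> RunCustomerQuery()
    {
        // Stand-in for the real multi-second database query.
        return new List<string> { "Alice", "Bob" };
    }
}
```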

Delayed instantiation

The second point is that there is no inherent reason for MEF to be instantiating everything at startup. MEF instantiates classes when they are needed and something is requesting an instance of this class. These chains can become rather long and complicated when an instance of one class needs an instance of another, and another, and another. This is a general problem for DI containers.

I don’t know if this is happening inside PRISM or in code that you might have more control over, but this can happen in UI code where one screen or menu requires a reference to another, and another, and another.

It is possible that when you solve the problem with the importing constructors, you will still have a performance problem because of excessive linking – too many instances created. It will still be a design problem, and still not a MEF problem.

If this is deep in PRISM, I don’t know what to say, I don’t know PRISM.

If you encounter this in code you control, look at the Lazy class. Yep, there’s a Lazy class in .NET, I love that. While it’s slightly more difficult to use, it instantiates a simple, lightweight object that knows how to create the actual instance, and delays doing so until the first time you need it. Once the underlying instance is created, all requests for the value of the Lazy return that first instance – no further instances are created.
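A small sketch of the pattern (the ReportScreen/ReportData names are made up, and the instance counter exists only to make the single-creation behavior visible):

```csharp
using System;

public class ReportData
{
    public static int InstanceCount;   // only here to demonstrate single creation
    public ReportData() { InstanceCount++; }
}

public class ReportScreen
{
    // The Lazy<T> wrapper is cheap to construct; the factory delegate
    // does not run until .Value is first read.
    private readonly Lazy<ReportData> _data =
        new Lazy<ReportData>(() => new ReportData());

    // First access triggers creation; every later access returns the same instance.
    public ReportData Data
    {
        get { return _data.Value; }
    }
}
```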

MEF has other features for controlling lifetime, but they are more commonly used when you need additional instances in specific scenarios.

Other possible problems

A minute is a really long time in our world. More than a minute to login is a really, really long time.

It’s possible that you have additional issues. I suggest that you also check for pooling issues if you’re directly accessing a database (such as creating excess connections). Several seconds for a query against a local database is fairly long and you may also have indexing issues.

If the queries really are that long, or you are making calls to services, your users may still feel pain, even if it happens at a different point in the application. You may need to shift to asynchronous calls for some of this work. I know async still feels like a high dive with no water in the pool, but it’s where we’re slowly going.

If this is a WPF application, or another application that loads a lot of assemblies (DLLs) and that has a predictable start-up sequence, explore multi-core JIT. If you have access to Pluralsight, you can find a discussion here in the clip titled “Multi-core background JIT.” It’s almost unknown, but you can get a significant boost in your assembly load time by adding a single line of code. If you don’t have other problems, this can improve overall startup times by about 20%.

It’s almost comical, but that one line of code is the name of a file. No matter how much we want this feature to just work, .NET can’t do something to the target machine without permission. You have to tell it where to put the information it uses to orchestrate JIT. That is, unless you’re using ASP.NET. Since ASP.NET is in the habit of putting files on its server, multi-core background JIT is automatically enabled for ASP.NET.
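A sketch of that opt-in using System.Runtime.ProfileOptimization. The "one line" is the StartProfile call naming the file; desktop apps also need SetProfileRoot once to say where that file may live (the folder choice below is an assumption for the sketch – a real app would use its own data folder):

```csharp
using System.IO;
using System.Runtime;

public static class StartupConfig
{
    // Call this first thing in Main, before most assemblies load.
    public static void EnableBackgroundJit()
    {
        // Pick a writable folder for the runtime's JIT profile data.
        string profileRoot = Path.Combine(Path.GetTempPath(), "MyAppJitProfile");
        Directory.CreateDirectory(profileRoot);
        ProfileOptimization.SetProfileRoot(profileRoot);

        // The "one line": just the name of a file. The runtime records the
        // startup JIT order here and, on later runs, compiles those methods
        // ahead of need on spare cores.
        ProfileOptimization.StartProfile("Startup.profile");
    }
}
```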

Your problem isn’t MEF, but…

I appreciate the question because you aren’t blaming MEF for other design problems. However, in our current server focused world, I want to point out that classic MEF is a little heavy. It is an amazingly powerful tool, but on a server with throughput issues and on a low power device it may not be the best tool.

Microsoft has also provided a different lightweight version of MEF. It’s had at least three names, and is currently on NuGet as MEF 2.

If you read my blog, you may have gathered between the lines that I think the core CLR teams are doing absolutely amazing, astounding things, and that their communication with the community – their ability to toot their own horn – sucks. You didn’t know one line of code could improve your app load time 20%, right? One of the ways that communication sucks is a lack of clarity on the future of MEF. It is inconceivable to me that a tool as important as MEF could fail to have a long and bright future in both the rich and lightweight versions. But I wish the silence around MEF were not so loud.

I hope you’ll add to the comments what you tried that worked and didn’t work!

Plain Old Objects and MEF

After my MEF presentation at the Hampton Roads .NET User Group someone asked me about creating objects like customers and invoices via MEF. I gave an overly quick answer to a really good question.

A lot of the IoC history involves using dependency injection for services. This is great partly because it’s a framework to isolate plain old objects from services, and services from each other. Like many of the techniques we’ve adopted in the agile timeframe, it’s not just what the technique does for us, but what the technique does to us. That’s the quick answer I gave.

But, we can go further to fully composed systems. Fully composed systems have offered mind boggling benefits in some places they’ve been tried, and they haven’t been tried in very many places yet. This is why I have such a high fascination with NetKernel and the work people like Randy Kahle (@RandyKahle) and Brian Sletten (@bsletten) are doing. And that work is similar to work Juval Lowy and I have talked about for a number of years.

However, fully composed systems with MEF, and I assume other DI tools (although I’ll be happy to be proven wrong), are hard. Without the infrastructure of something like NetKernel there’s a fair amount of work to do, and without the caching benefits of NetKernel it’s going to be tough to justify. It’s hard because everything needs an interface. Everything. And even if you generate the interfaces and plain objects, the level of infrastructure ceremony gets very unwieldy. At least that’s my experience from using MEF to wrap everything (yes, everything) in interfaces, in order to create a fully composed MEF 1.0 system.

We could go a slightly different direction. Put everything into the container, but place plain old objects in the container as their own type, rather than via an interface. Plain old objects in this sense are objects that we can’t imagine a scenario where they’ll be reused and they have a unique, and generally multi-valued interface. A customer or invoice POCO would be examples.
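A minimal MEF 1.0 sketch of that idea – the Customer POCO and the InvoiceScreen that consumes it are hypothetical, and both go into the container under their own concrete types, no interface involved:

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// A plain old object exported under its own concrete type.
[Export]
public class Customer
{
    public string Name { get; set; }
}

[Export]
public class InvoiceScreen
{
    // MEF matches this import against the Customer export by type.
    [Import]
    public Customer Customer { get; set; }
}

public static class Demo
{
    public static InvoiceScreen Compose()
    {
        // TypeCatalog for brevity; a real app might use a DirectoryCatalog.
        var catalog = new TypeCatalog(typeof(Customer), typeof(InvoiceScreen));
        var container = new CompositionContainer(catalog);
        return container.GetExportedValue<InvoiceScreen>();
    }
}
```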

Placing these objects into the container offers the immediate benefit of guaranteeing their usage is isolated. We take advantage of what DI does to us, not just for us.

And if we use inference in MEF 2.0 (.NET 4.5), and probably configuration techniques with other IoC containers, we can stick another object in if we have a later reason to do it.

But here’s where strong typing bites us. Any new class that ever replaces that plain old object (the customer or invoice) has to be assignable to that class. That means it has to be that class or a class that derives from it. I’m still hanging on to the strong typing life boat because I still feel that without it I’m in a North Atlantic storm. For big systems, I think that’s still true, and while I put a lot of thought into making big systems into smaller systems, non-typed based DI is still a jump into icy water for me.

With the plain object in the container under its own type, if I get blindsided by a requirement that just doesn’t fit, I can manage: I just have to write a wrapper for the non-derived object, and the wrapper has to derive from the expected type. Ugly, but workable.

What I want to do is experiment with strongly typed systems with generated interfaces. I’ve already done this with traditional generation, and I want a solution that is cleaner than that. I don’t have the syntax worked out, but imagine that we never create the interface for our plain old object, we just tell the system to do it. The container uses the interface, all using objects request the object by its interface, and we humans can ignore it.

Until the day the plain old object needs attention. On that day, we make the interface explicit and do whatever we need to do.

But with the versions of .NET we have today, we can’t build this.

New Hampshire, Vermont, Rocky Mountain Tech Trifecta and New Mexico Sample Code

I’m also uploading my “Step 6” sample. I am not uploading Step1 as that was just exploring the default internet project. As I said in the talk, the Step 6 sample will be a bit tricky to understand without some text. I still hope to write the e-book, but I’m going to try to get some blog posts up in the meantime. You can get the code here. It’s with no warranty of any kind. It’s a sample for goodness sakes.

One of the reasons I delayed is because I spent some time looking at the MVC Scaffolding project last week. I think that still has a ways to go. In particular, it creates too much code for my tastes, isn’t DI centric, and has the fatal flaw of not encouraging regeneration. At the end of the day I decided it’s just not that relevant to this work, although I’ll also try to find time to work on some scaffolding ideas within the NuGet/Scaffolding framework.


New Hampshire, Vermont, Rocky Mountain Tech Trifecta and New Mexico User Group Slides

I’m uploading a single deck that has the basic slide deck I showed in all these locations, except Nashua. Many of the slides overlap, but if you want that slide deck let me know. Note that the extensibility slides are preliminary; I haven’t figured out how to really express the complexity of the flow, this is my attempt to explain the functionality, and validation isn’t in the right place. I hope to blog about this flow soon. You can get the slides here. They are my copyright and if you wish to use any of them, please ask.

At the end of this deck is a very rough draft of some slides on a debugging MEF talk. I haven’t had a chance to present this, these slides are very rough, and I pretty much hack my way through MEF debugging, but these notes might help someone. Another thing I should blog about.


Killer Feature for VNext – Language Embedded DSL

VS 2010 is nearly out the door, so it’s time to start fantasizing about killer features in the next version of .NET and Visual Studio.

The feature I want to see is “embedded DSL.”

Like many killer features, success comes from doing in a great way something you can already do halfway. So, I’ll demonstrate this feature in relation to T4 and a MEF scenario where it’s useful today. But before I get there…

I want to see DSL embedded in a language – maybe VB since it oddly enough has become the platform for experimentation, and since we’ve already started embedding other stuff in the language (XML). DSL implies particular syntax issues, but ultimately it’s metadata for code generation. I want to do code generation that is under my control in a fragmented way within the scope of normal code. Here’s a potential syntactic example:

Public Sub New()
   ' Normal Code
End Sub
{PropertyPattern}
{Property Name="DisplayName" Type=String}
{Property Name="DataName" Type=String}
{End PropertyPattern}

‘ More normal code

In this simplistic DSL, the Property metadata is just a pretty syntax for filling data into an interface. Wait, wait, you can’t stuff data into an interface. Right. You need an implementation which you can discover in real time (MEF anyone?) and that implementation can provide defaults for all the other values! And an include pattern here should let you reuse metadata defined once in your application.

PropertyPattern refers to a generation template – like an extended version of T4 – that can output real code in response to the metadata that’s passed. It’s an extended version because it’s strongly typed to the metadata defined via the metadata/DSL interface. The pattern is also discovered (MEF anyone?), which, well just trust me on this, allows a full governance model (customizing and governance on templates is one of my specialties, but let’s not geek out on that right in the middle of a hot fantasy). In simple terms, the governance model means a fallback mechanism through project (assembly), to group, to organization, to defaults/in the box.

Before I go further, let me say this is not a replacement for application generation. Application generation and code generation aren’t the same thing. Application generation is either a closely linked set of templates where the interrelations are as important as the templates, or application generation is architecture generation which is a new and emerging field.

Back to the fantasy, because it’s an important part of a bigger picture of changing how we write applications…

Is this fantasy just Kathleen on too much MEF?

Actually no MEF required at all this morning… let me show you how to do this today – no compiler changes, no new dependencies, you can do this right now in VS 2008 (if you download the DSL Toolkit so you have T4, which you already did anyway, right?).

Well, OK, just a little MEF to get our morning started… the scenario is a MEF scenario, although there is no MEF in the solution. When you work with MEF and you want to create a MEF friendly interface for later discovery, you create an interface that a part will later fulfill, a separate interface of composition metadata, and a custom export attribute, which allows you to define a part just by implementing the first interface and tossing on the attribute. Whew! This means there is an annoying tedious (but technically elegant and necessary) pattern that I’ve implemented perhaps a billion times. What’s worse, the pattern obscures information about the interface, which of course I should also include in XML Documentation so I’m not in the code in the first place checking out the info – but see, more internal redundancy. And oh, by the way, if you screw up the pattern, the bugs can be very hard to track. Let’s fix it…

[NOTE: The implementation jumped back and forth between VB and C#. In VB, I’d implement this as XML literals which makes a more concise syntax with way less code in the included template, but T4 barfed at the XML literals and I didn’t feel like bothering with it today. Without XML literals, C# has better syntax because of the collection initializers which don’t make it into VB until 2010. There is no relation between the syntax of the DSL and the output syntax. I happen to be outputting VB.]

I created a T4 template. If you’re in VS 2010, this is a normal Text Template, not a preprocessed one. If you’re in 2008, create a text file and give it a .tt extension. Also copy the MefInterfaceDsl.t4 file that I’ve attached into your project. (I tested in VS 2008; in VS 2010, you may need to change the extension to .tx.) Yes, your DSL will be written in T4, and yes, your DSL T4 template will have a different extension than the supporting one. That’s because Visual Studio outputs code for a file with a .tt extension and does not for one with a .t4 extension. You want output only from your DSL T4.

Here’s the template, then more talk:

<#@ template debug="false" hostspecific="false" language="C#v3.5" #>
<#@ assembly name="System.Core" #>
<#@ output extension=".vb" #>
<#@ include file="MefInterfaceDsl.t4" #>
         new Interface() {
            Name = "ISearchModelBase",
            Scope = Scope.Public,
            CompositionInfo = {
               new Property() { Name = "TargetType", PropertyType = "Type" }
            },
            Members = {
               new Property() { Name = "DisplayName", PropertyType = "string" },
               new Property() { Name = "DataName", PropertyType = "string" }
            }
         }

Wow, is that really a DSL? Yes, but I won’t claim it’s a very good one. It’s a hack to allow the concept to work in VS 2008 and VS 2010, in hopes that we can get an elegant syntax in VS 20Next.

The initial four lines are a necessary T4 distraction. I’ve stated that the template language is C# with 3.5 extensions (not necessary in VS 2010). I’ve included System.Core because it makes 3.5 work. I’ve stated that output will be in .vb. The actual syntax for the output is in the include file. Remember this is a working sample; if you wish to use this, you’ll want to enhance MefInterfaceDsl.t4, including rewriting it to output C# if that’s your current flavor preference.

The include file has a class named Interface, Property and a few others. Initializers and collection initializers build the graph for the ISearchModelBase interface which has two properties, and one composition metadata property. The Interface class has an Output method that returns a string with the code output. Visual Studio places this output in a dependent file (select Show All Files to see this in VB). I included it below so you don’t have to run a project just to see the output.

Since the artifact is generated, I don’t have to remember the pattern and I’ll never be bit by forgetting to set AllowMultiple=false. Since the DSL/metadata is in the project beside the artifact, I can find and work with it (this would not be appropriate for application generation DSL/metadata whose artifacts span many projects, solutions, and platforms).

So why is a normal T4 template a DSL? Because a DSL is a way to define generation information (metadata) in a way that is friendly to the human, followed by artifact generation.

I want this extended to be a true embedded part of the language to avoid these limitations and supply these features (and probably a bunch more stuff I haven’t thought of):

  • Can be any part of any normal code file
    • Not limited to entire files (although the T4 output can be partials)
    • The DSL/metadata is a holistic part of the code
  • Artifact patterns (templates) are discoverable (can be anywhere, by anyone)
  • DSL/metadata patterns are discoverable (default values and pattern extensions)
  • Intellisense on the DSL
  • Better syntax (drop the new and other class residuals)
  • No reliance on file location (the T4 support file must be in relation to your project)
  • No ugly opening stuff about T4 unrelated to the task at hand
  • Standard extensible metadata/DSL patterns provided
  • Standard extensible artifact patterns provided


Here’s the output:

Option Strict On
Option Explicit On
Option Infer On

Imports System
Imports System.Collections.Generic
Imports System.Linq
Imports System.ComponentModel.Composition

Public Interface ISearchModelBase
   Property DisplayName As String
   Property DataName As String
End Interface

Public Interface ISearchModelBaseComposition
   ReadOnly Property TargetType As Type
End Interface

<MetadataAttribute()> _
<AttributeUsage(AttributeTargets.Class, AllowMultiple:=False)> _
Public Class SearchModelBaseAttribute
   Inherits ExportAttribute
   Implements ISearchModelBaseComposition

   Public Sub New(ByVal targetType As Type)
      _targetType = targetType
   End Sub

   Private _targetType As Type
   Public ReadOnly Property TargetType As Type Implements ISearchModelBaseComposition.TargetType
      Get
         Return _targetType
      End Get
   End Property

End Class

MEF and Cardinality Composition Failures

You can check here for a quick description of MEF

I’m giving a half dozen MEF talks this summer and I’m frequently asked “what happens if a part isn’t available”. The old answer was “the system crashes, how could it do anything else?” This conversation definitely deflates the upbeat mood of a MEF talk. Recently, MEF has changed, making that answer obsolete.

MEF is a composition container which satisfies imports by tracking down associated exports. The correct number of exports to satisfy each import is called the cardinality, and it can be exactly one, zero or one, or zero to many. Thus an import can fail because there are too few or too many matching exports.

In the MEF previews 1-5 (inclusive) and in the Visual Studio CTPs and beta1, MEF throws an exception when a failure occurs.

MEF Preview 6, released last Monday, introduces “stable composition.” With stable composition, the container can know about, but not expose parts. If a part fails on a cardinality rule, the MEF container remembers the part, but keeps it hidden. If additional composition occurs (such as through the Refresh method) additional attempts to fully compose the part occur. If its cardinality is fulfilled, the part becomes available for additional composition. You can think of this as “if a missing sub part shows up later, the containing part will become available.”

There are both good and bad aspects of this, and it definitely affects how you think of and write your MEF systems. In general, it will make MEF apps more stable. If you design a plug-in model, and the creator of a plug-in fails to properly deploy (or a confused user deletes some but not all of a plug-in via File Explorer), your system will not crash. The containing part, which would probably fail if run, doesn’t appear in composition. This makes your system more robust against errors that are beyond your control. It also allows the late composition strategy, although I’m not yet clear on good scenarios for it.

The down side is that you may have more challenges finding certain types of composition errors because you will not receive an exception – you need to catch the current state of the composition container. And if you don’t consider this behavior when writing your app, you can get officially bad behavior.

In a plug-in design such as the directory composition model, any part in your system can fail on a cardinality (and could previously have crashed) because all parts can be made of other parts with dependencies you don’t know about.

For example, consider creating a main menu as a part, where that menu is made up of menu items and sub menus, and one of the menu items cannot be composed due to a deeply nested cardinality failure. If each main menu item is a part and you import the menu items as a collection using the ImportMany attribute, you’re fine. Your application will simply not display the failed menu item. As deep as the nesting goes, each layer that is a collection is naturally protected, because cardinality failures just remove one part from the collection.
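A minimal sketch of that resilient shape (the menu types are hypothetical):

```csharp
using System.Collections.Generic;
using System.ComponentModel.Composition;

public interface IMenuItem
{
    string Caption { get; }
}

// Each menu item is a part...
[Export(typeof(IMenuItem))]
public class FileMenuItem : IMenuItem
{
    public string Caption { get { return "File"; } }
}

[Export(typeof(IMenuItem))]
public class ToolsMenuItem : IMenuItem
{
    public string Caption { get { return "Tools"; } }
}

[Export]
public class MainMenu
{
    // ...imported as a collection. A menu item rejected by a nested
    // cardinality failure simply drops out of this list instead of
    // taking the whole menu (or the application) down with it.
    [ImportMany]
    public IEnumerable<IMenuItem> Items { get; set; }
}
```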

If instead you create your main menu to explicitly expect a particular menu item (such as a Tools sub menu) and that menu has a required Import for a part and so on down to a point of cardinality failure, then the failure cascades up the chain. This happens because the leaf cardinality failure means that part is not available, causing a cardinality failure at the next level. In this hypothetical case, the failure cascades all the way up to the main menu which does not now exist. The application either runs without a main menu, or the application fails because the main menu is missing.

You can avoid this by considering the intent of each import and providing appropriate protection. Some parts are optional and your application can run just fine without them. These should either be in a collection (ImportMany in recent previews) or not required. Other parts are important, but not important enough to cause their container to fail. These can be managed via asserts, and communication with support or the user. If parts are critical to the application running, then you need to check that they exist after composition and shut down the application as gracefully as possible.
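For the critical-part check after composition, one option is GetExportedValueOrDefault, which returns null instead of throwing when a part is missing or was rejected (the ShellWindow part below is a hypothetical stand-in for whatever your application cannot run without):

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

[Export]
public class ShellWindow { }   // stand-in for a part the app cannot run without

public static class StartupChecks
{
    // Probe for critical parts after composition so the application can
    // shut down gracefully instead of crashing somewhere downstream.
    public static bool VerifyCriticalParts(CompositionContainer container)
    {
        var shell = container.GetExportedValueOrDefault<ShellWindow>();
        if (shell == null)
        {
            // Log, tell the user, and exit as gracefully as possible.
            Console.Error.WriteLine("Critical part ShellWindow is missing; exiting.");
            return false;
        }
        return true;
    }
}
```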

I think this is a good change, but at least until more patterns emerge, you need to consider what would happen if any import is not successfully satisfied. Should the containing part also fail to be composed?

MEF is a very sharp knife. It cuts up the tomatoes and carrots really well and we can have a dandy stew. Or we can wind up in the emergency room. Understanding MEF is important to safe MEF use.

You can find more on the Preview 6 changes in this post by Nicholas Blumhardt who is a member of the MEF team. I’d like to thank Glenn Block for his discussions with me on this change.

MEF Assembly Granularity

I’ve been contemplating how to organize MEF assemblies. I think the processing I did establishing the first cut at organization, and the shake down of that strategy, may be interesting to other people designing MEF systems.

As a quick review, MEF lets you throw parts into a MEF container and sort out how parts work together at runtime. Parts are recognized by a string identifier. I’m almost always using interfaces as the contract and the interface names as the identifiers. Parts reside within assemblies and in the common case assemblies are discovered because they are grouped into anticipated directories.
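A sketch of that directory-based discovery, combining the application's own parts with anything dropped into an extensions folder (the "Extensions" folder name is an assumption for this sketch):

```csharp
using System;
using System.ComponentModel.Composition.Hosting;
using System.IO;

public static class CatalogSetup
{
    public static CompositionContainer BuildContainer()
    {
        // Parts in the application's own assembly...
        var appCatalog = new AssemblyCatalog(typeof(CatalogSetup).Assembly);

        // ...plus any parts found in assemblies in the anticipated directory.
        // Assembly granularity matters here: parts load per assembly.
        string extensionDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Extensions");
        Directory.CreateDirectory(extensionDir);   // DirectoryCatalog requires the folder to exist
        var dirCatalog = new DirectoryCatalog(extensionDir);

        return new CompositionContainer(new AggregateCatalog(appCatalog, dirCatalog));
    }
}
```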

With this approach, only the part implementing the interface and the part that is using the interface need to understand the interface or explicitly reference the interface’s assembly. And since parts are discovered and loaded at an assembly level, the granularity of implementing assemblies also controls the granularity of the load. I care about the assembly granularity of contract/interface assemblies so excess stuff can be avoided and naming conflicts (resolved via namespaces) are minimized. I care about the granularity of implementation assemblies because until I attain a priority system with additional granularity, prioritization/defaults are only as granular as their containing assemblies.

At one extreme, all interfaces reside in one assembly and all implementations reside in another. It doesn’t make sense to put them into the same assembly, as then hard-coded references exist and ensuring isolation is difficult. At the other extreme, every interface and every implementation resides in its own assembly. I think both of these extremes are a terrible solution. That’s because this composable system (and I would think any composable system) has parts with very different roles and lineages/history. In the simplest sense for a generator – metadata providers and templates are fundamentally different and could easily be provided by different teams.

Initially I thought the primary consideration should be the implementation deployment, but Phil Spidey pointed out in the MEF discussions that the interface organization is more important, because once released to the wild it might be hard to fix.

I decided on six contract assemblies:

  • Interfaces referenced by the template harness itself
  • Interfaces sharing database structure
  • Interfaces sharing business object structure
  • Interfaces for a naming service
  • Interfaces for outputting data, including hashing
  • Miscellaneous interfaces that don’t fit elsewhere

I’ve used a few criteria for this design:

Interfaces that are used by the system and therefore can’t easily be changed reside together in CommonContracts. The template harness also references CommonOutputServiceContracts but this is in a separate assembly because it has a distinct purpose, may evolve on a different time frame and you are far more likely to provide alternate implementations for output than for the core interfaces.

The naming service is also a separate assembly because it is a distinct purpose and some people will certainly supply alternate implementations to manage human languages other than US English. Both the output service and naming service are a few distinct interfaces that work together. I also had a few odd ball interfaces and decided to go with a grab bag of miscellaneous interfaces rather than a separate assembly for each interface. Time will tell whether that is a good decision.

I initially put the two metadata interfaces into a single assembly, but I think it’s quite likely that these interfaces will evolve separately and almost certain that they will be implemented independently.

I’d like to note that the first version of the harness, which is almost, almost done (a separate blog post), will be a CTP/alpha level release. I will take feedback on the interfaces and I do expect them to change. A core part of the composable design is that you can spin off your interfaces/implementations, so while these changes will be breaking, you can uptake them at your own pace.