Perhaps this post should have been out before my previous one on the topic. Anyway, continuing the journey on the Managed Extensibility Framework (MEF) in .NET, let us see how we can attach metadata to an exported type and how it can be retrieved on the importing side, all in short steps (I am not going to spend much time digging deep into each of the available options; the objective here is simply to highlight them). I cannot stress enough the importance of metadata in MEF, though I will not discuss here the various scenarios where it proves useful. Nevertheless, there are four ways by which you can associate metadata with an export:

  1. Using an interface definition with properties representing the metadata
  2. Using a raw IDictionary<string, object>
  3. Using a helper class that receives an IDictionary<string, object> and provides a convenient wrapper
  4. Using a custom class for strongly-typed metadata

The first three approaches revolve around the IDictionary<string, object> type and, as such, metadata is limited to key-value pairs. Let us see how the first one works: you define an interface with read-only properties that represent the metadata. Once that is done, you can straightaway go and use ExportMetadata on the export parts:
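A minimal sketch of how this might look; the ProviderBase contract, the IMetaView interface, and the metadata values are illustrative:

```csharp
using System;
using System.ComponentModel.Composition;

// Metadata view: read-only properties whose names match the ExportMetadata keys
public interface IMetaView
{
    string Name { get; }
    int Version { get; }
}

public abstract class ProviderBase { }

[Export(typeof(ProviderBase))]
[ExportMetadata("Name", "Sql")]
[ExportMetadata("Version", 2)]
public class SqlDataProvider : ProviderBase { }

public class ProviderConsumer
{
    // MEF generates the IMetaView implementation behind the Metadata property
    [ImportMany]
    public Lazy<ProviderBase, IMetaView>[] Providers { get; set; }
}
```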

It is very important that the interface properties are read-only and their names match the string keys specified in the ExportMetadata attributes. Also, you do not have to create a class that implements the metadata view interface; MEF automatically generates one, which is accessible via the Metadata property on the Lazy<ProviderBase, IMetaView> type.

The second does not require you to define an interface to wrap the metadata properties; rather, MEF exposes the raw metadata dictionary directly on the Lazy<ProviderBase, IDictionary<string, object>> type. It is the export and import developers’ responsibility to use pre-agreed keys (as an explicit contract) and value types for the metadata dictionary. A simple misspelling of a key or an incorrect value type, for example, might result in erroneous metadata processing.
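A rough sketch of the raw-dictionary approach; ProviderBase and the "Name" key are illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;

public abstract class ProviderBase { }

public class ProviderConsumer
{
    // No metadata view interface: the raw dictionary is the metadata type argument
    [ImportMany]
    public Lazy<ProviderBase, IDictionary<string, object>>[] Providers { get; set; }

    public void DumpMetadata()
    {
        foreach (var provider in Providers)
        {
            // "Name" must be a pre-agreed key between exporter and importer
            Console.WriteLine(provider.Metadata["Name"]);
        }
    }
}
```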

The next option for providing metadata is sort of a blended version of the first and second ones. You need to create a concrete class with properties representing the metadata, but that class should have a public constructor that accepts IDictionary<string, object> as the only parameter. It is up to you how that class should interpret and expose the dictionary of export metadata. It is also not necessary that the class’s property names match the keys in the dictionary received by the constructor. The key-value pairs are entirely at your disposal for how you want to make use of them.

Here is the metadata view class. Just to demonstrate that the class can interpret and expose the provided metadata values with its own logic, I am simply exposing the Name and Version metadata values under different names to the importing type.
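A sketch of such a wrapper class (the names are mine; the constructor signature is what MEF requires):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;

public abstract class ProviderBase { }

// MEF instantiates this class, passing the export's metadata dictionary
public class MetaViewWrapper
{
    public MetaViewWrapper(IDictionary<string, object> metadata)
    {
        // Expose the Name/Version values under different property names
        ProviderName = (string)metadata["Name"];
        ProviderVersion = (int)metadata["Version"];
    }

    public string ProviderName { get; private set; }
    public int ProviderVersion { get; private set; }
}

public class ProviderConsumer
{
    [ImportMany]
    public Lazy<ProviderBase, MetaViewWrapper>[] Providers { get; set; }
}
```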

The final option gives you a strongly typed way to declare, define and consume metadata. As a first step, as in option 1, define an interface with read-only properties that would act as the metadata view. Then, define an attribute (a class derived from System.Attribute) and mark it with the MetadataAttribute attribute. This custom attribute should implement the interface defined in the previous step. These properties can be populated via constructor parameters (shown below) or direct property assignment at the call site (the export types). The final step is to decorate the export types with this custom attribute and supply the metadata values:
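A sketch of the whole flow, with illustrative names:

```csharp
using System;
using System.ComponentModel.Composition;

// Step 1: the metadata view interface
public interface IProviderMetadata
{
    string Name { get; }
    int Version { get; }
}

// Step 2: a custom attribute marked with MetadataAttribute
[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class ProviderMetadataAttribute : Attribute, IProviderMetadata
{
    public ProviderMetadataAttribute(string name, int version)
    {
        Name = name;
        Version = version;
    }

    public string Name { get; private set; }
    public int Version { get; private set; }
}

public abstract class ProviderBase { }

// Step 3: decorate the export with the custom attribute
[Export(typeof(ProviderBase))]
[ProviderMetadata("Sql", 2)]
public class SqlDataProvider : ProviderBase { }
```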

Here below is the custom metadata attribute class. Please note that this class does not have to implement the above metadata view interface as long as it exposes read-only properties matching the names and types of the properties in that interface (duck typing).
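For instance, a variant of the custom attribute that skips the interface altogether – MEF still maps its read-only properties onto the metadata view by name and type (a hedged sketch):

```csharp
using System;
using System.ComponentModel.Composition;

[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class ProviderMetadataAttribute : Attribute   // no metadata view interface here
{
    public ProviderMetadataAttribute(string name, int version)
    {
        Name = name;
        Version = version;
    }

    // Property names and types match the metadata view interface (duck typing)
    public string Name { get; private set; }
    public int Version { get; private set; }
}
```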

Hope you found this post useful.

I have been playing around with Managed Extensibility Framework (MEF) of .NET 4.5 for a while now. Overall, it is a great framework for designing many plug-in/extensibility scenarios of .NET applications, web, desktop or even Windows Store (Metro). One area that got interesting in my MEF experiment was the way in which metadata worked in the case of inherited exports. As you know, in general, there are three ways to attach metadata to export parts (classes decorated with Export or InheritedExport attribute):

  1. Via ExportMetadata attribute
  2. Define a class deriving from ExportAttribute type, define one or more read-only properties representing the metadata and mark that class with MetadataAttribute type
  3. Define a class deriving from Attribute type, define one or more read-only properties representing the metadata and mark that class with MetadataAttribute type

In my experiment, the first two options were a breeze. The third turned out to be a bit challenging to get right; I was not sure if the behavior I noticed was the intended one or a bug. Here is what I did:

  1. Define a class deriving from Attribute type and mark it with MetadataAttribute. This class will have read-only properties each representing the required export metadata. In fact, the metadata properties were wrapped in an interface and the custom attribute class implemented that interface.
  2. Define my base export class marked with InheritedExport attribute and also the custom metadata attribute I created in step 1

Here is the code:
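The original listing did not survive, so here is a reconstruction of steps 1 and 2; the Mask/Symbol metadata names and the formatter class are my stand-ins:

```csharp
using System;
using System.ComponentModel.Composition;

public interface ICustomMetadata
{
    string Mask { get; }
    string Symbol { get; }
}

// Step 1: Attribute-derived class marked with MetadataAttribute
[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class CustomMetadataAttribute : Attribute, ICustomMetadata
{
    public CustomMetadataAttribute(string mask, string symbol)
    {
        Mask = mask;
        Symbol = symbol;
    }

    public string Mask { get; private set; }
    public string Symbol { get; private set; }
}

// Step 2: base export part carrying InheritedExport plus the custom metadata
[InheritedExport]
[CustomMetadata("####.##", "$")]
public class BaseFormatter { }
```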


Then, I went on to apply this custom attribute to my base export part that also has the InheritedExport attribute on it. I defined two more classes deriving from the base export and applied metadata with the custom attribute. At this point, there are three export parts, each with its own metadata tagged via the custom attribute – CustomMetadata. Here is the code:
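Along these lines, assuming the base part and the CustomMetadata attribute from steps 1 and 2 (the derived class names and formats are made up):

```csharp
// Derived export parts, each tagged with its own metadata via the
// CustomMetadata attribute; they inherit the export from BaseFormatter
[CustomMetadata("#,###.00", "€")]
public class EuroFormatter : BaseFormatter { }

[CustomMetadata("#,###.00", "£")]
public class PoundFormatter : BaseFormatter { }
```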


I setup a simple composition container to test out the metadata:
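Roughly like this, composing against an AssemblyCatalog over the current assembly (ICustomMetadata is the metadata view interface from step 1):

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

class MetadataTest
{
    [ImportMany]
    public Lazy<BaseFormatter, ICustomMetadata>[] Formatters { get; set; }

    static void Main()
    {
        var test = new MetadataTest();
        using (var catalog = new AssemblyCatalog(typeof(MetadataTest).Assembly))
        using (var container = new CompositionContainer(catalog))
        {
            container.ComposeParts(test);
        }

        foreach (var formatter in test.Formatters)
        {
            Console.WriteLine("{0}: mask={1}, symbol={2}",
                formatter.Value.GetType().Name,
                formatter.Metadata.Mask,
                formatter.Metadata.Symbol);
        }
    }
}
```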


However, the output of the above took me by surprise and I spent hours trying to fix it, but later I inferred from various blog posts, Stack Overflow responses and the MEF CodePlex site that this behavior is “by design” in MEF! I was expecting the respective mask and symbol metadata values of each export part to be printed; instead, I got the mask and symbol of the base export part printed for all three.


As you can see, the metadata supplied at each inherited export type was completely ignored. Instead, the metadata specified at the base class was carried over to the inherited classes too (contradicting what I had read elsewhere). One solution to this issue is to stay away from InheritedExport and explicitly apply the Export attribute on each export part, specifying the base export type:
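Something along these lines, keeping the same custom metadata attribute on each part:

```csharp
// No InheritedExport anywhere; every part exports the base contract explicitly
[Export(typeof(BaseFormatter))]
[CustomMetadata("####.##", "$")]
public class BaseFormatter { }

[Export(typeof(BaseFormatter))]
[CustomMetadata("#,###.00", "€")]
public class EuroFormatter : BaseFormatter { }

[Export(typeof(BaseFormatter))]
[CustomMetadata("#,###.00", "£")]
public class PoundFormatter : BaseFormatter { }
```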


And the corresponding output is:


The other solution is to have the custom metadata attribute extend ExportAttribute (in addition to the metadata interface) as shown below:
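A hedged sketch of such an attribute; the metadata view interface and the contract type are carried over from the earlier steps:

```csharp
using System;
using System.ComponentModel.Composition;

// Deriving from ExportAttribute makes this attribute both the export
// declaration and the metadata carrier
[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class CustomExportMetadataAttribute : ExportAttribute, ICustomMetadata
{
    public CustomExportMetadataAttribute(string mask, string symbol)
        : base(typeof(BaseFormatter))   // export the base contract
    {
        Mask = mask;
        Symbol = symbol;
    }

    public string Mask { get; private set; }
    public string Symbol { get; private set; }
}
```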


Then apply this attribute on each export part (without explicit Export attribute, since the new custom attribute extends the ExportAttribute type):
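That is, roughly:

```csharp
// The custom attribute alone both exports the part and attaches metadata
[CustomExportMetadata("####.##", "$")]
public class BaseFormatter { }

[CustomExportMetadata("#,###.00", "€")]
public class EuroFormatter : BaseFormatter { }

[CustomExportMetadata("#,###.00", "£")]
public class PoundFormatter : BaseFormatter { }
```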


The output remains the same: each export part correctly gets the metadata supplied via the new custom attribute.

With v4.5, zip archive handling comes natively to .NET. No longer do you have to rely on third-party components (though most of them are free) to create, manage and extract zip files.

The types you require to manage zip archives reside in the System.IO.Compression namespace, in the System.IO.Compression.dll assembly, and those types are ZipArchive and ZipArchiveEntry. The first one represents the zip file as a single entity while the latter represents an individual file in that zip. Note that ZipArchive acts like a pass-through stream, which means you should have a backing store (storage stream), such as a file on the disk.

ZipArchive & ZipArchiveEntry

Creating a zip file and packaging one or more files into it is relatively straightforward. The following code creates a new zip file containing a single content file, CS Summary.ppt.

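Since the original snippet did not survive, here is a sketch; the archive name Summary.zip is my placeholder:

```csharp
using System.IO;
using System.IO.Compression;

// Create the backing file stream, wrap it in a ZipArchive, and copy
// the content file's bytes into a new entry
using (FileStream zipStream = new FileStream("Summary.zip", FileMode.Create))
using (ZipArchive archive = new ZipArchive(zipStream, ZipArchiveMode.Create))
{
    ZipArchiveEntry entry = archive.CreateEntry("CS Summary.ppt");
    using (Stream entryStream = entry.Open())
    using (FileStream source = File.OpenRead("CS Summary.ppt"))
    {
        source.CopyTo(entryStream);
    }
}
```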

You may provide a relative path in CreateEntry if you would like to keep files in a hierarchical way in the archive. The following code iterates a zip file’s contents and gets a few basic details about each file in it:

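Along these lines (Summary.zip is a placeholder name):

```csharp
using System;
using System.IO;
using System.IO.Compression;

using (FileStream zipStream = File.OpenRead("Summary.zip"))
using (ZipArchive archive = new ZipArchive(zipStream, ZipArchiveMode.Read))
{
    foreach (ZipArchiveEntry entry in archive.Entries)
    {
        // Length is the uncompressed size; CompressedLength the size inside the zip
        Console.WriteLine("{0}: {1} -> {2} bytes, modified {3}",
            entry.FullName, entry.Length, entry.CompressedLength, entry.LastWriteTime);
    }
}
```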

Extracting a file from the zip archive is as easy as packaging one into the archive, just in the opposite direction:

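A sketch of extraction by hand; the entry name, archive name and target path are placeholders:

```csharp
using System.IO;
using System.IO.Compression;

using (FileStream zipStream = File.OpenRead("Summary.zip"))
using (ZipArchive archive = new ZipArchive(zipStream, ZipArchiveMode.Read))
{
    // Look the entry up by its full name inside the archive
    ZipArchiveEntry entry = archive.GetEntry("CS Summary.ppt");
    using (Stream entryStream = entry.Open())
    using (FileStream target = File.Create(@"Extracted\CS Summary.ppt"))
    {
        entryStream.CopyTo(target);
    }
}
```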

If you notice the above code listings for reading and writing zip files, we are dealing with multiple stream objects, even to create/read a single archive and write/extract a single entry into/from that archive. To ease the whole thing, .NET 4.5 has a few convenience types with which you can create and read zip files with fewer lines of code. Basically, these types add extension methods to the ZipArchive and ZipArchiveEntry types. Note that you have to add a reference to the System.IO.Compression.FileSystem.dll assembly. The following code creates a new zip with a single file in it:

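With the FileSystem assembly referenced, the same archive can be built in a couple of lines (file names are placeholders):

```csharp
using System.IO.Compression;
// Requires a reference to System.IO.Compression.FileSystem.dll

using (ZipArchive archive = ZipFile.Open("Summary.zip", ZipArchiveMode.Create))
{
    // CreateEntryFromFile is an extension method from ZipFileExtensions
    archive.CreateEntryFromFile("CS Summary.ppt", "CS Summary.ppt");
}
```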

Of course, you can add as many files as you want to the archive by calling CreateEntryFromFile multiple times. As of this writing, this method doesn’t support adding an entire folder to the zip just by specifying the folder name for the first parameter (or by any other means).

Extracting a file from a zip is just as easy. The following code extracts the first available file in the archive (assuming it is not a folder) and saves it to the file system:

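For example (the target folder is a placeholder):

```csharp
using System.IO;
using System.IO.Compression;

using (ZipArchive archive = ZipFile.OpenRead("Summary.zip"))
{
    ZipArchiveEntry first = archive.Entries[0];
    // ExtractToFile is an extension method from ZipFileExtensions
    first.ExtractToFile(Path.Combine(@"C:\Extracted", first.Name), true);
}
```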

MSDN Reference: System.IO.Compression Namespace

Windows Server AppFabric version 1.1 has worthy improvements over its predecessor version. The first from my v1.1 favorites list is the ability to retrieve data from data sources when the client-requested data is not available in the cache ("cache miss") and also save modified cache data back to the data source – a complete round-trip (Microsoft calls this "read-through & write-behind"). My second one is the option to compress the cached data exchanged between the cache client and the cache server thus improving the network performance.


Similar to WCF, AppFabric 1.1 allows multiple cache configuration sections on the client side, letting you choose which one to use in code. If not chosen programmatically, the section named "default" is used automatically when the cache client starts. The latter behavior is useful when testing cache clients in multiple environments like DEV, TESTING, STAGING and PROD since switching requires just a single configuration change. Here is a sample client-side cache configuration section:

Multiple Client Cache Config Sections
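The section layout looks roughly like this; the host names are placeholders, and the exact schema should be checked against the AppFabric 1.1 client documentation:

```xml
<configuration>
  <configSections>
    <section name="dataCacheClients"
             type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core"
             allowLocation="true" allowDefinition="Everywhere" />
  </configSections>

  <dataCacheClients>
    <!-- Used automatically when no section is chosen in code -->
    <dataCacheClient name="default">
      <hosts>
        <host name="ProdCacheHost" cachePort="22233" />
      </hosts>
    </dataCacheClient>

    <!-- A second, named section for the DEV environment -->
    <dataCacheClient name="dev">
      <hosts>
        <host name="DevCacheHost" cachePort="22233" />
      </hosts>
    </dataCacheClient>
  </dataCacheClients>
</configuration>
```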

Load Cache Client Config Using Code
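Choosing a named section in code would look something like this; the DataCacheFactoryConfiguration constructor taking a client configuration name is per my reading of the v1.1 API, so verify it against your SDK:

```csharp
using Microsoft.ApplicationServer.Caching;

// Load the client section named "dev" instead of "default"
DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration("dev");
DataCacheFactory factory = new DataCacheFactory(config);
DataCache cache = factory.GetCache("ProviderEnabledCache");
```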

Without the above code, the cache client will load the default section. If no default cache configuration section is specified and your code doesn’t explicitly load a named section either, a runtime error will occur when you use the data cache factory.

Alright, back to the main topic of this post – the data cache store provider. AppFabric 1.0 let you rely only on the "cache-aside" programming model to handle cache-miss scenarios. That is, your application is responsible for loading data from the data store (and storing it in the cache) if that data is not found in the cache. With v1.1, you can let AppFabric itself "fetch" the data from the data source whenever a cache miss occurs, keep the fetched data in the cache and send it back to the client. Of course, we should fill in the "how" part of the fetching process. Turning 180°, AppFabric v1.1 also makes it possible to save updated cache data (if any) back to the data store. Like the read process, we have to plug in the "how" part of the persistence process.

AppFabric exposes the read-through and write-behind mechanisms using the familiar provider design pattern. Implement the abstract class Microsoft.ApplicationServer.Caching.DataCacheStoreProvider in your assembly and hook it up with the AppFabric cache. Note the following:

  • Data cache store provider implementation assembly and its dependents should be deployed to GAC (strong naming implied)
  • Provider is attached to a cache at the time of cache creation (New-Cache) or later (Set-CacheConfig)
  • Provider class exposes a public constructor with two arguments: a string for the cache name and a Dictionary<string, string> for provider-specific parameters

Let’s see a sample data cache store provider using the Northwind database:

Provider Sample Part 1
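The original listing is gone, so here is a reconstruction of the provider skeleton (constructor plus Delete). The member signatures follow the v1.1 SDK as I recall them – verify them against Microsoft.ApplicationServer.Caching.Core in your install:

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;
using Microsoft.ApplicationServer.Caching;

public class CoreDataCacheProvider : DataCacheStoreProvider
{
    private readonly string cacheName;
    private readonly string connectionString;

    // AppFabric requires this two-argument public constructor
    public CoreDataCacheProvider(string cacheName, Dictionary<string, string> config)
    {
        this.cacheName = cacheName;
        this.connectionString = config["conStr"]; // key supplied via -ProviderSettings
    }

    public override void Delete(DataCacheItemKey key)
    {
        // Delete the row backing key.Key from the Northwind store
    }

    public override void Delete(Collection<DataCacheItemKey> keys)
    {
        foreach (DataCacheItemKey key in keys)
            Delete(key);
    }

    // Read, Write and Dispose members follow
}
```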

AppFabric invokes the constructor on two occasions: (1) when the cache cluster starts, and (2) when you attach the provider to a cache using either New-Cache or Set-CacheConfig. The Delete method is invoked when you remove a cache item from the cache (DataCache.Remove()).

Now the crucial part: the read and write methods. Let’s implement the data store read methods first. Note that there are two overloaded read methods:

Provider Sample Part 2
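A sketch of the two read overloads; LoadFromNorthwind is a hypothetical data-access helper, and the exact overload signatures should again be checked against the v1.1 SDK:

```csharp
public override DataCacheItem Read(DataCacheItemKey key)
{
    object value = LoadFromNorthwind(key.Key); // hypothetical helper
    if (value == null)
        return null; // note: surfaces as an exception on the cache client

    // DataCacheItemFactory builds the item AppFabric places into the cache
    return DataCacheItemFactory.GetCacheItem(key, cacheName, value, null);
}

public override void Read(ReadOnlyCollection<DataCacheItemKey> keys,
    IDictionary<DataCacheItemKey, DataCacheItem> items)
{
    // Keep both overloads consistent: delegate to the simple read
    foreach (DataCacheItemKey key in keys)
    {
        DataCacheItem item = Read(key);
        if (item != null)
            items[key] = item;
    }
}
```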

As said earlier, the read methods are invoked when a cache item with the requested key is not found in the cache (from my experiments so far, AppFabric usually calls the first overload). In order to make both methods behave consistently, the collection overload internally calls the simple read method. A null return value from the simple read throws an exception on the cache client! Let us complete the write methods as well.

Provider Sample Part 3
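And the write overloads, in the same hedged spirit (SaveToNorthwind is a hypothetical helper):

```csharp
public override void Write(DataCacheItem item)
{
    // Persist the updated value back to the Northwind store
    SaveToNorthwind(item.Key.Key, item.Value); // hypothetical helper
}

public override void Write(IDictionary<DataCacheItemKey, DataCacheItem> items)
{
    foreach (DataCacheItem item in items.Values)
        Write(item);
}
```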

Write methods are invoked when cached items are updated by cache clients (for example, by calling DataCache.Put()). Unlike read, the write and delete methods are not immediately called when cache items are updated or removed. Rather, AppFabric calls them at regular (configurable) intervals.

The final piece is the Dispose() method:

Provider Sample Part 4
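Since the sketches above open and close their connections per call, there is nothing to release here (the exact override depends on the SDK's Dispose pattern):

```csharp
protected override void Dispose(bool disposing)
{
    // No unmanaged state held by this provider
    base.Dispose(disposing);
}
```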

Pretty straightforward! :-)

Assuming the provider assembly builds successfully and is given a strong name, you can associate it with a new cache as below (you can also use the Set-CacheConfig cmdlet to enable or disable the provider for a cache):

New-Cache "ProviderEnabledCache" -ReadThroughEnabled true -WriteBehindEnabled true -WriteBehindInterval 60 -ProviderType "MsSqlDataCacheStoreProvider.CoreDataCacheProvider, CoreDataCacheStoreProvider, Version=, Culture=neutral, PublicKeyToken=21b666fac19955ad" -ProviderSettings @{"conStr"="Data Source=(local)\SQLEXPRESS;Initial Catalog=Northwind;Trusted_Connection=True"}

If everything went OK, the new cache should have been created with a data cache store provider enabled and attached to it. The most common reason for a failed provider configuration is that one or more dependent assemblies (other than .NET Framework assemblies) of the provider assembly are missing from the cache host’s GAC. Needless to say, you have to deploy the provider assembly (including its dependents) on all cache hosts.

Points to note:

  • When the GAC is updated with a newer provider assembly, you have to restart the cache cluster for the new bits to take effect.
  • You do not have to implement both read and write methods. For example, if your cache has only read-through option enabled, you may just have empty write methods. Similarly, when you have only write-behind enabled, your read methods can be placeholder implementations.
  • Cache clients are not notified of uncaught exceptions thrown from write methods.
  • Provider class should have a public instance constructor that accepts a string and Dictionary<string, string> as parameters. Otherwise, the cache cluster will not start or will render unpredictable behavior.

There are quite a few gems hidden in C# and the .NET Framework. Here are some:

?? operator

C# 2 introduced the concept of nullable value types. If you assign a nullable value type to a non-nullable variable (via a cast or its Value property) and the former is null, you will get a runtime error (InvalidOperationException). Meet the ?? operator, also known as the null-coalescing operator. The operator simply returns a default value when the nullable value type is null, and the actual value otherwise. You can think of it as a specialized ternary operator for nullable value types.

Null-coalescing Operator
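A few self-contained lines to show the operator in action:

```csharp
using System;

class NullCoalescingDemo
{
    static void Main()
    {
        int? maybeCount = null;

        // Falls back to 0 because maybeCount is null
        int count = maybeCount ?? 0;            // count == 0

        maybeCount = 42;
        count = maybeCount ?? 0;                // count == 42

        // The equivalent "specialized ternary" spelled out:
        count = maybeCount.HasValue ? maybeCount.Value : 0;

        // ?? works with reference types too
        string name = null;
        Console.WriteLine(name ?? "(unknown)"); // prints "(unknown)"
    }
}
```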

Partial Methods

Again, C# 2 introduced the concept of partial classes, primarily to separate designer-generated code from user-written code so that the designer (Visual Studio’s) can regenerate its code without affecting the user’s part. There is one side benefit too: multiple developers can work on the same class split across multiple files. C# 3 introduced partial methods as an evolution of partial classes. In short, a partial method is one that is declared (without a body) whose actual implementation is left to the developer, and that implementation is optional. The former aspect may look very similar to abstract methods, but the latter is how partial methods differ from them.

Say you are implementing a business component that performs complex processing. At key stages of this process you want to give the developer an option to do detailed logging or tracing on how the actual logic is running or monitor the code’s performance. In order to do this, you can interleave logging method calls but you do not care how the actual logging is done, if at all it is done. Here is a sample of how the business component class may look like:

Partial Methods
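A sketch of how the two halves of such a class might look; the names follow the post's MortgageCore example, and the validation rule is made up:

```csharp
// File 1: declares the partial method and calls it at key stages
public partial class MortgageCore
{
    // Declaration only: no body, implicitly private, returns void
    partial void TraceMessage(string message);

    public bool IsMortgateRequestValid(decimal amount)
    {
        TraceMessage("Validation started");
        bool valid = amount > 0m && amount <= 1000000m; // made-up rule
        TraceMessage("Validation finished: " + valid);
        return valid;
    }
}

// File 2: the optional implementation; delete this part and the compiler
// silently strips every TraceMessage call from the first part
public partial class MortgageCore
{
    partial void TraceMessage(string message)
    {
        System.Diagnostics.Trace.WriteLine(message);
    }
}
```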

The MortgageCore class defines a partial method named TraceMessage which is actually implemented in the subsequent partial class definition. The following shows the reverse-engineered definition of the MortgageCore.IsMortgateRequestValid() method from the compiled output:

Without the partial method implementation, the method looks like:

As you can see, the compiler has removed all the invocations of the partial method after the partial method implementation was removed. Partial methods may look like delegates too: unless any handlers are hooked up, calling a delegate doesn’t have any effect, which is similar to partial method behavior. But again, the C# compiler hard-wires delegate calls into the assembly while empty partial methods are optimized away without leaving any trace of their existence. Keep in mind the following:

  • They can only exist inside a partial class
  • They are implicitly private
  • No multiple definitions for partial methods
  • Can accept parameters, including ref parameters, but not out parameters
  • Cannot have a return type; only void

As you might have guessed, most of the restrictions are due to the fact that the compiler removes partial method references if they are not implemented.

Duck Typing

Quite popular in dynamic languages like Ruby, duck typing allows the C# compiler to treat a type with characteristics of another type like that type itself (confusing, I know! :-)). Take a look at the sample below:
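A runnable sketch (the day names are arbitrary):

```csharp
using System;

// Implements neither IEnumerable nor IEnumerator, yet foreach accepts it
// because it exposes GetEnumerator(), MoveNext() and Current by name
public class DuckType
{
    private readonly string[] days = { "Mon", "Tue", "Wed" };
    private int index = -1;

    public DuckType GetEnumerator()
    {
        index = -1; // reset so each foreach starts fresh
        return this;
    }

    public bool MoveNext()
    {
        index++;
        return index < days.Length;
    }

    public object Current
    {
        get { return days[index]; }
    }
}

class DuckTypingDemo
{
    static void Main()
    {
        foreach (object day in new DuckType())
            Console.WriteLine(day); // Mon, Tue, Wed
    }
}
```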

The class DuckType doesn’t implement IEnumerable or IEnumerator but has members with the same names as those of the interfaces. This lets the DuckType class be treated like one that actually implements these two interfaces, and hence it can be used in a foreach loop:

Produces the output:

Duck Typing Example

Obvious, right?


Well, I have a laundry list of items to cover under this topic and so it might become a multi-part series.

Yesterday, I came across a question in one of the .NET newsgroups asking how to get the IP address of the DNS servers available in the network. Though there is no direct way of getting this information (yet, at least to my knowledge) via .NET 2.0 class library, there is at least one indirect way:

For a change, the code snippet is in Visual Basic .NET :-)

Imports System.Net.NetworkInformation
Imports System.Net

Dim nics As NetworkInterface()
Dim dnsIPs As IPAddressCollection

nics = NetworkInterface.GetAllNetworkInterfaces()

For Each nic As NetworkInterface In nics
    If (nic.OperationalStatus = OperationalStatus.Up) Then
        dnsIPs = nic.GetIPProperties().DnsAddresses
        For Each dnsIp As IPAddress In dnsIPs
            ' Print each DNS server address of the active interface
            Console.WriteLine(dnsIp.ToString())
        Next
    End If
Next

Checking for operational status is optional; remove it if you want to loop through inactive network interfaces as well.

It is very common in Windows applications to store application-wide data such as database connection strings, application title, folder paths, etc., that should be available for the duration of the application instance. The general strategy for storing such data is to have public classes with public properties/fields and access them from anywhere in the application. However, with Windows Presentation Foundation (WPF), the framework itself provides an application-wide “storage bag”, Application.Properties, that can be used for the very same purpose. This bag is an app-domain-specific, thread-safe, key-value-based IDictionary instance.

using System.Windows;
... ...
Application.Current.Properties["conStr"] = "my connection string";
... ...
string conStr = (string) Application.Current.Properties["conStr"]; // From anywhere in the application

Since the stored value is of type object, a cast is required when retrieving data from Application.Properties.

Today I saw a question in the ASP.NET newsgroup asking about implementing a static (non-hyperlinked) site map path. As you know, the SiteMapPath control in ASP.NET 2.0 displays a breadcrumb showing the current spot in the site map navigation defined in the web.sitemap XML file. Parent nodes in the current navigation path are shown as links to the respective pages, each separated by the path separation character (defined by the SiteMapPath.PathSeparator property). However, if you would like a static breadcrumb, that is, with parent nodes rendered not as links but as plain text, just hook into the ItemDataBound event of the SiteMapPath control and clear the navigation URL (originally taken from the web.sitemap file) for every node.

Assuming a simple SiteMapPath markup as below:

<asp:SiteMapPath ID="SiteMapPath1" runat="server" CurrentNodeStyle-Font-Bold="true" NodeStyle-Font-Size="Small" OnItemDataBound="SiteMapPath1_ItemDataBound" PathDirection="RootToCurrent" PathSeparator=" > " PathSeparatorStyle-Font-Size="Small" RenderCurrentNodeAsLink="false">

Have the following for the ItemDataBound event:

protected void SiteMapPath1_ItemDataBound (object sender, SiteMapNodeItemEventArgs e)
{
    if (e.Item.ItemType == SiteMapNodeItemType.Parent || e.Item.ItemType == SiteMapNodeItemType.Root)
    {
        // Control index may vary if additional controls have been defined in the node template
        ((HyperLink)e.Item.Controls[0]).NavigateUrl = "";
    }
}

The result would look something like this (assuming an appropriate web.sitemap file is in place):

Without this change, the same SiteMapPath control would look like (parent pages hyperlinked):

There was a question from an internal group asking for a way to programmatically change the log on name (and of course password) of a Windows service – this is the user account, local or domain, under which the service process runs when started. Unfortunately, the .NET framework does not provide any class to accomplish this. You might be tempted to use the System.ServiceProcess.ServiceProcessInstaller class, but it allows configuring log on details (via Username and Password properties) only when installing a new Windows service, not for an already existing one. The alternatives are WMI and P/Invoke. Here is the sample code:

WMI Version:

using System.Management;
// Add reference to System.Management assembly

... ...

// Set the SQL Server service to run under the LocalSystem account
ManagementObject mo
    = new ManagementObject ("Win32_Service.Name='MSSQLSERVER'");

// 7th parameter is the user name and 8th one is the password
// Pass null for a parameter if the corresponding property setting shouldn't be changed
mo.InvokeMethod ("Change",
    new object[]
    { null, null, null, null, null, null,
      @"LocalSystem", null, null, null, null });


You can change other settings such as display name, start mode, desktop interactivity, etc. as well for a service process. Refer to the documentation of Win32_Service.Change() for more details.

P/Invoke Version:

using System.ServiceProcess;
using System.Runtime.InteropServices;

... ...

[DllImport ("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
private static extern bool ChangeServiceConfig (SafeHandle hService,
    UInt32 nServiceType, UInt32 nStartType, UInt32 nErrorControl,
    String lpBinaryPathName, String lpLoadOrderGroup, IntPtr lpdwTagId,
    String lpDependencies, String lpServiceStartName, String lpPassword,
    String lpDisplayName);

// Pass SERVICE_NO_CHANGE for any numeric setting that should stay as-is
private const UInt32 SERVICE_NO_CHANGE = 0xffffffff;

... ...

ServiceController sc = new ServiceController ("MSSQLSERVER");

if (ChangeServiceConfig (sc.ServiceHandle,
    SERVICE_NO_CHANGE, SERVICE_NO_CHANGE, SERVICE_NO_CHANGE,
    null, null, IntPtr.Zero, null,
    @"NT AUTHORITY\Network Service",
    "<user password>",   // pass null to leave the password unchanged
    null))
{
    MessageBox.Show ("Service configuration changed successfully.", this.Text,
        MessageBoxButtons.OK, MessageBoxIcon.Information);
}
else
{
    Console.WriteLine ("Error changing service configuration. Win32 error code: "
        + Marshal.GetLastWin32Error ().ToString ());
}

sc.Dispose ();

You can find the other service configuration settings that can be changed via this Win32 API in the documentation of ChangeServiceConfig.


A couple of comments: (1) Neither option will work if the calling user does not have the necessary rights to change service configuration details. (2) The service should be restarted for the new settings to take effect – what this means is that if the password you supplied for the user name is incorrect, for example, you will not know this until the service is restarted!

One of the features of the ASP.NET application that I am currently working on requires copying dynamically generated files to a network share path. As guessed(!), the user identity of the ASP.NET worker process (IIS application pool, in the case of Windows 2003) did not have the required privileges on the network path, causing the familiar “access denied” exception. The solution is pretty obvious: impersonate the file-copying code with a user account that has the necessary access rights to the share. Here is the pretty straightforward code for everyone’s benefit:

using System.Runtime.InteropServices;
using System.IO;
using System.Security.Principal;
[DllImport ("advapi32.dll", SetLastError = true)]
private static extern bool LogonUser (string lpszUsername, string lpszDomain, string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken);
[DllImport ("kernel32.dll", CharSet = CharSet.Auto)]
private extern static bool CloseHandle (IntPtr handle);
// Logon constants for LogonUser; LOGON32_LOGON_NEW_CREDENTIALS suits
// access to remote resources such as a network share
private const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
private const int LOGON32_PROVIDER_DEFAULT = 0;

IntPtr token;
WindowsIdentity wi;

if (LogonUser ("<user name with network share access>",
    "<domain name>",
    "<user password>",
    LOGON32_LOGON_NEW_CREDENTIALS,
    LOGON32_PROVIDER_DEFAULT,
    out token))
{
    wi = new WindowsIdentity (token);
    WindowsImpersonationContext wic = wi.Impersonate ();

    File.Copy (@"<file source>", @"<network share/UNC path>");

    wic.Undo ();
    CloseHandle (token);
}
else
{
    Console.WriteLine ("LogonUser() failed with error code " + Marshal.GetLastWin32Error ());
}