A great video from Microsoft //build 2013 event on SLAB: Creating Structured and Meaningful Logs with Semantic Logging
Check out this blog post as well on the same topic: Semantic Logging Application Block
Perhaps this post should have come out before my previous one on the topic. Anyway, continuing the journey on Managed Extensibility Framework (MEF) in .NET, let us see how we can attach metadata to an exported type and how it can be retrieved on the importing side, all in short steps (I am not going to spend much time digging deep into each of the available options; the objective here is simply to highlight them). I cannot stress enough how important metadata is in MEF, nor cover all the scenarios where it proves useful. Nevertheless, there are four ways by which you can associate metadata with an export:
The first three approaches revolve around the IDictionary<string, object> type and, as such, metadata is limited to key-value pairs. Let us see how the first one works: you define an interface with read-only properties that represent the metadata. Once that is done, you can straightaway go and use ExportMetadata on the export parts:
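Something along these lines (a minimal sketch; ProviderBase is an assumed contract type and the Name/Version keys are the metadata values used throughout this post):

using System.ComponentModel.Composition;

// Assumed base contract for the exported parts.
public abstract class ProviderBase { }

// Metadata view: read-only properties whose names match the ExportMetadata keys.
public interface IMetaView
{
    string Name { get; }
    string Version { get; }
}

// Export part decorated with key-value metadata.
[Export(typeof(ProviderBase))]
[ExportMetadata("Name", "SqlProvider")]
[ExportMetadata("Version", "1.0")]
public class SqlProvider : ProviderBase { }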
It is very important that the interface properties are read-only and their names match the string keys specified in the ExportMetadata attribute. Also, you do not have to create a class that implements the metadata view interface; MEF automatically creates one, which is accessible via the Metadata property on the Lazy<ProviderBase, IMetaView> type.
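On the importing side, a sketch like the one below (assuming the same ProviderBase/IMetaView types as above) can read the metadata without ever instantiating the parts:

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;

public class ProviderConsumer
{
    // MEF generates a proxy implementing IMetaView from the ExportMetadata values.
    [ImportMany]
    public IEnumerable<Lazy<ProviderBase, IMetaView>> Providers { get; set; }

    public void DumpMetadata()
    {
        foreach (var provider in Providers)
        {
            // Reading Metadata does not touch provider.Value, so the part is not created.
            Console.WriteLine("{0} v{1}", provider.Metadata.Name, provider.Metadata.Version);
        }
    }
}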
The second option does not require you to define an interface to wrap the metadata properties; rather, MEF exposes the raw metadata dictionary directly on the Lazy<ProviderBase, IDictionary<string, object>> type. It is the export and import developers' responsibility to use pre-agreed keys (as an explicit contract) and value types for the metadata dictionary. A simple misspelling of a key or an incorrect value type, for example, might result in erroneous metadata processing.
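A rough sketch of this second option, again using the assumed ProviderBase contract and the "Name"/"Version" keys as the informal agreement between both sides:

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;

public class RawMetadataConsumer
{
    // The raw metadata dictionary is the metadata view; no interface is needed.
    [ImportMany]
    public IEnumerable<Lazy<ProviderBase, IDictionary<string, object>>> Providers { get; set; }

    public void DumpMetadata()
    {
        foreach (var provider in Providers)
        {
            // The keys must match the ExportMetadata strings exactly; a typo compiles
            // fine but fails (or misbehaves) only at runtime.
            Console.WriteLine("{0} v{1}", provider.Metadata["Name"], provider.Metadata["Version"]);
        }
    }
}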
The next option for providing metadata is sort of a blend of the first and second ones. You need to create a concrete class with properties representing the metadata, but that class should have a public constructor that accepts IDictionary<string, object> as its only parameter. It is up to you how that class interprets and exposes the dictionary of export metadata. It is also not necessary for the class's property names to match the keys in the dictionary received by the constructor. The key-value pairs are entirely at your disposal to use however you want.
Here is the metadata view class. Just to demonstrate that the class can interpret and expose the provided metadata values with its own logic, I am simply exposing the Name and Version metadata values under different names to the importing type.
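A sketch of such a class (the ProviderName/ProviderVersion property names are deliberately different from the Name/Version keys to make the point):

using System.Collections.Generic;

// Concrete metadata view: MEF hands the raw export metadata to this constructor.
public class ProviderMetadataView
{
    public ProviderMetadataView(IDictionary<string, object> metadata)
    {
        // Interpret and expose the dictionary however you like.
        ProviderName = (string)metadata["Name"];
        ProviderVersion = (string)metadata["Version"];
    }

    public string ProviderName { get; private set; }
    public string ProviderVersion { get; private set; }
}

// Importing side: the concrete class is used as the metadata view type, e.g.
// [ImportMany] public IEnumerable<Lazy<ProviderBase, ProviderMetadataView>> Providers { get; set; }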
The final option gives you a strongly typed way to declare, define and consume metadata. As a first step, as in option 1, define an interface with read-only properties that acts as the metadata view. Then, define an attribute (a class derived from System.Attribute) and mark it with the MetadataAttribute attribute. This custom attribute should implement the interface defined in the previous step. Its properties can be populated via constructor parameters (shown below) or direct property assignment at the call site (the export types). The final step is to decorate the export types with this custom attribute and supply the metadata values.
The custom metadata attribute class is below. Please note that this class does not have to implement the above metadata view interface as long as it exposes read-only properties matching the names and types of the properties in that interface (duck typing).
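A minimal sketch of such an attribute and its usage, reusing the IMetaView interface and ProviderBase contract assumed earlier:

using System;
using System.ComponentModel.Composition;

// MetadataAttribute tells MEF to project this attribute's public properties
// into the export's metadata dictionary.
[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class ProviderMetadataAttribute : Attribute, IMetaView
{
    public ProviderMetadataAttribute(string name, string version)
    {
        Name = name;
        Version = version;
    }

    public string Name { get; private set; }
    public string Version { get; private set; }
}

// Usage on an export part:
[Export(typeof(ProviderBase))]
[ProviderMetadata("XmlProvider", "2.0")]
public class XmlProvider : ProviderBase { }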
Hope you found this post useful.
I have been playing around with the Managed Extensibility Framework (MEF) of .NET 4.5 for a while now. Overall, it is a great framework for designing many plug-in/extensibility scenarios of .NET applications, whether web, desktop or even Windows Store (Metro). One area that got interesting in my MEF experiments was the way metadata worked in the case of inherited exports. As you know, in general, there are three ways to attach metadata to export parts (classes decorated with the Export or InheritedExport attribute):
In my experiment, the first two options were a breeze. The third turned out to be a bit challenging to get right; I was not sure if the behavior I noticed was intended or a bug. Here is what I did:
Here is the code:
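The sketch below approximates it: a plain attribute marked with MetadataAttribute, carrying the Mask and Symbol values referred to later in this post (the IFormatMetadata interface name is my own assumption):

using System;
using System.ComponentModel.Composition;

// Metadata view used by the importing side.
public interface IFormatMetadata
{
    string Mask { get; }
    string Symbol { get; }
}

// Custom metadata attribute (the third option): not an ExportAttribute itself.
[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class CustomMetadataAttribute : Attribute, IFormatMetadata
{
    public CustomMetadataAttribute(string mask, string symbol)
    {
        Mask = mask;
        Symbol = symbol;
    }

    public string Mask { get; private set; }
    public string Symbol { get; private set; }
}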
Then, I went on to apply this custom attribute to my base export part, which also has the InheritedExport attribute on it. I defined two more classes deriving from the base export and applied metadata with the custom attribute. At this point, there are three export parts, each with its own metadata tagged via the custom attribute, CustomMetadata. Here is the code:
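A sketch of those three parts (Formatter and the derived class names are illustrative, not the original ones):

using System.ComponentModel.Composition;

// Base export part; InheritedExport makes every derived class an export of Formatter.
[InheritedExport(typeof(Formatter))]
[CustomMetadata("###-##-####", "#")]
public class Formatter { }

[CustomMetadata("$##,###.00", "$")]
public class CurrencyFormatter : Formatter { }

[CustomMetadata("(###) ###-####", "(")]
public class PhoneFormatter : Formatter { }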
I set up a simple composition container to test the metadata:
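Roughly like this, assuming the Formatter/IFormatMetadata names from the sketches above:

using System;
using System.ComponentModel.Composition.Hosting;

class Program
{
    static void Main()
    {
        var catalog = new AssemblyCatalog(typeof(Formatter).Assembly);
        using (var container = new CompositionContainer(catalog))
        {
            // Pull every Formatter export together with its metadata view.
            foreach (var export in container.GetExports<Formatter, IFormatMetadata>())
            {
                Console.WriteLine("{0} -> Mask: {1}, Symbol: {2}",
                    export.Value.GetType().Name,
                    export.Metadata.Mask,
                    export.Metadata.Symbol);
            }
        }
    }
}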
However, the output of the above took me by surprise and I spent hours trying to fix it, but I later inferred from various blog posts, Stack Overflow responses and the MEF CodePlex site that this behavior is "by design" in MEF! I was expecting the respective mask and symbol metadata values of each export part to be printed; instead I got the mask and symbol of the base export part printed for all three.
As you can see, the metadata supplied on each inherited export type was completely ignored. Rather, the metadata specified on the base class was carried over to the inherited classes too (contradicting what I read at http://mef.codeplex.com/discussions/82448/). One solution to this issue is to stay away from InheritedExport and explicitly apply the Export attribute on each export part, specifying the base export type:
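That is, something like this (same illustrative names as before; no InheritedExport on the base):

using System.ComponentModel.Composition;

[Export(typeof(Formatter))]
[CustomMetadata("###-##-####", "#")]
public class Formatter { }

[Export(typeof(Formatter))]
[CustomMetadata("$##,###.00", "$")]
public class CurrencyFormatter : Formatter { }

[Export(typeof(Formatter))]
[CustomMetadata("(###) ###-####", "(")]
public class PhoneFormatter : Formatter { }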
And the corresponding output is:
The other solution is to have the custom metadata attribute extend ExportAttribute (in addition to implementing the metadata interface) as shown below:
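A sketch of that attribute (again with the illustrative Formatter/IFormatMetadata names); applying it both exports the part and attaches the metadata, so each part keeps its own values:

using System;
using System.ComponentModel.Composition;

[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class CustomMetadataAttribute : ExportAttribute, IFormatMetadata
{
    public CustomMetadataAttribute(string mask, string symbol)
        : base(typeof(Formatter))
    {
        Mask = mask;
        Symbol = symbol;
    }

    public string Mask { get; private set; }
    public string Symbol { get; private set; }
}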
Then apply this attribute on each export part (without an explicit Export attribute, since the new custom attribute extends ExportAttribute):
The output remains the same: each export part correctly gets the metadata supplied via the new custom attribute.
With v4.5, zip archive handling comes natively to .NET. No longer do you have to rely on third-party components to create, manage and extract zip files, though most of them are free.
The types you require to manage zip archives reside in the System.IO.Compression namespace, in the System.IO.Compression.dll assembly, and those types are ZipArchive and ZipArchiveEntry. The first one represents the zip file as a single entity while the latter represents an individual file in that zip. Note that ZipArchive acts like a pass-through stream, which means you should have a backing store (storage stream) such as a file on the disk.
Creating a zip file and packaging one or more files into it is relatively straightforward. The following code creates a new zip file, Summary.zip, containing a single content file, CS Summary.ppt.
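A sketch of that code (file paths are placeholders):

using System.IO;
using System.IO.Compression;

// Create Summary.zip with a single entry for "CS Summary.ppt".
using (FileStream zipStream = new FileStream(@"C:\Temp\Summary.zip", FileMode.Create))
using (ZipArchive archive = new ZipArchive(zipStream, ZipArchiveMode.Create))
{
    ZipArchiveEntry entry = archive.CreateEntry("CS Summary.ppt");
    using (Stream entryStream = entry.Open())
    using (FileStream source = File.OpenRead(@"C:\Temp\CS Summary.ppt"))
    {
        source.CopyTo(entryStream);
    }
}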
You may provide a relative path in CreateEntry if you would like to keep files in a hierarchical structure within the archive. The following code iterates a zip file's contents and gets a few basic details about each file in it:
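For example (paths assumed as before):

using System;
using System.IO;
using System.IO.Compression;

using (FileStream zipStream = new FileStream(@"C:\Temp\Summary.zip", FileMode.Open))
using (ZipArchive archive = new ZipArchive(zipStream, ZipArchiveMode.Read))
{
    foreach (ZipArchiveEntry entry in archive.Entries)
    {
        // FullName includes any relative path inside the archive; Name is just the file name.
        Console.WriteLine("{0} - {1:N0} bytes (compressed: {2:N0})",
            entry.FullName, entry.Length, entry.CompressedLength);
    }
}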
Extracting a file from the zip archive is as easy as packaging one into the archive, just in the opposite direction:
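Along these lines:

using System.IO;
using System.IO.Compression;

using (FileStream zipStream = new FileStream(@"C:\Temp\Summary.zip", FileMode.Open))
using (ZipArchive archive = new ZipArchive(zipStream, ZipArchiveMode.Read))
{
    ZipArchiveEntry entry = archive.GetEntry("CS Summary.ppt");
    if (entry != null)
    {
        using (Stream entryStream = entry.Open())
        using (FileStream target = File.Create(@"C:\Temp\Extracted\CS Summary.ppt"))
        {
            entryStream.CopyTo(target);
        }
    }
}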
If you look at the above code listings for reading and writing zip files, we are dealing with multiple stream objects, even to create/read a single archive and write/extract a single entry into/from that archive. To ease the whole thing, .NET 4.5 has a few convenient types with which you can create and read zip files with fewer lines of code. Basically, these types add extension methods to the ZipArchive and ZipArchiveEntry types. Note that you have to add a reference to the System.IO.Compression.FileSystem.dll assembly. The following code creates a new zip with a single file in it:
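For instance, with the ZipFile helper and the CreateEntryFromFile extension method (both from System.IO.Compression.FileSystem.dll; paths are placeholders):

using System.IO.Compression;

using (ZipArchive archive = ZipFile.Open(@"C:\Temp\Summary.zip", ZipArchiveMode.Create))
{
    // One call replaces the entry/stream plumbing shown earlier.
    archive.CreateEntryFromFile(@"C:\Temp\CS Summary.ppt", "CS Summary.ppt");
}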
Of course, you can add as many files as you want to the archive by calling CreateEntryFromFile multiple times. As of this writing, this method doesn't support adding an entire folder to the zip just by specifying the folder name for the first parameter (or by any other means).
Extracting a file from the zip is as easy as the following code, which extracts the first available file in the archive (assuming it is not a folder) and saves it to the file system:
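For example:

using System.IO;
using System.IO.Compression;
using System.Linq;

using (ZipArchive archive = ZipFile.OpenRead(@"C:\Temp\Summary.zip"))
{
    // Folder entries have an empty Name, so skip them and take the first real file.
    ZipArchiveEntry entry = archive.Entries.FirstOrDefault(e => e.Name.Length > 0);
    if (entry != null)
    {
        entry.ExtractToFile(Path.Combine(@"C:\Temp\Extracted", entry.Name), true);
    }
}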
MSDN Reference: System.IO.Compression Namespace
I never knew Stack Exchange, the base of many world-popular Q&A sites on varying topics, has so many cool open source libraries. Many of them seem to be performance-tuned for large web traffic. Worth taking a look: http://blog.stackoverflow.com/2012/02/stack-exchange-open-source-projects/
Windows Server AppFabric version 1.1 has worthy improvements over its predecessor. The first from my v1.1 favorites list is the ability to retrieve data from data sources when the client-requested data is not available in the cache (a "cache miss") and also save modified cache data back to the data source, a complete round trip (Microsoft calls this "read-through & write-behind"). My second one is the option to compress the cached data exchanged between the cache client and the cache server, thus improving network performance.
Similar to WCF, AppFabric 1.1 allows multiple cache configuration sections on the client side and lets you choose which one to use in code. If one is not chosen programmatically, the section named "default" is used automatically when the cache client starts. The latter is useful when testing cache clients in multiple environments like DEV, TESTING, STAGING and PROD, since switching requires just a single configuration change. Here is a sample client-side cache configuration section:
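The original snippet is not reproduced here, but it would be along these lines (host names, ports and section names are placeholders; verify the section and type names against your AppFabric client assemblies):

<configSections>
  <section name="dataCacheClients"
           type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core"
           allowLocation="true" allowDefinition="Everywhere" />
</configSections>

<dataCacheClients>
  <dataCacheClient name="default">
    <hosts>
      <host name="DevCacheHost01" cachePort="22233" />
    </hosts>
  </dataCacheClient>
  <dataCacheClient name="staging">
    <hosts>
      <host name="StgCacheHost01" cachePort="22233" />
    </hosts>
  </dataCacheClient>
</dataCacheClients>

A named section can then be picked in code, if I recall correctly, via the DataCacheFactoryConfiguration constructor overload that takes the client configuration (section) name.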
If your code does not explicitly load a named section, the cache client will load the default section. If no default cache configuration section is specified and your code doesn't explicitly load a named section either, a runtime error will occur when you use the data cache factory.
Alright, back to the main topic of this post: the data cache store provider. AppFabric 1.0 let you rely only on the "cache-aside" programming model to handle cache-miss scenarios. That is, your application is responsible for loading data from the data store (and storing it in the cache) if that data is not found in the cache. With v1.1, you can let AppFabric itself "fetch" the data from the data source whenever a cache miss occurs, keep the fetched data in the cache and send it back to the client. Of course, we should fill in the "how" part of the fetching process. Turning 180°, AppFabric v1.1 also makes it possible to save updated cache data back to the data store. Like the read process, we have to plug in the "how" part of the persistence process.
AppFabric exposes the read-through and write-behind mechanisms using the familiar provider design pattern. Implement the abstract class Microsoft.ApplicationServer.Caching.DataCacheStoreProvider in your assembly and hook it up with the AppFabric cache. Note that the provider can be attached to a cache either at cache creation time (New-Cache) or later (Set-CacheConfig). Let's see a sample data cache store provider using the Northwind database:
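The original listing is not reproduced here; the skeleton below only outlines the shape of such a provider. The member signatures are approximated from the AppFabric 1.1 documentation and samples, so verify them against DataCacheStoreProvider in Microsoft.ApplicationServer.Caching.Core before building:

using System.Collections.Generic;
using System.Collections.ObjectModel;
using Microsoft.ApplicationServer.Caching;

public class CoreDataCacheProvider : DataCacheStoreProvider
{
    private readonly string cacheName;
    private readonly string connectionString;

    // AppFabric instantiates the provider through this (string, Dictionary<string, string>) constructor.
    public CoreDataCacheProvider(string cacheName, Dictionary<string, string> config)
    {
        this.cacheName = cacheName;
        this.connectionString = config["conStr"]; // key supplied via -ProviderSettings
    }

    public override DataCacheItem Read(DataCacheItemKey key)
    {
        // Read-through: load the value from Northwind and wrap it in a DataCacheItem.
        // Returning null surfaces an exception on the cache client.
        throw new System.NotImplementedException();
    }

    public override void Read(ReadOnlyCollection<DataCacheItemKey> keys,
                              IDictionary<DataCacheItemKey, DataCacheItem> items)
    {
        // Keep both overloads consistent by delegating to the simple read.
        foreach (DataCacheItemKey key in keys)
        {
            items[key] = Read(key);
        }
    }

    public override void Write(DataCacheItem item)
    {
        // Write-behind: persist the updated item back to Northwind.
    }

    public override void Write(IDictionary<DataCacheItemKey, DataCacheItem> items)
    {
        foreach (DataCacheItem item in items.Values)
        {
            Write(item);
        }
    }

    public override void Delete(DataCacheItemKey key)
    {
        // Remove the corresponding row/entity from the data store.
    }

    public override void Delete(Collection<DataCacheItemKey> keys)
    {
        foreach (DataCacheItemKey key in keys)
        {
            Delete(key);
        }
    }

    // Plus the Dispose logic discussed later in the post.
}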
AppFabric invokes the constructor on two occasions: (1) when the cache cluster starts, and (2) when you attach the provider to a cache using either New-Cache or Set-CacheConfig. The Delete method is invoked when you remove a cache item from the cache (DataCache.Remove()).
Now the crucial part: the read and write methods. Let's implement the data store read methods first. Note that there are two overloaded read methods:

As said earlier, the read methods are invoked when a cache item with the requested key is not found in the cache (from my experiments so far, AppFabric usually calls the first overload). In order to make both methods behave consistently, the collection overload internally calls the simple read method. A null return value from the simple read throws an exception on the cache client! Let us complete the write methods as well.
Write methods are invoked when cached items are updated by cache clients (for example, by calling DataCache.Put()). Unlike reads, the write and delete methods are not called immediately when cache items are updated or removed. Rather, AppFabric calls them at (configurable) regular intervals.
The final piece is the Dispose() method:
Pretty straightforward! :-)
Assuming the provider assembly builds successfully and is given a strong name, you can associate it with a new cache as below (you can also use the Set-CacheConfig cmdlet to enable or disable the provider for a cache):
New-Cache "ProviderEnabledCache" -ReadThroughEnabled true -WriteBehindEnabled true -WriteBehindInterval 60 -ProviderType "MsSqlDataCacheStoreProvider.CoreDataCacheProvider, CoreDataCacheStoreProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=21b666fac19955ad" -ProviderSettings @{"conStr"="Data Source=(local)\SQLEXPRESS;Initial Catalog=Northwind;Trusted_Connection=True"}
If everything went OK, the new cache should have been created with the data cache store provider enabled and attached to it. The most common reason for a failed provider configuration is that one or more dependent assemblies (other than .NET Framework assemblies) of the provider assembly are missing from the cache hosts' GAC. Needless to say, you have to deploy the provider assembly (including its dependents) on all cache hosts.
Points to note: the provider class must have a public constructor that accepts string and Dictionary<string, string> as parameters. Otherwise, the cache cluster will not start or will exhibit unpredictable behavior.

While working on a SQL Server change notification library (for cache invalidation) recently, the SqlDependency change notification handler kept receiving a notification with SqlNotificationEventArgs.Info = Options, SqlNotificationEventArgs.Source = Statement and SqlNotificationEventArgs.Type = Subscribe, thus failing the subscription process. Using the lead SqlNotificationEventArgs.Info = Options, further diagnosis revealed that one of the SET options required for SQL Server query notifications was not correct. The offending option was ARITHABORT, which was set to OFF (the connection default) but should be ON. The other SET options, however, were correctly set by default.
The obvious solution is to explicitly turn ARITHABORT on for the connection that will be used for the notification subscription.
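A minimal sketch (the connection string and the monitored query are assumptions; SqlDependency.Start must already have been called for that connection string):

using System;
using System.Data.SqlClient;

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    // Query notifications require ARITHABORT ON; the connection default is OFF.
    using (SqlCommand setOptions = new SqlCommand("SET ARITHABORT ON", connection))
    {
        setOptions.ExecuteNonQuery();
    }

    using (SqlCommand command = new SqlCommand(
        "SELECT CategoryID, CategoryName FROM dbo.Categories", connection))
    {
        SqlDependency dependency = new SqlDependency(command);
        dependency.OnChange += (sender, e) =>
            Console.WriteLine("Change: Type={0}, Source={1}, Info={2}", e.Type, e.Source, e.Info);

        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read()) { /* consume results; the subscription is now registered */ }
        }
    }
}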
Please note that ARITHABORT should be set before running the SQL query to be monitored; otherwise the subscription process will still fail as above.
A new tool that enables you to create a single assembly that can be shared by multiple .NET runtime environments, namely Silverlight, Windows Phone 7 and Xbox gaming. Of course, you can also target the desktop .NET runtime when building class libraries, but since the former runtimes include only a subset of the desktop .NET runtime, you only get the common-denominator assemblies.
This topic has been due for quite a long time from my end. Things kept me busy and I couldn't get the bandwidth to write about it. All right, let's dive into the topic.
WCF Data Services supports two providers out of the box: the Entity Framework provider and the Reflection provider. In my earlier post I discussed the latter: OData feeds using POCO entities. In a nutshell, every feed that your service exposes should be implemented as a property of type IQueryable for data retrieval, and if you want to support insert, update and delete operations, then you should implement the IUpdatable interface as well; both are required for a full read/write OData service. In this post, I will show how you can use the Entity Framework provider of WCF Data Services to publish OData services. Believe me, it is extremely simple compared to the Reflection provider.
Due to its simplicity and light weight, I am going to use the familiar Northwind database for this post. The first step is to create the entity data model (EDM) for your data source, the Northwind database. I will quickly run through the steps to do this (assuming you have a Northwind database running on a SQL Server instance):
Now, you would see an .edmx file created and added to your class library project. You will also see an app.config file added to the project with a connection string entry looking like this (in a single line):
<connectionStrings>
  <add name="NWEntities"
       connectionString="metadata=res://*/Northwind.csdl|res://*/Northwind.ssdl|res://*/Northwind.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=(local);Initial Catalog=Northwind;Integrated Security=True&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>
The designer-generated class for the .edmx file will have the entity and association classes defined for each table you selected during the EDM creation wizard. But the most important generated class is the entity container class derived from ObjectContext, looking like the highlighted one below:
The second step is to create a class that wraps and exposes the entity model as OData feeds (entity sets), plus a WCF service host for the feeds. Let's create a class (name it as you like) and derive it from the DataService<T> generic type. This is the same as what I did in my earlier post demonstrating the Reflection provider.
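A minimal sketch of that class (NWEntities is the container generated in the first step; the wide-open access rule is just for the demo):

using System.Data.Services;
using System.Data.Services.Common;

public class NorthwindDataService : DataService<NWEntities>
{
    // Must be named exactly InitializeService; WCF Data Services finds it via reflection.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Grant full read/write access to every entity set; tighten this for real services.
        config.SetEntitySetAccessRule("*", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}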
NWEntities is the name of my Entity Framework container class created during the first step (creating the EDM for Northwind). The data service initialization method must be exactly as shown above (including the method name). Yes, this method could have been defined as a virtual method on the DataService class and overridden in the derived class with custom access rules, but I do not know why Microsoft chose to have developers define it in the way shown above, with some hard-wiring somewhere (I would be glad if you know why; please leave a comment). It can be a completely empty method, in which case WCF will configure the service class with default values, but if you want to control access to the various entities in the EDM, this is the only place you can do it, and you cannot change it later unless you restart the service and recreate the app domain.
Now the hosting part:
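A console host could look roughly like this (the base address is a placeholder; DataServiceHost lives in System.Data.Services.dll):

using System;
using System.Data.Services;

class Program
{
    static void Main()
    {
        Uri baseAddress = new Uri("http://localhost:8080/NorthwindService");
        using (DataServiceHost host = new DataServiceHost(typeof(NorthwindDataService), new[] { baseAddress }))
        {
            host.Open();
            Console.WriteLine("Northwind OData service listening at {0}", baseAddress);
            Console.WriteLine("Press Enter to stop the host...");
            Console.ReadLine();
        }
    }
}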
Running the host application should throw up the following window:
Now the OData service is ready for consumption. Open up Internet Explorer (or any application that can show the XML response of HTTP requests, such as Fiddler) and navigate to the service address shown in the console window. The HTTP response will be an XML document listing the feeds (entity sets) available at the service URL:
Depending on the tables and other objects selected during the EDM creation process and the permissions set in the InitializeService method, you may see fewer or more feeds. Nevertheless, you can now make all the OData data retrieval requests (HTTP GET); for a sample, refer to my earlier post. But what about entity creation, update and delete operations? We use the HTTP verbs POST, PUT and DELETE for create, update and delete operations respectively. The body of the HTTP request will have the new values for the entity to be created, edited or deleted. For example, to create a new category called Milk Products, you make an HTTP POST request as below (I used Fiddler to create all the HTTP requests shown from here on):
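The request would look roughly like this (service URL, entity type name and property values are placeholders):

POST http://localhost:8080/NorthwindService/Categories HTTP/1.1
Content-Type: application/atom+xml
Host: localhost:8080

<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
       xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <category term="NorthwindModel.Category"
            scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
  <content type="application/xml">
    <m:properties>
      <d:CategoryName>Milk Products</d:CategoryName>
      <d:Description>Milk, cheese and other dairy products</d:Description>
    </m:properties>
  </content>
</entry>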
Note that I am not passing certain Category properties, namely Category ID and Picture. You can omit any nullable columns (Picture) and identity columns (Category ID). The Content-Type header specifies the payload format (in this case it is Atom XML, but it can be JSON too). On successful creation, the service will return an Atom XML document representing the newly created entity with any new identity fields inserted in:
You can add child entities directly to a parent. The following HTTP POST creates a new product for the newly created category ID 10:
Again, the category ID and product ID are not passed because the POST URI itself specifies the category ID implicitly and product ID is an identity column. The response on successful product creation looks like this:
Another scenario: what if you would like to add a parent and its children in one shot? Let's add a category and a couple of products to it in the same HTTP request. Please note how the child products (XML in purple) associated with the parent (XML in dark red) are specified inline in the feed itself:
The HTTP response for this request will have only the newly created parent Atom XML, but at the back end WCF Data Services creates the children and attaches them to the newly created parent (fire up a SQL query and check for yourself).
Let’s see how you can update an entity, here an existing category with ID 13:
As per the OData specification, a PUT operation against an entity completely replaces the existing one in the data source; the direct effect is that if you omit a property in the request, it will be set to its default value or empty/null as the case may be. In order to avoid this situation, OData recommends the HTTP MERGE operation, which merges the new values with the existing entity without overwriting the rest.
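A MERGE request that touches only the Description of category 13 might look like this (same placeholder service URL as before):

MERGE http://localhost:8080/NorthwindService/Categories(13) HTTP/1.1
Content-Type: application/atom+xml
Host: localhost:8080

<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
       xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <content type="application/xml">
    <m:properties>
      <d:Description>Updated description for this category</d:Description>
    </m:properties>
  </content>
</entry>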
The above operation updates only the Description property, leaving the category name and picture as is. Had it been a PUT request, the latter would have been set to null/empty, and if the columns backing those properties were non-nullable, the PUT operation would fail, returning HTTP status code 500. It is also possible to target a PUT request at a specific entity property to update it directly. The following request updates the Description property of the Category with ID 13:
Note that the content type is flipped back to application/xml. Finally, the deletion, which is pretty straightforward:
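For example (placeholder URL as before):

DELETE http://localhost:8080/NorthwindService/Categories(13) HTTP/1.1
Host: localhost:8080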
The above request deletes the category element with ID 13 from the data source. Obviously, the delete would fail if the target entity has associated children.
The important thing to note here is that all the CRUD operations performed are routed through the EDM created for the data source by the WCF Data Services infrastructure, and hence you can perform custom operations via stored procedure mapping for CUD requests (POST, PUT, MERGE and DELETE).
All right, I have shown you all the examples using raw XML markup and HTTP requests/responses, but do you really have to deal with such a raw format? The answer is no, unless you really like playing with XML directly and making HTTP requests yourself. OData has client SDKs for various platforms such as JavaScript, Objective-C, PHP and Java. As far as .NET is concerned, Visual Studio automatically creates all the necessary proxy entity classes to interact with the OData service and perform all supported CRUD operations.
Here is a simple client code that retrieves all products belonging to category ID 5 (assuming you have made a service reference to the WCF Data Service):
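A sketch of such client code (the context class name and service URL mirror the earlier assumptions; the service reference namespace is hypothetical):

using System;
using System.Linq;
using NorthwindClient.NorthwindServiceReference; // hypothetical service reference namespace

class Program
{
    static void Main()
    {
        // The generated context class mirrors the server-side NWEntities container.
        NWEntities context = new NWEntities(new Uri("http://localhost:8080/NorthwindService"));

        var products = from p in context.Products
                       where p.CategoryID == 5
                       select p;

        foreach (var product in products)
        {
            Console.WriteLine("{0} - {1:C}", product.ProductName, product.UnitPrice);
        }
    }
}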
If everything goes fine, running the above code should output:
I seriously hope this post helped you realize the power of OData services and their applicability in enterprise and B2B scenarios, and especially how super-duper easy it is to implement one with WCF and Entity Framework. In a future post I will discuss another cool little OData feature that I have not opened my mouth about yet! :-)
I am sure most of you have seen tons of definitions, tutorials and examples of the MVVM pattern in the context of Windows Presentation Foundation (WPF) and Silverlight (a stripped-down WPF for the browser environment). However, many people tell me that those examples don't really help them understand the pattern well enough to use it in their applications, or at least to think about the pattern's applicability in a given scenario. This post is an attempt to address this gap and demystify MVVM enough to start implementing it in code.
Like its ancestors MVC and MVP (and their variations), the MVVM pattern serves a specific purpose: separating the presentation (ASP.NET, Windows Forms, WPF and Silverlight) from the data (business data such as orders, customers, contacts, etc.) and the logic of displaying it (responding to control events, displaying data using various controls, etc.). Through this separation, another benefit that MVVM brings to the table is testability of the presentation logic independent of the UI. This especially makes sense where UI designers and developers have to work on the same artifact to do their respective jobs and follow their own workflows. Imagine a UI designer working on an .aspx page and a developer on the page's code-behind. The designer may do multiple iterations trying various design combinations and themes, having them reviewed by the business, etc., while the developer wouldn't really care about that. On the flip side, the developer would want to do unit testing, code analysis, bug fixing, etc., which the designer doesn't care about. These two roles can work independently without friction if there is a mechanism that seamlessly integrates their work at the end. MVVM is that mechanism, and it addresses this exact problem with little or no effort!
Model: the data we intend to display in the UI via various controls. It could come from any data source, such as a WCF service, an OData provider, SQL Server or an XML file.
View: The actual UI – web forms, Windows forms, WPF/Silverlight (consumer of data, the Model)
ViewModel (VM): the piece in the pattern whose name confuses many people! The VM acts as the data source for any component (consumer) interested in the data it exposes, yet it does not know anything about its consumer. The consumer could be a View (which is the case most of the time) or just anything that knows how to make the best use of the VM's data. As a matter of fact, the VM doesn't care how the consumer displays the data (if it is a UI) or what it does with it. It is this fact that makes a VM testable without a UI via mocking. After testing, you just replace the mock with the actual UI and everything works as intended.
Generally, ViewModels are designed based on their consumers' data requirements (a model for the view, hence the name ViewModel). For example, a VM for a dialog showing a list of products will have public properties and methods for the product list (an ICollection, maybe?), sorting and filtering, all for that dialog's consumption.
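A minimal sketch of such a VM (Product and IProductRepository stand in for the Model; all names are illustrative):

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;

public class Product
{
    public string Name { get; set; }
}

public interface IProductRepository
{
    IEnumerable<Product> GetAll();
}

// The View binds to these members; the VM knows nothing about who consumes them,
// which is exactly what makes it testable with a mocked repository.
public class ProductListViewModel : INotifyPropertyChanged
{
    private string filter;

    public ProductListViewModel(IProductRepository repository)
    {
        Products = new ObservableCollection<Product>(repository.GetAll());
    }

    public ObservableCollection<Product> Products { get; private set; }

    public string Filter
    {
        get { return filter; }
        set
        {
            filter = value;
            OnPropertyChanged("Filter");
            ApplyFilter();
        }
    }

    private void ApplyFilter()
    {
        // Sorting/filtering logic for the dialog would live here.
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}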
Remember that there can be more than one consumer pulling data from a VM. You might also have come across cases where a single View pulls its data from more than one ViewModel. And who, by the way, provides the data that a VM serves to its consumers? The Model.
So, how does MVVM fit into WPF/Silverlight development? Stay tuned!