Here is an awesome collection (50 as of now!) of free Windows 8 app design templates for different application scenarios. Each comes with source code in both C#/XAML and JavaScript/HTML. Even if you don't use them as-is, I think they can serve as a starting point by giving you some ideas around design and functionality implementation.

Hope you find them useful.

Windows Server AppFabric version 1.1 has noteworthy improvements over its predecessor. The first on my v1.1 favorites list is the ability to retrieve data from data sources when the client-requested data is not available in the cache (a "cache miss") and also to save modified cache data back to the data source – a complete round-trip (Microsoft calls this "read-through & write-behind"). My second favorite is the option to compress the cached data exchanged between the cache client and the cache server, thus improving network performance.


Similar to WCF, AppFabric 1.1 allows multiple cache configuration sections on the client side and lets you choose which one to use in code. If none is chosen programmatically, the section named "default" is used automatically when the cache client starts. The latter is useful when testing cache clients in multiple environments like DEV, TESTING, STAGING and PROD, since switching environments requires just a single configuration change. Here is a sample client-side cache configuration section:

Multiple Client Cache Config Sections
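The original configuration screenshot is missing here; a minimal sketch of what such a file contains, with two named client sections (the host names are hypothetical, and the section type declaration should be verified against your AppFabric SDK install):

```xml
<configSections>
  <section name="dataCacheClients"
           type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core"
           allowLocation="true" allowDefinition="Everywhere" />
</configSections>

<dataCacheClients>
  <!-- Picked up automatically when no section is chosen in code -->
  <dataCacheClient name="default">
    <hosts>
      <host name="devCacheHost" cachePort="22233" />
    </hosts>
  </dataCacheClient>

  <!-- Chosen explicitly, e.g. for the PROD environment -->
  <dataCacheClient name="prod">
    <hosts>
      <host name="prodCacheHost" cachePort="22233" />
    </hosts>
  </dataCacheClient>
</dataCacheClients>
```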

Load Cache Client Config Using Code
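The code screenshot is gone, but loading a named section programmatically is a one-liner; a sketch (the section name "prod" is hypothetical, and the DataCacheFactoryConfiguration constructor overload taking a client name should be verified against the v1.1 reference):

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class CacheClientBootstrap
{
    static void Main()
    {
        // Load the client configuration section named "prod" instead of "default"
        var config = new DataCacheFactoryConfiguration("prod");
        using (var factory = new DataCacheFactory(config))
        {
            DataCache cache = factory.GetCache("ProviderEnabledCache");
            cache.Put("greeting", "hello");
            Console.WriteLine(cache.Get("greeting"));
        }
    }
}
```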

Without the above code, the cache client loads the default section. If no default cache configuration section is specified and your code doesn’t explicitly load a named section either, a runtime error occurs when you use the data cache factory.

Alright, back to the main topic of this post – the data cache store provider. AppFabric 1.0 let you rely only on the "cache-aside" programming model to handle cache-miss scenarios. That is, your application is responsible for loading data from the data store (and storing it in the cache) if that data is not found in the cache. With v1.1, you can let AppFabric itself fetch the data from the data source whenever a cache miss occurs, keep the fetched data in the cache and send it back to the client. Of course, we have to supply the "how" part of the fetching process. Turning 180°, AppFabric v1.1 also makes it possible to save updated cache data back to the data store. As with the read process, we have to plug in the "how" part of the persistence process.

AppFabric exposes the read-through and write-behind mechanisms via the familiar Provider design pattern. Implement the abstract class Microsoft.ApplicationServer.Caching.DataCacheStoreProvider in your assembly and hook it up with an AppFabric cache. Note the following:

  • The data cache store provider assembly and its dependents should be deployed to the GAC (strong naming implied)
  • The provider is attached to a cache either at cache-creation time (New-Cache) or later (Set-CacheConfig)
  • The provider class must expose a public constructor with two arguments: a string for the cache name and a Dictionary<string, string> for provider-specific parameters

Let’s see a sample data cache store provider using the Northwind database:

Provider Sample Part 1
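The code screenshot did not survive, but the skeleton of such a provider looks roughly like this (the class and field names are mine, and the override signatures are approximations of the v1.1 DataCacheStoreProvider contract – verify them against the SDK reference):

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;
using Microsoft.ApplicationServer.Caching;

public class CoreDataCacheProvider : DataCacheStoreProvider
{
    private readonly string _cacheName;
    private readonly string _connectionString;

    // AppFabric calls this with the cache name and the -ProviderSettings hash table
    public CoreDataCacheProvider(string cacheName, Dictionary<string, string> config)
    {
        _cacheName = cacheName;
        _connectionString = config["conStr"]; // key name matches the New-Cache sample below
    }

    // Invoked (batched) when items are removed via DataCache.Remove()
    public override void Delete(Collection<DataCacheItemKey> keys)
    {
        foreach (DataCacheItemKey key in keys)
        {
            // DELETE the backing row for key.Key from the data store
        }
    }

    // Read, Write and Dispose overrides are discussed below...
}
```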

AppFabric invokes the constructor on two occasions: (1) when the cache cluster starts and (2) when you attach the provider to a cache using either New-Cache or Set-CacheConfig. The Delete method is invoked when you remove a cache item from the cache (DataCache.Remove()).

Now the crucial part: the read and write methods. Let’s implement the data store read methods first. Note that there are two overloaded read methods:

Provider Sample Part 2
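A sketch of the two read overloads against Northwind (the DataCacheItemFactory call is, as far as I can tell, how v1.1 materializes provider-loaded items – double-check the exact signatures in the SDK before relying on them):

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.SqlClient;
using Microsoft.ApplicationServer.Caching;

// Simple read: load the value for one key, or return null on a true miss
public override DataCacheItem Read(DataCacheItemKey key)
{
    using (var con = new SqlConnection(_connectionString))
    using (var cmd = new SqlCommand(
        "SELECT CategoryName FROM Categories WHERE CategoryID = @id", con))
    {
        cmd.Parameters.AddWithValue("@id", key.Key);
        con.Open();
        object value = cmd.ExecuteScalar();
        // A null here surfaces as an exception on the cache client
        return value == null
            ? null
            : DataCacheItemFactory.GetCacheItem(key, _cacheName, value, null);
    }
}

// Collection overload: delegate to the simple read for consistent behavior
public override void Read(ReadOnlyCollection<DataCacheItemKey> keys,
                          IDictionary<DataCacheItemKey, DataCacheItem> items)
{
    foreach (DataCacheItemKey key in keys)
        items[key] = Read(key);
}
```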

As said earlier, the read methods are invoked when a cache item with the requested key is not found in the cache (from my experiments so far, AppFabric usually calls the first overload). To make both methods behave consistently, the collection overload internally calls the simple read method. Beware: a null return value from the simple read throws an exception on the cache client! Let us complete the write methods as well.

Provider Sample Part 3
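The write side can follow the same delegate-to-the-simple-overload shape (again, treat the override signatures as approximate and the SQL as illustrative):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using Microsoft.ApplicationServer.Caching;

// Simple write: persist one updated cache item back to the store
public override void Write(DataCacheItem item)
{
    using (var con = new SqlConnection(_connectionString))
    using (var cmd = new SqlCommand(
        "UPDATE Categories SET CategoryName = @name WHERE CategoryID = @id", con))
    {
        cmd.Parameters.AddWithValue("@id", item.Key.Key);
        cmd.Parameters.AddWithValue("@name", item.Value);
        con.Open();
        cmd.ExecuteNonQuery();
    }
}

// Collection overload: AppFabric batches these calls at the configured
// -WriteBehindInterval rather than invoking the provider per Put()
public override void Write(IDictionary<DataCacheItemKey, DataCacheItem> items)
{
    foreach (DataCacheItem item in items.Values)
        Write(item);
}
```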

Write methods are invoked when cached items are updated by cache clients (for example, by calling DataCache.Put()). Unlike reads, the write and delete methods are not called immediately when cache items are updated or removed. Rather, AppFabric calls them at (configurable) regular intervals.

The final piece is the Dispose() method:

Provider Sample Part 4
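For this sample there is nothing to clean up, so a placeholder implementation suffices; a real provider would release pooled connections or flush pending writes here:

```csharp
public override void Dispose()
{
    // No unmanaged resources are held by this sample provider.
    // Close database connections, timers, etc. here in a real one.
}
```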

Pretty straightforward! :-)

Assuming the provider assembly builds successfully and has been given a strong name, you can associate it with a new cache as below (you can also use the Set-CacheConfig cmdlet to enable or disable the provider for a cache):

New-Cache "ProviderEnabledCache" -ReadThroughEnabled true -WriteBehindEnabled true -WriteBehindInterval 60 -ProviderType "MsSqlDataCacheStoreProvider.CoreDataCacheProvider, CoreDataCacheStoreProvider, Version=, Culture=neutral, PublicKeyToken=21b666fac19955ad" -ProviderSettings @{"conStr"="Data Source=(local)\SQLEXPRESS;Initial Catalog=Northwind;Trusted_Connection=True"}

If everything went OK, the new cache should have been created with a data cache store provider enabled and attached to it. The most common reason for a failed provider configuration is that one or more dependent assemblies (other than .NET Framework assemblies) of the provider assembly are missing from the cache host’s GAC. Needless to say, you have to deploy the provider assembly (including its dependents) on all cache hosts.

Points to note:

  • When the GAC is updated with a newer provider assembly, you have to restart the cache cluster for the new bits to take effect.
  • You do not have to implement both read and write methods. For example, if your cache has only read-through option enabled, you may just have empty write methods. Similarly, when you have only write-behind enabled, your read methods can be placeholder implementations.
  • Cache clients are not notified of uncaught exceptions thrown from write methods.
  • The provider class should have a public instance constructor that accepts a string and a Dictionary<string, string> as parameters. Otherwise, the cache cluster will not start or will exhibit unpredictable behavior.

This topic has been overdue for quite a long time from my end. Things kept me busy and I couldn’t get the bandwidth to write about this subject. All right, let’s dive into the topic.

WCF Data Services supports two providers out of the box: the Entity Framework and Reflection providers. In my earlier post I discussed the latter: OData feeds using POCO entities. In a nutshell, every feed that your service exposes should be implemented as a property of type IQueryable for data retrieval, and if you want to support insert, update and delete operations, you should implement the IUpdatable interface as well. In this post, I will show how you can use the Entity Framework provider of WCF Data Services to publish OData services. Believe me – it is extremely simple compared to the Reflection provider.

Due to its simplicity and light weight, I am going to use the familiar Northwind database for this post. The first step is to create the entity data model (EDM) for your data source – the Northwind database. I will quickly run through the steps (assuming you have a Northwind database running on a SQL Server instance):

  1. Open Visual Studio and create a Class Library type project
  2. Add a new ADO.NET Entity Data Model (under Data Templates)
  3. On the subsequent dialog, select Generate from database and follow the wizard steps by selecting the Northwind database and all the table objects (Views and Stored Procedures not required for now).

Now you will see an .edmx file created and added to your class library project. You will also see an app.config file added to the project with a connection string entry looking like this (in a single line):

<add name="NWEntities" connectionString="metadata=res://*/Northwind.csdl|res://*/Northwind.ssdl|res://*/Northwind.msl;
provider=System.Data.SqlClient;provider connection string=&quot;Data Source=(local);Initial Catalog=Northwind;
Integrated Security=True; &quot;" providerName="System.Data.EntityClient" />

The designer-generated class for the .edmx file has the entity and association classes defined for each table you selected during the EDM creation wizard. But the most important generated class is the entity container class derived from ObjectContext, looking like the highlighted one below:
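The highlighted listing referred to above is missing, but the wizard-generated container looks roughly like this (abridged sketch; the real generated file is larger and caches each query object):

```csharp
using System.Data.Objects;

public partial class NWEntities : ObjectContext
{
    public NWEntities()
        : base("name=NWEntities", "NWEntities") { }

    public ObjectQuery<Categories> Categories
    {
        get { return CreateQuery<Categories>("[Categories]"); }
    }

    public ObjectQuery<Products> Products
    {
        get { return CreateQuery<Products>("[Products]"); }
    }

    // ...one ObjectQuery<T> property per table selected in the wizard
}
```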


The second step is to create a class that wraps and exposes the entity model as OData feeds (entity sets), plus a WCF service host for the feeds. Let’s create a class (name it as you like) and derive it from the DataService<T> generic type. This is the same as what I did in my earlier post demonstrating the Reflection provider.
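A sketch of the wrapper class (the service class name is my own; the InitializeService signature shown is the v2-era one taking DataServiceConfiguration):

```csharp
using System.Data.Services;
using System.Data.Services.Common;

public class NorthwindDataService : DataService<NWEntities>
{
    // The name and signature of this method are fixed by convention;
    // WCF Data Services locates it via reflection at service startup.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Wide open for the demo; lock down access per entity set in real apps
        config.SetEntitySetAccessRule("*", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion =
            DataServiceProtocolVersion.V2;
    }
}
```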


NWEntities is the name of my Entity Framework container class created during the first step (creating the EDM for Northwind). The data service initialization method must be exactly as shown above (including the method name). Yes, this method could have been defined as a virtual method on DataService<T> to be overridden in the derived class with custom access rules, but I do not know why Microsoft chose to have developers define it by convention with some hard-wiring behind the scenes (I would be glad if you know why – leave a comment). It can be a completely empty method, in which case WCF Data Services configures the service class with default values; but if you want to control access to the various entities in the EDM, this is the only place you can do it, and you cannot change it later without restarting the service and recreating the app domain.

Now the hosting part:

  1. Add a Console type application to the default solution
  2. Copy the EDM connection string created (shown above) to the console application’s configuration file
  3. Add the following few lines of code (you have to add a project reference to the class library containing the EDM file and the data service wrapper class)
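The host code itself is just a few lines; a sketch (the port is hypothetical and the service class name matches the sketch earlier in this post):

```csharp
using System;
using System.Data.Services;

class Program
{
    static void Main()
    {
        var baseAddress = new Uri("http://localhost:8080/Northwind");
        using (var host = new DataServiceHost(typeof(NorthwindDataService),
                                              new[] { baseAddress }))
        {
            host.Open();
            Console.WriteLine("OData service listening at {0}", baseAddress);
            Console.WriteLine("Press ENTER to stop...");
            Console.ReadLine();
        }
    }
}
```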


Running the host application should throw up the following window:


Now the OData service is ready for consumption. Open Internet Explorer (or any application that can show the XML response of an HTTP request, such as Fiddler) and navigate to the service address shown in the console window. The HTTP response is an XML document listing the feeds (entity sets) available at the service URL:


Depending on the tables and other objects selected during the EDM creation process and the permissions set in the InitializeService method, you may see fewer or more feeds. Nevertheless, you can now make all the OData data retrieval requests (HTTP GET); for a sample, refer to my earlier post. But what about entity creation, update and delete operations? We use the HTTP verbs POST, PUT and DELETE for create, update and delete operations respectively, with the body of the HTTP request carrying the values for the entity being created or edited. For example, to add a new category called Milk Products, you make an HTTP POST request as below (I used Fiddler to create all the HTTP requests shown from here on):
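The original request screenshot is gone; reconstructed from the OData Atom conventions, it would look something like this (service address hypothetical):

```
POST http://localhost:8080/Northwind/Categories HTTP/1.1
Content-Type: application/atom+xml
Host: localhost:8080

<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
       xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <content type="application/xml">
    <m:properties>
      <d:CategoryName>Milk Products</d:CategoryName>
      <d:Description>Dairy items</d:Description>
    </m:properties>
  </content>
</entry>
```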


Note that I am not passing certain Category properties, namely Category ID and Picture. You can omit any nullable columns (Picture) and identity columns (Category ID). The Content-Type header specifies the payload format (in this case Atom XML, but it can be JSON too). On successful creation, the service returns an Atom XML representation of the newly created entity with the new identity fields (if any) filled in:


You can add child entities directly to a parent. The following HTTP POST creates a new product for the newly created category ID 10:
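Reconstructed, the request would look like this; note the parent-relative URI (the product property values are made up for illustration):

```
POST http://localhost:8080/Northwind/Categories(10)/Products HTTP/1.1
Content-Type: application/atom+xml

<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
       xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <content type="application/xml">
    <m:properties>
      <d:ProductName>Organic Yogurt</d:ProductName>
      <d:UnitPrice m:type="Edm.Decimal">4.50</d:UnitPrice>
    </m:properties>
  </content>
</entry>
```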


Again, category ID and product ID are not passed because the POST URI itself specifies the category ID implicitly and product ID is an identity column. The response on successful product creation looks like this:


Another scenario: what if you would like to add a parent and its children in one shot? Let’s add a category and a couple of products to it in the same HTTP request. Note how the associated child products are specified inline within the parent entry itself:



The HTTP response for this request contains only the newly created parent’s Atom XML, but on the backend WCF Data Services creates the children and attaches them to the newly created parent (fire up a SQL query and check for yourself).

Let’s see how you can update an entity, here an existing category with ID 13:


As per the OData specification, a PUT operation against an entity completely replaces the existing one in the data source; the direct effect is that if you omit a property from the request, it is set to its default value or empty/null as the case may be. To avoid this, OData recommends the HTTP MERGE operation, which merges the new values into the existing entity without overwriting the rest.
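A reconstructed MERGE request carrying only the property to change (address and description text are hypothetical):

```
MERGE http://localhost:8080/Northwind/Categories(13) HTTP/1.1
Content-Type: application/atom+xml

<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
       xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <content type="application/xml">
    <m:properties>
      <d:Description>Milk, cheese and other dairy items</d:Description>
    </m:properties>
  </content>
</entry>
```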


The above operation updates only the Description property, leaving the category name and picture as is. Had it been a PUT request, the latter two would have been set to null/empty, and if the columns backing those properties were non-nullable, the PUT operation would fail, returning HTTP status code 500. It is also possible to target a PUT request at a specific entity property to update it directly. The following request updates the Description property of the Category with ID 13:


Note that the content type is flipped back to application/xml. Finally, deletion, which is pretty straightforward:
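Reconstructed, the delete is a bare request with no body (address hypothetical):

```
DELETE http://localhost:8080/Northwind/Categories(13) HTTP/1.1
Host: localhost:8080
```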


The above request deletes the category with ID 13 from the data source. Obviously, the delete fails if the target entity has associated children.

The important thing to note here is that all the CRUD operations are routed by the WCF Data Services infrastructure through the EDM created for the data source, and hence you can perform custom operations via stored procedure mapping for CUD requests (POST, PUT, MERGE and DELETE).

All right, I have shown you all the examples using raw XML markup and HTTP requests/responses, but do you really have to deal with such a raw format? The answer is no, unless you really like playing with XML directly and making HTTP requests yourself. OData has client SDKs for various platforms such as JavaScript, Objective-C, PHP and Java. As far as .NET is concerned, Visual Studio automatically creates all the necessary proxy entity classes to interact with the OData service and perform all supported CRUD operations.

Here is a simple client code that retrieves all products belonging to category ID 5 (assuming you have made a service reference to the WCF Data Service):

OData Entity Framework Client
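The client snippet is missing; a sketch assuming a service reference whose generated context class is also named NWEntities (Visual Studio derives it from DataServiceContext, and the URI must match the running host):

```csharp
using System;
using System.Linq;

class ODataClientDemo
{
    static void Main()
    {
        // Generated proxy context from the service reference
        var ctx = new NWEntities(new Uri("http://localhost:8080/Northwind"));

        // Translated by the client library into something like
        // GET /Products?$filter=CategoryID eq 5
        var products = from p in ctx.Products
                       where p.CategoryID == 5
                       select p;

        foreach (var p in products)
            Console.WriteLine("{0} - {1}", p.ProductName, p.UnitPrice);
    }
}
```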

If everything goes fine, running the above code should output:


I sincerely hope this post helped you realize the power of OData services and their applicability in enterprise and B2B scenarios, and especially how super-duper easy it is to implement one with WCF and Entity Framework. In a future post I will discuss another cool little OData feature that I have not talked about yet! :-)

Read part 1 here, part 2 here or part 3 here.

This is a sort of summary post for my MVVM Pattern 101 series, talking about the frameworks available for MVVM development. There are many open-source frameworks for MVVM-based development. Most of them support WPF as well as Silverlight, including the new Windows Phone 7 platform. Of these frameworks, the notable ones are Prism from Microsoft’s P&P team, Caliburn and MVVM Light. Prism is a composite UI framework for WPF and Silverlight. It also uses the Inversion of Control (IoC)/Dependency Injection pattern, which allows you to use other IoC frameworks such as Spring.NET, Microsoft’s Unity or your own with MVVM development. Caliburn and MVVM Light are third-party MVVM-dedicated frameworks. If you would like to see a list of the MVVM frameworks available today, check here. The author has made the comparison itself a Silverlight app!

    Talking about frameworks doesn’t mean you can’t develop your own. Watch a nice presentation on building your own MVVM framework here.

This is the 3rd part of my multi-part series on MVVM 101. Read part 1 here and part 2 here.

From an implementation perspective, a ViewModel is just a class satisfying certain requirements, and you follow a few simple steps to get a working one. Remember, the key part of MVVM is the VM, the real controller, because the View is just your WPF UI and the Model is either an entity object or a data transfer object. Let us say we want to display some very basic details about an insurer; the listing dialog will show details such as first name, last name, state and the dependent list. Let us also assume we already have domain objects for the insurer and dependent entities (I deliberately made Dependents a complex type to show how collection properties can be dealt with in MVVM). Here is our simple model:

public class InsuranceInfo {
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public List<FDependents> Dependents { get; set; }
    public string State { get; set; }
}

public class FDependents {
    public string Relationship { get; set; }
    public string DepName { get; set; }
    public short Age { get; set; }
}


Here is the simple UI we are going to build our VM against, showing the above model:

Let’s go ahead and build the missing piece – the ViewModel. As a convention, all VM classes have the “ViewModel” suffix, and I am calling ours InsuranceViewModel.

Step 1: Add special properties to the VM for the UI to bind to

    Any data displayed in the UI should have a corresponding property in the VM class. These properties are special in that any change to their values should be detectable by the binding engine, which then updates the bound controls accordingly. Hence the VM should implement the INotifyPropertyChanged interface, with collection properties wrapped in ObservableCollection<T>.

public class InsuranceViewModel : INotifyPropertyChanged {
    private string _fn;
    private string _ln;
    private string _state;

    public event PropertyChangedEventHandler PropertyChanged;
    public ObservableCollection<FDependents> Dependents { get; private set; }

    public string FirstName {
        get { return _fn; }
        set {
            if (_fn != value) {
                _fn = value;
                RaisePropertyChanged ("FirstName");
            }
        }
    }
    public string LastName {
        get { return _ln; }
        set {
            if (_ln != value) {
                _ln = value;
                RaisePropertyChanged ("LastName");
            }
        }
    }
    public string State {
        get { return _state; }
        set {
            if (_state != value) {
                _state = value;
                RaisePropertyChanged ("State");
            }
        }
    }

    public EditCommand EditCmd { get; private set; }

    private void RaisePropertyChanged (string p) {
        if (PropertyChanged != null) {
            PropertyChanged (this, new PropertyChangedEventArgs (p));
        }
    }
}


Note that I have implemented the properties with explicit backing fields and setters (rather than auto-properties) because we have to raise a property change notification whenever a property value changes.

Step 2: Provide ICommand properties for applicable control events

Since our UI has a save option, a command property, EditCmd, of type EditCommand, is also included in the above code. The command class looks like this:

public class EditCommand : ICommand {
    private InsuranceViewModel _vm = null;

    public event EventHandler CanExecuteChanged {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }

    public bool CanExecute (object parameter) {
        if (parameter != null) {
            var cp = parameter as InsuranceViewModel;
            return !(cp.FirstName == "" || cp.LastName == "" || cp.State == "");
        }
        return true;
    }

    public void Execute (object parameter) {
        if (parameter != null) {
            var cp = parameter as InsuranceViewModel;
            MessageBox.Show (String.Format ("{0}, {1}, {2}",
                cp.FirstName, cp.LastName, cp.State));
        }
    }

    public EditCommand (InsuranceViewModel vm) { _vm = vm; }
}

It is very important that the command classes implement the ICommand interface. The CanExecute method tells whether the action is available at any given moment so that the control can disable itself or take appropriate action; it is called at appropriate times by the commanding infrastructure. The above code states that if at least one of first name, last name or state is empty, the Edit command should not be available.

The Execute method is called as a result of the control event (it is the event handler!). It is a common design for commands to be initialized with a view model instance so that they can delegate data service operations back to the view model itself.

Step 3: Hook up a ViewModel instance with UI markup

<Window x:Class="MVVMDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MVVM Demo" Height="352" Width="525" Loaded="Window_Loaded">
    <Grid Height="Auto">
        <ListView Margin="12,100,0,58" Name="lvwInsuranceInfo"
                  SelectionMode="Single" ItemsSource="{Binding Path=Dependents}"
                  HorizontalAlignment="Left" Width="352">
            <ListView.View>
                <GridView AllowsColumnReorder="False">
                    <GridViewColumn Header="Relationship"
                                    DisplayMemberBinding="{Binding Path=Relationship}" />
                    <GridViewColumn Header="Name"
                                    DisplayMemberBinding="{Binding Path=DepName}" />
                    <GridViewColumn Header="Age"
                                    DisplayMemberBinding="{Binding Path=Age}" />
                </GridView>
            </ListView.View>
        </ListView>
        <Button Content="_Close" Margin="0,0,12,12" Name="btnClose"
                Click="btnClose_Click" HorizontalAlignment="Right" Width="105"
                Height="24" VerticalAlignment="Bottom" />
        <Label Content="First Name:" Height="24" HorizontalAlignment="Left"
               Margin="10,10,0,0" Name="lblFN" Padding="0" VerticalAlignment="Top"
               VerticalContentAlignment="Center" />
        <Label Content="Last Name:" Height="24" HorizontalAlignment="Left"
               Margin="10,40,0,0" Name="label2" Padding="0" VerticalAlignment="Top"
               VerticalContentAlignment="Center" />
        <Label Content="State:" Height="24" HorizontalAlignment="Left" Margin="10,70,0,0"
               Name="label3" Padding="0" VerticalAlignment="Top"
               VerticalContentAlignment="Center" />
        <Button Command="{Binding Path=EditCmd}" CommandParameter="{Binding}"
                Content="Save" Height="24" HorizontalAlignment="Left" Margin="123,0,0,12"
                Name="btnSave" VerticalAlignment="Bottom" Width="105" />
        <TextBox Height="24" Margin="93,10,139,0" Name="txtFn" VerticalAlignment="Top"
                 Text="{Binding Path=FirstName}" />
        <TextBox Height="24" Margin="93,40,139,0" Name="txtLn" VerticalAlignment="Top"
                 Text="{Binding Path=LastName}" />
        <TextBox Height="24" Margin="93,71,139,0" Name="txtCity" VerticalAlignment="Top"
                 Text="{Binding Path=State}" />
        <Label Content="Relationship:" Height="24" HorizontalAlignment="Left" Margin="370,100,0,0"
               Name="label1" Padding="0" VerticalAlignment="Top" VerticalContentAlignment="Center" />
        <TextBox Height="24" Margin="370,124,12,0" Name="txtRelationshipName"
                 Text="{Binding ElementName=lvwInsuranceInfo, Path=SelectedItem.Relationship}"
                 VerticalAlignment="Top" />
        <Label Content="Name:" Height="24" HorizontalAlignment="Left"
               Margin="370,149,0,0" Name="label4" Padding="0" VerticalAlignment="Top"
               VerticalContentAlignment="Center" />
        <TextBox Height="24" Margin="370,171,12,0" Name="txtName"
                 Text="{Binding ElementName=lvwInsuranceInfo, Path=SelectedItem.DepName}"
                 VerticalAlignment="Top" />
        <Label Content="Age:" Height="24" HorizontalAlignment="Left" Margin="370,201,0,0"
               Name="label5" Padding="0" VerticalAlignment="Top" VerticalContentAlignment="Center" />
        <TextBox Height="24" Margin="370,231,12,0" Name="txtAge"
                 Text="{Binding ElementName=lvwInsuranceInfo, Path=SelectedItem.Age}"
                 VerticalAlignment="Top" />
    </Grid>
</Window>

As shown above, the UI controls are bound to the corresponding properties of the ViewModel instance. Note also how the Save button’s Command property is wired up to handle its click event. Finally, the code that sets the main window’s data context looks like this:

public partial class MainWindow : Window {
    public MainWindow () {
        InitializeComponent ();
    }

    private void btnClose_Click (object sender, RoutedEventArgs e) {
        this.Close ();
    }

    private void Window_Loaded (object sender, RoutedEventArgs e) {
        this.DataContext = new InsuranceViewModel ();
    }
}

In fact, this is also the entire code-behind for our demo application! All it does is set a ViewModel instance as the window’s data context. This can even be done declaratively in the XAML markup, getting rid of the Window_Loaded handler altogether.

Just for the demo purpose, the ViewModel class fills itself with some sample data when instantiated as shown below.

public InsuranceViewModel () {
    FirstName = "Dave";
    LastName = "Watson";
    State = "NJ";
    Dependents = new ObservableCollection<FDependents> ();
    Dependents.Add (new FDependents () {
        Relationship = "Spouse",
        DepName = "Milla Watson",
        Age = 33 });
    Dependents.Add (new FDependents () {
        Relationship = "Son",
        DepName = "John Watson",
        Age = 10 });
    Dependents.Add (new FDependents () {
        Relationship = "Mother",
        DepName = "Marie Gold",
        Age = 65 });

    EditCmd = new EditCommand (this);
}


If everything works fine whether you set the data context via code or markup, the output screen will be:

Making any of the top three text boxes empty, automatically disables the Save button:

The text boxes on the right are hooked to the selected item in the list view to let the user edit the selected dependent. The WPF binding engine provides certain validation logic out of the box. In our demo, the Age property is numeric, and in the screenshot below you get a red outline around the text box when it is empty or has a non-numeric value, without requiring any extra code or configuration. By the way, this has nothing to do with MVVM, but I thought I would highlight some of the tiny benefits WPF gives for free.

That’s it! Here is the recap of what we have discussed so far:

  1. MVVM is all about separating business logic from the UI in WPF and promoting loose coupling
  2. With proper MVVM in place, the code-behind contains no business logic (other than hooking up the ViewModel, if you choose to)
  3. WPF’s Binding and Commands are the backbones for MVVM. Without them, it is very difficult to implement the pattern in a useful way.
  4. You follow simple steps to create a working ViewModel class:
    1. Let the ViewModel class expose properties based on what the UI shows
    2. Implement INotifyPropertyChanged
    3. Create custom commands for applicable user actions/control events by implementing ICommand
    4. Bind its properties/commands to control properties in the XAML markup
    5. ViewModel classes are easily testable now
    6. Changing UI in future is as simple as designing a new one and consuming the existing VM. In fact, the UX team can try many prototypes and get feedback from the business with real data
    7. Since WPF and Silverlight front ends can share almost the same ViewModel, the effort required to support an additional front end is far less than building it from scratch. The bottom line is reusability.

In some cases your code-behind may end up with extra code around MVVM hook up and other plumbing but that’s ok as long as you do not write any code that otherwise can be moved to ViewModel.

Alright, another claim is that MVVM enables independent (without UI intervention) testing of the VM and the downstream business logic. Restating one of the attributes of the VM: it is purely UI independent, and hence you can test the VM like any other class. For example, below is a simple NUnit test fixture for the InsuranceViewModel class. Like the UI, the test fixture is in all respects just another consumer of the VM.

[TestFixture]
public class InsuranceInfoVMUnitTest {
    [Test]
    public void TestVMInit () {
        var vm = new InsuranceViewModel ();
        vm.PropertyChanged +=
            (s, e) => { Debug.WriteLine (String.Format (@"Property {0} value changed.", e.PropertyName)); };
        Assert.IsNotNull (vm.Dependents, @"Dependents collection not initialized.");
        Assert.IsNotNull (vm.FirstName, @"First name failed to initialize.");
        Assert.IsNotNull (vm.LastName, @"Last name failed to initialize.");
        vm.State = "PA"; // Should write out a debug message; check the Debug window
        vm.FirstName = "";
        Assert.IsFalse (vm.EditCmd.CanExecute (vm), @"EditCmd failed to check empty FirstName.");
        vm.FirstName = "Bill";
        vm.LastName = "";
        Assert.IsFalse (vm.EditCmd.CanExecute (vm), @"EditCmd failed to check empty LastName.");
        vm.LastName = "Rogers";
        vm.State = "";
        Assert.IsFalse (vm.EditCmd.CanExecute (vm), @"EditCmd failed to check empty State.");
        vm.State = "CA";
        Assert.IsTrue (vm.EditCmd.CanExecute (vm), @"EditCmd failed to check valid InsuranceInfo properties.");
        vm.EditCmd.Execute (vm); // Save button click
        // Load current insurance info afresh & check for updated values
    }
}


Now that we have discussed MVVM at a reasonable level, let us see the flip side of MVVM in WPF:

  1. MVVM requires extra code: you have to implement a few interfaces and extend a few base classes, and all of this will most probably take more time, but the end advantages are many. It is better to invest upfront to save later.
  2. For more sophisticated applications, you may have to write value converters to handle custom types in XAML markup.
  3. Some argue that the ViewModel is the new code-behind and not a value adder when it comes to separation of concerns. This view is motivated by the View (UI) delegating event handling to the ViewModel class through command binding (the Execute method), which in a sense gives a feel of having event handler methods in the ViewModel. Nevertheless, I am neither against nor in favor of this point, as it depends on the individual’s viewpoint.

  4. Out of the box, commands are available only to ButtonBase-derived controls, MenuItem, and Hyperlink. For other controls, especially selection controls such as combo box and list view, you have to resort to other means (maybe I will cover this in a future post).

This is the continuation of my previous discussion on MVVM here.

With the advent of new designer tools such as Expression Blend, the MVVM pattern suits WPF/Silverlight development very well. Expression Blend (or anything that produces WPF XAML) enables the UX design and developer teams to work independently, unlike the early days of waiting for the UI to be completed before the developers could add their part. Since both teams work in parallel, there is a significant saving in the overall development effort. The most important advantage on the table is that the development team can now unit test their code with mock UIs before plugging in the actual UI. Note that such a mock UI is simply a test class that talks to the ViewModel the way a real WPF UI would, via property access and method invocation. Once the actual UI is ready, it’s just a matter of changing certain properties in the XAML (to hook it to the ViewModel) and re-compiling.

Let us look at the two WPF features that power MVVM:

Commands: In short, the Command pattern wraps an action to be executed on a target (yes, the same GoF pattern) independently of who triggered it. In the context of WPF, a Command is triggered by a control event (a button click, for example) with optional command parameters. Some aspects of the commanding infrastructure are:

  1. Command source – The control that triggers the command. The source may disable itself when the command it triggers cannot currently execute. You have surely noticed text editors disabling their Copy menu item, context-menu entry and command/ribbon bar buttons when no text is selected. Here, Copy is the command and the UI elements invoking Copy (menu item, command bar/ribbon button and context menu) are all command sources.
  2. Command target – The object/control the command execution logic depends on. In the text-copy example above, the text box is the command target because the copy function works on the properties (text selection) of the text box. If no text is selected, copy cannot work and hence the associated controls are disabled.
  3. Command – The command itself, with the optional parameters required for executing the command logic. You might also have come across the term ‘routed command’, which is nothing but a command with the ability to traverse up and down the visual tree hierarchy (routing is a topic by itself and won’t be discussed here).
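The three pieces above can be sketched in code. A command is commonly implemented as a small ICommand wrapper around an execute delegate and a can-execute predicate; RelayCommand is my name for this sketch, not a framework type:

```csharp
using System;
using System.Windows.Input;

// A minimal command: wraps an action plus a CanExecute predicate,
// independent of which control (command source) triggers it.
public sealed class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        _execute = execute ?? throw new ArgumentNullException(nameof(execute));
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    // Command sources call this to decide whether to disable themselves.
    public bool CanExecute(object parameter) =>
        _canExecute == null || _canExecute(parameter);

    public void Execute(object parameter) => _execute(parameter);

    // Lets the ViewModel ask command sources to re-query CanExecute,
    // e.g. after the text selection changes in the Copy example.
    public void RaiseCanExecuteChanged() =>
        CanExecuteChanged?.Invoke(this, EventArgs.Empty);
}
```

In the Copy scenario, the predicate would check whether any text is selected on the command target; every bound menu item and toolbar button then enables or disables itself from that single piece of logic.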

Binding: A powerful feature that lets UI controls automatically get data from an object and populate themselves instead of your doing it manually. The binding infrastructure takes care of automatically updating the target whenever the source changes, or vice versa, or both (two-way binding). This is accomplished via the event notification mechanisms already available in .NET. The implication is that there must be a mechanism for WPF’s binding engine to detect changes in the bound property data and pass those changes on to the listeners. Binding can be used not just for properties but also for commands; the latter is required to delegate event handling logic from the UI to elsewhere (read: the ViewModel)! One important aspect of command binding is that it sets the binding scope, which determines how far up the ancestor hierarchy a control can search for its command handler.
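The change-detection mechanism referred to above is the INotifyPropertyChanged interface: a bound source raises PropertyChanged and the binding engine pushes the new value to the target. A minimal sketch (ObservableObject and CustomerViewModel are my illustrative names):

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Minimal ViewModel base: raises PropertyChanged so WPF's binding
// engine can detect changes and refresh bound controls.
public class ObservableObject : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Assigns the value and notifies listeners only when it actually changed.
    protected bool Set<T>(ref T field, T value, [CallerMemberName] string name = null)
    {
        if (Equals(field, value)) return false;
        field = value;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
        return true;
    }
}

public class CustomerViewModel : ObservableObject
{
    private string _name;
    public string Name { get => _name; set => Set(ref _name, value); }
}
```

A XAML binding such as Text="{Binding Name}" subscribes to PropertyChanged under the covers; setting Name in code is enough to refresh the UI.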

These two indeed form the backbone of the MVVM implementation. Though Commands and Binding get only a tiny description here, both Silverlight and WPF have a strong infrastructure for them. I assume you have a basic understanding of both; the above only sets the context for MVVM.

I will get into some code and implement MVVM in WPF in my next post.

“What is the difference between architecture and design?” Or how is one different from the other? From time to time I come across this question at my work and in online forums – an all too often discussed topic in the software world!

This is not an uncommon question given the poorly defined and often overlapping roles and responsibilities that architects, senior developers and developers play in many organizations today. It becomes more confusing when a single person, or everyone in a small team, does a mix of architecture, design and coding without defined boundaries. The mere term architecture could mean many things depending on the context in which it is referenced, but for this write-up I mostly mean application architecture and design. Yes, other architecture types exist in the software world!

Application architecture is all about decomposing an application into modules and sub-modules, defining their attributes and responsibilities and the relationships among them. During this process, numerous parameters are considered and thoroughly analyzed, and based on the merits and demerits of each, various constraints/trade-offs are made. Strict lines are drawn to arrive at an optimal architecture that solves the business problem and aligns well with the organization’s enterprise architecture or directly with the business requirements. Ideally, software architecture should be technology- and platform-neutral, but often it is defined around a specific technology such as J2EE, .NET or open source, which is not too bad, in my perspective; on the other hand, it potentially locks one into a specific stream where alternatives exist. Instead of going in depth into the subject, here is the crux of software architecture.

The term architecture itself doesn’t have a global definition, but standards bodies such as the IEEE and the SEI have their own versions and reinforce them in their respective publications and talks. Here is my version, closely derived from one of the standards bodies’ definitions, that I feel best defines software architecture: a software architecture is the definition and description of the organization of a system in terms of its sub-systems and modules, their interdependencies and relationships, their interaction with external systems, and how they communicate with each other cohesively in order to deliver the intended functionality and meet the definition of the system in question.

In the above representative diagram (a loose form of an architecture diagram), a CRM system has been broken down into, for simplicity, three modules. One of the modules, User Interface, is further broken into multiple modules representing the various UI options the CRM system is expected to support. Further, Rich Client is broken down by client type, and so on. Don’t get carried away by this break-and-divide rule: too much decomposition can result in too many small pieces, making the whole process complicated.

Like its definition, software architecture does not have a single canonical representation. In order to communicate its purpose and meaning to various stakeholders in an unambiguous way, different types of representations, or views, have been developed; deployment architecture doesn’t make any sense to an end user, right? Multiple architectural representations exist, each targeting a different type of audience. For example, Rational’s 4+1 View has:

  1. Use case view (business users)
  2. Logical view (architects)
  3. Development view (developers)
  4. Process View (performance tuning engineers/testers)
  5. Physical View (IT engineers)

Microsoft has a similar one:

  1. Conceptual view (business users)
  2. Logical view (architects)
  3. Physical view (Developers)
  4. Implementation view (IT engineers)

To an extent, architecture is all about making the right trade-offs and decisions at the right time, based on the priority and importance of various system features, because they could potentially influence downstream sub-systems and their behavior.

Design is the realization process of application architecture in terms of a specific technology, platform and set of tools. The design process breaks sub-modules down to lower levels and provides technology-specific design definitions for each broken-down module. In the case of the development/physical view, it involves defining abstractions, contracts or interfaces, inter-class or inter-module communication mechanisms, and classes using various patterns, all detailed enough for a programmer to actually implement the design by writing code. Design specs are the input for programmers. In some cases, an intermediate step is introduced between design and development to develop pseudo-code implementations of complex logic/algorithms in the design.

In today’s RAD ecosystem, the line between design and architecture is often blurred, making it difficult to understand the difference between them. Of course, for small applications one may not see the real benefit of developing both architecture and design, due to the unnecessary overhead of redundant information and duplicated effort.


I had been thinking of writing on this subject for quite some time now, but thanks to a co-architect who forwarded me a write-up I wrote a few years ago on this subject, my laziness was finally cut short! :-)