Let’s get it right. Virtualization is about reducing the physical footprint of IT infrastructure and maximizing its utilization. It helps drastically reduce the cumulative maintenance and one-time procurement costs of physical hardware. Since a virtualized environment (VE) is a sandboxed, logical representation of a physical environment hosted on real hardware, it becomes practically possible to host multiple VEs on a single physical computer, and an enterprise’s business and IT applications can be consolidated onto fewer physical machines:

Virtualization Visualized

 

Virtualization helps reduce infrastructure complexity (less hardware), power consumption (greener IT) and overall operational overhead.

Cloud offers the same infrastructure and cost benefits as virtualization – reduced physical hardware and the associated operational/maintenance overhead. Almost all cloud service providers today have implemented their cloud platforms using virtualization, and that is what confuses people about virtualization versus cloud. Simply put, virtualization is one way of implementing a cloud. In other words, nothing stops one from implementing a cloud infrastructure with hundreds of blade servers instead of virtualization, for example. Beyond that, cloud enables sharing compute resources – RAM, disk space, processors, network bandwidth – from a central pool on an on-demand basis.

Cloud Visual

The US Federal Government’s National Institute of Standards and Technology (NIST) lists the following characteristics as essential for the cloud model:

  • On-demand self-service (cloud provisioning by consumers themselves)
  • Broad network access (from a variety of/heterogeneous devices and software apps)
  • Service usage to be measurable (monitoring & measuring resource usage – CPU, memory, disk space, network bandwidth, etc.)
  • Elasticity (Cloud resources to be easily provisioned for increased & reduced load)
  • Resource pooling (Computing resources pooled to be able to transparently share among multiple cloud consumers based on their demand)

As you can see, the above require capabilities that go well beyond the virtualization part alone.

Think of virtualization and cloud computing as parallel to classic ASP.NET web services and SOA. While ASP.NET web services are a means of realizing SOA, they are not SOA by themselves.

This topic has been due from my end for quite a long time. Things kept me busy and I couldn’t get the bandwidth to write about it. All right, let’s dive into the topic.

WCF Data Services supports two providers out of the box: the Entity Framework and Reflection providers. In my earlier post I discussed the latter: OData feeds using POCO entities. In a nutshell, every feed that your service exposes should be implemented as a property of type IQueryable for data retrieval; if you want to support insert, update and delete operations as well, then you should also implement the IUpdatable interface. In this post, I will show how you can use the Entity Framework provider of WCF Data Services to publish OData services. Believe me – it is extremely simple compared to the Reflection provider.

Due to its simplicity and light weight, I am going to use the familiar Northwind database for this post. The first step is to create the entity data model (EDM) for the data source – the Northwind database. I will quickly run through the steps to do this (assuming you have Northwind running on a SQL Server instance):

  1. Open Visual Studio and create a Class Library type project
  2. Add a new ADO.NET Entity Data Model (under Data Templates)
  3. On the subsequent dialog, select Generate from database and follow the wizard steps, selecting the Northwind database and all the table objects (Views and Stored Procedures are not required for now).

Now, you will see an .edmx file created and added to your class library project. You will also see an app.config file added to the project with a connection string entry looking like this (in a single line):

<connectionStrings>
<add name="NWEntities" connectionString="metadata=res://*/Northwind.csdl|res://*/Northwind.ssdl|res://*/Northwind.msl;
provider=System.Data.SqlClient;provider connection string=&quot;Data Source=(local);Initial Catalog=Northwind;
Integrated Security=True; &quot;" providerName="System.Data.EntityClient" />
</connectionStrings>

The designer-generated class for the .edmx file will have the entity and association classes defined for each table you selected in the EDM creation wizard. But the most important generated class is the entity container class derived from ObjectContext, like the highlighted one below:

image

The second step is to create a class that wraps and exposes the entity model as OData feeds (entity sets), plus a WCF service host for the feeds. Let’s create a class (name it as you like) and derive it from the DataService<T> generic type. This is the same as what I did in my earlier post demonstrating the Reflection provider.

image

NWEntities is the name of my Entity Framework container class created during the first step (creating the EDM for Northwind). The data service initialization method must have exactly the signature shown above, including the method name. Yes, this method could have been defined as a virtual method on the DataService class to be overridden in the derived class with custom access rules, but Microsoft chose to have developers define it by convention, with some hard-wiring somewhere (I would be glad if you know why – leave a comment). It can be left completely empty, in which case WCF configures the service class with default values; but if you want to control access to the various entities in the EDM, this is the only place you can do it, and you cannot change the rules later unless you restart the service and recreate the app domain.

Now the hosting part:

  1. Add a Console type application to the default solution
  2. Copy the EDM connection string created (shown above) to the console application’s configuration file
  3. Add the following few lines of code (you will also have to add a project reference to the class library containing the EDM and the data service wrapper class):

image

Running the host application should bring up the following window:

image

Now the OData service is ready for consumption. Open Internet Explorer (or any application that can show the XML response of an HTTP request, such as Fiddler) and navigate to the service address shown in the console window. The HTTP response is an XML document listing the feeds (entity sets) available at the service URL:

image

Depending on the tables and other objects selected during the EDM creation process and the permissions set in the InitializeService method, you may see fewer or more feeds. Nevertheless, you can now make all the OData data retrieval requests (HTTP GET) – refer to my earlier post for a sample. But what about entity creation, update and delete operations? We use the HTTP verbs POST, PUT and DELETE for create, update and delete respectively, with the body of the HTTP request carrying the values for the entity concerned. For example, to add a new category called Milk Products, you make an HTTP POST request as below (I used Fiddler to create all the HTTP requests shown from here on):

image

Note that I am not passing certain Category properties, namely Category ID and Picture: you can omit any nullable columns (Picture) and identity columns (Category ID). The Content-Type header specifies the payload format (in this case Atom XML, but it can be JSON too). On successful creation, the service returns an Atom XML document representing the newly created entity, with any new identity values filled in:
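If you are composing such a POST payload from code rather than by hand, any XML library will do. Here is a minimal sketch in Python (the Atom/ADO.NET Data Services namespace URIs are the standard ones; the property names follow Northwind’s Categories table; the Content-Type value mirrors the request above):

```python
# Sketch: composing the AtomPub <entry> payload for an HTTP POST that
# creates a new Category. CategoryID and Picture are omitted: identity
# and nullable columns need not be supplied on insert.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
D = "http://schemas.microsoft.com/ado/2007/08/dataservices"
M = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"

ET.register_namespace("", ATOM)
ET.register_namespace("d", D)
ET.register_namespace("m", M)

def category_entry(name, description):
    """Build an Atom entry carrying the new Category's properties."""
    entry = ET.Element(f"{{{ATOM}}}entry")
    content = ET.SubElement(entry, f"{{{ATOM}}}content",
                            {"type": "application/xml"})
    props = ET.SubElement(content, f"{{{M}}}properties")
    ET.SubElement(props, f"{{{D}}}CategoryName").text = name
    ET.SubElement(props, f"{{{D}}}Description").text = description
    return ET.tostring(entry, encoding="unicode")

body = category_entry("Milk Products", "Dairy items")
headers = {"Content-Type": "application/atom+xml"}  # Atom payload; JSON also possible
print(body)
```

Sending this body with the headers above via POST to the Categories feed URL would perform the insert.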

image

You can add child entities directly to a parent. The following HTTP POST creates a new product for the newly created category ID 10:

image

Again, category ID and product ID are not passed because the POST URI itself specifies the category ID implicitly, and product ID is an identity column. The response on successful product creation looks like this:

image

Another scenario: what if you would like to add a parent and its children in one shot? Let’s add a category and a couple of products to it in the same HTTP request. Note how the child products (XML in purple) associated with the parent (XML in dark red) are specified inline in the entry itself:

image

image

The HTTP response for this request will contain only the newly created parent’s Atom XML, but at the backend WCF Data Services creates the children and attaches them to the newly created parent (fire up a SQL query and check for yourself).

Let’s see how you can update an entity, here an existing category with ID 13:

image

As per the OData specification, a PUT operation against an entity completely replaces the existing one in the data source; the direct effect is that if you omit a property in the request, it will be reset to its default value or empty/null as the case may be. To avoid this, OData recommends the HTTP MERGE operation, which merges the new values into the existing entity without overwriting the rest.
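The replace-versus-merge semantics can be illustrated with a small sketch (plain Python dictionaries stand in for entities; the property names are from the Categories example above):

```python
# Sketch: replace-vs-merge semantics of PUT and MERGE, modeled on dicts.
# 'schema' lists all entity properties; PUT resets anything not supplied.
def put(schema, payload):
    """PUT: full replace -- unsupplied properties fall back to None."""
    return {prop: payload.get(prop) for prop in schema}

def merge(existing, payload):
    """MERGE: update only the supplied properties, keep the rest."""
    updated = dict(existing)
    updated.update(payload)
    return updated

schema = ["CategoryID", "CategoryName", "Description"]
category = {"CategoryID": 13, "CategoryName": "Milk Products",
            "Description": "Dairy"}
print(put(schema, {"CategoryID": 13, "Description": "Fresh dairy"}))
# CategoryName is lost on a full replace...
print(merge(category, {"Description": "Fresh dairy"}))
# ...but preserved on a merge.
```

If CategoryName were backed by a non-nullable column, the first (PUT-style) update would fail at the data source.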

image

The above operation updates only the Description property, leaving the category name and picture as is. Had it been a PUT request, the other properties would be set to null/empty, and if the columns backing them were non-nullable the PUT operation would fail with HTTP status code 500. It is also possible to target a PUT request at a specific entity property to update it directly. The following request updates the Description property of the Category with ID 13:

image

Note that the content type is flipped back to application/xml. Finally the deletion, which is pretty straightforward:

image

The above request deletes the category element with ID 13 from the data source. Obviously, the delete would fail if the target entity has associated children.

The important thing to note here is that all the CRUD operations performed are routed by the WCF Data Services infrastructure through the EDM created for the data source, and hence you can perform custom operations via stored procedure mappings for CUD requests (POST, PUT, MERGE and DELETE).

All right, I have shown you all the examples using raw XML markup and HTTP requests/responses, but do you really have to deal with such a raw format? The answer is no, unless you really like playing with XML directly and making HTTP requests yourself. OData has client SDKs for various platforms such as JavaScript, Objective-C, PHP and Java. As far as .NET is concerned, Visual Studio automatically creates all the necessary proxy entity classes to interact with the OData service and perform all supported CRUD operations.

Here is a simple client code that retrieves all products belonging to category ID 5 (assuming you have made a service reference to the WCF Data Service):

OData Entity Framework Client

If everything goes fine, running the above code should output:

image

I seriously hope this post helped you realize the power of OData services and their applicability in enterprise and B2B scenarios – and especially how super-duper easy it is to implement one with WCF and Entity Framework. In a future post I will discuss another cool little OData feature that I have not mentioned yet! :-)

Applications use data from many sources – mainframes, flat files, XML, RDBMS, Excel, Microsoft Access and the list goes on. In parallel, we have many data access APIs/libraries depending on the development platform: for C/C++ applications we have ODBC, for COM/DCOM there is ADO, the Java world enjoys JDBC and .NET applications have ADO.NET. That said, in this Internet-ubiquitous world, data is increasingly exposed via the Internet too (a recent entry into this arena is Microsoft “Dallas”). Maturing standards and protocols around the Internet, ever-evolving network technology stacks and universal data representation schemes are slowly inclining organizations towards this new model of “data over the Internet”, on both the consuming and publishing ends.

So how do we access data that is exposed over the web in a standard way, irrespective of the platform? Enter OData! OData (Open Data) is the ODBC/ADO/JDBC/ADO.NET equivalent for accessing “Internet-enabled” business data. Independent of operating systems and development platforms, OData is a data access API (an application-level protocol, to be specific – remember the OSI layers from the good old college days?) based on other standards: HTTP, REST and AtomPub (RFC 5023, a data publishing protocol which is itself based on the Atom syndication format). OData makes certain extensions to AtomPub to deliver a richer set of functionality. In the diagram below, the left-hand side represents the payload or data exchange format stack and the right-hand side represents the communication stack for OData.

Since all the constituent standards of OData are operating system and programming language independent, you can use OData to consume data (from any OData publisher) from a variety of development environments, OS platforms and devices.


From a consumer’s perspective, OData lets you do resource-specific CRUD operations purely via HTTP/HTTPS using plain XML, Atom or JSON as the payload. On the backend, the OData publisher uses the REST model to expose its resources (Consumers, Products, Orders, etc.) along with CRUD and service-specific operations on those resources. Of course, the publisher need not support all the CRUD operations on all the resources; it can decide and set appropriate operation permissions on its resources and actions. Let us see an example: assume we have a simple database with a list of law firms, each with multiple lawyers associated with it. Each lawyer can have more than one area of practice (such as Divorce, Corporate Compliance and Mortgage). Suppose you want to list the available lawyers; had it been an RDBMS, you would write a SQL query like the following:

SELECT * FROM Lawyers;

In OData, you would make a simple HTTP GET request to the OData service as:

http://odata.example.org/Lawyers

Here, http://odata.example.org/ is called the service root URI and Lawyers is the resource path. A service can publish any number of resources at the same root URI. You can pass additional parameters and commands via URI query strings to fine-tune and shape the data. The service returns the result – a list of lawyer entities in this case – in XML format conforming to the AtomPub standard. A typical output is shown below highlighting a single entry from the feed (for brevity, many entries are shown collapsed). Note that the output XML does not show the complete entity information; rather, a link to edit that entity is included (append the href attribute value to the xml:base value at the top, as in http://localhost:5684/Lawyer.svc/Lawyers(93); an OData client can use this URI to get the editable fields from the publisher). Also note that the content element carries the primary key name and its value for the entry.

Extending the above query a bit further, if you want the list of lawyers specializing in accident related law, the typical SQL query would be:

SELECT * FROM Lawyers WHERE PracticeArea='Accident';

The OData HTTP request doing the same job would be:

http://odata.example.org/Lawyers?$filter=PracticeArea eq 'Accident'

The screenshot below shows a sample feed output for this query (there are two entries satisfying this filter, but the second one is collapsed):

We can also have more than one filter condition:

SELECT * FROM Lawyers WHERE PracticeArea='Accident' AND City='Chicago';

http://odata.example.org/Lawyers?$filter=PracticeArea eq 'Accident' and City eq 'Chicago'

To request specific columns (projection), as in the following:

SELECT LawyerID, Name, Exp FROM Lawyers WHERE Exp > 10;

http://odata.example.org/lawyers?$filter=exp gt 10&$select=LawyerID,Name,Exp

The XML fragment below, from the actual output, highlights the projected properties from the above REST query:

To get a specific entity by its key value:

SELECT * FROM Lawyers WHERE LawyerID=1940;

http://odata.example.org/lawyers(1940)/

How about composite primary keys? Pretty simple:

SELECT * FROM Lawyers WHERE LawyerID=1940 AND OfficeID='IL2159';

http://odata.example.org/lawyers(lawyerid=1940,officeid='IL2159')/

Notice that non-numeric key values are enclosed in single quotes in the URI, irrespective of their context. To access a specific property, say birth date, of the entity with ID 491:

SELECT BirthDate FROM Lawyers WHERE LawyerID=491;

http://odata.example.org/lawyers(491)/birthdate

If you want just the value without the XML envelope, append $value to the URI, as in http://odata.example.org/lawyers(491)/birthdate/$value.

Here are some more interesting URI queries for the curious:

http://odata.example.org/offices('IL2159')/lawyers?$top=5&$orderby=name desc

http://odata.example.org/offices('NYC19')/lawyers?$filter=(practicearea eq 'Loans') and (startswith(name, 'Nick') eq true)&$orderby=name desc

http://odata.example.org/offices('WAM2')/lawyers?$filter=(address/city eq 'Redmond') and (practicearea eq 'corporate' or practicearea eq 'banking') and (year(jdate) ge 1999)&$skip=10&$top=10&$select=Name,jdate

These URIs may look scarily complex, but if you pay close attention to them with a pinch of SQL semantics, they are straightforward and easy to understand! For more filtering options, refer to section 4 of the OData URI Conventions.
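To make the pattern concrete, here is a small sketch (Python; the service root and property names are the made-up ones from this article) that composes such query URIs from the $-prefixed system query options:

```python
# Sketch: composing OData query URIs from system query options.
from urllib.parse import quote

SAFE = "$()',="  # characters left intact per OData URI conventions

def odata_uri(root, resource, **options):
    """Append system query options ($filter, $select, $top, ...) to a
    resource path, percent-encoding spaces inside the expressions."""
    query = "&".join(f"${name}={quote(str(value), safe=SAFE)}"
                     for name, value in options.items())
    return f"{root}/{resource}" + (f"?{query}" if query else "")

uri = odata_uri("http://odata.example.org", "lawyers",
                filter="practicearea eq 'Accident' and city eq 'Chicago'",
                select="LawyerID,Name", top=5)
print(uri)
```

The helper is deliberately naive (no escaping of quotes inside string literals, for instance), but it captures how every option is just a `$name=value` pair glued onto the resource path with `?` and `&`.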

By default, an OData service returns results in AtomPub XML format. However, if you prefer, you can get them in JSON too (if the publisher supports it):

http://odata.example.org/Lawyers?$filter=City eq 'Chicago'&$select=LawyerID,Name,City&$format=json

WCF Data Services (formerly ADO.NET Data Services, codename “Astoria”) currently does not support this query string convention for returning JSON data. Instead, you have to specify the JSON MIME type in the Accept header of the HTTP request.
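Composing such a request from code is straightforward; here is a sketch with Python’s standard library (the localhost URL is the sample service from this post):

```python
# Sketch: asking a WCF Data Service for JSON via the Accept header
# (the $format=json query option is not honored by WCF Data Services).
import urllib.request

req = urllib.request.Request(
    "http://localhost:5684/Lawyer.svc/Lawyers(7350)/",
    headers={"Accept": "application/json"},
)
# The request is only composed here, not sent; urllib.request.urlopen(req)
# would perform the actual GET against a running service.
print(req.get_header("Accept"))  # → application/json
```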

GET http://localhost:5684/Lawyer.svc/Lawyers(7350)/ HTTP/1.1
Accept: application/json
Host: localhost:5684

HTTP/1.1 200 OK
Server: ASP.NET Development Server/10.0.0.0
Date: Wed, 13 Oct 2010 20:21:12 GMT
X-AspNet-Version: 4.0.30319
DataServiceVersion: 1.0;
Content-Length: 458
Cache-Control: no-cache
Content-Type: application/json;charset=utf-8
Connection: Close

{
  "d": {
    "__metadata": {
      "uri": "http://localhost:5684/Lawyer.svc/Lawyers(7350)",
      "type": "LawyerOData.LawyerInfo"
    },
    "LawyerID": 7350, "Name": "John Woodlee", "Exp": 17,
    "PracticeArea": "Employment", "BirthDate": "\/Date(20995200000)\/",
    "JDate": "\/Date(423014400000)\/", "OfficeID": "MAB1",
    "Address": {
      "__metadata": { "type": "LawyerOData.AddressInfo" },
      "AddressInfoID": 72, "AddressLine": "Cinema Way",
      "City": "Natick", "State": "MA"
    }
  }
}
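Note the BirthDate and JDate values above: WCF Data Services serializes dates in JSON as milliseconds since the Unix epoch wrapped in "\/Date(...)\/" (after JSON unescaping, "/Date(...)/"). A decoding sketch:

```python
# Sketch: decoding the "/Date(<milliseconds>)/" JSON date convention
# emitted by WCF Data Services (value shown after JSON unescaping).
import re
from datetime import datetime, timezone

def parse_odata_date(value):
    """Extract the millisecond count and convert to a UTC datetime."""
    match = re.fullmatch(r"/Date\((-?\d+)\)/", value)
    if not match:
        raise ValueError(f"not an OData JSON date: {value!r}")
    return datetime.fromtimestamp(int(match.group(1)) / 1000, tz=timezone.utc)

print(parse_odata_date("/Date(20995200000)/").date())  # BirthDate above → 1970-09-01
```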

Now that we have seen some basic examples of OData requests, let us briefly discuss some of the concepts that make up OData as a whole.

Property: the most granular item in an OData stream, equivalent to a column in a database table. A property is typed and can be of a simple type (integer, float, string, boolean, etc.) or a complex type – another entry (see the next paragraph). Think of it as a class property whose type may itself be a class, as in Lawyer. Example: http://odata.example.org/lawyers(491)/Name refers to the Name property of lawyer ID 491.

Entry: a structured item that contains one or more properties, much like a row in a table. An entry can have any number of properties, with a single or composite primary key. Example: http://odata.example.org/lawyers(491)/ represents a lawyer entry whose primary key is 491.

Feed: a collection of entries; as you might have guessed, it is conceptually like a database table. Typically OData services expose multiple feeds representing multiple resource sets. For example, the URI http://odata.example.org/lawyers represents the lawyers feed.

Service Documents: XML documents OData services expose for clients to discover the available feeds. Generally, an OData service makes its service document available at its root URI. A sample service document exposing a single feed, Lawyers, is shown below:

Metadata: describes the structure of the entries available in the service feeds. It provides details including the entry name, its constituent property names and their types, primary keys and entry relationships, in XML form. Think of this as what WSDL is to web services. (If you know Entity Framework, the metadata is nothing but the CSDL document of the entity data model.) You can access a service’s metadata by appending $metadata to the service root URI, as in http://odata.example.org/$metadata. OData clients such as Visual Studio’s Add Service Reference feature use this URI to generate the client-side proxy and entity classes. The screenshot below shows the EDM model for the sample OData service I used for this article:

Associations: as mentioned above, an association describes the relationship between two entries (in database terms, foreign key relationships). Association information is available as part of the service metadata. CSDL refers to association properties of entries as navigation properties, because they are used to navigate from a parent entry to its children, grandchildren and so on. Example: http://odata.example.org/lawyers(491)/addresses represents the list of addresses (a feed) that lawyer ID 491 has.

In OData, associations are also called links. Don’t worry about these terminologies; they are just different names for the same thing depending on the context.

Though OData is new, it is already supported by several products, among them Microsoft SharePoint 2010, Windows Azure Table Storage and IBM WebSphere. Of course, you can add OData support to your own data/products by writing your own data service provider (DSP).

If you would like to know more about the OData protocol, check out http://www.odata.org. It has everything you need to get started and use the protocol in production environments. It also has some public sample OData services that you can play with to understand the protocol better.

I hope this post gave you a broad idea about OData and its querying features. I’ll talk about update, delete and insert features in a future post.

I am sure most of you have seen tons of definitions, tutorials and examples of the MVVM pattern in the context of Windows Presentation Foundation (WPF) and Silverlight (a stripped-down WPF for the browser environment). However, many people tell me that those examples don’t really help them understand the pattern well enough to use it in their applications, or at least to think about the pattern’s applicability in a given scenario. This post is an attempt to address this gap and demystify MVVM enough to start implementing it in code.

Like its ancestors MVC and MVP (and their variations), the MVVM pattern serves a specific purpose: separating presentation (ASP.NET, Windows Forms, WPF, Silverlight) from data (business data such as orders, customers and contacts) and the logic of displaying it (responding to control events, displaying data using various controls, etc.). Through this separation, another benefit MVVM brings to the table is testability of the presentation logic independent of the UI. This especially makes sense where UI designers and developers have to work on the same artifact to do their respective jobs and follow their own workflows. Imagine a UI designer working on an .aspx page and a developer on the page’s code-behind. The designer may do multiple iterations trying various design combinations and themes and having them reviewed by the business, while the developer wouldn’t really care about any of it. On the flip side, the developer wants to do unit testing, code analysis, bug fixing, etc. that the designer doesn’t care about. These two roles can work independently, without friction, if there is a mechanism that seamlessly integrates their work at the end. MVVM is that mechanism, and it addresses this exact problem with little or no effort!

Model: the data we intend to display in the UI via various controls. It could come from any data source, such as a WCF service, an OData provider, SQL Server or an XML file.

View: the actual UI – web forms, Windows Forms, WPF/Silverlight (the consumer of the data, the Model).

ViewModel (VM): for many people, the confusingly named piece of the pattern! The VM acts as the data source for any component (consumer) interested in the data it exposes, yet it does not know anything about its consumers. The consumer could be a View (which is the case most of the time) or anything else that knows how to make the best of the VM’s data. As a matter of fact, the VM doesn’t care how the consumer displays or uses the data. It is this fact that makes a VM testable without a UI, via mocking. After testing, you just replace the mock with the actual UI and everything works as intended.

Generally, ViewModels are designed around their consumers’ data requirements (a Model for the View, hence the name ViewModel). For example, a VM for a dialog showing a list of products will have public properties and methods for the product list (an ICollection, maybe?), sorting and filtering – all for that dialog’s consumption.
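Although this post is about WPF/Silverlight (and hence C#), the UI-agnostic nature of a VM can be sketched in a language-neutral way. Here is a minimal Python illustration of the product-list dialog VM described above (all names are hypothetical):

```python
# Sketch: a ViewModel for a product-list dialog. It exposes data and
# presentation logic (filtering, sorting) but knows nothing about any
# UI, so a test can exercise it directly in place of a View.
class ProductListViewModel:
    def __init__(self, model):
        # 'model' is the data source (e.g. a repository over a database
        # or an OData service); here just an iterable of (name, price).
        self._products = list(model)
        self.name_filter = ""

    @property
    def products(self):
        """Products visible to the consumer: filtered, then sorted by name."""
        visible = [p for p in self._products
                   if self.name_filter.lower() in p[0].lower()]
        return sorted(visible, key=lambda p: p[0])

model = [("Chai", 18.0), ("Aniseed Syrup", 10.0), ("Chang", 19.0)]
vm = ProductListViewModel(model)
vm.name_filter = "cha"
print([name for name, _ in vm.products])  # → ['Chai', 'Chang']
```

A View would bind its list control to the products property; a unit test simply asserts on the same property, no UI required.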

Remember that there can be more than one consumer pulling data from a VM; you might have come across cases where a single VM provides data to more than one View. And who, by the way, provides data to a VM to serve to its consumers? The Model.

So, how does MVVM fit into WPF/Silverlight development? Stay tuned!

“What is the difference between architecture and design?” Or how is one different from the other? From time to time I come across this question at work and in online forums – an all too often discussed topic in the software world!

This is not an uncommon question given today’s poorly defined and often overlapping roles and responsibilities that architects, senior developers and developers play in many organizations. It becomes more confusing when a single person, or everyone in a small team, does a mix of architecture, design and coding without defined boundaries. The mere term architecture can mean many things depending on the context it is used in, but for this write-up I mostly mean application architecture and design. Yes, other architecture types exist in the software world!

Application architecture is all about decomposing an application into modules and sub-modules, defining their attributes and responsibilities, and the relationships among them. During this process, numerous parameters are considered and thoroughly analyzed, and based on the merits and demerits of each, various constraints and trade-offs are accepted. Strict lines are drawn to arrive at an optimal architecture that solves the business problem and aligns well with the organization’s enterprise architecture or directly with the business requirements. Ideally, software architecture should be technology- and platform-neutral, but often it is defined around a specific technology such as J2EE, .NET or open source – which is not too bad, in my view, though it potentially locks one into a specific stream where alternatives exist. Instead of going deep into the subject, here is the crux of software architecture.

The term architecture itself doesn’t have a global definition, but standards bodies such as IEEE and the SEI have their own versions and reinforce them in their respective publications and talks. Here is my version, closely derived from one of the standards bodies’ definitions, that I feel captures software architecture well: software architecture is the definition and description of the organization of a system in terms of its sub-systems and modules – their interdependencies, relationships and interactions with external systems, and how they communicate with each other cohesively to deliver the intended functionality and meet the definition of the system in question.


In the above representative diagram (a loose form of an architecture diagram), a CRM system has been broken down into, for simplicity, three modules. One of them, User Interface, is further broken into multiple modules representing the various UI options the CRM system is expected to support. Further, Rich Client is broken down by client type, and so on. Don’t get carried away by this divide-and-break rule: too much decomposition can result in too many small pieces, making the whole process complicated.

Like its definition, software architecture does not have a single canonical representation. In order to communicate its purpose and meaning to various stakeholders unambiguously, different types of representations, or views, have been developed (deployment architecture doesn’t make any sense to an end user, right?). Multiple architectural representations exist, each targeting a different type of audience. For example, Rational’s 4+1 View model has:

  1. Use case view (business users)
  2. Logical view (architects)
  3. Development view (developers)
  4. Process View (performance tuning engineers/testers)
  5. Physical View (IT engineers)

Microsoft has a similar one:

  1. Conceptual view (business users)
  2. Logical view (architects)
  3. Physical view (Developers)
  4. Implementation view (IT engineers)

To an extent, architecture is all about making the right tradeoffs and decisions at the right time based on the priority and importance of various system features because they could potentially influence other down-level sub-systems and their behavior.

Design is the realization process of the application architecture in terms of a specific technology, platform and set of tools. The design process breaks sub-modules down to lower levels and provides technology-specific design definitions for each. In the case of the development/physical view, it involves defining abstractions, contracts or interfaces, inter-class and inter-module communication mechanisms, and classes using various patterns – all detailed enough for a programmer to actually implement the design by writing code. Design specs are the input to programmers. In some cases, an intermediate stage is introduced between design and development, intended to develop pseudo-code implementations of complex logic or algorithms in the design.

In today’s RAD ecosystem, the difference between design and architecture is often blurred, making it difficult to tell them apart. Of course, for small applications one may not see a real benefit in developing both architecture and design, due to the unnecessary overhead of redundant information and duplicated effort.

 

I had been thinking of writing on this subject for quite some time, but thanks to a co-architect who forwarded me a write-up I wrote a few years ago on this subject, my laziness was finally cut short! :-)