Speeding up your AngularJS applications

In general AngularJS applications are quite fast, especially when compared to more traditional browser-based applications that constantly post back to the server. However, there are always a few things that will help performance and make an application even faster.

 

Disabling Debug Data

Normally AngularJS adds several things, like CSS classes and some scope-related properties, to DOM elements. These are not needed to run the application and only exist to support development tools like Protractor and Batarang. In production you can save some overhead by disabling this debug data using the $compileProvider.debugInfoEnabled() function.

demoApp.config(function($compileProvider) {
  $compileProvider.debugInfoEnabled(false);
});

 

Explicit dependency injection annotations

Another option to speed up your application is using explicit dependency injection annotations. If the DI annotations are not present, AngularJS has to parse functions to discover the parameter names, something that can be avoided by adding explicit annotations. The annotations can be added manually, which can be tedious, or automatically using something like ng-annotate with either a Gulp or Grunt task.

Adding the ngStrictDi directive to the same element as the ngApp directive can help you find functions that still rely on implicit annotations: with strict DI enabled, AngularJS throws an error instead of silently parsing them.
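As a rough sketch of why implicit annotations cost anything at all: without explicit annotations AngularJS has to inspect a function's source text to find its parameter names, and minification breaks that entirely. The helper below is an illustrative stand-in, not Angular's actual implementation:

```javascript
// Illustrative only: roughly how a framework can recover parameter names
// from a function's source text when no explicit annotations are given.
function getParamNames(fn) {
    var match = fn.toString().match(/^\s*function[^(]*\(\s*([^)]*)\)/);
    return match[1].split(',')
        .map(function (s) { return s.trim(); })
        .filter(Boolean);
}

function controller($scope, $http) { /* ... */ }

// Implicit style: the names must be parsed out (and a minifier would rename them).
console.log(getParamNames(controller)); // ["$scope", "$http"]

// Explicit array annotation: the names are plain data, so no parsing is
// needed and minification is safe.
var annotated = ['$scope', '$http', function (scope, http) { /* ... */ }];
```

The same explicit array style works for controllers, services and directives alike, which is exactly what ng-annotate generates for you.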

 

Reducing the number of $apply() calls

Another helpful option is to reduce the number of $apply() calls that result from $http requests finishing. When multiple $http requests are issued as a page loads, each response triggers an $apply() call, causing all watches and data bindings to be reevaluated. By combining requests that finish at almost the same time into a single $apply() call we can increase the load speed of your application, something that can be done using $httpProvider.useApplyAsync().

demoApp.config(function($httpProvider) {
  $httpProvider.useApplyAsync(true);
});

 



Enjoy!

Testing an AngularJS directive with its template

 

Testing AngularJS directives usually isn't very hard. Most of the time it is just a matter of instantiating the directive using the $compile() function and interacting with the scope or related controller to verify that the behavior is as expected. However that leaves a bit of a gap, as the interaction between the directive's template and its scope usually isn't tested. With really simple templates you can include the markup in the template property, but using templateUrl and loading templates on demand is much more common, especially with more complex templates. When it comes to unit testing, the HTTP request to load the template is not going to work, and as a result the interaction isn't tested. Sure, it is possible to use the $httpBackend service to fake the response, but that still doesn't use the actual template so it doesn't really test the interaction.

 

Testing the template

It turns out testing the template isn't that hard after all; there are just a few pieces to the puzzle. First of all, Karma can serve up other files besides the normal JavaScript files just fine, so we can tell it to serve our templates as well. With the pattern option for files we can tell Karma to watch and serve the templates without including them in the default HTML page loaded. See the files section from the karma.conf.js file below.

files: [
    'app/bower_components/angular/angular.js',
    'app/bower_components/angular-mocks/angular-mocks.js',
    'app/components/**/*.js',
    'app/*.js',
    'tests/*.js',
    {
        pattern: 'app/*.html',
        watched: true,
        included: false,
        served: true
    }
],

 

With that the files are available on the server. There are two problems here though. First of all, when running unit tests the mock $httpBackend is used and that never does an actual HTTP request. Secondly, the file is hosted at a slightly different URL: Karma includes '/base' as the root of our files. So just letting AngularJS load the template is out of the question. However, if we use a plain XMLHttpRequest object, the mock $httpBackend is completely bypassed and we can load what we want. Using the plain XMLHttpRequest object has a second benefit in that we can do a synchronous request instead of the normal asynchronous request and use the response to pre-populate the $templateCache before the unit test runs. Synchronous HTTP requests are not advisable on the Internet and should be avoided in production code, but in a unit test like this they work perfectly fine.

So taking an AngularJS directive like this:

angular.module('myApp', [])
    .directive('myDirective', function () {
        return {
            scope: {
                clickMe: '&'
            },
            templateUrl: '/app/myDirective.html'
        };
    });

 

And a template like this:

<button ng-click="clickMe()">Click me</button>

 

Can be easily tested like this:

describe('The myDirective', function () {
    var element, scope;

    beforeEach(module('myApp'));

    beforeEach(inject(function ($templateCache) {
        var templateUrl = '/app/myDirective.html';
        var asynchronous = false;
        var req = new XMLHttpRequest();
        req.onload = function () {
            $templateCache.put(templateUrl, this.responseText);
        };
        req.open('get', '/base' + templateUrl, asynchronous);
        req.send();
    }));

    beforeEach(inject(function ($compile, $rootScope) {
        scope = $rootScope.$new();
        scope.doIt = angular.noop;

        var html = '<div my-directive="" click-me="doIt()"></div>';
        element = $compile(html)(scope);
        scope.$apply();
    }));

    it('template should react to clicking', function () {
        spyOn(scope, 'doIt');

        element.find('button')[0].click();

        expect(scope.doIt).toHaveBeenCalled();
    });
});

 


Now making any breaking change to the template, like removing the ng-click, will immediately cause the unit test to fail in Karma.

 

Enjoy!

angular.module("module") is an anti-pattern

 

There are two ways to use the angular.module() function. There is the call with one parameter, which returns an existing module, and there is the call with two parameters, which creates a new module. The second way, where a new module is created, is perfectly fine and should be used. However the first option, where an existing module is loaded, should be considered an anti-pattern in most cases and should not be used unless there is an exceptional and very good reason.

 

What is wrong with angular.module("module")?

Why should this usage be seen as an anti-pattern? Well, both creating and retrieving a module with angular.module() return the module so it can be extended. And that is exactly where the problem is. When you create a new module in a JavaScript file you can use that reference to add anything you want; there is no need to load it again. So the only place loading an existing module is needed is when you want to add something to it in another JavaScript file.
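To make the two call forms concrete, here is a tiny mock of the angular.module() registry semantics. This is an illustrative sketch, not AngularJS's actual implementation:

```javascript
// A minimal sketch of angular.module()'s two call forms (illustrative only).
var fakeAngular = { _modules: {} };

fakeAngular.module = function (name, requires) {
    if (requires !== undefined) {
        // Two arguments: define a (possibly new) module.
        this._modules[name] = { name: name, requires: requires };
    } else if (!this._modules[name]) {
        // One argument: look up an existing module, or fail loudly.
        throw new Error("Module '" + name + "' is not available!");
    }
    return this._modules[name];
};

var created = fakeAngular.module('demoApp', []);   // creates the module
var fetched = fakeAngular.module('demoApp');       // retrieves the very same object
console.log(created === fetched);                  // true
```

A subtlety the sketch also captures: calling the two-argument form twice silently redefines the module, which is one more reason to define each module exactly once.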

Splitting modules introduces a big risk: as soon as you split an AngularJS module into separate files you run the risk of loading a partially configured module. While AngularJS checks whether all module dependencies can be satisfied at load time, it has no way of seeing whether those modules are complete or not. Missing a complete module produces a very clear error message right at startup time, like this:

Uncaught Error: [$injector:modulerr] Failed to instantiate module mainApp due to:
Error: [$injector:modulerr] Failed to instantiate module mainApp.data due to:
Error: [$injector:nomod] Module 'mainApp.data' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you …

As the complete application fails to load, this is very obvious and hard not to spot.

 

However if you fail to load just a part of a module the errors are a lot less obvious. In this case the error doesn't appear until the missing component is actually needed; everything up to that point will run just fine. The kind of error message you will see is something like:

Error: [$injector:unpr] Unknown provider: productsProvider <- products

The error in itself is clear enough, but discovering it might not be as easy. If the error occurs in a part of the application that is not used often, it might go completely unnoticed.

 

My rule of thumb: always define a complete AngularJS module in one JavaScript file.

 

Want to split the functionality over multiple files? By all means go ahead, but make sure to do so with a new module and use module dependencies to ensure everything is loaded right at application start time. And as angular.module("module") is only required to load a module defined in another file, there should almost never be a need to use it.

Enjoy!

Using browserify to manage JavaScript dependencies

Managing JavaScript dependencies in the browser is hard. Library scripts typically create global variables and functions, and other scripts then depend on those global objects to do their work. This works, but in order to load all required scripts we have to add <script> elements to our HTML, make sure to add them in the right order, and basically know what each script exposes.

The problem

Consider the following client side code:

// Print a message
utils.print("Hello");

 

This depends on another piece of script below:

// Expose the utility object with its print function
var utils = {
    print: function(msg){
        console.log(msg);
    }
};

 

And for all of that to work we have to load the scripts in the right order using some HTML as below:

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>Browserify demo</title>
</head>
<body>

<script src="utils.js"></script>
<script src="demo.js"></script>

</body>
</html>

 

Not really rocket science, but if we want to update utils.print() to call a printIt() function loaded from yet another library, we have to go back to our HTML and make sure we load printIt.js as well. Easy in a small app, but this becomes hard and error prone in larger applications.

 

Browserify to the rescue

Using browserify will make managing these dependencies a lot easier. To understand how it works we first must take a quick look at how NodeJS modules work.

With node each module can take a dependency on another module by requiring it using the require() function. And each module can define what it exports to other modules by using module.exports. The NodeJS runtime takes care of loading the files and adding dependencies inside a module will not require a change anywhere else in the program.

This system works really nicely, but unfortunately the browser doesn't provide this NodeJS runtime capability. One problem is that a call to require() is a synchronous call that returns the loaded module, while the browser does all of its IO asynchronously. In the browser you can use something like RequireJS to asynchronously load scripts, but while this works fine it is not very efficient due to its asynchronous nature. As a result people usually use RequireJS during development and then create a bundle with all the code for production.

Browserify on the other hand will allow us to use the synchronous NodeJS approach with script loading in the browser. This is done by packaging up all files required based on the require() calls and creating one file to load at runtime. Converting the example above to use this style requires some small changes in the code.

The demo.js specifies it requires utils.js. The syntax “./utils” means that we should load the file from the same folder.

var utils = require("./utils");
// Print a message
utils.print("Hello");

 

Next the utils.js specifies what it exports:

// Expose the utility object with its print function

var utils = {
    print: function(msg){
        console.log(msg);
    }
};

module.exports = utils;

 

Next we need to run browserify to bundle the files for use in the browser. As browserify is a node application we need to install node and then, through the node package manager NPM, install browserify with:

npm install -g browserify

 

With browserify installed we can bundle the files into one using:

browserify demo.js > bundle.js

This will create a bundle.js with the following content:

(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({1:[function(require,module,exports){
var utils = require("./utils");
// Print a message
utils.print("Hello");

},{"./utils":2}],2:[function(require,module,exports){
// Expose the utility object with its print function

var utils = {
    print: function(msg){
        console.log(msg);
    }
};

module.exports = utils;
},{}]},{},[1]);

 

Not the most readable, but then that was not what it was designed for. Instead we can see that all the code we need is included. Now by just including this generated file we are ready to start our browser application.

Adding the printIt() function

Doing the same change as above is simple and, best of all, doesn't require any change to the HTML to load different files. Just update utils.js to require() printIt.js, explicitly export the function in printIt.js, rerun browserify and you are all set.

function printIt(msg){
    console.info(msg);
}

module.exports = printIt;

 

Note that it’s fine to just export a single function here.

 

// Expose the utility object with its print function
var printIt = require("./printIt");

var utils = {
    print: function(msg){
        printIt(msg);
    }
};

module.exports = utils;

And the result of running browserify is:

(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({1:[function(require,module,exports){
var utils = require("./utils");
// Print a message
utils.print("Hello");

},{"./utils":3}],2:[function(require,module,exports){
function printIt(msg){
    console.info(msg);
}

module.exports = printIt;

},{}],3:[function(require,module,exports){
// Expose the utility object with its print function
var printIt = require("./printIt");

var utils = {
    print: function(msg){
        printIt(msg);
    }
};

module.exports = utils;
},{"./printIt":2}]},{},[1]);


Again not the most readable code but the printIt() function is now included. Nice and no changes required to the HTML :-)


Proper scoping


As a side benefit browserify also wraps each of our JavaScript files in a function, ensuring proper variable scoping so we don't accidentally leak variables to the global scope.


 


Using browserify works really nicely, but this way we do have to run it again after every change. In the next blog post I will show how to use Gulp or Grunt to automate this, making the workflow a lot smoother.


 


Enjoy!

X things every JavaScript developer should know: Automatic Semicolon Insertion

As with many other things in JavaScript, Automatic Semicolon Insertion is usually not a problem, but it can occasionally bite you if you are unaware of it. What Automatic Semicolon Insertion does is really simple: it basically boils down to semicolons being optional in JavaScript, with the parser injecting them where appropriate. That might sound very nice; after all, you can leave semicolons out and the right thing will happen. For example the following code, without a single semicolon, is completely valid and will print a sum of 3 as expected:

console.log(add(1, 2))

function add(x, y) {
    var sum
    sum = x + y
    return sum
}

 

What basically happens is that the JavaScript parser inserts a semicolon at the end of a line when the next line cannot be parsed as a continuation of the current statement. See section 7.9.1 of the ECMA-262 standard or read it online here.

Now that might sound great but it turns out that Automatic Semicolon Insertion can cause some interesting issues :-(

JavaScript style rules

One thing you might have noticed is that the normal style of writing JavaScript differs from that of C# or Java. Compare the JavaScript code above with the equivalent C# code below:

public int Add(int x, int y)
{
    int sum;
    sum = x + y;
    return sum;
}

Besides the obvious differences in typing and the required semicolons, the opening curly brace of the add function is on the same line as the declaration in JavaScript and on the next line in C#. While the JavaScript convention would work fine in C#, the reverse is not always the case. If we reformatted the JavaScript as follows the code would, in this case, still run fine.

function add(x, y)
{
    var sum
    sum = x + y
    return sum
}

 

However if we return an object literal and format our code the same way, we run into a problem. Consider the following code:

console.log(add(1, 2))

function add(x, y) {
    var sum
    sum = x + y

    return
    {
        sum: sum
    }
}

You might expect this to print an object with a property sum containing the value 3. However the code prints "undefined". Compare that with the following code, which is only formatted differently:

console.log(add(1, 2))

function add(x, y) {
    var sum
    sum = x + y

    return {
        sum: sum
    }
}

 

This will print the expected object with a sum of 3.

 

Blame JavaScript Automatic Semicolon Insertion

This unexpected behavior is caused by semicolon insertion. Instead of the code you most likely think will execute, the following executes:

console.log(add(1, 2));

function add(x, y) {
    var sum;
    sum = x + y;

    return;
    {
        sum: sum
    };
}


Notice the semicolon after the return statement?


That actually means return nothing, i.e. undefined, followed by some unreachable code on the next few lines. That is completely valid JavaScript, so that is what happens :-(


Best practices


The general advice, even though it doesn't fully protect you, is to always add semicolons yourself and not leave it up to the JavaScript parser. That alone doesn't help completely, because the parser will still inject semicolons where it thinks they are appropriate. So the only real solution is to follow the JavaScript formatting conventions and make sure the opening curly brace of an object literal is on the same line as the return statement. That way inserting a semicolon there would be invalid and you can be sure the right thing happens.


Unfortunately ‘use strict’ doesn’t help here either. It will prevent some errors but it doesn’t make semicolons required :-(


Enjoy!

X things every JavaScript developer should know: Comparisons

Another item in the list of things every JavaScript developer should know is how comparisons work. Just as with some other JavaScript, or I should really say ECMAScript, features, anything you know from C# or Java could actually be misleading here.

 

To == or to ===

One of the weird things is that there are actually two equality operators in JavaScript: the double and the triple equals. The == is called the Equals operator, see section 11.9.1 of the ECMAScript standard, and was the original equality operator. Unfortunately the way this operator works is quite a cause for confusion, and as a result the === or Strict Equals operator was introduced, see section 11.9.4 of the ECMAScript standard. It would have been nice if they had just fixed the original operator, but that would have broken existing JavaScript applications.

In general I would advise you to always use the Strict Equals operator === whenever you do a comparison, unless you have a specific need for the behavior of the original operator.

 

What is the problem with ==

I mentioned that == has problems and should be avoided, but it's still helpful to understand those problems. They basically boil down to the fact that the == operator performs type conversions when the two types being compared are not the same. For example the following all evaluate to true:

0 == "0" // true
1 == "1" // true
2 == "2" // true

Sounds reasonable enough right?

 

Unfortunately it isn't quite that simple; all of the following evaluate to false:

false == "false" // false
true == "true" // false

These might seem weird, especially since the following evaluates to true again:

true == !!"true" // true

 
So what is going on here?
 

The Abstract Equality Comparison Algorithm

Section 11.9.3 of the ECMAScript standard describes what is happening here. If one operand is a number and the other a string, as was the case in the first examples, the string is converted to a number and the comparison is done on those numbers. So basically these comparisons were:

0 == 0 // true
1 == 1 // true
2 == 2 // true

So what was the case in the other two comparisons?

In these cases almost the same happens: the Boolean values are converted to a number first. That leaves a number-to-string comparison, where the string is also converted to a number. The result of converting true and false to a number is 1 and 0, but the result of the string-to-number conversions is an invalid number, or NaN. And as NaN is not equal to any other number, those comparisons result in false.



So why did the last comparison, true == !!"true", evaluate to true? Simple: the double-bang operator !! is evaluated first, and a non-empty string is truthy. The end result is the expression true == true, which is obviously true. Sounds reasonable, but it also means that any non-empty string will produce true, so even true == !!"false" evaluates to true :-(
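The conversions described above can be checked directly in any JavaScript console:

```javascript
// The type conversions behind the == results above.
console.log(Number("0"));        // 0   — a numeric string converts cleanly
console.log(Number("false"));    // NaN — "false" is not a numeric string
console.log(Number(false));      // 0   — booleans convert to 0 or 1
console.log(NaN == NaN);         // false — NaN equals nothing, itself included
console.log(!!"false");          // true  — any non-empty string is truthy
console.log(true == !!"false");  // true
```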

Conclusion



The double equals operator is a confusing part of JavaScript's history. You are best off avoiding it and using the Strict Equals operator === instead.

Enjoy!

Converting the RavenDB Northwind database to a more denormalized form

In a previous blog post I demonstrated how to denormalize the RavenDB sample database and use the DenormalizedReference<T> and INamedDocument types from the RavenDB documentation to make life really sweet. That leaves us with one small problem: the original sample database doesn't work with our improved document design. With the sample database, small as it is, loading all documents as a dynamic type, converting them and saving them would be easy enough, but in a real database that would not be practical. So let's look at a better solution for fixing the database.

 

Updating the database on the server

Instead of downloading each document, updating the structure and saving it back to the server, it is much better to do these sorts of actions on the server itself. Fortunately RavenDB has the capability to execute database commands on the server. These update commands can be PatchRequest objects that let you do a large number of things using a nice C# API. And as the ultimate fallback there is the ScriptedPatchRequest, which lets you execute a block of JavaScript code on the server. Why JavaScript? Well, RavenDB stores things as JSON and the server is really not dependent on a .NET client.

Using the ScriptedPatchRequest we can execute a patch either on a single document or on a collection of documents. In this case I want to update all Order documents to reflect their new structure. It turns out this is quite simple.

 

using (IDocumentStore documentStore = new DocumentStore
{
    ConnectionStringName = "Northwind"
}.Initialize())
{
    var javaScript = @"...";

    documentStore.DatabaseCommands.UpdateByIndex(
        "Raven/DocumentsByEntityName",
        new IndexQuery
        {
            Query = "Tag:Orders"
        },
        new ScriptedPatchRequest
        {
            Script = javaScript
        });
}

This code will execute the JavaScript code to patch the document once for each document in the Orders collection.

 

The JavaScript code to execute is quite simple: just make the required changes to the document and you are set.

var company = LoadDocument(this.Company);
this.Company = {Id: this.Company, Name: company.Name};

var employee = LoadDocument(this.Employee);
this.Employee = {Id: this.Employee, Name: employee.FirstName + ' ' + employee.LastName};

var shipVia = LoadDocument(this.ShipVia);
this.ShipVia = {Id: this.ShipVia, Name: shipVia.Name};

this.Lines.forEach(function(line){
    var product = LoadDocument(line.Product);
    line.Product = {Id: line.Product, Name: product.Name};
    delete line.ProductName;
});

In this case I am converting the Company, Employee, ShipVia and Product properties to have the new structure. Additionally I am removing the ProductName from the OrderLine as that is no longer needed.

Sweet :-)

Denormalizing data in RavenDB

One of the things with RavenDB, or NoSQL document databases in general, is that you don't do joins to combine data. Normally you try to model the documents you store in such a way that the data you need for the most common actions is stored in the document itself. That often means denormalizing data. When you first get started with document databases that feels strange; after all, with relational databases we are taught to normalize data as much as possible and not repeat the same values. While normalizing data is great for updates and for minimizing the size of databases, it is less than ideal for querying. This is because when querying we need to join various tables to turn abstract foreign keys into something that is actually understandable by the end user. And while relational databases are pretty good at joining tables, these operations are not free; we pay for that with every query we do. Now it turns out that most applications are read heavy, not write heavy, and as a result optimizing for writes actually hurts something like 99% of the database operations we do.

With a document database like RavenDB we can't even do a join. When we normalize data the client actively has to fetch the related data and turn those abstract identities of other documents into values that are meaningful to a user. Normally the documents in a RavenDB database are much more denormalized than similar data in a SQL Server database would be. The result is that for most operations a single IDocumentSession.Load() is enough to work with a document.
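To make that concrete, a denormalized order document might look roughly like the sketch below (the ids and names here are illustrative, loosely based on the Northwind data):

```javascript
// Illustrative shape of a denormalized Order document: each reference
// carries both the identity and the display name of the related document.
var order = {
    Id: "orders/1",
    Company:  { Id: "companies/85", Name: "Vins et alcools Chevalier" },
    Employee: { Id: "employees/5",  Name: "Steven Buchanan" },
    Lines: [
        { Product: { Id: "products/11", Name: "Queso Cabrales" },
          PricePerUnit: 14, Quantity: 12, Discount: 0 }
    ]
};

// Displaying the order needs no additional loads:
console.log(order.Company.Name);          // "Vins et alcools Chevalier"
console.log(order.Lines[0].Product.Name); // "Queso Cabrales"
```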

 

What data makes sense to denormalize?

Not everything makes sense to denormalize; normally only relatively static data that is frequently needed is denormalized. Why relatively static data? Simple: every time the master document for that piece of data is updated, all documents where it is denormalized also need to be updated. And while that is not especially difficult, it would become a bottleneck if it happened too often. Fortunately there is plenty of data that fits the criteria.

 

The RavenDB example data

The de facto sample data for SQL Server is the Northwind database. And by sheer coincidence RavenDB also ships with this same database, except now in document form. With lots of .NET developers being familiar with SQL Server, this Northwind database is often the first stop when looking at how a document database should be constructed.

[Screenshot: the Northwind collections in RavenDB Studio]

As you can see in the screenshot from the RavenDB Studio, a relatively small number of collections replaces the tables from SQL Server. Nice :-)

[Screenshot: an Order document in RavenDB Studio]

The structure used to save an order is also nice and simple: just the Order and OrderLine classes saved in a single document.

public class Order
{
    public string Id { get; set; }
    public string Company { get; set; }
    public string Employee { get; set; }
    public DateTime OrderedAt { get; set; }
    public DateTime RequireAt { get; set; }
    public DateTime? ShippedAt { get; set; }
    public Address ShipTo { get; set; }
    public string ShipVia { get; set; }
    public decimal Freight { get; set; }
    public List<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public string Product { get; set; }
    public string ProductName { get; set; }
    public decimal PricePerUnit { get; set; }
    public int Quantity { get; set; }
    public decimal Discount { get; set; }
}

 

One missing thing

Nice as this may be, there is one thing missing. Other than the name of the product being sold and its price, no data is denormalized. This means that if we want to display an order to the user, even for the most basic of uses, we will need to load additional documents. For example, the Company property in an order just contains the identity of a customer. If we want to display the order, the very least we would have to do is load the company and display the customer's name instead of its identity. And the same is true for the employee and the shipper.

While this sample database is not denormalized, it turns out to be quite easy to do so ourselves.

 

Denormalizing the RavenDB Northwind database

The first step is to store the related name along with each referred-to identity, as seen below.

[Screenshot: the denormalized Order document in RavenDB Studio]

 

The order is the same, but this time we can do common user-interface operations with just the one document, without being required to load additional documents. It turns out this is quite easy to do. The RavenDB documentation has a nice description of how to do that using INamedDocument and DenormalizedReference<T>. Using this technique makes it really easy and consistent to work with denormalized data and create a document structure like the one above. The changes to the Order and OrderLine classes are minimal; all I had to do was replace the string-typed Company property with one of type DenormalizedReference<Company>.

public class Order
{
    public string Id { get; set; }
    public DenormalizedReference<Company> Company { get; set; }
    public DenormalizedReference<Employee> Employee { get; set; }
    public DateTime OrderedAt { get; set; }
    public DateTime RequireAt { get; set; }
    public DateTime? ShippedAt { get; set; }
    public Address ShipTo { get; set; }
    public DenormalizedReference<Shipper> ShipVia { get; set; }
    public decimal Freight { get; set; }
    public List<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public DenormalizedReference<Product> Product { get; set; }
    public string ProductName { get; set; }
    public decimal PricePerUnit { get; set; }
    public int Quantity { get; set; }
    public decimal Discount { get; set; }
}

 

The DenormalizedReference<T> and INamedDocument are also really simple and straight from the RavenDB documentation.

public class DenormalizedReference<T> where T : INamedDocument
{
    public string Id { get; set; }
    public string Name { get; set; }

    public static implicit operator DenormalizedReference<T>(T doc)
    {
        return new DenormalizedReference<T>
        {
            Id = doc.Id,
            Name = doc.Name
        };
    }
}

public interface INamedDocument
{
    string Id { get; }
    string Name { get; }
}

 

The implicit cast operator in DenormalizedReference<T> makes using this really simple. Just assign a document to the property and it will take care of creating the proper reference.

var order = session.Load<Order>("orders/42");
order.Company = session.Load<Company>("companies/11");
 

One useful extension method

Loading the single document and doing common operations should be easy now, but there are still operations where you will need more data from the related entity. Loading it is easy enough.

var customer = session.Load<Company>(order.Company.Id);

 

However, with DenormalizedReference<T> the structure and type are already captured in the Order class. Using this with a simple extension method makes the code even simpler, which is always nice :-)

public static class IDocumentSessionExtensions
{
    public static T Load<T>(this IDocumentSession session, DenormalizedReference<T> reference)
        where T : INamedDocument
    {
        return session.Load<T>(reference.Id);
    }
}

 

This simple extension method lets us load the customer as follows:

var customer = session.Load(order.Company);

Saves another few keystrokes and it is completely type safe. Sweet :-)

Enjoy!