Data Storage Maturity Model

There are many ways of storing data when developing applications, some more mature and capable than others. Storing data is extremely common: almost every application out there needs to persist data in some way or another. After all, even a game usually stores the user's achievements.

But it's not games I am interested in. Sure, they are interesting to develop and play, but most developers I know are busy developing line of business (LOB) applications of some sort. One thing line of business applications have in common is that they work with, and usually store, data.


When looking at data-oriented applications we can categorize data storage architectures based on their characteristics and capabilities.

 

Level 0: Data Dumps

The most basic way of working with data is just dumping whatever the user works with in the UI to some proprietary data file. This typically means we are working with really simple Create Read Update Delete (CRUD) data entry forms and not even storing the data in a structured way. This approach is extremely limited in capabilities and should generally be avoided at all costs. Any time you have to work with a slightly larger data set or update the structure you are in for a world of hurt.

 

Level 1: Structured Storage

At level 1 we are still working with CRUD style data entry forms, but at least we have started using a formal database of some sort. The database can be a relational database like SQL Server or MySQL, but a NoSQL database like MongoDB is equally valid. While the database used allows us to do much more, the user interface is not much better. We are still loading complete objects and storing them in a CRUD fashion. This might work reasonably well in a low usage scenario with a low chance of conflicts, but it is really not suitable for anything more complex than a basic business application. We are only storing the current state of the data, and as the database stores whatever is sent from the UI, or business processing code, no meaning is attached to any change made.
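As a minimal sketch of what level 1 storage often boils down to (the collection name and MongoDB-style driver call are just examples, not a prescription):

// Level 1 in a nutshell: whatever object the UI sends simply
// replaces the stored document, so any meaning behind the
// change is lost.
function saveCustomer(db, customer) {
    return db.collection('customers')
        .replaceOne({ _id: customer._id }, customer);
}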

 

Level 2: Command Query Responsibility Segregation

When we need to develop better and more complex business applications we really should use Command Query Responsibility Segregation (CQRS) as a minimum. In this case we separate the read actions from the write actions. We no longer just send an object from the user interface to the back end to be stored; instead we send commands to update the data. These commands should be related to the business actions the application supports. In other words, if a business analyst sees the command names he should be able to make sense of what they do without looking at the code.
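To make that concrete, here is a minimal sketch of such a command; the command name and the commandBus are invented purely for illustration:

// Rather than sending the whole changed object, the UI sends a
// command named after the business action being performed.
var command = {
    type: 'RelocateCustomerHeadquarters',
    customerId: 'customers/42',
    newAddress: { street: '1 Main Street', city: 'Springfield' }
};

// A hypothetical command bus routes it to the handler that
// knows the business rules for relocating a headquarters.
commandBus.send(command);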

While this is a lot better, we are still only storing the current state of the data. And that is a problem, because it can be very hard to figure out how something got to be in a given state. If a user detects that something is wrong with the data and suspects a bug in the program, we might have a hard time figuring out how it got that way. And once we do, fixing the issue might be extremely hard as well.

There are other limitations to storing only the current state, like not being able to produce reports the business asks for after the fact, or only with great difficulty. Or having to apply altered business rules retroactively. And if you think that doesn't happen, just try working on a large government project where the slowness of the decision process means that rules are only definitively updated after the fact.

 

Level 3: Event Sourcing

The most advanced level to be working at is Event Sourcing (ES). An event sourced application resembles a CQRS style application in a lot of ways, except for one vital part. With an Event Sourced application we no longer store the current state of the data; instead we store all the events that led up to it. All these events are stored as one big stream of changes and are used to deduce the current state of the data in the application. These events typically never change once written; after all, we don't change history (although our view of it might change over time). This has some large benefits, as we can now track exactly how the state came to be, making it easier to find bugs. And if the bug is in how we applied those business events, then fixing the bug is often enough to deduce the correct state.
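As a rough sketch of the idea (event names invented for illustration), the current state is deduced simply by replaying the stored events in order:

// The stored, append-only history for one shopping cart.
var events = [
    { type: 'CartCreated', cartId: 'carts/1' },
    { type: 'ItemAdded', cartId: 'carts/1', productId: 'products/7', quantity: 2 },
    { type: 'ItemRemoved', cartId: 'carts/1', productId: 'products/7', quantity: 1 }
];

// Deduce the current state by applying every event in order.
function applyEvent(state, event) {
    switch (event.type) {
        case 'CartCreated':
            return { cartId: event.cartId, items: {} };
        case 'ItemAdded':
            state.items[event.productId] =
                (state.items[event.productId] || 0) + event.quantity;
            return state;
        case 'ItemRemoved':
            state.items[event.productId] -= event.quantity;
            return state;
    }
}

var currentState = events.reduce(applyEvent, null);
// => { cartId: 'carts/1', items: { 'products/7': 1 } }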

The usual queries done in an application are much harder on an event stream. To fix that, the events are usually projected out to a read model, making querying much easier. This read model is normally stored in some appropriate database like SQL Server or a NoSQL database, but could also just be kept in memory. However, the event stream is the source of truth, not the projections, as these are just a derived result. This means we can delete all projections and completely rebuild them from the existing events, resulting in much more flexibility. Need to do an expensive query in version two of an application? Just create a projection designed for that purpose and rebuild it from all previously stored events. This is similar to our view of history changing.
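Continuing the sketch above, a projection is just another consumer of the same event stream; rebuilding the read model means clearing it and replaying all stored events again:

// A read model answering "how many items are in each cart?",
// kept in memory here but it could live in any database.
var readModel = {};

function project(event) {
    switch (event.type) {
        case 'CartCreated':
            readModel[event.cartId] = { itemCount: 0 };
            break;
        case 'ItemAdded':
            readModel[event.cartId].itemCount += event.quantity;
            break;
        case 'ItemRemoved':
            readModel[event.cartId].itemCount -= event.quantity;
            break;
    }
}

// Rebuilding is just a replay over the full event history.
events.forEach(project);
// readModel => { 'carts/1': { itemCount: 1 } }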

There are more benefits to storing events instead of just the current state. We can now do temporal queries, or queries over time, on how the data got to be the way it is. These kinds of queries have many uses, fraud detection for example. Another possibility is displaying the state at any previous point in time and running reports or analysis on the data as it was then.

 

Conclusion

It's kind of hard to say at what level you should be working. Level 0, limited as it is, might be appropriate for your application. Lots of applications are at level 1, just basic forms-over-data CRUD applications. For some that might be appropriate, but in a lot of cases it is actually suboptimal. Level 2 with CQRS is a pretty sweet place to be. You capture the business intent with commands and have reasonable flexibility. At level 3 with event sourcing you gain a lot of flexibility and power. If you are building a more complex business application you should be working at this level. But as always there is no free lunch, so don't go there if the application is really not that complex :-)

 

Enjoy!

Speeding up your AngularJS applications

In general AngularJS applications are quite fast, especially when compared to more traditional browser based applications that constantly post back to the server. However there are always a few things that will help performance and make an application even faster.

 

Disabling Debug Data

Normally AngularJS adds several things, like CSS classes and some scope related properties, to DOM elements. This is not needed to run the application and is really only done to help development tools like Protractor and Batarang. In production you can save some overhead by disabling it using the $compileProvider.debugInfoEnabled() function.

demoApp.config(function($compileProvider) {
  $compileProvider.debugInfoEnabled(false);
});

 

Explicit dependency injection annotations

Another option to speed up your application is using explicit dependency injection annotations. If the DI annotations are not present AngularJS has to parse functions to find the parameter names, something that can be avoided by adding the explicit annotations. The annotations can be added manually, which can be tedious, or automatically using something like ng-annotate with either a Gulp or Grunt task.

Adding the ngStrictDi directive to the same element as the ngApp directive can help you find missing annotations.
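For example, with the inline array annotation (the controller name and URL here are made up for illustration) the injector no longer needs to parse the function, and a minifier can safely rename the parameters:

// The string array tells the injector what to inject; the
// function parameter names themselves no longer matter.
demoApp.controller('DemoController', ['$scope', '$http',
    function ($scope, $http) {
        $http.get('/api/items').then(function (response) {
            $scope.items = response.data;
        });
    }]);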

 

Reducing the number of $apply() calls

Another helpful option is to reduce the number of $apply() calls that result from $http requests finishing. When multiple $http requests are made while a page loads, each one triggers an $apply() call, causing all watches and data bindings to be reevaluated. By combining requests that finish at almost the same time into a single $apply() call we can increase the load speed of your application, something that can be done using $httpProvider.useApplyAsync().

demoApp.config(function($httpProvider) {
  $httpProvider.useApplyAsync(true);
});

 

Enjoy!

Testing an AngularJS directive with its template

 

Testing AngularJS directives usually isn't very hard. Most of the time it is just a matter of instantiating the directive using the $compile() function and interacting with the scope or related controller to verify that the behavior is as expected. However that leaves a bit of a gap, as most of the time the interaction between the directive's template and its scope isn't tested. With really simple templates you can include the markup in the template property, but using templateUrl and loading the template on demand is much more common, especially with more complex templates. When it comes to unit testing, the HTTP request to load the template is not going to work, and as a result the interaction isn't tested. Sure, it is possible to use the $httpBackend service to fake the response, but that still doesn't use the actual template, so it doesn't really test the interaction.

 

Testing the template

It turns out testing the template isn't that hard after all; there are just a few pieces to the puzzle. First of all, Karma can serve up other files besides the normal JavaScript files just fine, so we can tell it to serve our templates as well. With the pattern option for files we can tell Karma to watch and serve the templates without including them in the default HTML page loaded. See the files section from the karma.conf.js file below.

files: [
    'app/bower_components/angular/angular.js',
    'app/bower_components/angular-mocks/angular-mocks.js',
    'app/components/**/*.js',
    'app/*.js',
    'tests/*.js',
    {
        pattern: 'app/*.html',
        watched: true,
        included: false,
        served: true
    }
],

 

With that, the files are available on the server. There are two problems here though. First of all, when running unit tests the mock $httpBackend is used and that never does an actual HTTP request. Secondly, the file is hosted at a slightly different URL: Karma includes '/base' as the root of our files. So just letting AngularJS load it is out of the question. However, if we use a plain XMLHttpRequest object the mock $httpBackend is completely bypassed and we can load what we want. Using the plain XMLHttpRequest object has a second benefit in that we can do a synchronous request instead of the normal asynchronous request and use the response to pre-populate the $templateCache before the unit test runs. Synchronous HTTP requests should be avoided in production code, but in a unit test like this they work perfectly fine.

So an AngularJS directive like this:

angular.module('myApp', [])
    .directive('myDirective', function(){
      return {
        scope: {
          clickMe: '&'
        },
        templateUrl: '/app/myDirective.html'
      };
    });

 

And a template like this:

<button ng-click="clickMe()">Click me</button>

 

Can be easily tested like this:

describe('The myDirective', function () {
    var element, scope;

    beforeEach(module('myApp'));

    beforeEach(inject(function ($templateCache) {
        var templateUrl = '/app/myDirective.html';
        var asynchronous = false;
        var req = new XMLHttpRequest();
        req.onload = function () {
            $templateCache.put(templateUrl, this.responseText);
        };
        req.open('get', '/base' + templateUrl, asynchronous);
        req.send();
    }));

    beforeEach(inject(function ($compile, $rootScope) {
        scope = $rootScope.$new();
        scope.doIt = angular.noop;

        var html = '<div my-directive="" click-me="doIt()"></div>';
        element = $compile(html)(scope);
        scope.$apply();
    }));

    it('template should react to clicking', function () {
        spyOn(scope, 'doIt');

        element.find('button')[0].click();

        expect(scope.doIt).toHaveBeenCalled();
    });
});

 

Now making any breaking change to the template, like removing the ng-click, will immediately cause the unit test to fail in Karma.

 

Enjoy!

angular.module("module") is an anti-pattern

 

There are two ways to use the angular.module() function. There is the call with one parameter, which returns an existing module, and there is the call with two parameters, which creates a new module. The second way, where a new module is created, is perfectly fine and should be used. However the first option, where an existing module is loaded, should be considered an anti-pattern in most cases and should not be used unless there is an exceptional and very good reason.
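To make the difference explicit (the module name is just an example):

// Two parameters: creates and returns a new module.
angular.module('mainApp', []);

// One parameter: looks up the previously created module.
angular.module('mainApp');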

 

What is wrong with angular.module(“module”)?

Why should this usage be seen as an anti-pattern? Well, both creating and retrieving a module with angular.module() returns the module so it can be extended. And that is exactly where the problem is. When you create a new module in a JavaScript file you can use that reference to add anything you want; there is no need to load the module again. So the only place loading an existing module is needed is when you want to add something to it in another JavaScript file.

Splitting modules introduces a big risk. As soon as you split an AngularJS module over separate files you run into the possibility of loading a partially configured module. While AngularJS checks whether all module dependencies can be satisfied at load time, it has no way of seeing if those modules are complete or not. Missing a complete module produces a very clear error message right at startup, like this:

Uncaught Error: [$injector:modulerr] Failed to instantiate module mainApp due to:
Error: [$injector:modulerr] Failed to instantiate module mainApp.data due to:
Error: [$injector:nomod] Module ‘mainApp.data’ is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you

As the complete application fails to load, this is very obvious and hard not to spot.

 

However if you fail to load just a part of a module the errors are a lot less obvious. In this case the error doesn't appear until the missing component is actually needed; everything up to that point will run just fine. The kind of error message you will see is something like:

Error: [$injector:unpr] Unknown provider: productsProvider <- products

The error in itself is clear enough, but discovering it might not be as easy. If the error occurs in a part of the application that is not used often it might go completely unnoticed.

 

My rule of thumb: always define a complete AngularJS module in one JavaScript file.

 

Want to split the functionality over multiple files? By all means go ahead, but make sure to do so with a new module, and use module dependencies to make sure everything is loaded right at application start time. And as angular.module("module") is only required to load a module defined in another file, there really should almost never be a need to use it. A minimal sketch of that approach is shown below.
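The module names here are invented, but the pattern is what matters: each file defines one complete module, and the dependency is declared up front.

// data.js - one complete module, defined in a single file.
angular.module('mainApp.data', [])
    .factory('products', function () {
        return [];
    });

// app.js - depends on the module; if data.js is not loaded
// this fails loudly at startup with a modulerr error.
angular.module('mainApp', ['mainApp.data']);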

Enjoy!

Using browserify to manage JavaScript dependencies

Managing JavaScript dependencies in the browser is hard. Library scripts typically create global variables and functions, and other scripts then depend on those global objects to do their work. This works, but in order to load all required scripts we have to add <script> elements to our HTML, making sure to add them in the right order, and basically know what each script exposes.

The problem

Consider the following client side code:

// Print a message
utils.print("Hello");

 

This depends on another piece of script below:

// Expose the utility object with its print function
var utils = {
    print: function(msg){
        console.log(msg);
    }
};

 

And for all of that to work we have to load the scripts in the right order using some HTML as below:

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>Browserify demo</title>
</head>
<body>

<script src="utils.js"></script>
<script src="demo.js"></script>

</body>
</html>

 

Not really rocket science here, but if we want to update utils.print() to call a printIt() function loaded from yet another library we have to go back to our HTML and make sure we load printIt.js as well. Easy in a small app, but this can become hard and error prone in larger applications.

 

Browserify to the rescue

Using browserify will make managing these dependencies a lot easier. To understand how it works we first must take a quick look at how NodeJS modules work.

With node each module can take a dependency on another module by requiring it using the require() function. And each module can define what it exports to other modules using module.exports. The NodeJS runtime takes care of loading the files, and adding a dependency inside a module does not require a change anywhere else in the program.

This system works really nicely, but unfortunately the browser doesn't provide this NodeJS runtime capability. One problem is that a call to require() is a synchronous call that returns the loaded module, while the browser does all of its IO asynchronously. In the browser you can use something like RequireJS to asynchronously load scripts, but while this works it is not very efficient due to its asynchronous nature. As a result people usually use RequireJS during development and then create a bundle with all the code for production.

Browserify, on the other hand, allows us to use the synchronous NodeJS approach to script loading in the browser. It does this by packaging up all required files, based on the require() calls, into one file to load at runtime. Converting the example above to this style requires some small changes in the code.

The demo.js file specifies that it requires utils.js. The syntax "./utils" means the file should be loaded from the same folder.

var utils = require("./utils");
// Print a message
utils.print("Hello");

 

Next the utils.js specifies what it exports:

// Expose the utility object with its print function

var utils = {
    print: function(msg){
        console.log(msg);
    }
};

module.exports = utils;

 

Next we need to run browserify to bundle the files for use in the browser. As browserify is a node application we need to install node and then, through the node package manager NPM, install browserify with:

npm install -g browserify

 

With browserify installed we can bundle the files into one using:

browserify demo.js > bundle.js

This will create a bundle.js with the following content:

(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({1:[function(require,module,exports){
var utils = require("./utils");
// Print a message
utils.print("Hello");

},{"./utils":2}],2:[function(require,module,exports){
// Expose the utility object with its print function

var utils = {
    print: function(msg){
        console.log(msg);
    }
};

module.exports = utils;
},{}]},{},[1]);

 

Not the most readable code, but then that is not what it was designed for. We can see that all the code we need is included. Now, by just including this generated file, we are ready to start our browser application.

Adding the printIt() function

Making the same change as above is simple and, best of all, doesn't require any change to the HTML to load different files. Just update utils.js to require() printIt.js, explicitly export the function in printIt.js, rerun browserify, and you are all set.

function printIt(msg){
    console.info(msg);
}

module.exports = printIt;

 

Note that it’s fine to just export a single function here.

 

// Expose the utility object with its print function
var printIt = require("./printIt");

var utils = {
    print: function(msg){
        printIt(msg);
    }
};

module.exports = utils;

And the result of running browserify is:

(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({1:[function(require,module,exports){
var utils = require("./utils");
// Print a message
utils.print("Hello");

},{"./utils":3}],2:[function(require,module,exports){
function printIt(msg){
    console.info(msg);
}

module.exports = printIt;

},{}],3:[function(require,module,exports){
// Expose the utility object with its print function
var printIt = require("./printIt");

var utils = {
    print: function(msg){
        printIt(msg);
    }
};

module.exports = utils;
},{"./printIt":2}]},{},[1]);

Again not the most readable code, but the printIt() function is now included. Nice, and no changes required to the HTML :-)

Proper scoping

As a side benefit, browserify also wraps each of our JavaScript files in a function, ensuring proper variable scoping and making sure we don't accidentally leak variables to the global scope.
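Conceptually (simplified here for illustration) every file in the bundle ends up inside a wrapper function like this, so a top level var stays local to its file:

// Simplified: 'utils' is now a local variable inside the
// wrapper function instead of a property of window.
(function (require, module, exports) {
    var utils = { /* ... */ };
    module.exports = utils;
})(/* arguments wired up by the bundle's tiny loader */);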

 

Using browserify works really nicely, but this way we do have to run it again after every change. In the next blog post I will show how to use Gulp or Grunt to automate this, making the workflow a lot smoother.

 

Enjoy!

X things every JavaScript developer should know: Automatic Semicolon Insertion

As with many other things in JavaScript, Automatic Semicolon Insertion is usually not a problem, but it can occasionally bite you if you are unaware of it. What Automatic Semicolon Insertion does is really simple: it basically boils down to semicolons being optional in JavaScript, with the parser injecting them where appropriate. That might sound very nice; after all, you can leave semicolons out and the right thing will happen. For example the following code, without a single semicolon, is completely valid and will print a sum of 3 as expected:

console.log(add(1, 2))

function add(x, y) {
    var sum
    sum = x + y
    return sum
}

 

What basically happens is that the JavaScript parser adds a semicolon at the end of each line if that doesn't cause the syntax to become invalid. See section 7.9.1 of the ECMA-262 standard for the exact rules.

Now that might sound great but it turns out that Automatic Semicolon Insertion can cause some interesting issues :-(

JavaScript style rules

One thing you might have noticed is that the normal style of writing JavaScript differs from that of C# or Java. Compare the JavaScript code above with the same code in C# below:

public int Add(int x, int y)
{
    int sum;
    sum = x + y;
    return sum;
}

Besides the obvious differences in typing and the required semicolons, the opening curly brace of the function is on the same line as the declaration in JavaScript and on the next line in C#. While the JavaScript convention would work fine in C#, the reverse is not always the case. If we reformatted the JavaScript to the following, the code, in this case, would still run fine.

function add(x, y)
{
    var sum
    sum = x + y
    return sum
}

 

However if we return an object literal and format our code the same way, we run into a problem. Consider the following code:

console.log(add(1, 2))

function add(x, y) {
    var sum
    sum = x + y

    return
    {
        sum: sum
    }
}

You might expect this to print an object with a sum property containing the value 3. However the code prints "undefined". Compare that with the following code, which is only formatted differently:

console.log(add(1, 2))

function add(x, y) {
    var sum
    sum = x + y

    return {
        sum: sum
    }
}

 

This will print the expected object with a sum of 3.

 

Blame JavaScript Automatic Semicolon Insertion

This unexpected behavior is caused by semicolon insertion. Instead of the code you most likely think will execute, the following executes:

console.log(add(1, 2));

function add(x, y) {
    var sum;
    sum = x + y;

    return;
    {
        sum: sum
    };
}

Notice the semicolon after the return statement?

That actually means return nothing, i.e. undefined, and leaves some unreachable code on the next few lines. That is completely valid, so that is what happens :-(

Best practices

The general advice, even though it doesn't fully protect you, is to always add semicolons yourself and not leave it up to the JavaScript parser. It doesn't really help a lot though, because the parser will still inject semicolons where it thinks they are appropriate. So the only real solution is to follow the JavaScript formatting conventions and ensure that the opening curly brace of the object literal is on the same line as the return statement. That way inserting a semicolon there would be invalid and you can be sure the right thing happens.

Unfortunately ‘use strict’ doesn’t help here either. It will prevent some errors but it doesn’t make semicolons required :-(

Enjoy!

X things every JavaScript developer should know: Comparisons

Another item in the list of things every JavaScript developer should know is how comparisons work. Just as with some other JavaScript, or I should really say ECMAScript, features, anything you know about C# or Java could actually be misleading here.

 

To == or to ===

One of the weird things is that there are actually two equality operators in JavaScript: the double and the triple equals. The == is called the Equals Operator, see section 11.9.1 of the ECMAScript standard, and was the original equality operator. Unfortunately the way this operator works is quite a cause for confusion, and as a result the === or Strict Equals Operator was introduced, see section 11.9.4 of the ECMAScript standard. It would have been nice if they had just fixed the original operator, but that would have broken existing JavaScript applications.

In general I would advise you to always use the Strict Equals Operator === whenever you do a comparison, unless you have a specific need for the behavior of the original operator.

 

What is the problem with ==

I mentioned that == has problems and should be avoided, but it is still helpful to understand those problems. They basically boil down to the fact that the == operator does type conversions when the two types being compared are not the same. For example the following all evaluate to true:

0 == "0" // true
1 == "1" // true
2 == "2" // true

Sounds reasonable enough, right?

 

Unfortunately it isn't quite that simple; all of the following evaluate to false:

false == "false" // false
true == "true" // false

These might seem weird, especially since the following evaluates to true again:

true == !!"true" // true

So what is going on here?
 

The Abstract Equality Comparison Algorithm

Section 11.9.3 of the ECMAScript standard describes what is happening here. If one operand is a number and the other a string, as was the case in the first examples, the string is converted to a number and the comparison is done based on those. So basically these comparisons were:

0 == 0 // true
1 == 1 // true
2 == 2 // true

 

So what was the case in the other two comparisons?

In these cases almost the same happens: the Boolean values are converted to numbers first. That leaves a number-to-string comparison, where the string is also converted to a number. The result of converting true and false to a number is 1 and 0, but the result of converting the strings "true" and "false" to a number is an invalid number, or NaN. And as NaN is not equal to any other number, those comparisons result in false.
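Spelled out step by step for the first of those comparisons:

// false == "false" proceeds roughly like this:
//   false == "false"   // the boolean is converted to a number
//   0     == "false"   // the string is converted to a number
//   0     == NaN       // NaN is not equal to anything
//   => false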

So why did the last comparison, true == !!"true", evaluate to true? Simple: the double bang operator !! is evaluated first, and a non-empty string is truthy. The end result is the expression true == true, and that is obviously true. Sounds reasonable, but it also means that any non-empty string will produce true, so even true == !!"false" evaluates to true :-(

 

Conclusion

The double equals operator is a confusing part of JavaScript's history. You are best off avoiding it and using the Strict Equals Operator === instead.

 

Enjoy!

Converting the RavenDB Northwind database to a more denormalized form

In a previous blog post I demonstrated how to denormalize the RavenDB sample database and use the DenormalizedReference<T> and INamedDocument types from the RavenDB documentation to make life really sweet. That leaves us with one small problem: the original sample database doesn't work with our improved document design. With the sample database, small as it is, loading all documents as a dynamic type, converting them and saving them would be easy enough, but in a real database that would not be practical. So let's look at a better way of fixing the database.

 

Updating the database on the server

Instead of downloading each document, updating the structure and saving it back to the server, it is much better to do these sorts of actions on the server itself. Fortunately RavenDB has the capability to execute database commands on the server. These update commands can be PatchRequest objects that let you do a large number of things using a nice C# API. And as the ultimate fallback there is the ScriptedPatchRequest, which lets you execute a block of JavaScript code on the server. Why JavaScript? Well, RavenDB stores documents as JSON and the server is not dependent on a .NET client.

Using the ScriptedPatchRequest we can execute a patch either on a single document or on a collection of documents. In this case I want to update all Order documents to reflect their new structure. It turns out this is quite simple.

 

using (IDocumentStore documentStore = new DocumentStore
{
    ConnectionStringName = "Northwind"
}.Initialize())
{
    var javaScript = @"...";

    documentStore.DatabaseCommands.UpdateByIndex(
        "Raven/DocumentsByEntityName",
        new IndexQuery
        {
            Query = "Tag:Orders"
        },
        new ScriptedPatchRequest
        {
            Script = javaScript
        });
}

This code will execute the JavaScript patch once for each document in the Orders collection.

 

The JavaScript code to execute is quite simple: just make the required changes to the document and you are set.

var company = LoadDocument(this.Company);
this.Company = {Id: this.Company, Name: company.Name};

var employee = LoadDocument(this.Employee);
this.Employee = {Id: this.Employee, Name: employee.FirstName + ' ' + employee.LastName};

var shipVia = LoadDocument(this.ShipVia);
this.ShipVia = {Id: this.ShipVia, Name: shipVia.Name};

this.Lines.forEach(function(line){
    var product = LoadDocument(line.Product);
    line.Product = {Id: line.Product, Name: product.Name};
    delete line.ProductName;
});

 

In this case I am converting the Company, Employee, ShipVia and Product properties to have the new structure. Additionally I am removing the ProductName from the OrderLine as that is no longer needed.

 

Sweet :-)