JavaScript functional goodness


Using some functional principles and immutable data can really make your JavaScript a lot better and easier to test. While using immutable data in JavaScript sounds like something really complex, it turns out it isn't that hard to get started with if you are already using Babel. And while libraries like Immutable.js are highly recommended, we can start even simpler.

Babel does a lot of things for you, as it lets you use all sorts of next generation JavaScript, or ECMAScript 2015 to be more correct. It is quite easy to use with whatever your build pipeline is, or even as a standalone transpiler if you are not using a build pipeline yet.

When you want to use immutable data, the functional array methods map() and filter(), as well as spread properties, are really useful. Here are a few examples to get you started.

Changing a property on an object

var originalPerson = {
  firstName: 'Maurice',
  lastName: ''
};

var newPerson = {
  ...originalPerson,
  lastName: 'de Beijer'
};

console.log(newPerson);

The ...originalPerson uses spread properties to expand all properties of the original object. The lastName: 'de Beijer' comes after it, so it overrules the lastName from the originalPerson object. The result is a new object.

{
  firstName: "Maurice",
  lastName: "de Beijer"
}

Simple and easy. And as we never change objects, we can replace the var keyword with the new const keyword to indicate the variables are never reassigned.

const originalPerson = {
  firstName: 'Maurice',
  lastName: ''
};

const newPerson = {
  ...originalPerson,
  lastName: 'de Beijer'
};

console.log(newPerson);
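One thing to keep in mind is that const only prevents the variable from being reassigned; the object it points to can still be mutated. If you also want to guard against accidental mutation, Object.freeze() is a simple, shallow option. A minimal sketch of that idea:

const originalPerson = Object.freeze({
  firstName: 'Maurice',
  lastName: ''
});

// Assigning to a property of a frozen object is ignored,
// or throws a TypeError in strict mode.
// originalPerson.lastName = 'de Beijer';

const newPerson = Object.freeze({
  ...originalPerson,
  lastName: 'de Beijer'
});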

 

Adding something to an array

Usually when adding something to an array either an index assignment or the push() function is used. But both mutate the existing array instead of creating a new one, and with the pure functional approach we do not want to modify the existing array but create a new one instead. Again this is really simple using the spread syntax.

const originalPeople = [{
  firstName: 'Maurice'
}];

const newPeople = [
  ...originalPeople,
  {firstName: 'Jack'}
];

console.log(newPeople);

In this case we end up with a new array with two objects:

[{
  firstName: "Maurice"
}, {
  firstName: "Jack"
}]

 

Removing something from an array

Deleting from an array is just as simple using the array filter() function.

const originalPeople = [{
  firstName: 'Maurice'
}, {
  firstName: 'Jack'
}];

const newPeople = originalPeople.filter(p => p.firstName !== 'Jack');

console.log(newPeople);

And we end up with an array with just a single person.

[{
  firstName: 'Maurice'
}]

 

Updating an existing item in an array

Changing an existing item is just as easy when we combine spread properties with the array map() function.

const originalPeople = [{
  firstName: 'Maurice'
}, {
  firstName: 'Jack'
}];

const newPeople = originalPeople.map(p => {
  if (p.firstName !== 'Jack') {
    return p;
  }

  return {
    ...p,
    firstName: 'Bill'
  };
});

console.log(newPeople);

And that is all it takes to change Jack to Bill.

[{
  firstName: "Maurice"
}, {
  firstName: "Bill"
}]
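If this pattern comes up a lot it is easy to wrap in a small helper. The updateWhere name and signature below are just an illustration, not an existing library function:

const updateWhere = (items, predicate, changes) =>
  items.map(item => predicate(item) ? {...item, ...changes} : item);

// Same result as the map() example above.
const newPeople = updateWhere(
  originalPeople,
  p => p.firstName === 'Jack',
  {firstName: 'Bill'}
);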

 

Really nice and easy, and it makes for very readable code once you are familiar with the new spread properties.

Hosting an ASP.NET 5 site in a Linux based Docker container

Note: This post is based on ASP.NET 5 Beta8, the samples might not work for you anymore

Docker has become a popular way of hosting Linux based applications over the last few years. With Windows Server 2016, Docker containers are also coming to Windows and are likely going to be a popular way of hosting there as well. But as Windows support is only available in CTP form right now, and ASP.NET is also moving to Linux with the CoreCLR, I decided to try and run an ASP.NET web site in a Linux based container.

 

Getting started with an ASP.NET 5 application

To get started I installed ASP.NET 5 beta 8 using these steps. Once these were installed I created a new ASP.NET MVC project. Just to show where the application is running I added a bit of code to the About page. When running from Visual Studio 2015 using IIS Express this looks like this:

[Screenshot: the About page running under IIS Express]

The extra code in the HomeController:

public class HomeController : Controller
{
    private readonly IRuntimeEnvironment _runtimeEnvironment;

    public HomeController(IRuntimeEnvironment runtimeEnvironment)
    {
        _runtimeEnvironment = runtimeEnvironment;
    }

    public IActionResult About()
    {
        ViewData["Message"] = "Your application is running on:";
        ViewData["OperatingSystem"] = _runtimeEnvironment.OperatingSystem;
        ViewData["OperatingSystemVersion"] = _runtimeEnvironment.OperatingSystemVersion;
        ViewData["RuntimeType"] = _runtimeEnvironment.RuntimeType;
        ViewData["RuntimeArchitecture"] = _runtimeEnvironment.RuntimeArchitecture;
        ViewData["RuntimeVersion"] = _runtimeEnvironment.RuntimeVersion;

        return View();
    }
}

With the following Razor view:

@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>
<h3>@ViewData["Message"]</h3>

<h4>OperatingSystem: @ViewData["OperatingSystem"]</h4>
<h4>OperatingSystemVersion: @ViewData["OperatingSystemVersion"]</h4>
<h4>RuntimeType: @ViewData["RuntimeType"]</h4>
<h4>RuntimeArchitecture: @ViewData["RuntimeArchitecture"]</h4>
<h4>RuntimeVersion: @ViewData["RuntimeVersion"]</h4>

<p>Use this area to provide additional information.</p>

 

So far so good and we can see the application runs just fine on Windows using the full CLR. From Visual Studio we can also run the application using Kestrel with the CoreCLR as shown below.

[Screenshot: the About page running on Kestrel with the CoreCLR]

 

Running the application in a Docker container

Having installed Docker Machine we can also run this same website in a Docker container on Linux. The first thing we need to do is create a Dockerfile with the following contents:

FROM microsoft/aspnet:1.0.0-beta8-coreclr

COPY ./src/WebApplication3 ./app

WORKDIR ./app
RUN dnu restore

ENTRYPOINT dnx web --server.urls http://*:80

 

With this container definition in place we need to build the Docker image using the docker build -t web-application-3 . command. This is executed from the Docker terminal window in the main folder of our web application, where the Dockerfile is located. This builds our Docker image, which we can now see when running docker images.

[Screenshot: output of the docker images command]

With this new image we can run it using: docker run -d -p 8080:80 web-application-3

[Screenshot: output of the docker run command]

With the application running we can navigate to http://192.168.99.100:8080/Home/About, where 192.168.99.100 is the IP address of the Linux virtual machine running the Docker daemon.

[Screenshot: the About page served from the Linux based Docker container]

Sweet 🙂

What is the right level of maturity

In my previous blog post I explained the Data Storage Maturity Model and how you get a much more mature and capable application if you use Event Sourcing. That blog post brought up some interesting questions.

 

Should I always use Event Sourcing?

Given that Event Sourcing sits at the top of the pyramid, you could conclude that you should always aim for the top and use Event Sourcing. Aiming high is a noble cause and sounds like the right thing, but it turns out it isn't that simple.

If your application is relatively simple and you don't have much of a domain model, there is little point in Event Sourcing your data storage. For example, a To-Do application probably has little reason to do so. Maybe if you want to do advanced analysis over the history of to-do items there is a need, but in most cases all you need to do is persist a few items of data to some persistent store for later retrieval. In that case level 1, CRUD with structured storage, will do quite well, while adding Event Sourcing would just complicate things.

There is also a differentiation to be made inside applications. Suppose you are building a complex banking app. In that case Event Sourcing would make perfect sense for your domain layer. However, there is more than just your domain layer. Every application has utility data, for example a list of countries in the world. This is a mostly static reference table and using Event Sourcing for data like that would be over-engineering. Again, just using a CRUD store for this data would be more than enough, even though all financial transaction data is stored using Event Sourcing.

So I guess the answer is: it depends, but probably not for everything in your application, or maybe not at all 🙂

 

But what about Data Access?

Another question that came up is which data access technology to use. Again this is kind of a hard question to give a simple answer to. It also depends on whether you are looking at the Event Sourced domain model or the projected read model.

For the Event Sourcing side I really like Greg Young's GetEventStore, which can be used as a standalone server or as an embedded client. You can use its HTTP API, but as I mainly use .NET on the server its native C# client is the way to go.

For the projected read model it really depends on what you are using as the data storage mechanism. In the case of a relational database you could use NHibernate or Entity Framework, but these are probably a bit overkill and will hurt performance. In most cases you will be better off with one of the micro ORMs out there like Dapper, ServiceStack.OrmLite or something similar.

I prefer using a NoSQL database though and really like RavenDB or MongoDB. Currently I am using Redis with the ServiceStack.Redis client in a sample project and that is also working really well for me.

So again it really depends on your preferences, but choosing speed and flexibility is a good thing.

 

Enjoy!

Data Storage Maturity Model

There are many ways of storing data when developing applications, some more mature and capable than others. Storing data of some sort or another in an application is common. Extremely common to be exact, as almost every application out there needs to store data in some way or another. After all, even a game usually stores the user's achievements.

But it's not games I am interested in. Sure, they are interesting to develop and play, but most developers I know are busy developing line of business (LOB) applications of some sort or another. One thing line of business applications have in common is that they work with, and usually store, data of some sort.

Data Storage Maturity Model

When looking at data oriented applications we can categorize data storage architectures based on different characteristics and capabilities.

 

Level 0: Data Dumps

The most basic way of working with data is just dumping whatever the user works with in the UI to some proprietary data file. This typically means we are working with really simple Create Read Update Delete (CRUD) style data entry forms and not even storing the data in a structured way. This is extremely limited in capabilities and should generally be avoided at all costs. Any time you have to work with a slightly larger data set or update the structure you are in for a world of hurt.

 

Level 1: Structured Storage

At level 1 we are still working with CRUD style data entry forms, but at least we have started using a formal database of some sort. The database can be a relational database like SQL Server or MySQL, but a NoSQL database like MongoDB is equally valid. While the database used allows us to do much more, the user interface is not much better. We are still loading complete objects and storing them in a CRUD fashion. This might work reasonably well in a low usage scenario with a low chance of conflicts, but is really not suitable for anything more complex than a basic business application. We are only storing the current state of the data, and as the database stores whatever is sent from the UI, or business processing code, there is no sense of meaning to any change made.

 

Level 2: Command Query Responsibility Segregation

When we need to develop better and more complex business applications we really should use Command Query Responsibility Segregation (CQRS) as a minimum. In this case we separate the read actions from the write actions. We no longer just send an object to be stored from the user interface to the back end; instead we send commands to update the data. These commands should be related to the business actions the application supports. In other words, if a business analyst sees the command names he should be able to make sense of what they do without looking at the code implementations.
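To make the difference a bit more concrete, here is a rough JavaScript sketch; the savePerson() and sendCommand() functions and the command shape are purely illustrative and not part of any particular framework:

// Level 1 style: the UI sends the whole object to be stored.
savePerson({
  id: 42,
  firstName: 'Maurice',
  lastName: 'de Beijer',
  address: 'Some Street 1'
});

// Level 2 style: the UI sends a command that captures the business intent.
sendCommand({
  type: 'CorrectPersonAddress',
  personId: 42,
  newAddress: 'Some Street 1'
});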

While this is a lot better, we are still only storing the current state of the data. And that is the problem, as it can be very hard to figure out how something got to be in a given state. So if a user detects that something is wrong with the data and suspects a bug in the program, we might have a hard time figuring out how it got to be that way. And once we do, fixing the issue might be extremely hard as well.

There are other limitations to only storing the current state, like not being able to produce reports that are asked for later, or only at great difficulty. Or possibly altering business rules after the fact. And if you think that doesn't happen, just try working on a large government project where the slowness of the decision process means that rules are only definitively settled after the fact.

 

Level 3: Event Sourcing

The most advanced level to be working at is Event Sourcing (ES). An event sourced application resembles a CQRS style application in a lot of ways except for one vital part. With an Event Sourced application we no longer store the current state of the data; instead we store all the events that led up to it. All these events are stored as one big stream of changes and are used to deduce the current state of the data in the application. These events typically never change once written; after all, we don't change history (although our view of it might change over time). This has some large benefits, as we can now track exactly how the state came to be what it is, making it easier to find bugs. And if the bug is in how we processed those business events, we can fix the bug and often that is enough to deduce the correct state.
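A tiny sketch of that idea in JavaScript: the current state is simply the result of folding over the stored events. The event names and shapes here are made up for illustration:

const events = [
  {type: 'AccountOpened', owner: 'Maurice', balance: 0},
  {type: 'MoneyDeposited', amount: 100},
  {type: 'MoneyWithdrawn', amount: 30}
];

// Deduce the current state by applying every event in order.
const currentState = events.reduce((state, event) => {
  switch (event.type) {
    case 'AccountOpened':
      return {owner: event.owner, balance: event.balance};
    case 'MoneyDeposited':
      return {...state, balance: state.balance + event.amount};
    case 'MoneyWithdrawn':
      return {...state, balance: state.balance - event.amount};
    default:
      return state;
  }
}, null);

// currentState is now {owner: 'Maurice', balance: 70}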

The usual queries done in an application are much harder on an event stream. In order to fix that issue the events are usually projected out to a read model, making querying much easier. This read model is normally stored in some appropriate database like SQL Server or a NoSQL database, but could also just be kept in memory. However, the event stream is the true source of truth and not the projections, as these are just a derived result. This means we can delete all projections and completely rebuild them from the existing events, resulting in much more flexibility. Need to do an expensive query in version two of an application? Just create a projection designed for that purpose and rebuild it from all previously stored events. This is similar to our view of history changing.

There are some more benefits to storing events instead of just the current state. We can now do temporal queries, or queries over time, on how the data got to be how it is. These kinds of queries have many uses, for example fraud detection. Another possibility is displaying the state at any previous point in time and running reports or analysis on the data as it was then.

 

Conclusion

It's kind of hard to say at what level you should be working. Level 0, limited as it is, might be appropriate for your application. Lots of applications are at level 1 and are just basic forms-over-data CRUD applications. In some cases that might be appropriate, but in a lot of cases it is actually suboptimal. Level 2 with CQRS is a pretty sweet place to be. You can capture the business intent with commands and have reasonable flexibility. At level 3 with Event Sourcing you gain a lot of flexibility and strength. If you are doing a more complex business application you should be working at this level. But as always there is no free lunch, so don't go there if the application is really not that complex 🙂

 

Enjoy!

Speeding up your AngularJS applications

In general AngularJS applications are quite fast, especially when compared to more traditional browser based applications that constantly post back to the server. However there are always a few things that will help performance and make an application even faster.

 

Disabling Debug Data

Normally AngularJS adds several things, like CSS classes and some scope related properties, to DOM elements. These are not needed to run the application and are really only there to help development tools like Protractor and Batarang. When the application is in production you can save some overhead by disabling this behavior using the $compileProvider.debugInfoEnabled() function.

demoApp.config(function($compileProvider) {
  $compileProvider.debugInfoEnabled(false);
});

 

Explicit dependency injection annotations

Another option to speed up your application is to use explicit dependency injection annotations. If the DI annotations are not present, AngularJS has to parse the functions to find the parameter names, something that can be avoided by adding the explicit annotations. The annotations can be added manually, which can be tedious, or automatically using something like ng-annotate with either a Gulp or Grunt task.
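As a quick reminder, this is what the array-style annotation looks like; the controller name and the $http dependency are just an example:

// Without an annotation AngularJS parses the function to find '$http'.
demoApp.controller('DemoController', function($http) {
  // ...
});

// With the explicit annotation the parameter name no longer matters,
// even after minification renames it.
demoApp.controller('DemoController', ['$http', function($http) {
  // ...
}]);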

Adding the ngStrictDi directive to the same element as the ngApp directive can help you find missing annotations.

 

Reducing the number of $apply() calls

Another helpful option is to reduce the number of $apply() calls that are the result of $http requests finishing. When you are doing multiple $http requests as a page loads, each will trigger an $apply() call, causing all watches and data bindings to be reevaluated. By combining these into a single $apply() call for requests that finish at almost the same time we can increase the load speed of your application, something that can be done using $httpProvider.useApplyAsync().

demoApp.config(function($httpProvider) {
  $httpProvider.useApplyAsync(true);
});

 

Enjoy!

Testing an AngularJS directive with its template

 

Testing AngularJS directives usually isn't very hard. Most of the time it is just a matter of instantiating the directive using the $compile() function and interacting with the scope or related controller to verify that the behavior is as expected. However, that leaves a bit of a gap, as most of the time the interaction between the directive's template and its scope isn't tested. With really simple templates you can include them in the template property, but using the templateUrl and loading them on demand is much more common, especially with more complex templates. When it comes to unit testing, the HTTP request to load the template is not going to work, and as a result the interaction isn't tested. Sure, it is possible to use the $httpBackend service to fake the response, but that still doesn't use the actual template, so it doesn't really test the interaction.

 

Testing the template

It turns out testing the template isn't that hard after all; there are just a few pieces to the puzzle. First of all, Karma can serve up other files besides the normal JavaScript files just fine, so we can tell it to serve our templates as well. With the pattern option for files we can tell Karma to watch and serve the templates without including them in the default HTML page loaded. See the files section from the karma.conf.js file below.

files: [
    'app/bower_components/angular/angular.js',
    'app/bower_components/angular-mocks/angular-mocks.js',
    'app/components/**/*.js',
    'app/*.js',
    'tests/*.js',
    {
        pattern: 'app/*.html',
        watched: true,
        included: false,
        served: true
    }
],

 

With that, the files are available on the server. There are two problems here though. First of all, when running unit tests the mock $httpBackend is used and that never does an actual HTTP request. Secondly, the file is hosted at a slightly different URL; Karma includes '/base' as the root of our files. So just letting AngularJS load it is out of the question. However, if we use a plain XMLHttpRequest object the mock $httpBackend is completely bypassed and we can load what we want. Using the plain XMLHttpRequest object has a second benefit in that we can do a synchronous request instead of the normal asynchronous request and use the response to pre-populate the $templateCache before the unit test runs. Using synchronous HTTP requests is not advisable for code on the Internet and should be avoided in any production code, but in a unit test like this it works perfectly fine.

So taking an AngularJS directive like this:

angular.module('myApp', [])
    .directive('myDirective', function() {
        return {
            scope: {
                clickMe: '&'
            },
            templateUrl: '/app/myDirective.html'
        };
    });

 

And a template like this:

<button ng-click="clickMe()">Click me</button>

 

Can be easily tested like this:

describe('The myDirective', function () {
    var element, scope;

    beforeEach(module('myApp'));

    beforeEach(inject(function ($templateCache) {
        var templateUrl = '/app/myDirective.html';
        var asynchronous = false;
        var req = new XMLHttpRequest();
        req.onload = function () {
            $templateCache.put(templateUrl, this.responseText);
        };
        req.open('get', '/base' + templateUrl, asynchronous);
        req.send();
    }));

    beforeEach(inject(function ($compile, $rootScope) {
        scope = $rootScope.$new();
        scope.doIt = angular.noop;

        var html = '<div my-directive="" click-me="doIt()"></div>';
        element = $compile(html)(scope);
        scope.$apply();
    }));

    it('template should react to clicking', function () {
        spyOn(scope, 'doIt');

        element.find('button')[0].click();

        expect(scope.doIt).toHaveBeenCalled();
    });
});

 

Now making any breaking change to the template, like removing the ng-click, will immediately cause the unit test to fail in Karma.

 

Enjoy!

angular.module("module") is an anti pattern

 

There are two ways to use the angular.module() function. There is the call with one parameter, which returns an existing module, and there is the option of using two parameters, which creates a new module. The second way, where a new module is created, is perfectly fine and should be used. However, the first option, where an existing module is loaded, should be considered an anti pattern in most cases and should not be used unless there is an exceptional and very good reason.

 

What is wrong with angular.module("module")?

Why should this usage be seen as an anti pattern? Well, both creating and retrieving a module with angular.module() return the module so it can be extended. And that is exactly where the problem is. When you create a new module in a JavaScript file you can use that reference to add anything you want; there is no need to load it again. So the only place loading an existing module is needed is when you want to add something to it in another JavaScript file.
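In code the difference looks something like this; the module and controller names are just for illustration:

// Two parameters: creates the module. Keep the returned reference around
// and use it to register everything in this file.
var app = angular.module('mainApp', []);

app.controller('MainController', function() {
  // ...
});

// One parameter: retrieves the previously created module,
// typically from another JavaScript file.
angular.module('mainApp')
  .controller('OtherController', function() {
    // ...
  });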

Splitting modules introduces a big risk. As soon as you split an AngularJS module into separate files you run into the possibility of loading a partially configured module. While AngularJS checks whether all module dependencies can be satisfied at load time, it has no way of seeing whether these modules are complete or not. Missing a complete module produces a very clear error message right at startup time, like this:

Uncaught Error: [$injector:modulerr] Failed to instantiate module mainApp due to:
Error: [$injector:modulerr] Failed to instantiate module mainApp.data due to:
Error: [$injector:nomod] Module ‘mainApp.data’ is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you

As the complete application fails to load, this is very obvious and hard not to spot.

 

However, if you fail to load just a part of a module the errors are a lot less obvious. In this case the error doesn't appear until the missing component is actually needed; everything up to that point will run just fine. The kind of error message you will see is something like:

Error: [$injector:unpr] Unknown provider: productsProvider <- products

The error in itself is clear enough, but discovering it might not be as easy. If the error occurs in a part of the application that is not used often it might go completely unnoticed.

 

My rule of thumb: always define a complete AngularJS module in one JavaScript file.

 

Want to split the functionality into multiple files? By all means go ahead, but make sure to do so in a new module and use module dependencies to make sure everything is loaded right at application start time. And as angular.module("module") is only required to load a module defined in another file, there really should almost never be a need to use it.
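A small sketch of that approach, with illustrative names that match the error messages above:

// data.js - a complete module defined in a single file
angular.module('mainApp.data', [])
  .factory('products', function() {
    return [];
  });

// app.js - the main module declares a dependency on it
angular.module('mainApp', ['mainApp.data']);

// If data.js is not loaded at all, the very obvious
// [$injector:modulerr] error appears right at startup.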

Enjoy!