Introducing the React Tutorial

React is hot, and it seems almost every front-end web developer wants a piece of it. That is perhaps not surprising, given that Facebook created and open sourced React, and that it powers not only the Facebook website but also many others, like Netflix and Airbnb.

Because I have been using and teaching React for the last year, I decided to try something a bit bigger. If you want to learn React, I want to help you with a series of online video tutorials. Each video covers one part, and the whole series will give you a deep understanding of React. Of course this takes quite some work, so I decided to start a Kickstarter campaign to fund the whole project. You can find the Kickstarter project here.

If you become one of the backers you can get early access to the videos if you want to. All you need to do is choose the appropriate backer level. Regardless of the level at which you back me, you will get access to the videos before anyone who buys after the Kickstarter campaign finishes. And not just earlier access: you will also pay less :-).

http://bit.ly/the-react-tutorial

Turbocharging Docker build

Building a Docker image can take a bit of time, depending on what you have to do. Especially when you have to do something like a dnu restore, dotnet restore, npm install or NuGet restore, builds can become slow because packages might have to be downloaded from the internet.

Take the following Dockerfile, which does a dnu restore.

FROM microsoft/aspnet:1.0.0-rc1-update1-coreclr

MAINTAINER Maurice de Beijer <maurice.de.beijer@gmail.com>

COPY . ./app

WORKDIR ./app
RUN dnu restore

EXPOSE 5000

CMD ["--server.urls", "http://*:5000"]
ENTRYPOINT ["dnx", "web"]

Running the Docker build multiple times without any changes is quite fast. To time it I am using the command:

time docker build -t dotned .

This reports it takes between 1.3 and 1.5 seconds on my aging laptop. Not too bad really.
 
Unfortunately this changes quite a bit when I make a change to the source code of the application. Just adding some insignificant whitespace slows the build from 1.5 seconds to 58 seconds, which is quite a bit of time to wait before being able to run the container.
 
The reason for this slowdown is that Docker has to do a lot more work. When you build a Docker image, Docker creates a layer for each command executed, and each layer is cached so it can be reused on the next build. But if a cached layer depends on another layer that has changed, it can’t be reused anymore. This means that once the source code changes, the result of the COPY command is a different layer, so the dnu restore layer has to be recreated, which takes a long time.
 
A much faster approach is to copy just the project.json file so we can do a dnu restore before copying the rest of the source code. With this approach Docker builds are down to quite a reasonable 3.3 seconds and only take a long time when there is a change to the project.json file, something that should not happen very often. The functionally identical but much faster Dockerfile looks like this:
 
FROM microsoft/aspnet:1.0.0-rc1-update1-coreclr

MAINTAINER Maurice de Beijer <maurice.de.beijer@gmail.com>

COPY ./project.json ./app/

WORKDIR ./app
RUN dnu restore

COPY . ./app

EXPOSE 5000

CMD ["--server.urls", "http://*:5000"]
ENTRYPOINT ["dnx", "web"]

Enjoy!
 
 

JavaScript functional goodness


Using some functional principles and immutable data can really make your JavaScript a lot better and easier to test. While using immutable data in JavaScript sounds like something really complex, it turns out it really isn’t that hard to get started with if you are already using Babel. And while libraries like Immutable.js are highly recommended, we can start even simpler.

Babel does a lot for you, as it lets you use all sorts of next-generation JavaScript, or ECMAScript 2015 to be more precise. And it is quite easy to use, whatever your build pipeline is, or even as a standalone transpiler if you are not using a build pipeline yet.

When you want to use immutable data, the functional array functions map() and filter(), as well as spread properties, are really useful. Here are a few examples to get you started.

Changing a property on an object

var originalPerson = {
  firstName: 'Maurice',
  lastName: ''
};

var newPerson = {
  ...originalPerson,
  lastName: 'de Beijer'
};

console.log(newPerson);

The ...originalPerson uses spread properties to expand all the properties of originalPerson. The lastName: 'de Beijer' comes after it, so it overrules the lastName from the originalPerson object. And the result is a new object.

{
  firstName: "Maurice",
  lastName: "de Beijer"
}

Simple and easy. And as we are never changing objects, we can replace the var keyword with the new const keyword to indicate the variables are never reassigned.

const originalPerson = {
  firstName: 'Maurice',
  lastName: ''
};

const newPerson = {
  ...originalPerson,
  lastName: 'de Beijer'
};

console.log(newPerson);

 

Adding something to an array

Usually, when adding something to an array, either an index assignment or the push() function is used. But both mutate the existing array instead of creating a new one, and with the pure functional approach we do not want to modify the existing array but create a new one instead. Again, this is really simple using the spread operator.

const originalPeople = [{
  firstName: 'Maurice'
}];

const newPeople = [
  ...originalPeople,
  {firstName: 'Jack'}
];

console.log(newPeople);

In this case we end up with a new array with two objects:

[{
  firstName: "Maurice"
}, {
  firstName: "Jack"
}]

 

Removing something from an array

Deleting from an array is just as simple using the array filter() function.

const originalPeople = [{
  firstName: 'Maurice'
}, {
  firstName: 'Jack'
}];

const newPeople = originalPeople.filter(p => p.firstName !== 'Jack');

console.log(newPeople);

And we end up with an array with just a single person.

[{
  firstName: 'Maurice'
}]

 

Updating an existing item in an array

Changing an existing item is just as easy when we combine spread properties with the array map() function.

const originalPeople = [{
  firstName: 'Maurice'
}, {
  firstName: 'Jack'
}];

const newPeople = originalPeople.map(p => {
  if (p.firstName !== 'Jack') {
    return p;
  }

  return {
    ...p,
    firstName: 'Bill'
  };
});

console.log(newPeople);

And that is all it takes to change Jack to Bill.

[{
  firstName: "Maurice"
}, {
  firstName: "Bill"
}]

 

Really nice and easy, and it makes for very readable code once you are familiar with the new spread syntax.

Hosting an ASP.NET 5 site in a Linux based Docker container

Note: this post is based on ASP.NET 5 beta 8, so the samples might not work for you anymore.

Docker has become a popular way of hosting Linux-based applications over the last few years. With Windows Server 2016, Docker containers are also coming to Windows and are likely going to be a popular hosting option there as well. But as Windows support is only in CTP form right now, and ASP.NET is also moving to Linux with the CoreCLR, I decided to try to run an ASP.NET web site in a Linux-based container.

 

Getting started with an ASP.NET 5 application

To get started I installed ASP.NET 5 beta 8 using these steps. Once these were installed I created a new ASP.NET MVC project. Just to show where the application is running, I added a bit of code to the About page. When running from Visual Studio 2015 using IIS Express it looks like this:

[Screenshot: the About page running from Visual Studio 2015 with IIS Express]

The extra code in the HomeController:

public class HomeController : Controller
{
    private readonly IRuntimeEnvironment _runtimeEnvironment;

    public HomeController(IRuntimeEnvironment runtimeEnvironment)
    {
        _runtimeEnvironment = runtimeEnvironment;
    }

    public IActionResult About()
    {
        ViewData["Message"] = "Your application is running on:";
        ViewData["OperatingSystem"] = _runtimeEnvironment.OperatingSystem;
        ViewData["OperatingSystemVersion"] = _runtimeEnvironment.OperatingSystemVersion;
        ViewData["RuntimeType"] = _runtimeEnvironment.RuntimeType;
        ViewData["RuntimeArchitecture"] = _runtimeEnvironment.RuntimeArchitecture;
        ViewData["RuntimeVersion"] = _runtimeEnvironment.RuntimeVersion;

        return View();
    }
}

With the following Razor view:

@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>
<h3>@ViewData["Message"]</h3>

<h4>OperatingSystem: @ViewData["OperatingSystem"]</h4>
<h4>OperatingSystemVersion: @ViewData["OperatingSystemVersion"]</h4>
<h4>RuntimeType: @ViewData["RuntimeType"]</h4>
<h4>RuntimeArchitecture: @ViewData["RuntimeArchitecture"]</h4>
<h4>RuntimeVersion: @ViewData["RuntimeVersion"]</h4>

<p>Use this area to provide additional information.</p>

 

So far so good, and we can see the application runs just fine on Windows using the full CLR. From Visual Studio we can also run the application using Kestrel with the CoreCLR, as shown below.

[Screenshot: the About page running on Kestrel with the CoreCLR]

 

Running the application in a Docker container

Having installed Docker Machine, we can also run this same website in a Docker container on Linux. The first thing we need to do is create a Dockerfile with the following contents:

FROM microsoft/aspnet:1.0.0-beta8-coreclr

COPY ./src/WebApplication3 ./app

WORKDIR ./app
RUN dnu restore

ENTRYPOINT dnx web --server.urls http://*:80

 

With this container definition in place we need to build the Docker image itself using the docker build -t web-application-3 . command. This is executed from the Docker terminal window in the main folder of our web application, where the Dockerfile is located. This builds our Docker image, which we can now see when running docker images.

[Screenshot: docker images listing the new web-application-3 image]

With this new image we can run it using: docker run -d -p 8080:80 web-application-3

[Screenshot: the docker run command starting the container]

With the application running we can navigate to http://192.168.99.100:8080/Home/About, where 192.168.99.100 is the IP address of the Linux virtual machine running the Docker daemon.

[Screenshot: the About page served from the Linux-based Docker container]

Sweet!

What is the right level of maturity?

In my previous blog post I explained the Data Storage Maturity Model and how you would get a much more mature and capable application if you used Event Sourcing. That blog post did bring up some interesting questions.

 

Should I always use Event Sourcing?

Given that Event Sourcing is at the top of the pyramid, you could conclude that you should always aim for the top and use Event Sourcing. Aiming high is a noble cause and sounds like the right thing, but it turns out that it isn’t that simple.

If your application is relatively simple and you don’t have much of a domain model, there is little point in Event Sourcing your data storage. For example, a to-do application probably has little reason to do so. Maybe if you want to do advanced analysis over the history of to-do items there is a need, but in most cases all you need to do is persist a few items of data to some persistent store for later retrieval. That means level 1, CRUD with structured storage, will do quite well, while adding Event Sourcing would just complicate things.

There is also a differentiation to be made inside applications. Suppose you are building a complex banking app. In that case Event Sourcing would make perfect sense for your domain layer. However, there is more than just your domain layer. Every application has utility data, for example a list of countries in the world. This is a mostly static reference table, and using Event Sourcing for data like that would be over-engineering. Again, just using a CRUD store for this data would be more than enough, even though all financial transaction data is stored using Event Sourcing.

So I guess the answer is: it depends, but probably not for everything in your application, or maybe not at all 🙂

 

But what about Data Access?

Another question that came up is which data access technology to use. Again, this is a hard question to give a simple answer to. It also depends on whether you are looking at the Event Sourced domain model or the projected Read Model.

For the Event Sourcing side I really like Greg Young’s GetEventStore, which can be used as a standalone server or embedded in your own process. You can use its HTTP API, but as I mainly use .NET on the server its native C# client is the way to go.
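For readers going the HTTP route instead, here is a rough Node.js sketch of appending an event to a stream. The stream name, event type and payload are made up for illustration, and it assumes a local GetEventStore instance on the default HTTP port 2113 plus a Node version with a global fetch; the endpoint and media type reflect the version current at the time of writing and may differ in later releases.

const { randomUUID } = require('crypto');

// A single made-up event, wrapped in the array the events media type expects.
const events = [{
  eventId: randomUUID(),
  eventType: 'MoneyDeposited',
  data: { accountId: 42, amount: 100 }
}];

// POST the event to the 'account-42' stream on the local event store.
fetch('http://127.0.0.1:2113/streams/account-42', {
  method: 'POST',
  headers: { 'Content-Type': 'application/vnd.eventstore.events+json' },
  body: JSON.stringify(events)
}).then(response => console.log('Append returned HTTP', response.status));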

For the projected Read Model it really depends on what you are using as the data storage mechanism. In the case of a relational database you could use NHibernate or Entity Framework, but these are probably a bit overkill and will hurt performance. In most cases you will be better off with one of the micro-ORMs out there like Dapper, ServiceStack.OrmLite or something similar.

I prefer using a NoSQL database though, and really like RavenDB and MongoDB. Currently I am using Redis with the ServiceStack.Redis client in a sample project, and that is also working really well for me.

So again it really depends on your preferences, but optimizing for speed and flexibility is a good thing.

 

Enjoy!

Data Storage Maturity Model

There are many ways of storing data when developing applications, some more mature and capable than others. Storing data of some sort in an application is common. Extremely common, in fact, as almost every application out there needs to store data in some way or another. After all, even a game usually stores the user’s achievements.

But it’s not games I am interested in. Sure, they are interesting to develop and play, but most developers I know are busy developing line of business (LOB) applications of some sort or another. One thing line of business applications have in common is that they work with, and usually store, data of some sort.

Data Storage Maturity Model

When looking at data-oriented applications we can categorize data storage architectures based on their characteristics and capabilities.

 

Level 0: Data Dumps

The most basic way of working with data is just dumping whatever the user works with in the UI to some proprietary data file. This typically means we are working with really simple Create, Read, Update, Delete (CRUD) style data entry forms and not even storing the data in a structured way. This is extremely limited in capabilities and should generally be avoided at all costs. Any time you have to work with a slightly larger data set or update the structure, you are in for a world of hurt.

 

Level 1: Structured Storage

At level 1 we are still working with CRUD style data entry forms, but at least we have started using a formal database of some sort. The database can be a relational database like SQL Server or MySQL, but a NoSQL database like MongoDB is equally valid. While the database used allows us to do much more, the user interface is not much better. We are still loading complete objects and storing them in a CRUD fashion. This might work reasonably well in a low usage scenario with a low chance of conflicts, but it is really not suitable for anything more complex than a basic business application. We are only storing the current state of the data, and as the database stores whatever is sent from the UI, or business processing code, there is no sense of meaning to any change made.

 

Level 2: Command Query Responsibility Segregation

When we need to develop better and more complex business applications, we really should use Command Query Responsibility Segregation (CQRS) as a minimum. In this case we separate the read actions from the write actions. We no longer just send an object to be stored from the user interface to the back end; instead we send commands to update the data. These commands should be related to the business actions the application supports. In other words, if a business analyst sees the command names he should be able to make sense of what they do without looking at the code.
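To make the contrast concrete, here is a minimal JavaScript sketch with made-up names (the customer store, command bus and ChangeCustomerAddress command are all hypothetical): instead of persisting a whole edited object, the UI dispatches a command that names the business intent.

// CRUD style: the UI sends back the whole edited object and overwrites it.
const crudUpdate = (database, customer) => {
  database.set(customer.id, customer); // the reason for the change is lost
};

// CQRS style: the UI sends a command whose name captures the business action.
const changeCustomerAddress = (commandBus, customerId, newAddress) => {
  commandBus.send({
    type: 'ChangeCustomerAddress',
    customerId,
    newAddress
  });
};

// Example usage with a trivial in-memory command bus.
const commandBus = { send: command => console.log('Handling', command.type, command) };
changeCustomerAddress(commandBus, 42, '1 Main Street');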

While this is a lot better, we are still only storing the current state of the data. And that is the problem, as it can be very hard to figure out how something got to be in a given state. So if a user detects that something is wrong with the data and suspects a bug in the program, we might have a hard time figuring out how it got to be that way. And once we do, fixing the issue might be extremely hard as well.

There are other limitations to just storing the current state, like not being able to produce reports that are asked for later, or only with great difficulty, or being unable to apply business rules that are altered after the fact. And if you think that doesn’t happen, just try working on a large government project where the slowness of the decision process means that rules are only definitively decided after the fact.

 

Level 3: Event Sourcing

The most advanced level to be working at is Event Sourcing (ES). An event sourced application resembles a CQRS style application in a lot of ways, except for one vital part. With an Event Sourced application we no longer store the current state of the data; instead we store all the events that led up to it. All these events are stored as one big stream of changes and are used to deduce the current state of the data in the application. These events typically never change once written; after all, we don’t change history (although our view of it might change over time). This has some large benefits, as we can now track exactly how the state came to be what it is, making it easier to find bugs. And if the bug is in how we used those business events, we can fix the bug, and often that is enough to deduce the correct state.
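As a minimal JavaScript sketch, with made-up bank account events, here is how the current state can be deduced by folding over the event stream:

// The stored stream of events: plain facts that never change once written.
const events = [
  { type: 'AccountOpened', owner: 'Maurice' },
  { type: 'MoneyDeposited', amount: 100 },
  { type: 'MoneyWithdrawn', amount: 30 }
];

// Deduce the current state by replaying every event in order.
const currentState = events.reduce((state, event) => {
  switch (event.type) {
    case 'AccountOpened':
      return { owner: event.owner, balance: 0 };
    case 'MoneyDeposited':
      return { ...state, balance: state.balance + event.amount };
    case 'MoneyWithdrawn':
      return { ...state, balance: state.balance - event.amount };
    default:
      return state;
  }
}, null);

console.log(currentState); // { owner: 'Maurice', balance: 70 }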

The usual queries done in an application are much harder on an event stream. To fix that, the events are usually projected out to a read model, making querying much easier. This read model is normally stored in some appropriate database, like SQL Server or a NoSQL database, but could also just be kept in memory. However, the event stream is the source of truth, not the projections, as these are just a derived result. This means we can delete all projections and completely rebuild them from the existing events, resulting in much more flexibility. Need to do an expensive query in version two of an application? Just create a projection designed for that purpose and rebuild it from all previously stored events. This is similar to our view of history changing.
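Sticking with the hypothetical bank account events from the previous sketch, a brand new read model for version two of the application is just another projection rebuilt from the same stored events:

// The same stored events as before (repeated so this sketch runs on its own).
const events = [
  { type: 'AccountOpened', owner: 'Maurice' },
  { type: 'MoneyDeposited', amount: 100 },
  { type: 'MoneyWithdrawn', amount: 30 },
  { type: 'MoneyDeposited', amount: 25 }
];

// A new read model, say for an audit screen added in version two,
// listing every deposit: rebuilt in full from the existing events.
const depositReport = events
  .filter(event => event.type === 'MoneyDeposited')
  .map(event => ({ amount: event.amount }));

console.log(depositReport); // [ { amount: 100 }, { amount: 25 } ]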

There are some more benefits to storing events instead of just the current state. We can now do temporal queries, or queries over time, on how the data got to be the way it is. These kinds of queries serve many goals, for example fraud detection. Another possibility is displaying the state at any previous point in time and running reports or analysis on the data as it was then.

 

Conclusion

It’s kind of hard to say at what level you should be working. Level 0, limited as it is, might be appropriate for your application. Lots of applications are at level 1, just basic forms-over-data CRUD applications. In some cases that might be appropriate, but in a lot of cases it is actually suboptimal. Level 2 with CQRS is a pretty sweet place to be: you can capture the business intent with commands and have reasonable flexibility. At level 3 with Event Sourcing you gain a lot of flexibility and strength. If you are building a more complex business application you should be working at this level. But as always there is no free lunch, so don’t go there if the application is really not that complex 🙂

 

Enjoy!

Speeding up your AngularJS applications

In general AngularJS applications are quite fast, especially when compared to more traditional browser-based applications that constantly post back to the server. However, there are always a few things that will help performance and make an application even faster.

 

Disabling Debug Data

Normally AngularJS adds several things, like CSS classes and some scope-related properties, to DOM elements. These are not needed to run the application and are really only there to help development tools like Protractor and Batarang. When the application is in production you can save some overhead by disabling them using the $compileProvider.debugInfoEnabled() function.

demoApp.config(function($compileProvider) {
  $compileProvider.debugInfoEnabled(false);
});

 

Explicit dependency injection annotations

Another option to speed up your application is using explicit dependency injection annotations. If the DI annotations are not present, AngularJS has to parse the functions to find the parameter names, something that can be avoided by adding explicit annotations. The annotations can be added manually, which can be tedious, or automatically using something like ng-annotate with either a Gulp or Grunt task.

Adding the ngStrictDi directive to the same element as the ngApp directive can help you find missing annotations.
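Here is a minimal sketch of both common annotation styles; the controller names and the /api/people endpoint are made up for the example.

// Assuming the application module was created elsewhere,
// e.g. var demoApp = angular.module('demoApp', []);

// Inline array annotation: the dependency names are listed explicitly,
// so AngularJS never has to parse the function to find them.
demoApp.controller('PeopleController', ['$scope', '$http',
  function($scope, $http) {
    $http.get('/api/people').then(function(response) {
      $scope.people = response.data;
    });
  }
]);

// The equivalent $inject annotation, which is what ng-annotate generates.
function PeopleListController($scope, $http) {
  $http.get('/api/people').then(function(response) {
    $scope.people = response.data;
  });
}
PeopleListController.$inject = ['$scope', '$http'];
demoApp.controller('PeopleListController', PeopleListController);

// In the markup, ng-strict-di next to ng-app turns a missing annotation
// into an error instead of a silent fallback to function parsing:
// <body ng-app="demoApp" ng-strict-di> ... </body>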

 

Reducing the number of $apply() calls

Another helpful option is to reduce the number of $apply() calls that result from $http requests finishing. When you do multiple $http requests while a page loads, each will trigger an $apply() call, causing all watches and data bindings to be re-evaluated. By combining these into a single $apply() call for requests that finish at almost the same time, we can increase the load speed of your application, something that can be done using $httpProvider.useApplyAsync().

demoApp.config(function($httpProvider) {
  $httpProvider.useApplyAsync(true);
});

 

Enjoy!