What should be in a Single Page Application?

Single Page Applications (SPAs) are really popular these days. That is understandable, as React, Angular, and similar libraries with browser-based routing make them quite easy for developers to build. End users like SPAs as well because navigating around is really fast, much faster than in traditional browser-based applications that do lots of postbacks. So everyone is happy, right?

Well, everyone is happy if SPAs are done right, but quite often they are not.

 

The problem with SPAs

The problem with SPAs is quite typical:

If a bit of something is good then more of the same must be better.

This comes up with lots of things, not just SPAs, and it's usually wrong.

The first reason developers notice that creating one big SPA is an issue is that the load time of the initial page starts taking way too long. After all, everything loads up front when the user first opens the application. But then they learn about Webpack bundle splitting and lazy loading, and the problem seems to go away. Depending on the technology used and the application architecture, lazy loading can be really simple. Read this blog post for an example of how to do so in Angular.

The real reason one large SPA is wrong is coupling. If you build your whole application as one SPA then everything is coupled together, and making a small change in one obscure part of your application could have a global effect. With separate pages the chance of an accidental global change is much smaller: it can only happen if the code is actually loaded on another page.

 

So should we stop building SPAs?

No, absolutely not!

But we should stop making them as big as some people are doing.

Instead, split your application into functional parts. Turn each functional part into its own SPA and use normal browser navigation to navigate between them. These SPAs are typically called mini SPAs, as many of them together form the complete application.

 

How big should a mini SPA be?

That is an easy one: as big as it needs to be and no bigger than that.

Which is still kind of vague of course.

In general, a mini SPA should be about one piece of functionality. Or, in case you are into DDD, one bounded context is the right size.

What that means in practice is that you split your application into functional parts. Suppose for a moment you are building a large sales application. Then you probably need to build a customer management piece, an article management part, an order entry part, and finally an order fulfillment part. In reality there will be more, but as an example these four are enough. They are quite distinct: each has its own job to do, and end users are not likely to switch between them all the time to get their job done.

Sure, there is overlap. Both order entry and order fulfillment will need to use customer and article data. But the shape and form of that data will most likely be different and almost certainly read-only. So while they have a dependency on the data from the other parts, they don't need the actual code from the other modules to function.


This makes for a great split. Create four different mini SPAs and add a shell application so everything feels like a single application to the end user. The navigation inside a mini SPA is really fast; it is a SPA after all. The navigation between different modules is a bit slower, but far less frequent. And the use of HTTP caching or, in newer browsers, a ServiceWorker will make even that quite fast.

Enjoy building your mini SPAs!

Lazy loading and Angular routing

One problem with creating a Single Page Application (SPA) is that you load the entire application when the user first starts it, even though most of it may never be used. Sure, navigation is nice and fast after the initial load, but the initial load itself slows down as the application grows. That might be worth it if the user navigates to all, or most, of the routes loaded. But when most routes are never activated it is just a big waste. Fortunately, with Angular this is easy enough to fix with lazy loading. And lazy loading is really easy to set up in Angular as well.

BTW: this blog post is about Angular 2, but as we are not supposed to use the 2 anymore, it's just Angular.

 

Eager loading of routes

Below is an example of eager loading with Angular routes. Notice there is no HTTP traffic at all when I click on the different links and different routes are activated. Of course this is a really simple example; in a real application you would most likely be fetching some data using an AJAX call.

[Screen recording: clicking the route links activates different routes without any additional network requests]

 

So what code did I use here?

Most of it was just generated with the Angular CLI. You can see all the code here, but it boils down to the main AppModule with two sub modules, Mod1Module and Mod2Module, both generated with the CLI, each with a single route and component added. Mod1Module is about books and Mod2Module is about authors. Below is Mod1Module as an example.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { Routes, RouterModule } from '@angular/router';

import { BooksComponent } from './books/books.component';

const routes: Routes = [
  { path: 'books', component: BooksComponent },
];

@NgModule({
  imports: [
    CommonModule,
    RouterModule.forChild(routes)
  ],
  declarations: [BooksComponent],
})
export class Mod1Module { }

 

The AppModule is also quite straightforward. It adds the two sub modules and initializes the routing module. As both routes used are defined in the sub modules, the routing table is actually empty here.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
import { Routes, RouterModule } from '@angular/router';

import { AppComponent } from './app.component';

import { Mod1Module } from './mod1/mod1.module';
import { Mod2Module } from './mod2/mod2.module';

const routes: Routes = [];
const routerModule = RouterModule.forRoot(routes);

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    routerModule,
    Mod1Module,
    Mod2Module
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

The view for the main AppComponent is also quite simple. There are two anchor tags using the RouterLink directive to let the user navigate. Finally there is the RouterOutlet directive to show the route component's template.

<h1>
  {{title}}
</h1>

<nav>
  <a routerLink="/books">Books</a>
  <a routerLink="/authors">Authors</a>
</nav>

<hr />

<router-outlet>
</router-outlet>

 

Running this with ng serve works, and as you can see above the application loads everything at startup; no additional requests are made when we click on the route links.

 

Switching to lazy loading

It turns out that switching to lazy loading is really simple and just requires a few small changes.

We want the sub modules to be loaded only when needed, so the main routes need to be defined in AppModule instead. In both Mod1Module and Mod2Module we need to change the route path to an empty string. That is the only change here.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { Routes, RouterModule } from '@angular/router';

import { BooksComponent } from './books/books.component';

const routes: Routes = [
  { path: '', component: BooksComponent },
];

@NgModule({
  imports: [
    CommonModule,
    RouterModule.forChild(routes)
  ],
  declarations: [BooksComponent],
})
export class Mod1Module { }

 

The majority of the work needs to be done in AppModule. Still, it is just a few lines, and most of it is deleting code.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
import { Routes, RouterModule } from '@angular/router';

import { AppComponent } from './app.component';

const routes: Routes = [
  { path: 'books', loadChildren: './mod1/mod1.module#Mod1Module' },
  { path: 'authors', loadChildren: './mod2/mod2.module#Mod2Module' },
];
const routerModule = RouterModule.forRoot(routes);

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    routerModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

 

Note that both the TypeScript imports and the module imports for Mod1Module and Mod2Module are gone. New are the two routes where books and authors are defined. Instead of specifying the component to load, there is a string './mod1/mod1.module#Mod1Module' which points to the module file and the module class to load. This is a string because we don't want to import the type; the modules should stay separate and only be loaded when needed. The documentation here is still a bit sparse, as it points to the LoadChildren section which points back to Routes. Oh well, the documentation is open source, so I guess I should open a pull request.

Anyway, with these small changes both Mod1Module and Mod2Module are lazily loaded when first needed and not on the initial page load. Below you can see 0.chunk.js and 1.chunk.js being loaded the first time I click on a link to the respective module.

[Screen recording: 0.chunk.js and 1.chunk.js are each loaded the first time their route is activated]

You can see the complete commit with all the changes here, or the GitHub repository with the final code here.

Enjoy!

Angular 2 and HTTP Request headers

Angular 2 does a lot of awesome work for you. But it doesn’t do everything you might expect. One thing it doesn’t do is set a lot of HTTP request headers.

That makes sense, as Angular doesn't know what you are doing with a request, so you need to set them yourself. But most HTTP requests made with the Http service are going to be for JSON serialized data. The default request, however, adds no headers to let the server know this. The result is that different browsers will make very different requests.

This is an example of an HTTP request made by Chrome:

GET http://localhost:4200/movies.json HTTP/1.1
Host: localhost:4200
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36
Accept: */*
Referer: http://localhost:4200/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8,nl;q=0.6
If-None-Match: W/"3a4c7-1590757b458"
If-Modified-Since: Fri, 16 Dec 2016 11:15:05 GMT

 

And this is the same request made with Firefox:

GET http://localhost:4200/movies.json HTTP/1.1
Host: localhost:4200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://localhost:4200/
Connection: keep-alive
If-Modified-Since: Fri, 16 Dec 2016 11:15:05 GMT
If-None-Match: W/"3a4c7-1590757b458"
Cache-Control: max-age=0

 

Notice the difference between the two requests? Especially the Accept header, where Chrome claims to accept anything while Firefox indicates a preference for HTML or XML.
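To see why that Accept header matters, here is a rough sketch (my own illustration, not Angular code) of how a server might rank the media types in such a header by their q value, with the highest preference first:

```javascript
// Hypothetical helper: parse an Accept header into media types
// sorted by their q (quality) value, highest preference first.
function parseAccept(header) {
  return header
    .split(',')
    .map(part => {
      const [type, ...params] = part.trim().split(';');
      const qParam = params.find(p => p.trim().startsWith('q='));
      const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1;
      return { type: type.trim(), q };
    })
    .sort((a, b) => b.q - a.q);
}

// Firefox's default header prefers HTML over the catch-all */*.
const firefox = parseAccept(
  'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8');
console.log(firefox[0].type); // text/html
```

A server doing content negotiation against this ranking would rather send Firefox an HTML page than JSON, which is exactly why setting an explicit Accept header on our JSON requests is useful.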

 

Adding HTTP headers

Setting HTTP headers for a request is not hard. When calling the Http.get() function you can specify the headers and you are all good.

@Injectable()
export class MoviesService {

  constructor(private http: Http) { }

  getMovies(): Observable<Movie[]> {
    var options = new RequestOptions({
      headers: new Headers({
        'Accept': 'application/json'
      })
    });

    return this.http
      .get('/movies.json', options)
      .map(resp => resp.json());
  }
}

 

However, doing this in every service that makes an HTTP request is rather tedious and easy to forget, so there must be an easier way.

 

BaseRequestOptions and dependency injection to the rescue

 

As it happens, Angular uses the BaseRequestOptions type as the default for all options. So if we can change this default we are good to go. And that is exactly what dependency injection lets us do.

First we need to define our own default request options class with whatever settings you would like. In this case I am just adding two headers.

@Injectable()
export class DefaultRequestOptions extends BaseRequestOptions {
  headers = new Headers({
    'Accept': 'application/json',
    'X-Requested-By':'Angular 2',
  });
}

 

Next we configure the DI provider to use our class instead of the default.

@NgModule({
  // Other settings
  providers: [
    MoviesService,
    {provide: RequestOptions, useClass: DefaultRequestOptions }
  ],
})
export class AppModule { }

 

And we are good to go, with every Http request using our two default headers. Here is the example from Chrome.

GET http://localhost:4200/movies.json HTTP/1.1
Host: localhost:4200
Connection: keep-alive
Cache-Control: max-age=0
Accept: application/json
X-Requested-By: Angular 2
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36
Referer: http://localhost:4200/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8,nl;q=0.6
If-None-Match: W/"3a4c7-1590757b458"
If-Modified-Since: Fri, 16 Dec 2016 11:15:05 GMT

Adding dynamic HTTP headers

These headers are great, but there is one limitation: they are always the same. For some headers, for example with authentication, you might want to control the actual values at request time. It turns out this isn't very hard either. Each request merges the options, and thus the headers, from the current request with the default request options using the merge() function. As we can override this function, we can add whatever dynamic header we want.

@Injectable()
export class DefaultRequestOptions extends BaseRequestOptions {
  headers = new Headers({
    'Accept': 'application/json',
    'X-Requested-By':'Angular 2',
  });
 
  merge(options?: RequestOptionsArgs): RequestOptions {
    var newOptions = super.merge(options);
    newOptions.headers.set('X-Requested-At', new Date().toISOString());
    return newOptions;
  }
}

 

Sweet!

Creating a React based Chrome extension

Creating Chrome extensions is quite easy. In fact it is so easy I found it hard to believe how quickly I had a sample up and running. And given how useful Chrome extensions can be, I wondered how hard it would be to create one using React. It turns out that it is easy.

 

Creating a basic React app

To get started I created a React app using create-react-app. This gave me a working React application. To turn an HTML page with JavaScript into a Chrome extension you need to add a manifest.json file. The Chrome developer Getting Started tutorial has a nice template to start from. As this file needs to be in the root of the folder you deploy as an extension, I added it to the public folder. As it needs a PNG to display, I downloaded logo_small.png from the React GitHub repository and also added that to the public folder. After updating the page to open to index.html I ended up with the following manifest.json:

{
  "manifest_version": 2,

  "name": "Demo React-Chrome extension",
  "description": "This extension shows how to run a React app as a Chrome extension",
  "version": "1.0",

  "browser_action": {
    "default_icon": "logo_small.png",
    "default_popup": "index.html"
  }
}

This is already enough, but it is helpful to give the window a specific size. If you don't, the window will be as small as possible, which isn't nice in this case. Again this is easy to do by adding a height and width to the body tag in index.css.

body {
  margin: 0;
  padding: 0;
  font-family: sans-serif;

  /* Added body dimensions */
  height: 300px;
  width: 250px;
}
A Chrome extension consists of static files that get packed up. Again easy to do: just run npm run build and the resulting build folder contains exactly what you need.
 
 

Testing the extension locally

Doing a local test with the extension is easy. Open up Chrome and select Settings/Extensions, or just navigate to chrome://extensions/. At the top right there is a Developer mode checkbox. Make sure it's checked.

[Screenshot: the Chrome extensions page with Developer mode enabled]

Next you can drag the build folder into the extensions window. Now you should see the extension, with the React icon, appear in the top bar of Chrome.

[Screenshot: the React icon in the Chrome toolbar]

When you click the React icon the extension starts. You will see the default create-react-app "Welcome to React" home screen, complete with its animation, just like you would see it in the main browser window.

[Screenshot: the create-react-app welcome screen running as an extension popup]

How cool is that?

 

 

Deploying to the Chrome web store

Publishing the extension to the Chrome web store is easy as well. You will need to create an account for a small one-time fee. Once your account is set up, just follow these steps and you will be done in no time. You can install and try this demo extension here.

 

The complete source code for the plugin can be found here.

Bring your own React

React is a great UI library from Facebook that works well for creating fast browser-based user interfaces. Working with it is quite easy, and once you learn the basics you can be quite productive. But to create great applications it is helpful to understand some of the internals of React. That is where things become more complex. The React source code is not easy to understand. There are a lot of performance optimizations that make the code harder to read, and a lot of browser-related issues: small differences that add a lot more complexity. Another reason is that React is not just for browser-based applications and their DOM; it also targets other platforms like React Native.

When going through the original source code is really hard, as it is with React, but understanding the choices made is still beneficial, there is a good alternative: create your own simplified implementation. The goal is not to create a new UI library; there are simpler alternatives out there like Preact. The goal is just a teaching tool to better understand React.

 

To JSX or not to JSX?

Using JSX to write React code is not required, but it is the de-facto standard way of writing React. It makes code a lot easier to read compared to the plain JavaScript style. Fortunately, JSX is just a format that can be used with different UI libraries. Transpiling JSX into JavaScript isn't even done by the React team these days. Instead that is left to Babel, which is pretty much the standard for transpiling ECMAScript 2015 and JSX.

The way to do this is in fact quite simple. There is a Babel plugin called transform-react-jsx; this is the normal way to transpile JSX code. By default it turns JSX markup elements into React.createElement() calls, yet by specifying a pragma option you can make it output anything you want. In this case I am going to replace React.createElement() with my own ByoReact.createElement() using the following .babelrc file.

{
  "presets": ["es2015", "stage-0", "react"],
  "plugins": [
    ["transform-react-jsx", {
      "pragma": "ByoReact.createElement"
    }]
  ]
}

This will allow me to use any new ECMAScript feature and transpile JSX code to my own library.
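To make the transformation concrete, here is a small sketch of what the transpiled output looks like. The stand-in createElement below returns plain objects instead of DOM nodes (my own simplification for illustration), so we can inspect the call Babel would emit:

```javascript
// A stand-in createElement that records its arguments as a plain
// object, so we can inspect what the transpiled JSX calls look like.
const ByoReact = {
  createElement(tag, props, ...children) {
    return { tag, props, children };
  },
};

// The JSX  <div id="greeting">Hello world</div>  is transpiled by
// transform-react-jsx (with our pragma) into this function call:
const element = ByoReact.createElement('div', { id: 'greeting' }, 'Hello world');

console.log(element.tag);      // div
console.log(element.children); // [ 'Hello world' ]
```

So the pragma option only changes the name of the function being called; everything else about the JSX transformation stays the same.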

 

The Hello World of Bring Your Own React

Most development starts with Hello World and there is no reason not to start there. The first version of the code is just going to render the following:

[Screenshot: the text "Hello world" rendered in the browser]

Not impressive, but we have to start somewhere.

The code to render this Hello World is as follows:

import ByoReactDOM from '../../src/bring-your-own-react-dom';
import ByoReact from '../../src/bring-your-own-react'; // eslint-disable-line no-unused-vars

class HelloWorld extends ByoReact.Component {
  render() {
    return <div>Hello world</div>;
  }
}

ByoReactDOM.render(<HelloWorld />,
  document.getElementById('app'));

Doing the minimal required mostly means implementing ByoReact.createElement(). React itself uses a virtual DOM, but in this case I am just going to stick with the real browser DOM. This will change soon enough, but it is a nice start. The function is passed three parameters: the tag to render, which can be an HTML tag name or a child component; the properties, which we will ignore for now; and the list of child components, each of which can be either a string literal or another component.

The code for this is quite simple and always returns an HTML element:

const createElement = (tag, props, ...children) => {
  let result;
  if (typeof tag === 'string') {
    result = document.createElement(tag);
  } else {
    const component = new tag(); // eslint-disable-line new-cap
    result = component.render();
  }

  for (const child of children) {
    if (typeof child === 'string') {
      const textNode = document.createTextNode(child);
      result.appendChild(textNode);
    } else {
      result.appendChild(child);
    }
  }

  return result;
};

The base class Component is there but as it doesn’t contain any functionality yet there is not much to see. Again this will change as we get further along.

This leaves rendering the <HelloWorld /> component in the browser using ByoReactDOM.render(). Again there is little to this yet, as ByoReact.createElement() already returns a DOM object.

const render = (reactElement, domContainerNode) => {
  domContainerNode.innerHTML = reactElement.outerHTML; // eslint-disable-line no-param-reassign
};
 
When we switch to a virtual DOM and to updating existing UI components, this will become a lot more complex. It will basically trigger the reconciliation process: the complex logic that determines the difference between the previous and next DOM and applies it as efficiently as possible.
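To give a feel for what reconciliation involves, here is a deliberately naive sketch (my own illustration, not React's actual algorithm) that diffs two plain-object virtual trees and collects the operations needed to turn one into the other:

```javascript
// Naive tree diff: compare two virtual nodes (plain objects with a
// tag and a children array, or plain strings for text nodes) and
// collect the operations needed to turn the old tree into the new.
function diff(oldNode, newNode, path = 'root') {
  if (oldNode === undefined) return [{ op: 'insert', path, node: newNode }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    return oldNode === newNode ? [] : [{ op: 'replace', path, node: newNode }];
  }
  if (oldNode.tag !== newNode.tag) return [{ op: 'replace', path, node: newNode }];

  // Same tag: recurse into the children and collect their changes.
  const changes = [];
  const length = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < length; i += 1) {
    changes.push(...diff(oldNode.children[i], newNode.children[i], `${path}/${i}`));
  }
  return changes;
}

const before = { tag: 'div', children: ['Hello world'] };
const after = { tag: 'div', children: ['Hello there', { tag: 'hr', children: [] }] };
console.log(diff(before, after));
```

Real reconciliation also has to handle props, keys, and component state, and apply the collected changes to the actual DOM, but the core idea of walking two trees and emitting a minimal set of changes is the same.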
 
You can browse the complete source code, including unit tests, here.
 
Enjoy!

Introducing the React Tutorial

React is hot and it seems that almost every front-end web developer wants a piece of it. Not surprising, maybe, because Facebook created and open sourced React. And React not only powers the Facebook website but also many others, like Netflix and Airbnb.

Because I have been using and teaching React for the last year, I decided to try something a bit bigger. If you want to learn React, I want to help you with a series of online video tutorials. Each video covers one part, and the whole series will give you a deep understanding of React. Of course this takes quite some work, so I decided to start a Kickstarter campaign to fund the whole project. You can find the Kickstarter project here.

If you become one of the backers you can get early access to the videos if you want to. All you need to do is choose the appropriate backer level. Regardless of the level at which you back me, you will get access to the videos before those who buy after the Kickstarter campaign finishes. And not just earlier access: you will also pay less :-).

http://bit.ly/the-react-tutorial

Turbocharging Docker build

Building a Docker image can take a bit of time depending on what you have to do. Especially when you have to do something like DNU Restore, DotNet Restore, NPM Install or NuGet Restore, builds can become slow because packages might have to be downloaded from the internet.

Take the following Dockerfile which does a DNU Restore.

FROM microsoft/aspnet:1.0.0-rc1-update1-coreclr

MAINTAINER Maurice de Beijer <maurice.de.beijer@gmail.com>

COPY . ./app

WORKDIR ./app
RUN dnu restore

EXPOSE 5000

CMD ["--server.urls", "http://*:5000"]
ENTRYPOINT ["dnx", "web"]

Running the Docker build multiple times without any changes is quite fast. To time it I am using the command:

time docker build -t dotned .

This reports it takes between 1.3 and 1.5 seconds on my aging laptop. Not too bad really.
 
Unfortunately this changes quite a bit when I make a change to the source code of the application. Just adding some insignificant whitespace slows the build from 1.5 seconds to 58 seconds, which is quite a bit of time to wait before being able to run the container.
 
The reason for this slowdown is that Docker has to do a lot more work. When you build a Docker container for the second time, Docker creates a layer for each command executed, and each layer is cached to be reused. But if a cached layer depends on another layer that has changed, it can't be reused anymore. This means that once the source code changes, the result of the COPY command is a different layer and the dnu restore layer has to be recreated, which takes a long time.
 
A much faster approach is to copy just the project.json file so we can do a dnu restore before copying the rest of the source code. With this approach Docker builds are down to quite a reasonable 3.3 seconds and only take a long time when there is a change to the project.json file, something that should not happen very often. The functionally identical but much faster Dockerfile looks like this:
 
FROM microsoft/aspnet:1.0.0-rc1-update1-coreclr

MAINTAINER Maurice de Beijer <maurice.de.beijer@gmail.com>

COPY ./project.json ./app/

WORKDIR ./app
RUN dnu restore

COPY . ./app

EXPOSE 5000

CMD ["--server.urls", "http://*:5000"]
ENTRYPOINT ["dnx", "web"]

Enjoy!
 
 

JavaScript functional goodness


Using some functional principles and immutable data can really make your JavaScript a lot better and easier to test. While using immutable data in JavaScript seems really complex, it turns out it isn't that hard to get started with if you are already using Babel. And while libraries like Immutable.js are highly recommended, we can start even simpler.

Babel does a lot of things for you, as it lets you use all sorts of next generation JavaScript, or ECMAScript 2015 to be more correct. And it is quite easy to use with whatever build pipeline you have, or even as a standalone transpiler if you are not using a build pipeline yet.

When you want to use immutable data, the functional array functions map() and filter() as well as spread syntax are really useful. Here are a few examples to get you started.

Changing a property on an object

var originalPerson = {
  firstName: 'Maurice',
  lastName: ''
};

var newPerson = {
  ...originalPerson,
  lastName: 'de Beijer'
};

console.log(newPerson);

The ...originalPerson is using spread properties, which expand all properties of the object. The lastName: 'de Beijer' comes after it, so it overrules the lastName from the originalPerson object. The result is a new object.

{
  firstName: "Maurice",
  lastName: "de Beijer"
}

Simple and easy. And as we are never changing objects we can replace the var keyword with the new const keyword to indicate variables are never reassigned.

const originalPerson = {
  firstName: 'Maurice',
  lastName: ''
};

const newPerson = {
  ...originalPerson,
  lastName: 'de Beijer'
};

console.log(newPerson);

 

Adding something to an array

Usually, when adding something to an array, either an index assignment or the push() function is used. But both mutate the existing array instead of creating a new one, and with the pure functional approach we do not want to modify the existing array but create a new one instead. Again, this is really simple using spread syntax.

const originalPeople = [{
  firstName: 'Maurice'
}];

const newPeople = [
  ...originalPeople,
  {firstName: 'Jack'}
];

console.log(newPeople);

In this case we end up with a new array with two objects:

[{
  firstName: "Maurice"
}, {
  firstName: "Jack"
}]

 

Removing something from an array

Deleting from an array is just as simple using the array filter() function.

const originalPeople = [{
  firstName: 'Maurice'
}, {
  firstName: 'Jack'
}];

const newPeople = originalPeople.filter(p => p.firstName !== 'Jack');

console.log(newPeople);

console.log(newPeople);

And we end up with an array with just a single person.

[{
  firstName: 'Maurice'
}]

 

Updating an existing item in an array

Changing an existing item is just as easy when we combine spread properties with the array map() function.

const originalPeople = [{
  firstName: 'Maurice'
}, {
  firstName: 'Jack'
}];

const newPeople = originalPeople.map(p => {
  if (p.firstName !== 'Jack') {
    return p;
  }

  return {
    ...p,
    firstName: 'Bill'
  };
});

console.log(newPeople);

And that is all it takes to change Jack to Bill.

[{
  firstName: "Maurice"
}, {
  firstName: "Bill"
}]

 

Really nice and easy, and it makes for very readable code once you are familiar with the new spread syntax.
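As a closing illustration, these techniques combine naturally into a single immutable update. The names used here are made up for the example:

```javascript
// Combine filter(), map() and spread syntax for one immutable update:
// remove one person, rename another, and add a new one, all without
// touching the original array.
const originalPeople = [
  { firstName: 'Maurice' },
  { firstName: 'Jack' },
  { firstName: 'Jill' },
];

const newPeople = [
  ...originalPeople
    .filter(p => p.firstName !== 'Jill')
    .map(p => (p.firstName === 'Jack' ? { ...p, firstName: 'Bill' } : p)),
  { firstName: 'Anna' },
];

console.log(newPeople.map(p => p.firstName)); // [ 'Maurice', 'Bill', 'Anna' ]
console.log(originalPeople.length);           // 3
```

The original array still has its three original people, which is exactly the point: every step produces a new value, so nothing that already held a reference to originalPeople can be surprised by a change.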