React server-side rendering with Webpack

There are times when the initial blank HTML page that is downloaded for a React application is not ideal. One reason might be Search Engine Optimization, another might be a slow initial render, especially on mobile devices.

Search Engine Optimization (SEO) is a good reason to do server-side rendering. While the Google bot executes JavaScript these days, and can potentially index your React application, other search engine bots don’t. They just see the empty div where your application will be rendered on the client and find nothing worth indexing at all, so you get no traffic from them. If you want to maximize traffic to your React application through SEO, server-side rendering is a must-have.

A slow initial render is another reason to use server-side rendering. This is especially true over slow mobile connections, but it even helps on fast desktops. With a standard React application, the browser downloads the index.html, then parses and renders it. This results in a blank screen as there is no content yet. While this is happening, the React application JavaScript is downloaded, executed and injected into the DOM. Only then is there a meaningful page to display in the browser. With server-side rendering, the initial HTML page already contains all, or most, of the markup, so it can be displayed much sooner. The initial page might not be fully functional yet, but to the user it appears faster.

Of course, it isn’t all good. Server-side rendering means there is more work to be done on the server, and thus a slower response. The page just appears sooner.

 

Doing Server-Side Rendering with Create React App

There are several approaches to doing server-side rendering with a React application generated using Create React App (CRA). One would be to use the babel-node utility from the Babel CLI to parse the JSX and render it at runtime. This is possible, but babel-node is not recommended for production use. Another issue is that common Webpack practices, like importing CSS and images, are not supported by babel-node. You could do without those, but even a standard CRA-generated application uses them and already fails.

A much better approach is to use Webpack to generate two JavaScript bundles: one for server-side rendering and a second for use in the browser. Two different bundles are needed because of the difference in execution environment: one will execute in NodeJS on the server, the other in the browser on the user’s machine.

Webpack actually supports exporting multiple configurations, but as the Webpack config is contained in react-scripts we can’t change it without ejecting. As ejecting adds a lot of maintenance work to my applications, I would rather not do that. A better approach is to take the standard Webpack config and update it to make it suitable for a server-side rendering bundle.

There are just a few changes to be made for this to work. Most of those are about the NodeJS execution environment and a few to prevent some output from being generated.

const HtmlWebpackPlugin = require("html-webpack-plugin");
const ManifestPlugin = require("webpack-manifest-plugin");
const SWPrecacheWebpackPlugin = require("sw-precache-webpack-plugin");

const config = require("react-scripts/config/webpack.config.prod");

config.entry = "./src/index.ssr.js";

config.output.filename = "static/ssr/[name].js";
config.output.libraryTarget = "commonjs2";
delete config.output.chunkFilename;

config.target = "node";
config.externals = /^[a-z\-0-9]+$/;
delete config.devtool;

config.plugins = config.plugins.filter(
  plugin =>
    !(
      plugin instanceof HtmlWebpackPlugin ||
      plugin instanceof ManifestPlugin ||
      plugin instanceof SWPrecacheWebpackPlugin
    )
);

module.exports = config;

Now we can create the SSR bundle using webpack --config ./webpack.ssr.config.js, which can be automated using the build script in the package.json.

{
  "name": "server-side-rendering-with-create-react-app",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "react": "^16.1.1",
    "react-dom": "^16.1.1",
    "react-scripts": "1.0.17"
  },
  "scripts": {
    "start": "react-scripts start",
    "start:ssr": "node ./server/",
    "build": "react-scripts build && npm run ssr",
    "ssr": "cross-env NODE_ENV=production webpack --config ./webpack.ssr.config.js",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject"
  },
  "devDependencies": {
    "cross-env": "^5.1.1"
  }
}

An Express server to serve the React application

Creating a simple NodeJS-based Express server to host and serve the rendered React application is straightforward. This server serves the static files from the build folder as well as the server-rendered React application.

const path = require("path");
const express = require("express");
const serveStatic = require("serve-static");
const reactApp = require("./react-app");

const PORT = process.env.PORT || 3001;
const app = express();

app.use(reactApp);
app.use(serveStatic(path.join(__dirname, "../build")));

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}!`);
});

 

Doing the actual SSR isn’t very hard now. Just load the SSR bundle created above and use the react-dom/server API to render the application. Using the new React 16 renderToNodeStream() API makes this a bit more efficient than the older renderToString() API, and it isn’t much harder to use.

const fs = require("fs");
const path = require("path");
const router = require("express").Router();

const { renderToNodeStream } = require("react-dom/server");
const React = require("react");
const ReactApp = require("../build/static/ssr/main").default;

router.get("/", (req, res) => {
  var fileName = path.join(__dirname, "../build", "index.html");

  fs.readFile(fileName, "utf8", (err, file) => {
    if (err) {
      throw err;
    }

    const reactElement = React.createElement(ReactApp);

    const [head, tail] = file.split("{react-app}");
    res.write(head);
    const stream = renderToNodeStream(reactElement);
    stream.pipe(res, { end: false });
    stream.on("end", () => {
      res.write(tail);
      res.end();
    });
  });
});

module.exports = router;

 

In this case, I added {react-app} as the content of the application div in index.html so it can be replaced on the server with the rendered React application.
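To illustrate how that placeholder works, here is a minimal sketch; the exact markup of the div is an illustrative assumption, not the actual CRA output:

```javascript
// Sketch: splitting the HTML on the {react-app} placeholder yields the
// head and tail that the route handler writes around the streamed React
// output. The markup below is an assumed, simplified index.html.
const file = '<html><body><div id="root">{react-app}</div></body></html>';
const [head, tail] = file.split('{react-app}');

console.log(head); // → '<html><body><div id="root">'
console.log(tail); // → '</div></body></html>'
```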

You can find the sample code on GitHub.

Enjoy.

 

Developing Angular applications using Docker

Using Docker to deploy applications is great but there is so much more you can do with Docker if you want to. And it can solve some interesting problems along the way.

One problem when developing with NPM to manage dependencies is keeping all dependencies in sync. If I pull a repository from GitHub and do an npm install I will get a local copy, on my hard disk, of all dependencies. Fine, that is just what it should do.

But if you pull exactly the same commit 10 minutes later you might end up with a different result. Sure, the package.json is the same, but the semantic version ranges of the packages listed in it, as well as those of dependent packages, might resolve to a different set of packages.

Is NPM doing random things here? No, NPM does exactly the same thing every time; the issue is that package authors might have released a new version in the meantime. If everyone stuck to proper Semantic Versioning and all code was perfectly tested, this would probably be fine. Maybe not deterministic, but okay.

It turns out that not everyone does Semantic Versioning right, sometimes by accident and sometimes by choice; it isn’t the law after all. The result is that the same npm install can have different results and break things. I have seen plenty of CI builds fail all of a sudden because of a buggy package being released on NPM, only to be automagically fixed a day later when the bug was corrected. Of course, that was after wasting our time investigating what was wrong. I know npm shrinkwrap and Yarn are supposed to help here, but we can do better.
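To make the semver point concrete, here is a minimal sketch of how a caret range behaves; this is a simplification for illustration, the real matching rules live in the semver package:

```javascript
// Hypothetical illustration: a caret range like "^1.2.0" accepts any
// version with the same major number that is >= the listed version, so
// two installs days apart can resolve to different packages.
function caretSatisfies(range, version) {
  const base = range.slice(1).split('.').map(Number); // "^1.2.0" -> [1, 2, 0]
  const v = version.split('.').map(Number);
  if (v[0] !== base[0]) return false;                 // major must match
  if (v[1] !== base[1]) return v[1] > base[1];        // newer minor is allowed
  return v[2] >= base[2];                             // newer patch is allowed
}

console.log(caretSatisfies('^1.2.0', '1.2.5')); // → true
console.log(caretSatisfies('^1.2.0', '1.9.0')); // → true  (a new minor slips in)
console.log(caretSatisfies('^1.2.0', '2.0.0')); // → false (major change excluded)
```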

And that is just one of the things you can fix using Docker.

Instead of installing all NPM packages locally (and on the CI server), install them in a Docker image and share that image between developers and the CI build. That way everyone shares exactly the same code, and as a side benefit a docker pull is quite a bit faster than an npm install. On my machine, pulling the image from the Docker Hub takes 23 seconds. Compare that to an npm install that takes 177 seconds, a full 7.5 times longer.

 

Developing an Angular application inside a Docker container

Before starting, make sure you have the Angular CLI installed.

Create a new project using the Angular CLI. Make sure to use the --skip-install option because we don’t want to install the NPM dependencies locally.

ng new docker-demo --skip-install

 

Next add a Dockerfile with the following content:

FROM node:6.10.2-alpine

RUN mkdir -p /app
WORKDIR /app

COPY package.json /app/

RUN ["npm", "install"]

COPY . /app

EXPOSE 4200/tcp

CMD ["npm", "start", "--", "--host", "0.0.0.0", "--poll", "500"]

 

Most of this is quite straightforward. There are two special things to note.

First, only the package.json is copied before npm install is executed. Because of the layered Docker file system, this means that the slowest part of the Docker build will only execute again when the package.json changes.

The second part is starting the container. Using npm start means that the Angular CLI doesn’t need to be installed globally in the container image. The additional parameters ensure that we can access the development web server from outside the container and that changes made to the source code on the host are picked up by the Webpack development server running inside the container.

With that done you can build the container with the following command.

docker build -t docker-demo-dev .

 

As soon as it is built you can run it.

docker run -it --rm -p 4200:4200 -v ${pwd}/src:/app/src docker-demo-dev

 

I am running this from Windows 10 using PowerShell. The -v volume mapping means that changes I make on the host are immediately available inside the Docker container. Open the browser at http://localhost:4200/ and make a change to the title of the AppComponent. You should see the browser automatically update when you save the change.


Great. Push the Docker image to the Docker Hub or another Docker registry and all your coworkers can pull and run the same container image with exactly the same NPM packages. There is no need to update the image until you make a change outside of the ./src folder.

 

Creating a distribution version of your app

Regardless of how you intend to deploy your application you can use the same Docker container image to build a distributable version of your application with a simple command.

docker run -it --rm -v ${pwd}/src:/app/src -v ${pwd}/dist:/app/dist docker-demo-dev npm run build

 

Here I added volume mappings for both the ./src folder as input and the ./dist folder as output. After running this, preferably on the CI server, you will have the ./dist folder just like you would have with a normal build.

 

Want to try it? You can pull this Docker container from the Docker Hub with the command docker pull mauricedb/docker-demo-dev. The source code is on GitHub here.

 

Cool stuff.

What should be in a Single Page Application?

Single Page Applications (SPAs) are really popular these days. That is understandable, as React, Angular and similar libraries with browser-based routing make them quite easy for developers to build. End users like SPAs as well, because navigating around is really fast, much faster than in traditional browser applications that do lots of postbacks. So everyone is happy, right?

Well, everyone is happy if SPAs are done right, but quite often they are not.

 

The problem with SPAs

The problem with SPAs is quite typical:

If a bit of something is good then more of the same must be better.

This comes up with lots of things, not just SPAs, and it’s usually wrong.

The first reason developers notice that creating one big SPA is an issue is when the initial page load starts taking way too long. After all, everything loads up front when the user first opens the application. Then they learn about Webpack bundle splitting and lazy loading, and the problem seems to go away. Depending on the technology used and the application architecture, lazy loading can be really simple. Read this blog post for an example of how to do so in Angular.

The real reason one large SPA is wrong is coupling. If you build your whole application as one SPA, then everything is coupled together. Making a small change in one obscure part of your application can have a global effect. With separate pages, the chance of an accidental global change is much smaller: it can only happen if the code is actually loaded on another page.

 

So should we stop building SPAs?

No, absolutely not!

But we should stop making them as big as some people are doing.

Instead, split your application into functional parts. Turn each functional part into its own SPA and use normal browser navigation to navigate between them. These are typically called mini SPAs, as many of them together form the complete application.

 

How big should a mini SPA be?

That is an easy one: as big as they need to be and no bigger than that.

Which is still kind of vague of course.

In general, they should be about one piece of functionality. Or, in case you are into DDD, one bounded context is the right size.

What that means in practice is that you split your application into functional parts. Suppose for a moment you are building a large sales application. You will probably need a customer management part, an article management part, an order entry part and, finally, an order fulfillment part. In reality there will be more, but as an example these four are enough. They are quite distinct, each has its own job to do, and end users will not likely switch between them all the time to get their job done.

Sure, there is overlap. Both order entry and order fulfillment will need customer and article data. But the shape and form of that data will most likely be different, and it will certainly be read-only. So while they depend on data from the other parts, they don’t need the actual code from the other modules to function.


This makes for a great split. Create four different mini SPAs and add a shell application so everything feels like a single application to the end user. Navigation inside a mini SPA is really fast; it is a SPA after all. Navigation between different modules is a bit slower, but far less frequent, and HTTP caching or, in newer browsers, a ServiceWorker will make even that quite fast.

Enjoy building your mini SPAs!

Lazy loading and Angular routing

One problem with a Single Page Application (SPA) is that the entire application might be loaded when the user first starts it, even though most of it is never used. Sure, navigation is nice and fast after the initial load, but that initial load becomes slower as the application grows. That might be worth it if the user navigates to all, or most, of the routes. But when most routes are never activated it is just a big waste. Fortunately this is easy to fix with lazy loading, and lazy loading is really easy to set up in Angular.

By the way, this blog post is about Angular 2, but as we are not supposed to use the 2 anymore, it’s just Angular.

 

Eager loading of routes

Below is an example of eager loading with Angular routes. Notice there is no HTTP traffic at all when I click the different links and different routes are activated. Of course this is a really simple example; in a real application you would most likely be fetching some data using an AJAX call.


 

So what code did I use here?

Most of it was just generated with the Angular CLI. You can see all the code here, but it boils down to the main AppModule with two sub modules, Mod1Module and Mod2Module, both generated with the CLI, each with a single route and component added. Mod1Module is about books and Mod2Module is about authors. Below is Mod1Module as an example.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { Routes, RouterModule } from '@angular/router';

import { BooksComponent } from './books/books.component';

const routes: Routes = [
  { path: 'books', component: BooksComponent },
];

@NgModule({
  imports: [
    CommonModule,
    RouterModule.forChild(routes)
  ],
  declarations: [BooksComponent],
})
export class Mod1Module { }

 

The AppModule is also quite straightforward. It adds the two sub modules and initializes the routing module. As both routes are defined in the sub modules, the routing table is actually empty here.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
import { Routes, RouterModule } from '@angular/router';

import { AppComponent } from './app.component';

import { Mod1Module } from './mod1/mod1.module';
import { Mod2Module } from './mod2/mod2.module';

const routes: Routes = [];
const routerModule = RouterModule.forRoot(routes);

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    routerModule,
    Mod1Module,
    Mod2Module
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

The view for the main AppComponent is also quite simple. There are two anchor tags using the RouterLink directive to let the user navigate. Finally, there is the RouterOutlet directive to show the active route component’s template.

<h1>
  {{title}}
</h1>

<nav>
  <a routerLink="/books">Books</a>
  <a routerLink="/authors">Authors</a>
</nav>

<hr />

<router-outlet>
</router-outlet>

 

Running this with ng serve works, and as you can see above, the application loads everything at startup; no additional requests are made when we click the route links.

 

Switching to lazy loading

It turns out that switching to lazy loading is really simple and just requires a few small changes.

We want the sub modules to be loaded only when needed, so the main routes need to be defined in AppModule instead. In both Mod1Module and Mod2Module we change the route path to an empty string; that is the only change here.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { Routes, RouterModule } from '@angular/router';

import { BooksComponent } from './books/books.component';

const routes: Routes = [
  { path: '', component: BooksComponent },
];

@NgModule({
  imports: [
    CommonModule,
    RouterModule.forChild(routes)
  ],
  declarations: [BooksComponent],
})
export class Mod1Module { }

 

The majority of the work needs to be done in AppModule. Still, it is just a few lines, and most of it is deleting code.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
import { Routes, RouterModule } from '@angular/router';

import { AppComponent } from './app.component';

const routes: Routes = [
  { path: 'books', loadChildren: './mod1/mod1.module#Mod1Module' },
  { path: 'authors', loadChildren: './mod2/mod2.module#Mod2Module' },
];
const routerModule = RouterModule.forRoot(routes);

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    routerModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

 

Note that both the TypeScript imports and the module imports for Mod1Module and Mod2Module are gone. New are the two routes where books and authors are defined. Instead of specifying the component to load, there is a string ‘./mod1/mod1.module#Mod1Module’ which points to the module file and the module class to load. This is a string because we don’t want to import the type; the modules should stay separate and only be loaded when needed. The documentation here is still a bit sparse, as it points to the LoadChildren section, which points back to Routes. Oh well, the documentation is open source, so I guess I should open a pull request.

Anyway, with these small changes both Mod1Module and Mod2Module are lazily loaded when first needed instead of on the initial page load. Below you can see 0.chunk.js and 1.chunk.js being loaded the first time I click a link to the respective module.


You can see the complete commit with all the changes here, or the GitHub repository with the final code here.

Enjoy!

Angular 2 and HTTP Request headers

Angular 2 does a lot of awesome work for you. But it doesn’t do everything you might expect. One thing it doesn’t do is set a lot of HTTP request headers.

That makes sense, as Angular doesn’t know what you are doing with a request, so you really need to do it yourself. But most requests made with the Http service are going to be for JSON-serialized data. The default request, however, adds no headers to let the server know this. The result is that different browsers make very different requests.

This is an example of an HTTP request made by Chrome:

GET http://localhost:4200/movies.json HTTP/1.1
Host: localhost:4200
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36
Accept: */*
Referer: http://localhost:4200/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8,nl;q=0.6
If-None-Match: W/"3a4c7-1590757b458"
If-Modified-Since: Fri, 16 Dec 2016 11:15:05 GMT

 

And this is the same request made with Firefox:

GET http://localhost:4200/movies.json HTTP/1.1
Host: localhost:4200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://localhost:4200/
Connection: keep-alive
If-Modified-Since: Fri, 16 Dec 2016 11:15:05 GMT
If-None-Match: W/"3a4c7-1590757b458"
Cache-Control: max-age=0

 

Notice the difference between the two requests, especially the Accept header, where Chrome claims to accept anything while Firefox indicates a preference for HTML or XML.
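To see why that difference matters on the server, here is a minimal sketch of how a server might pick a representation from the Accept header; preferredType is a made-up helper for illustration, real servers use a full content-negotiation library:

```javascript
// Sketch: pick the best available content type for an Accept header.
// Parses "type;q=0.9" parts, sorts by quality, returns the first match
// (or the server's first choice for a "*/*" wildcard).
function preferredType(accept, available) {
  const wanted = accept.split(',').map(part => {
    const [type, ...params] = part.trim().split(';');
    const q = params.map(p => p.trim())
      .filter(p => p.startsWith('q='))
      .map(p => Number(p.slice(2)))[0];
    return { type, q: q === undefined ? 1 : q };
  }).sort((a, b) => b.q - a.q);

  for (const { type } of wanted) {
    if (available.includes(type)) return type;
    if (type === '*/*') return available[0];
  }
  return null;
}

// Firefox's default Accept header prefers HTML over JSON:
console.log(preferredType(
  'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  ['application/json', 'text/html']
)); // → 'text/html'

// Chrome's "Accept: */*" lets the server pick its first choice:
console.log(preferredType('*/*', ['application/json', 'text/html'])); // → 'application/json'
```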

 

Adding HTTP headers

Setting HTTP headers for a request is not hard. When calling the Http.get() function you can specify the headers and you are all good.

@Injectable()
export class MoviesService {

  constructor(private http: Http) { }

  getMovies(): Observable<Movie[]> {
    var options = new RequestOptions({
      headers: new Headers({
        'Accept': 'application/json'
      })
    });

    return this.http
      .get('/movies.json', options)
      .map(resp => resp.json());
  }
}

 

However, doing this in every service that makes an HTTP request is rather tedious and easy to forget, so there must be an easier way.

 

BaseRequestOptions and dependency injection to the rescue

 

As it happens, Angular uses the BaseRequestOptions type as the default for all request options. So if we can change this default, we are good to go. And that is exactly what dependency injection lets us do.

First we define our own default request options class with whatever settings we would like. In this case I am just adding two headers.

@Injectable()
export class DefaultRequestOptions extends BaseRequestOptions {
  headers = new Headers({
    'Accept': 'application/json',
    'X-Requested-By':'Angular 2',
  });
}

 

Next we configure the DI provider to use our class instead of the default.

@NgModule({
  // Other settings
  providers: [
    MoviesService,
    {provide: RequestOptions, useClass: DefaultRequestOptions }
  ],
})
export class AppModule { }

 

And we are good to go, with every Http request now using our two default headers. Here is the example from Chrome.

GET http://localhost:4200/movies.json HTTP/1.1
Host: localhost:4200
Connection: keep-alive
Cache-Control: max-age=0
Accept: application/json
X-Requested-By: Angular 2
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36
Referer: http://localhost:4200/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8,nl;q=0.6
If-None-Match: W/"3a4c7-1590757b458"
If-Modified-Since: Fri, 16 Dec 2016 11:15:05 GMT

Adding dynamic Http headers

These headers are great, but there is one limitation: they are always the same. With some headers, for example for authentication, you might want to control the actual values at request time. It turns out this isn’t very hard either. Each request merges the options, and thus the headers, from the current request with the default request options using the merge() function. As we can override this function, we can add whatever dynamic header we want.

@Injectable()
export class DefaultRequestOptions extends BaseRequestOptions {
  headers = new Headers({
    'Accept': 'application/json',
    'X-Requested-By':'Angular 2',
  });
 
  merge(options?: RequestOptionsArgs): RequestOptions {
    var newOptions = super.merge(options);
    newOptions.headers.set('X-Requested-At', new Date().toISOString());
    return newOptions;
  }
}

 

Sweet!

Creating a React based Chrome extension

Creating Chrome extensions is quite easy. In fact, it is so easy I found it hard to believe how quickly I had a sample up and running. And given how useful Chrome extensions can be, I wondered how hard it would be to create one using React. It turns out that it is easy.

 

Creating a basic React app

To get started I created a React app using create-react-app. This gave me a working React application. To turn an HTML page with JavaScript into a Chrome extension you need to add a manifest.json file. The Chrome developer Getting Started tutorial has a nice template to start from. As this file needs to be in the root of the folder you deploy as an extension, I added it to the public folder. As it needs a PNG to display, I downloaded logo_small.png from the React GitHub repository and also added that to the public folder. After updating the page to open to index.html I ended up with the following manifest.json:

{
  "manifest_version": 2,

  "name": "Demo React-Chrome extension",
  "description": "This extension shows how to run a React app as a Chrome extension",
  "version": "1.0",

  "browser_action": {
    "default_icon": "logo_small.png",
    "default_popup": "index.html"
  }
}

 

This is already enough, but it is helpful to give the window a specific size. If you don’t, the window will be as small as possible, which isn’t nice in this case. Again, this is easy to do by adding a height and width to the body tag in index.css.

body {
  margin: 0;
  padding: 0;
  font-family: sans-serif;

  /* Added body dimensions */
  height: 300px;
  width: 250px;
}

A Chrome extension needs to consist of static files that get packed up. Again easy to do: just run npm run build and the resulting build folder contains exactly what you need.

Testing the extension locally

Doing a local test with the extension is easy. Open up Chrome and select Settings/Extensions, or just navigate to chrome://extensions/. At the top right there is a Developer mode checkbox; make sure it’s checked.


Next you can drag the build folder into the extensions window. The extension should now appear in the top bar of Chrome, where you should see the React icon.


When you click the React icon the extension starts. You will see the default create-react-app Welcome to React screen, complete with its animation, just like you would see it in the main browser window.


How cool is that?

 

 

Deploying to the Chrome web store

Publishing the extension to the Chrome Web Store is easy as well. You will need to create an account for a small one-time fee. Once your account is set up, just follow these steps and you will be done in no time. You can install and try this demo extension here.

 

The complete source code for the plugin can be found here.

Bring your own React

React is a great UI library from Facebook that works well for creating fast browser-based user interfaces. Working with it is quite easy; once you learn the basics you can be quite productive. But to create great applications it helps to understand some of the internals of React. That is where things become a bit more complex. The React source code is not easy to understand: there are a lot of performance optimizations that make the code harder to read, and a lot of browser-related issues, small differences that add a lot more complexity. Another reason is that React is not just for browser-based applications and their DOM; it also targets other platforms like React Native.

When going through the original source code is really hard, as it is with React, but understanding the choices made is beneficial, there is a good alternative: create a simplified implementation of your own. The goal is not to create a new UI library; there are simpler alternatives out there like Preact. The goal is just a teaching tool to better understand React.

 

To JSX or not to JSX?

Using JSX to write React code is not required, but it is the de facto standard way of writing React. It makes code a lot easier to read compared to the plain JavaScript style. Fortunately, JSX is just a format that can be used with different UI libraries. Transpiling JSX into JavaScript isn’t even done by the React team these days; that is left up to Babel, which is pretty much the de facto standard for transpiling ECMAScript 2015 and JSX.

The way to do this is in fact quite simple. There is a Babel plugin called transform-react-jsx. This is the normal way to transpile JSX code. By default it turns JSX markup elements into React.createElement() functions. Yet by specifying a pragma option you can make it output anything you want. In this case I am going to replace React.createElement() with my own ByoReact.createElement() using the following .babelrc file.

{
  "presets": ["es2015", "stage-0", "react"],
  "plugins": [
    ["transform-react-jsx", {
      "pragma": "ByoReact.createElement"
    }]
  ]
}

This will allow me to use any new ECMAScript feature and transpile JSX code to my own library.
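To make the effect of the pragma concrete, here is a sketch of the call shape Babel emits for a small piece of JSX. The ByoReact stand-in below is hypothetical, just enough to show the structure of the generated calls; the real createElement is implemented later on.

```javascript
// Hypothetical stand-in for ByoReact, only to show the call shape
// Babel emits; the real createElement is implemented further down.
const ByoReact = {
  createElement: (tag, props, ...children) => ({ tag, props, children }),
};

// JSX source:   const el = <div id="greeting">Hello <b>world</b></div>;
// After Babel with the pragma "ByoReact.createElement" this becomes:
const el = ByoReact.createElement(
  'div',
  { id: 'greeting' },
  'Hello ',
  ByoReact.createElement('b', null, 'world')
);

console.log(el.tag);             // 'div'
console.log(el.children.length); // 2
```

Note how children, whether text or nested elements, simply become trailing arguments; that is why createElement below takes a rest parameter.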

 

The Hello World of Bring Your Own React

Most development starts with Hello World and there is no reason not to start there. The first version of the code is just going to render the following:

[image]

Not impressive, but we have to start somewhere :-)

The code to render this Hello World is as follows:

import ByoReactDOM from '../../src/bring-your-own-react-dom';
import ByoReact from '../../src/bring-your-own-react'; // eslint-disable-line no-unused-vars

class HelloWorld extends ByoReact.Component {
  render() {
    return <div>Hello world</div>;
  }
}

ByoReactDOM.render(<HelloWorld />,
  document.getElementById('app'));

Doing the minimal required mostly means implementing ByoReact.createElement(). React itself uses a virtual DOM, but in this case I am just going to stick with the real browser DOM. This will change soon enough, but it is a nice start. The function is passed three parameters: the tag to render, which can be an HTML tag name or a child component; the properties, something we will ignore for now; and the list of child components. These child components can be either a string literal or another component.

The code for this is quite simple and always returns an HTML element:

const createElement = (tag, props, ...children) => {
  let result;
  if (typeof tag === 'string') {
    // An HTML tag name: create the matching DOM element.
    result = document.createElement(tag);
  } else {
    // A component class: instantiate it and render it.
    const component = new tag(); // eslint-disable-line new-cap
    result = component.render();
  }

  for (const child of children) {
    if (typeof child === 'string') {
      const textNode = document.createTextNode(child);
      result.appendChild(textNode);
    } else {
      result.appendChild(child);
    }
  }

  return result;
};

The base class Component is there, but as it doesn’t contain any functionality yet, there is not much to see. Again, this will change as we get further along.
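For completeness, the whole base class at this stage can be sketched as an empty class; this is a minimal sketch matching the description above, not code copied from the repository.

```javascript
// The entire ByoReact.Component base class at this point: an empty class
// whose only job is to give components something to extend. Later versions
// will grow state handling and lifecycle methods here.
class Component {}

// A component extending it works purely through its own render() method.
class Greeting extends Component {
  render() {
    return 'Hello world'; // stand-in; the real render() returns DOM nodes
  }
}

console.log(new Greeting() instanceof Component); // true
console.log(new Greeting().render());             // 'Hello world'
```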

This leaves rendering the <HelloWorld /> component in the browser using ByoReactDOM.render(). Again, there is little to this yet, as ByoReact.createElement() already returns a DOM object.

const render = (reactElement, domContainerNode) => {
  domContainerNode.innerHTML = reactElement.outerHTML; // eslint-disable-line no-param-reassign
};
 
When we switch to a virtual DOM and to updating existing UI components, this will become a lot more complex. That change will basically trigger the reconciliation process: the complex logic that determines the difference between the previous and next DOM and applies it as efficiently as possible.
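To give a feel for what reconciliation involves, here is a deliberately naive diff over plain object trees. This is an illustrative sketch only, not the algorithm React (or ByoReact) actually uses; the tree shape matches the { tag, children } objects used earlier.

```javascript
// Naive tree diff: walk two trees in parallel and record which nodes
// would have to be replaced. Real reconciliation is far more refined,
// but the core idea is the same kind of comparison.
const diff = (prev, next, path = 'root') => {
  if (prev === next) return [];
  if (typeof prev === 'string' || typeof next === 'string') {
    // Text nodes: replace only when the text actually differs.
    return prev === next ? [] : [{ path, op: 'replace' }];
  }
  if (prev.tag !== next.tag) return [{ path, op: 'replace' }];
  const changes = [];
  const length = Math.max(prev.children.length, next.children.length);
  for (let i = 0; i < length; i += 1) {
    changes.push(...diff(prev.children[i] ?? '', next.children[i] ?? '', `${path}/${i}`));
  }
  return changes;
};

const before = { tag: 'div', children: ['Hello world'] };
const after = { tag: 'div', children: ['Hello there'] };
console.log(diff(before, after)); // one change: the text node at root/0
```

The hard part in a real library is not finding the differences but applying them with the fewest possible DOM mutations, which is exactly why this area of the React source is so complex.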
 
You can browse the complete source code, including unit tests, here.
 
Enjoy :-)

Introducing the React Tutorial

React is hot, and it seems that almost every front-end web developer wants a piece of it. Not surprising, perhaps, because Facebook created and open sourced React, and React powers not only the Facebook website but also many others like Netflix and Airbnb.

Because I have been using and teaching React for the last year I decided to try doing things a bit bigger. If you want to learn React I want to help you with a series of online video tutorials. Each video covers one part and the whole series will give you a deep understanding of React. Of course this takes quite some work so I decided to start a Kickstarter campaign to fund the whole project. You can find the Kickstarter project here.

If you become one of the backers you can get early access to the videos if you want to. All you need to do is choose the appropriate backer level. Regardless of the level at which you back me, you will get access to the videos before those who buy after the Kickstarter campaign finishes. And not just earlier access; you will also pay less :-).

http://bit.ly/the-react-tutorial

Turbocharging Docker build

Building a Docker image can take a bit of time, depending on what you have to do. Especially when you have to do something like a DNU restore, DotNet restore, NPM install or NuGet restore, builds can become slow because packages might have to be downloaded from the internet.

Take the following Dockerfile which does a DNU Restore.

FROM microsoft/aspnet:1.0.0-rc1-update1-coreclr

MAINTAINER Maurice de Beijer <maurice.de.beijer@gmail.com>

COPY . ./app

WORKDIR ./app
RUN dnu restore

EXPOSE 5000

CMD ["--server.urls", "http://*:5000"]
ENTRYPOINT ["dnx", "web"]

Running the Docker build multiple times without any changes is quite fast. To time it I am using the command:

time docker build -t dotned .

This reports it takes between 1.3 and 1.5 seconds on my aging laptop. Not too bad really.
 
Unfortunately this changes quite a bit when I make a change to the source code of the application. Just adding some insignificant whitespace slows the build from 1.5 seconds to 58 seconds, which is quite a while to wait before being able to run the container.
 
The reason for this slowdown is that Docker has to do a lot more work. When you build a Docker image, Docker creates a layer for each command executed, and each layer is cached so it can be reused on the next build. But if a cached layer depends on another layer that has changed, it can’t be reused anymore. This means that once the source code changes, the result of the COPY command is a different layer, and the dnu restore layer has to be recreated, which takes a long time.
 
A much faster approach is to copy just the project.json file so we can do a dnu restore before copying the rest of the source code. With this approach, Docker builds are down to quite a reasonable 3.3 seconds and only take a long time when there is a change to the project.json file, something that should not happen very often. The functionally identical but much faster Dockerfile looks like this:
 
FROM microsoft/aspnet:1.0.0-rc1-update1-coreclr

MAINTAINER Maurice de Beijer <maurice.de.beijer@gmail.com>

COPY ./project.json ./app/

WORKDIR ./app
RUN dnu restore

COPY . ./app

EXPOSE 5000

CMD ["--server.urls", "http://*:5000"]
ENTRYPOINT ["dnx", "web"]

Enjoy :-)