5 steps to targeting multiple .NET frameworks

When designing an API or library, we aim for maximum coverage of available .NET frameworks so that the maximum number of clients can adopt our APIs.  The key challenge in such scenarios is keeping the code clean while managing multiple versions of code, NuGet packages and builds.

This article outlines a quick and easy way to manage a single code base that targets multiple .NET framework versions.  I’ve used the same concept in KonfDB.

Step 1 – Visual Studio Project Configuration


First, we need to use Visual Studio to create multiple build configurations.  I prefer two configurations per .NET framework version:

  • .NET 4.0 — DebugNET40, ReleaseNET40
  • .NET 4.5 — DebugNET45, ReleaseNET45

When adding these configurations, clone them from Debug and Release and make sure ‘Create New Project Configurations’ is selected.

This will modify your solution (.sln) file and Project (.csproj) files.

If certain projects do not support both versions, you can uncheck them before clicking the Close button.  This is usually done when your solution has two parts – an API and a Server – and you want the API to target multiple frameworks while the Server runs on a particular version of .NET.
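For reference, once the wizard completes, the solution (.sln) file gains configuration entries along these lines (project-level sections and GUIDs omitted; the names will match whatever configurations you created):

```
GlobalSection(SolutionConfigurationPlatforms) = preSolution
	DebugNET40|Any CPU = DebugNET40|Any CPU
	ReleaseNET40|Any CPU = ReleaseNET40|Any CPU
	DebugNET45|Any CPU = DebugNET45|Any CPU
	ReleaseNET45|Any CPU = ReleaseNET45|Any CPU
EndGlobalSection
```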

Step 2 – Framework Targeting in Projects


There are two types of changes required in the Project (.csproj) files to manage multiple .NET versions.

Every project has a default configuration.  This is usually the lowest or base configuration, and it is defined by XML properties like

<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>


Change this to

<Configuration Condition=" '$(Configuration)' == '' ">DebugNET40</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>


Make sure that all projects in the solution have the same default Configuration and TargetFrameworkVersion.

When we added multiple configurations to our solution, one PropertyGroup per configuration was added to each Project (.csproj) file.  It appears something like,

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'DebugNET40|AnyCPU' ">


We need to add or modify 3 lines in each of these PropertyGroup tags to set OutputPath, TargetFrameworkVersion and DefineConstants.  The values below are representative – adjust the output paths to your own layout, and drop DEBUG from DefineConstants in the Release configurations.

For .NET 4.0:

<OutputPath>bin\$(Configuration)\</OutputPath>
<TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
<DefineConstants>TRACE;DEBUG;NET40</DefineConstants>

For .NET 4.5:

<OutputPath>bin\$(Configuration)\</OutputPath>
<TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
<DefineConstants>TRACE;DEBUG;NET45</DefineConstants>
We will use these settings later in the article.

Step 3 – References Targeting in Projects


Our dependent libraries may have different builds for different versions of .NET. A classic example is Newtonsoft.Json, which ships different assemblies for .NET 4.0 and .NET 4.5. So we may require framework-dependent references – whether standard references or NuGet references.

When we are using standard references, we can organize our libraries in framework-specific folders and alter the reference to look like,

<Reference Include="Some.Assembly">
  <HintPath Condition=" '$(TargetFrameworkVersion)' == 'v4.0' ">..\Libraries\NET40\Some.Assembly.dll</HintPath>
  <HintPath Condition=" '$(TargetFrameworkVersion)' == 'v4.5' ">..\Libraries\NET45\Some.Assembly.dll</HintPath>
</Reference>


To reference NuGet packages, we can add conditions to the references as shown below

<Reference Include="Newtonsoft.Json, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL"
           Condition=" '$(TargetFrameworkVersion)' == 'v4.5' " />
<Reference Include="Newtonsoft.Json, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL"
           Condition=" '$(TargetFrameworkVersion)' == 'v4.0' " />


When we now do a batch build in Visual Studio, the solution should compile without errors.

Step 4 – Managing Clean Code with multiple frameworks


There are 2 ways to manage our code with different versions of .NET.

Bridging the gap of .NET 4.5.x in .NET 4.0


Let’s assume we are creating an archival process where we want to zip the log files and delete the log files after zipping them. If we build this functionality with .NET 4.5 framework, we can use the ZipArchive class (in System.IO.Compression) in .NET 4.5 but there is no such class in .NET 4.0. In such cases, we should go for interface driven programming and define 2 implementations – one for .NET 4.0 and one for .NET 4.5.

These two implementations cannot co-exist in the same build as they would cause compilation errors. To avoid this, we edit the Project (.csproj) file:

<Compile Include="LogFileMaintenance40.cs" Condition=" '$(TargetFrameworkVersion)' == 'v4.0' " />
<Compile Include="LogFileMaintenance45.cs" Condition=" '$(TargetFrameworkVersion)' == 'v4.5' " />


Both files can have the same class name because, at any given time, only one of them is compiled.
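As a sketch of what the two files might contain (the interface and member names here are illustrative, not taken from KonfDB), both define the same class behind a shared contract:

```csharp
// Shared contract, compiled for both frameworks
public interface ILogFileMaintenance
{
    void ArchiveLogs(string logFolder);
}

// LogFileMaintenance45.cs - included only when TargetFrameworkVersion is v4.5
public class LogFileMaintenance : ILogFileMaintenance
{
    public void ArchiveLogs(string logFolder)
    {
        // Uses System.IO.Compression.ZipArchive, available from .NET 4.5
    }
}

// LogFileMaintenance40.cs declares an identical LogFileMaintenance class
// that zips the files with a library available on .NET 4.0.
```

Consumers depend only on ILogFileMaintenance, so the rest of the code base never knows which implementation was compiled in.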

The unclean way


The unclean way is to use DefineConstants to differentiate between framework versions. Earlier, in the project configuration, we changed DefineConstants to include NET40 and NET45. We can use these constants as pre-processor directives to include framework-specific code:

#if NET40
	// .NET 4.0 specific code
#endif
#if NET45
	// .NET 4.5 specific code
#endif


This approach should be adopted only when the difference in functionality is minor, as such code is very difficult to debug.

Step 5 – Build without Visual Studio


While Visual Studio allows us to trigger a build for any configuration by manually selecting it from the dropdown, we can also create a batch file to build our solution against different .NET frameworks. This batch file can be used with any build system like TFS, Jenkins, TeamCity, etc.

REM Build Solution
set PATH_SOURCE_SLN="%cd%\OurSolution.sln"
if [%1]==[] (
    REM No configuration passed - build both framework targets
    msbuild %PATH_SOURCE_SLN% /p:Configuration=ReleaseNET40
    msbuild %PATH_SOURCE_SLN% /p:Configuration=ReleaseNET45
) else (
    REM Build only the configuration passed as the first argument
    msbuild %PATH_SOURCE_SLN% /p:Configuration=%1
)

This 5-step process allows us to develop our solution targeting multiple .NET frameworks and to narrow the build down to a particular .NET framework when needed.




Event: Windows 10 Dev Readiness Webcasts

Coming soon is a series of live webcasts that deliver first-hand guidance on how you can leverage the new Windows 10 development model. The webcasts will be presented and moderated by Microsoft MVPs around the world at no charge and are a great opportunity for you to not only learn the foundations of Universal App Development in Windows 10, but also to connect with some of the top experts in your country and/or language. Bring your Windows Store app development questions and have them answered live, by the experts, and learn how to take advantage of the great opportunities ahead in the Universal Windows Platform.

  • Each webcast will deliver the same content in different countries from June 8 – 12.
  • They will last from one to three hours, depending on the amount of community interaction.


View the agenda and register for these free webcasts at: http://ln.ganshani.com/win10mvp2015

I’m also glad to have some of my SEA MVP friends Walter Wong and Tim Chew present a session on Universal Windows Platform.

5 steps to create Ubuntu Hyper V Image

For quite some time now, I’ve been trying .NET 2015 on Azure Virtual Machines – Windows Server and Ubuntu and have been trying my hands at Shell Scripts. I’ve also been trying IoT using Linux on Raspberry Pi, Arduino and Intel Galileo Gen 2 boards.

To avoid running out of Azure credits, this time I thought of creating a Hyper-V based virtual machine of Ubuntu on my laptop that could run in parallel with the Windows OS. This article outlines 5 basic steps to create an Ubuntu VM on your laptop that connects to the Internet. Once I have set up Ubuntu, I can use this VM to explore more of ASP.NET vNext.

Step 1: Enable Hyper-V on your Windows 8.1 / 10 laptop


First, ensure that hardware virtualization support is turned on in the BIOS settings, then save the settings and reboot the machine.

At the Start screen, type ‘turn windows features on or off’ and select that item. Select and enable Hyper-V.

If Hyper-V was not previously enabled, reboot the machine to apply the change.


Step 2: Create a Virtual Switch for your Wireless Network


In Hyper-V Manager, select ‘Virtual Switch Manager’ in the Action pane. Ensure that you have at least one Virtual Switch that enables ‘External’ connectivity

Step 3: Download Ubuntu ISO image and Create New VHDX


Download the latest Ubuntu ISO image – Server or Desktop – from http://www.ubuntu.com/download and store it on your local disk.

Open Hyper-V Manager and select “New > Virtual Machine”. In the wizard, provide a friendly name like “Ubuntu VM” and select “Generation 2”. Assign at least 512 MB of memory and check the box “Use Dynamic Memory for this Virtual Machine.”

In Configure Networking step, select the same Virtual Switch that has external network connectivity (configured in step 2)

In Connect Virtual Hard Disk, ensure that you have allocated at least 10GB of disk space. In the Installation Options, select the option “Install the operating system from bootable image file” and select the ISO file downloaded from Ubuntu.com and click Finish.

Step 4: Disable Secure Boot


In Hyper-V Manager, select the “Ubuntu VM” and click on Settings in the Action pane and uncheck ‘Enable Secure Boot’
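If you prefer scripting, the wizard steps above (including this Secure Boot change) can also be done with Hyper-V’s PowerShell cmdlets. The VM name, sizes and switch name below mirror the choices made earlier; the VHDX and ISO paths are placeholders you should adjust:

```powershell
# Create a Generation 2 VM with dynamic memory and a 10 GB disk
New-VM -Name "Ubuntu VM" -Generation 2 -MemoryStartupBytes 512MB `
       -NewVHDPath "C:\VMs\UbuntuVM.vhdx" -NewVHDSizeBytes 10GB `
       -SwitchName "External Switch"
Set-VM -Name "Ubuntu VM" -DynamicMemory

# Attach the downloaded ISO and disable Secure Boot (Step 4)
Add-VMDvdDrive -VMName "Ubuntu VM" -Path "C:\ISO\ubuntu.iso"
Set-VMFirmware -VMName "Ubuntu VM" -EnableSecureBoot Off

Start-VM -Name "Ubuntu VM"
```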

Step 5: Start Ubuntu VM


In Hyper-V Manager, right-click “Ubuntu VM”, click Start and then Connect. This will start Ubuntu on Hyper-V.

Select Install Ubuntu and press ENTER and wait for some time.

Once this wizard completes, you will have a working version of Ubuntu on your machine, running in parallel with Windows 8.1 / 10

Getting Started with IaaS and Open Source on Azure

As developers, we often spend time with our favourite developer tools, design patterns and deployment practices – and we also brag about DevOps.  When it comes to developing for the cloud, knowing development practices isn’t sufficient.  For green-field projects, we can definitely adopt the PaaS model and leverage the best of the cloud world. However, when we want to leverage the cloud for existing applications (with little or no code change), knowledge of IaaS is essential.

Three fundamental courses on MVA are key to understanding and exploring IaaS:

  • Fundamentals of IaaS
    As the name suggests, this course takes a first dig at managing servers on Azure and some of the management practices

These courses provide excellent insight into how infrastructure can best be managed on Azure!

Microphone detection in Arduino / Galileo (IoT) using VC++

After setting up Intel Galileo in our last post, let’s get going with the first sensor – a microphone. I had to refresh some of the basics I learnt during my bachelor studies – yes, I did my undergraduate engineering in Automation and I’ve played with different microprocessors, controllers and sensors. So this post is about voice detection using a microphone sensor, and pulsing an LED when the sound crosses a few decibels.

Basics first, the wiring


You need a Galileo board and an Arduino-compatible shield that helps you wire your sensors in a clean way. With the shield, your board will look like

Now you need two different Grove sensors for this. Ideally, you can use sensors of any brand with any IoT device. All you need to remember is that every sensor will have at least two of the following pins:

  • Voltage – Often abbreviated as V or VCC
  • Ground – Often abbreviated as GND
  • Data Pins – Often abbreviated as Dx (where x is a number)
  • Not connected Pins – Often abbreviated as NC

A point to remember is that you always connect V/VCC to another V/VCC and GND to another GND on any board. If you connect them otherwise, your circuit will not be complete (and current will not flow).

When you are using an “Analog” sensor that will provide you some data, you will have a pin that says OUT. This OUT pin will have a voltage signal that will represent the signal captured by your sensor. This may not make perfect sense at first go. So let us go a bit deeper. There are 2 types of sensors – Analog ones that provide signals back in Voltage form and Digital ones that provide signals in bit/byte form. A weighing scale uses a sensor that can be analog or digital.

Any signal measured in analog format will require some calibration i.e. a conversion mechanism to digital or the other way.

Microphone Sensor and LED kit


A microphone sensor has 4 pins – VCC, GND, NC and OUT. You get the sensor signal as a voltage on the OUT pin.

An LED kit has 4 pins as well – VCC, GND, NC and SIG. You can set 5V on the SIG pin to light up the LED and 0V to turn it off.

So essentially we plan to feed the OUT signal of the microphone into the SIG pin of the LED kit. Ideally, you do not need a powerful processor like Galileo for such trivial work – you could do this with a few electronics fundamentals. But considering that you want to build something more sophisticated and this is the first step, let’s go through the rest of the tutorial.

Setting up the sensor and the kit


I’ve set up the microphone sensor on A0 (as INPUT) and the LED kit on D3 (as OUTPUT) on the shield. You can use any other ports of your choice. Next, open VS 2013 and create a new project of type Visual C++ > Windows for IoT.

In main.cpp, paste the code below:

#include "stdafx.h"
#include "arduino.h"

#define MICROPHONE A0
#define LED D3
#define THRESHOLD_VALUE 450

void pins_init()
{
	pinMode(LED, OUTPUT);
}

void turnOnLED()
{
	digitalWrite(LED, HIGH);
}

void turnOffLED()
{
	digitalWrite(LED, LOW);
}

int _tmain(int argc, _TCHAR* argv[])
{
	return RunArduinoSketch();
}

void setup()
{
	pins_init();
}

void loop()
{
	// Read the microphone level as a digital value
	int sensorValue = analogRead(MICROPHONE);

	if (sensorValue > THRESHOLD_VALUE)
	{
		Log("OK, got something worth listening\n");
		turnOnLED();
		delay(2000);   // keep the LED on for 2 seconds
		turnOffLED();
	}
}


Understanding the Code



#define THRESHOLD_VALUE 450

The statement above defines a digital value for the sound threshold. A microphone captures an analog signal (0-5V) which is provided to your Galileo in the form of a digital value (0-1024). This means 0v = 0 in digital and 5v = 1024 in digital. To eliminate environmental sounds, I prefer the threshold to be at least 33%, i.e. 2v. So a digital value of 450 converts to 2.19v (= 450 * 5 / 1024). At my place, I found that environmental sounds were contributing a value of 291 (i.e. 1.42v).
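The conversion arithmetic above can be captured in a small helper, sketched here in C++ alongside the board code (the function name is mine; it uses the article’s 1024 divisor):

```cpp
#include <cassert>

// Convert a 10-bit ADC reading to volts on a 5V reference,
// using the 1024 divisor from the calculation above.
double adcToVolts(int reading)
{
    return reading * 5.0 / 1024;
}
```

For example, adcToVolts(450) gives roughly 2.2v, matching the threshold calculation above.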

The next important bits are the port definitions,

#define MICROPHONE A0
#define LED D3

and the pin setup,

pinMode(LED, OUTPUT);

Here, we have directed that we will take input from A0 and output the data to D3. Now let’s understand the core of our program – the loop function

We are reading the analog value of microphone sensor using below code which converts the analog value into digital number

int sensorValue = analogRead(MICROPHONE);

When this value goes beyond the defined threshold, you want to send a 5v to LED (by sending a HIGH bit) using code

digitalWrite(LED, HIGH);

When you play some loud music, you will see the LED light up for 2 seconds (delay = 2000ms) and then turn off.

When you run/execute this project from Visual Studio using Remote Debugger, VS will deploy this code to your Galileo device. You will be prompted for your Galileo user name and password.

You can say something aloud or play some video on YouTube to test this functionality.

This code is also available on GitHub at: https://github.com/punitganshani/ganshani/tree/master/Samples/IntelGalileo/GroveMic


Getting Started with Windows on Intel Galileo (IoT)

At the //Build 2014 conference, Microsoft demonstrated a version of Windows running on the Intel Galileo board. It was not the first time Microsoft had showcased Windows running on smaller devices. They had around 8 different versions of Windows Embedded that have run on POS terminals since the Windows 3.1 release, and most retail POS terminals, arcade games, set-top boxes and ATMs across the globe, even today, run on Windows Embedded. What’s more, the later versions also allowed running applications developed using .NET 3.5.

So what has changed with Intel Galileo? It’s the scale, licensing and availability.

Microsoft has shipped a pared-down version of Windows Embedded for Galileo (and, coming soon, a version for Raspberry Pi 2) to reach out to DIYers, hardware makers and developers like you and me. Tons of opportunities lie wide open for us to create applications that gather real-time analog data using sensors and transmit it for analysis.

So let’s get started with configuring our Intel Galileo V2

Prerequisites, first!

Let’s start with a checklist of the hardware and software you need to get started


  • Intel Galileo V2 Board (with 12v power supply)
  • microSD card with adapter – Minimum 16GB, Class 10
  • Ethernet/LAN cable
  • USB cable
  • *Laptop with USB port and Ethernet port
  • Internet connection


*If you have an Ultrabook (like I do) and don’t have an Ethernet port on your laptop, you will need a router that has an unused Ethernet port.


The Intel Galileo V2 Board

The Intel Galileo V2 board comes packed in a static-resistant bag with a wide range of plug adapters for the power supply.

The board looks like the one shown above,

  1. USB port to connect to PC
  2. Ethernet port to connect to PC or router
  3. 12 volt power supply
  4. microSD slot to load WIM
  5. Additional USB port

Once your board has been initialized, you can see 2 LEDs light up as shown below

Point to note is that I’ve not inserted my microSD card and Ethernet cable in their slots on Galileo board.

Associating Galileo to COM port on your laptop

From the Start menu, open Device Manager as an administrator. Navigate to Other devices > Gadget Serial v2.4 and select Update Driver Software.

Select the folder ‘C:\arduino-1.5.3\hardware\arduino\x86\tools’ to browse for the drivers.

This will associate a COM port (serial port) for Galileo under the Ports section in Device Manager


Loading Windows Image for Embedded devices to microSD


Connect your microSD card to your PC using a microSD adapter. Once the microSD card has been detected, format it using FAT32 (not NTFS). Let’s assume the microSD has a drive letter E:

Open Command Prompt as an administrator and navigate to the directory where you downloaded all the files from Microsoft Connect website and execute following command,

apply-bootmedia.cmd -destination {WhateverYourSDCardDriveLetterIS} -image {latest WIM image} -hostname mygalileo -password admin

In my case, it was

apply-bootmedia.cmd -destination E: -image 9600.16384.x86fre.winblue_rtm_iotbuild.141114-1440_galileo_v2.wim -hostname mygalileo -password admin

The process will take some time and the imaging output will look like,

Once this is done, you can view the contents of microSD card in Windows Explorer.

An interesting point to note is that Windows OS takes less than 1GB of disk space.

You can now eject the microSD card from laptop and insert it in Galileo. You can also connect the Ethernet cable to it. The setup should appear like,

Run the MSI you downloaded from Microsoft Connect website and boot up Galileo. The boot process will take around 2 minutes and then the Galileo Watcher will detect your device

You can right-click your Galileo in Galileo Watcher and Telnet to the device, or open a web browser by right-clicking the device. Galileo Watcher also shows the device IP, which you can use to browse pages served by the device, such as the memory consumed by Windows.

You can view the contents of the microSD card by connecting to \\<device IP>\c$. The username should be administrator and the password admin, unless you changed it when applying the image.

Shutting down Galileo


Well, it’s always advisable to shut down Windows safely, and so it is for Galileo. You can type the standard shutdown command over telnet

shutdown /s /t 0

And that’s how you can setup Galileo to run Windows.

Introduction to Cloud Patterns for Enterprise Apps

Azure Weekly runs every Tuesday as a Microsoft UK initiative and is aimed at the techie who has not yet had much exposure to Azure but wants a leg-up to get started. It is a practically focused session rather than a theoretical/architectural one.

On January 27th, 2015 12:30-14:00 (UK time zone), I will be presenting as a guest speaker on Introduction to Cloud Patterns for Enterprise Apps.

The overall agenda of the session is-

Steve Plank – Demos on

  • Creating a Microsoft Azure WordPress website
  • Creating a Microsoft Azure ASP.Net website
  • Creating a Microsoft Azure Virtual Machine
  • Creating a Microsoft Azure Mobile Services (with Android client)
  • Creating a Microsoft Azure Cloud Service
  • How to sign up for a free Microsoft Azure trial

Punit Ganshani – Insight on

  • How best we can transform an on-premises application into an Azure-hosted, cloud-aware application
  • Reap the benefits of scalability and high-availability
  • Critical design decisions every developer or architect has to make and associate their solutions with Cloud Patterns

You can register for the session on: Microsoft UK Azure Weekly Portal

Other Azure Weekly Sessions planned for January 2015 can be viewed at Get Started with Azure Weekly Series.

Cross Origin Resource Sharing with WCF JSON REST Services

My KonfDB platform provides a reliable way of doing configuration management as a service for cross-platform, multi-tenant applications. When we refer to cross-platform capabilities, one of the ways to support clients built with native technologies is by way of REST services. WCF allows us to host a service and expose multiple endpoints using different protocols. So when KonfDB was in the design phase, I chose WCF as the tech stack to support multiple endpoints and protocols.

I had written an article, REST services with Windows Phone, which is a good starting point for understanding WCF REST services. Now, when you want this service to be accessible from different platforms – web, mobile, or across domains (in particular, Ajax requests) – we need to design a few interceptors and behaviours that allow Cross Origin Resource Sharing (CORS).

For this post, I will use the code from my own KonfDB platform. So those interested can actually visit the GitHub repository and explore more as well.

First, how CORS works


CORS works through specific instructions, sent from the server, which browsers respect. These instructions are “additional” HTTP headers based on the HTTP method – GET or POST – and specific MIME types. For an HTTP POST with certain MIME types, the browser needs to “preflight” the request: it first sends an HTTP OPTIONS request and, upon approval from the server, sends the actual HTTP request.
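The preflight exchange described above looks roughly like this on the wire (headers abbreviated; the exact set depends on the request):

```http
OPTIONS /CommandService/Execute HTTP/1.1
Origin: http://client.example.com
Access-Control-Request-Method: GET

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: X-Requested-With,Content-Type

GET /CommandService/Execute?cmd=someCommand&token=alpha HTTP/1.1
Origin: http://client.example.com
```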

So in a nutshell, we need some provision to handle these additional HTTP headers. In this post, we will see how we can change a RESTful service to support CORS.

REST Service Interface


A typical non-REST service interface defines methods and decorates them with the OperationContract attribute. A REST service requires an additional attribute – one of WebGet, WebPut or WebInvoke. In the example below, to support Cross Origin Resource Sharing (CORS), we decorate the method with WebInvoke and set its Method="*".

[ServiceContract(Namespace = ServiceConstants.Schema, Name = "ICommandService")]
public interface ICommandService : IService
{
    [OperationContract(Name = "Execute")]
    [WebInvoke(Method = "*", ResponseFormat = WebMessageFormat.Json,
        BodyStyle = WebMessageBodyStyle.Bare,
        UriTemplate = "/Execute?cmd={command}&token={token}")]
    ServiceCommandOutput ExecuteCommand(string command, string token);
}


RESTful Behaviour and Endpoint


In KonfDB, the WCF service is hosted in a Windows Service container. To provide consistent behaviour across bindings, and for future extensibility, I have derived bindings from the native bindings available in the .NET framework. My REST binding looks like,


    public class RestBinding : WebHttpBinding
    {
        public RestBinding()
        {
            this.Namespace = ServiceConstants.Schema;
            this.Name = ServiceConstants.ServiceName;
            this.CrossDomainScriptAccessEnabled = true;
        }
    }


The important point to note is that CrossDomainScriptAccessEnabled is set to true. This is essential for a WCF service to work with CORS – and yes, it is safe!

Defining a CORS Message Inspector and Header

As mentioned earlier, we need a mechanism to intercept the request and add additional HTTP headers telling the browser that the service supports CORS. Since this functionality is required at the endpoint level, we define an endpoint behaviour for it. The code for EnableCorsEndpointBehavior looks like,


public class EnableCorsEndpointBehavior : BehaviorExtensionElement, IEndpointBehavior
{
        public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { }

        public void ApplyClientBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.ClientRuntime clientRuntime) { }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher)
        {
            var requiredHeaders = new Dictionary<string, string>();

            requiredHeaders.Add("Access-Control-Allow-Origin", "*");
            requiredHeaders.Add("Access-Control-Request-Method", "POST,GET,PUT,DELETE,OPTIONS");
            requiredHeaders.Add("Access-Control-Allow-Headers", "X-Requested-With,Content-Type");

            var inspector = new CustomHeaderMessageInspector(requiredHeaders);
            endpointDispatcher.DispatchRuntime.MessageInspectors.Add(inspector);
        }

        public void Validate(ServiceEndpoint endpoint) { }

        public override Type BehaviorType
        {
            get { return typeof(EnableCorsEndpointBehavior); }
        }

        protected override object CreateBehavior()
        {
            return new EnableCorsEndpointBehavior();
        }
}

A few important points to note:

  • First, the header Access-Control-Allow-Origin is set to * and Access-Control-Request-Method includes OPTIONS. If you want to allow requests only from a particular domain, change Access-Control-Allow-Origin to http://www.mydomain.com and it will work correctly.
  • Second, we pass these additional headers to a message inspector via the class CustomHeaderMessageInspector.

The CustomHeaderMessageInspector class, which acts as a dispatch message inspector (IDispatchMessageInspector), adds these headers to the reply so that the client is aware of CORS. The code looks like,

    internal class CustomHeaderMessageInspector : IDispatchMessageInspector
    {
        private readonly Dictionary<string, string> _requiredHeaders;

        public CustomHeaderMessageInspector(Dictionary<string, string> headers)
        {
            _requiredHeaders = headers ?? new Dictionary<string, string>();
        }

        public object AfterReceiveRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel, System.ServiceModel.InstanceContext instanceContext)
        {
            return null;
        }

        public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
        {
            var httpHeader = reply.Properties["httpResponse"] as HttpResponseMessageProperty;
            foreach (var item in _requiredHeaders)
            {
                httpHeader.Headers.Add(item.Key, item.Value);
            }
        }
    }

The last bit is adding this behaviour to the endpoint. Since the service is self-hosted and there is no WCF configuration file, the code looks like

            var serviceEndpoint = host.AddServiceEndpoint(typeof (T), binding.WcfBinding, endpointAddress);
            serviceEndpoint.Behaviors.Add(new WebHttpBehavior());
            serviceEndpoint.Behaviors.Add(new FaultingWebHttpBehavior());
            serviceEndpoint.Behaviors.Add(new EnableCorsEndpointBehavior());
            return serviceEndpoint;


Hosting and Testing this service

Using the usual ServiceHost you can host this service, and it should run perfectly.  To test the service, you can write some jQuery code


            $('#btnGet').click(function () {
                var requestUrl = 'http://localhost:8882/CommandService/Execute?cmd=someCommand&token=alpha';
                var token = null;
                $.ajax({
                    url: requestUrl,
                    type: "GET",
                    contentType: "application/json; charset=utf-8",
                    success: function (data) {
                        var outputData = $.parseJSON(data.Data);
                        token = outputData.Token;
                    },
                    error: function (e) {
                        alert('error:' + JSON.stringify(e));
                    }
                });
            });

If you test this on Chrome with Inspector (F12), the network interaction would appear as shown in the screenshot below,

For a single Ajax request, as expected, there is an HTTP OPTIONS request followed by an HTTP GET request. If CrossDomainScriptAccessEnabled is not set to true in RestBinding, then we would get an HTTP 403 error – METHOD NOT FOUND.

If we look into the headers of the first request, we see that our WCF service (CustomHeaderMessageInspector) has added the additional headers (highlighted) to the response.

Since the browser got an HTTP status code of 200, it initiated the second (actual) request, which is an HTTP GET request.

You can view the source code in KonfDB GitHub repository.