Introduction

One of the interesting things that C# 9 brought is the introduction of Source Generators. When compiling your code, the C# compiler can generate extra code and add it to your project, complementing what you wrote. This has many possible uses (a minimal generator sketch follows the list):

  • Add an attribute to your code to generate boilerplate code: you can add an attribute to a private field and the code generator will add the corresponding property and the INotifyPropertyChanged implementation to your class.
  • Discover the dependencies needed at compile time and wire them in the executable. This can improve startup time, because wiring dependencies using reflection is very slow; in this case, the dependencies would already be set up when the program starts.
  • Parse files and generate code from them: you could have a JSON file with data and the compiler would add the corresponding classes without you having to create them manually. Whenever the file changes (and thus the class structure), the classes would be regenerated.
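To make the mechanism concrete, here is a minimal source generator sketch (the generator name and the generated namespace are just illustrative): a class marked with [Generator] that implements ISourceGenerator and adds a new source file to the compilation.

using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

[Generator]
public class HelloGenerator : ISourceGenerator
{
    public void Initialize(GeneratorInitializationContext context)
    {
        // Nothing to register for this simple sketch
    }

    public void Execute(GeneratorExecutionContext context)
    {
        // Add a new source file to the consuming project's compilation
        context.AddSource("Hello.g.cs", SourceText.From(@"
namespace Generated
{
    public static class Hello
    {
        public static string Message => ""Hello from a source generator"";
    }
}", Encoding.UTF8));
    }
}

The generator is packaged as an analyzer and referenced by the consuming project, which can then call Generated.Hello.Message as if the class had been written by hand.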

One interesting use of this feature comes from the Windows SDK team, to make the Win32 APIs easier to consume from C# code. Instead of using traditional P/Invoke (where you go to http://pinvoke.net to get the signatures and structures, add them as external methods and call them), you can use C#/Win32, developed to simplify the usage of Win32 APIs.
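For comparison, this is roughly what a hand-written P/Invoke declaration looks like for FindClose, the simplest of the functions we will use below (a sketch of the traditional approach – with C#/Win32 you won't write this by hand):

using System;
using System.Runtime.InteropServices;

static class ManualNativeMethods
{
    // Traditional P/Invoke: the signature is transcribed by hand from the docs or pinvoke.net
    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool FindClose(IntPtr hFindFile);
}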

Using C#/Win32

To use C#/Win32, you must have the .NET 5 SDK (5.0.102 or later) installed on your machine and you must be using Visual Studio 16.8 or newer. With these prerequisites, you can create a new console application from the command line with:

dotnet new console -o UseWin32

This line will create a new console project in the UseWin32  folder. Then you must add the CsWin32  NuGet package with:

dotnet add package Microsoft.Windows.CsWin32 --prerelease

Now you are ready to open the project in Visual Studio or Visual Studio Code and use it. This first project will enumerate the files in the current directory. Yes, I know that .NET already has methods to do that, but we can also do it using Win32.
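Just for comparison, the purely managed version is a couple of lines (using only the base class library):

using System;
using System.IO;

// The same enumeration using the .NET base class library instead of Win32
foreach (var file in Directory.EnumerateFiles(Directory.GetCurrentDirectory()))
    Console.WriteLine(Path.GetFileName(file));

Now let's do the same thing through the Win32 API.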

Create a new text file and name it NativeMethods.txt. In this file, add the names of the three API functions that we need:

FindFirstFile
FindNextFile
FindClose

Now, we are ready to use these methods in our program. In Program.cs, erase everything and add this code:

using System;
using System.Linq;
using Microsoft.Windows.Sdk;

var handle = PInvoke.FindFirstFile("*.*", out var findData);
if (handle.IsInvalid)
    return;

We are using another feature of C# 9, top-level statements. We can see that CsWin32 has generated a new class named PInvoke and added the FindFirstFile method. If you hover the mouse over it, you will see something like this:

As you can see, it has added the function and also the documentation for it. If you right-click on the function and select Go to Definition, it will open the generated code:

We can then continue the code to enumerate the files:

using System;
using System.Linq;
using Microsoft.Windows.Sdk;

var handle = PInvoke.FindFirstFile("*.*", out var findData);
if (handle.IsInvalid)
    return;
bool result;
do
{
    Console.WriteLine(ConvertFileNameToString(findData.cFileName.AsSpan()));
    result = PInvoke.FindNextFile(handle, out findData);
} while (result);
PInvoke.FindClose(handle);

string ConvertFileNameToString(Span<ushort> span)
{
    return string.Join("", span.ToArray().TakeWhile(i => i != 0).Select(i => (char)i));
}

This code opens the enumeration with FindFirstFile. If the returned handle is invalid, there are no files in the folder, so the program exits. Otherwise, it prints each file name to the console and continues the enumeration until the last file, when it calls FindClose to close the handle. The file name is returned as a structure named __ushort_260, which can be converted to a Span<ushort> with the AsSpan method. To convert this to a string, we use the ConvertFileNameToString method, which uses LINQ: it takes all the items until it finds a 0, converts them to an IEnumerable<char> and then uses string.Join to build the string.
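As a side note, if you prefer to avoid the LINQ round trip, an equivalent sketch (assuming you add using System.Runtime.InteropServices; at the top of the file) reinterprets the span as chars and cuts it at the terminating zero:

string ConvertFileNameToString(Span<ushort> span)
{
    // Reinterpret the UTF-16 code units as chars, then stop at the first '\0'
    var chars = MemoryMarshal.Cast<ushort, char>(span);
    var length = chars.IndexOf('\0');
    return new string(length < 0 ? chars : chars.Slice(0, length));
}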

If you use this code, you will see that FindClose(handle) has an error. That’s because the FindClose function receives a parameter of type HANDLE, while the handle variable is of the FindCloseSafeHandle type, and the two are not compatible (FindCloseSafeHandle has a handle field, but it’s protected and cannot be used). The solution, in this case, is to dispose the handle variable, which will call FindClose. This code shows how this is done:

using var handle = PInvoke.FindFirstFile("*.*", out var findData);
if (handle.IsInvalid)
    return;
bool result;
do
{
    Console.WriteLine(ConvertFileNameToString(findData.cFileName.AsSpan()));
    result = PInvoke.FindNextFile(handle, out findData);
} while (result);

We are using the C# 8 using declaration here, so we don’t need a block. When you run this code, you will see the enumeration of the files in the console:

Function callbacks

As you can see, there is no need to dig deep to use the Win32 APIs, but some APIs are more complex and use a callback function. These can also be used in the same way. To see that, we will enumerate all resources in an executable. To do that, we load the executable with LoadLibraryEx. Once loaded, we use EnumResourceTypes to enumerate all resource types and, for every resource type, we enumerate the resources with EnumResourceNames.

Create a new project with

dotnet new console -o EnumerateResources

Then add the CsWin32 NuGet package with

dotnet add package Microsoft.Windows.CsWin32 --prerelease

Then, open the project in Visual Studio and add a new text file and name it NativeMethods.txt. Add these functions in the file:

LoadLibraryEx
FreeLibrary
EnumResourceTypes
EnumResourceNames

In Program.cs, erase all text and add this code:

using System;
using Microsoft.Windows.Sdk;

var hInst = PInvoke.LoadLibraryEx(@"C:\Windows\Notepad.exe",
    null, LoadLibraryEx_dwFlags.LOAD_LIBRARY_AS_DATAFILE);
try
{

}
finally
{
    PInvoke.FreeLibrary(hInst);
}

This code calls LoadLibraryEx to load Notepad. The LOAD_LIBRARY_AS_DATAFILE constant was changed to an enum value. This function returns the instance handle, which will be used to enumerate the resources. At the end, we free the instance using FreeLibrary.

Now, we’ll start to enumerate the resources with:

var hInst = PInvoke.LoadLibraryEx(@"C:\Windows\Notepad.exe",
    null, LoadLibraryEx_dwFlags.LOAD_LIBRARY_AS_DATAFILE);
try
{
    PInvoke.EnumResourceTypes(hInst, EnumerateTypes, 0);
}
finally
{
    PInvoke.FreeLibrary(hInst);
}

You can see that the second parameter of EnumResourceTypes is a callback function (whose signature I don’t even know :-)). Let’s see if Visual Studio will help us with this. We type Ctrl+. and it will propose to create the method. We select it and the method is created:

BOOL EnumerateTypes(nint hModule, PWSTR lpType, nint lParam)
{
    throw new NotImplementedException();
}

We can add our code to enumerate the types:

BOOL EnumerateTypes(nint hModule, PWSTR lpType, nint lParam)
{
    Console.WriteLine(PwStrToString(lpType));
    PInvoke.EnumResourceNames(hModule, lpType, EnumNames, 0);
    return true;
}

This code writes the resource type to the console and enumerates the resources for that type. We are using a function, PwStrToString, to convert the resource name (a PWSTR) to a string:

string PwStrToString(PWSTR str)
{
    unsafe
    {
        return ((ulong)str.Value & 0xFFFF0000) == 0 ?
            ((ulong)str.Value).ToString() :
            str.AsSpan().ToString();
    }
}

The PWSTR struct has a Value property that can hold either an integer ID or a pointer to a string. To know which one we have, we test whether the high-order bits are empty. If they are, the value is an integer and we convert it to a string directly. If not, we convert it to a Span and get the string from it. All this code must be marked as unsafe, as we are working with pointers.

EnumResourceNames also has a callback, whose signature we get the same way we did before, by using Visual Studio refactoring:

BOOL EnumNames(nint hModule, PCWSTR lpType, PWSTR lpName, nint lParam)
{
    Console.WriteLine("  " + PwStrToString(lpName));
    return true;
}

Now the program is complete and we can run it to list all of Notepad’s resources:

As you can see, working with the Win32 API is much easier now: we don’t have to write custom P/Invokes, everything is in one place, and all we have to do is add the functions we want to the NativeMethods.txt file. This is really a great and welcome improvement.

Ah, and if you want to know what those resource type numbers are, you can find them here.

  • 4 – Menu
  • 5 – Dialog
  • 6 – String
  • 9 – Accelerator
  • 16 – Version
  • 24 – Manifest

All the source code for this article is at https://github.com/bsonnino/UseWin32

 

With .NET 5.0, two small features were introduced to ASP.NET and went almost unnoticed: OpenAPI and HttpRepl. OpenAPI is not something new, it’s been available for a long time, but it had to be included explicitly in a new project. Now, when you create a new project, it’s included automatically and you can get the API documentation using Swagger.
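For reference, the .NET 5 webapi template wires this up in Startup.cs with something along these lines (a rough sketch – the generated file will use your project’s name instead of MyWebApi and may differ in details):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.OpenApi.Models;

// Inside the template-generated Startup class:
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    // The Swashbuckle OpenAPI generator is registered by the template
    services.AddSwaggerGen(c =>
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "MyWebApi", Version = "v1" }));
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseSwagger();        // serves the OpenAPI JSON document
        app.UseSwaggerUI(c =>    // serves the interactive Swagger page
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "MyWebApi v1"));
    }

    app.UseRouting();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}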

Now, when you create a new project with

dotnet new webapi

You will create a new WebApi project with a controller, WeatherForecastController, that shows 10 values of a weather forecast:

It’s a simple app, but it already comes with OpenAPI (Swagger) documentation. Once you type the address:

https://localhost:5001/swagger

You will get the Swagger documentation and will be able to test the service:

But there is more. Microsoft also introduced HttpRepl, a REPL (Read-Eval-Print Loop) for testing REST services. It will scan your service and allow you to test it using simple commands, much like file system commands.

To test this new feature, create a webapi app in a new folder with

dotnet new webapi

Then, open Visual Studio Code with

code .

You will get something like this:

Then, delete the WeatherForecast.cs file and add a new folder and name it Model. In it, add a new file and name it Customer.cs and add this code:

public class Customer
{
    public string CustomerId { get; set; }
    public string CompanyName { get; set; }
    public string ContactName { get; set; }
    public string ContactTitle { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string Region { get; set; }
    public string PostalCode { get; set; }
    public string Country { get; set; }
    public string Phone { get; set; }
    public string Fax { get; set; }
}

Create a new file and name it CustomerRepository.cs and add this code:

using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using System.Xml.Linq;

namespace HttpRepl.Model
{
    public class CustomerRepository
    {
        private readonly IList<Customer> customers;

        public CustomerRepository()
        {
            var doc = XDocument.Load("Customers.xml");
            customers = new ObservableCollection<Customer>((from c in doc.Descendants("Customer")
                                                            select new Customer
                                                            {
                                                                CustomerId = GetValueOrDefault(c, "CustomerID"),
                                                                CompanyName = GetValueOrDefault(c, "CompanyName"),
                                                                ContactName = GetValueOrDefault(c, "ContactName"),
                                                                ContactTitle = GetValueOrDefault(c, "ContactTitle"),
                                                                Address = GetValueOrDefault(c, "Address"),
                                                                City = GetValueOrDefault(c, "City"),
                                                                Region = GetValueOrDefault(c, "Region"),
                                                                PostalCode = GetValueOrDefault(c, "PostalCode"),
                                                                Country = GetValueOrDefault(c, "Country"),
                                                                Phone = GetValueOrDefault(c, "Phone"),
                                                                Fax = GetValueOrDefault(c, "Fax")
                                                            }).ToList());
        }

        #region ICustomerRepository Members

        public bool Add(Customer customer)
        {
            if (customers.FirstOrDefault(c => c.CustomerId == customer.CustomerId) == null)
            {
                customers.Add(customer);
                return true;
            }
            return false;
        }

        public bool Remove(Customer customer)
        {
            if (customers.IndexOf(customer) >= 0)
            {
                customers.Remove(customer);
                return true;
            }
            return false;
        }

        public bool Update(Customer customer)
        {
            var currentCustomer = GetCustomer(customer.CustomerId);
            if (currentCustomer == null)
                return false;
            currentCustomer.CustomerId = customer.CustomerId;
            currentCustomer.CompanyName = customer.CompanyName;
            currentCustomer.ContactName = customer.ContactName;
            currentCustomer.ContactTitle = customer.ContactTitle;
            currentCustomer.Address = customer.Address;
            currentCustomer.City = customer.City;
            currentCustomer.Region = customer.Region;
            currentCustomer.PostalCode = customer.PostalCode;
            currentCustomer.Country = customer.Country;
            currentCustomer.Phone = customer.Phone;
            currentCustomer.Fax = customer.Fax;
            return true;    
        }

        public bool Commit()
        {
            try
            {
                var doc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"));
                var root = new XElement("Customers");
                foreach (Customer customer in customers)
                {
                    root.Add(new XElement("Customer",
                                          new XElement("CustomerID", customer.CustomerId),
                                          new XElement("CompanyName", customer.CompanyName),
                                          new XElement("ContactName", customer.ContactName),
                                          new XElement("ContactTitle", customer.ContactTitle),
                                          new XElement("Address", customer.Address),
                                          new XElement("City", customer.City),
                                          new XElement("Region", customer.Region),
                                          new XElement("PostalCode", customer.PostalCode),
                                          new XElement("Country", customer.Country),
                                          new XElement("Phone", customer.Phone),
                                          new XElement("Fax", customer.Fax)
                                 ));
                }
                doc.Add(root);
                doc.Save("Customers.xml");
                return true;
            }
            catch (Exception)
            {
                return false;
            }
        }

        public IEnumerable<Customer> GetAll() => customers;

        public Customer GetCustomer(string id) => customers.FirstOrDefault(c => string.Equals(c.CustomerId, id, StringComparison.CurrentCultureIgnoreCase));

        #endregion

        private static string GetValueOrDefault(XContainer el, string propertyName)
        {
            return el.Element(propertyName) == null ? string.Empty : el.Element(propertyName).Value;
        }
    }
}

This code uses a file named Customers.xml to serve the customers repository. With it, you will be able to get all customers, and get, add, update or delete one customer. We will use it to serve our controller. The Customers.xml file can be obtained at . You should add this file to the main folder and then add this to the project file:

<ItemGroup >
  <None Update="customers.xml" CopyToPublishDirectory="PreserveNewest" />
</ItemGroup>

This will ensure that the xml file is copied to the output folder when the project is built.

Then, in the Controllers folder, delete WeatherForecastController.cs and add a new file, CustomerController.cs, with this code:

using System.Collections.Generic;
using HttpRepl.Model;
using Microsoft.AspNetCore.Mvc;

namespace HttpRepl.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class CustomerController : ControllerBase
    {
        [HttpGet]
        public IEnumerable<Customer> Get()
        {
            var customerRepository = new CustomerRepository();
            return customerRepository.GetAll();
        }

        [HttpGet("{id}")]
        public IActionResult GetCustomer(string id)
        {
            var customerRepository = new CustomerRepository();
            Customer customer = customerRepository.GetCustomer(id);
            return customer != null ? Ok(customer) : NotFound();
        }

        [HttpPost]
        public IActionResult Add([FromBody] Customer customer)
        {
            if (string.IsNullOrWhiteSpace(customer.CustomerId))
              return BadRequest();
            var customerRepository = new CustomerRepository();
            if (customerRepository.Add(customer))
            {
                customerRepository.Commit();
                return CreatedAtAction(nameof(Get), new { id = customer.CustomerId }, customer);
            }
            return Conflict();
        }

        [HttpPut]
        public IActionResult Update([FromBody] Customer customer)
        {
            if (string.IsNullOrWhiteSpace(customer.CustomerId))
              return BadRequest();
            var customerRepository = new CustomerRepository();
            var currentCustomer = customerRepository.GetCustomer(customer.CustomerId);
            if (currentCustomer == null)
                return NotFound();
            if (customerRepository.Update(customer))
            {
                customerRepository.Commit();
                return Ok(customer);
            }
            return NoContent();
        }

        [HttpDelete("{id}")]
        public IActionResult Delete([FromRoute]string id)
        {
            if (string.IsNullOrWhiteSpace(id))
              return BadRequest();
            var customerRepository = new CustomerRepository();
            var currentCustomer = customerRepository.GetCustomer(id);
            if (currentCustomer == null)
                return NotFound();
            if (customerRepository.Remove(currentCustomer))
            {
                customerRepository.Commit();
                return Ok();
            }
            return NoContent();
        }
    }
}

This controller adds actions to get all customers, and to get, add, update or delete one customer. The project is ready to run. You can run it with dotnet run and, when you open a new browser window and type this address:

https://localhost:5001/swagger

You will get something like this:

You can test the service with the Swagger page (as you can see, it was generated automatically when you compiled and ran the app), but there is still another tool: HttpRepl. This tool was added with .NET 5 and you can install it with the command:

dotnet tool install -g Microsoft.dotnet-httprepl

Once you install it, you can run it with

httprepl https://localhost:5001

When you run it, you will get the REPL prompt:

If you type the ui command, a new browser window will open with the Swagger page. You can type ls to list the available controllers and actions:

As you can see, it has a folder structure and you can test the service using the command line. For example, you can get all the customers with get customer or get one customer with get customer/blonp:

You can also “change directory” to the Customer “directory” with cd Customer. In this case, you can query the customer with get blonp:

If the body content is simple, you can use the -c parameter, like in:

post -c "{"customerid":"test"}"

This will add a new record with just the customer id:

If the content is more complicated, you must set the default editor, so you can edit the customer that will be inserted in the repository. You must do that with the command:

pref set editor.command.default "c:\windows\system32\notepad.exe"

This will open notepad when you type a command that needs a body, so you can type the body that will be sent when the command is executed. If you type post in the command line, notepad will open to edit the data. You can type this data:

{
  "customerId": "ABCD",
  "companyName": "Abcd Inc.",
  "contactName": "A.B.C.Dinc",
  "contactTitle": "Owner",
  "address": "1234 - Acme St - Suite A",
  "city": "Abcd",
  "region": "AC",
  "postalCode": "12345",
  "country": "USA",
  "phone": "(501) 555-1234"
}

When you save the file and close notepad, you will get something like this:

If you try to add the same record again, you will get an error 409, indicating that the record already exists in the database:

As you can see, the repository is doing the checks and sending the correct response to the REPL. You can use the same procedure to update or delete a customer. For the delete, you just have to pass the customer Id:

Now that we know all the commands we can use in the REPL, we can go one step further and use scripts. You can write a text file with the commands and run it as a script. Let’s say we want to exercise the entire API by issuing all commands in a single run. We can create a script like this (we’ll name it TestApi.txt):

connect https://localhost:5001
ls
cd Customer
ls
get 
get alfki
post --content "{"customerId": "ABCD","companyName": "Abcd Inc.","contactName": "A.B.C.Dinc"}"
get abcd
put --content "{"customerId": "ABCD","companyName": "ACME Inc"}"
delete abcd
get abcd
cd ..
ls

And then open HttpRepl and run the command

run testapi.txt

We’ll get this output:

As you can see, with this tool you get a very easy way to exercise your API. It’s not as fancy as a dedicated program like Postman, but it’s easy to use and does its job.

The full code for the project is at https://github.com/bsonnino/HttpRepl

 

Times have changed and Microsoft is not the same: Edge, the new Microsoft browser, has been remodeled and now uses the Chromium engine, an open source browser engine developed by Google.

With that, the way you can develop browser apps has also changed – you can use the same browser engine Microsoft uses in its browser to build your own browser apps. To do that, you use the new WebView2 control. The new WebView2 control is not tied to a specific Windows version or development platform: you can use it in any Windows version, from 7 to 10, and in a Win32, .NET (Core or full framework) or UWP app.

This article will show how to use it and interact with it in a WPF app. We will develop a program that searches the Microsoft Docs site, so you can easily search there for information.

Introduction

Using the new WebView2 control in a .NET app is very simple, it’s just a matter of adding a NuGet package and you’re already set up. In Visual Studio, create a new WPF app. Right-click the Dependencies node in the Solution Explorer and select “Manage NuGet Packages”, then select the Microsoft.Web.WebView2 package. Then, in MainWindow.xaml, add the main UI:

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="40"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
    <StackPanel Orientation="Horizontal">
        <TextBlock Text="Search Text" Margin="5" VerticalAlignment="Center"/>
        <TextBox Text="" x:Name="SearchText" Margin="5" VerticalAlignment="Center"
                 Width="400" Height="30" VerticalContentAlignment="Center"/>
        <Button Content="Find" Width="65" Height="30" Margin="5" Click="ButtonBase_OnClick"/>
    </StackPanel>
    <wpf:WebView2 x:Name="WebView" Grid.Row="1" Source="" />
</Grid>

You will have to add the wpf  namespace to the xaml:

xmlns:wpf="clr-namespace:Microsoft.Web.WebView2.Wpf;assembly=Microsoft.Web.WebView2.Wpf"

As you can see, we are adding a textbox for the text to search and a button to activate the search. In the WebView control, we left the Source property blank, as we don’t want to go to a specific site. If we wanted to start with some page, we would have to fill this property. For example, if you fill the Source property with  https://docs.microsoft.com you would have something like this:

 

Navigating with the WebView

As you can see, it’s very easy to add a browser to your app, but we want to add more than a simple browsing experience. We want to make our app a custom way of browsing. To do that, we will use the following button click event handler:

private void ButtonBase_OnClick(object sender, RoutedEventArgs e)
{
    if (string.IsNullOrWhiteSpace(SearchText.Text))
        return;
    var searchAddress =
        $"https://docs.microsoft.com/en-us/search/?terms={HttpUtility.UrlEncode(SearchText.Text)}";
    WebView?.CoreWebView2?.Navigate(searchAddress);
}

We take the text the user wants to search for, create a URL for searching the Microsoft Docs website and then navigate to it. Once we do that, we can search the docs just by typing the wanted text and clicking the button:

We can also add back and forward navigation by adding these two buttons:

<StackPanel Orientation="Horizontal" Grid.Row="0" HorizontalAlignment="Right" TextElement.FontFamily="Segoe MDL2 Assets">
    <Button Content="&#xE0A6;" Width="30" Height="30" Margin="5" ToolTip="Back" Click="GoBackClick"/>
    <Button Content="&#xE0AB;" Width="30" Height="30" Margin="5" ToolTip="Forward" Click="GoForwardClick"/>
</StackPanel>

The click event handlers that allow the browser to navigate to the previous or next page in history are:

private void GoBackClick(object sender, RoutedEventArgs e)
{
    if (WebView.CanGoBack)
        WebView.GoBack();
}

private void GoForwardClick(object sender, RoutedEventArgs e)
{
    if (WebView.CanGoForward)
        WebView.GoForward();
}

Once you have this code in place, you are able to use the two buttons to navigate in the browser history.

Customizing the page

One thing that bothers me in this app is that there are several items on the page that don’t belong to our search: the top bar, the footer bar, the search box, and so on. Wouldn’t it be nice to remove these items from the page when we show it? There is a way to do that, but we’ll have to resort to JavaScript: when the page is loaded, we’ll inject a script that removes the parts we don’t want. For that, we must create this code:

        async void InitializeAsync()
        {
            await WebView.EnsureCoreWebView2Async(null);
            WebView.CoreWebView2.DOMContentLoaded += CoreWebView2_DOMContentLoaded;
        }

        private async void CoreWebView2_DOMContentLoaded(object sender, CoreWebView2DOMContentLoadedEventArgs e)
        {

            await WebView.ExecuteScriptAsync(
                @"
window.onload = () => {
  var rss = document.querySelector('[data-bi-name=""search-rss-link""]');
  console.log(rss);
  if (rss)
    rss.style.display = 'none'; 
  var form = document.getElementById('facet-search-form');
  console.log(form);
  if (form)
    form.style.display = 'none';  
  var container = document.getElementById('left-container');
  console.log(container);
  if (container)
    container.style.display = 'none';  
  var hiddenClasses = ['header-holder', 'footerContainer'];
  var divs = document.getElementsByTagName('div');
  for( var i = 0; i < divs.length; i++) {
    if (hiddenClasses.some(r=> divs[i].classList.contains(r))){
      divs[i].style.display = 'none';
    }
  }
}");
        }

We have two parts in this code: initially, we ensure that the CoreWebView2 component is created, and then we set a DOMContentLoaded event handler, which will be called when the HTML content is loaded in the browser. In the event handler, we inject the script and execute it with ExecuteScriptAsync. That is enough to remove the parts we don’t want from the page. The JavaScript code retrieves the parts we want to hide and sets their display style to none. This code is called from the constructor of the main window:

public MainWindow()
{
    InitializeComponent();
    InitializeAsync();
}

 

You can also see the console.log commands in the code. You can debug the JavaScript code while browsing by pressing the F12 key. The developer tools will open in a separate window:

As you can see from the image, the top bar was removed, we now have a cleaner page. You can use the same technique to remove the context menu or add other functionality to the browser. Now we will get some extra info from the page.

Communicating between the app and the browser

When we are browsing the results page, we can get something out of it: we can copy the results to the clipboard, so we can use them later. To do that, we must use the communication between the app and the WebView. This is done through a messaging process. The app can send a message to the WebView using something like

WebView?.CoreWebView2?.PostWebMessageAsString("message");

The WebView will receive the message and can process it by adding an event listener like in

window.chrome.webview.addEventListener('message', event => {
  if (event.data === 'message') {
    // process message
  }
});

When we want to send messages in the other direction, from the WebView to the app, we can send it from JavaScript, using

window.chrome.webview.postMessage(message);

It will be received by the app with an event handler like

WebView.CoreWebView2.WebMessageReceived += (s, args) =>
{
  data = args.WebMessageAsJson;
  // Process data
}

That way, we can have full communication between the app and the WebView and we can add the functionality we want: copy the results from the results page to the clipboard. The first step is to add the button to copy the results in MainWindow.xaml:

<StackPanel Orientation="Horizontal">
    <TextBlock Text="Search Text" Margin="5" VerticalAlignment="Center"/>
    <TextBox Text="" x:Name="SearchText" Margin="5" VerticalAlignment="Center"
             Width="400" Height="30" VerticalContentAlignment="Center"/>
    <Button Content="Find" Width="65" Height="30" Margin="5" Click="ButtonBase_OnClick"/>
    <Button Content="Copy" Width="65" Height="30" Margin="5" Click="CopyClick"/>
</StackPanel>

The click event handler will send a message to the WebView:

private void CopyClick(object sender, RoutedEventArgs e)
{
    WebView?.CoreWebView2?.PostWebMessageAsString("copy");
}

We must inject some code into the web page to receive and process the message. This is done with the AddScriptToExecuteOnDocumentCreatedAsync method, which injects the code whenever a new page is created, in the InitializeAsync method:

        async void InitializeAsync()
        {

            await WebView.EnsureCoreWebView2Async(null);
            WebView.CoreWebView2.DOMContentLoaded += CoreWebView2_DOMContentLoaded;
            await WebView.CoreWebView2.AddScriptToExecuteOnDocumentCreatedAsync(@"
window.chrome.webview.addEventListener('message', event => {
  if (event.data === 'copy') {
    var results = [...document.querySelectorAll('[data-bi-name=""result""]')].map(a => {
            let aElement = a.querySelector('a');
            return {
                title: aElement.innerText,
                link: aElement.getAttribute('href')
            };
        });
    if (results.length >= 1){
      window.chrome.webview.postMessage(results);
    }
    else {
      alert('There are no results in the page');
    }
  }
});");
            WebView.CoreWebView2.WebMessageReceived += DataReceived;
        }

The JavaScript code adds the event listener, which gets all results on the page using the querySelectorAll method, maps them to an array of objects containing the title and link of each result, and then sends this array to the app with postMessage. If there are no results on the page, an alert message is shown. The code also sets the event handler for the WebMessageReceived event:

void DataReceived(object sender, CoreWebView2WebMessageReceivedEventArgs args)
{
    var data = args.WebMessageAsJson;
    Clipboard.SetText(data);
    MessageBox.Show("Results copied to the clipboard");
}

The handler gets the sent data with args.WebMessageAsJson and then sends it to the clipboard as text, from where it can be pasted into any program. Now, when you run the program, do a search and click the Copy button, you will have something like this in the clipboard:

[
    {
        "title": "Getting started with WebView2 for WinForms apps - Microsoft Edge Development",
        "link": "https://docs.microsoft.com/en-us/microsoft-edge/webview2/gettingstarted/winforms"
    },
    {
        "title": "Microsoft Edge WebView2 Control - Microsoft Edge Development",
        "link": "https://docs.microsoft.com/en-us/microsoft-edge/webview2/"
    },
    {
        "title": "Getting started with WebView2 for WinUI apps - Microsoft Edge Development",
        "link": "https://docs.microsoft.com/en-us/microsoft-edge/webview2/gettingstarted/winui"
    },
    {
        "title": "Getting started with WebView2 for WPF apps - Microsoft Edge Development",
        "link": "https://docs.microsoft.com/en-us/microsoft-edge/webview2/gettingstarted/wpf"
    },
    {
        "title": "Versioning of Microsoft Edge WebView2 - Microsoft Edge Development",
        "link": "https://docs.microsoft.com/en-us/microsoft-edge/webview2/concepts/versioning"
    },
    {
        "title": "Distribution of Microsoft Edge WebView2 apps - Microsoft Edge Development",
        "link": "https://docs.microsoft.com/en-us/microsoft-edge/webview2/concepts/distribution"
    },
    {
        "title": "Getting started with WebView2 for Win32 apps - Microsoft Edge Development",
        "link": "https://docs.microsoft.com/en-us/microsoft-edge/webview2/gettingstarted/win32"
    },
    {
        "title": "Release Notes for Microsoft Edge WebView2 for Win32, WPF, and WinForms - Microsoft Edge Development",
        "link": "https://docs.microsoft.com/en-us/microsoft-edge/webview2/releasenotes"
    },
    {
        "title": "Use JavaScript in WebView2 apps - Microsoft Edge Development",
        "link": "https://docs.microsoft.com/en-us/microsoft-edge/webview2/howto/js"
    },
    {
        "title": "Microsoft Edge WebView2 API Reference - Microsoft Edge Development",
        "link": "https://docs.microsoft.com/en-us/microsoft-edge/webview2/webview2-api-reference"
    }
]

Now we have an app that can browse to a page and interact with it.

Conclusion

There are many uses to this feature and the new Chromium WebView is a welcome addition to our toolbox, we can interact with the web page, retrieving and sending data to it.

The full source code for this article is at https://github.com/bsonnino/WebViewWpf

Introduction

You have an old, legacy app (with no tests), and its age is starting to show – it’s using an old version of the .NET Framework, it’s difficult to maintain, and every new feature introduced brings a lot of bugs. Developers are afraid to change it, but the users keep asking for new features. You are at a crossroads: throw the code away and rewrite everything, or refactor the code. Does this sound familiar to you?

I’m almost sure that you’re leaning toward throwing the code away and rewriting everything. Start fresh, use new technologies and create the wonderful app you’ve always dreamed of. But this comes with a cost: the old app is still there, functional, and must be maintained while you are developing the new one. There are no resources to develop both apps in parallel, and the new app will take a long time before it’s finished.

So, the only way to go is to refactor the old app. It’s not what you wanted, but it can still be fun – you will be able to use the new technologies, introduce good programming practices and, at the end, have the app you have dreamed of. No, I’m not saying it will be an easy path, but it will be the most viable one.

This article will show how to take a .NET 4 WPF app and port it to .NET 5, introduce the MVVM pattern and add tests to it. After that, you will be able to change its UI, using WinUI 3, like we did in this article.

The original app

The original app is a customer CRUD, developed in .NET 4, with two projects – the UI project, CustomerApp, and a library, CustomerLib, that accesses the customer data in an XML file (I did that just for the sake of simplicity, but this could easily be changed to another data source, like a database). You can get the app from here, and when you run it, you get something like this:

The first step will be converting it to .NET 5. Before that, we will see how portable our app is, using the .NET Portability Analyzer. It’s a Visual Studio extension that you can download from here. Once you download and install it, you can run it in Visual Studio with Analyze/Portability Analyzer Settings:

You must select the platforms you want and click OK. Then, you must select Analyze/Analyze Assembly Portability, select the executables for the app and click OK. That will generate an Excel file with the report:

As you can see, our app can be ported safely to .NET 5. If there are any problems, you can check them in the Details tab. There, you will have a list of all the APIs that can’t be ported, so you can find a workaround for each one. Now, we’ll start converting the app.

Converting the app to .NET 5

To convert the app, we’ll start by converting the lib to .NET Standard 2.0. To do that, right-click the Lib project in the Solution Explorer and select Unload Project, then edit the project file and change it to:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard2.0</TargetFrameworks>
  </PropertyGroup>
  <ItemGroup>
    <Content Include="Customers.xml">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </Content>
  </ItemGroup>
</Project>

The project file is very simple: it just sets the target framework to netstandard2.0 and keeps the item group relative to the XML file, so it’s included in the final project. Then, you must reload the project and remove the AssemblyInfo file from the Properties folder, as it isn’t needed anymore (if you leave it, it will generate an error, because it’s now generated automatically by the new project format).

Then, right click the app project in the Solution Explorer and select Unload Project, to edit the project file:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net5.0-windows</TargetFramework>
    <UseWPF>true</UseWPF>
    <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\CustomerLib\CustomerLib.csproj">
      <Name>CustomerLib</Name>
    </ProjectReference>
  </ItemGroup>
</Project>

We are setting the output type to WinExe and the target framework to net5.0-windows, telling the compiler that we will use the .NET 5 features plus the ones specific to Windows – without that, we wouldn’t be able to use WPF, which is Windows-only – and setting UseWPF to true. Then, we add an ItemGroup with the reference to the lib project. When we reload the project, we must delete the Properties folder, and we can build our app, now converted to .NET 5. It should work exactly the way it did before. We are on the right path, having ported the app to the newest .NET version, but we still have a long way to go. Now it’s time to add good practices to our app, using the MVVM pattern.

The MVVM Pattern

The MVVM (Model-View-ViewModel) pattern was created in 2005 by John Gossman, a Microsoft architect on the Blend team, and it makes extensive use of the data-binding feature present in WPF and other XAML platforms (like UWP or Xamarin). It provides separation between the data (Model) and its visualization (View), using a binding layer, the ViewModel.

The  ViewModel  is a class that implements the INotifyPropertyChanged interface:

public interface INotifyPropertyChanged 
{ 
  event PropertyChangedEventHandler PropertyChanged; 
}

It has just one event, PropertyChanged, which is raised when a property changes. The data-binding mechanism present in WPF (and in other XAML platforms) subscribes to this event and updates the view with no program intervention. So, all we need to do is create a class that implements INotifyPropertyChanged and raise this event whenever a property changes, so WPF updates the view.
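This is a minimal hand-rolled sketch of such a class (the PersonViewModel name and its Name property are just illustrative – MVVM frameworks provide this plumbing for you):

using System.ComponentModel;
using System.Runtime.CompilerServices;

public class PersonViewModel : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get => _name;
        set
        {
            if (_name == value)
                return;
            _name = value;
            OnPropertyChanged();   // any binding to Name re-reads the property
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged([CallerMemberName] string propertyName = null) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}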

The greatest advantage is that the ViewModel is a normal class and doesn’t have any dependency on the view layer. That way, we don’t need to initialize a window when we test the ViewModel. This image shows the basic structure of this pattern:

The Model communicates with the ViewModel through its properties and methods. The ViewModel communicates with the View mainly using data binding: it receives Commands from the View and can send messages to it. When there are many ViewModels that must communicate, they usually exchange messages, to maintain a decoupled architecture. That way, the Model (usually a POCO class – Plain Old CSharp Object) doesn’t know about the ViewModel, the ViewModel isn’t coupled with the View or with other ViewModels, and the View isn’t tied to a ViewModel directly (the only tie is the View’s DataContext property, which binds the View and the ViewModel; the rest is done by data binding).

We could implement all this infrastructure ourselves – it’s not a difficult task – but it’s better to use a framework for that. There are many MVVM frameworks out there, each one choosing a different approach to implement the infrastructure, and selecting one of them is just a matter of preference. I’ve used the MVVM Light Toolkit for years, it’s very lightweight and easy to use, but it isn’t maintained anymore, so I decided to look for another framework. Fortunately, the Windows Community Toolkit now provides a new framework, inspired by MVVM Light: the MVVM Community Toolkit.

Implementing the MVVM Pattern

In our project, the Model is already separated from the rest: it’s in the Lib project and we’ll leave that untouched, as we don’t have to change anything in the model. In order to use the MVVM Toolkit, we must add the Microsoft.Toolkit.Mvvm NuGet package.

Then, we must create the ViewModels to interact between the View and the Model. Create a new folder and name it ViewModel. In it, add a new class and name it MainViewModel.cs. This class will inherit from ObservableObject, from the toolkit, as it already implements the INotifyPropertyChanged interface. Then we will copy and adapt the code found in the code-behind of MainWindow.xaml.cs:

 

public class MainViewModel : ObservableObject
{
    private readonly ICustomerRepository _customerRepository;
    private Customer _selectedCustomer;

    public MainViewModel()
    {
        _customerRepository =  new CustomerRepository();
        AddCommand = new RelayCommand(DoAdd);
        RemoveCommand = new RelayCommand(DoRemove, () => SelectedCustomer != null);
        SaveCommand = new RelayCommand(DoSave);
        SearchCommand = new RelayCommand<string>(DoSearch);
    }

    public IEnumerable<Customer> Customers => _customerRepository.Customers;

    public Customer SelectedCustomer
    {
        get => _selectedCustomer;
        set 
        { 
            SetProperty(ref _selectedCustomer, value); 
            RemoveCommand.NotifyCanExecuteChanged(); 
        } 
    }

    public IRelayCommand AddCommand { get; }
    public IRelayCommand RemoveCommand { get; }
    public IRelayCommand SaveCommand { get; }
    public IRelayCommand<string> SearchCommand { get; }
    private void DoAdd()
    {
        var customer = new Customer();
        _customerRepository.Add(customer);
        SelectedCustomer = customer;
        OnPropertyChanged("Customers");
    }

    private void DoRemove()
    {
        if (SelectedCustomer != null)
        {
            _customerRepository.Remove(SelectedCustomer);
            SelectedCustomer = null;
            OnPropertyChanged("Customers");
        }
    }

    private void DoSave()
    {
        _customerRepository.Commit();
    }

    private void DoSearch(string textToSearch)
    {
        var coll = CollectionViewSource.GetDefaultView(Customers);
        if (!string.IsNullOrWhiteSpace(textToSearch))
            coll.Filter = c => ((Customer)c).Country.ToLower().Contains(textToSearch.ToLower());
        else
            coll.Filter = null;
    }
}

We have two properties, Customers and SelectedCustomer. Customers will contain the list of customers shown in the DataGrid. SelectedCustomer will be the customer selected in the DataGrid, which will be shown in the detail pane. There are four commands, and we use the IRelayCommand interface, declared in the toolkit. Each command is initialized with the method that will be executed when the command is invoked. RemoveCommand uses an overload of the constructor that takes a predicate as the second parameter; this predicate will only enable the button when there is a customer selected in the DataGrid. As this command depends on the selected customer, when we change that property we call the NotifyCanExecuteChanged method to notify all the elements that are bound to the command.

Now we can remove all the code from MainWindow.xaml.cs and leave only this:

public MainWindow()
{
    InitializeComponent();
    DataContext = new MainViewModel();
}

We can run the program and see that it runs the same way it did before, but we made a large refactoring to the code and now we can start implementing unit tests in the code.

Implementing tests

Now that we’ve separated the code from the view, we can test the ViewModel without the need to initialize a Window. That is really great, because we can have testable code and be assured that we are not breaking anything when we implement new features. For that, add a new test project and name it CustomerApp.Tests. In the Visual Studio version I’m using, there is no template for a .NET 5.0 test project available, so I added a .NET Core test project, then edited the project file and changed the TargetFramework to net5.0-windows. Then, you can add a reference to the CustomerApp project and rename UnitTest1 to MainViewModelTests.

Taking a look at the main ViewModel, we see that there is a coupling between it and the customer repository. In this case, it’s not much trouble, because we are reading the customers from an XML file located in the output directory, but if we decide to replace it with some kind of database, it can be tricky to test the ViewModel, because we would have to do a lot of setup to test it.

We’ll remove the dependency using Dependency Injection. Instead of using another framework for the dependency injection, we’ll use the integrated one, based on Microsoft.Extensions.DependencyInjection. You should add this NuGet package to the App project to use the dependency injection. Then, in App.xaml.cs, we’ll add code to set up the services:

public partial class App
{
    public App()
    {
        Services = ConfigureServices();
    }

    public new static App Current => (App) Application.Current;

    public IServiceProvider Services { get; }

    private static IServiceProvider ConfigureServices()
    {
        var services = new ServiceCollection();

        services.AddSingleton<ICustomerRepository, CustomerRepository>();
        services.AddSingleton<MainViewModel>();

        return services.BuildServiceProvider();
    }

    public MainViewModel MainVM => Services.GetService<MainViewModel>();
}

We declare a static property, Current, to ease the use of the App object, and declare an IServiceProvider, Services, to provide our services. They are configured in the ConfigureServices method, which creates a ServiceCollection and adds the CustomerRepository and the main ViewModel to the collection. ConfigureServices is called in the constructor of the application. Finally, we declare the property MainVM, which gets the ViewModel from the service collection.

Now, we can change MainWindow.xaml.cs to use the property instead of instantiating the ViewModel directly:

public MainWindow()
{
    InitializeComponent();
    DataContext = App.Current.MainVM;
}

The last change is to remove the coupling between the ViewModel and the repository using Dependency Injection, in MainViewModel.cs:

public MainViewModel(ICustomerRepository customerRepository)
{
    // Guard against a null repository; the null-coalescing throw also performs the assignment
    _customerRepository = customerRepository ??
                          throw new ArgumentNullException("customerRepository");
    AddCommand = new RelayCommand(DoAdd);
    RemoveCommand = new RelayCommand(DoRemove, () => SelectedCustomer != null);
    SaveCommand = new RelayCommand(DoSave);
    SearchCommand = new RelayCommand<string>(DoSearch);
}

With that, we’ve gone one step further and removed the coupling between the ViewModel and the repository, so we can start our tests.

For the tests, we will use two libraries: FluentAssertions, for better assertions, and FakeItEasy, to generate fakes. You should install both NuGet packages in your test project. Now, we can start creating our tests:

public class MainViewModelTests
{
    [TestMethod]
    public void Constructor_NullRepository_ShouldThrow()
    {
        Action act = () => new MainViewModel(null);

        act.Should().Throw<ArgumentNullException>()
            .Where(e => e.Message.Contains("customerRepository"));
    }

    [TestMethod]
    public void Constructor_Customers_ShouldHaveValue()
    {
        var repository = A.Fake<ICustomerRepository>();
        var customers = new List<Customer>();
        A.CallTo(() => repository.Customers).Returns(customers);
        var vm =  new MainViewModel(repository);

        vm.Customers.Should().BeEquivalentTo(customers);
    }

    [TestMethod]
    public void Constructor_SelectedCustomer_ShouldBeNull()
    {
        var repository = A.Fake<ICustomerRepository>();
        var vm = new MainViewModel(repository);

        vm.SelectedCustomer.Should().BeNull();
    }
}

Here we created three tests for the constructor, testing the values of the properties after the constructor. We can continue, testing the commands in the ViewModel:

[TestMethod]
public void AddCommand_ShouldAddInRepository()
{
    var repository = A.Fake<ICustomerRepository>();
    var vm = new MainViewModel(repository);

    vm.AddCommand.Execute(null);
    A.CallTo(() => repository.Add(A<Customer>._)).MustHaveHappened();
}

[TestMethod]
public void AddCommand_SelectedCustomer_ShouldNotBeNull()
{
    var repository = A.Fake<ICustomerRepository>();
    var vm = new MainViewModel(repository);
    vm.AddCommand.Execute(null);
    vm.SelectedCustomer.Should().NotBeNull();
}

[TestMethod]
public void AddCommand_ShouldNotifyCustomers()
{
    var repository = A.Fake<ICustomerRepository>();
    var vm = new MainViewModel(repository);
    var wasNotified = false;
    vm.PropertyChanged += (s, e) =>
    {
        if (e.PropertyName == "Customers")
            wasNotified = true;
    };
    vm.AddCommand.Execute(null);
    wasNotified.Should().BeTrue();
}

[TestMethod]
public void RemoveCommand_SelectedCustomerNull_ShouldNotRemoveInRepository()
{
    var repository = A.Fake<ICustomerRepository>();
    var vm = new MainViewModel(repository);
    vm.RemoveCommand.Execute(null);
    A.CallTo(() => repository.Remove(A<Customer>._)).MustNotHaveHappened();
}

[TestMethod]
public void RemoveCommand_SelectedCustomerNotNull_ShouldRemoveInRepository()
{
    var repository = A.Fake<ICustomerRepository>();
    var vm = new MainViewModel(repository);
    vm.SelectedCustomer = new Customer();
    vm.RemoveCommand.Execute(null);
    A.CallTo(() => repository.Remove(A<Customer>._)).MustHaveHappened();
}

[TestMethod]
public void RemoveCommand_SelectedCustomer_ShouldBeNull()
{
    var repository = A.Fake<ICustomerRepository>();
    var vm = new MainViewModel(repository);
    vm.SelectedCustomer = new Customer();
    vm.RemoveCommand.Execute(null);
    vm.SelectedCustomer.Should().BeNull();
}

[TestMethod]
public void RemoveCommand_ShouldNotifyCustomers()
{
    var repository = A.Fake<ICustomerRepository>();
    var vm = new MainViewModel(repository);
    vm.SelectedCustomer = new Customer(); 
    var wasNotified = false;
    vm.PropertyChanged += (s, e) =>
    {
        if (e.PropertyName == "Customers")
            wasNotified = true;
    };
    vm.RemoveCommand.Execute(null);
    wasNotified.Should().BeTrue();
}

[TestMethod]
public void SaveCommand_ShouldCommitInRepository()
{
    var repository = A.Fake<ICustomerRepository>();
    var vm = new MainViewModel(repository);
    vm.SaveCommand.Execute(null);
    A.CallTo(() => repository.Commit()).MustHaveHappened();
}

[TestMethod]
public void SearchCommand_WithText_ShouldSetFilter()
{
    var repository = A.Fake<ICustomerRepository>();
    var vm = new MainViewModel(repository);
    vm.SearchCommand.Execute("text");
    var coll = CollectionViewSource.GetDefaultView(vm.Customers);
    coll.Filter.Should().NotBeNull();
}

[TestMethod]
public void SearchCommand_WithoutText_ShouldSetFilter()
{
    var repository = A.Fake<ICustomerRepository>();
    var vm = new MainViewModel(repository);
    vm.SearchCommand.Execute("");
    var coll = CollectionViewSource.GetDefaultView(vm.Customers);
    coll.Filter.Should().BeNull();
}

Now we have all the tests for the ViewModel and our project is ready for the future. We went step by step and finished with a .NET 5.0 project that uses the MVVM pattern and has unit tests. This project is ready to be updated to WinUI 3, or even to be ported to UWP or Xamarin. The separation between the code and the UI makes it easy to port to other platforms, and the ViewModel became testable, so you can test all the logic in it without bothering with the UI. Nice, no?

The full source code for the project is at https://github.com/bsonnino/MvvmApp

 

In the last post I showed how to use XAML Islands to modernize your .NET app. As you could see, there are a lot of steps and the procedure is a little clumsy. But things are getting better: Microsoft has introduced WinUI 3 and, with it, you don’t need special components to host your WinUI code – you can use the WinUI components directly in your app. There is a pitfall, though – right now, as I’m writing this article (end of 2020), WinUI 3 is still in preview and you need the preview version of Visual Studio to use it. You can install the preview version side-by-side with the production version, there is no problem with that. You can download the preview version from here. Once you download and install the preview version of Visual Studio, you need to download the WinUI 3 templates to create new apps. This is done by installing the VSIX package from here. Now we are ready to modernize our app from the last post with WinUI 3. In Visual Studio Preview, create a new app. We’ll create a new blank, packaged desktop app:


When you create that app, Visual Studio will create a solution with two projects: the app project and a packaging project that creates an MSIX package, which allows your program to run in a sandbox and to be submitted to the Windows Store:


If you run the application right now, the package will install the app, with a button in the center of the window that changes its text when clicked. We’ll change the app to show our photos. The first step is to add the Newtonsoft.Json NuGet package. The MVVM Light package wasn’t ported to .NET 5.0, so we’ll use the MVVM framework from the Community Toolkit (https://github.com/windows-toolkit/MVVM-Samples). In the NuGet package manager, install the Microsoft.Toolkit.Mvvm package. Once you have installed these packages, you can copy the Converters, Model, Photos and ViewModels folders from the WPFXamlIslands project from my GitHub. If you build the application, you will see some errors. Some of them are due to the conversion to WinUI 3 and others are due to the use of the new MVVM framework. Let’s begin converting the files for the framework. The MVVM Toolkit doesn’t have the ViewModelBase class, but it has the similar ObservableObject. In MainViewModel.cs, change the base class of the ViewModel to ObservableObject:

public class MainViewModel : ObservableObject

Then, we must change the IoC container that is used in MVVM Light to the one used in the MVVM toolkit. For that, you must add this code in App.xaml.cs:

/// <summary> 
/// Gets the current <see cref="App"/> instance in use 
/// </summary> 
public new static App Current => (App)Application.Current; 

/// <summary> 
/// Gets the <see cref="IServiceProvider"/> instance to resolve application services. 
/// </summary> 
public IServiceProvider Services { get; } 

/// <summary> 
/// Configures the services for the application. 
/// </summary> 
private static IServiceProvider ConfigureServices() 
{ 
  var services = new ServiceCollection(); 
  services.AddSingleton<MainViewModel>(); 
  return services.BuildServiceProvider(); 
}

This code configures the service collection, registering MainViewModel as a singleton, and builds the service provider. We should call the ConfigureServices method in the constructor, like this:

public App() 
{
  Services = ConfigureServices();
  this.InitializeComponent();
  this.Suspending += OnSuspending; 
}
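
Any service registered in ConfigureServices can then be resolved from the Services property anywhere in the app. A minimal sketch using the generic GetRequiredService extension method (it lives in the Microsoft.Extensions.DependencyInjection namespace, the same package that provides ServiceCollection):

// using Microsoft.Extensions.DependencyInjection;
// Resolves the singleton registered in ConfigureServices; throws if the type was not registered
var viewModel = App.Current.Services.GetRequiredService<MainViewModel>();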

Now we can get a reference to the ViewModel whenever we need it, to use as the DataContext for the view. In MainWindow.xaml.cs, if you try to set the DataContext like this, you get an error saying that DataContext is not defined:

public MainWindow()
{
    this.InitializeComponent();
    DataContext = App.Current.Services.GetService(typeof(MainViewModel));
}

That’s because the Window in WinUI isn’t a UIElement and thus doesn’t have a DataContext. There are two workarounds for this issue:

  • Use x:Bind, which doesn’t need a DataContext for data binding (a sketch of this option appears after the code below)
  • Set the DataContext on the main element of the window

So, we’ll choose the second option and initialize the DataContext with this code:

public MainWindow()
{
    this.InitializeComponent();
    MainGrid.DataContext = App.Current.Services.GetService(typeof(MainViewModel));
}
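
For reference, the first option would mean exposing the ViewModel from the window’s code-behind and binding with x:Bind, which resolves against members of the window class. A minimal sketch (the ViewModel property name is my own choice, not part of the template):

public sealed partial class MainWindow : Window
{
    // x:Bind binds against members of this class, so the XAML could use
    // ItemsSource="{x:Bind ViewModel.Photos}" instead of relying on a DataContext
    public MainViewModel ViewModel { get; } =
        (MainViewModel)App.Current.Services.GetService(typeof(MainViewModel));

    public MainWindow()
    {
        this.InitializeComponent();
    }
}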

The next step is to remove the ViewModelLocator, as it won’t be needed. One other error is in the converter. The namespaces in the WinUI have changed, and we must change them to use the code. System.Windows.Data has changed to Microsoft.UI.Xaml.Data and System.Windows.Media.Imaging has changed to Microsoft.UI.Xaml.Media.Imaging (as you can see, there was a change from System.Windows to Microsoft.UI.Xaml).

Besides that, WinUI still implements the IValueConverter interface, but the signatures of Convert and ConvertBack are different (the last parameter is a string language instead of a CultureInfo), so we have to change the signature of the methods (the body remains unchanged):

public object Convert(object value, Type targetType, object parameter, string language)
{
    string imagePath = $"{AppDomain.CurrentDomain.BaseDirectory}Photos\\{value}.jpg";
    BitmapImage bitmapImage = !string.IsNullOrWhiteSpace(value?.ToString()) &&
                              File.Exists(imagePath) ?
        new BitmapImage(new Uri(imagePath)) :
        null;
    return bitmapImage;
}

public object ConvertBack(object value, Type targetType, object parameter, string language)
{
    throw new NotImplementedException();
}

Now we can add the FlipView to the main window. In this case, we don’t need any component nor setup in the code behind – all the code is in MainWindow.xaml:

<Grid x:Name="MainGrid">
 <FlipView ItemsSource="{Binding Photos}">
   <FlipView.ItemTemplate>
     <DataTemplate>
       <Grid Margin="5">
         <Grid.RowDefinitions>
           <RowDefinition Height="*" />
           <RowDefinition Height="40" />
         </Grid.RowDefinitions>
         <Image Source="{Binding PhotoUrl}" Grid.Row="0" Margin="5"
             Stretch="Uniform" />
         <TextBlock Text="{Binding UserName}" HorizontalAlignment="Center" VerticalAlignment="Center" Grid.Row="1"/>
       </Grid>
     </DataTemplate>
   </FlipView.ItemTemplate>
 </FlipView>
</Grid>

With that, we can run the program and we get another error:


We get a DirectoryNotFoundException because of the package. When we are using the package, the current directory changes and it’s not the install folder anymore, so we have to point to the full path. This is done with code like this:

public MainViewModel()
{
    var fileName = $"{AppDomain.CurrentDomain.BaseDirectory}Photos\\__credits.json";
    Photos = JsonConvert.DeserializeObject<Dictionary<string, PhotoData>>(
        File.ReadAllText(fileName),
        new JsonSerializerSettings
        {
            ContractResolver = new DefaultContractResolver
            {
                NamingStrategy = new SnakeCaseNamingStrategy()
            }
        }).Select(p => new PhotoData() { PhotoUrl = $".\\Photos\\{p.Key}.jpg", UserName = p.Value.UserName });
}

Now, when you run the app, it should run ok, but I got a C++ exception:


In this case, I just unchecked the “Break when this exception is thrown” box and everything worked fine:


We now have our app modernized with WinUI 3 components; there is no need to add a special component to the interface, like Xaml Islands, and everything works as a single program. There were some things to update, like components that weren’t ported to .NET 5, new namespaces and some issues with paths, but the changes were pretty seamless. WinUI 3 is still in preview and there will surely be changes before it goes to production, but you can already have a glance of what is coming and how you can modernize your apps with WinUI 3.

All the code for this article is at https://github.com/bsonnino/WinUiViewer

After I finished the last article, I started to think that there should be another way to test the non-virtual methods of a class. And, as a matter of fact, there is another way that has been around for a long time: Microsoft Fakes. If you don’t know it, you can read this article.

While I wouldn’t recommend it for your daily testing, it’s invaluable when you are testing legacy code that has no tests and lots of coupling. The usage is very simple:

In the Solution Explorer of the test project, right-click the dll you want to create a fake for (in our case, it’s the dll of the main project) and select Add Fakes Assembly:

That will add the fake assembly and all the infrastructure needed to use fakes in your tests. Then, in your test class, you can use the newly created fake. Just create a ShimsContext and use it while testing the class:

[TestMethod]
public void TestVirtualMethod()
{
    using (ShimsContext.Create())
    {
        var fakeClass = new Fakes.ShimClassNonVirtualMethods();
        var sut = new ClassToTest();
        sut.CallVirtualMethod(fakeClass);
    }
}

[TestMethod]
public void TestNonVirtualMethod()
{
    using (ShimsContext.Create())
    {
        var fakeClass = new Fakes.ShimClassNonVirtualMethods();
        var sut = new ClassToTest();
        sut.CallNonVirtualMethod(fakeClass);
    }
}

As you can see, we initialize a ShimsContext and enclose it in a using block. Then we initialize the fake class. This class will be in the same namespace as the original one, with .Fakes appended at the end, and its name will be the same, prefixed with Shim. That way, we can use it as a fake for the original class, and all of its methods, including the non-virtual ones, can be detoured.
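
Besides detouring the calls, the shim also lets you replace a method’s behavior on a per-instance basis. A minimal sketch, assuming Fakes generates a settable delegate property for the parameterless method (the exact property name depends on the method’s signature):

[TestMethod]
public void TestNonVirtualMethodWithCustomBehavior()
{
    using (ShimsContext.Create())
    {
        var fakeClass = new Fakes.ShimClassNonVirtualMethods
        {
            // Assumption: the shim exposes the parameterless method as a settable delegate
            NonVirtualMethod = () => Console.WriteLine("Shimmed!")
        };
        var sut = new ClassToTest();
        sut.CallNonVirtualMethod(fakeClass);
    }
}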

That feature, although not recommended on a daily basis, can be a lifesaver when you have to test legacy code with deep coupling to databases, UI, or even other libraries. Microsoft Fakes can fake almost everything, including System calls, so it can be a nice starting point to refactor your legacy code – with it, you can put some tests in place and start refactoring your code more confidently.

 

Introduction

Usually I use interfaces for my unit tests. This is a good practice, and should be followed when possible. But last week I was testing some legacy code which had no interfaces and stumbled on something: I was trying to mock a class with a non-virtual method and saw that the original method was being executed, instead of doing nothing (the expected behavior for the mock). I was using NSubstitute, but the same principles apply to most of the popular mocking frameworks, like Moq, FakeItEasy or Rhino Mocks. All of them use Castle DynamicProxy as their backend to create a proxy around the object and do their magic. All of them state that they can only mock virtual methods, so I should have been aware that I could not mock non-virtual methods.

This would be ok if the class to be mocked were yours: just add the virtual keyword to your method and you’re done. You’ll pay a small size and performance penalty in exchange for the flexibility. Or you could use something like Typemock Isolator (a great mocking framework that can mock almost anything) – that would work even if you don’t have access to the source code. But why aren’t non-virtual methods mocked?

Let’s start with an example. Let’s say I have the code of the class below:

public class ClassNonVirtualMehods
{
    public void NonVirtualMethod()
    {
         Console.WriteLine("Executing Non Virtual Method");
    }

    public virtual void VirtualMethod()
    {
        Console.WriteLine("Executing Virtual Method");
    }
}

My class to test receives an instance of this class:

public class ClassToTest
{
    public void CallVirtualMethod(ClassNonVirtualMehods param)
    {
        param.VirtualMethod(); 
    }

    public void CallNonVirtualMethod(ClassNonVirtualMehods param)
    {
        param.NonVirtualMethod();
    }
}

My tests are (I’m using NSubstitute here):

[TestClass]
public class UnitTest1
{
    [TestMethod]
    public void TestVirtualMethod()
    {
        var mock = Substitute.For<ClassNonVirtualMehods>();
        var sut = new ClassToTest();
        sut.CallVirtualMethod(mock);
    }

    [TestMethod]
    public void TestNonVirtualMethod()
    {
        var mock = Substitute.For<ClassNonVirtualMehods>();
        var sut = new ClassToTest();
        sut.CallNonVirtualMethod(mock);
    }
}

If you run these tests, you’ll see this:

You’ll see that the non-virtual method is executed (writing to the console), while the virtual method is not executed, as we expected.

Why aren’t non-virtual methods mocked?

To explain that, we should see how Castle DynamicProxy works and how C# polymorphism works.

From here, you can see that DynamicProxy works with two types of proxies: inheritance-based and composition-based. In order to put their mocks in place, the mocking frameworks use the inheritance-based proxy, which is the same as creating an inherited class (the mock) that overrides the methods and puts the desired behavior in them. It would be something like this:

public class MockClassWithNonVirtualMethods : ClassNonVirtualMehods
{
    public override void VirtualMethod()
    {
        // Don't do anything
    }
}

In this case, as you can see, you can only override virtual methods, so the non-virtual methods will continue to execute the same way. If you write tests like these, you will see the same behavior as you did with NSubstitute:
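
For illustration, such tests could look like this (a sketch reusing ClassToTest and the manual mock above):

[TestClass]
public class ManualMockTests
{
    [TestMethod]
    public void TestVirtualMethodWithManualMock()
    {
        // The override suppresses the virtual method, so nothing is written to the console
        var mock = new MockClassWithNonVirtualMethods();
        var sut = new ClassToTest();
        sut.CallVirtualMethod(mock);
    }

    [TestMethod]
    public void TestNonVirtualMethodWithManualMock()
    {
        // The non-virtual method cannot be overridden, so the original code still runs
        var mock = new MockClassWithNonVirtualMethods();
        var sut = new ClassToTest();
        sut.CallNonVirtualMethod(mock);
    }
}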

You could use the composition-based proxy, but when you read the documentation, you’ll see that this is not an option:

Class proxy with target – this proxy kind targets classes. It is not a perfect proxy and if the class has non-virtual methods or public fields these can’t be intercepted giving users of the proxy inconsistent view of the state of the object. Because of that, it should be used with caution.

Workaround for non-virtual methods

Now that we know why mocking frameworks don’t mock non-virtual methods, we must find a workaround.

My first option was to use the RealProxy class to create the mock and intercept the calls, as shown in my article in MSDN Magazine. This is a nice and elegant solution, as the impact on the source code would be minimal: just create a generic class to create the proxy and use it instead of the real class. But it turned out that RealProxy has two disadvantages:

  • It can only create proxies for classes that inherit from MarshalByRefObject or for interfaces, and our classes are neither
  • It isn’t available in .NET Core, where it was replaced by DispatchProxy, which is different and can only proxy interfaces

So I discarded this option and went looking for another one. I considered changing the IL code at runtime using Reflection.Emit, but this is error-prone and not an easy task, so I kept searching until I found Fody. This is a very interesting framework that lets you add new code to your assemblies at build time, by weaving the IL during compilation. It achieves the same thing I was trying to do, with the difference that the solutions are ready and tested. Just add an add-in and you’re set – the task you are trying to do is probably already covered. There are a lot of add-ins for it and they are very simple to use. If you don’t find what you need, you can develop your own add-in. In our case, there is already an add-in for this: Virtuosity.

To use it, all we have to do is install the Fody and Virtuosity.Fody packages in your project and add an XML file named FodyWeavers.xml with this content:

<?xml version="1.0" encoding="utf-8"?>
<Weavers xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="FodyWeavers.xsd">
  <Virtuosity />
</Weavers>

Set its Build Action to None and Copy to Output Directory to Copy If Newer, and you’re set. This will turn all your non-virtual methods into virtual ones, and you will be able to mock them with any mocking framework, with no need to change anything else in your project.
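
As a quick check, after the weaving you could write a test like this (a sketch using NSubstitute’s Received to confirm the call was intercepted instead of running the original method):

[TestMethod]
public void TestNonVirtualMethodAfterWeaving()
{
    // With Virtuosity, NonVirtualMethod becomes virtual, so the substitute intercepts it
    var mock = Substitute.For<ClassNonVirtualMehods>();
    var sut = new ClassToTest();
    sut.CallNonVirtualMethod(mock);
    mock.Received().NonVirtualMethod();
}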

Conclusion

The procedure described here is not something that I would recommend doing on a daily basis, but when you have some legacy code to test, where changes to the source code are difficult to make, it can be a lifesaver. When you have the time and the means, you should extract an interface and use it in your code, with dependency injection. That way, you will reduce coupling and enhance maintainability.

 

All the source code for this article is at https://github.com/bsonnino/MockNonVirtual

Introduction

Learning a new programming language is always hard. You have to read the documentation, stop, go to the IDE of your choice, start a new project, compile and run the program, and see the results. Stop. Restart. After some time, you have no more patience for the process and start to skip some steps (maybe I don’t need to run this sample here…), until you try to find an easier way – but most of the time there is no easier way.

Well, now there is an easier way. The dotnet team has created a new tool, called dotnet try that allows you to have documentation mixed with a code sample window, where you can try the code while reading the documentation. Cool, no?

Starting with dotnet try

To use this tool, you must install .NET Core on your machine and then install dotnet try with:

dotnet tool install -g dotnet-try

Once you do that, you can use the tool with dotnet try. But the best way to start is to use the tool’s own demo with dotnet try demo. This will load the tutorial, create a server and open the browser with the tutorial:

There you can see the documentation side-by-side with the code. You can change the code, click on the run arrow and look at the results. Nice! You can follow the tutorial and learn how to create your documentation easily. Or you can continue reading and I will tell you how to use it (sorry, no dotnet try here :-)).

If you want to learn something about .NET, you can run the samples. Just clone the dotnet/try-samples repository, change to the cloned folder and type dotnet try to open it.

There you can see some tutorials:

  • Beginner’s guide
  • 101 Linq Samples
  • C# 7
  • C# 8

With that, you can start playing with the tool. I recommend checking the Linq samples – there is always something to learn about Linq – and the new features in C# 7 and 8.

Once you’ve played with the tool, it’s time to create your own documentation.

Creating documentation for your code

All documentation is based on the Markdown language. It’s a very simple way to create your documentation. If you don’t know how to use the Markdown language, you can learn it here.

Create a new folder and, in this folder, create a readme.md file and add this code:

# Documenting your code with **dotnet try**
## Introduction
To document your code with dotnet try, you must mix your markdown text with some fences

Once you save it and run dotnet try, you can see something like this in the browser:

Now you can start adding code. To add the code window, we must add a code fence, a block of code delimited by triple backticks. You can add code directly to the code fence with something like this:

using System;

namespace dotnettrypost
{
    class Program
    {
        static void Main(string[] args)
        {
            #region HelloWorld
            Console.WriteLine("Hello World!");
            #endregion
        }
    }
}

This is great, but it can pose a problem: if you need to change something in your code, you must also change the markdown file, and that is less than optimal, as at some point you will forget to update one of the two places. But there is a better way: you can use the source code as the single source of truth, so you don’t have to update both places – when you change the source code, the code in the page changes accordingly. To do this, you can create a console app with

dotnet new console

This creates a simple project that writes Hello World to the console. It’s pretty basic, but it fits our needs. Now, we will document it. You can add the following code at the end of the readme.md file to show the contents of the program in editable form:

Below, you should see the code in the program.cs file:
```cs --source-file ./Program.cs --project ./dotnettrypost.csproj
```

In this code fence, you must add the name of the file and the project. If you run dotnet try again, you will see something like this:

You can click the arrow to run the code; the first time, it will take a little while to compile, but after that it will be faster. You can change the code and see the results:

You have here a safe environment where you can try changes in the code. If you make a mistake, the window will point it out, so you can fix it.

As you can see, the code for the entire file is shown, but sometimes that’s not what you want – sometimes you just want to show a snippet from the file. To do that, you must create regions in your code. In the command prompt, type code . to open Visual Studio Code (if you don’t have it installed, go to https://code.visualstudio.com/download). Then add a region to your code:

using System;

namespace dotnettrypost
{
    class Program
    {
        static void Main(string[] args)
        {
            #region HelloWorld
            Console.WriteLine("Hello World!");
            #endregion
        }
    }
}

 

Save the file and then edit the Readme.md file to change the code fence:

```cs --source-file ./Program.cs --project ./dotnettrypost.csproj --region HelloWorld
```

When you save the file and refresh the page, you will see something like this:

Creating regions in your code and referencing them in the code fence allows you to separate the code you want in the code window. That way, a single file can provide code for many code windows.

If you don’t want to have a runnable window, but still synchronized with the code, you can use the editable parameter:

```cs --source-file ./Program.cs --project ./dotnettrypost.csproj --region HelloWorld --editable false
```

When you run the code, even if you aren’t showing the full file, everything in the program is executed. For example, if you change the source code to:

using System;
namespace dotnettrypost
{
    class Program
    {
        static void Main(string[] args)
        {
            #region HelloWorld
            Console.WriteLine("Hello World!");
            #endregion
            Console.WriteLine("This won't be shown in the snippet but will be shown when you run");
        }
    }
}

You will see the same code snippet but when you run it you will see something like this:

If you don’t want this, you need to do some special treatment in your code: you must make each snippet of code run alone. You can do that by processing the parameters passed to the program:

using System;

namespace dotnettrypost
{
    class Program
    {
        static void Main(string[] args)
        {
            for (var i = 0; i < args.Length; i++)
                if (args[i] == "--region")
                {
                    if (args[i + 1] == "HelloWorld")
                        HelloWorld();
                    else if (args[i + 1] == "ByeBye")
                        ByeBye();
                }
        }

        private static void HelloWorld()
        {
            #region HelloWorld
            Console.WriteLine("Hello World!");
            #endregion
        }

        private static void ByeBye()
        {
            #region ByeBye
            Console.WriteLine("ByeBye!");
            #endregion
        }
    }
}
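
Note that dotnet try runs the project passing the fence options, including --region, as command-line arguments – that is what the loop in Main above relies on to decide which snippet to execute. The readme can then reference each region with its own fence, using the same syntax shown earlier:

```cs --source-file ./Program.cs --project ./dotnettrypost.csproj --region ByeBye
```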

Now, when you run the code in the HelloWorld window, it will run only the code for that snippet. As you can see, there are a lot of options to show and run code in the web page. I see many ways to improve documentation and experimentation with new APIs. And you – doesn’t this bring you new ideas?

All the source code for this project is at https://github.com/bsonnino/dotnettry

One of the perks of being an MVP is receiving some tools for my own use, so I can evaluate them and, if I find them valuable, use them on a daily basis. In my work as an architect/consultant, one task that I often face is to analyse an application and suggest changes to make it more robust and maintainable. One tool that can help me in this task is NDepend (https://www.ndepend.com/). With this tool, you can analyse your code, verify its dependencies, set coding rules and check how code changes are affecting the quality of your application. For this article, we will be analyzing eShopOnWeb (https://github.com/dotnet-architecture/eShopOnWeb), a sample application created by Microsoft to demonstrate architectural patterns on .NET Core. It’s an ASP.NET Core MVC 2.2 app that implements a sample shopping site and can run on premises, in Azure or in containers, using a microservices architecture. It has a companion book that describes the application architecture and can be downloaded at https://aka.ms/webappebook.

When you download, compile and run the app, you will see something like this:

You have a full-featured shopping app, with everything you would expect in this kind of app. The next step is to start analyzing it.

Download NDepend (you have a 14-day trial to use and evaluate it) and install it on your machine; it will install as an add-in to Visual Studio. Then, in Visual Studio, select Extensions/NDepend/Attach new NDepend Project to current VS Solution. A window like this opens:

It has the projects in the solution already selected, so you can click on Analyze 3 .NET Assemblies. After it runs, it opens a web page with a report of its findings:

This report has an analysis of the project, where you can verify the problems NDepend found, the project dependencies, and drill down in the issues. At the same time, a window like this opens in Visual Studio:

If you want a more dynamic view, you can view the dashboard:

 

In the dashboard, you have a full analysis of your application: lines of code, methods, assemblies and so on. One interesting metric is the technical debt, where you can see how much technical debt there is in this app and how long it would take to fix it (in this case, we have 3.72% of technical debt and 2 days to fix it). We also have the code metrics and the violated coding rules. If you click on some item in the dashboard, like the # of Lines, you will see the detail in the properties window:

If we take a look at the dashboard, we’ll see some issues that must be improved. In the Quality Gates, we have two issues that fail. By clicking on the number 2 there, we see this in the Quality Gates evolution window:

If we hover the mouse on one of the failed issues, we get a tooltip that explains the failure:

If we double-click on the failure, we drill down to what caused it:

If we click on the second issue, we see that there are two names used in different classes: Basket and IEmailSender:

Basket is the name of the class in Microsoft.eShopWeb.Web.Pages.Shared.Components.BasketComponent and in Microsoft.eShopWeb.ApplicationCore.Entities.BasketAggregate

One other thing that you can see is the dependency graph:

With it, you can see how the assemblies relate to each other and get a first look at the architecture. If you filter the graph to show only the application assemblies, you get a good overview of what’s happening:

The largest assembly is Web, followed by Infrastructure and ApplicationCore. The application is well layered, there are no cyclic calls between assemblies (assembly A calling assembly B, which calls A back), there is a weak relation between Web and Infrastructure (given by the width of the line that joins them) and a strong one between Web and ApplicationCore. If we had a large solution with many assemblies, just with that diagram we could get a good idea of what’s happening and whether we are doing things right. The next step is to go into the details and look at the assembly dependencies. You can hover over an assembly and get some info about it:

For example, the Web assembly has 58 issues detected, with an estimated time to fix them of 1 day. This estimate is calculated by NDepend using the complexity of the methods that must be fixed, but you can set your own formula to calculate the technical debt if this doesn’t suit your team.

Now that we have an overview of the project, we can start fixing the issues. Let’s start with the easiest ones :-). The Infrastructure assembly has only two issues and a debt of 20 minutes. In the diagram, we can right-click on the Infrastructure assembly and select Select Issues/On Me and on Child Code elements. This will open the issues in the Queries and Rules Edit window, at the right:

We can then open the tree and double-click on the second issue. It points to the rule “Non-static classes should be instantiated or turned to static”, flagging the SpecificationEvaluator<T> class. This is a class that has only one static method and is referenced only once, so there’s no harm in making it static. Once you make it static, build the app and run the dependency check again, you will see this:

Oh-oh. We fixed an issue and introduced another one – an API breaking change – because when we made that class static, we removed its constructor. In this case, it wasn’t really an API change, because nobody would instantiate a class with only static methods, so we should reset the baseline here. Go to the Dashboard, and in Choose Baseline select define:

Then select the most recent analysis and click OK. That will open the NDepend settings, where you will see the new baseline. Save the settings, rerun the analysis, and the error is gone.

We can then open the tree again and double-click on the other issue that remains in Infrastructure. That will open the source code, pointing to a readonly declaration of a DbContext field. This is not a big issue: it’s only telling us that we are declaring the field as readonly, but the object it stores is mutable, so it can still be changed. There is a mention of this issue in the Framework Design Guidelines, by Microsoft – https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/. If you hover the mouse over the issue, there is a tooltip on how to fix it:

We have three ways to fix this issue:

  • Remove the readonly from the field
  • Make the field private instead of protected
  • Use an attribute to say “ok, I am aware of this, but I don’t mind”

The first option would suppress the error, but would also remove what we want to express – that this field should not be replaced with another DbContext. The second option would remove the possibility of using the DbContext in derived classes, so I’ll choose the third option and add the attribute. If I right-click on the issue in the Rules and Queries window and select Suppress Issue, a window opens:

All I have to do is copy the attribute to the clipboard and paste it into the source code. I also have to declare the symbol CODE_ANALYSIS in the project (Project/Properties/Build).
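
For reference, the pasted attribute looks roughly like this (the rule id, message and field below are placeholders following eShopOnWeb’s repository class – copy the exact text that NDepend generates for your issue; the attribute comes from System.Diagnostics.CodeAnalysis and is only kept when CODE_ANALYSIS is defined):

// Placeholder rule id and justification - use the attribute text generated by the Suppress Issue window
[SuppressMessage("NDepend", "NDxxxx:AvoidMutableReadOnlyFields", Justification = "The DbContext is intentionally exposed to derived repositories")]
protected readonly CatalogContext _dbContext;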

That was easy! Let’s go to the next one: the use of an obsolete method. Fortunately, the description tells us to use the UseHiLo method instead. We change the method, run the app to check that nothing is broken, and we’re good. We can run the analysis again and see what happened:

We had a slight decrease in the technical debt: we solved one high issue and one violated rule. As you can see, NDepend not only analyzes your code, but also shows you the impact of your code changes. This is a very well architected project (as it should be – it’s an architecture sample), so the issues are minor, but you can see what can be done with NDepend. When you have a messy project, it will surely be an invaluable tool!

 

You need to create customized reports based on Sharepoint data and you don’t have access to the server to create a new web part to access it directly, so you need to resort to other options to generate the report. Fortunately, Sharepoint offers several ways to access its data, so you can get the information you need. If you are using C#, you can use one of these three methods:

  • Client Side Object Model (CSOM) – a set of APIs that gives access to the Sharepoint data and lets you maintain lists and documents in Sharepoint
  • REST API – with this API you can access Sharepoint data not only from C#, but from any other platform that can perform and process REST requests, like web or mobile apps
  • SOAP web services – although this kind of access is deprecated, a lot of programs still depend on it, so it’s still actively used. You can use this API from any platform that can consume SOAP web services.

This article will show how to use these three APIs to access lists and documents from a Sharepoint server and show them in a WPF program, with an extra touch: the user will be able to export the data to Excel for further manipulation.

To start, let’s create a WPF project in Visual Studio and name it SharepointAccess. We will use the MVVM pattern to develop it, so right-click the References node in the Solution Explorer, select Manage NuGet Packages and add the MVVM Light package. That will add a ViewModel folder with two files in it to your project. The next step is to make a correction in the ViewModelLocator.cs file. If you open it, you will see an error in the line

using Microsoft.Practices.ServiceLocation;

Just replace this using clause with

using CommonServiceLocator;

The next step is to link the DataContext of MainWindow to the MainViewModel, like it’s described in the header of ViewModelLocator:

<Window x:Class="SharepointAccess.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:SharepointAccess"
        mc:Ignorable="d"
        Title="Sharepoint Access" Height="700" Width="900"
        DataContext="{Binding Source={StaticResource Locator}, Path=Main}">

Then, let’s add the UI to MainWindow.xaml:

<Grid>
    <Grid.Resources>
        <DataTemplate x:Key="DocumentsListTemplate">
            <StackPanel>
                <TextBlock Text="{Binding Title}" />
            </StackPanel>
        </DataTemplate>
        <DataTemplate x:Key="DocumentTemplate">
            <StackPanel Margin="0,5">
                <TextBlock Text="{Binding Title}" />
                <TextBlock Text="{Binding Url}" />

            </StackPanel>
        </DataTemplate>
        <DataTemplate x:Key="FieldsListTemplate">
            <Grid >
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="150" />
                    <ColumnDefinition Width="*" />
                </Grid.ColumnDefinitions>
                <TextBlock Text="{Binding Key}" TextTrimming="CharacterEllipsis"/>
                <TextBlock Text="{Binding Value}" Grid.Column="1" TextTrimming="CharacterEllipsis"/>
            </Grid>
        </DataTemplate>
    </Grid.Resources>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
    <Grid.RowDefinitions>
        <RowDefinition Height="40" />
        <RowDefinition Height="*" />
        <RowDefinition Height="40" />
    </Grid.RowDefinitions>
    <StackPanel Grid.Row="0" Grid.ColumnSpan="3" HorizontalAlignment="Stretch" Orientation="Horizontal">
        <TextBox Text="{Binding Address}" Width="400" Margin="5" HorizontalAlignment="Left"
                 VerticalContentAlignment="Center"/>
        <Button Content="Go" Command="{Binding GoCommand}" Width="65" Margin="5" HorizontalAlignment="Left"/>
    </StackPanel>
    <StackPanel Orientation="Horizontal"  Grid.Row="0" Grid.Column="3" 
                      HorizontalAlignment="Right" Margin="5,5,10,5">
        <RadioButton Content=".NET Api" IsChecked="True" GroupName="ApiGroup" Margin="5,0"
                     Command="{Binding ApiSelectCommand}" CommandParameter="NetApi" />
        <RadioButton Content="REST" GroupName="ApiGroup"
                     Command="{Binding ApiSelectCommand}" CommandParameter="Rest" Margin="5,0"/>
        <RadioButton Content="SOAP"  GroupName="ApiGroup"
                     Command="{Binding ApiSelectCommand}" CommandParameter="Soap" Margin="5,0"/>
    </StackPanel>
    <ListBox Grid.Column="0" Grid.Row="1" ItemsSource="{Binding DocumentsLists}" 
             ItemTemplate="{StaticResource DocumentsListTemplate}"
             SelectedItem="{Binding SelectedList}"/>
    <ListBox Grid.Column="1" Grid.Row="1" ItemsSource="{Binding Documents}"
             ItemTemplate="{StaticResource DocumentTemplate}"
             SelectedItem="{Binding SelectedDocument}"/>
    <ListBox Grid.Column="2" Grid.Row="1" ItemsSource="{Binding Fields}"
             ItemTemplate="{StaticResource FieldsListTemplate}"
             />
    <TextBlock Text="{Binding ListTiming}" VerticalAlignment="Center" Margin="5" Grid.Row="2" Grid.Column="0" />
    <TextBlock Text="{Binding ItemTiming}" VerticalAlignment="Center" Margin="5" Grid.Row="2" Grid.Column="1" />
  </Grid>

The first line in the grid will have a text box to enter the address of the Sharepoint site, a button to go to the address and three radiobuttons to select the kind of access you want.

The main part of the window will have three listboxes to show the lists on the selected site, the documents in each list and the properties of the document.

As you can see, we’ve used data binding to fill the properties of the UI controls. We’re using the MVVM pattern, so all these properties should be bound to properties in the ViewModel. The buttons and radio buttons have their Command properties bound to properties in the ViewModel, so we don’t have to add code to the code-behind file. We’ve also used templates for the items in the listboxes, so the data is properly presented.

The last line will show the timings for getting the data.

If you run the program, it will run without errors and you will get a UI like this, one that doesn’t do anything yet:

 

It’s time to add the properties in the MainViewModel to achieve the functionality we want. Before that, we’ll add two classes to manipulate the data that comes from the Sharepoint server, no matter the access we are using. Create a new folder named Model, and add these two files, Document.cs and DocumentsList.cs:

public class Document
{
    public Document(string id, string title, 
        Dictionary<string, object> fields,
        string url)
    {
        Id = id;
        Title = title;
        Fields = fields;
        Url = url;
    }
    public string Id { get; }
    public string Title { get; }
    public string Url { get; }
    public Dictionary<string, object> Fields { get; }
}

public class DocumentsList
{
    public DocumentsList(string title, string description)
    {
        Title = title;
        Description = description;
    }

    public string Title { get; }
    public string Description { get; }
}

These are very simple classes that will store the data that comes from the Sharepoint server. The next step is to get the data from the server. For that, we will use a repository that gets the data and returns it using these two classes. Create a new folder named Repository and add a new interface named IListRepository:

public interface IListRepository
{
    Task<List<Document>> GetDocumentsFromListAsync(string title);
    Task<List<DocumentsList>> GetListsAsync();
}

This interface declares two methods, GetListsAsync and GetDocumentsFromListAsync. These are asynchronous methods because we don’t want them to block the UI while they are being executed. Now it’s time to create the first access to Sharepoint, using the CSOM API. For that, you must add the Microsoft.SharePointOnline.CSOM NuGet package. This package provides all the APIs to access Sharepoint data. We can now create our first repository. In the Repository folder, add a new class and name it CsomListRepository.cs:

public class CsomListRepository : IListRepository
{
    private string _sharepointSite;

    public CsomListRepository(string sharepointSite)
    {
        _sharepointSite = sharepointSite;
    }

    public Task<List<DocumentsList>> GetListsAsync()
    {
        return Task.Run(() =>
        {
            using (var context = new ClientContext(_sharepointSite))
            {
                var web = context.Web;
                
                var query = web.Lists.Include(l => l.Title, l => l.Description)
                     .Where(l => !l.Hidden && l.ItemCount > 0);

                var lists = context.LoadQuery(query);
                context.ExecuteQuery();

                return lists.Select(l => new DocumentsList(l.Title, l.Description)).ToList();
            }
        });
    }

    public Task<List<Document>> GetDocumentsFromListAsync(string listTitle)
    {
        return Task.Run(() =>
        {
            using (var context = new ClientContext(_sharepointSite))
            {
                var web = context.Web;
                var list = web.Lists.GetByTitle(listTitle);
                var query = new CamlQuery();

                query.ViewXml = "<View />";
                var items = list.GetItems(query);
                context.Load(list,
                    l => l.Title);
                context.Load(items, l => l.IncludeWithDefaultProperties(
                    i => i.Folder, i => i.File, i => i.DisplayName));
                context.ExecuteQuery();

                return items
                    .Where(i => i["Title"] != null)
                    .Select(i => new Document(i["ID"].ToString(), 
                    i["Title"].ToString(), i.FieldValues, i["FileRef"].ToString()))
                    .ToList();
            }
        });
    }
}

To access the Sharepoint data, we have to create a ClientContext, passing the Sharepoint site to access. Then we get a reference to the Web property of the context and run a query for the lists that aren’t hidden and that have items in them. The query returns the titles and descriptions of the lists in the website. To get the documents from a list, we do it in a similar way: create a context, then a query, and load the items of the list.

We can call this code in the creation of the ViewModel:

private IListRepository _listRepository;

public MainViewModel()
{
    Address = ConfigurationManager.AppSettings["WebSite"];
    GoToAddress();
}

This code will get the initial web site url from the configuration file for the app and call GoToAddress:

private async void GoToAddress()
{
    var sw = new Stopwatch();
    sw.Start();
    _listRepository = new CsomListRepository(Address);

    DocumentsLists = await _listRepository.GetListsAsync();
    ListTiming = $"Time to get lists: {sw.ElapsedMilliseconds}";
    ItemTiming = "";
}

The method calls the GetListsAsync method of the repository to get the lists and sets the DocumentsLists property with the result.

We must also declare some properties that will be used to bind to the control properties in the UI:

public List<DocumentsList> DocumentsLists
{
    get => _documentsLists;
    set
    {
        _documentsLists = value;
        RaisePropertyChanged();
    }
}
public string ListTiming
{
    get => _listTiming;
    set
    {
        _listTiming = value;
        RaisePropertyChanged();
    }
}
public string ItemTiming
{
    get => itemTiming;
    set
    {
        itemTiming = value;
        RaisePropertyChanged();
    }
}
public string Address
{
    get => address;
    set
    {
        address = value;
        RaisePropertyChanged();
    }
}

These properties will trigger the PropertyChanged event handler when they are modified, so the UI can be notified of their change.

If you run this code, you will have something like this:

Then, we must get the documents when we select a list. This is done when the SelectedList property changes:

public DocumentsList SelectedList
{
    get => _selectedList;
    set
    {
        if (_selectedList == value)
            return;
        _selectedList = value;
        GetDocumentsForList(_selectedList);
        RaisePropertyChanged();
    }
}

GetDocumentsForList is:

private async void GetDocumentsForList(DocumentsList list)
{
    var sw = new Stopwatch();
    sw.Start();
    if (list != null)
    {
        Documents = await _listRepository.GetDocumentsFromListAsync(list.Title);
        ItemTiming = $"Time to get items: {sw.ElapsedMilliseconds}";
    }
    else
    {
        Documents = null;
        ItemTiming = "";
    }
}

You have to declare the Documents and the Fields properties:

public List<Document> Documents
{
    get => documents;
    set
    {
        documents = value;
        RaisePropertyChanged();
    }
}

public Dictionary<string, object> Fields => _selectedDocument?.Fields;

One other change that must be made is to create a property named SelectedDocument that will be bound to the second list. When the user selects a document, it will fill the third list with the document’s properties:

public Document SelectedDocument
{
    get => _selectedDocument;
    set
    {
        if (_selectedDocument == value)
            return;
        _selectedDocument = value;
        RaisePropertyChanged();
        RaisePropertyChanged("Fields");
    }
}

Now, when you click on a list, you get all the documents in it. Clicking on a document opens its properties:

Everything works with the CSOM access, so it’s time to add the other two ways to access the data: REST and SOAP. We will implement these by creating two new repositories that will be selected at runtime.

To get items using the REST API, we must make HTTP calls to http://<website>/_api/Lists and http://<website>/_api/Lists/GetByTitle('title')/Items. In the Repository folder, create a new class and name it RestListRepository. This class should implement the IListRepository interface:

public class RestListRepository : IListRepository
{
    public Task<List<Document>> GetDocumentsFromListAsync(string title)
    {
        throw new NotImplementedException();
    }

    public Task<List<DocumentsList>> GetListsAsync()
    {
        throw new NotImplementedException();
    }
}

GetListsAsync will be:

private XNamespace ns = "http://www.w3.org/2005/Atom";
public async Task<List<DocumentsList>> GetListsAsync()
{
    var doc = await GetResponseDocumentAsync(_sharepointSite + "Lists");
    if (doc == null)
        return null;

    var entries = doc.Element(ns + "feed").Descendants(ns + "entry");
    return entries.Select(GetDocumentsListFromElement)
        .Where(d => !string.IsNullOrEmpty(d.Title)).ToList();
}

GetResponseDocumentAsync will issue an HTTP GET request, process the response and return an XDocument:

public async Task<XDocument> GetResponseDocumentAsync(string url)
{
    var handler = new HttpClientHandler
    {
        UseDefaultCredentials = true
    };
    HttpClient httpClient = new HttpClient(handler);
    var headers = httpClient.DefaultRequestHeaders;
    var header = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
        "(KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36";
    if (!headers.UserAgent.TryParseAdd(header))
    {
        throw new Exception("Invalid header value: " + header);
    }
    Uri requestUri = new Uri(url);

    try
    {
        var httpResponse = await httpClient.GetAsync(requestUri);
        httpResponse.EnsureSuccessStatusCode();
        var httpResponseBody = await httpResponse.Content.ReadAsStringAsync();
        return XDocument.Parse(httpResponseBody);
    }
    catch
    {
        return null;
    }
}

The response is an XML string (we could get it as JSON instead if we passed the Accept header as application/json). After the document is parsed, we process all the entry elements, retrieving the lists, in GetDocumentsListFromElement:

private XNamespace mns = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";
private XNamespace dns = "http://schemas.microsoft.com/ado/2007/08/dataservices";
private DocumentsList GetDocumentsListFromElement(XElement e)
{
    var element = e.Element(ns + "content")?.Element(mns + "properties");
    if (element == null)
        return new DocumentsList("", "");
    bool.TryParse(element.Element(dns + "Hidden")?.Value ?? "true", out bool isHidden);
    int.TryParse(element.Element(dns + "ItemCount")?.Value ?? "0", out int ItemCount);
    return !isHidden && ItemCount > 0 ?
      new DocumentsList(element.Element(dns + "Title")?.Value ?? "",
        element.Element(dns + "Description")?.Value ?? "") :
      new DocumentsList("", "");
}

Here we filter the lists by parsing the Hidden and ItemCount properties, returning an empty DocumentsList if the list is hidden or has no items. Each entry returned by the Items endpoint is converted into a Document by GetDocumentFromElementAsync:

private async Task<Document> GetDocumentFromElementAsync(XElement e)
{
    var element = e.Element(ns + "content")?.Element(mns + "properties");
    if (element == null)
        return new Document("", "", null, "");
    var id = element.Element(dns + "Id")?.Value ?? "";
    var title = element.Element(dns + "Title")?.Value ?? "";
    var description = element.Element(dns + "Description")?.Value ?? "";
    var fields = element.Descendants().ToDictionary(el => el.Name.LocalName, el => (object)el.Value);
    int.TryParse(element.Element(dns + "FileSystemObjectType")?.Value ?? "-1", out int fileType);
    string docUrl = "";

    var url = GetUrlFromTitle(e, fileType == 0 ? "File" : "Folder");
    if (url != null)
    {
        var fileDoc = await GetResponseDocumentAsync(_sharepointSite + url);
        docUrl = fileDoc.Element(ns + "entry")?.
            Element(ns + "content")?.
            Element(mns + "properties")?.
            Element(dns + "ServerRelativeUrl")?.
            Value;
    }

    return new Document(id, title, fields, docUrl);
}

It parses the XML and extracts a document and its properties. GetUrlFromTitle gets the href of the link element whose title matches the requested one (“File” or “Folder”):

private string GetUrlFromTitle(XElement element, string title)
{
    return element.Descendants(ns + "link")
            ?.FirstOrDefault(e1 => e1.Attribute("title")?.Value == title)
            ?.Attribute("href")?.Value;
}
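
The REST version of GetDocumentsFromListAsync itself was not shown; a minimal sketch, assuming it follows the same pattern as GetListsAsync and uses the Items endpoint mentioned earlier, could look like this:

public async Task<List<Document>> GetDocumentsFromListAsync(string title)
{
    // Assumption: same Atom XML parsing as GetListsAsync, using the GetByTitle(...)/Items endpoint
    var doc = await GetResponseDocumentAsync(_sharepointSite + $"Lists/GetByTitle('{title}')/Items");
    if (doc == null)
        return new List<Document>();

    var documents = new List<Document>();
    foreach (var entry in doc.Element(ns + "feed").Descendants(ns + "entry"))
        documents.Add(await GetDocumentFromElementAsync(entry));

    return documents.Where(d => !string.IsNullOrEmpty(d.Title)).ToList();
}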

The third access method uses the SOAP service that Sharepoint makes available. This access method is listed as deprecated, but it’s still alive. You have to create a reference to http://<website>/_vti_bin/Lists.asmx and create a client for it. I preferred to create a .NET 2.0 web service client instead of a WCF client, as I found it easier to authenticate with this kind of service.

In Visual Studio, right-click the References node and select Add Service Reference. Then click the Advanced button and, after that, Add Web Reference. Put the url in the box and click the arrow button:

When you click the Add Reference button, the reference will be added to the project and it can be used. Create a new class in the Repository folder and name it SoapListRepository. Make the class implement the IListRepository interface. The GetListsAsync method will be:

XNamespace ns = "http://schemas.microsoft.com/sharepoint/soap/";
private Lists _proxy;

public async Task<List<DocumentsList>> GetListsAsync()
{
    var tcs = new TaskCompletionSource<XmlNode>();
    _proxy = new Lists
    {
        Url = _address,
        UseDefaultCredentials = true
    };
    _proxy.GetListCollectionCompleted += ProxyGetListCollectionCompleted;
    _proxy.GetListCollectionAsync(tcs);
    XmlNode response;
    try
    {
        response = await tcs.Task;
    }
    finally
    {
        _proxy.GetListCollectionCompleted -= ProxyGetListCollectionCompleted;
    }

    var list = XElement.Parse(response.OuterXml);
    var result = list?.Descendants(ns + "List")
        ?.Where(e => e.Attribute("Hidden").Value == "False")
        ?.Select(e => new DocumentsList(e.Attribute("Title").Value,
        e.Attribute("Description").Value)).ToList();
    return result;
}

private void ProxyGetListCollectionCompleted(object sender, GetListCollectionCompletedEventArgs e)
{
    var tcs = (TaskCompletionSource<XmlNode>)e.UserState;
    if (e.Cancelled)
    {
        tcs.TrySetCanceled();
    }
    else if (e.Error != null)
    {
        tcs.TrySetException(e.Error);
    }
    else
    {
        tcs.TrySetResult(e.Result);
    }
}

As we are using a .NET 2.0 web service, in order to convert the call into an asynchronous method, we use a TaskCompletionSource to detect when the call to the service returns. We fire the call to the service and, when it returns, the completed event handler sets the TaskCompletionSource to the corresponding state: canceled if the call was cancelled, faulted if there was an error, or completed with the result if the call succeeded. Then we remove the event handler for the completed event and process the result (an XmlNode), transforming it into a list of DocumentsList.

The call to GetDocumentsFromListAsync is very similar to GetListsAsync:

XNamespace rs = "urn:schemas-microsoft-com:rowset";
XNamespace z = "#RowsetSchema";

public async Task<List<Document>> GetDocumentsFromListAsync(string title)
{
    var tcs = new TaskCompletionSource<XmlNode>();
    _proxy = new Lists
    {
        Url = _address,
        UseDefaultCredentials = true
    };
    _proxy.GetListItemsCompleted += ProxyGetListItemsCompleted;
    _proxy.GetListItemsAsync(title, "", null, null, "", null, "", tcs);
    XmlNode response;
    try
    {
        response = await tcs.Task;
    }
    finally
    {
        _proxy.GetListItemsCompleted -= ProxyGetListItemsCompleted;
    }

    var list = XElement.Parse(response.OuterXml);

    var result = list?.Element(rs + "data").Descendants(z + "row")
        ?.Select(e => new Document(e.Attribute("ows_ID")?.Value,
        e.Attribute("ows_LinkFilename")?.Value, AttributesToDictionary(e),
        e.Attribute("ows_FileRef")?.Value)).ToList();
    return result;
}

private Dictionary<string, object> AttributesToDictionary(XElement e)
{
    return e.Attributes().ToDictionary(a => a.Name.ToString().Replace("ows_", ""), a => (object)a.Value);
}

private void ProxyGetListItemsCompleted(object sender, GetListItemsCompletedEventArgs e)
{
    var tcs = (TaskCompletionSource<XmlNode>)e.UserState;
    if (e.Cancelled)
    {
        tcs.TrySetCanceled();
    }
    else if (e.Error != null)
    {
        tcs.TrySetException(e.Error);
    }
    else
    {
        tcs.TrySetResult(e.Result);
    }
}

The main difference is the processing of the response to get the documents list. Once you have the two methods in place, the only thing left to do is to select the correct repository in MainViewModel. For that, we create an enum for the API selection:

public enum ApiSelection
{
    NetApi,
    Rest,
    Soap
};

Then, we need to declare a command bound to the radiobuttons, that will receive a string with the enum value:

public ICommand ApiSelectCommand =>
    _apiSelectCommand ?? (_apiSelectCommand = new RelayCommand<string>(s => SelectApi(s)));

private void SelectApi(string s)
{
    _selectedApi = (ApiSelection)Enum.Parse(typeof(ApiSelection), s, true);
    GoToAddress();
}

The last step is to select the repository in the GoToAddress method:

private async void GoToAddress()
{
    var sw = new Stopwatch();
    sw.Start();
    _listRepository = _selectedApi == ApiSelection.Rest ?
        (IListRepository)new RestListRepository(Address) :
        _selectedApi == ApiSelection.NetApi ?
        (IListRepository)new CsomListRepository(Address) :
        new SoapListRepository(Address);

    DocumentsLists = await _listRepository.GetListsAsync();
    ListTiming = $"Time to get lists: {sw.ElapsedMilliseconds}";
    ItemTiming = "";
}

With the code in place, you can run the app and see the data shown for each API.

One last change to the program is to add a command bound to the Go button, so you can change the address of the web site and get the lists and documents for the new site:

public ICommand GoCommand =>
            _goCommand ?? (_goCommand = new RelayCommand(GoToAddress, () => !string.IsNullOrEmpty(Address)));

This command has an extra touch: it will only enable the button if there is an address in the address box. If it’s empty, the button will be disabled. Now you can run the program, change the address of the website, and get the lists for the new website.

Conclusions

As you can see, we’ve created a WPF program that uses the MVVM pattern and accesses Sharepoint data using three different methods – it even has a time measuring feature, so you can check the performance difference and choose the right one for your case.

The full source code for this project is at https://github.com/bsonnino/SharepointAccess