After I finished the last article, I started to think that there should be another way to test the non-virtual methods of a class. And, as a matter of fact, there is one that has been around for a long time: Microsoft Fakes. If you don’t know it, you can read this article.

While I wouldn’t recommend it for daily testing, it’s invaluable when you are testing legacy code that has no tests and lots of coupling. The usage is very simple:

In the Solution Explorer of the test project, right-click the dll for which you want to create a fake (in our case, the dll of the main project) and select Add Fakes Assembly:

That will add the fake assembly and all the infrastructure needed to use fakes in your tests. Then, in your test class, you can use the newly created fake. Just create a ShimsContext and use it while testing the class:

[TestMethod]
public void TestVirtualMethod()
{
    using (ShimsContext.Create())
    {
        var fakeClass = new Fakes.ShimClassNonVirtualMethods();
        var sut = new ClassToTest();
        sut.CallVirtualMethod(fakeClass);
    }
}

[TestMethod]
public void TestNonVirtualMethod()
{
    using (ShimsContext.Create())
    {
        var fakeClass = new Fakes.ShimClassNonVirtualMethods();
        var sut = new ClassToTest();
        sut.CallNonVirtualMethod(fakeClass);
    }
}

As you can see, we initialize a ShimsContext and enclose it in a using block. Then we initialize the fake class. This class will be in a namespace named after the original one, with .Fakes appended, and its name will be the original name prefixed with Shim. That way, we can use it as a fake for the original class, and it will detour all its methods, including the non-virtual ones.
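A detail worth showing: the generated shim exposes one delegate property per method of the original class, so you can redirect even the non-virtual method to your own code. A minimal sketch (it assumes the Fakes assembly generated above; the no-op detour is my illustration, not part of the original tests):

```csharp
[TestMethod]
public void TestNonVirtualMethodWithDetour()
{
    using (ShimsContext.Create())
    {
        var fakeClass = new Fakes.ShimClassNonVirtualMethods
        {
            // detour the non-virtual method to a no-op instead of the real code
            NonVirtualMethod = () => { }
        };
        var sut = new ClassToTest();
        // the shim converts implicitly to the shimmed type
        sut.CallNonVirtualMethod(fakeClass);
    }
}
```

If the real method were called here, it would write to the console; with the detour in place, nothing happens, which is exactly what we expect from a fake.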

That feature, although not recommended on a daily basis, can be a lifesaver when you have to test legacy code with deep coupling to databases, UI, or other libraries. Microsoft Fakes can fake almost everything, including System calls, so it can be a nice starting point to refactor your legacy code – with it, you can put some tests in place and start refactoring your code more confidently.


Introduction

Usually I use interfaces for my unit tests. This is a good practice and should be followed when possible. But last week I was testing some legacy code, which had no interfaces, and stumbled upon something: I was trying to mock a class with a non-virtual method and saw that the original method was being executed, instead of doing nothing (the expected behavior for a mock). I was using NSubstitute, but the same principles apply to most of the popular mocking frameworks, like Moq, FakeItEasy or Rhino Mocks. All of them use Castle DynamicProxy as their backend to create a proxy around the object and do their magic, and all of them state that they can only mock virtual methods, so I should have been aware that I could not mock non-virtual ones.

This would be ok if the class to be mocked were yours: just add the virtual keyword to your method and you’re done, accepting some size and performance penalty in exchange for the flexibility. Or you could use something like Typemock Isolator (a great mocking framework that can mock almost anything), which works even if you don’t have access to the source code. But why aren’t non-virtual methods mocked?

Let’s start with an example: say I have the class below:

public class ClassNonVirtualMehods
{
    public void NonVirtualMethod()
    {
         Console.WriteLine("Executing Non Virtual Method");
    }

    public virtual void VirtualMethod()
    {
        Console.WriteLine("Executing Virtual Method");
    }
}

My class to test receives an instance of this class:

public class ClassToTest
{
    public void CallVirtualMethod(ClassNonVirtualMehods param)
    {
        param.VirtualMethod(); 
    }

    public void CallNonVirtualMethod(ClassNonVirtualMehods param)
    {
        param.NonVirtualMethod();
    }
}

My tests are (I’m using NSubstitute here):

[TestClass]
public class UnitTest1
{
    [TestMethod]
    public void TestVirtualMethod()
    {
        var mock = Substitute.For<ClassNonVirtualMehods>();
        var sut = new ClassToTest();
        sut.CallVirtualMethod(mock);
    }

    [TestMethod]
    public void TestNonVirtualMethod()
    {
        var mock = Substitute.For<ClassNonVirtualMehods>();
        var sut = new ClassToTest();
        sut.CallNonVirtualMethod(mock);
    }
}

If you run these tests, you’ll see this:

You’ll see that the non-virtual method is executed (writing to the console), while the virtual method is not, as we expected.
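Since eyeballing the console is fragile, we can also turn this observation into an assertion. A sketch using the same classes as above: NSubstitute’s Received() verifies that a call was intercepted by the substitute (this assertion is my addition, not in the original tests):

```csharp
[TestMethod]
public void TestVirtualMethodWasIntercepted()
{
    var mock = Substitute.For<ClassNonVirtualMehods>();
    var sut = new ClassToTest();
    sut.CallVirtualMethod(mock);
    // passes: the virtual call went through the proxy, not the real code
    mock.Received().VirtualMethod();
}
```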

Why aren’t non-virtual methods mocked?

To explain that, we should look at how Castle DynamicProxy and C# polymorphism work.

From here, you can see that DynamicProxy offers two kinds of proxies: inheritance-based and composition-based. To put their mocks in place, the mocking frameworks use the inheritance-based proxy, which is the same as creating an inherited class (the mock) that overrides the methods and puts the desired behavior in them. It would be something like this:

public class MockClassWithNonVirtualMethods : ClassNonVirtualMehods
{
    public override void VirtualMethod()
    {
        // Don't do anything
    }
}

In this case, as you can see, you can only override virtual methods, so the non-virtual methods will continue to execute the same way. If you have tests like these, you will see the same behavior as you did with NSubstitute:

You could use the composition-based proxy, but when you read the documentation, you’ll see that this is not an option:

Class proxy with target – this proxy kind targets classes. It is not a perfect proxy and if the class has non-virtual methods or public fields these can’t be intercepted giving users of the proxy inconsistent view of the state of the object. Because of that, it should be used with caution.
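To see concretely what that “inconsistent view” means, here is a hand-written sketch of a class proxy with target (the example class is redeclared so the sketch is self-contained, and the string markers are mine, added just for observation):

```csharp
using System;

public class ClassNonVirtualMehods
{
    public string LastCall = "";
    public void NonVirtualMethod() => LastCall = "real non-virtual";
    public virtual void VirtualMethod() => LastCall = "real virtual";
}

// Sketch of a "class proxy with target": it must inherit to stay
// type-compatible, so only virtual members can be rerouted.
public class ProxyWithTarget : ClassNonVirtualMehods
{
    private readonly ClassNonVirtualMehods _target;
    public ProxyWithTarget(ClassNonVirtualMehods target) => _target = target;

    // virtual members can be intercepted and forwarded to the target...
    public override void VirtualMethod() => _target.VirtualMethod();

    // ...but NonVirtualMethod cannot be overridden: calls through a
    // ClassNonVirtualMehods reference run the base implementation on
    // the proxy itself, never reaching the wrapped target.
}
```

The virtual call can be forwarded to the wrapped target, but the non-virtual call always runs on the proxy itself, so the state of the proxy and the state of the target drift apart – the inconsistency the documentation warns about.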

Workaround for non-virtual methods

Now that we know why mocking frameworks don’t mock non-virtual methods, we must find a workaround.

My first option was to use the RealProxy class to create the mock and intercept the calls, as shown in my article in MSDN Magazine. This is a nice and elegant solution, as the impact on the source code would be minimal: just create a generic class to build the proxy and use it instead of the real class. But it turned out that RealProxy has two disadvantages:

  • It can only create proxies for types that inherit from MarshalByRefObject or are interfaces, and our classes are neither
  • It isn’t available in .NET Core, where it was replaced by DispatchProxy, which is different and can only proxy interfaces

So I discarded this option and went looking for another one. I considered changing the IL code at runtime using Reflection.Emit, but this is error-prone and not an easy task, so I kept searching until I found Fody. This is a very interesting framework that rewrites your assemblies’ IL at build time – the same kind of thing I was trying to do, with the difference that the solutions are ready and tested. Just add an add-in and you’re set. There are a lot of add-ins and they are very simple to use; if you don’t find what you need, you can develop your own. In our case, there is already an add-in for the job: Virtuosity.

To use it, all we have to do is install the packages Fody and Virtuosity.Fody in the project, then add an XML file named FodyWeavers.xml with this content:

<?xml version="1.0" encoding="utf-8"?>
<Weavers xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="FodyWeavers.xsd">
  <Virtuosity />
</Weavers>

Set its Build Action to None and Copy to Output Directory to Copy If Newer and you’re set. This will turn all your non-virtual methods into virtual ones, so you can mock them with any mocking framework, with no other change to your projects.
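After the weaving, the test that misbehaved before works as expected. A sketch of what you can now do (the Received assertion is my addition, not part of the original project):

```csharp
[TestMethod]
public void TestNonVirtualMethod()
{
    // NonVirtualMethod is virtual after Virtuosity rewrites the assembly,
    // so the substitute intercepts it and nothing is written to the console
    var mock = Substitute.For<ClassNonVirtualMehods>();
    var sut = new ClassToTest();
    sut.CallNonVirtualMethod(mock);
    mock.Received().NonVirtualMethod();
}
```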

Conclusion

The procedure described here is not something I would recommend doing on a daily basis, but when you have legacy code to test, where changes to the source code are difficult to make, it can be a lifesaver. When you have the time and the means, you should extract an interface and consume it through Dependency Injection. That way, you will reduce coupling and enhance maintainability.


All the source code for this article is at https://github.com/bsonnino/MockNonVirtual

Introduction

Learning a new programming language is always hard. You have to read the documentation, stop, go to the IDE of your choice, start a new project, compile and run the program, and check the results. Stop. Restart. After some time, you have no more patience for the process and start to skip steps (maybe I don’t need to run this sample here…), until you try to find an easier way – and most of the time there is no easier way.

Well, now there is an easier way. The dotnet team has created a new tool, called dotnet try, that lets you mix documentation with a code sample window, where you can try the code while reading the documentation. Cool, no?

Starting with dotnet try

To use this tool, you must install .NET Core on your machine and then install dotnet try with:

dotnet tool install -g dotnet-try

Once you do that, you can use the tool with dotnet try. But the best way to start is with the tool’s own demo: dotnet try demo. This will load the tutorial, create a server and open the browser with the tutorial:

There you can see the documentation side by side with the code. You can change the code, click the run (arrow) button and look at the results. Nice! You can follow the tutorial and learn how to create your documentation easily. Or you can continue reading and I will tell you how to use it (sorry, no dotnet try here :-)).

If you want to learn something about .NET, you can run the samples. Just clone the dotnet/try-samples repository, change to the cloned folder and type dotnet try to open it.
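The steps above, as commands (assuming git and the dotnet-try tool are already installed):

```shell
# clone the samples repository and serve it locally
git clone https://github.com/dotnet/try-samples
cd try-samples
dotnet try
```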

There you can see some tutorials:

  • Beginner’s guide
  • 101 Linq Samples
  • C# 7
  • C# 8

With that, you can start playing with the tools. I recommend checking the Linq samples (there is always something to learn about Linq) and the new features in C# 7 and 8.

Once you’ve played with the tool, it’s time to create your own documentation.

Creating documentation for your code

All the documentation is based on the Markdown language. It’s a very simple way to create your documentation; if you don’t know Markdown, you can learn it here.

Create a new folder and, in this folder, create a readme.md file and add this code:

# Documenting your code with **dotnet try**
## Introduction
To document your code with dotnet try, you must mix your markdown text with some fences

Once you save it and run dotnet try, you can see something like this in the browser:

Now you can start adding code. To add the code window, we must add a code fence: a block of code delimited by triple backticks. You can add code directly to the code fence, with something like this:

using System;

namespace dotnettrypost
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

This is great, but it can pose a problem: if you need to change something in your code, you must also change the markdown file, and this is less than optimal, as at some point you will forget to update one of the two places. There is a better way: use the source code as the single source of truth, so when you change the source code, the code in the page changes accordingly. To do this, you can create a console app with:

dotnet new console

This creates a simple project that writes Hello World to the console. It’s pretty simple, but it fits our needs. Now we will document it. Add the following at the end of the readme.md file to show the contents of the program in editable form:

Below, you should see the code in the program.cs file:
```cs --source-file ./Program.cs --project ./dotnettrypost.csproj
```

In this code fence, you must add the name of the source file and the project. If you run dotnet try again, you will see something like this:

You can click the arrow to run the code. The first time, it will take a little while to compile; subsequent runs will be faster. You can change the code and see the results:

You have here a safe environment where you can try changes to the code. If you make a mistake, the window will point it out, so you can fix it.

As you can see, the code for the entire file is shown, but sometimes that’s not what you want: sometimes you just want to show a snippet from the file. To do that, you must create regions in your code. In the command prompt, type code . to open Visual Studio Code (if you don’t have it installed, get it at https://code.visualstudio.com/download). Then add a region to your code:

using System;

namespace dotnettrypost
{
    class Program
    {
        static void Main(string[] args)
        {
            #region HelloWorld
            Console.WriteLine("Hello World!");
            #endregion
        }
    }
}


Save the file and then edit the Readme.md file to change the code fence:

```cs --source-file ./Program.cs --project ./dotnettrypost.csproj --region HelloWorld
```

When you save the file and refresh the page, you will see something like this:

Creating regions in your code and referencing them in the code fence allows you to separate the code you want in the code window. That way, a single file can provide code for many code windows.
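For example, assuming Program.cs also contains a ByeBye region (like the one shown later in this article), the same file can feed two separate code windows:

```cs --source-file ./Program.cs --project ./dotnettrypost.csproj --region HelloWorld
```
```cs --source-file ./Program.cs --project ./dotnettrypost.csproj --region ByeBye
```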

If you don’t want a runnable window, but still want it synchronized with the code, you can use the editable parameter:

```cs --source-file ./Program.cs --project ./dotnettrypost.csproj --region HelloWorld --editable false
```

When you run the code, even if you aren’t showing the full file, everything in the program is executed. For example, if you change the source code to:

using System;
namespace dotnettrypost
{
    class Program
    {
        static void Main(string[] args)
        {
            #region HelloWorld
            Console.WriteLine("Hello World!");
            #endregion
            Console.WriteLine("This won't be shown in the snippet but will be shown when you run");
        }
    }
}

You will see the same code snippet but when you run it you will see something like this:

If you don’t want this, you need some special treatment in your code: you must make each snippet run alone. You can do that by processing the parameters passed to the program:

using System;

namespace dotnettrypost
{
    class Program
    {
        static void Main(string[] args)
        {
            for (var i = 0; i < args.Length; i++)
                if (args[i] == "--region")
                {
                    if (args[i + 1] == "HelloWorld")
                        HelloWorld();
                    else if (args[i + 1] == "ByeBye")
                        ByeBye();
                }
        }

        private static void HelloWorld()
        {
            #region HelloWorld
            Console.WriteLine("Hello World!");
            #endregion
        }

        private static void ByeBye()
        {
            #region ByeBye
            Console.WriteLine("ByeBye!");
            #endregion
        }
    }
}

Now, when you run the code, it will run only the code for the HelloWorld snippet. As you can see, there are a lot of options to show and run code in the web page. I see many ways to improve documentation and experimentation with new APIs. What about you – doesn’t this bring you new ideas?

All the source code for this project is at https://github.com/bsonnino/dotnettry

One of the perks of being an MVP is receiving some tools for my own use, so I can evaluate them and, if I find them valuable, use them on a daily basis. In my work as an architect/consultant, one task I often face is analyzing an application and suggesting changes to make it more robust and maintainable. One tool that can help me in this task is NDepend (https://www.ndepend.com/). With it, you can analyze your code, verify its dependencies, set coding rules and check how code changes affect the quality of your application. For this article, we will be analyzing eShopOnWeb (https://github.com/dotnet-architecture/eShopOnWeb), a sample application created by Microsoft to demonstrate architectural patterns on .NET Core. It’s an ASP.NET Core MVC 2.2 app that shows a sample shopping site, which can run on premises, in Azure or in containers, using a microservices architecture. It has a companion book that describes the application architecture and can be downloaded at https://aka.ms/webappebook.

When you download, compile and run the app, you will see something like this:

You have a full-featured shopping app, with everything you would expect in this kind of app. The next step is to start analyzing it.

Download NDepend (you have a 14-day trial to evaluate it) and install it on your machine; it will install as an add-in to Visual Studio. Then, in Visual Studio, select Extensions/NDepend/Attach new NDepend Project to current VS Solution. A window like this opens:

It has the projects in the solution already selected, so you can click Analyze 3 .NET Assemblies. After the analysis runs, it opens a web page with a report of its findings:

This report has an analysis of the project, where you can verify the problems NDepend found, the project dependencies, and drill down in the issues. At the same time, a window like this opens in Visual Studio:

If you want a more dynamic view, you can open the dashboard:


In the dashboard, you have a full analysis of your application: lines of code, methods, assemblies and so on. One interesting metric is the technical debt, where you can see how much technical debt there is in the app and how long it will take to fix it (in this case, we have 3.72% of technical debt and 2 days to fix it). We also have the code metrics and the violated coding rules. If you click on some item in the dashboard, like the # of lines, you will see the detail in the properties window:

If we take a look at the dashboard, we’ll see some issues that must be fixed. In the Quality Gates, we have two failures. By clicking the number 2 there, we see this in the Quality Gates evolution window:

If we hover the mouse over one of the failed issues, we get a tooltip that explains the failure:

If we double-click on the failure, we drill down to what caused it:

If we click on the second issue, we see that there are two names used by different types: Basket and IEmailSender:

Basket is the name of a class both in Microsoft.eShopWeb.Web.Pages.Shared.Components.BasketComponent and in Microsoft.eShopWeb.ApplicationCore.Entities.BasketAggregate.

Another thing you can see is the dependency graph:

With it, you can see how the assemblies relate to each other and get a first look at the architecture. If you filter the graph to show only the application assemblies, you get a good overview of what’s happening:

The largest assembly is Web, followed by Infrastructure and ApplicationCore. The application is well layered: there are no cyclic calls between assemblies (assembly A calling assembly B, which calls A back), there is a weak relation between Web and Infrastructure (indicated by the width of the line that joins them) and a strong one between Web and ApplicationCore. In a large solution with many assemblies, just this diagram gives us a picture of what’s happening and whether we are doing things right. The next step is to go into the details and look at the assembly dependencies. You can hover over an assembly and get some info about it:

For example, the Web assembly has 58 detected issues, with an estimated time to fix them of 1 day. This estimate is calculated by NDepend from the complexity of the methods that must be fixed, but you can set your own formula for the technical debt if this one doesn’t suit your team.

Now that we have an overview of the project, we can start fixing the issues. Let’s start with the easiest ones :-). The Infrastructure assembly has only two issues and a debt of 20 minutes. In the diagram, we can right-click the Infrastructure assembly and select Select Issues/On Me and on Child Code elements. This will open the issues in the Queries and Rules Edit window, at the right:

We can then open the tree and double-click the second issue. It points to the rule “Non-static classes should be instantiated or turned to static”, flagging the SpecificationEvaluator<T> class. This class has only one static method and is referenced only once, so there’s no harm in making it static. Once you make it static, build the app and run the dependency check again, and you will see this:

Oh-oh. We fixed an issue and introduced another one, an API breaking change: when we made the class static, we removed its constructor. In this case, it wasn’t really an API change, because nobody would instantiate a class with only static methods, so we should reset the baseline here. Go to the dashboard and, in Choose Baseline, select define:

Then select the most recent analysis and click OK. That will open the NDepend settings, where you will see the new baseline. Save the settings and rerun the analysis, and the error is gone.

We can then open the tree again and double-click the issue that remains in Infrastructure. That opens the source code, pointing to a readonly declaration of a DbContext. This is not a big issue: it’s only telling us that, although the field is declared readonly, the object it stores is mutable, so it can still be changed. This issue is mentioned in Microsoft’s Framework Design Guidelines (https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/). If you hover the mouse over the issue, a tooltip shows how to fix it:

We have three ways to fix this issue:

  • Remove the readonly from the field
  • Make the field private and not protected
  • Use an attribute to say “ok, I am aware of this, but I don’t mind”

The first option would suppress the error, but would also lose what we want to express: that this field should not be replaced with another DbContext. The second option would remove the possibility of using the DbContext in derived classes, so I’ll choose the third option and add the attribute. If I right-click the issue in the Rules and Queries window and select Suppress Issue, a window opens:

All I have to do is copy the attribute to the clipboard and paste it into the source code. I also have to declare the symbol CODE_ANALYSIS in the project (Project/Properties/Build). That was easy! Let’s go to the next one.

This one is an obsolete method being used. Fortunately, the description tells us to use the UseHiLo method instead. We change the method, run the app to check that nothing is broken, and we’re good. We can run the analysis again and see what happened:
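For reference, the change itself is a one-liner in the Entity Framework Core model configuration. A sketch (the entity, property and sequence names here are illustrative, not necessarily the actual eShopOnWeb code):

```csharp
protected override void OnModelCreating(ModelBuilder builder)
{
    // before (marked obsolete in EF Core 3.x):
    // builder.Entity<CatalogItem>().Property(i => i.Id)
    //        .ForSqlServerUseSequenceHiLo("catalog_hilo");

    // after: the provider-neutral replacement suggested by the description
    builder.Entity<CatalogItem>().Property(i => i.Id)
           .UseHiLo("catalog_hilo");
}
```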

We had a slight decrease in the technical debt: we solved one high issue and one violated rule. As you can see, NDepend not only analyzes your code, it also shows you where you stand as your code changes. This is very well architected code (as it should be: it’s an architecture sample), so the issues are minor, but you can see what can be done with NDepend. When you have a messy project, it will surely be an invaluable tool!


You need to create customized reports based on SharePoint data, and you don’t have access to the server to create a new web part to access it directly, so you must resort to other options to generate the report. Fortunately, SharePoint offers several ways to access its data, so you can get the information you need. If you are using C#, you can use one of these three methods:

  • Client Side Object Model (CSOM) – this is a set of APIs that allow access to the Sharepoint data and allow you to maintain lists and documents in Sharepoint
  • REST API – with this API you can access Sharepoint data, not only using C#, but with any other platform that can perform and process REST requests, like web or mobile apps
  • SOAP Web Services – although this kind of access is being deprecated, there are a lot of programs that depend on it, so it’s being actively used until now. You can use this API with any platform that can process SOAP web services.

This article will show how to use these three APIs to access lists and documents from a SharePoint server and show them in a WPF program, with an extra touch: the user will be able to export the data to Excel for further manipulation.
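As a first taste of the REST flavor, here is a minimal sketch of a raw request (the site URL is illustrative, and authentication handling is omitted; real code must supply credentials, e.g. NTLM on premises or a bearer token for SharePoint Online):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RestTaste
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // ask for JSON instead of the default XML
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));
        // _api/web/lists is SharePoint's standard REST endpoint for the site's lists
        var json = await client.GetStringAsync(
            "https://yourserver/sites/yoursite/_api/web/lists");
        Console.WriteLine(json);
    }
}
```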

To start, let’s create a WPF project in Visual Studio and name it SharepointAccess. We will use the MVVM pattern to develop it, so right-click the References node in the Solution Explorer, select Manage NuGet Packages and add the MVVM Light package. That will add a ViewModel folder with two files to your project. The next step is to make a correction in the ViewModelLocator.cs file. If you open it, you will see an error in the line

using Microsoft.Practices.ServiceLocation;

Just replace this using clause with

using CommonServiceLocator;

The next step is to bind the DataContext of MainWindow to the MainViewModel, as described in the header of ViewModelLocator:

<Window x:Class="SharepointAccess.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:SharepointAccess"
        mc:Ignorable="d"
        Title="Sharepoint Access" Height="700" Width="900"
        DataContext="{Binding Source={StaticResource Locator}, Path=Main}">

Then, let’s add the UI to MainWindow.xaml:

<Grid>
    <Grid.Resources>
        <DataTemplate x:Key="DocumentsListTemplate">
            <StackPanel>
                <TextBlock Text="{Binding Title}" />
            </StackPanel>
        </DataTemplate>
        <DataTemplate x:Key="DocumentTemplate">
            <StackPanel Margin="0,5">
                <TextBlock Text="{Binding Title}" />
                <TextBlock Text="{Binding Url}" />

            </StackPanel>
        </DataTemplate>
        <DataTemplate x:Key="FieldsListTemplate">
            <Grid >
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="150" />
                    <ColumnDefinition Width="*" />
                </Grid.ColumnDefinitions>
                <TextBlock Text="{Binding Key}" TextTrimming="CharacterEllipsis"/>
                <TextBlock Text="{Binding Value}" Grid.Column="1" TextTrimming="CharacterEllipsis"/>
            </Grid>
        </DataTemplate>
    </Grid.Resources>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
    <Grid.RowDefinitions>
        <RowDefinition Height="40" />
        <RowDefinition Height="*" />
        <RowDefinition Height="40" />
    </Grid.RowDefinitions>
    <StackPanel Grid.Row="0" Grid.ColumnSpan="3" HorizontalAlignment="Stretch" Orientation="Horizontal">
        <TextBox Text="{Binding Address}" Width="400" Margin="5" HorizontalAlignment="Left"
                 VerticalContentAlignment="Center"/>
        <Button Content="Go" Command="{Binding GoCommand}" Width="65" Margin="5" HorizontalAlignment="Left"/>
    </StackPanel>
    <StackPanel Orientation="Horizontal"  Grid.Row="0" Grid.Column="3" 
                      HorizontalAlignment="Right" Margin="5,5,10,5">
        <RadioButton Content=".NET Api" IsChecked="True" GroupName="ApiGroup" Margin="5,0"
                     Command="{Binding ApiSelectCommand}" CommandParameter="NetApi" />
        <RadioButton Content="REST" GroupName="ApiGroup"
                     Command="{Binding ApiSelectCommand}" CommandParameter="Rest" Margin="5,0"/>
        <RadioButton Content="SOAP"  GroupName="ApiGroup"
                     Command="{Binding ApiSelectCommand}" CommandParameter="Soap" Margin="5,0"/>
    </StackPanel>
    <ListBox Grid.Column="0" Grid.Row="1" ItemsSource="{Binding DocumentsLists}" 
             ItemTemplate="{StaticResource DocumentsListTemplate}"
             SelectedItem="{Binding SelectedList}"/>
    <ListBox Grid.Column="1" Grid.Row="1" ItemsSource="{Binding Documents}"
             ItemTemplate="{StaticResource DocumentTemplate}"
             SelectedItem="{Binding SelectedDocument}"/>
    <ListBox Grid.Column="2" Grid.Row="1" ItemsSource="{Binding Fields}"
             ItemTemplate="{StaticResource FieldsListTemplate}"
             />
    <TextBlock Text="{Binding ListTiming}" VerticalAlignment="Center" Margin="5" Grid.Row="2" Grid.Column="0" />
    <TextBlock Text="{Binding ItemTiming}" VerticalAlignment="Center" Margin="5" Grid.Row="2" Grid.Column="1" />
  </Grid>

The first row of the grid has a text box to enter the address of the SharePoint site, a button to go to the address and three radio buttons to select the kind of access you want.

The main part of the window has three listboxes showing the lists on the selected site, the documents in each list and the properties of the selected document.

As you can see, we’ve used data binding to fill the properties of the UI controls. We’re using the MVVM pattern, so all these properties should be bound to properties in the ViewModel. The buttons and radio buttons have their Command properties bound to commands in the ViewModel, so we don’t have to add code to the code-behind file. We’ve also used templates for the items in the listboxes, so the data is properly presented.

The last line will show the timings for getting the data.

If you run the program, it will run without errors and you will get a UI like this, which doesn’t do anything yet:


It’s time to add the properties to the MainViewModel to achieve the functionality we want. Before that, we’ll add two classes to handle the data that comes from the SharePoint server, regardless of the access method we are using. Create a new folder named Model and add these two files, Document.cs and DocumentsList.cs:

public class Document
{
    public Document(string id, string title, 
        Dictionary<string, object> fields,
        string url)
    {
        Id = id;
        Title = title;
        Fields = fields;
        Url = url;
    }
    public string Id { get; }
    public string Title { get; }
    public string Url { get; }
    public Dictionary<string, object> Fields { get; }
}

public class DocumentsList
{
    public DocumentsList(string title, string description)
    {
        Title = title;
        Description = description;
    }

    public string Title { get; }
    public string Description { get; }
}

These are very simple classes that will store the data that comes from the SharePoint server. The next step is to get the data from the server. For that, we will use a repository that fetches the data and returns it using these two classes. Create a new folder named Repository and add a new interface named IListRepository:

public interface IListRepository
{
    Task<List<Document>> GetDocumentsFromListAsync(string title);
    Task<List<DocumentsList>> GetListsAsync();
}

This interface declares two methods, GetListsAsync and GetDocumentsFromListAsync. These are asynchronous methods because we don’t want them to block the UI while they are running. Now it’s time to create the first access to SharePoint, using the CSOM API. For that, you must add the NuGet package Microsoft.SharePointOnline.CSOM. This package provides all the APIs to access SharePoint data. We can now create our first repository. In the Repository folder, add a new class and name it CsomListRepository.cs:

public class CsomListRepository : IListRepository
{
    private string _sharepointSite;

    public CsomListRepository(string sharepointSite)
    {
        _sharepointSite = sharepointSite;
    }

    public Task<List<DocumentsList>> GetListsAsync()
    {
        return Task.Run(() =>
        {
            using (var context = new ClientContext(_sharepointSite))
            {
                var web = context.Web;
                
                var query = web.Lists.Include(l => l.Title, l => l.Description)
                     .Where(l => !l.Hidden && l.ItemCount > 0);

                var lists = context.LoadQuery(query);
                context.ExecuteQuery();

                return lists.Select(l => new DocumentsList(l.Title, l.Description)).ToList();
            }
        });
    }

    public Task<List<Document>> GetDocumentsFromListAsync(string listTitle)
    {
        return Task.Run(() =>
        {
            using (var context = new ClientContext(_sharepointSite))
            {
                var web = context.Web;
                var list = web.Lists.GetByTitle(listTitle);
                var query = new CamlQuery();

                query.ViewXml = "<View />";
                var items = list.GetItems(query);
                context.Load(list,
                    l => l.Title);
                context.Load(items, l => l.IncludeWithDefaultProperties(
                    i => i.Folder, i => i.File, i => i.DisplayName));
                context.ExecuteQuery();

                return items
                    .Where(i => i["Title"] != null)
                    .Select(i => new Document(i["ID"].ToString(), 
                    i["Title"].ToString(), i.FieldValues, i["FileRef"].ToString()))
                    .ToList();
            }
        });
    }
}

To access the Sharepoint data, we create a ClientContext, passing the Sharepoint site to access. Then we get a reference to the Web property of the context and query for the lists that aren’t hidden and have at least one item. The query returns the titles and descriptions of the lists in the website. To get the documents from a list, we follow a similar pattern: create a context, build a query, and load the items of the list.

We can call this code in the creation of the ViewModel:

private IListRepository _listRepository;

public MainViewModel()
{
    Address = ConfigurationManager.AppSettings["WebSite"];
    GoToAddress();
}

This code gets the initial website URL from the app’s configuration file and calls GoToAddress:

private async void GoToAddress()
{
    var sw = new Stopwatch();
    sw.Start();
    _listRepository = new CsomListRepository(Address);

    DocumentsLists = await _listRepository.GetListsAsync();
    ListTiming = $"Time to get lists: {sw.ElapsedMilliseconds}";
    ItemTiming = "";
}

The method calls the GetListsAsync method of the repository to get the lists and sets the DocumentsLists property with the result.

We must also declare some properties that will be used to bind to the control properties in the UI:

public List<DocumentsList> DocumentsLists
{
    get => _documentsLists;
    set
    {
        _documentsLists = value;
        RaisePropertyChanged();
    }
}
public string ListTiming
{
    get => _listTiming;
    set
    {
        _listTiming = value;
        RaisePropertyChanged();
    }
}
public string ItemTiming
{
    get => itemTiming;
    set
    {
        itemTiming = value;
        RaisePropertyChanged();
    }
}
public string Address
{
    get => address;
    set
    {
        address = value;
        RaisePropertyChanged();
    }
}

These properties raise the PropertyChanged event when they are modified, so the UI is notified of the changes.
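RaisePropertyChanged comes from the view model’s base class (an MVVM framework such as MVVM Light provides one). If you are not using a framework, a minimal hand-rolled base class could look like this sketch (the name ObservableObject is just an illustration, not part of this project’s code):

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Minimal INotifyPropertyChanged base class; MVVM frameworks
// ship an equivalent of this out of the box.
public abstract class ObservableObject : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // [CallerMemberName] lets a setter call RaisePropertyChanged()
    // without passing its own property name as a string.
    protected void RaisePropertyChanged([CallerMemberName] string propertyName = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

With a base class like this, each property setter only has to assign the backing field and call RaisePropertyChanged().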

If you run this code, you will have something like this:

Then, we must get the documents when we select a list. This is done when the SelectedList property changes:

public DocumentsList SelectedList
{
    get => _selectedList;
    set
    {
        if (_selectedList == value)
            return;
        _selectedList = value;
        GetDocumentsForList(_selectedList);
        RaisePropertyChanged();
    }
}

GetDocumentsForList is:

private async void GetDocumentsForList(DocumentsList list)
{
    var sw = new Stopwatch();
    sw.Start();
    if (list != null)
    {
        Documents = await _listRepository.GetDocumentsFromListAsync(list.Title);
        ItemTiming = $"Time to get items: {sw.ElapsedMilliseconds}";
    }
    else
    {
        Documents = null;
        ItemTiming = "";
    }
}

You have to declare the Documents and the Fields properties:

public List<Document> Documents
{
    get => documents;
    set
    {
        documents = value;
        RaisePropertyChanged();
    }
}

public Dictionary<string, object> Fields => _selectedDocument?.Fields;

One other change that must be made is to create a property named SelectedDocument that will be bound to the second list. When the user selects a document, it will fill the third list with the document’s properties:

public Document SelectedDocument
{
    get => _selectedDocument;
    set
    {
        if (_selectedDocument == value)
            return;
        _selectedDocument = value;
        RaisePropertyChanged();
        RaisePropertyChanged("Fields");
    }
}

Now, when you click on a list, you get all the documents in it. Clicking on a document opens its properties:

Everything works with the CSOM access, so it’s time to add the other two ways to access the data: REST and SOAP. We will implement these by creating two new repositories that will be selected at runtime.

To get items using the REST API, we must make HTTP calls to http://<website>/_api/Lists and http://<website>/_api/Lists/GetByTitle('title')/Items. In the Repository folder, create a new class and name it RestListRepository. This class should implement the IListRepository interface:

public class RestListRepository : IListRepository
{
    private readonly string _sharepointSite;

    public RestListRepository(string sharepointSite)
    {
        _sharepointSite = sharepointSite;
    }

    public Task<List<Document>> GetDocumentsFromListAsync(string title)
    {
        throw new NotImplementedException();
    }

    public Task<List<DocumentsList>> GetListsAsync()
    {
        throw new NotImplementedException();
    }
}

GetListsAsync will be:

private XNamespace ns = "http://www.w3.org/2005/Atom";
public async Task<List<DocumentsList>> GetListsAsync()
{
    var doc = await GetResponseDocumentAsync(_sharepointSite + "Lists");
    if (doc == null)
        return null;

    var entries = doc.Element(ns + "feed").Descendants(ns + "entry");
    return entries.Select(GetDocumentsListFromElement)
        .Where(d => !string.IsNullOrEmpty(d.Title)).ToList();
}

GetResponseDocumentAsync issues an HTTP GET request, processes the response, and returns an XDocument:

public async Task<XDocument> GetResponseDocumentAsync(string url)
{
    var handler = new HttpClientHandler
    {
        UseDefaultCredentials = true
    };
    HttpClient httpClient = new HttpClient(handler);
    var headers = httpClient.DefaultRequestHeaders;
    var header = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
        "(KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36";
    if (!headers.UserAgent.TryParseAdd(header))
    {
        throw new Exception("Invalid header value: " + header);
    }
    Uri requestUri = new Uri(url);

    try
    {
        var httpResponse = await httpClient.GetAsync(requestUri);
        httpResponse.EnsureSuccessStatusCode();
        var httpResponseBody = await httpResponse.Content.ReadAsStringAsync();
        return XDocument.Parse(httpResponseBody);
    }
    catch
    {
        return null;
    }
}

The response is an XML string. We could get the response as JSON instead by passing the Accept header as application/json. After the document is parsed, we process all entry elements, retrieving the lists, in GetDocumentsListFromElement:

private XNamespace mns = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";
private XNamespace dns = "http://schemas.microsoft.com/ado/2007/08/dataservices";
private DocumentsList GetDocumentsListFromElement(XElement e)
{
    var element = e.Element(ns + "content")?.Element(mns + "properties");
    if (element == null)
        return new DocumentsList("", "");
    bool.TryParse(element.Element(dns + "Hidden")?.Value ?? "true", out bool isHidden);
    int.TryParse(element.Element(dns + "ItemCount")?.Value ?? "0", out int ItemCount);
    return !isHidden && ItemCount > 0 ?
      new DocumentsList(element.Element(dns + "Title")?.Value ?? "",
        element.Element(dns + "Description")?.Value ?? "") :
      new DocumentsList("", "");
}

Here we filter the lists by parsing the Hidden and ItemCount properties, returning an empty DocumentsList when the list is hidden or has no items. GetDocumentsFromListAsync relies on a helper, GetDocumentFromElementAsync, that converts each entry element into a Document:

private async Task<Document> GetDocumentFromElementAsync(XElement e)
{
    var element = e.Element(ns + "content")?.Element(mns + "properties");
    if (element == null)
        return new Document("", "", null, "");
    var id = element.Element(dns + "Id")?.Value ?? "";
    var title = element.Element(dns + "Title")?.Value ?? "";
    var description = element.Element(dns + "Description")?.Value ?? "";
    var fields = element.Descendants().ToDictionary(el => el.Name.LocalName, el => (object)el.Value);
    int.TryParse(element.Element(dns + "FileSystemObjectType")?.Value ?? "-1", out int fileType);
    string docUrl = "";

    var url = GetUrlFromTitle(e, fileType == 0 ? "File" : "Folder");
    if (url != null)
    {
        var fileDoc = await GetResponseDocumentAsync(_sharepointSite + url);
        docUrl = fileDoc.Element(ns + "entry")?.
            Element(ns + "content")?.
            Element(mns + "properties")?.
            Element(dns + "ServerRelativeUrl")?.
            Value;
    }

    return new Document(id, title, fields, docUrl);
}

It parses the XML and extracts a document and its properties. GetUrlFromTitle, which returns the href of the link element whose title attribute matches, is:

private string GetUrlFromTitle(XElement element, string title)
{
    return element.Descendants(ns + "link")
            ?.FirstOrDefault(e1 => e1.Attribute("title")?.Value == title)
            ?.Attribute("href")?.Value;
}
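A side note before moving on: the repository above consumes the default Atom XML, but the _api endpoints can also return JSON if you send an Accept header of application/json. A sketch of that variant (GetListsAsJsonAsync is a hypothetical helper, not part of this article’s code):

```csharp
private async Task<string> GetListsAsJsonAsync(string sharepointSite)
{
    var handler = new HttpClientHandler { UseDefaultCredentials = true };
    using (var httpClient = new HttpClient(handler))
    {
        // Ask Sharepoint to return JSON instead of the default Atom XML
        httpClient.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));
        return await httpClient.GetStringAsync(sharepointSite + "Lists");
    }
}
```

You would then parse the result with a JSON library instead of XDocument.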

The third access method uses the SOAP service that Sharepoint makes available. This access method is listed as deprecated, but it’s still alive. You have to create a reference to http://<website>/_vti_bin/Lists.asmx and create a client for it. I preferred to create a .NET 2.0 web reference instead of a WCF service, as I found it easier to authenticate with this service.

In Visual Studio, right-click the References node and select Add Service Reference. Then click the Advanced button and then Add Web Reference. Put the URL in the box and click the arrow button:

When you click the Add Reference button, the reference will be added to the project and it can be used. Create a new class in the Repository folder and name it SoapListRepository. Make the class implement the IListRepository interface. The GetListsAsync method will be:

XNamespace ns = "http://schemas.microsoft.com/sharepoint/soap/";
private Lists _proxy;

public async Task<List<DocumentsList>> GetListsAsync()
{
    var tcs = new TaskCompletionSource<XmlNode>();
    _proxy = new Lists
    {
        Url = _address,
        UseDefaultCredentials = true
    };
    _proxy.GetListCollectionCompleted += ProxyGetListCollectionCompleted;
    _proxy.GetListCollectionAsync(tcs);
    XmlNode response;
    try
    {
        response = await tcs.Task;
    }
    finally
    {
        _proxy.GetListCollectionCompleted -= ProxyGetListCollectionCompleted;
    }

    var list = XElement.Parse(response.OuterXml);
    var result = list?.Descendants(ns + "List")
        ?.Where(e => e.Attribute("Hidden").Value == "False")
        ?.Select(e => new DocumentsList(e.Attribute("Title").Value,
        e.Attribute("Description").Value)).ToList();
    return result;
}

private void ProxyGetListCollectionCompleted(object sender, GetListCollectionCompletedEventArgs e)
{
    var tcs = (TaskCompletionSource<XmlNode>)e.UserState;
    if (e.Cancelled)
    {
        tcs.TrySetCanceled();
    }
    else if (e.Error != null)
    {
        tcs.TrySetException(e.Error);
    }
    else
    {
        tcs.TrySetResult(e.Result);
    }
}

As we are using a .NET 2.0 web service, we must use a TaskCompletionSource to convert the event-based call into an asynchronous method and detect when the call to the service returns. We fire the call to the service and, when it completes, the completed event handler sets the TaskCompletionSource to the appropriate state: cancelled if the call was cancelled, faulted if there was an error, or completed with the result if the call succeeded. Then we remove the handler for the completed event and process the result (an XmlNode), transforming it into a list of DocumentsList.

The call to GetDocumentsFromListAsync is very similar to GetListsAsync:

XNamespace rs = "urn:schemas-microsoft-com:rowset";
XNamespace z = "#RowsetSchema";

public async Task<List<Document>> GetDocumentsFromListAsync(string title)
{
    var tcs = new TaskCompletionSource<XmlNode>();
    _proxy = new Lists
    {
        Url = _address,
        UseDefaultCredentials = true
    };
    _proxy.GetListItemsCompleted += ProxyGetListItemsCompleted;
    _proxy.GetListItemsAsync(title, "", null, null, "", null, "", tcs);
    XmlNode response;
    try
    {
        response = await tcs.Task;
    }
    finally
    {
        _proxy.GetListItemsCompleted -= ProxyGetListItemsCompleted;
    }

    var list = XElement.Parse(response.OuterXml);

    var result = list?.Element(rs + "data").Descendants(z + "row")
        ?.Select(e => new Document(e.Attribute("ows_ID")?.Value,
        e.Attribute("ows_LinkFilename")?.Value, AttributesToDictionary(e),
        e.Attribute("ows_FileRef")?.Value)).ToList();
    return result;
}

private Dictionary<string, object> AttributesToDictionary(XElement e)
{
    return e.Attributes().ToDictionary(a => a.Name.ToString().Replace("ows_", ""), a => (object)a.Value);
}

private void ProxyGetListItemsCompleted(object sender, GetListItemsCompletedEventArgs e)
{
    var tcs = (TaskCompletionSource<XmlNode>)e.UserState;
    if (e.Cancelled)
    {
        tcs.TrySetCanceled();
    }
    else if (e.Error != null)
    {
        tcs.TrySetException(e.Error);
    }
    else
    {
        tcs.TrySetResult(e.Result);
    }
}

The main difference is the processing of the response to get the documents list. Once you have the two methods in place, the only thing left to do is select the correct repository in MainViewModel. For that, we create an enum for the API selection:

public enum ApiSelection
{
    NetApi,
    Rest,
    Soap
};

Then we declare a command bound to the radio buttons, which receives a string with the enum value:

public ICommand ApiSelectCommand =>
    _apiSelectCommand ?? (_apiSelectCommand = new RelayCommand<string>(s => SelectApi(s)));

private void SelectApi(string s)
{
    _selectedApi = (ApiSelection)Enum.Parse(typeof(ApiSelection), s, true);
    GoToAddress();
}
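On the view side, the radio buttons can pass the enum name to the command through CommandParameter. A sketch of that binding (control names and layout are illustrative, not the article’s actual XAML):

```xml
<StackPanel Orientation="Horizontal">
    <!-- CommandParameter carries the ApiSelection value name as a string -->
    <RadioButton Content="CSOM" GroupName="Api"
                 Command="{Binding ApiSelectCommand}" CommandParameter="NetApi"/>
    <RadioButton Content="REST" GroupName="Api"
                 Command="{Binding ApiSelectCommand}" CommandParameter="Rest"/>
    <RadioButton Content="SOAP" GroupName="Api"
                 Command="{Binding ApiSelectCommand}" CommandParameter="Soap"/>
</StackPanel>
```

SelectApi then parses the string back into the ApiSelection enum with Enum.Parse.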

The last step is to select the repository in the GoToAddress method:

private async void GoToAddress()
{
    var sw = new Stopwatch();
    sw.Start();
    _listRepository = _selectedApi == ApiSelection.Rest ?
        (IListRepository)new RestListRepository(Address) :
        _selectedApi == ApiSelection.NetApi ?
        (IListRepository)new CsomListRepository(Address) :
        new SoapListRepository(Address);

    DocumentsLists = await _listRepository.GetListsAsync();
    ListTiming = $"Time to get lists: {sw.ElapsedMilliseconds}";
    ItemTiming = "";
}

With the code in place, you can run the app and see the data shown for each API.

One last change to the program is to add a command bound to the Go button, so you can change the address of the web site and get the lists and documents for the new site:

public ICommand GoCommand =>
            _goCommand ?? (_goCommand = new RelayCommand(GoToAddress, () => !string.IsNullOrEmpty(Address)));

This command has an extra touch: it will only enable the button if there is an address in the address box. If it’s empty, the button will be disabled. Now you can run the program, change the address of the website, and get the lists for the new website.

Conclusions

As you can see, we’ve created a WPF program that uses the MVVM pattern and accesses Sharepoint data using three different methods – it even has a time measuring feature, so you can check the performance difference and choose the right one for your case.

The full source code for this project is at https://github.com/bsonnino/SharepointAccess

One book I recommend reading is Clean Code, by Robert Martin. It is a well-written book with wonderful techniques to create better code and improve your current programs, so they become easier to read, maintain, and understand.

While going through it again, I found an excellent opportunity to improve my skills by doing some refactoring: in listing 4.7 there is a prime generator function that he uses to demonstrate some refactoring concepts, turning it into listing 4.8. I thought I’d do the same and show my results here.

We can start with the listing converted to C#. This is a very easy task: the original program is written in Java, but converting it to C# is just a matter of one or two small fixes:

using System;

namespace PrimeNumbers
{
/**
* This class Generates prime numbers up to a user specified
* maximum. The algorithm used is the Sieve of Eratosthenes.
* <p>
* Eratosthenes of Cyrene, b. c. 276 BC, Cyrene, Libya --
* d. c. 194, Alexandria. The first man to calculate the
* circumference of the Earth. Also known for working on
* calendars with leap years and ran the library at Alexandria.
* <p>
* The algorithm is quite simple. Given an array of integers
* starting at 2. Cross out all multiples of 2. Find the next
* uncrossed integer, and cross out all of its multiples.
* Repeat until you have passed the square root of the maximum
* value.
*
* @author Alphonse
* @version 13 Feb 2002 atp
*/
    public class GeneratePrimes
    {
        /**
        * @param maxValue is the generation limit.
        */
        public static int[] generatePrimes(int maxValue)
        {
            if (maxValue >= 2) // the only valid case
            {
                // declarations
                int s = maxValue + 1; // size of array
                bool[] f = new bool[s];
                int i;

                // initialize array to true.
                for (i = 0; i < s; i++)
                    f[i] = true;
                // get rid of known non-primes
                f[0] = f[1] = false;
                // sieve
                int j;
                for (i = 2; i < Math.Sqrt(s) + 1; i++)
                {
                    if (f[i]) // if i is uncrossed, cross its multiples.
                    {
                        for (j = 2 * i; j < s; j += i)
                            f[j] = false; // multiple is not prime
                    }
                }
                // how many primes are there?
                int count = 0;
                for (i = 0; i < s; i++)
                {
                    if (f[i])
                        count++; // bump count.
                }
                int[] primes = new int[count];
                // move the primes into the result
                for (i = 0, j = 0; i < s; i++)
                {
                    if (f[i]) // if prime
                        primes[j++] = i;
                }
                return primes; // return the primes
            }
            else // maxValue < 2
                return new int[0]; // return null array if bad input.
        }
    }
}

The first step is to put in place some tests, so we can be sure that we are not breaking anything while refactoring the code. In the solution, I added a new Class Library project, named it GeneratePrimes.Tests and added the packages NUnit, NUnit3TestAdapter and FluentAssertions to get fluent assertions in a NUnit test project. Then I added these tests:

using NUnit.Framework;
using FluentAssertions;

namespace PrimeNumbers.Tests
{
    [TestFixture]
    public class GeneratePrimesTests
    {
        [Test]
        public void GeneratePrimes0ReturnsEmptyArray()
        {
            var actual = GeneratePrimes.generatePrimes(0);
            actual.Should().BeEmpty();
        }

        [Test]
        public void GeneratePrimes1ReturnsEmptyArray()
        {
            var actual = GeneratePrimes.generatePrimes(1);
            actual.Should().BeEmpty();
        }

        [Test]
        public void GeneratePrimes2ReturnsArrayWith2()
        {
            var actual = GeneratePrimes.generatePrimes(2);
            actual.Should().BeEquivalentTo(new[] { 2 });
        }

        [Test]
        public void GeneratePrimes10ReturnsArray()
        {
            var actual = GeneratePrimes.generatePrimes(10);
            actual.Should().BeEquivalentTo(new[] { 2,3,5,7 });
        }

        [Test]
        public void GeneratePrimes10000ReturnsArray()
        {
            var actual = GeneratePrimes.generatePrimes(10000);
            actual.Should().HaveCount(1229).And.EndWith(9973);
        }
    }
}

These tests check that there are no primes for 0 and 1, one prime for 2, that the primes up to 10 are 2, 3, 5, and 7, and that there are 1229 primes less than 10,000, the largest being 9973. Once we run the tests, we can see that they pass, and we can start making our changes.

The easiest fix we can make is to revise the comments at the beginning. We don’t need the history of Eratosthenes (you can go to Wikipedia for that). We don’t need the author and version, thanks to source control technology :-). And we don’t need the initial comment in the method, either:

/**
    * This class Generates prime numbers up to a user specified
    * maximum. The algorithm used is the Sieve of Eratosthenes.
    *  https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes   
*/
public class GeneratePrimes
{
    public static int[] generatePrimes(int maxValue)

Then we can invert the initial test, to reduce nesting. If we hover the mouse in the line of the first if, an arrow appears at the border, indicating a quick fix:

We can do the quick fix, then eliminate the else clause (don’t forget to remove the extra comments that are not needed):

public static int[] generatePrimes(int maxValue)
{
    if (maxValue < 2) 
        return new int[0]; 

    // declarations
    int s = maxValue + 1; // size of array
    bool[] f = new bool[s];
    int i;

Save the code and check that all tests pass. The next step is to rename the variables:

  • s can be renamed to sizeOfArray
  • f can be renamed as isPrimeArray

Go to the declaration of s, press Ctrl-R, Ctrl-R, and rename it to sizeOfArray. Do the same with the f variable. Don’t forget to remove the comments (and to run the tests):

int sizeOfArray = maxValue + 1; 
bool[] isPrimeArray = new bool[sizeOfArray];
int i;

To go to the next refactorings, we can use the comments as indicators for extracting methods. We can extract the InitializeArray method:

The extracted code isn’t what I expected, so I changed it to:

private static bool[] InitializeArray(int sizeOfArray)
{
    bool[] isPrimeArray = new bool[sizeOfArray];
    // initialize array to true.
    for (var i = 0; i < sizeOfArray; i++)
        isPrimeArray[i] = true;
    return isPrimeArray;
}

I can use the code like this:

var isPrimeArray = InitializeArray(sizeOfArray);

After passing the tests, I can refactor the code of InitializeArray to:

private static bool[] InitializeArray(int sizeOfArray)
{
    return Enumerable
        .Range(0, sizeOfArray)
        .Select(n => true)
        .ToArray();
}
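As an aside, Enumerable.Repeat expresses the same idea a little more directly, if you prefer it; both versions produce an array filled with true:

```csharp
private static bool[] InitializeArray(int sizeOfArray)
{
    // Repeat 'true' sizeOfArray times and materialize it as an array
    return Enumerable.Repeat(true, sizeOfArray).ToArray();
}
```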

The next step is the sieve:

The code for the sieve is really bad:

private static void Sieve(int sizeOfArray, bool[] isPrimeArray, 
    out int i, out int j)
{
    // get rid of known non-primes
    isPrimeArray[0] = isPrimeArray[1] = false;
    for (i = 2; i < Math.Sqrt(sizeOfArray) + 1; i++)
    {
        if (isPrimeArray[i]) // if i is uncrossed, cross its multiples.
        {
            for (j = 2 * i; j < sizeOfArray; j += i)
                isPrimeArray[j] = false; // multiple is not prime
        }
    }
}

It has two out parameters (which, for me, is a code smell) and has an error: the out parameter j must be assigned before the method exits. So we change it to remove the out parameters and the sizeOfArray parameter:

private static void Sieve(bool[] isPrimeArray)
{
    var sizeOfArray = isPrimeArray.Length;

    isPrimeArray[0] = isPrimeArray[1] = false;

    for (int i = 2; i < Math.Sqrt(sizeOfArray) + 1; i++)
    {
        if (isPrimeArray[i]) // if i is uncrossed, cross its multiples.
        {
            for (int j = 2 * i; j < sizeOfArray; j += i)
                isPrimeArray[j] = false; 
        }
    }
}

Then, we can extract the method to count primes:

CountPrimes has the same flaws as Sieve, so we change it to:

private static int CountPrimes(bool[] isPrimeArray)
{
    var sizeOfArray = isPrimeArray.Length;
    var count = 0;
    for (var i = 0; i < sizeOfArray; i++)
    {
        if (isPrimeArray[i])
            count++; 
    }
    return count;
}

We can refactor it to:

private static int CountPrimes(bool[] isPrimeArray) => 
    isPrimeArray.Count(i => i);

The next step is MovePrimes:

After we tweak the MovePrimes code, we get:

private static int[] MovePrimes(bool[] isPrimeArray, int count)
{
    var sizeOfArray = isPrimeArray.Length;
    var primes = new int[count];
    for (int i = 0, j = 0; i < sizeOfArray; i++)
    {
        if (isPrimeArray[i]) // if prime
            primes[j++] = i;
    }
    return primes;
}

Then we can refactor MovePrimes:

 private static int[] MovePrimes(bool[] isPrimeArray, int count) =>
     isPrimeArray
         .Select((p, i) => new { Index = i, IsPrime = p })
         .Where(v => v.IsPrime)
         .Select(v => v.Index)
         .ToArray();

Notice that we aren’t using the primes count in this case, so we can remove the calculation of the count and the parameter. After some cleaning and name changing, we get:

public static int[] GetPrimes(int maxValue)
{
    if (maxValue < 2)
        return new int[0];

    bool[] isPrimeArray = InitializeArray(maxValue + 1);
    Sieve(isPrimeArray);
    return MovePrimes(isPrimeArray);
}

Much cleaner, no? Now it’s easier to read the method: the details are hidden, but the code still runs the same way. We have a more maintainable method, and it shows clearly what it does.

But there is one more change we can make: as we are using only static methods, we can add the this keyword and turn them into extension methods. For example, if we change MovePrimes and Sieve to:

private static int[] MovePrimes(this bool[] isPrimeArray) =>
    isPrimeArray
        .Select((p, i) => new { Index = i, IsPrime = p })
        .Where(v => v.IsPrime)
        .Select(v => v.Index)
        .ToArray();

private static bool[] Sieve(this bool[] isPrimeArray)
{
    var sizeOfArray = isPrimeArray.Length;

    isPrimeArray[0] = isPrimeArray[1] = false;

    for (int i = 2; i < Math.Sqrt(sizeOfArray) + 1; i++)
    {
        if (isPrimeArray[i]) // if i is uncrossed, cross its multiples.
        {
            for (int j = 2 * i; j < sizeOfArray; j += i)
                isPrimeArray[j] = false;
        }
    }
    return isPrimeArray;
}

We can then change the GetPrimes method (renaming it to PrimesSmallerOrEqual, which reads better as an extension method) to:

public static int[] PrimesSmallerOrEqual(this int maxValue)
{
    if (maxValue < 2)
        return new int[0];

    return (maxValue + 1).InitializeArray()
        .Sieve()
        .MovePrimes();
}

Cool, no? With this change, the tests become:

public class GeneratePrimesTests
{
    [Test]
    public void GeneratePrimes0ReturnsEmptyArray()
    {
        0.PrimesSmallerOrEqual().Should().BeEmpty();
    }

    [Test]
    public void GeneratePrimes1ReturnsEmptyArray()
    {
        1.PrimesSmallerOrEqual().Should().BeEmpty();
    }

    [Test]
    public void GeneratePrimes2ReturnsArrayWith2()
    {
        2.PrimesSmallerOrEqual()
            .Should().BeEquivalentTo(new[] { 2 });
    }

    [Test]
    public void GeneratePrimes10ReturnsArray()
    {
        10.PrimesSmallerOrEqual()
            .Should().BeEquivalentTo(new[] { 2, 3, 5, 7 });
    }

    [Test]
    public void GeneratePrimes10000ReturnsArray()
    {
        10000.PrimesSmallerOrEqual()
            .Should().HaveCount(1229).And.EndWith(9973);
    }
}

The full code is at https://github.com/bsonnino/PrimeNumbers. Each commit there is a phase of the refactoring.

Sometimes, when we open an Explorer window on our main computer, we see red bars on some disks, telling us that the disk is almost full and we need to do some cleanup. We run the system cleanup, which removes some unused files, but that isn’t enough to make things better.

So we try to find the duplicate files on the disk to recover some extra space, but we have a problem: where are the duplicate files? The first answer is to check files with the same name and size, but that isn’t enough: files can be renamed and still be duplicates.

So, the best thing to do is to find a way to find and list all duplicates in the disk. But how can we do this?

The naive approach is to take all files with the same size and compare each one with every other. But this is really cumbersome: if there are 100 files in a group, there will be 100!/(2!*98!) = 100*99/2 = 4950 comparisons, a complexity of O(n^2).

Another approach is to compute a checksum of each file and compare checksums. You still have O(n^2) complexity, but less data to compare (though you must add the time to calculate the checksums). A third approach is to use a dictionary to group the files with the same hash. Lookup in a dictionary has O(1) complexity, so this gives O(n) overall.

Now we only have to choose the checksum. Every checksum has a number of bits and, roughly, the more bits, the longer it takes to compute. But more bits also make wrong results less likely: with a CRC16 checksum (16 bits) there are only 65,536 possible values, so the probability of two different files having the same checksum is fairly large. CRC32 allows 4,294,967,296 combinations and is thus much less likely to give a false match. You can use other algorithms, like MD5 (128 bits), SHA1 (160 bits), or SHA256 (256 bits), but computing these takes much longer than computing CRC32. As we are seeking speed rather than perfect accuracy, we’ll use the CRC32 algorithm to compute the hashes. A fast implementation of this algorithm can be found here, and you can use it by installing the Crc32C.NET NuGet package.
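The dictionary grouping described above can be sketched like this (Crc32CAlgorithm.Compute is the Crc32C.NET entry point; reading each whole file into memory is a simplification for illustration):

```csharp
using System.Collections.Generic;
using System.IO;
using Crc32C;

// Group candidate files by CRC32C hash; any group with more than
// one entry holds potential duplicates.
static Dictionary<uint, List<FileInfo>> GroupByHash(IEnumerable<FileInfo> files)
{
    var groups = new Dictionary<uint, List<FileInfo>>();
    foreach (var file in files)
    {
        var hash = Crc32CAlgorithm.Compute(File.ReadAllBytes(file.FullName));
        if (!groups.TryGetValue(hash, out var group))
            groups[hash] = group = new List<FileInfo>();
        group.Add(file);
    }
    return groups;
}

// Usage sketch: duplicates are the groups with two or more files.
// var duplicates = GroupByHash(files).Where(g => g.Value.Count > 1);
```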

From there, we can create our program to find and list the duplicates on the disk. In Visual Studio, create a new WPF application. In the Solution Explorer, right-click the References node, select Manage NuGet Packages, and add the WpfFolderBrowser and Crc32C.NET packages. Then add this code in MainWindow.xaml:

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="40"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
    <Button Width="85" Height="30" Content="Start" Click="StartClick"
                HorizontalAlignment="Right" Margin="5" Grid.Row="0"/>
    <Grid Grid.Row="1">
        <Grid.RowDefinitions>
            <RowDefinition Height="*"/>
            <RowDefinition Height="30"/>
        </Grid.RowDefinitions>
        <ScrollViewer HorizontalScrollBarVisibility="Disabled">
        <ItemsControl x:Name="FilesList" HorizontalContentAlignment="Stretch">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <Grid HorizontalAlignment="Stretch">
                        <Grid.RowDefinitions>
                            <RowDefinition Height="30" />
                            <RowDefinition Height="Auto" />
                        </Grid.RowDefinitions>
                        <TextBlock Text="{Binding Value[0].Length, StringFormat=N0}"
                                   Margin="5" FontWeight="Bold"/>
                        <TextBlock Text="{Binding Key, StringFormat=X}"
                                   Margin="5" FontWeight="Bold" HorizontalAlignment="Right"/>
                        <ItemsControl ItemsSource="{Binding Value}" Grid.Row="1" 
                                      HorizontalAlignment="Stretch"
                                      ScrollViewer.HorizontalScrollBarVisibility="Disabled"
                                      Background="Aquamarine">
                            <ItemsControl.ItemTemplate>
                                <DataTemplate>
                                    <TextBlock Text="{Binding FullName}" Margin="15,0"  />
                                </DataTemplate>
                            </ItemsControl.ItemTemplate>
                        </ItemsControl>
                    </Grid>
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>
        </ScrollViewer>
        <StackPanel Grid.Row="1" Orientation="Horizontal">
            <TextBlock x:Name="TotalFilesText" Margin="5,0" VerticalAlignment="Center"/>
            <TextBlock x:Name="LengthFilesText" Margin="5,0" VerticalAlignment="Center"/>
        </StackPanel>
    </Grid>
</Grid>

In the button’s click event handler, we will open a folder browser dialog and, if the user selects a folder, we will process it, enumerating the files and finding the ones that have the same size. Then, we calculate the CRC32 for these files and add them to a dictionary, grouped by hash:

private async void StartClick(object sender, RoutedEventArgs e)
{
    var fbd = new WPFFolderBrowserDialog();
    if (fbd.ShowDialog() != true)
        return;
    FilesList.ItemsSource = null;
    var selectedPath = fbd.FileName;

    var files = await GetPossibleDuplicatesAsync(selectedPath);
    FilesList.ItemsSource = await GetRealDuplicatesAsync(files);
}

The GetPossibleDuplicatesAsync will enumerate the files and group them by size, returning only the groups that have more than one file:

private async Task<List<IGrouping<long, FileInfo>>> GetPossibleDuplicatesAsync(string selectedPath)
{
    // Run the enumeration on a background thread to keep the UI responsive
    return await Task.Run(() =>
        GetFilesInDirectory(selectedPath)
            .OrderByDescending(f => f.Length)
            .GroupBy(f => f.Length)
            .Where(g => g.Count() > 1)
            .ToList());
}

GetFilesInDirectory enumerates the files in the selected directory:

private List<FileInfo> GetFilesInDirectory(string directory)
{
    var files = new List<FileInfo>();
    try
    {
        // GetDirectories already returns full paths, so no Path.Combine is needed
        var directories = Directory.GetDirectories(directory);
        try
        {
            var di = new DirectoryInfo(directory);
            files.AddRange(di.GetFiles("*"));
        }
        catch
        {
            // ignore files we are not allowed to read
        }
        foreach (var dir in directories)
        {
            files.AddRange(GetFilesInDirectory(dir));
        }
    }
    catch
    {
        // ignore directories we are not allowed to enumerate
    }

    return files;
}

After we have the duplicate files grouped, we can search the real duplicates with GetRealDuplicatesAsync:

private static async Task<Dictionary<uint,List<FileInfo>>> GetRealDuplicatesAsync(
    List<IGrouping<long, FileInfo>> files)
{
    var dictFiles = new Dictionary<uint, List<FileInfo>>();
    await Task.Factory.StartNew(() =>
    {
        foreach (var file in files.SelectMany(g => g))
        {
            var hash = GetCrc32FromFile(file.FullName);
            if (hash == 0)
                continue;
            if (dictFiles.ContainsKey(hash))
                dictFiles[hash].Add(file);
            else
                dictFiles.Add(hash, new List<FileInfo>(new[] { file }));
        }
    });
    return dictFiles.Where(p => p.Value.Count > 1).ToDictionary(p => p.Key, p => p.Value);
}

The GetCrc32FromFile method will use the Crc32C library to compute the CRC32 hash of the file. Note that we can’t compute the hash in one pass by reading the whole file into memory, as this would fail for files larger than 2 GB. So, we read chunks of 10,000 bytes and process them one at a time.

public static uint GetCrc32FromFile(string fileName)
{
    try
    {
        using (FileStream file = new FileStream(fileName, FileMode.Open))
        {
            const int NumBytes = 10000;
            var bytes = new byte[NumBytes];
            var numRead = file.Read(bytes, 0, NumBytes);
            if (numRead == 0)
                return 0;
            var crc = Crc32CAlgorithm.Compute(bytes, 0, numRead);
            while (numRead > 0)
            {
                numRead = file.Read(bytes, 0, NumBytes);
                // Append returns the updated CRC value, so we must assign it
                if (numRead > 0)
                    crc = Crc32CAlgorithm.Append(crc, bytes, 0, numRead);
            }
            return crc;
        }
    }
    catch (Exception ex) when (ex is UnauthorizedAccessException || ex is IOException)
    {
        return 0;
    }
}

Now, when you run the app, you will get something like this:

You can then review the files you want to remove and delete them in Explorer. But there is one issue here: computing the hashes takes a long time, especially if you have a lot of data to process (large files, a large number of files, or both). Could it be improved?

This issue is somewhat complicated to solve. Fortunately, .NET provides us with an excellent tool to improve performance in this case: parallel programming. With a small change in the code, you can calculate the CRC of the files in parallel, improving performance. But there is a catch: we are using classes that are not thread-safe. If you use the common Dictionary and List to store the data, you will end up with wrong results. But, once again, .NET comes to the rescue: it provides ConcurrentDictionary and ConcurrentBag as thread-safe replacements for the common classes. We can then change the code to this:

private static async Task<Dictionary<uint, List<FileInfo>>> GetRealDuplicatesAsync(
    List<IGrouping<long, FileInfo>> files)
{
    var dictFiles = new ConcurrentDictionary<uint, ConcurrentBag<FileInfo>>();
    await Task.Factory.StartNew(() =>
    {
        Parallel.ForEach(files.SelectMany(g => g), file =>
        {
            var hash = GetCrc32FromFile(file.FullName);
            if (hash != 0)
            {
                // GetOrAdd is atomic: two threads hitting the same hash at
                // once cannot lose a file, unlike a check-then-add sequence
                dictFiles.GetOrAdd(hash, _ => new ConcurrentBag<FileInfo>()).Add(file);
            }
        });
    });
    return dictFiles.Where(p => p.Value.Count > 1)
        .OrderByDescending(p => p.Value.First().Length)
        .ToDictionary(p => p.Key, p => p.Value.ToList());
}

When we do that and run our program again, we see more CPU being used for the processing, and the time to get the list drops from 78 to 46 seconds (for 18 GB of duplicate files).

Conclusions

With this program, we can show the largest duplicates in a folder and see what can safely be deleted from our disk, thus recovering some space (in our case, we could potentially have recovered 9 GB). We’ve also optimized the code by parallelizing the calculations using the parallel extensions in .NET.

The source code for this article is at https://github.com/bsonnino/FindDuplicates

Some time ago I wrote a post about converting a WPF application to .NET Core. One thing that caught my attention in this Build 2019 talk was that file enumeration performance was improved in .NET Core apps. So I decided to check this with my own app and see what happens on my machine.

I added some measuring data in the app, so I could see what happens there:

private async void StartClick(object sender, RoutedEventArgs e)
{
    var fbd = new WPFFolderBrowserDialog();
    if (fbd.ShowDialog() != true)
        return;
    FilesList.ItemsSource = null;
    ExtList.ItemsSource = null;
    ExtSeries.ItemsSource = null;
    AbcList.ItemsSource = null;
    AbcSeries.ItemsSource = null;
    var selectedPath = fbd.FileName;
    Int64 minSize;
    if (!Int64.TryParse(MinSizeBox.Text, out minSize))
        return;
    List<FileInfo> files = null;
    var sw = new Stopwatch();
    var timeStr = "";
    await Task.Factory.StartNew(() =>
    {
       sw.Start();
       files = GetFilesInDirectory(selectedPath).ToList();
       timeStr = $" {sw.ElapsedMilliseconds} for enumeration";
       sw.Restart();
       files = files.Where(f => f.Length >= minSize)
         .OrderByDescending(f => f.Length)
         .ToList();
       timeStr += $" {sw.ElapsedMilliseconds} for ordering and filtering";
    });
    var totalSize = files.Sum(f => f.Length);
    TotalFilesText.Text = $"# Files: {files.Count}";
    LengthFilesText.Text = $"({totalSize:N0} bytes)";
    sw.Restart();
    FilesList.ItemsSource = files;
    var extensions = files.GroupBy(f => f.Extension)
        .Select(g => new { Extension = g.Key, Quantity = g.Count(), Size = g.Sum(f => f.Length) })
        .OrderByDescending(t => t.Size).ToList();
    ExtList.ItemsSource = extensions;
    ExtSeries.ItemsSource = extensions;
    var tmp = 0.0;
    var abcData = files.Select(f =>
    {
        tmp += f.Length;
        return new { f.Name, Percent = tmp / totalSize * 100 };
    }).ToList();
    AbcList.ItemsSource = abcData;
    AbcSeries.ItemsSource = abcData.OrderBy(d => d.Percent).Select((d, i) => new { Item = i, d.Percent });
    timeStr += $"  {sw.ElapsedMilliseconds} to fill data";
    TimesText.Text = timeStr;
}

That way, I could measure two things: the time to enumerate the files, and the time to sort, filter and assign the files to the lists. Then I ran the two programs to see what happened.

The test machine is a virtual machine with a Core i5, 4 virtual processors and a virtualized hard disk, with 12,230 files (93.13 GB of data). The measurements may vary on your machine, but the differences should be comparable. To avoid bias, I ran each program 3 times (in Admin mode), rebooting before switching to the other one.

Here are the results I’ve got:

Run         Enumerate    Sort/Filter    Assign
.NET
1             137,031             96        43
2              58,828             56         9
3              59,474             55         8
Avg            85,111             69        20
.NET Core
1              91,105            120        32
2              33,422             90        14
3              32,907             87        20
Avg            52,478             99        22

(all times in milliseconds, as reported by Stopwatch.ElapsedMilliseconds)


As you can see from the numbers, the .NET Core application greatly improved the file enumeration times, but still needs some work on sorting/filtering and on assigning data to the UI lists. Not bad for a platform still in preview!

If you do some performance testing of your own, I’m curious to see what you get; you can share your results and comments in the Comments section.


One thing that has recently been announced by Microsoft is the availability of .NET Core 3. With it, you are able to create WPF and WinForms apps with .NET Core. As an extra bonus, both WPF and WinForms are being open sourced: you can check them out at https://github.com/dotnet/wpf and https://github.com/dotnet/winforms.

The first step to create a .NET Core WPF program is to download the .NET Core 3.0 preview from https://dotnet.microsoft.com/download/dotnet-core/3.0. Once you have it installed, you can check that it was installed correctly by opening a command line window, typing dotnet --info and checking the installed version:


With that in place, you can change the current folder to a new folder and type

dotnet new wpf
dotnet run

This will create a new .NET Core 3.0 WPF project and will compile and run it. You should get something like this:

If you click the Exit button, the application exits. If you take a look at the folder, you will see that it generated the WPF project file, App.xaml and App.xaml.cs, MainWindow.xaml and MainWindow.xaml.cs. The easiest way to edit these files is to use Visual Studio Code: just open it, go to File/Open Folder and open the project folder. There you will see the project files and will be able to run and debug your code:

A big difference can be noted in the csproj file. If you open it, you will see something like this:

<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">

  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWPF>true</UseWPF>
  </PropertyGroup>

</Project>

That’s very simple, and there’s nothing else in the project file. There are some differences between this project and other .NET Core project types, like the console one:

  • The output type is WinExe, instead of the Exe used in console apps
  • The UseWPF property is present and set to true
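
For comparison, a .NET Core console project created with dotnet new console has a project file along these lines (shown only to highlight the differences; the exact target framework depends on the SDK you have installed):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
  </PropertyGroup>

</Project>
```

Note the plain Microsoft.NET.Sdk (instead of Microsoft.NET.Sdk.WindowsDesktop) and the absence of the UseWPF property.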

Now, you can modify and run the project inside VS Code. Modify MainWindow.xaml and put this code in it:

<Window x:Class="DotNetCoreWPF.MainWindow" 
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008" 
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
    xmlns:local="clr-namespace:DotNetCoreWPF" mc:Ignorable="d" Title="MainWindow" Height="450" Width="800">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*"/>
            <RowDefinition Height="40"/>
        </Grid.RowDefinitions>
        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="40"/>
                <RowDefinition Height="40"/>
                <RowDefinition Height="40"/>
                <RowDefinition Height="40"/>
                <RowDefinition Height="40"/>
                <RowDefinition Height="40"/>
            </Grid.RowDefinitions>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="*"/>
                <ColumnDefinition Width="2*"/>
            </Grid.ColumnDefinitions>

            <TextBlock Text="Id"      Grid.Column="0" Grid.Row="0" Margin="5" VerticalAlignment="Center"/>
            <TextBlock Text="Name"    Grid.Column="0" Grid.Row="1" Margin="5" VerticalAlignment="Center"/>
            <TextBlock Text="Address" Grid.Column="0" Grid.Row="2" Margin="5" VerticalAlignment="Center"/>
            <TextBlock Text="City"    Grid.Column="0" Grid.Row="3" Margin="5" VerticalAlignment="Center"/>
            <TextBlock Text="Email"   Grid.Column="0" Grid.Row="4" Margin="5" VerticalAlignment="Center"/>
            <TextBlock Text="Phone"   Grid.Column="0" Grid.Row="5" Margin="5" VerticalAlignment="Center"/>
            <TextBox Grid.Column="1" Grid.Row="0" Margin="5"/>
            <TextBox Grid.Column="1" Grid.Row="1" Margin="5"/>
            <TextBox Grid.Column="1" Grid.Row="2" Margin="5"/>
            <TextBox Grid.Column="1" Grid.Row="3" Margin="5"/>
            <TextBox Grid.Column="1" Grid.Row="4" Margin="5"/>
            <TextBox Grid.Column="1" Grid.Row="5" Margin="5"/>
        </Grid>
        <Button Content="Submit" Width="65" Height="35" Grid.Row="1" HorizontalAlignment="Right" VerticalAlignment="Center" Margin="5,0"/>
    </Grid>
</Window>

Now, you can compile and run the app in VS Code with F5, and you will get something like this:

If you don’t want to use Visual Studio Code, you can edit your project in Visual Studio 2019. The first preview still doesn’t have a visual designer for the XAML files, but you can edit the XAML in the text editor and it will work fine.

Porting a WPF project to .NET Core

To port a WPF project to .NET Core, you should first run the Portability Analyzer tool, to see what problems you will find before porting. This tool can be found here. You can download it, run it on your current application, and check which APIs are not portable.

I will be porting my DiskAnalysis project. This is a simple project that uses the System.IO classes to enumerate the files in a folder, plus two NuGet packages that add a folder browser and charts to WPF. The first step is to run the portability analysis on it. Run the Portability Analyzer app and point it to the folder where the executable is located:

When you click on the Analyze button, it will analyze the executable and generate an Excel spreadsheet with the results:

As you can see, all the code is compatible with .NET Core 3.0. So, let’s port it to .NET Core 3.0. I will show you three ways to do it: creating a new project, updating the .csproj file and using a tool.

Upgrading by Creating a new project

This way requires the most work, but it’s the simplest to fix when something goes wrong. Create a new folder and name it DiskAnalysisCorePrj. Then open a command line window and change to the folder you’ve created. Then, type these commands:

dotnet new wpf
dotnet add package wpffolderbrowser
dotnet add package dotnetprojects.wpf.toolkit
dotnet run

These commands will create the WPF project, add the two required NuGet packages and run the default app. You may see a warning like this:

D:\Documentos\Artigos\Artigos\CSharp\WPFCore\DiskAnalysisCorePrj\DiskAnalysisCorePrj.csproj : warning NU1701: Package 'DotNetProjects.Wpf.Toolkit 5.0.43' was restored using '.NETFramework,Version=v4.6.1' instead of the project target framework '.NETCoreApp,Version=v3.0'. This package may not be fully compatible with your project.
D:\Documentos\Artigos\Artigos\CSharp\WPFCore\DiskAnalysisCorePrj\DiskAnalysisCorePrj.csproj : warning NU1701: Package 'WPFFolderBrowser 1.0.2' was restored using '.NETFramework,Version=v4.6.1' instead of the project target framework '.NETCoreApp,Version=v3.0'. This package may not be fully compatible with your project.

This means that these NuGet packages weren’t converted to .NET Core 3.0, but they are still usable (remember, the compatibility report showed 100% compatibility). Then, copy MainWindow.xaml and MainWindow.xaml.cs from the original folder to the new one. We don’t need to copy any other files, as no other files were changed. Then, type

dotnet run

and the program is executed:

Converting by Changing the .csproj file

This way is very simple, just a change in the project file, but it can be challenging for very large projects. Create a new folder and name it DiskAnalysisCoreCsp. Copy all files from the main folder of the original project (there’s no need to copy the Properties folder) and edit the .csproj file, changing it to:

<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWPF>true</UseWPF>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="dotnetprojects.wpf.toolkit" Version="5.0.43" />
    <PackageReference Include="wpffolderbrowser" Version="1.0.2" />
  </ItemGroup>
</Project>

Then, type

dotnet run

and the program is executed.

Converting using a tool

The third way is to use a tool to convert the project. You must install the conversion extension created by Brian Lagunas, available here. Then, open your WPF project in Visual Studio, right-click the project and select “Convert Project to .NET Core 3”.

That’s all. You now have a .NET Core 3 app. If you did that in Visual Studio 2017, you won’t be able to open the converted project there; you will need to run it with dotnet run, or open it in Visual Studio Code.

Conclusions

As you can see, although this is the first preview of WPF on .NET Core, a lot of work has already been done, and you should be able to port most of your WPF projects to .NET Core.

As an MVP, I sometimes receive software licenses from vendors for my own use. Some of these tools have become indispensable to me, and I feel obliged to write a review (yes, it’s a biased review, as I really like the tool and use it on a daily basis :-)) as a way to say thank you!

One of these tools is Linqpad (https://www.linqpad.net/). It’s a simple tool with a small footprint, but I have used it in so many ways that I find it incredible. There is a free version with plenty of features to get started, but I really recommend the paid version (if you have the $95 to spend, the Premium edition even has a debugger for your snippets).

Introduction

Once you open Linqpad, you will see a simple desktop like this:

At first, the name of the tool may indicate that this is a notepad for linq queries, but it’s much more than that! If you take a look at the Samples pane, you can see that there’s even an Interactive Regex Evaluator.

A closer look at that pane shows that you are not tied to C#: you can also use F# there. In fact, there is a full F# tutorial. If you open the Language combo, you can see that you can also write VB or SQL queries.

My first use of Linqpad was to learn Linq (the name is Linqpad, no?). At the beginning, Linq seems a little daunting, with all those extension methods and lambdas. So, I started trying some Linq queries, making them more difficult as my knowledge improved. In Linqpad, you have three flavors of code: Expressions, where a single expression is evaluated; Statements, where a few statements are evaluated; and Program, where a full program is run in Linqpad (I use this when I want to run a console program without opening Visual Studio and creating a new project).
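
For completeness, a Program-mode query is just an ordinary entry point plus any helper methods. Here is a minimal sketch, written as a standalone class so it also compiles outside Linqpad (in Linqpad itself you would type only the Main method and the helpers; the Describe helper is made up for this example):

```csharp
using System;

// A minimal "C# Program"-style snippet: a Main entry point plus a helper
// method, just like a tiny console project.
public static class Program
{
    public static string Describe(int n) =>
        n % 2 == 0 ? $"{n} is even" : $"{n} is odd";

    public static void Main()
    {
        Console.WriteLine(Describe(42)); // prints: 42 is even
    }
}
```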

In the Expression mode, you can enter a single expression, like this:

from i in Enumerable.Range(1,1000)
  where i % 2 == 0
  select i

If you run it, you will see the result in the Results pane:

As you can see, all the results are there; there is no need to open a console window or anything else. And, what’s better, you can export the results to Excel, Word or HTML. You can also use the other Linq format, the functional (method) syntax:

Enumerable.Range(1, 1000).Where(i => i % 2 == 0)

After that, you can tweak your code, click the Run button and observe the results. If you have the paid version, you also have IntelliSense in the editor, so you can check the syntax as you type.

For example, to get the sum of the squares of the even numbers, we can do something like this:
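The query in that screenshot can be sketched like this, written here as C# statements so it is self-contained (in Linqpad’s Expression mode you would type only the expression itself, without the variable or the Console.WriteLine):

```csharp
using System;
using System.Linq;

// Sum of the squares of the even numbers from 1 to 1000
var sum = Enumerable.Range(1, 1000)
                    .Where(i => i % 2 == 0)
                    .Sum(i => (long)i * i);
Console.WriteLine(sum); // prints: 167167000
```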

If we have something more complicated than a single expression, we can run it using the C# statements. For example, to get all methods and parameters of the methods in the Directory class, we can use these statements:

var methodInfos = typeof(Directory).GetMethods(BindingFlags.Public | 
  BindingFlags.Static);

methodInfos.Select(m => new 
{
  m.Name, 
  Parameters = m.GetParameters() 
}).Dump();

You may have noticed something different in the code above: the Dump method. Linqpad adds this extension method to dump values to the Results pane. It is very powerful: you don’t need to know the type of the object, and all its properties are shown there:

And you are not limited to old C#; you can also use C# 7 features and even async programming. For example, this code (based on https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/async/walkthrough-accessing-the-web-by-using-async-and-await) will asynchronously download some pages from the web and display their sizes:

async Task Main()
{
	(await SumPageSizesAsync()).Dump();
}

private async Task<List<string>> SumPageSizesAsync()
{
	var results = new List<string>();
	// Declare an HttpClient object and increase the buffer size. The
	// default buffer size is 65,536.
	HttpClient client =
		new HttpClient() { MaxResponseContentBufferSize = 1000000 };

	// Make a list of web addresses.
	List<string> urlList = SetUpURLList();

	var total = 0;

	foreach (var url in urlList)
	{
		// GetByteArrayAsync returns a task. At completion, the task
		// produces a byte array.
		byte[] urlContents = await client.GetByteArrayAsync(url);

		// The following two lines can replace the previous assignment statement.
		//Task<byte[]> getContentsTask = client.GetByteArrayAsync(url);
		//byte[] urlContents = await getContentsTask;

		results.Add(DisplayResults(url, urlContents));

		// Update the total.
		total += urlContents.Length;
	}

	// Display the total count for all of the websites.
	results.Add(
		$"\r\n\r\nTotal bytes returned:  {total}\r\n");
	return results;
}

private List<string> SetUpURLList()
{
	List<string> urls = new List<string>
			{
				"https://msdn.microsoft.com/library/windows/apps/br211380.aspx",
				"https://msdn.microsoft.com",
				"https://msdn.microsoft.com/library/hh290136.aspx",
				"https://msdn.microsoft.com/library/ee256749.aspx",
				"https://msdn.microsoft.com/library/hh290138.aspx",
				"https://msdn.microsoft.com/library/hh290140.aspx",
				"https://msdn.microsoft.com/library/dd470362.aspx",
				"https://msdn.microsoft.com/library/aa578028.aspx",
				"https://msdn.microsoft.com/library/ms404677.aspx",
				"https://msdn.microsoft.com/library/ff730837.aspx"
			};
	return urls;
}

private string DisplayResults(string url, byte[] content)
{
	// Display the length of each website. The string format
	// is designed to be used with a monospaced font, such as
	// Lucida Console or Global Monospace.
	var bytes = content.Length;
	// Strip off the "https://".
	var displayURL = url.Replace("https://", "");
	return $"\n{displayURL,-58} {bytes,8}";
}

When you run it, you will see something like this:

And you are not tied to the default C# libraries. If you have the Developer or Premium version, you can download and use NuGet packages in your queries. For example, in this previous article, I’ve shown how to use the Microsoft.SqlServer.TransactSql.ScriptDom package to parse your SQL Server code. You don’t even need to open Visual Studio for that. Just put this code in the Linqpad window:

static void Main()
{
	using (var con = new SqlConnection("Server=.;Database=WideWorldImporters;Trusted_Connection=True;"))
	{
		con.Open();
		var procTexts = GetStoredProcedures(con)
		  .Select(n => new { ProcName = n, Tree = ParseSql(GetProcText(con, n)) })
		  .Dump();
	}
}

private static List<string> GetStoredProcedures(SqlConnection con)
{
	using (SqlCommand sqlCommand = new SqlCommand("select s.name+'.'+p.name as name from sys.procedures p " +
	  "inner join sys.schemas s on p.schema_id = s.schema_id order by name", con))
	{
		using (DataTable procs = new DataTable())
		{
			procs.Load(sqlCommand.ExecuteReader());
			return procs.Rows.OfType<DataRow>().Select(r => r.Field<String>("name")).ToList();
		}
	}
}

private static string GetProcText(SqlConnection con, string procName)
{
	using (SqlCommand sqlCommand = new SqlCommand("sys.sp_helpText", con)
	{
		CommandType = CommandType.StoredProcedure
	})
	{
		sqlCommand.Parameters.AddWithValue("@objname", procName);
		using (var proc = new DataTable())
		{
			try
			{
				proc.Load(sqlCommand.ExecuteReader());
				return string.Join("", proc.Rows.OfType<DataRow>().Select(r => r.Field<string>("Text")));
			}
			catch (SqlException)
			{
				return null;
			}
		}
	}
}

private static (TSqlFragment sqlTree, IList<ParseError> errors) ParseSql(string procText)
{
	var parser = new TSql150Parser(true);
	using (var textReader = new StringReader(procText))
	{
		var sqlTree = parser.Parse(textReader, out var errors);

		return (sqlTree, errors);
	}
}

You will see some missing references. Just press F4 and it will open the following screen:

Click the Add NuGet button and add the Microsoft.SqlServer.TransactSql.ScriptDom package, then run the program. You will see something like this:

You can even click on the ScriptTokenStream result, to see the list of tokens in the procedure:

You can also simplify the query by using the connections available in Linqpad. Just go to the Connections pane, add a new connection and point it to the WideWorldImporters database. Then select the connection in the connections combo and use this code:

void Main()
{
	ExecuteQuery<string>("select s.name+'.'+p.name as name from sys.procedures p " +
	  "inner join sys.schemas s on p.schema_id = s.schema_id order by name")
		  .Select(n => new 
		    { 
			  ProcName = n, 
			  Tree = ParseSql(ExecuteQuery<string>("exec sys.sp_helpText @objname={0}",n).FirstOrDefault()) 
			})
		  .Dump();
}

private static (TSqlFragment sqlTree, IList<ParseError> errors) ParseSql(string procText)
{
	var parser = new TSql150Parser(true);
	using (var textReader = new StringReader(procText))
	{
		var sqlTree = parser.Parse(textReader, out var errors);

		return (sqlTree, errors);
	}
}

You will see the same results. As you can see, you don’t even need to open the connection and create the command yourself. You can run queries against your databases the same way you would query any other data, and if you are a SQL person, you can run your queries directly in SQL. And, if you are brave and want to learn F#, you have here a really nice tool to learn with.

Conclusions

At first, the size and appearance of Linqpad may fool you, but it’s a very nice tool to work with, saving you a lot of time when trying out and debugging your code. If you have a code snippet that you want to test and improve, this is the tool to use. And one feature I didn’t mention that’s invaluable when you are optimizing code is timing: after each execution, Linqpad shows the execution time, so you know exactly how long the query took.