Azure Service Fabric Actor Polymorphism

August 11, 2016

Anyone who has been programming within the last 10 years has probably heard of polymorphism, but just in case you have not: it is the ability for one object to inherit from, and be treated as, another.  Much like a Hyundai and a Ferrari are both cars yet different in various ways, objects can share common features (like number of seats, engine size, number of doors, etc.) while being very different in other aspects.  Rather than having to reinvent the wheel for each one, you create a parent object that contains all the common features and let the child objects inherit from it, which gives them access to everything the parent defines.  There is a bit more to it than that, but it is enough to get you going.
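To put that in C# terms, here is a bare-bones, entirely made-up version of the car example (none of this code is part of the project we will build below):

public abstract class Car
{
    //The common features live in the base class.
    public int NumberOfSeats { get; set; }
    public int NumberOfDoors { get; set; }

    //Each child has to supply its own version of this.
    public abstract string Describe();
}

public class Hyundai : Car
{
    public override string Describe() => "A sensible car with " + NumberOfDoors + " doors.";
}

public class Ferrari : Car
{
    public override string Describe() => "A very fast car with " + NumberOfSeats + " seats.";
}

The payoff is that any code written against Car works with either child without caring which one it actually has, and that is exactly the trick the translation actors below will use.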

Like in my cars example, Service Fabric Actors allow for Polymorphism as well.  Not only do you get the benefit of not having to recreate the common features each time but, as will be shown below, you can create an instance of an actor without actually knowing which actor you will be creating beforehand.  Very powerful stuff!

Let’s start with an example.  Suppose you have a program that takes information from various 3rd parties and stores it somewhere (I will be using an Azure IoT Hub in my code shown in a different post).  The different 3rd parties use different ways of measuring height (like inches, meters, and centimeters) and you always need the information in centimeters, so you need to translate whatever height gets passed in into the unit you want.   You can do this by using actors that do the translations.  Granted, this is a very simple example and you could probably do this using other features of Azure, but for the sake of this example let’s assume it is really hard to do.

You could write a large switch statement in your code that calls the appropriate actor based on a field in the data (I will be using the company’s name), but then you have to change the code and update the service each time a new company begins to use your system (not that big of a deal with DevOps, but again let’s assume it would be hard).  In any case, the better way to do it is to store which translation actor to call for each company in some storage medium (like Azure SQL) and use that information to call the needed actor.  That way, if a new company starts using your system you can just create the new translation actor (if needed), deploy it, and update the storage medium with the needed information.

To begin with you need the base class that the others will inherit from.  Start by creating a new project using the “Stateful Service” template, called “IotTranslatorDemo”.  This will be the service that starts everything: it will gather the data from the IoT Hub and call the appropriate translator.   Once that is created, right click on the project, select “Add” and then “New Service Fabric Service…”.


In the screen that pops up, select “Actor Service” and name it something like “BaseTranslate”.  When creating the actor, if the name has a period in it (like “BaseTranslate.TranslateActor”) the code will use the first part as the namespace and the second part as the name of your actor.  Since I did not do that in this case, both the namespace and the actor name will be “BaseTranslate”.  If this were production code I would use a better name to avoid confusion later.  The code will automatically add “.Interfaces” to whatever name you use for the interface project as well.


This will create the new actor for you.  Now you need to modify it so that other classes can inherit from it.  The interface file in the “BaseTranslate.Interfaces” project does not need any changes other than to make sure that the methods you require the child actors to implement (and only those methods) are listed.  In my example, I only have the one method called “TranslateData” and the code looks like below (note that I removed the comments to save space):

namespace BaseTranslate.Interfaces
{
    public interface IBaseTranslate : IActor
    {
        Task TranslateData(string data);
    }
}

In the BaseTranslate.cs file the code is different than in a normal actor.  Normally you would just include the associated interface and add the code for the needed methods.  Since this is an actor that will be inherited from, you do not add the code to any methods that the child actors need to implement; rather, you mark the method and the class as abstract so that the child actors know they have to implement it.  Note that you can add methods with code into this class if you want all the child actors to have access to them (for instance if you have code that performs some sort of setup), but in this case I do not have any.  I have also turned off the state persistence since it is not needed.

namespace BaseTranslate
{
    [StatePersistence(StatePersistence.None)]
    public abstract class BaseTranslate : Actor, IBaseTranslate
    {
        public abstract Task TranslateData(string data);
    }
}

There are a few more steps that need to be taken in order for your base class to work.  Since we are never actually creating an instance of this class it does not need to be, and cannot be, registered.  So we need to open the “Program.cs” file in the “BaseTranslate” project and comment out the line that begins with “ActorRuntime.RegisterActorAsync” and the line below it (since it is a continuation of the first line) so that this actor is not registered.
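For reference, the registration call in the generated “Program.cs” looks roughly like the following once it is commented out (the exact lambda depends on the SDK version that generated the project, so treat this as a sketch rather than the exact generated code):

//ActorRuntime.RegisterActorAsync<BaseTranslate>(
//    (context, actorType) => new ActorService(context, actorType)).GetAwaiter().GetResult();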

The last thing we need to do is to modify the “ServiceManifest.xml” file, which is located under the “PackageRoot” folder in the “BaseTranslate” project.   If you were to try to compile your solution right now you would get an error in this file since the “&lt;ServiceTypes/&gt;” and “&lt;Endpoints/&gt;” sections are not filled in.  Normally this is done for you automatically, but since this is a base class it needs to be done manually.

Make the changes as shown below.  The lines that need to be filled in are the “StatelessServiceType” and “Endpoint” entries.  Make sure that the “ServiceTypeName” and the first part of the “Endpoint” name match the name of your actor.

<?xml version="1.0" encoding="utf-8"?>
<ServiceManifest xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Name="BaseTranslatePkg" Version="1.0.0" xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceTypes>
    <StatelessServiceType ServiceTypeName="BaseTranslate" />
  </ServiceTypes>
  <CodePackage Name="Code" Version="1.0.0">
    <!-- The content will be generated during build -->
  </CodePackage>
  <ConfigPackage Name="Config" Version="1.0.0" />
  <Resources>
    <Endpoints>
      <Endpoint Name="BaseTranslateEndpoint" />
    </Endpoints>
  </Resources>
</ServiceManifest>

So now we have the base actor created and ready to go.  It is just a matter of creating a new actor (or more) that inherits from it.  Once again right click on the main project, select “Add” and then “New Service Fabric Service…”.  As before, select “Actor Service” and give it an appropriate name.  I used “TranslateCH” in this example (for Contoso Health).

The first thing you will need to do is add a reference to the “BaseTranslate.Interfaces” project to the “TranslateCH.Interfaces” project so that you can use it.  Then add a “using” statement for that namespace to the “ITranslateCH.cs” file.  You can then remove any methods that you do not need, noting that you do NOT need to have the “TranslateData” method listed since it will be coming from the base actor.  My interface file is shown below:

using BaseTranslate.Interfaces;

namespace TranslateCH.Interfaces
{
    public interface ITranslateCH : IBaseTranslate { }
}

In the “TranslateCH” project you will need to add a reference to both the “BaseTranslate.Interfaces” and the “BaseTranslate” projects.  Open the “TranslateCH.cs” file, as there are a few changes that need to be made.

First, on the class definition line, replace the “Actor” with “BaseTranslate.BaseTranslate” (remember the naming convention discussion above?).  This will tell the code to follow up the chain of inheritance through “BaseTranslate” to get to “Actor”.  No matter what, you will need either a direct or indirect reference to “Actor”.

Second, add the line [ActorService(Name = "TranslateCH")] to your code right under the namespace definition.  Since your class is implementing the actor interface, this line will tell the client the correct service type to use when connecting.

Finally, add the code for all the needed methods.  In this case only the “TranslateData” method will need to be defined.

My code in this example is shown below.  There is a lot of code in the “TranslateData” method that I will talk about in a later blog post.  In a nutshell it is taking the data, translating it, and passing it to another actor to save.

namespace TranslateCH
{
    [ActorService(Name = "TranslateCH")]
    internal class TranslateCH : BaseTranslate.BaseTranslate, ITranslateCH
    {
        public override async Task TranslateData(string data)
        {
            DataTransmission dataObject = new DataTransmission();
            JsonConvert.PopulateObject(data, dataObject);

            double height = Convert.ToDouble(dataObject.height);
            //Convert meters to centimeters.
            height = height * 100;
            dataObject.height = height.ToString();

            string serializedMessage = JsonConvert.SerializeObject(dataObject);

            //Pass the translated data to the actor that saves it.
            Uri sqlActorUri = new ServiceUriBuilder("SQLActorService").ToUri();
            var sqlActorProxy = ActorProxy.Create<ISQLActor>(new ActorId(Guid.NewGuid()), sqlActorUri);
            await sqlActorProxy.SaveData(serializedMessage);
        }
    }
}

Repeat this process for as many child actors as needed.  Once you have that all done, calling these actors is easy.  In the Stateful Service project that you originally created, open the file that corresponds to the name of the service.  In the “RunAsync” method you will most likely be calling the needed actors so you can use code similar to what is shown below.

In the code below, assume the “actorName” variable is already filled in with the name of the actual translation actor to use.  Then it is just a matter of using the “IBaseTranslate” type rather than the actual type of the actor when creating the proxy.  Since all the translation actors implement that interface, they all have the “TranslateData” method, which you can call as needed without knowing which actual actor will be used at design time.

//determine the URI of the actor
Uri actorUri = new ServiceUriBuilder(actorName).ToUri();
//Create the proxy to the actor.  It is not actually being created yet.  You can verify this by putting a break
//point in the "OnActivateAsync" method of the actor
IBaseTranslate translationProxy = ActorProxy.Create<IBaseTranslate>(new ActorId(Guid.NewGuid()), actorUri);
//perform the translation and store the information (via another actor)
await translationProxy.TranslateData(data);
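How the “actorName” variable gets filled in is up to you.  Below is a minimal sketch of the lookup idea; “ITranslatorRegistry” and its method are hypothetical names I made up for illustration, standing in for whatever storage medium (an Azure SQL table, for example) maps a company name to its translation actor service:

//Hypothetical lookup interface - not part of the actual project.
public interface ITranslatorRegistry
{
    Task<string> GetActorServiceNameAsync(string companyName);
}

public async Task DispatchAsync(ITranslatorRegistry registry, string companyName, string data)
{
    //Find which translation actor service handles this company (e.g. "Contoso Health" -> "TranslateCH").
    string actorName = await registry.GetActorServiceNameAsync(companyName);

    //Same pattern as above: only the base interface is used, so no switch statement is needed.
    Uri actorUri = new ServiceUriBuilder(actorName).ToUri();
    IBaseTranslate translationProxy = ActorProxy.Create<IBaseTranslate>(new ActorId(Guid.NewGuid()), actorUri);
    await translationProxy.TranslateData(data);
}

When a new company comes on board you deploy its translator (if it needs one) and add a row to that storage medium; the dispatch code never changes.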

Introduction to Microservices

July 24, 2016

I have been playing around a lot with Azure Service Fabric which can be used for microservices.   When explaining this to some coworkers, who were not IT people, I realized they didn’t know what microservices are so I thought I would try to explain it in non-technical terms.

In technical terms, according to Wikipedia, “services in a microservice architecture are processes that communicate with each other over the network in order to fulfill a goal”.  So what does that actually mean?  Let’s break it down a bit.

The first half of the sentence states that a microservice is a process.  So what is a process?  Just about anything can be a process: in everyday life, making a pizza, getting the house ready for bed at night, taking a shower, and so on.  Basically it is a series of steps to produce an outcome.  So for making a pizza you would have steps like rolling out the dough, adding the sauce, adding the toppings, preheating the oven, and baking at a certain temperature for a certain amount of time.

Since not all pizzas are made the same way you may have different processes for creating different pizzas or you can use the same process with some optional steps, like using pesto sauce rather than tomato sauce, to make the different pizzas.  The main point is that there are steps to take to get something done.

The rest of the sentence states that the various processes communicate with each other to complete a job.  So to take our pizza scenario to the next level imagine you work in a pizzeria.  You may be the person that makes the pizza but then you hand it off to someone else to do the cooking and someone else to serve it to the customer.  Each person’s job would be a separate process and the communication could be as simple as shouting to each other that your step is done and the next step is ready to go.  All of you do your own thing, the process, and communicate with each other, the shouting, to get the goal, getting a pizza to the customer, complete.

So if you think of it this way, each person is a separate microservice, and they communicate with each other to achieve the goal of getting the pizza out.   With a bit more thought you can see that this can be expanded to include the hostess, the bus boy, the manager, etc…

Creating an Azure Key Vault and adding a certificate

April 14, 2016

This is a prelude to creating a secure Azure Service Fabric system in Azure using the Portal.  Granted, doing things through the portal is supposed to be easy but that is not always the case.  In any case, let me walk you through what I have found.  Even if you are not going to use Service Fabric, this will explain how to create a vault and upload a certificate.

The first thing is that if you want to secure your Service Fabric (and who doesn’t) you need to have a certificate stored in an Azure Key Vault.  Remember when I said not everything is easy when done through the Portal?  This is one of those cases.  Actually, you cannot do this through the portal at all; you need to use PowerShell.  I found a couple of articles that walked me through creating a vault, but when I tried to create the Service Fabric cluster I always got an error about the vault not being enabled for deployment.  Turns out the error message was right.  When I created my vault using the instructions found on the web page I was reading, there was no mention of enabling it for deployment.  Luckily it is just a parameter that needs to be added to the PowerShell command.  The commands I used are as follows (replace the placeholders with the names of your resource group and vault).  Sorry about the lousy formatting, but for some reason I do not have much control over formatting right now.

#Login to your Azure Account

Login-AzureRmAccount

#Set the default subscription ID.  The subscription ID will be shown after you login

Set-AzureRmContext -SubscriptionId <subscriptionId>

#Create the resource group for the vault.  If you already have one you want to use, you can skip this step.

New-AzureRmResourceGroup -Name <resourcegroupname> -Location <location>

#Create the new vault.  Make sure the resource group name and location match the previous line.

New-AzureRmKeyVault -VaultName <vaultname> -ResourceGroupName <resourcegroupname> -Location <location> -EnabledForDeployment


Now that you have a place to store your certificate, you will need a certificate.  There are a couple of ways to get one, including the newly announced Azure App Service Certificates, and uploading an existing certificate into the Key Vault isn’t too big of a deal, so I am going to concentrate on using a self-signed certificate.  Granted, you would only want to do this for testing purposes, but it beats paying about $70/year for a “real” certificate.  If you already have a certificate you want to use, you can skip ahead to Step 2.

Whether or not you already have a certificate, it will make your life easier to download the “ServiceFabricRPHelpers” code from GitHub.  The name says Service Fabric, but it will help upload a certificate in any event.  Once you have it downloaded (I went up two levels from there to the “Service-Fabric” level and downloaded everything in a Zip file) there are a few more PowerShell commands to run.   Start PowerShell as an Administrator, go to the directory where the ServiceFabricRPHelpers scripts reside (in my case it was “C:\Service-Fabric\Scripts”), and run the following commands:

#Login to your Azure Account.  Not needed if you continued with the same PowerShell session as above

Login-AzureRmAccount

#Unblock the file so that you do not need to keep saying you agree that you want to use it.  I found this also got rid of an intermittent error I was getting from another file.

Unblock-File -Path "C:\Service-Fabric\Scripts\ServiceFabricRPHelpers\ServiceFabricRPHelpers.psm1"

#Import the module so that you can use it.

Import-Module "C:\Service-Fabric\Scripts\ServiceFabricRPHelpers\ServiceFabricRPHelpers.psm1"

#This is the big command.  It will create a self-signed certificate and upload it.  If you look at the ServiceFabricRPHelpers.psm1 file you will see a LOT of code that gets run in the background. Make sure to use the same values as before.

Invoke-AddCertToKeyVault -SubscriptionId <subscription> -ResourceGroupName <resourcegroup> -Location <location> -VaultName <vaultname> -CertificateName <certname> -Password <certpassword> -CreateSelfSignedCertificate -DnsName <dnsname> -OutputPath <locationForCertOnLocalComputer>

#Since this is a self-signed certificate you need to import it into your machine’s trusted store.

Import-PfxCertificate -Exportable -CertStoreLocation Cert:\CurrentUser\TrustedPeople -FilePath <locationAndCertName> -Password (Read-Host -AsSecureString -Prompt "Enter Certificate Password ")
Import-PfxCertificate -Exportable -CertStoreLocation Cert:\CurrentUser\My -FilePath <locationAndCertName> -Password (Read-Host -AsSecureString -Prompt "Enter Certificate Password ")

After you run “Invoke-AddCertToKeyVault”, a screen of information will be displayed, including the certificate thumbprint, source vault, and certificate URL.  It is highly recommended that you save those values for later use.


Here is a complete example of the commands:

Set-AzureRmContext -SubscriptionId bf32d86a-b46D-4503-95c8-38c744f46389
New-AzureRmResourceGroup -Name 'ShareBlogKeyVault' -Location 'East US'
New-AzureRmKeyVault -VaultName 'servicefabricvault' -ResourceGroupName 'ShareBlogKeyVault' -Location 'East US' -EnabledForDeployment

Unblock-File -Path "C:\Service-Fabric\Scripts\ServiceFabricRPHelpers\ServiceFabricRPHelpers.psm1"
Import-Module "C:\Service-Fabric\Scripts\ServiceFabricRPHelpers\ServiceFabricRPHelpers.psm1"

Invoke-AddCertToKeyVault -SubscriptionId bf32d86a-b46D-4503-95c8-38c744f46389 -ResourceGroupName 'ShareBlogKeyVault' -Location 'East US' -VaultName 'servicefabricvault' -CertificateName 'servicefabriccert' -Password 'Pass@word1' -CreateSelfSignedCertificate -DnsName '' -OutputPath 'C:\servicefabric'

Import-PfxCertificate -Exportable -CertStoreLocation Cert:\CurrentUser\TrustedPeople -FilePath C:\servicefabric\servicefabriccert.pfx -Password (Read-Host -AsSecureString -Prompt "Enter Certificate Password ")
Import-PfxCertificate -Exportable -CertStoreLocation Cert:\CurrentUser\My -FilePath C:\servicefabric\servicefabriccert.pfx -Password (Read-Host -AsSecureString -Prompt "Enter Certificate Password ")

Overall plan for next series of posts

January 11, 2016

The image below shows the high-level architecture of the application I am going to be blogging about (always subject to change, since Azure is constantly changing and I may find new and interesting ways of doing things).


Here is a brief explanation:

  1. The mobile application (or mobile device, or dummy application, or whatever) is going to be sending information into an IoT Hub.  This will allow for many different instances of the app to securely send information to be processed. In my next post I will go into more detail about why I chose an IoT Hub over an Event Hub and how I plan to get the information into it.
  2. The IoT Hub will be used to store the information until needed.
  3. The Stream Analytics job will take one route for the information.  This job will process all the information that comes across and will store it in an Azure SQL Database.  That information can then be processed using things like PowerBI to show reports.
  4. The Azure SQL database will store all the information that has come across for future processing as opposed to the DocumentDB database.
  5. The Service Fabric apps will be used to determine if there was some sort of event in the data, where the event could be high blood pressure, low blood pressure, high white cell count, etc…   There will be one Service Fabric application to read the information from the hub, check to see if there is an event (i.e. high blood pressure), and push the data to the appropriate secondary Service Fabric app to do some processing on the data to make sure it really is an event (i.e. the high blood pressure may be normal due to some other circumstances) before storing it.
  6. The DocumentDB will store only those items that had an event associated with them.  Using this mainly to show different ways of storing information in Azure.
  7. The Reporting will be handled via PowerBI.  There may need to be an additional step between the DocumentDB and PowerBI since, as of this writing, the PowerBI add-in to read DocumentDB databases is still in beta and may not be ready for use when I get to this step.

Along the way I plan on using TFS Online to push out all my code using DevOps practices.  I am also hoping to create a mobile application using PowerApps to populate the data if I can get into the program.

Unable to locate PowerShell command

December 26, 2015

The other day I was trying out Azure’s Service Fabric on my new Surface and I was not able to get it to work.  I was constantly getting an error that “Remove-ServiceFabricNodeConfiguration” was not found.  This was really strange since I copied the installation files directly from another laptop that did work correctly.

After much head banging it turns out that I was running the 32-bit version of PowerShell on my Surface and the 64-bit version on my other laptop.  One way to find out which version you are running is to look at the shortcut that you use to launch the program.  If it points to “C:\Windows\SysWOW64” it is the 32-bit version, and if it points to “C:\Windows\system32” it is the 64-bit version.   Once I changed the shortcut to point to the 64-bit version everything installed correctly.

There is a thread that shows how you can determine which version of PowerShell you are running from within PowerShell itself, located here:

Welcome to the new ShareBlog

December 15, 2015

I have noticed that I have not posted anything on my blog for quite some time (more than 2 years to be precise) so I figure it is time to get back and share what I learn in my life as a consultant.

Some things have changed in those 2 years.  Numerous jobs (which we will not be discussing) and the fact that I have decided to change the focus of my work from SharePoint to Azure.  Don’t get me wrong, I still use and work with SharePoint, but I feel it is time for a change.  I have been working with SharePoint for around 15 years now (even before it was SharePoint) and I am ready for something new, so I fell into an Azure project (which is how I actually started with SharePoint as well) and really enjoyed working with it.

What I propose doing next is to work on a somewhat hypothetical application that I have been kicking around in my mind for some time now.  I will create another blog post with the details but the short version is it will be a medical testing program that takes information from a hypothetical wearable device and uploads the data into Azure where it can be manipulated and reported on.  I plan on starting small and working on expanding the application into new areas and new technologies.  One thing with Azure right now is it is constantly changing so there will always be new ways to do something.

If I get ambitious I will also work on writing a JavaScript front-end that the hypothetical doctors and staff would use to interact with the information.  Let me just make one point clear.  This is hypothetical only and I have no idea if the way the program runs would work in the real world or pass FDA standards for testing. 

The value of old data

February 15, 2013

Recently in one of my projects we had a vendor that would deliver code that they swore they tested and passed all validation tests only to have it fail on the first couple of tests every time.  This was frustrating for us, since we were paying this company to test the code before sending it, and for them, since their reputation was getting hurt with each “bad” release.  After they tested the code again and we tested the code again and we each received different answers, we decided to have a web conference to try to figure out what was going on.  They started by creating a new entry and showed how everything worked fine and then we showed them our entry and how the code bombed with the first test.

I could see some lightbulbs go off when they realized we were using entries that were created with an old version of the software.  I won’t go into much detail about some of their suggestions other than to say that completely wiping out our database and starting over is not an option each time a new version of the software comes out (which is what they did before testing). So while they tested with entries created with that version and it worked great, they never tested against legacy entries.   Imagine if every time Microsoft released a new version of Windows you had to start over from scratch! <cough>Windows 8 RT</cough> 🙂

The moral of this story (and it builds upon an earlier post about knowing your customer) is know your customer’s data and make sure legacy data works as well.  They may not like the idea of wiping out everything and starting over.

SharePoint 2010: Showing the Context menu on any field in a List View when defining lists

July 9, 2012

I am working on a project that requires me to show the context menu on columns other than the default “Title” field that normally shows up (at least when creating a list based on the “Item” template).   While doing research I came across two different ways to do this when using XML to define the lists.

The first just involves renaming the “Title” field to something else.  I was taking this approach as I was playing with Visual Studio 2012 RC’s new wizards for creating lists and content types (which are great!!!).   While the wizard allows you to change “Title” to whatever you want, it never changed the text that shows up in the view header, only when adding/editing the item.  Close but no cigar.  Looking at the code that gets generated I saw:





This sets the field when adding/editing a new entry but doesn’t change the header.   After much head banging and PowerShell scripting I figured out that I needed to add:

<Field ID="{fa564e0f-0c70-4ab9-b863-0177e6ddd247}" Type="Text" Name="Title" DisplayName="Name" Required="TRUE" SourceID="http://schemas.microsoft.com/sharepoint/v3" StaticName="Title" MaxLength="255" />





<Field ID="{82642ec8-ef9b-478f-acf9-31f7d45fbc31}" Type="Text" Name="LinkTitle" DisplayName="Name" />

as well. This is the field that is typically named “Title (linked to item with edit menu)”.  Note that the “DisplayName” in both lines has been changed to what I want to actually show.  So now the header will be “Name” and if I create a new view I will see “Name (linked to item with edit menu)”.

That worked fine for lists that use the existing “Title” field, but what about lists that don’t have it, or what if I needed to change the field that is used to display the menu depending on the view (which was actually the case)?  For this I turned to code….sort of.  I was looking through the object model and saw that I can use “SPField.ListItemMenu” to set the field that I wanted to use.  Taking a chance I added ListItemMenu="True" to the field I wanted to use and it worked!

So there you go.  Two different ways to change where the context menu shows!

Conditional requiring of fields using SharePoint 2010 List Validation

February 28, 2012

My customer wanted to be able to make a field required only if another field had a specific value, AKA Conditional Requiring.   While SharePoint has the ability to make fields required (or even check for uniqueness) it still does not yet have the ability to make a field required based on another.  That is where List Validation comes in.

If you do not already know, List Validation (along with Column Validation) allows you to perform checks on fields to make sure that they pass certain requirements.  They can be as simple as checking to make sure a field has a specific value, up to some rather complex (and cool) validations.  I forget who did it, but I saw a “poor man’s” validation to make sure a human was entering a comment by making sure they entered the first 3 letters of today’s day.

Here is my setup.  I have two different fields, FieldA and FieldB.   The field names have been changed to protect the innocent 🙂   They are both choice fields, although I would think they could be just about any field type (note: certain field types, like Lookup, will not work here and do not actually show up in the validation settings form).  FieldA is required while FieldB is not.  I also set up FieldB so it does not have any default value.

In my case I need to check two different things.  The first was that FieldA contained a specific value, and if it did (and only if it did) I needed to make sure that FieldB had a selected value.  The first part is simple:

=[FieldA]="ShareBlog"

This just states that if FieldA has the value of “ShareBlog” then the validation passes, otherwise it fails.  If you want to output text, rather than just check to see if the validation passes, you can use the IF function.  The above formula can be rewritten as:

=IF([FieldA]="ShareBlog","Pass","Fail")

This also checks to see if FieldA has the value of “ShareBlog” but in this case it will output “Pass” if it does or “Fail” if it doesn’t.   Pretty close to what I need but not quite there.  I could also use:

=IF([FieldA]="ShareBlog","True","False")

but this will just output “True” or “False” (and the validation will actually still fail).   Finally I realized I can take the quotes off of “True” and “False” and just use the static names.   Now I will get my validation to pass if FieldA has the value of “ShareBlog” again.  Almost there.

Now I just need to check to make sure that FieldB has a value.   That is simple enough:

=IF([FieldB]<>"",TRUE,FALSE)

What I need to do now is to embed the two IF statements together to create the new one:

=IF([FieldA]="ShareBlog",IF([FieldB]<>"",TRUE,FALSE),TRUE)

Sharp-eyed readers may have noticed that for the outer IF statement I am actually returning TRUE no matter if FieldA has the value of “ShareBlog” or not.   That is not quite true.   If FieldA DOES contain “ShareBlog” then I want to make sure that FieldB has a value, so I go into the second IF statement, which will return TRUE if FieldB has a value or FALSE if it doesn’t.   If FieldA DOESN’T contain “ShareBlog” then I don’t care what FieldB has for a value, so I will always return TRUE.

That should do it.   You can throw some AND functions in there as needed if you need to check more fields.

Creating a Core Results web part for FAST for SharePoint

March 30, 2011

There are a few postings out there that explain how to subclass the core results web part for SharePoint to add your own functionality.  Those work great…until you try to use them with FAST for SharePoint.  For some reason they just do not seem to work.   After much pounding of head against desk and then using .Net Reflector I figured out that there are two variables that need to be set.

base.ShowActionLinks = false;   //This tells the code it is a results web part and not an action links web part

base.bForceOnInit = false;   //Not quite sure why this is needed, MS has it marked as internal use only but it was the only way I could get the web part to work
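
If it helps to see those two lines in context, here is a minimal sketch of the kind of subclass I mean.  The class name is made up, and I am assuming the flags can be set in OnInit; the original code is not shown in this post, so adjust to wherever your subclass does its setup:

public class MyFastCoreResultsWebPart : Microsoft.Office.Server.Search.WebControls.CoreResultsWebPart
{
    protected override void OnInit(System.EventArgs e)
    {
        //Tell the code it is a results web part and not an action links web part.
        base.ShowActionLinks = false;
        //Marked as internal use only by MS, but required to get the subclass working with FAST.
        base.bForceOnInit = false;
        base.OnInit(e);
    }
}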


Hope this helps others, and if anyone can explain the “bForceOnInit” variable I would appreciate it.