Introduction to Microservices

July 24, 2016

I have been playing around a lot with Azure Service Fabric, which can be used to build microservices.  When explaining this to some coworkers who are not IT people, I realized they didn't know what microservices are, so I thought I would try to explain them in non-technical terms.

In technical terms, according to Wikipedia, "services in a microservice architecture are processes that communicate with each other over the network in order to fulfill a goal."  So what does that actually mean?  Let's break it down a bit.

The first half of the sentence states that a microservice is a process.  So what is a process?  Just about anything can be a process; in everyday life, making a pizza, getting the house ready for bed at night, or taking a shower are all processes.  Basically, it is a series of steps that produce an outcome.  So for making a pizza you would have steps like roll out the dough, add the sauce, add the toppings, preheat the oven, bake at a certain temperature for a certain amount of time, and so on.

Since not all pizzas are made the same way you may have different processes for creating different pizzas or you can use the same process with some optional steps, like using pesto sauce rather than tomato sauce, to make the different pizzas.  The main point is that there are steps to take to get something done.

The rest of the sentence states that the various processes communicate with each other to complete a job.  To take our pizza scenario to the next level, imagine you work in a pizzeria.  You may be the person that makes the pizza, but then you hand it off to someone else to do the cooking and someone else to serve it to the customer.  Each person's job would be a separate process, and the communication could be as simple as shouting to each other that your step is done and the next step is ready to go.  Each of you does your own thing (the process) and communicates with the others (the shouting) to complete the goal (getting a pizza to the customer).

So if you think of it this way, each person would be a separate microservice, and the microservices communicate with each other to achieve the goal of getting the pizza out.  With a bit more thought you can see that this can be expanded to include the hostess, the bus boy, the manager, etc.


Creating an Azure Key Vault and adding a certificate.

April 14, 2016

This is a prelude to creating a secure Azure Service Fabric cluster using the Portal.  Granted, doing things through the Portal is supposed to be easy, but that is not always the case.  In any case, let me walk you through what I have found.  Even if you are not going to use Service Fabric, this will explain how to create a vault and upload a certificate.

First, if you want to secure your Service Fabric cluster (and who doesn't?), you need to have a certificate stored in an Azure Key Vault.  Remember when I said not everything is easy when done through the Portal?  This is one of those cases.  Actually, you cannot do this through the Portal at all; you need to use PowerShell.  I found a couple of articles that walked me through creating a vault, but when I tried to create the Service Fabric cluster I always got an error about the vault not being enabled for deployment.  Turns out the error message was right.  When I created my vault using the instructions on the web page I was reading, there was no mention of enabling it for deployment.  Luckily it is just a parameter that needed to be added to the PowerShell command.  The commands I used are as follows (replace the placeholders with the names of your resource group and vault).  Sorry about the lousy formatting, but for some reason I do not have much control over formatting right now.

#Login to your Azure Account

Login-AzureRmAccount

#Set the default subscription ID.  The subscription ID will be shown after you login

Set-AzureRmContext -SubscriptionId <subscriptionId>

#Create the resource group for the vault.  If you already have one you want to use, you can skip this step.

New-AzureRmResourceGroup -Name <resourcegroupname> -Location <location>

#Create the new vault.  Make sure the resource group name and location match the previous line.

New-AzureRmKeyVault -VaultName <vaultname> -ResourceGroupName <resourcegroupname> -Location <location> -EnabledForDeployment
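
If you want to double-check that the vault really was created with deployment enabled (this is optional, just something I find handy), you can ask for the vault's properties and look for "Enabled For Deployment?" in the output:

#Show the vault's settings

Get-AzureRmKeyVault -VaultName <vaultname> -ResourceGroupName <resourcegroupname>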

 

Now that you have a place to store your certificate, you will need a certificate.  There are a couple of ways to get a certificate, including the newly announced Azure App Service Certificates, and uploading them into the Key Vault isn't too big of a deal, so I am going to concentrate on using a self-signed certificate.  Granted, you would only want to do this for testing purposes, but it beats paying about $70/year for a "real" certificate.  If you have a certificate you want to use, you can go to https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-security/ and look at Step 2.

Whether or not you have a certificate, it will make your life easier to download the "ServiceFabricRPHelpers" code from GitHub.  The name specifies Service Fabric, but it will help upload a certificate in any event.  Once you have that downloaded (I went up two levels from there to the "Service-Fabric" level and downloaded everything in a Zip file) there are more PowerShell commands to run.  Start PowerShell as an Administrator, go to the directory where the ServiceFabricRPHelpers files reside (in my case it was "C:\Service-Fabric\Scripts"), and run the following commands:

#Login to your Azure Account.  Not needed if you continued with the same PowerShell session as above

Login-AzureRmAccount

#Unblock the file so that you do not need to keep saying you agree that you want to use it.  I found this also got rid of an intermittent error I was getting from another file.

Unblock-File -Path “C:\Service-Fabric\Scripts\ServiceFabricRPHelpers\ServiceFabricRPHelpers.psm1”

#Import the module so that you can use it.

Import-Module “C:\Service-Fabric\Scripts\ServiceFabricRPHelpers\ServiceFabricRPHelpers.psm1”

#This is the big command.  It will create a self-signed certificate and upload it.  If you look at the ServiceFabricRPHelpers.psm1 file you will see a LOT of code that gets run in the background. Make sure to use the same values as before.

Invoke-AddCertToKeyVault -SubscriptionId <subscription> -ResourceGroupName <resourcegroup> -Location <location> -VaultName <vaultname> -CertificateName <certname> -Password <certpassword> -CreateSelfSignedCertificate -DnsName <dnsname> -OutputPath <locationForCertOnLocalComputer>

#Since this is a self-signed certificate you need to import it into your machine’s trusted store.

Import-PfxCertificate -Exportable -CertStoreLocation Cert:\CurrentUser\TrustedPeople -FilePath <locationAndCertName> -Password (Read-Host -AsSecureString -Prompt “Enter Certificate Password “)
Import-PfxCertificate -Exportable -CertStoreLocation Cert:\CurrentUser\My -FilePath <locationAndCertName> -Password (Read-Host -AsSecureString -Prompt “Enter Certificate Password “)

After you run "Invoke-AddCertToKeyVault" there will be a screen of information, including the certificate thumbprint, source vault ID, and certificate URL.  It is highly recommended that you save those values for later use.
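
If you do close the window without saving them, the values can usually be recovered afterwards.  The commands below are a rough sketch of how I would go about it (adjust the placeholders to your setup, and note that I am assuming the helper stored the certificate as a secret under the certificate name you supplied): the thumbprint can be read from the certificate that was imported into the local store, the certificate URL is the identifier of the secret in the vault, and the source vault is just the resource ID of the vault itself.

#Get the thumbprint from the local certificate store.  Adjust the filter to match your DNS name.

Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -like "*<dnsname>*" } | Select-Object Subject, Thumbprint

#Get the certificate URL (the secret identifier) back out of the vault.

(Get-AzureKeyVaultSecret -VaultName <vaultname> -Name <certname>).Id

#Get the source vault value (the resource ID of the vault).

(Get-AzureRmKeyVault -VaultName <vaultname> -ResourceGroupName <resourcegroupname>).ResourceId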

 

Here is a complete example of the commands:

Login-AzureRmAccount
Set-AzureRmContext -SubscriptionId bf32d86a-b46D-4503-95c8-38c744f46389
New-AzureRmResourceGroup -Name ‘ShareBlogKeyVault’ -Location ‘East US’
New-AzureRmKeyVault -VaultName ‘servicefabricvault’ -ResourceGroupName ‘ShareBlogKeyVault’ -Location ‘East US’ -EnabledForDeployment

Unblock-File -Path “C:\Service-Fabric\Scripts\ServiceFabricRPHelpers\ServiceFabricRPHelpers.psm1”
Import-Module “C:\Service-Fabric\Scripts\ServiceFabricRPHelpers\ServiceFabricRPHelpers.psm1”

Invoke-AddCertToKeyVault -SubscriptionId bf32d86a-b46D-4503-95c8-38c744f46389 -ResourceGroupName ‘ShareBlogKeyVault’ -Location ‘East US’ -VaultName ‘servicefabricvault’  -CertificateName ‘servicefabriccert’ -Password ‘Pass@word1’ -CreateSelfSignedCertificate -DnsName ‘www.shareblog.com’ -OutputPath ‘C:\servicefabric’

Import-PfxCertificate -Exportable -CertStoreLocation Cert:\CurrentUser\TrustedPeople -FilePath C:\servicefabric\servicefabriccert.pfx -Password (Read-Host -AsSecureString -Prompt “Enter Certificate Password “)
Import-PfxCertificate -Exportable -CertStoreLocation Cert:\CurrentUser\My -FilePath C:\servicefabric\servicefabriccert.pfx -Password (Read-Host -AsSecureString -Prompt “Enter Certificate Password “)


Overall plan for next series of posts

January 11, 2016

The image below shows the high-level architecture of the application I am going to be blogging about (always subject to change, since Azure is constantly changing and I may find new and interesting ways of doing things).

[Image: AzureDemoProgram, the high-level architecture diagram]

Here is a brief explanation:

  1. The mobile application (or mobile device, or dummy application, or whatever) is going to be sending information into an IoT Hub.  This will allow many different instances of the app to securely send information to be processed.  In my next post I will go into more detail on why I chose an IoT Hub over an Event Hub and how I plan to get the information into it.
  2. The IoT Hub will be used to store the information until needed.
  3. The Stream Analytics job will handle one route for the information.  This job will process all the information that comes across and will store it in an Azure SQL Database.  That information can then be processed using things like PowerBI to show reports.
  4. The Azure SQL database will store all the information that has come across for future processing as opposed to the DocumentDB database.
  5. The Service Fabric apps will be used to determine if there was some sort of event in the data, where the event could be high blood pressure, low blood pressure, high white cell count, etc.  There will be one Service Fabric application to read the information from the hub, check to see if there is an event (e.g. high blood pressure), and push the data to the appropriate secondary Service Fabric app to do some processing on the data to make sure it really is an event (e.g. the high blood pressure may be normal due to some other circumstances) before storing it.
  6. The DocumentDB database will store only those items that had an event associated with them.  I am using it mainly to show different ways of storing information in Azure.
  7. The Reporting will be handled via PowerBI.  There may need to be an additional step between the DocumentDB and PowerBI since, as of this writing, the PowerBI add-in to read DocumentDB databases is still in beta and may not be ready for use when I get to this step.

Along the way I plan on using TFS Online to push out all my code using DevOps practices.  I am also hoping to create a mobile application using PowerApps to populate the data if I can get into the program.


Unable to locate PowerShell command

December 26, 2015

The other day I was trying out Azure’s Service Fabric on my new Surface and I was not able to get it to work.  I was constantly getting an error that “Remove-ServiceFabricNodeConfiguration” was not found.  This was really strange since I copied the installation files directly from another laptop that did work correctly.

After much head banging it turns out that I was running the 32-bit version of PowerShell on my Surface and the 64-bit version on my other laptop.  One way to find out which version you are running is to look at the shortcut that you use to launch the program.  If it points to a path under "C:\Windows\SysWOW64" it is the 32-bit version, and if it points to a path under "C:\Windows\System32" it is the 64-bit version.  Once I changed the shortcut to point to the 64-bit version everything installed correctly.

There is a thread that shows how you can determine the version of PowerShell you are running while running it located here:

http://stackoverflow.com/questions/8588960/determine-if-current-powershell-process-is-32-bit-or-64-bit
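
If you would rather not dig through the thread, the short version (as I understand it) is that you can ask PowerShell itself which flavor it is running as:

#Returns True in a 64-bit session and False in a 32-bit session (needs .NET 4 or later)

[Environment]::Is64BitProcess

#Older trick that works everywhere: 8 means 64-bit, 4 means 32-bit

[IntPtr]::Size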


Welcome to the new ShareBlog

December 15, 2015

I have noticed that I have not posted anything on my blog for quite some time (more than 2 years to be precise) so I figure it is time to get back and share what I learn in my life as a consultant.

Some things have changed in the 2 years.  Numerous jobs (which we will not be discussing) and the fact that I have decided to change the focus of my work from SharePoint to Azure.  Don't get me wrong, I still use and work with SharePoint, but I feel it is time for a change.  I have been working with SharePoint for around 15 years now (even before it was SharePoint), so I fell into an Azure project (which is how I actually started with SharePoint as well) and really enjoyed working with it.

What I propose doing next is to work on a somewhat hypothetical application that I have been kicking around in my mind for some time now.  I will create another blog post with the details but the short version is it will be a medical testing program that takes information from a hypothetical wearable device and uploads the data into Azure where it can be manipulated and reported on.  I plan on starting small and working on expanding the application into new areas and new technologies.  One thing with Azure right now is it is constantly changing so there will always be new ways to do something.

If I get ambitious I will also work on writing a JavaScript front-end that the hypothetical doctors and staff would use to interact with the information.  Let me just make one point clear.  This is hypothetical only and I have no idea if the way the program runs would work in the real world or pass FDA standards for testing. 


The value of old data

February 15, 2013

Recently in one of my projects we had a vendor that would deliver code that they swore they tested and passed all validation tests only to have it fail on the first couple of tests every time.  This was frustrating for us, since we were paying this company to test the code before sending it, and for them, since their reputation was getting hurt with each “bad” release.  After they tested the code again and we tested the code again and we each received different answers, we decided to have a web conference to try to figure out what was going on.  They started by creating a new entry and showed how everything worked fine and then we showed them our entry and how the code bombed with the first test.

I could see some lightbulbs go off when they realized we were using entries that were created with an old version of the software.  I won't go into much detail about some of their suggestions other than to say that completely wiping out our database and starting over is not an option each time a new version of the software comes out (which is what they did before testing).  So while they tested with entries created with that version and it worked great, they never tested against legacy entries.  Imagine if every time Microsoft released a new version of Windows you had to start over from scratch! <cough>Windows RT</cough> 🙂

The moral of this story (and it builds upon an earlier post about knowing your customer) is to know your customer's data and make sure legacy data works as well.  They may not like the idea of wiping out everything and starting over.


SharePoint 2010: Showing the Context menu on any field in a List View when defining lists

July 9, 2012

I am working on a project that requires me to show the context menu on columns other than the default "Title" field that normally shows up (at least when creating a list based on the "Item" template).  While doing research I came across two different ways to do this when using XML to define the lists.

The first just involves renaming the "Title" field to something else.  I was taking this approach as I was playing with Visual Studio 2012 RC's new wizards for creating lists and content types (which are great!!!).  While the wizard allows you to change "Title" to whatever you want, it never changed the text that shows up in the view header, only when adding/editing the item.  Close but no cigar.  Looking at the code that gets generated I saw:

<Field ID="{fa564e0f-0c70-4ab9-b863-0177e6ddd247}" Type="Text" Name="Title" DisplayName="Name" Required="TRUE" SourceID="http://schemas.microsoft.com/sharepoint/v3" StaticName="Title" MaxLength="255" />

This sets the field when adding/editing a new entry but doesn't change the header.  After much head banging and PowerShell scripting I figured out that I needed to add:

<Field ID="{82642ec8-ef9b-478f-acf9-31f7d45fbc31}" Type="Text" Name="LinkTitle" DisplayName="Name" />

as well.  This is the field that is typically named "Title (linked to item with edit menu)".  Note that the "DisplayName" in both lines has been changed to what I actually want to show.  So now the header will be "Name" and if I create a new view I will see "Name (linked to item with edit menu)".

That worked fine for lists that use the existing "Title" field, but what about lists that don't have it, or what if I needed to change the field that is used to display the menu depending on the view (which was actually the case)?  For this I turned to code…sort of.  I was looking through the object model and saw that I can use "SPField.ListItemMenu" to set the field that I wanted to use.  Taking a chance I added ListItemMenu="TRUE" to the field definition I wanted to use and it worked!

So there you go.  Two different ways to change where the context menu shows!


Conditional requiring of fields using SharePoint 2010 List Validation

February 28, 2012

My customer wanted to be able to make a field required only if another field had a specific value, AKA Conditional Requiring.   While SharePoint has the ability to make fields required (or even check for uniqueness) it still does not yet have the ability to make a field required based on another.  That is where List Validation comes in.

If you do not already know, List Validation (along with Column Validation) allows you to perform checks on fields to make sure they pass certain requirements.  These can range from simply checking that a field has a specific value to some rather complex (and cool) validations.  I forget who did it, but I saw a "poor man's" validation to make sure a human was entering a comment by requiring them to enter the first 3 letters of today's day.

Here is my setup.  I have two different fields, FieldA and FieldB (the field names have been changed to protect the innocent 🙂).  They are both choice fields, although I would think they could be just about any field type (certain fields, like Lookup, will not work here and do not even show up in the validation settings form).  FieldA is required while FieldB is not.  I also set up FieldB so it does not have any default value.

In my case I need to check two different things.  The first is that FieldA contains a specific value and, if it does (and only if it does), I need to make sure that FieldB has a selected value.  The first part is simple:

=[FieldA]="ShareBlog"

This just states that if FieldA has the value of "ShareBlog" then the validation passes, otherwise it fails.  If you want to output text, rather than just check to see if the validation passes, you can use the IF function.  The above formula can be rewritten as:

=IF([FieldA]="ShareBlog","Pass","Fail")

This also checks to see if FieldA has the value of "ShareBlog", but in this case it will output "Pass" if it does or "Fail" if it doesn't.  Pretty close to what I need but not quite there.  I could also use:

=IF([FieldA]="ShareBlog","True","False")

but this will just output "True" or "False" (and the validation will actually still fail).  Finally I realized I can take the quotes off of "True" and "False" and just use the Boolean values.  Now I will get my validation to pass if FieldA has the value of "ShareBlog" again.  Almost there.

Now I just need to check to make sure that FieldB has a value.   That is simple enough:

=IF([FieldB]<>"",True,False)

What I need to do now is to embed the two IF statements together to create the new one:

=IF([FieldA]="ShareBlog",IF([FieldB]<>"",True,False),True)

Sharp-eyed readers may have noticed that for the outer IF statement I am actually returning TRUE whether or not FieldA has the value of "ShareBlog".  That is not quite true.  If FieldA DOES contain "ShareBlog" then I want to make sure that FieldB has a value, so I go into the second IF statement, which will return TRUE if FieldB has a value or FALSE if it doesn't.  If FieldA DOESN'T contain "ShareBlog" then I don't care what value FieldB has, so I will always return TRUE.

That should do it.  You can throw some AND functions in there as needed if you need to check more fields.
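
For example, if there were a third field (call it FieldC, purely for illustration) that also had to be filled in whenever FieldA is "ShareBlog", the inner check could be wrapped in an AND so that both fields need a value:

=IF([FieldA]="ShareBlog",AND([FieldB]<>"",[FieldC]<>""),True)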


Creating a Core Results web part for FAST for SharePoint

March 30, 2011

There are a few postings out there that explain how to subclass the core results web part for SharePoint to add your own functionality.  Those work great…until you try to use them with FAST for SharePoint.  For some reason they just do not seem to work.   After much pounding of head against desk and then using .Net Reflector I figured out that there are two variables that need to be set.

base.ShowActionLinks = false;   //This tells the code it is a results web part and not an action links web part

base.bForceOnInit = false;   //Not quite sure why this is needed, MS has it marked as internal use only but it was the only way I could get the web part to work

 

Hope this helps others, and if anyone can explain the "bForceOnInit" variable I would appreciate it.

 


Know thy customer

July 31, 2010

I was reading an excellent article by David S. Platt called "Using WPF for Good and Not Evil" in which he takes a sample application written to show off Windows Presentation Foundation (WPF) and explains what he feels are the good and bad points of the program.  One thing I read that really resonated with me was "Platt's First, Last, and Only Law of User Experience Design states, 'KNOW THY USER, FOR HE IS NOT THEE.'"  He wrote this mainly about software developers, but it works for consultants as well, although I would slightly rewrite it as "KNOW THY CUSTOMER, FOR HE (or SHE) IS NOT THEE".

 I have been guilty of this in the past myself.  Most of my early contracts were with the United States military (mainly the USAF) and I was fairly successful (at least I think so).  However when I switched to working with commercial customers I treated them the same way and failed miserably.  It took me a while to get it through my thick skull that commercial customers are not the same as military customers and need to be treated differently.  For that matter, each customer is not like the last customer and needs to be treated differently.  Once I realized that and practiced it I found my customers were much happier with me.

So developers and consultants alike, get to know your users/customers and how they would likely do things.  I am not saying that nothing needs to change, but with a better understanding of how things are currently done and why, you will be able to better present and defend yourself when you suggest changes (yes, even though you are hired as the expert, you will need to defend why you want things done a certain way).