ShareBlog

August 16, 2017

Azure Event Grid in Preview now

Filed under: Uncategorized @ 4:50 pm

Azure Event Grid was just released into preview today.  It enables Azure applications to become more reactive: rather than polling to see if something has been done, Azure Event Grid can notify your program when an event occurs.  There is a great tutorial on how to set up a logic app to respond to Azure VM changes; go to the link at the bottom of this post to find it.

Right now there is limited support for publishers and handlers, but the goal is to eventually allow any Azure app to send or receive events.  As stated in the announcement page: “We are working to deliver many more event sources and destinations later this year, including Azure Active Directory, API Management, IoT Hub, Service Bus, Azure Data Lake Store, Azure Cosmos DB, Azure Data Factory, and Storage Queues.”  Also notice that one of the handlers is webhooks, meaning that your applications can respond to these events today!  While Azure Event Grid is in preview, the first 100,000 operations per month are FREE, and after that it is just $0.30 per million operations.
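
Since webhooks work today, here is a minimal sketch of what a receiving endpoint could look like.  This is my own illustration, not from the announcement; it assumes ASP.NET Core with Newtonsoft.Json, and the validation event name and fields are the ones documented for the preview.

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;

[Route("api/[controller]")]
public class EventGridController : Controller
{
    [HttpPost]
    public async Task<IActionResult> Post()
    {
        string body;
        using (var reader = new StreamReader(Request.Body))
        {
            body = await reader.ReadToEndAsync();
        }

        // Event Grid delivers events as a JSON array.
        foreach (var evt in JArray.Parse(body))
        {
            // The first delivery is a validation event; echo the code back to prove you own the endpoint.
            if ((string)evt["eventType"] == "Microsoft.EventGrid.SubscriptionValidationEvent")
            {
                return Ok(new { validationResponse = (string)evt["data"]["validationCode"] });
            }

            // React to the actual event here.
        }

        return Ok();
    }
}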

Take a look at the Azure Event Grid announcement page for more information and an introduction video.

August 13, 2017

64-bit Visual Studio Code is now available!

Filed under: Uncategorized @ 3:45 pm

The July 2017 update for Visual Studio Code is now available, and one of the best new features, in my opinion, is the new 64-bit version of VS Code!  You are using VS Code, right?  If not, why not?  It is a great free editor.  Granted, it does not have all the features of Visual Studio 2017, but did I mention it is free?  I use it for all of my Angular development projects, and if I need to look at some JSON value that was returned, it is a lot faster to bring this up than the full-blown Visual Studio.  Don’t get me wrong, I still use Visual Studio a lot, especially for my C# development!  But Visual Studio is a full-blown Integrated Development Environment (IDE) while VS Code is just an editor (although with all the add-ins coming out it is becoming more and more like an IDE).

Just like everything else in the world, it is about the right tool for the right job.  As I said, I find VS Code works best for my Angular development needs and Visual Studio works best for all my .Net development.

In any case, head out to the July 2017 update announcement page to see what is new and improved in VS Code and if you have not tried it, give it a shot.

August 7, 2017

CAP Theorem

Filed under: Uncategorized @ 6:06 pm

In continuing my discussion of cloud design patterns, this post talks about the CAP theorem (also known as Brewer’s theorem after the computer scientist, Eric Brewer, who coined it), which is about distributed systems. While this is not strictly a cloud design pattern, it is very unlikely that any system designed for the cloud can avoid this theorem.

What the theorem states is that a distributed system has three properties: Consistency, Availability, and Partition tolerance.  Out of the three, you can only guarantee two (much like the consultant’s joke “On-time, on-budget, working: choose any two”). However, it then goes on to say that since no system is safe from network failures, you have to choose Partition tolerance as one of your two, so this leaves you choosing between Availability and Consistency.

Now this does not mean that you always have to choose between Availability and Consistency.  It only means that if there is a network failure, you need to choose.  Keep in mind that there is no right or wrong answer; it really depends on the system.  Can you live with being able to access the system and have the data updated at some future point (also known as eventual consistency), or do you have to know that the data you enter will be updated immediately?

Here are some real-world examples to help you understand this a bit better.  When you access Facebook to enter a new update you almost always get access, but your post may take a little while to show up.  This is because Facebook chose Availability over Consistency in this case (in all honesty I am not 100% sure this is the case, but it fits the profile).  You know you have accessed Facebook because it accepted your post, and you know that your post will show up at some point in time, usually in a few seconds.

Now, on the other hand, think about purchasing stock online.  If you purchase 100 shares of Microsoft at X dollars you need to KNOW that the transaction went through completely.  You can imagine what would happen if various people stated that they bought the stock but only one person’s transaction was actually recorded.  This is choosing Consistency over Availability (granted, you also need to be able to access your stock portfolio all the time, but work with me here).

So which do you choose when?  There is no hard and fast rule; it will depend on the system and the data it is working with.  My personal opinion is that you will see more and more systems choosing Availability. Think about it this way: are you more annoyed when your Facebook posts show up a minute later or when you cannot access Facebook at all?

When I was first presented with CAP I chose Consistency over Availability, but as I read more about it and thought it over, I now usually choose Availability over Consistency and rely on eventual consistency.  Actually, when I was first presented with CAP it was during an interview, and that is NOT when you want to first hear about it 🙂  I am actually embarrassed that I had not heard of it before, but live and learn!

Hope this helps someone avoid the same embarrassment that I ran into learning about CAP the hard way.


July 22, 2017

Cloud patterns in plain English

Filed under: Uncategorized @ 3:00 pm

I was having a discussion with some people after a geeky meetup the other night and we started talking about design patterns for the cloud. A couple of people mentioned that they were a bit confused by some of them due to not understanding the concept behind the pattern. This post is to help alleviate that confusion. It lists the pattern, the definition from the Microsoft Cloud Design Patterns site (https://docs.microsoft.com/en-us/azure/architecture/patterns/), and then, hopefully, a real-world example of what the pattern means (some of which will be better than others).


Ambassador

Create helper services that send network requests on behalf of a consumer service or application.

Back in the days when communication across the oceans was a long process, ambassadors spoke for the nation. They were not the entire nation, but they represented the nation and acted on the nation’s behalf.

Anti-Corruption Layer

Implement a façade or adapter layer between a modern application and a legacy system.

When I am visiting my niece and her kids I need my niece to interpret what her kids are saying due to their use of slang I don’t understand (funny thing is they say the same thing). My niece is acting as the “Anti-Corruption Layer” by translating what is being said.

Backends for Frontends

Create separate backend services to be consumed by specific frontend applications or interfaces.

If you go to the grocery store there are multiple check-out lanes. Each cashier can be considered a backend for the frontend (the check-out line). Taken a bit further, the self-checkout lane could be considered one backend while the lanes with cashiers could be considered a separate one.

Bulkhead

Isolate elements of an application into pools so that if one fails, the others will continue to function.

You ever watch movies of submarines or ships that get hit by a torpedo or hit a rock? There are people closing those big doors and spinning a wheel to lock them so that the water in one room does not continue into the rest of the ship. Those are bulkheads.

Cache-Aside

Load data on demand into a cache from a data store.

If you have an e-book reader with a small amount of memory, you can store a limited number of books on the reader; the rest you will need to download from the store to read. In this case the reader is the cache: you can get to the books on it quickly rather than having to download them from the store.
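
For those who prefer code to analogies, here is a rough sketch of the idea (my own illustration, not from the patterns site): check the cache first, and only on a miss load from the backing store and populate the cache.

using System.Collections.Generic;

public class BookReader
{
    private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();

    public string GetBook(string title)
    {
        // Cache hit: the book is already on the reader.
        if (_cache.TryGetValue(title, out var contents))
            return contents;

        // Cache miss: "download from the store" and keep a local copy for next time.
        contents = DownloadFromStore(title);
        _cache[title] = contents;
        return contents;
    }

    // Stand-in for the real, slower data store.
    private string DownloadFromStore(string title) => "Contents of " + title;
}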

Circuit Breaker

Handle faults that might take a variable amount of time to fix when connecting to a remote service or resource.

The electric circuit breaker in your house (hence the name). It is designed to flip, which stops the flow of electricity, if too much current passes through it.

CQRS

Segregate operations that read data from operations that update data by using separate interfaces.

You can compare this to the lines at a movie theater. There is one line where you have to select the movie and purchase your ticket (which is comparable to the update interface) and another line to print out the ticket you bought online (which is comparable to the read interface).

Compensating Transaction

Undo the work performed by a series of steps, which together define an eventually consistent operation.

If you are going on vacation there are a lot of steps to take: find a good spot, book the hotel, buy a new bathing suit, book a place to board the dog, get time off from work (yes, you should probably do that first, but bear with me). You then find out that your boss has rejected your time-off request. So now you need to undo all the steps: cancel the reservation for dog boarding, return the bathing suit, cancel the hotel, etc.

Competing Consumers

Enable multiple concurrent consumers to process messages received on the same messaging channel.

Have you seen pictures of the old phone switchboards? There would be many operators answering the calls and connecting them to the right place. That way there would not be a lot of people waiting for one person to connect them.

Compute Resource Consolidation

Consolidate multiple tasks or operations into a single computational unit.

Rather than making individual trips in the car to go grocery shopping, pick up some flowers, buy a new bathing suit, and go to the doctor’s, you combine them all into one trip.

Event Sourcing

Use an append-only store to record the full series of events that describe actions taken on data in a domain.

A recipe doesn’t just tell you what the final product should look and taste like. No, it tells you each step you take to get to that final product. You do step 1 and then step 2, step 3, and so on to get to the fried chicken.

External Configuration Store

Move configuration information out of the application deployment package to a centralized location.

This is much like having devices get their configuration from the internet rather than having to set them up yourself. Any time you plug in a USB device it can go out to the internet to get the information it needs to set itself up rather than having you do it. Do you remember having to install drivers from a floppy disk?

Federated Identity

Delegate authentication to an external identity provider.

Your driver’s license. People trust that the issuing state has done all the needed checks and that you are who you say you are.

Gatekeeper

Protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service, validates and sanitizes requests, and passes requests and data between them.

A door that goes into a room where there is another door to leave, but it requires a different key to open the second door.

Gateway Aggregation

Use a gateway to aggregate multiple individual requests into a single request.

Say you have a friend serving in the military and around Christmas time everyone wants to send them a present. Rather than having everyone send a present separately, you put all the presents in one big box and send just that one. When the box arrives, the soldier unpacks the individual presents.

Gateway Offloading

Offload shared or specialized service functionality to a gateway proxy.

In ancient times everyone was a farmer, hunter, builder, etc. Then people started to specialize. You could go to the farm down the road and trade some extra meat you hunted for vegetables. Offloading the need for everyone to be able to do everything allowed for specialization.

Gateway Routing

Route requests to multiple services using a single endpoint.

Think of this as a mall. There are multiple stores inside a mall, but you only need to go to one place to get to all of them.

Health Endpoint Monitoring

Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals.

For me, this is like calling your aging parents frequently to make sure they are feeling alright.

Index Table

Create indexes over the fields in data stores that are frequently referenced by queries.

If you have a lot of songs on your iPod you want to be able to find them in different ways. For example, maybe you want them sorted alphabetically, by genre, by artist, etc. Each way you sort them is a different index, so you can easily find the songs in different ways.

Leader Election

Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances.

Democracy. We elect a person to act as our leader whether it be mayor, governor, or president.

Materialized View

Generate prepopulated views over the data in one or more data stores when the data isn’t ideally formatted for required query operations.

A newspaper would be a good example. Rather than going out and grabbing each story that we want to see, the newspaper presents them in one place, formatted to make them easier to read.

Pipes and Filters

Break down a task that performs complex processing into a series of separate elements that can be reused.

Growing up, each kid had their own set of chores to do in order to keep the house clean. If you say that keeping the house clean is the complex task, it can be broken down into the individual chores that can be done separately and at different times (or when your mother screamed at you to do them), with some occurring more often than others (you do the dishes every night but vacuum maybe every third day or so).

Priority Queue

Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority.

You ever pay for the VIP line at an event? You get in faster than the other people in the regular line.

Queue-Based Load Leveling

Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads.

This can be thought of as a to-do list that you work on from top to bottom. You add new tasks to the bottom of the list and take them off (cross them out) at the top. So rather than stressing about everything that needs to get done, you can put the tasks on the list (queue) and handle them one at a time.

Retry

Enable an application to handle anticipated, temporary failures when it tries to connect to a service or network resource by transparently retrying an operation that’s previously failed.

You call someone and it doesn’t go through (either a busy signal or you go to voicemail) so you call again.
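
In code, a rough sketch of the same idea (my own illustration, not tied to any particular library) might look like this: try the call and, if it fails with a presumably transient error, wait a moment and “call again”, giving up after a few attempts.

using System;
using System.Threading;

public static class Retry
{
    public static T Execute<T>(Func<T> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation(); // place the "phone call"
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                Thread.Sleep(1000); // busy signal: wait a moment, then redial
            }
        }
    }
}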

Scheduler Agent Supervisor

Coordinate a set of actions across a distributed set of services and other remote resources.

The vacation example that I used for the “Compensating Transaction” works here as well, with the difference being that Mom acts as the supervisor and the kids are the agents that actually perform the tasks. Mom assigns the tasks to the kids and keeps track of who has done what. If one of the kids fails at the assigned task (for example, cannot find a good hotel) then Mom will ask them to try again and will determine when it is time to give up and cancel everything.

Sharding

Divide a data store into a set of horizontal partitions or shards.

If you have a large number of DVDs (or songs on your iPod) you can break them up alphabetically, by genre, by star, or in other ways so that instead of one huge collection you have several smaller collections, making them easier to manage.

Sidecar

Deploy components of an application into a separate process or container to provide isolation and encapsulation.

This is also called the sidekick pattern, and I like that name better since it makes the pattern easier to understand. A superhero’s sidekick (think of Batman’s Robin) is there to support the superhero (Batman) and provide additional features (more Batarangs or a diversion).

Static Content Hosting

Deploy static content to a cloud-based storage service that can deliver them directly to the client.

Most restaurants don’t print out their menus for each person. Rather, since the menus do not change much, they can print them out beforehand and have them available (static content). If there are daily updates, those can be written on a chalkboard (dynamic content).

Strangler

Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services.

Imagine you buy an old house and want to fix it up. You may update the plumbing first, the electrical sometime later, followed by the floors a bit after that, and so on.

Throttling

Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service.

Every parent has told their child who was learning to drive, “You are going too fast”. The parent is throttling the speed of the car: controlling how fast it can go.

Valet Key

Use a token or key that provides clients with restricted direct access to a specific resource or service.

The valet key that comes with some cars. It can open the doors and start the car but not open the trunk.

July 18, 2017

Create Azure Functions with Visual Studio

Filed under: Uncategorized @ 6:26 pm

Update:

Visual Studio 2017 Version 15.3 was released yesterday and now includes support for Azure Functions.  Take a look at what else is included as well as instructions on how to download the update.

Introduction

If you are not aware of Azure Functions, you really need to be.  They are Azure’s “serverless”, event-driven computing platform, and functions can be written in many different languages including C#, TypeScript/JavaScript, PHP, Bash, Batch, F#, PowerShell, and Python.  That should be enough so that most people can write a function in a language they are familiar with.

As I mentioned, Azure Functions are based on events.  These events include items like calling the function via HTTP, having a message entered into a queue (very useful for long processes), adding a blob to Blob storage, adding an event to Event Hub, having a timer kick off, and many, many more, with the number growing all the time.  As of July 18, 2017, there were 27 events listed for C#, with most duplicated for the other languages as well.
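
As a quick illustration (my own sketch, not from the official docs), a queue-triggered function looks much like the HTTP-triggered one shown later in this post; the queue name “orders” here is made up:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class ProcessOrder
{
    // Fires each time a new message lands in the "orders" storage queue.
    [FunctionName("ProcessOrder")]
    public static void Run([QueueTrigger("orders")] string message, TraceWriter log)
    {
        log.Info("Processing queue message: " + message);
    }
}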

While you can create functions directly in the portal, there are some limitations as to what you can do.  Chief among those are a lack of IntelliSense and the ability to debug your code.  Granted, Azure Functions are meant to be small pieces of code that run quickly, but even so you can write some pretty complicated code in a small number of lines, making a debugger a necessity.  And really, who doesn’t like IntelliSense?

Now there is one caveat with creating Azure Functions in Visual Studio: you need to install the Visual Studio 2017 preview version (at least as of July 18, 2017, when I wrote this).  You can install the necessary tools into the release version of Visual Studio 2017, but you will not see Azure Functions listed as a new project type.

Download VS 2017 pre-release

Go to https://www.visualstudio.com/vs/preview/ to download the prerelease version.  The nice thing is, since it is a prerelease version, you can download any edition you want, so you can get hooked on all the wonderful functions and features that Visual Studio 2017 Enterprise has.  If you have not used Visual Studio 2017 before, you will be asked which modules to install.  You will need at least the Azure Development module.

Once you have installed the preview, go to Tools -> Extensions and Updates.  In the new window click on Online and in the search box enter “Azure Function”.  You should see Azure Function Tools for Visual Studio 2017 as the first entry.  Select it and follow the prompts to install.  Most likely you will need to close your Visual Studio instance for it to install.  You may also want to install any updates listed at this time.

Create your function

Now you are ready to create your first function.  Restart Visual Studio 2017 Preview and create a new project.  You can either do this by going to File -> New -> Project or by selecting Create new project… on the startup screen.  Either method will open the New Project window.  From there select Visual C# and under that select Cloud.  A listing of possible project types will show up on the right-hand side of the window.  Select Azure Functions and give it a meaningful name in the Name box.  In the image below I have called mine “ShareBlogFunctions”.  You can also change the location where the files will be stored if you wish.  Click OK to create the new project.

Notice that there are very few files created automatically.  There are some minimal dependencies included for you, the host.json file which is empty at this point, and the local.settings.json file which just contains some empty keys.  There is not even a function included.   That actually makes sense since you did not choose what type of function you wanted created.

To add a function, right click on the project name and select Add -> New Item…  A new window will open and one of the first items will be Azure Function.  Select it and give it a meaningful name.  In the image below I called mine ShareBlogHttp. Click Add to continue.

A new window will open where you select which type of trigger you wish to use.  In this example I’ll be using the HttpTrigger, which will cause the function to fire when you access it via HTTP, as with a browser.  You may notice that there are only 16 entries in this list while there were 27 when creating a function via the portal.  I can only guess that this is due to the tooling being in preview and more items will be added as time goes on.  In any case, these 16 should be enough to get you started.  Once you select the trigger type, some new controls will show on the right side of the screen where you can select the AccessRights and the name of the function.

AccessRights determines whether the user needs to pass in an API key in order to use this function and, if so, what kind.  The table below shows the various values and what key is needed, if any.  The function and master keys are found in the Keys management panel when the function is selected in the Azure portal, so you will need to publish this application once before you can find these keys.

Value       Key Required
---------   ------------
Admin       Master Key
Anonymous   None
Function    Function Key

In the image shown below I have selected Anonymous so that anyone could use it and I have called my function ShareBlogHttpTrigger. Note that this will be the name that shows up in the Azure portal so you want to make sure it is meaningful.  Click Create to create the new function.

The listing below shows the default code that gets produced.  If it looks familiar, it is the exact same code that gets produced when creating a HTTP trigger function in C# using the portal.

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

namespace ShareBlogFunctions
{
    public static class ShareBlogHttp
    {
        [FunctionName("ShareBlogHttpTrigger")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            // parse query parameter
            string name = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
                .Value;

            // Get request body
            dynamic data = await req.Content.ReadAsAsync<object>();

            // Set name to query string or body data
            name = name ?? data?.name;

            return name == null
                ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
                : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
        }
    }
}

All this is doing is taking the value of the name parameter and creating a response that has “Hello” and the name parameter in it.

Running your function

Running the code is a little more involved.  In order to test and run your code locally you need to have the Azure Functions CLI installed.  If you press F5 to run your code and you do not have it installed, you will get prompted to install it.  Easy enough.

Once it is installed, press F5 to test your code.  The code will compile as normal and then the Azure Functions CLI will open a new window that will be running your code.  At the bottom of the text is the actual URL you need to access to kick off your function.  In this case, it is http://localhost:7071/api/ShareBlogHttpTrigger (don’t forget to append ?name= and your test string at the end).  As a sidebar, if you do not like command line interfaces you are going to hate Azure.  More and more tasks are being done in either the Azure CLI or PowerShell.  Don’t get me wrong, the portal is not going away (probably ever) but to get things done quickly and efficiently, a command line interface is the way to go.

If you access the URL using a browser you will get a screen like the one shown below.  In this case I passed in “ShareBlog” into the name parameter.

You can also place breakpoints and add watches just like you would any other C# project.  Go ahead and play with it and when ready you can deploy it.

Deploy to Azure

Once you are satisfied with your code it is time to publish it to Azure so it can be used.  This is very simple and if you have published any other project into Azure you will find the steps basically the same.

Right click on your project’s name and select Publish.  This will open the publish window where there are two options: Azure Function App or Folder.  Notice that under Azure Function App you also have the option of creating a new app or using an existing one.  We will be creating a new Azure Function App.  Select that and click Publish to continue.

On the next screen you will need to enter the values for the Azure Function App.  You can fill in the needed values as you wish.  There is nothing out of the ordinary in regards to the fields, so I won’t go through an explanation of each one.  I went through and created a new Resource Group, App Service Plan, and Storage Account for this Azure Function App, but that is not a requirement.  Click Create to continue.  Note that this is just creating the publishing profile; it is not yet publishing the application.

Once everything has been created in the background you can now publish to Azure.  A new screen will appear that shows your publishing information.  Click Publish to continue.  Notice the Site URL link listed on this page.  You will need this to access your function later.

Once the function has been published (which should not take long) you can test it out.  The URL will look like:

http://<functionappname>.azurewebsites.net/api/<functionname>?name=<test string>

So for this function it would be

http://shareblogfunctionapp.azurewebsites.net/api/sharebloghttptrigger?name=shareblogtest

The part before /api/ you get from the publishing screen.  Then append /api/.  The trigger name is the function name you gave your code when you created it.  If you cannot remember what it is, look at your code and it will be listed as the FunctionName.

If everything goes correctly you should see a screen like the one below:

A couple of last-minute items that I have found so far (which will hopefully be resolved soon):

  1. I cannot test a function created in Visual Studio in the portal.  I get an error about not being able to parse the path.
  2. You will not see the code in the portal.  Actually, I don’t think this is a bug; I just think it is the way it works.

June 30, 2017

Azure Management Library updated with ability to manage more resources

Filed under: Azure Management Library @ 10:33 am

The Azure Management Library version 1.1 was just released and includes the ability to manage CosmosDB (formerly known as DocumentDB), Azure Container Service and Registry, and Active Directory Graph.  Go to the official announcement for more information and sample code.

June 23, 2017

Handling RBAC using Azure Management Libraries

Filed under: Azure Management Library @ 6:54 pm

If you have been using Azure for any length of time, you know Azure has a concept of Role-Based Access Control (RBAC) which allows you to fine-tune who can access which resources (read Get Started with Role-Based Access Control in the Azure portal for more information). This is done by granting various roles to the resources and then adding users and Active Directory (AD) groups (preferably groups) to those roles.  By doing this you can easily maintain who can do what in Azure.

The Azure portal itself allows you to do this easily, but what about using code?  You can do this using the Azure Management Libraries.

As a reminder, the main unit in Azure is a tenant which could have one or more subscriptions inside of it.  Each subscription can have one or more resource groups and each resource group can have one or more resources.  Roles and AD groups live at the tenant level so they apply to all the subscriptions, resource groups, and resources.

This demo application is a .Net Core 1.1 console application.  Additional packages need to be added as shown below:

Install-Package Microsoft.Azure.Common -Version 2.1.4    
Install-Package Microsoft.Azure.Management.Fluent 
Install-Package Microsoft.Azure.Management.Graph.RBAC.Fluent
Install-Package Microsoft.Extensions.Configuration.Json -Version 1.1.2     
Install-Package Microsoft.NETCore.Portable.Compatibility -Version 1.0.2
Install-Package Microsoft.Extensions.Configuration -Version 1.1.2
Install-Package Microsoft.Azure.Management.Authorization -Version 2.5.0-preview -Pre

Notice that the “Microsoft.Azure.Management.Authorization” package is a preview version.  This is the only version that works with .Net Core 1.1, and as such some of the functionality may change later.

Rather than reading the information needed to get the Service Principal from a file, as was done in previous blog entries, it is stored in a settings file.  Refer to my previous blog post on Azure Management Libraries to see how to get this information.

{
  "AMLSettings": {
    "client": "<client ID>",
    "secret": "<secret key>",
    "tenant": "<tenant ID>"
  }
}

It is fairly straightforward to read this information.

var builder = new ConfigurationBuilder()
   .SetBasePath(Directory.GetCurrentDirectory())
   .AddJsonFile("appsettings.json");

Configuration = builder.Build();

Use the information read from the settings file to get the Service Principal information that is being used for the AML program and the login credentials from that Service Principal.  These credentials can then be used to create the AML variable, which will be used later.  Note that, in this case, the variable is only used to create a Resource Group and get its name for later use.  The actual setting of the RBAC is done a different way.

ServicePrincipalLoginInformation loginInfo = new ServicePrincipalLoginInformation()
{
   ClientId = Configuration["AMLSettings:client"],
   ClientSecret = Configuration["AMLSettings:secret"]
};
var credentials = new AzureCredentials(loginInfo, Configuration["AMLSettings:tenant"], AzureEnvironment.AzureGlobalCloud);
var authenticated = Azure
   .Configure()
   .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
   .Authenticate(credentials);
var azure = authenticated.WithDefaultSubscription();

Since there is a need to access tenant-level information in order to work with the Azure AD groups, creation of the needed variables is separated rather than combined into one command as was done in my previous blog posts.  The “authenticated” variable will allow access to the tenant-level data, while the “azure” variable will work at the subscription level to create the other resources.  There is also the “credentials” variable that will be used later when assigning the roles to the resource.

Create a demo resource group to which we can add the role assignment.

string rgName = "amlrbacresourcegroup";     //Demo Resource group Name

var resourceGroup = azure.ResourceGroups.Define(rgName)
   .WithRegion(Region.USEast)
   .Create();

Next, create the Azure AD group that will be added to the role.  If the AD group is already created then this step can be skipped.  Note that without the check it is possible to create multiple AD groups with the same name.

string grpName = "amlrbacgroup";     // Demo AD group name (any name will do)

var adGroup = authenticated.ActiveDirectoryGroups.GetByName(grpName);

if (null == adGroup)
{
   GroupCreateParametersInner parameters = new GroupCreateParametersInner()
   {
      DisplayName = grpName,
      MailNickname = grpName,
   };
   var createdGroup = authenticated.ActiveDirectoryGroups.Inner.Create(parameters);

   System.Threading.Thread.Sleep(20000);

   adGroup = authenticated.ActiveDirectoryGroups.GetByName(grpName);
}

This is fairly straightforward.  Set up the parameters used to create the AD group (in this case only the “DisplayName” and “MailNickname” values, both of which are required) and then create it.  If the group is going to be used immediately, it needs time to propagate through the Azure AD system, hence the 20-second sleep.

Next, get the role to use.  Even though roles are tenant level entities, they need to be selected at the same subscription level as the resource where they will be added.

var role = authenticated.RoleDefinitions.GetByScopeAndRoleName($"/subscriptions/{azure.SubscriptionId}", "Contributor");

The “AuthorizationManagementClient” handles the actual assignment of the role to the resource.  Create an instance of it, passing in the credentials from the Service Principal, and set the proper subscription ID.

var authorizationManagementClient = new Microsoft.Azure.Management.Authorization.AuthorizationManagementClient(credentials)
{
   SubscriptionId = azure.SubscriptionId
};

Set up the parameters needed to assign the group to the role in the resource group.  The “PrincipalId” is what is being set, which, in this case, is the group that was selected earlier, but it could be an individual user account as well.  The “RoleDefinitionId” is the ID of the role to which the group will be added.

var createParameters = new Microsoft.Azure.Management.Authorization.Models.RoleAssignmentCreateParameters();
createParameters.Properties = new RoleAssignmentProperties()
{
   PrincipalId = adGroup.Id,
   RoleDefinitionId = role.Id,
};

Finally, create the role assignment using the variables that have been set up.  The first parameter is the scope of the resource being used, which is typically the resource’s Id.  The second parameter is a new GUID, and the last parameter is the set of parameters from the previous step.

var result = authorizationManagementClient.RoleAssignments.Create(
   resourceGroup.Id,
   Guid.NewGuid().ToString(),
   createParameters.Properties
);

That is all there is to it.  The full source code can be downloaded from GitHub.

June 11, 2017

Accessing .Net Core APIs from Angular2 using Azure AD

Filed under: Angular2 @ 10:17 pm

Introduction

I am currently writing a demo application for the Azure Management Library and wanted to make it secure (after all, the security file could grant the same rights as an administrator).  I also wanted to make it a Single Page Application (SPA) since 1) I have been doing a lot of work with Angular2 lately and 2) I really think it is the wave of the future (no offense to MVC), as can be seen by the number of packages like Angular that are out there (with more showing up every day).

So I did my research (AKA used a search engine) to figure out the best way to do this, but I could not find an example of anyone doing it before.  I guarantee that someone has, but I guess they didn’t write about it. There are lots of posts about accessing .Net Core APIs from Angular2 using .Net Core’s own authentication engine, and others discuss authenticating via Azure AD, including the excellent Using ADAL with Angular2 by Vishal Saroopchand, which covers the adal-angular package and whose code I relied on a lot (AKA copied), but they stop there. Since this application will be accessing Azure directly, it makes sense to authenticate using Azure AD.

I started with the Angular2 quick start seed project that can be downloaded here from GitHub.  It provides a very good starting point and I have used it quite a bit.  You could also use the Command Line Interface (CLI), found here, which produces a slightly different project layout, but it should still work.  Either one will provide a good starting point.  Please note that I will assume you are already familiar with using Angular2 and .Net Core, so I will not be going into step-by-step details on what to do.

Initial Setup

Next there are a few packages that need to be installed.  You will need one to perform the Azure AD authentication along with a supporting package, and one to make the secure calls. Granted, you can write the code to do both yourself if so inclined, and there are probably others that will work as well, but I will be using these packages for this example.

npm install adal-angular --save
npm install expose-loader --save
npm install @types/adal --save-dev
npm install angular2-jwt --save

The first package handles the Azure AD authentication (ADAL stands for Active Directory Authentication Library), the second package is used to expose adal-angular globally (see below), the third installs the needed types, and the last package makes the authenticated calls using JWT (JSON Web Tokens), which is a way to pass the credentials of the user in a secure fashion.

If you want to use the code that Vishal already wrote for his blog post, you will just need to add the last package, add the settings to the config.service.ts file as shown below, modify the oauth-callback.guard.ts file as noted below (add the line to save the token), and head to the “Authorization” section.

Register your application with Azure AD

Follow the instructions in this site to register your application with Azure AD.

https://docs.microsoft.com/en-us/azure/active-directory/active-directory-app-registration

Once you have created the application in Azure, copy the “Application ID” for use below.

Authentication Services

Following Vishal’s post, create a “services” folder to store all the authentication related services.

Create a new file called config.service.ts to store the authentication login information.  If you do not already know it, you can get the tenant ID from the “domains” link in Azure AD.  It will most likely be something like <name>.onmicrosoft.com.

import { Injectable } from '@angular/core';

@Injectable()
export class ConfigService {
    constructor() {
    }
    public get getAdalConfig(): any {
        return {
            tenant: '<your tenant ID>',
            clientId: '<application ID from previous step>',
            redirectUri: window.location.origin + '/',
            postLogoutRedirectUri: window.location.origin + '/'
        };
    }
}

Next create adal.service.ts which we will use to wrap the calls to the adal-angular commands.

import { ConfigService } from './config.service';
import { Injectable } from '@angular/core';
import 'expose-loader?AuthenticationContext!../../../node_modules/adal-angular/lib/adal.js';
let createAuthContextFn: adal.AuthenticationContextStatic = AuthenticationContext;

@Injectable()
export class AdalService {

    private context: adal.AuthenticationContext;
    constructor(private configService: ConfigService) {
        this.context = new createAuthContextFn(configService.getAdalConfig);
    }

    login() {
        this.context.login();
    }

    logout() {
        this.context.logOut();
    }

    handleCallback() {
        this.context.handleWindowCallback();
    }

    public get userInfo() {
        return this.context.getCachedUser();
    }

    public get accessToken() {
        return this.context.getCachedToken(this.configService.getAdalConfig.clientId);
    }

    public get isAuthenticated() {
        return this.userInfo && this.accessToken;
    }
}

Note the use of

import 'expose-loader?AuthenticationContext!../../../node_modules/adal-angular/lib/adal.js'

This is used so that we can access the commands in adal-angular as ADAL doesn’t work with the CommonJS pattern.

Next we need to create the Angular2 guard to determine if the user has logged in or not.  Create authenticated.guard.ts

import { Observable } from 'rxjs/Observable';
import { Injectable } from '@angular/core';
import { Router, CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot, NavigationExtras } from '@angular/router';
import { AdalService } from './../services/adal.service';

@Injectable()
export class AuthenticationGuard implements CanActivate {
    constructor(private router: Router, private adalService: AdalService) {
    }

    canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> | Promise<boolean> | boolean {

        let navigationExtras: NavigationExtras = {
            queryParams: { 'redirectUrl': route.url }
        };

        if (!this.adalService.userInfo) {
            this.router.navigate(['login'], navigationExtras);
            return false;
        }
        return true;
    }
}

There is one caveat with using Azure AD authentication.  It passes back the information in a URL which uses the older hash style rather than the newer HTML5 method.  Because of this, there needs to be a route that handles parsing the token and saving the information, and you need to make sure your application uses the older hash style as well.

I followed Vishal’s lead and created a “login-callback” folder and added oauth-callback.component.ts to handle the redirection when the user is logged in.

import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';

import { AdalService } from './../services/adal.service';

@Component({
    template: '<div>Please wait...</div>'
})
export class OAuthCallbackComponent implements OnInit {
    constructor(private router: Router, private adalService: AdalService) {

    }

    ngOnInit() {
        if (!this.adalService.userInfo) {
            this.router.navigate(['login']);
        } else {
            this.router.navigate(['home']);
        }
    }
}

Then create a separate guard that will be used just for this route, called oauth-callback.guard.ts.

import { Routes } from '@angular/router';
import { Injectable } from '@angular/core';
import { Router, CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router';

import { AdalService } from './../services/adal.service';

@Injectable()
export class OAuthCallbackHandler implements CanActivate {
    constructor(private router: Router, private adalService: AdalService) {
    }

    canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): boolean {

        this.adalService.handleCallback();

        if (this.adalService.userInfo) {
            localStorage.setItem("token", this.adalService.accessToken);
            var returnUrl = route.queryParams['redirectUrl'];
            if (!returnUrl) {
                this.router.navigate(['home']);
            } else {
                this.router.navigate([returnUrl], { queryParams: route.queryParams });
            }
        }
        else {
            this.router.navigate(['login']);
        }

        return false;
    }
}

Notice the localStorage.setItem call, which stores the accessToken returned for future use.  This will be used to create the authenticated calls to the .Net Core API.

Finally add a module to wrap everything together.  Call it oauth-callback.module.ts

import { NgModule } from '@angular/core';
import { OAuthCallbackComponent } from './oauth-callback.component';
import { OAuthCallbackHandler } from './oauth-callback.guard';

@NgModule({
    imports: [],
    declarations: [ OAuthCallbackComponent],
    providers: [OAuthCallbackHandler]
})
export class OAuthHandshakeModule { }

That takes care of the basics.  Now you can use the AuthenticationGuard to make sure the user is logged in before accessing secured areas.  Here is an example of it in use

 { path: '', redirectTo: 'login', pathMatch: 'full' },
 { path: 'home', component: HomeComponent, canActivate: [AuthenticationGuard] },
 { path: 'about', component: AboutComponent, canActivate: [AuthenticationGuard] },
 { path: 'login', component: LoginComponent },
 { path: 'id_token', component: OAuthCallbackComponent, canActivate: [OAuthCallbackHandler] },
 { path: 'contact', component: ContactComponent, canActivate: [AuthenticationGuard] }

Of course you still need your login page but you can make it look like anything you want.  You just need to make sure you call the login command when needed.  Here is my login.component.ts

import { Router } from '@angular/router';
import { AdalService } from './../services/adal.service';
import { Component, OnInit } from '@angular/core';

@Component({
    templateUrl: './login.component.html'
})
export class LoginComponent implements OnInit {

    constructor(private router: Router, private adalService: AdalService) { }

    ngOnInit() {
        console.log(this.adalService.userInfo);
    }

    login() {
        this.adalService.login();
    }

    logout() {
        this.adalService.logout();
    }

    public get isLoggedIn() {
        return this.adalService.isAuthenticated;
    }
}

and my login.component.html

<h3>Login</h3>

<div *ngIf="!isLoggedIn">
    <button type="button" class="btn btn-primary" (click)="login()">Login via Azure AD</button>
</div>

<div *ngIf="isLoggedIn">
    <button type="button" class="btn btn-danger" (click)="logout()">Logout</button>
</div>

Authorization

That takes care of logging in via Azure AD.  Now we need to use the token that we saved to make authenticated calls.  That is where the angular2-jwt package comes into play.  If you go to the package’s GitHub repository located here and scroll down, you will see that the README.md gives a lot of information on how to use the package.  I used the “Advanced Configuration” section for the information that I needed.

Per the instructions, I created an auth.module.ts file

import { NgModule } from '@angular/core';
import { Http, RequestOptions } from '@angular/http';
import { AuthHttp, AuthConfig } from 'angular2-jwt';

export function authHttpServiceFactory(http: Http, options: RequestOptions) {
    return new AuthHttp(new AuthConfig({
        tokenName: 'token',
        tokenGetter: (() => localStorage.getItem("token")),
        globalHeaders: [{ 'Content-Type': 'application/json' }],
    }), http, options);
}

@NgModule({
    providers: [
        {
            provide: AuthHttp,
            useFactory: authHttpServiceFactory,
            deps: [Http, RequestOptions]
        }
    ]
})
export class AuthModule {}

Notice that the “tokenGetter” option returns the token that I stored previously.

Make sure to include this module in your main module.

Now it is simple to make your API calls using authenticated HTTP calls.  The following code shows an example of how I have done it

import { Injectable } from '@angular/core';
import { Headers, Http, Response, RequestOptions } from '@angular/http';
import { Observable } from 'rxjs';
import 'rxjs/Rx';
import {AuthHttp} from 'angular2-jwt';

import { ResourceGroups } from './resourcegroups';

@Injectable()
export class AboutService{

    headers: Headers;
    options: RequestOptions;
 
    errormsg: string;

    constructor(private http: Http, 
    private authhttp: AuthHttp) {
        this.headers = new Headers({ 'Content-Type': 'application/json' });
        this.options = new RequestOptions({ headers: this.headers });
    }

    getResourceGroups() {
        return this.authhttp.get("http://localhost:64564/api/resourcegroups")
            .map(response => <ResourceGroups[]>response.json())
            .catch(this.handleError);
    }

    private handleError(error: any): Promise<any> {
        console.error('An Error occurred', error); // for demo purposes only
        return Promise.reject(error.message || error);
    }

} 

Note the use of angular2-jwt’s “authhttp” rather than the typical “http” call in getResourceGroups.

That takes care of the Angular2, now onto .Net Core.

.Net Core

This is much easier than the Angular2 side.  All you need to do is add the correct information to your appsettings.json, modify your Startup.cs file to accept the JWT token, and add the Authorize attribute to your API calls.

Let’s start with the appsettings.json.  Just like you did with the config.service.ts file above you need to add the proper settings.  Modify your file to add the “AzureAd” settings shown below

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AzureAd": {
    "AadInstance": "https://login.microsoftonline.com/{0}",
    "Tenant": "<your tenant ID>",
    "Audience": "<application ID from above>"

  }
}

Make sure to use the same settings that you used in the config.service.ts file.  Some examples I have seen use the GUID for the tenant ID, but that never seemed to work for me.

Now that you have the correct settings, modify the Startup.cs file to use them.  Edit the “Configure” method to add the following lines to read and use the settings from above.

// Configure the app to use Jwt Bearer Authentication
app.UseJwtBearerAuthentication(new JwtBearerOptions
{
   AutomaticAuthenticate = true,
   AutomaticChallenge = true,
   Authority = String.Format(Configuration["AzureAd:AadInstance"], Configuration["AzureAD:Tenant"]),
   Audience = Configuration["AzureAd:Audience"],
});

And, since you are calling this from JavaScript, you need to add Cross-Origin Resource Sharing (CORS) settings.  This states that you trust the JavaScript that is making the calls.  You can set this up in the “Configure” method as well, and I highly recommend reading the Enabling Cross-Origin Requests (CORS) site to determine which method will work best for you and for more examples.

I added the following line to the “Configure” method just under the lines above. NOTE: THIS IS NOT A SECURE WAY OF DOING THIS AS IT WILL ALLOW ANY SITE TO CALL THESE APIs AND IS ONLY DONE FOR DEMONSTRATION PURPOSES.  In real life you would list only trusted origins.

app.UseCors(builder => builder.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod().AllowCredentials());

Finally, add the [Authorize] attribute to either the entire API, as I have done in the example below, or to individual API calls.

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Management.Fluent;
using System.Threading.Tasks;

namespace AMLDemoAPI.Controllers
{
    [Route("api/[controller]")]
    [Authorize]
    public class ResourceGroupsController : Controller
    {
        private readonly IAzure _azure = Azure.Authenticate("c:\\code\\auth.txt").WithDefaultSubscription();

        // GET api/values
        [HttpGet]
        public async Task<IActionResult> Get()
        {
            var result = await _azure.ResourceGroups.ListAsync();
            return new JsonResult(result);
        }

        // GET api/values/5
        [HttpGet("{id}")]
        public async Task<IActionResult> Get(string id)
        {
            return new JsonResult(await _azure.ResourceGroups.GetByNameAsync(id));
        }

        // POST api/values
        [HttpPost]
        public void Post([FromBody]string value)
        {

        }

        // PUT api/values/5
        [HttpPut("{id}")]
        public void Put(int id, [FromBody]string value)
        {
        }

        // DELETE api/values/5
        [HttpDelete("{id}")]
        public async Task Delete(string id)
        {
            await _azure.ResourceGroups.DeleteByNameAsync(id);
        }
    }
}

That is all there is to it.  Hope it helps.  The code for this example can be found in my GitHub repositories:

Angular2: https://github.com/garybushey/AuthenticationDemo

.Net Core: https://github.com/garybushey/AMLDemoAPI

June 5, 2017

Dependency Injection in a nutshell

Filed under: Azure Management Library @ 4:05 pm

I was working with a new developer on one of my contracts who could not seem to grasp the concept of Dependency Injection. According to Wikipedia, “Dependency injection separates the creation of a client’s dependencies from the client’s behavior, which allows program designs to be loosely coupled and to follow the dependency inversion and single responsibility principles. It directly contrasts with the service locator pattern, which allows clients to know about the system they use to find dependencies.”

Makes sense, right?

I explained it this way:  You are going out and need a car, so you contact a ride-sharing service.  You may not really care what car you get as long as it meets certain criteria like 4 wheels, brakes, seats, etc.  In that case you just call the company and say “I need a car”.  Dependency Injection works the same way.  You may not care (or in some cases even know) what class you are getting as long as it meets certain criteria.  If you need to log information from your program, you may not care if the data is stored in a SQL Server database, a Cosmos DB database, an Azure table, or even a flat file, just as long as the data is stored correctly.
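
Here is the logging example as a short code sketch (my own, purely illustrative; the interface and class names are made up):

using System;

public interface ILogStore
{
    void Save(string message);
}

// One possible implementation; it could just as easily write to SQL Server,
// Cosmos DB, or an Azure table.
public class FlatFileLogStore : ILogStore
{
    public void Save(string message) => Console.WriteLine("(pretend file write) " + message);
}

public class OrderProcessor
{
    private readonly ILogStore _log;

    // The dependency is injected; OrderProcessor never creates a store itself.
    public OrderProcessor(ILogStore log) => _log = log;

    public void Process() => _log.Save("Order processed");
}

public static class Program
{
    public static void Main()
    {
        // The "ride-sharing service": the composition root decides which car shows up.
        var processor = new OrderProcessor(new FlatFileLogStore());
        processor.Process();
    }
}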

He seemed to get it and I hope it helps others.


May 29, 2017

Using Azure Management Libraries

Filed under: Azure Management Library @ 7:28 pm

In my last blog post I gave a quick introduction to the Azure Management Libraries (AML) for .Net.  In this post we will go through some code examples.  Microsoft has provided an extensive library of samples that you can look through here, but the problem is there is no real description of how these were created.

The first thing you will need to do is download the libraries you want to use.  If you look at the README.MD file in AML’s GitHub repository, almost at the bottom of the page is a listing of the various Azure Management Libraries and a link to the NuGet repository where you can get more information on each library as well as the command needed to download it.

For this example we will need 3 libraries:

  • Microsoft.Azure.Management.Compute.Fluent
  • Microsoft.Azure.Management.Fluent
  • Microsoft.Azure.Management.ResourceManager.Fluent

Create a new Console App using .Net Core.

Using either PowerShell or the Package Manager Console, install the libraries we need from above and then add the appropriate “using” statements.

The first thing we need to do in order for our code to work is authenticate ourselves. This is not done using a username and password but rather through Azure Service Principal information, either through code or through a file. There are step-by-step instructions available on how to create a Service Principal from your desktop, but that can be quite a process (especially on a Windows 10 machine).

Luckily, at the last Build conference, Microsoft announced that all Azure portals will have a CLI shell built right into the portal (PowerShell coming soon).  By using that you can bypass all the steps and just use the one command shown below.  Note that in the instructions it is suggested to use “jq” to push the output into a file; I do not do that, I just copy the output from my shell and paste it into Notepad.  Also note that the command below looks like it wants you to change values like “subscriptionId”, but you can paste the line AS IS into the CLI window and it will work.

az ad sp create-for-rbac --expanded-view -o json --query "{subscription: subscriptionId, client: client, key: password, tenant: tenantId, managementURI: endpoints.management, baseURL: endpoints.resourceManager, authURL: endpoints.activeDirectory, graphURL: endpoints.activeDirectoryGraphResourceId}"

This command will generate text like what is shown below (of course, the values of the GUIDs have been changed; otherwise you could have tenant admin rights to my tenant).  Copy and paste the text into a file for use in your code.  Note that you need to guard this file just as you would your admin username and password credentials.

"authURL": "https://login.microsoftonline.com", "baseURL": "https://management.azure.com/", "client": "55550809-4168-4846-8620-98b32163a9da", "graphURL": "https://graph.windows.net/", "key": "55555555-603b-5555-8d7a-3762d60fe9fb", "managementURI": "https://management.core.windows.net/", "subscription": "55555555-b5da-460d-5555-ac985d5f3b83", "tenant": "55555555-ede8-4da6-5555-2d9d5fd5295f"

If you care, what the command is doing is creating a new App registration in Azure AD and storing the key in the Keys area, so if your file ever falls into the wrong hands you can delete that app to render the file useless.

Replace the line in the Main method with:

var azure = Azure.Authenticate("c:\\code\\auth.txt").WithDefaultSubscription();

(substituting the name and the location of your file).  This will use the file to get access to the tenant and then use the default subscription for everything.  If you know your subscription’s ID you can use “WithSubscription(<subscriptionId>)” instead.  Your code should look like the following:
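
(Pulled from the complete listing at the end of this post.)

static void Main(string[] args)
{
    // Authenticate using the Service Principal file and use the default subscription.
    var azure = Azure.Authenticate("c:\\code\\auth.txt").WithDefaultSubscription();
}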

Now that we are authenticated, we can start to manipulate the resources in Azure.  You may have noticed that all the libraries we imported end in “Fluent”.  What this means is that the commands can be chained together into one call.  So rather than stating the VM’s name, resource group, and network in 3 different commands, we can do it all at once.  Also, I make a call to “using Microsoft.Azure.Management.ResourceManager.Fluent.Core” rather than just the “Fluent” namespace so that I can get access to some of the enums used below.

There is one caveat to this.  While the commands can be chained together, there is an order to how they can be chained.  For instance, to create a VM we need to specify the name first (actually this is done as part of the definition), then we MUST define the region, followed by the resource group, then the primary network, the private IP address information, public IP address, machine image, admin username, and admin password; after that you can add optional information like disks, tags, size, and many more options.  One nice thing about AML commands is that if you are expected to enter complex information, like a Windows Server image name, there is an enum that has all the values defined, like “KnownWindowsVirtualMachineImage.WindowsServer2008R2_SP1”.

Once you have all your values defined you just need to make a call to “Create()” to kick off the process.  There is also a “CreateAsync()” command but, as of this writing, it is still in preview and will not be covered here. A bare-minimum VM implementation looks like:
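
(From the complete listing at the end of this post.)

var windowsVM = azure.VirtualMachines.Define("test")
    .WithRegion(Region.USEast)
    .WithNewResourceGroup("test")
    .WithNewPrimaryNetwork("10.0.0.0/28")
    .WithPrimaryPrivateIPAddressDynamic()
    .WithoutPrimaryPublicIPAddress()
    .WithPopularWindowsImage(KnownWindowsVirtualMachineImage.WindowsServer2008R2_SP1)
    .WithAdminUsername("garybushey")
    .WithAdminPassword("ThisIsAFakeP@ssw0rd1234")
    .Create();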

If you notice, I use the “azure” variable that I had defined previously to create the VM.  By calling the “VirtualMachines” class I am saying I want to manipulate VMs, and by calling the “Define” method I am saying I want to create a new VM.  Be careful: if the machine already exists this call will throw an error.

If I wanted to change values I would use “Update” instead of “Define”.  In order to do that I need a reference to the VM.  I can get one a couple of different ways, including using the Id of the VM or using the name of the resource group the VM belongs to along with the name of the VM itself (this is the method I use most of the time), as shown below:

var windowsVM1 = azure.VirtualMachines.GetByResourceGroup("test", "test");

(note that I called my VM and my Resource Group “test” which probably wouldn’t happen in the real world).

Once you have the reference you can then update the VM.  You can do things like add a new data disk (shown below), add tags, change the machine size, and so on.  Once you have all the commands chained together, call “Apply” to apply the updates as shown below.
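
(Again from the complete listing at the end of this post.)

windowsVM1.Update()
    .WithNewDataDisk(10)
    .Apply();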

One other thing you can do is manage the resources, like turning a VM on or off.  This is very easy to do.  Get a reference to the VM like we did above and then just call the “PowerOff()” method to turn the VM off or “Start()” to turn it on.  You can also reset the VM by calling “Restart()”.  No additional commands are needed.
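
In code that is simply (a quick sketch reusing the “test” demo names from above):

var vm = azure.VirtualMachines.GetByResourceGroup("test", "test");
vm.PowerOff();    // turn the VM off
vm.Start();       // turn it back on
vm.Restart();     // or reset it in one call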

The last thing you can do is get information about the VM.  Much like above, get the reference to the VM; then if you just type the variable’s name and a period, IntelliSense will show you the VM’s values that you can retrieve.

As you can imagine, using the Azure Management Libraries you can create programs that allow people like help desk users to create and manage resources in Azure without having actual logins.  In future posts I will go through doing just that.

Here is the complete code.

using System;
using Microsoft.Azure.Management.Compute.Fluent;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;

namespace AMLDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            var azure = Azure.Authenticate("c:\\code\\auth.txt").WithDefaultSubscription();
            var windowsVM = azure.VirtualMachines.Define("test")
                .WithRegion(Region.USEast)
                .WithNewResourceGroup("test")
                .WithNewPrimaryNetwork("10.0.0.0/28")
                .WithPrimaryPrivateIPAddressDynamic()
                .WithoutPrimaryPublicIPAddress()
                .WithPopularWindowsImage(KnownWindowsVirtualMachineImage.WindowsServer2008R2_SP1)
                .WithAdminUsername("garybushey")
                .WithAdminPassword("ThisIsAFakeP@ssw0rd1234")
                .Create();
            Console.WriteLine(windowsVM.Id);

            var windowsVM1 = azure.VirtualMachines.GetByResourceGroup("test", "test");
            windowsVM1.Update()
                .WithNewDataDisk(10)
                .Apply();

        }

    }
}
