Getting Release Management to fail a release when using a custom PowerShell component

If you have a custom PowerShell script you wish to run, you can create a tool in Release Management (Inventory > Tools) for the script; the tool deploys the .PS1, .PSM1 files etc. and defines the command line used to run it.
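
For illustration, the command line defined for such a tool might look something like the following; the script name and parameter names are purely illustrative, and the __Param__ tokens are just placeholders for whatever parameters your component passes in:

powershell.exe -NonInteractive -ExecutionPolicy Bypass -File .\MyScript.ps1 -Param1 "__Param1__" -Param2 "__Param2__"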

The problem we hit was that our script failed, but the release step was not failed because the PowerShell.EXE running the script exited without error. The script had thrown an exception, which appeared in the output log file, but the step was still marked as completed.

The solution was to use a try/catch in the .PS1 script that, as well as writing a message via Write-Error, also sets the exit code to something other than 0 (zero). So you end up with something like the following in your .PS1 file:

param
(
    [string]$Param1,
    [string]$Param2
)

try
{
    # some logic here
}
catch
{
    Write-Error $_.Exception.Message
    exit 1 # return a non-zero exit code so the error is flagged and can be seen by RM
}


Once this change was made, an exception in the PowerShell script caused the release step to fail as required. The output from the script appeared as the Command Output.


Source: Rfennell

ALM Ranger’s release DevOps Guidance for PowerShell DSC – perfectly timed for DDDNorth

In a beautiful synchronicity, the ALM Rangers DevOps guidance for PowerShell DSC has been released at the same time as I am doing my DDDNorth session ‘What is Desired State Configuration and how does it help me?’

This Rangers project has been really interesting to work on, and provided much of the core of my session for DDDNorth.

Well worth a look if you want to create your own DSC resources.


Source: Rfennell

Cannot build a SSRS project in TFS build due to expired license

If you want your TFS build process to produce SSRS RDL files you need to call the vsDevEnv custom activity to run Visual Studio (just like for SSIS packages). On our new TFS 2013.3 based build agents this step started to fail; it turned out the issue was not incorrect versions of DLLs or some badly applied update, but that the license for Visual Studio on the build agent had expired.

I found it by looking at diagnostic logs in the TFS build web UI.

image

To be able to build BI projects with Visual Studio you do need a licensed copy of Visual Studio on the build agent. You can use a trial license, but it will expire. Also remember that if you license VS by logging in with your MSDN Live ID, that too needs to be refreshed from time to time (that is what got me), so it is better to use a product key.


Source: Rfennell

Experiences using a DD-WRT router with Hyper-V

I have been playing around with the idea of using a DD-WRT router on a Hyper-V VM to connect my local virtual machines to the Internet, as discussed by Martin Hinshelwood in his blog post. I learned a few things that might be of use to others trying the same setup.

What I used to do

Prior to using the router I had been using three virtual switches on my Windows 8 Hyper-V setup, with multiple network adaptors to connect both my VMs and the host machine to the switches and networks.

image

So I had

  • One internal virtual switch only accessible on my host machine and my VMs
  • Two external virtual switches
    • one linked to my physical Ethernet adaptor
    • the other linked to my physical WiFi adaptor

Arguably I could have had just one ‘public’ virtual switch and connected it to either my Ethernet or WiFi as needed. However, I found it easier to swap the virtual switch in the VM settings rather than swap the network adaptor inside the virtual switch settings. I cannot really think of a compelling reason to pick one method over the other, just personal taste or habit I guess.
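
For reference, a switch layout like this can be scripted with the Hyper-V PowerShell module. This is only a minimal sketch; the switch names are my own and the physical adaptor names must match whatever Get-NetAdapter reports on your machine:

# internal switch, only visible to the host and the VMs
New-VMSwitch -Name "Internal" -SwitchType Internal
# external switches bound to the physical adaptors
New-VMSwitch -Name "External - Ethernet" -NetAdapterName "Ethernet"
New-VMSwitch -Name "External - WiFi" -NetAdapterName "Wi-Fi"
# add a second adaptor to a VM and connect it to one of the external switches
Add-VMNetworkAdapter -VMName "MyVM" -SwitchName "External - Ethernet"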

This setup had worked OK: if I needed to access a VM from my host PC I used the internal switch. This switch had no DHCP server on it, so I used the alternate configuration IP addresses assigned by Windows, managing machine IP addresses via a local hosts file. To allow the VMs to access the Internet I added a second network adaptor to each VM, which I bound to one of the externally connected switches, the choice depending on which Internet connection I had at any given time.

However, all was not perfect. I have had problems with some Linux distributions running in Hyper-V not getting an IP address via DHCP over WiFi. There was also the complexity of having to add a second network adaptor to each VM.

So would a virtual router help? I thought it worth a try, so I followed Martin’s process, but hit a few problems.

Setting up the DD-WRT router

As Martin said in his post, more work was needed to fully configure the router to allow external access. The problem I had for a long time was that as soon as I enabled the WAN port I seemed to lose the connection. After much fiddling this was the process that worked for me:

  1. Install the router as detailed in Martin’s post
  2. Link your internal Hyper-V switch to the first Ethernet (Eth0) port on the router VM. This seems a bit counter-intuitive, as the DD-WRT wiki says the first port is for the WAN – more on that later

    image
  3. Boot the router; you should be able to log in on the address 192.168.1.1 as root with the password admin, either on the console or via a web browser from your host PC
  4. On the basic setup tab (the default page) enable the WAN by selecting ‘Automatic Configuration (DHCP)’ and save the change

    It is at this point I kept getting disconnected. I then realised it was because the ports were being reassigned: at this point Eth0 had indeed become the WAN port and Eth1 the internal port
  5. So in Hyper-V Manager (this re-assignment can also be scripted from the host, see the sketch after this list)
    • Re-assign the first Ethernet port (Eth0) to the external Hyper-V switch (in turn connected to your Internet connection)
    • Assign the second Ethernet port (Eth1) to the internal virtual switch

      image
  6. You can now re-connect to 192.168.1.1 in a browser from your host machine to complete your configuration
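
The switch re-assignment in step 5 can also be done from the host with the Hyper-V PowerShell module. This is only a sketch; it assumes the router VM is called "DD-WRT", that its adapters are returned in the order they were added (Eth0 first), and that the switch names match the ones used earlier:

# get the router VM's network adapters, in the order they were added
$adapters = Get-VMNetworkAdapter -VMName "DD-WRT"
# Eth0 has become the WAN port, so connect it to an externally connected switch
$adapters[0] | Connect-VMNetworkAdapter -SwitchName "External - Ethernet"
# Eth1 has become the LAN port, so connect it to the internal switch
$adapters[1] | Connect-VMNetworkAdapter -SwitchName "Internal"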

So now all my VMs connected to the internal virtual switch could get a 192.168.1.x address via DHCP (using their single network adaptors), but they could not see the Internet. However, on the plus side, DHCP seemed to work OK for all operating systems, so my Linux issues seemed to be fixed.

It is fair to say I now had a fairly complex network, so it was not surprising I had routing issues.

image

The issue seems to have been that the VMs were not being passed the correct default gateway and DNS entries by DHCP. I had expected these to be set by default by the router, but that was not the case. They seem to need to be set by hand, as shown below.

image

Once these had been set, the change saved on the router and the VMs' DHCP settings renewed, the VMs had Internet access.

At one point I thought I had also lost Internet access from my host PC, or at least that Internet access was much slower. I thought I had created a routing loop, with all traffic passing through the router whether it needed to or not. However, once the router gateway IP settings above were set these problems went away.

When I checked my Windows 8 host’s routing table using netstat –r it showed two default routes (0.0.0.0): my primary one (192.168.0.1), my home router, and my Hyper-V router (192.168.1.1). The second one had a much higher metric, so it should not be used unless packets are being sent to the 192.168.1.x network; all other traffic should go out over the primary link.
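
On Windows 8 the same check can be done from PowerShell as an alternative to netstat –r; a minimal sketch:

# list the default (0.0.0.0/0) routes; the route with the lowest metric wins for general traffic
Get-NetRoute -DestinationPrefix "0.0.0.0/0" |
    Select-Object ifIndex, NextHop, RouteMetric |
    Sort-Object RouteMetric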

image

It was at this time I noticed that the problem of getting a DHCP-based IP address over WiFi had not gone away completely. If I had my router’s WAN port connected to my WiFi virtual switch then, depending on the model/setup of the WiFi router, DHCP worked sometimes and sometimes not. I think this was mostly down to an authentication issue; not a major issue, as thus far the only place I have a problem is our office, where our WiFi is secured via RADIUS server based AD authentication. Here I just switched to using either our guest WiFi or our Ethernet, which both worked.

So is this a workable solution?

It seems to be OK thus far, but there were more IP address/routing issues during the setup than I would like; you need to know your IPv4.

There are many options on the DD-WRT console I am unfamiliar with. By default it is running just like a home router, in Network Address Translation (NAT) mode. This has the advantage of hiding the internal switch, but I was wondering whether it would be easier to run the DD-WRT as a simple router?

The problem with that mode of operation is that I would need to make sure my internal virtual LAN does not conflict with anything on the networks I connect to, and with automated routing protocols such as RIP that could get interesting fast; making me a few enemies with the IT managers whose networks I connect to.

A niggle is that whenever I connect my PC to a new network I need to remember to do a DHCP renew of the WAN port (Status > WAN > DHCP Renew); it does not automatically detect the change in connection.

Also, I still need to manage my VMs' IP addresses with a hosts file on the host Windows PC. As I don’t want to edit this file too often, it is a good idea to increase the DHCP lease time on the router (Setup > Basic Setup) to a few days instead of a day.

As to how well this works we shall see, but it seems OK for now.


Source: Rfennell

Version stamping Windows 8 Store App manifests in TFS build

We have for a long time used the TFSVersion custom build activity to stamp all our TFS builds with a unique version number that matches our build number. However, this only edits the AssemblyInfo.cs file. As we are now building more and more Windows 8 Store Apps we also need to edit the XML in the Package.appxmanifest files used to build the packages. Just like a WiX MSI project, it is a good idea that the package version matches some aspect of the assemblies it contains. We needed to automate the update of this manifest as people too often forget to increment the version, causing confusion all down the line.

Now I could have written a new TFS custom activity to do the job, or edited the existing one, but both options seemed a poor choice. We all know that custom activity writing is awkward and a pain to support going forward. So I decided to use the hooks in the 2013 generation build process template to just call a custom PowerShell script to do the job.

I added a PreBuildScript.PS1 file as a solution item to my solution.

I placed the following code in the file. It uses the TFS environment variables to get the build location and version, using these to find and edit the manifest files. The only gotcha is that files on the build box are read only (it is a server workspace), so the manifest file has to be made writable before it can be saved back.

# get the build number
$buildnum = $env:TF_BUILD_BUILDNUMBER.Split('_')[1]
# get the manifest file paths
$files = Get-ChildItem -Path $env:TF_BUILD_BUILDDIRECTORY -Filter "Package.appxmanifest" -Recurse
foreach ($filepath in $files)
{
    Write-Host "Updating the Store App Package '$filepath' to version '$buildnum'"
    # update the identity version value
    $XMLfile = New-Object xml
    $XMLfile.Load($filepath.FullName)
    $XMLfile.Package.Identity.Version = $buildnum
    # clear the read-only flag (server workspace) so the file can be written back
    Set-ItemProperty $filepath.FullName -Name IsReadOnly -Value $false
    $XMLfile.Save($filepath.FullName)
}

Note that any output sent via Write-Host will only appear in the diagnostic log of TFS. If you use Write-Error (or errors are thrown) these messages will appear in the build summary; the build will not fail, but it will be marked as a partial success.


Once this file was checked in I was able to reference it in the build template.


image


The build could now be run and produced my Windows 8 Store packages with the required version number.


Source: Rfennell

Updated blog server to BlogEngine.NET 3.1

Last night I upgraded this blog server to BlogEngine.NET 3.1. I used the new built-in automated update tool, on an offline backup copy of course.

It did most of the job without any issues. The only extra things I needed to do were:

  • Remove an <add name="XmlRoleProvider" …> entry in the web.config. I have had to do this before on every install.
  • Run the SortOrderUpdate.sql script to add the missing column and index (see issue 12543)

Once done and tested locally, I uploaded the tested site to my production server. Just a point to note: the upgrade creates some backup ZIPs of your site before upgrading; you don’t need to copy these around as they are large.


Source: Rfennell

Swapping the Word template in a VSTO project

We have recently swapped the Word template we use to make sure all our proposals and other documents are consistent. The changes are all cosmetic: fonts, footers etc. to match our new website; it still makes use of the same VSTO automation to do much of the work. The problem was I needed to swap the .DOTX file within the VSTO Word add-in project; we had not been editing the old .DOTX template in the project, but had created a new one based on a copy outside of Visual Studio.

To swap in the new .DOTX file for the VSTO project I had to…

  • Copy the new TEMPLATE2014.DOTX file to the project folder
  • Open the VSTO Word add-in .CSPROJ file in a text editor and replace all the occurrences of the old template name with the new, e.g. TEMPLATE.DOTX for TEMPLATE2014.DOTX
  • Reload the project in Visual Studio 2013; there should be no errors and the new template is listed in place of the old one
  • However, when I tried to compile the project I got a DocumentAlreadyCustomizedException. I did not know this, but the template in a VSTO project needs to be a copy with no association with any VSTO automation. The automation links are applied during the build process, which makes sense when you think about it. As we had edited a copy of our old template outside of Visual Studio, our copy already had the old automation links embedded. These needed to be removed; the fix was to
    • Open the .DOTX file in Word
    • On the File menu > Info > Right click on Properties (top right) to get the advanced list of properties

      image
    • Delete the _AssemblyName and _AssemblyLocation custom properties
    • Save the template
    • Open the VSTO project in Visual Studio and you should be able to build the project
  • The only other thing I had to do was make sure my VSTO project was the start-up project for the solution. Once this was done I could F5/Debug the template/VSTO combination

Source: Rfennell

Using MSDEPLOY from Release Management to deploy Azure web sites

Whilst developing our new set of websites we have been using MSDeploy to package up the websites for deployment to test and production Azure accounts. These deployments were being triggered directly from Visual Studio. Now we know this is not best practice; you don’t want developers shipping to production from their development PCs, so I have been getting around to migrating these projects to Release Management.

I wanted to minimise change as we like MSDeploy; I just wanted to pass the extra parameters to allow a remote deployment, as opposed to a local one, using the built-in WebDeploy component in Release Management.

To do this I created a new component based on the WebDeploy tool. I then altered the arguments to

__WebAppName__.deploy.cmd /y /m:"__PublishUrl__" -allowUntrusted /u:"__PublishUser__" /p:"__PublishPassword__" /a:Basic

image


With these three extra publish parameters I can target the deployment to an Azure instance, assuming WebDeploy is installed on the VM running the Release Management deployment client.


The required values for these parameters can be obtained from the .PublishSettings file you download from your Azure web site’s management page. If you open this file in a text editor you can read the values you need (a PowerShell sketch for pulling them out automatically follows the parameter list below).


<publishData>
  <publishProfile profileName="SomeSite - Web Deploy" publishMethod="MSDeploy"
    publishUrl="somesite.scm.azurewebsites.net:443" msdeploySite="SomeSite"
    userName="$SomeSite" userPWD="m1234567890abcdefghijklmnopqrstu"
    destinationAppUrl="http://somesite.azurewebsites.net"
    SQLServerDBConnectionString="" mySQLDBConnectionString=""
    hostingProviderForumLink="" controlPanelLink="http://windows.azure.com">
    <databases/>
  </publishProfile>
  <publishProfile profileName="SomeSite - FTP" publishMethod="FTP"
    publishUrl="ftp://site.ftp.azurewebsites.windows.net/site/wwwroot" ftpPassiveMode="True"
    userName="SomeSite\$SomeSite" userPWD="m1234567890abcdefghijklmnopqrstu"
    destinationAppUrl="http://somesite.azurewebsites.net"
    SQLServerDBConnectionString="" mySQLDBConnectionString=""
    hostingProviderForumLink="" controlPanelLink="http://windows.azure.com">
    <databases/>
  </publishProfile>
</publishData>


These values are used as follows


  • WebAppName – This is the name of the MSDeploy package; this is exactly the same as for a standard WebDeploy component.
  • PublishUrl – We need to add https:// to the start and /MsDeploy.axd to the end of the URL, e.g. https://somesite.scm.azurewebsites.net:443/MsDeploy.axd
  • PublishUser – e.g. $SomeSite
  • PublishPassword – This is set as an encrypted parameter so it cannot be viewed in the Release Management client, e.g. m1234567890abcdefghijklmnopqrstu
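
Rather than copying these values out by hand, they can also be pulled from the downloaded file with a few lines of PowerShell. This is only a sketch; the file name is an assumption, and the attribute names are those shown in the example .PublishSettings content above:

# load the downloaded publish settings file (file name is illustrative)
[xml]$settings = Get-Content -Raw '.\SomeSite.PublishSettings'
# pick out the Web Deploy (MSDeploy) profile
$deployProfile = $settings.publishData.publishProfile | Where-Object { $_.publishMethod -eq 'MSDeploy' }
# build the values needed by the component parameters
$publishUrl = "https://$($deployProfile.publishUrl)/MsDeploy.axd"
$publishUser = $deployProfile.userName
$publishPassword = $deployProfile.userPWD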

On top of these parameters, we can still pass in extra parameters to transform the web.config using the setparameters.xml file, as detailed in this other post, allowing us to complete the configuration steps we need for the various environments in our pipeline.


Source: Rfennell

Moving our BlogEngine.NET server to Azure

As part of our IT refresh we have decided to move this BlogEngine.NET server from a Hyper-V VM in our office to an Azure website.

BlogEngine.NET is now a gallery item for Azure websites, so after a few clicks you should be up and running.

image

However, if you want to use SQL as opposed to XML as the datastore you need to do a bit more work. This process is well documented in the video ‘Set BlogEngine.NET to use SQL provider in Azure’, but we found we needed to perform some extra steps due to where our DB was coming from.

Database Fixes

The main issue was that our on-premises installation of BlogEngine.NET used a SQL 2012 availability group. This, amongst other things, adds some extra settings that stop the ‘Deploy Database to Azure’ feature in SQL Management Studio from working. To address these issues I did the following:

I took a SQL backup of the DB from our production server and restored it to a local SQL 2012 Standard edition instance. I then tried the Deploy Database to Azure option.

image

But got the errors I was expecting

image

There were three types of error:

Error SQL71564: Element User: [BLACKMARBLE\AUser] has an unsupported property AuthenticationType set and is not supported when used as part of a data package.
Error SQL71564: Element Column: [dbo].[be_Categories].[CategoryID] has an unsupported property IsRowGuidColumn set and is not supported when used as part of a data package.
Error SQL71564: Table Table: [dbo].[be_CustomFields] does not have a clustered index.  Clustered indexes are required for inserting data in this version of SQL Server.

The first was fixed by simply deleting the listed users in SQL Management Studio, or via the query

DROP USER [BLACKMARBLE\AUser]

The second was addressed by removing the ‘IsRowGuidColumn’ property in Management Studio

image

or via the query

ALTER TABLE dbo.be_Categories ALTER COLUMN CategoryID DROP ROWGUIDCOL

Finally I had to replace the non-clustered index with a clustered one. I got the required definition from the setup folder of our BlogEngine.NET installation, and ran the command

DROP INDEX [idx_be_CustomType_ObjectId_BlogId_Key] ON [dbo].[be_CustomFields]

CREATE CLUSTERED INDEX [idx_be_CustomType_ObjectId_BlogId_Key] ON [dbo].[be_CustomFields]
(
    [CustomType] ASC,
    [ObjectId] ASC,
    [BlogId] ASC,
    [Key] ASC
)

Once all this was done in Management Studio I could Deploy Database to Azure, so after a minute or two I had a BlogEngine.NET DB on Azure.

Azure SQL Login

The new DB did not have any user accounts associated with it, so I had to create one.

On the SQL server’s master DB I ran

CREATE LOGIN usrBlog WITH password='a_password';

And then on the new DB I ran

CREATE USER usrBlog FROM LOGIN usrBlog ;
EXEC sp_addrolemember N'db_owner', usrBlog

Azure Website

At this point we could have created a new Azure website using the BlogEngine.NET template in the gallery. However, I chose to create an empty site as our version of BlogEngine.NET (3.x) is newer than the version in the Azure gallery (2.9).

Due to the history of our blog server we have a non-default structure; the BlogEngine.NET code is not in the root. We retain some folders with redirects in them to allow old URLs to still work. So via an FTP client we created the following structure, copying up the content from our on-premises server:

  • site\wwwroot – the root site, we have a redirect here to the blogs folder
  • site\wwwroot\bm-bloggers – again a redirect to the blogs folder, dating back to our first shared blog
  • site\wwwroot\blogs – our actual server, this needs to be a virtual application

    Next I set the virtual application in the Configure section for the new website, right at the bottom of the page

    image

    At this point I was back in line with the video, so I needed to link our web site to the DB. This is done using the link button on the Azure web site’s management page. I entered the credentials for the new SQL DB, and the DB and web site were linked. I could then get the connection string for the DB and enter it into the web.config.


  • Unlike in the video, the only edit I needed to make was to the connection string, as all the other edits had already been made for the on-premises SQL


    Once the revised web.config was uploaded the site started up, and you should be seeing it now


Source: Rfennell

Publishing more than one Azure Cloud Service as part of a TFS build

Using the process in my previous post you can get a TFS build to create the .CSCFG and .CSPKG files needed to publish a cloud service. However, you hit a problem if your solution contains more than one cloud service project; a single cloud service project with multiple roles is not a problem.

The method outlined in the previous post drops the two files into a Packages folder under the drops location. The .CSPKG files are fine, as they have unique names. However, there is only ever one ServiceConfiguration.cscfg: whichever one was created last.

Looking in the cloud service projects I could find no way to rename the ServiceConfiguration file. It looks like it is treated like an app.config or web.config file, i.e. its name is hard coded.

The only solution I could find was to add a custom target that is set to run after the publish target. This was added to the end of each .CCPROJ file using a text editor, just before the closing </Project> element:

<Target Name="CustomPostPublishActions" AfterTargets="Publish">
  <Exec Command="IF '$(BuildingInsideVisualStudio)'=='true' exit 0
  echo Post-PUBLISH event: Active configuration is: $(ConfigurationName) renaming the .cscfg file to avoid name clashes
  echo Renaming the .CSCFG file to match the project name $(ProjectName).cscfg
  ren $(OutDir)Packages\ServiceConfiguration.*.cscfg $(ProjectName).cscfg
  " />
</Target>
<PropertyGroup>
  <PostBuildEvent>echo NOTE: This project has a post publish event</PostBuildEvent>
</PropertyGroup>


Using this I now get unique names for the .CSCFG files as well as the .CSPKG files in my drops location, all ready for Release Management to pick up.


Notes:


  • I echo out a message in the post build event too, just as a reminder that I have added a custom target that cannot be seen in Visual Studio, so is hard to discover
  • I use an IF test to make sure the commands are only run on the TFS build box, not on a local build. The main reason for this is that the path names are different for local builds as opposed to TFS builds. If you do want a rename on a local build you need to change the $(OutDir)Packages path to $(OutDir)app.publish. However, it seemed more sensible to let the default behaviour occur when running locally.

Source: Rfennell
