What’s new in TFS from TechEd 2014?

If you use TFS then it is well worth a look at Brian Harry’s TechEd 2014 session ‘Modern Application Lifecycle Management’. It goes through the changes and new features in TFS, both on-premises and in the cloud.

Not all of these features are in 2013.2 (which was released during the conference). However, in the session they said the Visual Studio 2013.3 CTP would be available the following week, so there is not long to wait if you want a look at the latest features.

New release of TFS Alerts DSL that allows work item state rollup

A very common question I am asked at clients is “is it possible for a parent TFS work item to automatically be set to ‘done’ when all the child work items are ‘done’?”. The answer is that this is not possible out of the box; there is no work item state rollup in TFS.

However, it is possible via the API. I have modified my TFS Alerts DSL CodePlex project to expose this functionality, adding a couple of methods that allow you to find the parent and children of a work item, and hence create your own rollup script.

To make use of this, all you need to do is create a TFS Alert that calls a SOAP endpoint where the Alerts DSL is installed. This endpoint should be called whenever a work item changes state. It will in turn run a Python script similar to the following to perform the rollup:

import sys
# Expect 2 args: the event type and the unique ID of the work item
if sys.argv[0] == "WorkItemEvent" :
    wi = GetWorkItem(int(sys.argv[1]))
    parentwi = GetParentWorkItem(wi)
    if parentwi is None:
        LogInfoMessage("Work item '" + str(wi.Id) + "' has no parent")
    else:
        LogInfoMessage("Work item '" + str(wi.Id) + "' has parent '" + str(parentwi.Id) + "'")

        results = [c for c in GetChildWorkItems(parentwi) if c.State != "Done"]
        if len(results) == 0:
            LogInfoMessage("All child work items are 'Done'")
            parentwi.State = "Done"
            UpdateWorkItem(parentwi)
            msg = "Work item '" + str(parentwi.Id) + "' has been set as 'Done' as all its child work items are done"
            SendEmail("richard@typhoontfs","Work item '" + str(parentwi.Id) + "' has been updated", msg)
            LogInfoMessage(msg)
        else:
            LogInfoMessage("Not all child work items are 'Done'")
else:
    LogErrorMessage("Was not expecting to get here")
    LogErrorMessage(sys.argv)


So there is now a fairly easy way to create your own rollups, based on your own rules.

Getting ‘The build directory of the test run either does not exist or access permission is required’ error when trying to run tests as part of a Release Management deployment

Whilst running tests as part of a Release Management deployment I started seeing the error ‘The build directory of the test run either does not exist or access permission is required’, and hence all my tests failed. It seems there are a number of issues that can cause this problem, as mentioned in the comments on Martin Hinshelwood’s post on running tests in deployment; in particular, spaces in the build name can cause it, but this was not the case for me.

The strangest point was that it used to work; what had I changed?

To debug the problem I logged into the test VM as the account the deployment service was running as (a shadow account, as the environment was network isolated). I got the command line that the component was trying to run by looking at the messages in the deployment log.


I then went to the deployment folder on the test VM

%appdata%\local\temp\releasemanagement\[the release management component name]\[release number]

and ran the same command line. The strange thing was that this worked! All the tests ran and passed OK, TFS was updated, everything was good.

It seemed I only had an issue when triggering the tests via a Release Management deployment. Very strange!

A side note here: when I say the script ran OK, it did in fact report an error, and did not export and unpack the test results from the TRX file to pass back to the console/Release Management log. It turns out this is because the MTMExec.ps1 script uses [System.IO.File]::Exists(..) to check whether the .TRX file has been produced, and this check fails when the script is run manually. This is because it relies on [Environment]::CurrentDirectory, which is not set the same way when the script is run manually as when it is called by the deployment service. When run manually it seems to default to C:\Windows\System32, not the current folder.

If you are editing this script, and want it to work in both scenarios, it is probably best to use the PowerShell Test-Path(..) cmdlet rather than [System.IO.File]::Exists(..).
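
To illustrate the difference, here is a minimal sketch (the folder and file names are purely illustrative):

Set-Location C:\SomeWorkingFolder

# .NET static methods resolve relative paths against the process working directory,
# which PowerShell does not update when you change location
[Environment]::CurrentDirectory            # may still be C:\Windows\System32
[System.IO.File]::Exists("results.trx")    # checked relative to that folder

# Test-Path resolves relative paths against the current PowerShell location
Test-Path "results.trx"                    # checked relative to C:\SomeWorkingFolder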

So where to look for this problem? The error says something can’t access the drops location, but what?

A bit of thought as to who is doing what can help here.


When the deployment calls for a test to be run:

  • The Release Management deployment agent pulls the component down to the test VM from the Release Management Server
  • It then runs the PowerShell script
  • The PowerShell script runs TCM.exe to trigger the test run, passing in the credentials to access the TFS server and Test Controller (a rough sketch of these TCM calls follows this list)
  • The Test Controller triggers the tests to be run on the Test Agent, providing it with the required DLLs from the TFS drops location – THIS IS THE STEP WHERE THE PROBLEM IS SEEN
  • The Test Agent runs the tests and passes the results back to TFS via the Test Controller
  • After the PowerShell script triggers the test run it loops until the test run is complete
  • It then uses TCM again to extract the test results, which it parses and passes back to the Release Management server
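
For reference, the two TCM.exe calls made by the script look something like the sketch below; all the IDs, URLs and paths are placeholders and the exact switches will depend on your test plan (TCM.exe lives in the Visual Studio Common7\IDE folder).

# Trigger a test run for a given plan/suite/configuration (all values are placeholders)
& tcm.exe run /create /title:"Deployment tests" /planid:1 /suiteid:2 /configid:3 `
    /collection:"http://myserver:8080/tfs/DefaultCollection" /teamproject:"MyProject" `
    /builddir:"\\server\drops\MyBuild\MyBuild_1.0.0.0"

# ...once the run is complete, export the results to a TRX file for parsing
& tcm.exe run /export /id:42 /resultsfile:"results.trx" `
    /collection:"http://myserver:8080/tfs/DefaultCollection" /teamproject:"MyProject"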

So there are a good few places to check the logs.

Turns out the error was being reported on the Test Controller.


(QTController.exe, PID 1208, Thread 14) Could not use lab service account to access the build directory. Failure: Network path does not exist or is not accessible using following user: \\store\drops\Sabs.Main.CI\Sabs.Main.CI_2.3.58.11938\ using blackmarble\tfslab. Error Code: 53

The error told me the folder and who couldn’t access it: the domain service account ‘tfslab’ that the Test Agents use to talk back to the Test Controller.

I checked the drops location share and this user had adequate access rights. I even logged on to the Test Controller as this user and confirmed I could open the share.
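
As an aside, a quick way to sanity-check this kind of thing without a full interactive logon is to start a PowerShell window as the account in question and test the path from there; a rough sketch:

# Open a PowerShell window running as the lab service account and check the drops share
$cred = Get-Credential "blackmarble\tfslab"
Start-Process powershell.exe -Credential $cred -WorkingDirectory C:\ `
    -ArgumentList "-NoExit -Command Test-Path \\store\drops\"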

I then had a thought: this was the account the Test Agents were using to communicate with the Test Controller, but was it the account the controller itself was running as? A check showed it was not; the controller was running as the default ‘Local System’. As soon as I swapped to using the lab service account (or, I suspect, any domain account with suitable rights) it all started to work.


So why did this problem occur?

All I can think of is that (to address another issue with Windows 8.1 Coded UI testing) the Test Controller was upgraded to 2013.2 RC, but the Test Agent in this lab environment was still at 2013 RTM. Maybe the mismatch is the issue?

I may revisit and retest with the ‘Local System’ account when 2013.2 RTMs and I have upgraded all the controllers and agents, but I doubt it, as I have no issue running the Test Controller as a domain account.

Setting the LocalSQLServer connection string in web deploy

If you are using Web Deploy you might wish to alter the connection string for the LocalSQLServer database that is used by the ASP.NET provider for web part personalisation. The default is to use ASPNETDB.mdf in the App_Data folder, but in a production system you could well want to use a ‘real’ SQL Server.

If you look in your web.config, assuming you are not using the default ‘not set’ setting, it will look something like this:

<connectionStrings>
  <clear />
  <add name="LocalSQLServer" connectionString="Data Source=(LocalDB)\projects;Integrated Security=true;AttachDbFileName=|DataDirectory|ASPNETDB.mdf" providerName="System.Data.SqlClient" />
</connectionStrings>

Usually you would expect any connection string in the web.config to appear in the Web Deploy publish wizard, but this one does not. I have no real idea why, but maybe it is something to do with having to use <clear /> to remove the default?


If you use a parameters.xml file to add parameters to the Web Deploy package you would think you could add the following block:

<parameter name="LocalSQLServer" description="Please enter the ASP.NET DB path" defaultValue="__LocalSQLServer__" tags="">
  <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='LocalSQLServer']/@connectionString" />
</parameter>

However, this does not work. In the SetParameters.xml file that is generated you find two entries, first yours and then the auto-generated one, and the last one wins, so you don’t get the correct connection string.

<setParameter name="LocalSQLServer" value="__LocalSQLServer__" />
<setParameter name="LocalSQLServer-Web.config Connection String" value="Data Source=(LocalDB)\projects;Integrated Security=true;AttachDbFileName=|DataDirectory|ASPNETDB.mdf" />

The solution I found was to manually add the parameter in the parameters.xml file as:

<parameter name="LocalSQLServer-Web.config Connection String" description="LocalSQLServer Connection String used in web.config by the application to access the database." defaultValue="__LocalSQLServer__" tags="SqlConnectionString">
  <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='LocalSQLServer']/@connectionString" />
</parameter>

With this form the connection string is correctly modified, as only one entry appears in the generated file.
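
For completeness: the __LocalSQLServer__ token in the generated SetParameters.xml still has to be swapped for the real connection string at deployment time before the package is pushed. A minimal sketch in PowerShell, with purely illustrative file names and values:

# Replace the token in the generated SetParameters.xml with the real connection string
$setParamsFile = ".\MyWebApp.SetParameters.xml"
$connectionString = "Data Source=MySqlServer;Initial Catalog=ASPNETDB;Integrated Security=True"
(Get-Content $setParamsFile) -replace "__LocalSQLServer__", $connectionString | Set-Content $setParamsFile

# The .deploy.cmd generated alongside the package picks up the SetParameters.xml automatically
& .\MyWebApp.deploy.cmd /Y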

Changing WCF bindings for MSDeploy packages when using Release Management

Colin Dembovsky’s excellent post ‘WebDeploy and Release Management – The Proper Way’ explains how to pass parameters from Release Management into MSDeploy to update web.config files. On the system I am working on I also need to do some further web.config transformation; basically the WCF section is different for a Lab or Production build, as it needs to use Kerberos, whereas local debug builds don’t.

In the past I dealt with this, and with editing the AppSettings, using MSDeploy web.config transforms. This worked fine, but it meant I built the product three times, which is exactly what Colin’s post is trying to avoid. The techniques in the post are fine for the AppSettings and connection strings, but don’t apply so well to large block swap-outs, such as I need for the WCF bindings section.

I was considering my options when I realised there was a simple one:

  • My default web.config has the bindings for local operation i.e. no Kerberos
  • The web.debug.config transform hence does nothing
  • Both the web.lab.config and web.release.config transforms swap in the Kerberos bindings

So all I needed to do was build the Release build (as you would for a production release anyway); this will have the correct bindings in the MSDeploy package for both Lab and Release. You can then use Release Management to set the AppSettings and connection strings as required.

Simple, no extra handling required. I had thought myself into a problem I did not really have.

Release Management components fail to deploy with a timeout if a variable is changed from standard to encrypted

I have been using Release Management to update some of our internal deployment processes. This has included changing the way we roll out MSDeploy packages; I am following Colin Dembovsky’s excellent post on the subject.

I hit an interesting issue today. One of the configuration variables I was passing into a component was a password. For my initial tests I had just left this as a clear text ‘standard’ string in Release Management. Once I got this all working I thought I had better switch the variable to ‘encrypted’, so I just changed its type on the Configuration Variables tab.


On doing this I was warned that previous deployments would not be re-deployable, but that was OK for me; it was just a trial system and I would not be going back to older versions.

However, when I tried to run this revised release template, all the steps up to the edited MSDeploy step were fine, but the MSDeploy step never ran; it just timed out. The component was never deployed to the target machine’s %appdata%\local\temp\releasemanagement folder.


In the end, after a few reboots to confirm the comms were OK, I just re-added the component to the release template and entered all the variables again. It then deployed without a problem.

I think this is a case of a misleading error message.