Making the drop location for a TFS build match the assembly version number

A couple of years ago I wrote about using the TFSVersion build activity to try to sync the assembly version and build number. I did not want to see build names/drop locations in the format ‘BuildCustomisation_20110927.17’; I wanted to see the version number in the build name, something like ‘BuildCustomisation 4.5.269.17’. The problem, as I outlined in that post, was that by fiddling with the BuildNumberFormat you could easily generate duplicate drop folder names, causing an error such as

TF42064: The build number ‘BuildCustomisation_20110927.17 (4.5.269.17)’ already exists for build definition ‘\MSF Agile\BuildCustomisation’.

I had put this problem aside, thinking there was no way around the issue, until I was recently reviewing the new ALM Rangers ‘Test Infrastructure Guidance’. This had a solution to the problem included in the first hands-on lab. The trick is that you need to use the TFSVersion community extension twice in your build.

  • You use it as normal to set the version of your assemblies after you have got the files into the build workspace, just as the wiki documentation shows
  • But you also call it in ‘get mode’ at the start of the build process, prior to calling the ‘Update Build Number’ activity. The core issue is that you cannot call ‘Update Build Number’ more than once or you tend to see the TF42064 issue. Used in this manner it sets the BuildNumberFormat to the actual version number you want, which will then be used for the drop folder and any assembly versioning.

So what do you need to do?

  1. Open your process template for editing (see the custom build activities documentation if you don’t know how to do this)
  2. Find the sequence ‘Update Build Number for Triggered Builds’ at the top of the process template

    • Add a TFSVersion activity – I called mine ‘Generate Version number for drop’
    • Add an Assign activity – I called mine ‘Set new BuildNumberFormat’
    • Add a WriteBuildMessage activity – this is optional, but I do like to see what was generated



  3. Add a string variable GeneratedBuildNumber with the scope of ‘Update Build Number for Triggered Builds’


  4. The properties for the TFSVersion activity should be set as shown below

    [screenshot: TFSVersion activity properties]

    • The Action is the key setting; this needs to be set to GetVersion, as we only need to generate a version number, not set any file versions
    • You need to set the Major, Minor and StartDate settings to match the other copy of the activity in your build process. A good tip is to just cut and paste from the other instance to create this one, so that the bulk of the properties are correct
    • The Version needs to be set to your variable GeneratedBuildNumber; this is the outputted version value

  5. The properties for the Assign activity are as follows

    [screenshot: Assign activity properties]

    • Set To to BuildNumberFormat
    • Set Value to String.Format("$(BuildDefinitionName) {0}", GeneratedBuildNumber); you can vary this format to meet your own needs

  6. I also added a WriteBuildMessage activity that outputs the generated build value, but that is optional (a sketch of the finished arrangement follows)
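Putting the steps above together, the new activities at the top of the sequence look roughly like this in the workflow XAML. This is a minimal sketch only: the namespace prefixes, the StartDate value and the exact attribute layout are assumptions that will vary with your template and the version of the community extensions installed.

<!-- Sketch: generate a version number, then use it as the build number format.
     Property names (Action, Major, Minor, StartDate, Version) follow the
     TFSVersion activity's wiki documentation; the prefixes here are assumed. -->
<tac:TfsVersion Action="GetVersion" Major="4" Minor="5" StartDate="01/01/2012"
                Version="[GeneratedBuildNumber]" />
<Assign To="[BuildNumberFormat]"
        Value="[String.Format(&quot;$(BuildDefinitionName) {0}&quot;, GeneratedBuildNumber)]" />
<mtbwa:WriteBuildMessage Message="[&quot;Generated build number: &quot; &amp; GeneratedBuildNumber]" />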


Once all this was done and saved back to TFS, it could be used for a build. You now see that the build name and drop location are in the form


[Build name] [Major].[Minor].[Days since start date].[TFS build number]




This is a slight change from what I previously attempted, where the fourth block was the count of builds of a given type on a day; now it is the unique TFS-generated build number, the number assigned before the new build name is generated. I am happy with that. My key aim is achieved: the drop location contains the product version number, so it is easy to relate a build to a given version without digging into the build reports.

I can never remember the command line to add users to the TFS Service Accounts group

I keep forgetting that when you use the TFS Integration Platform, the user the tool runs as (or the service account, if it is running as a service) has to be in the “Team Foundation Service Accounts” group on the TFS servers involved. If they are not, you get a runtime conflict something like

Microsoft.TeamFoundation.Migration.Tfs2010WitAdapter.PermissionException: TFS WIT bypass-rule submission is enabled. However, the migration service account ‘Richard Fennell’ is not in the Service Accounts Group on server ‘http://tfsserver:8080/tfs’.

The easiest way to do this is to use the TFSSecurity command-line tool on the TFS server. You will find some older blog posts about making the user a TFS admin console user to get the same effect, but this only seems to work on TFS 2010. This command is good for all versions:

C:\Program Files\Microsoft Team Foundation Server 12.0\tools> .\TFSSecurity.exe /g+ "Team Foundation Service Accounts" n:mydomain\richard /server:http://localhost:8080/tfs

and expect to see

Microsoft (R) TFSSecurity - Team Foundation Server Security Tool
Copyright (c) Microsoft Corporation.  All rights reserved.

The target Team Foundation Server is http://localhost:8080/tfs.
Resolving identity "Team Foundation Service Accounts"…
s [A] [TEAM FOUNDATION]\Team Foundation Service Accounts
Resolving identity "n:mydomain\richard"…
  [U] mydomain\Richard
Adding Richard to [TEAM FOUNDATION]\Team Foundation Service Accounts…
Verifying…

SID: S-1-9-1551374245-1204400969-2333986413-2179408616-0-0-0-0-2

DN:

Identity type: Team Foundation Server application group
   Group type: ServiceApplicationGroup
Project scope: Server scope
Display name: [TEAM FOUNDATION]\Team Foundation Service Accounts
  Description: Members of this group have service-level permissions for the Team Foundation Application Instance. For service accounts only.

1 member(s):
  [U] mydomain\Richard

Member of 2 group(s):
e [A] [TEAM FOUNDATION]\Team Foundation Valid Users
s [A] [DefaultCollection]\Project Collection Service Accounts

Done.

Once this is done and the integration platform run is restarted, all should be OK.
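As a side note, TFSSecurity can also confirm the membership afterwards; the /imx option lists a group's expanded membership:

rem optional check: list the group's members to confirm the user was added
.\TFSSecurity.exe /imx "Team Foundation Service Accounts" /server:http://localhost:8080/tfs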

An attempted return for ‘Brian the build bunny’

Background

Back in 2008 Martin Woodward did a post on using a Nabaztag as a build monitor for TFS, ‘Brian the build bunny’. I did a bit more work on this idea and wired it into our internal build monitoring system. We ended up with a system where a build definition could be tagged so that its success or failure caused the Nabaztag to say a message.


This all worked well until the company that made Nabaztag went out of business; the problem was that all communication with your rabbit was via their web servers. At the time we did nothing about this, and just stopped using this feature of our build monitors.

Getting it going again

When the company that made Nabaztag went out of business a few replacements for their servers appeared. I chose to look at the PHP-based one, OpenNab, my longer-term plan being to use a Raspberry Pi as a ‘backpack’ server for the Nabaztag.

Setting up your Apache/PHP server

I decided to start with an Ubuntu 12.04 LTS VM to check out the PHP-based server; it was easier to fiddle with whilst travelling, as I did not want to carry around all the hardware.

First I installed Apache 2 and PHP 5, using the commands

sudo apt-get install apache2
sudo apt-get install php5
sudo apt-get install libapache2-mod-php5
sudo /etc/init.d/apache2 restart

I then downloaded the OpenNab files and unzipped them into /var/www/vl.

Next I started to work through the instructions at http://localhost/vl/check_install.html. I instantly hit problems.

The first test checks that if you ask for a page that does not exist (a 404 error) you are redirected to the bc.php page. This is needed because the Nabaztag makes a call to bc.jsp; this cannot be altered, so we need to redirect the call. This is meant to be handled by a .htaccess file in the /var/www/vl folder that contains

ErrorDocument 404 /vl/bc.php

I could not get this to work. In the end I edited the Apache /etc/apache2/httpd.conf and put the same text in this file. I am no expert on Apache, but the notes I read seemed to imply that httpd.conf was being favoured over .htaccess, so it might be a version issue.
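For what it's worth, a common reason for .htaccess files being ignored is that the directory has AllowOverride set to None in the main configuration. A minimal sketch of the server-config alternative, assuming OpenNab is unpacked under /var/www/vl:

# Allow .htaccess overrides for the OpenNab folder, or just keep the
# ErrorDocument line here in the server config as described above
<Directory /var/www/vl>
    AllowOverride All
    ErrorDocument 404 /vl/bc.php
</Directory>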

Once this change was made I got the expected redirections: asking for an invalid folder or page caused the bc.php file to be loaded, showing a special 404 message. Watch out for this, as the text in the message is important; I had thought mine was working before because I saw a 404, but it was Apache reporting the error, not the bc.php page.

Next I went to http://localhost/vl/tests to run all the PHP tests. Most passed, but I did see a couple of failures and loads of exceptions. The fixes were:

  • Failure of ‘testFileGetContents’ – this is down to whether Apache returns compressed content or not. You need to disable this feature by running the command

sudo a2dismod deflate

  • All the exceptions are because deprecated calls are being made (OpenNab is a few years old). I edited the /etc/php5/apache2/php.ini file and set the error reporting to not show deprecation warnings. Once this was done the PHP tests all passed

error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED

Next I could try a call to a dummy address; I saw files appear in the ‘burrows’ folder and got a gibberish message returned. This proved the redirect worked and I had all the tools wired up.

Note: Some people have had permission problems; you might need to grant write permissions on the folder, as temporary files are created, but this was not something I needed to alter.

It is a good idea to make sure you have no firewall issues by accessing the test pages from another PC/VM.
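If the Ubuntu firewall (ufw) is enabled on the VM, opening HTTP is all these tests need; a minimal sketch:

# allow inbound HTTP so the OpenNab test pages are reachable from other machines
sudo ufw allow 80/tcp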

Getting the Rabbit on the LAN

Note: I needed to know the IP address of my PHP server. Usually you would use DNS and maybe DHCP leases to manage this; for my test I just hard-coded it. The main reason was that Ubuntu could not use WiFi-based DHCP on Hyper-V.

The first thing to note is that the Nabaztag Version 2 I am using does not support WPA2 for WiFi security. It only supports WPA, so I had to dig out a base station for it to use, as I did not want to downgrade my WiFi security.

Note: The Nabaztag Version 1 only does WEP; if you have one of them you need to set your security appropriately.

To set up the Nabaztag:

  • Hold down the button on the top and switch it on; the nose should go purple
  • On a PC look for new WiFi base stations and connect to the one with a name like Nabaztag1D
  • In a browser connect to 192.168.0.1 and set the Nabaztag to connect to your WiFi base station


  • In the advanced options, you also need to set the IP address or DNS name of your new OpenNab server


  • When you save, the unit should reboot
  • Look to see that a new entry appears in the /vl/burrows folder on your OpenNab server

So at this point I thought it was working, but the Nabaztag kept rebooting; I saw three tummy LEDs go green, but the nose flashed orange/green and then the unit rebooted.

After much fiddling I think I worked out the problem. The OpenNab software is a proxy: it still, by default, calls the old Nabaztag site. Part of the boot process is to pull a bootcode.bin file down from the server to allow the unit to boot. This was failing.

To fix this I did the following

  • Edited the /vl/opennab.ini file (the resulting settings are summarised in the sketch after this list)
    • Set LogLevel = 4 so I got as much logging as possible in the /vl/logs folder
    • Set ServerMode = standalone so that it does not try to talk to the original Nabaztag site
    • I saw the entry BootCode = /vl/plugin/saveboot/files/bootcode.bin, pointing at a file I did not have. The only place I could find a copy was on the volk Nabaztag tools site
  • Once all these changes were made my Nabaztag booted OK: I got four green LEDs and the ears rotated
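For reference, the relevant /vl/opennab.ini settings after these edits look like this (a sketch; other entries in the file are left at their defaults):

; verbose logging to /vl/logs, no calls back to the original Violet servers
LogLevel = 4
ServerMode = standalone
; must point at a bootcode.bin file that actually exists
BootCode = /vl/plugin/saveboot/files/bootcode.bin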

 

When you power up the Nabaztag, it runs through a start-up sequence with orange and green lights. Use this to check where there’s a problem:

  • First belly light is the connection to your network – green is good
  • Second belly light is that the bunny has got an IP address on your network – green is good
  • Third belly light means that the bunny can resolve the server web address – green is good
  • The nose light confirms whether the server is responding to the rabbit’s requests – green is good

A pulsing purple light underneath the rabbit means that the Nabaztag is connected and working OK.

Sending Messages to the Rabbit on the LAN

Now I could try sending messages via the API demo pages. The messages seemed to be sent OK, but nothing happened on the rabbit. I was unsure if it had booted OK or even if the bootcode.bin file was correct.
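For anyone trying the same, the demo pages boil down to simple HTTP GET calls against the API page. A hypothetical example is below; the parameter names follow the original Violet API, and the serial number and token values are placeholders.

# hypothetical ear-movement command sent to the local OpenNab server;
# sn and token are placeholders for your rabbit's serial number and API token
curl "http://localhost/vl/api.jsp?sn=0013D3000000&token=1234567890&posleft=5&posright=5"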

At this point I got to thinking: the main reason I wanted this working again was the text-to-speech (TTS) system. This is not part of the OpenNab server; this function is passed off to the original Nabaztag service. So was all this work going to get me what I wanted?

I was at the point where I had learnt loads about getting Apache, PHP and OpenNab going, but frankly I was no nearer to what I was after.

A Practical Solution

At this point I went back to look at the other replacement servers. I decided to give Nabaztaglives.com a go, and it just worked. Just follow their setup page. They provide TTS using the Google API, which is just what I needed.

OK, it is not a Raspberry Pi backpack or a completely standalone solution, but I do have the option to use the Nabaztag in the same manner as I used to, as a means to signal build problems.

Adding another VM to a running Lab Management environment

If you are using a network-isolated environment in TFS Lab Management there is no way to add another VM unless you rebuild and redeploy the environment. However, if you are not network isolated you can at least avoid the redeployment issues to a degree.

I had an SCVMM-based environment that was not network isolated and contained a single non-domain-joined server. This was used to host a back-end simulation service for a project. In the next phase of the project we needed to test accessing this service via RDP/Terminal Services, so I wanted to add a VM to the environment to act in this role.

So first I deleted the environment in MTM; as the VMs in the environment are not network isolated, they are not removed. The only change is that the XML metadata is removed from the VMs' properties description.

I now needed to create my new VM. I had thought I could create a new environment adding the existing deployed and running VM as well as a new one from the SCVMM library. However, you get the error ‘cannot create an environment consisting of both running and stored VMs’.


So here you have two options.

  1. Store the running VM in the library and redeploy
  2. Deploy out, via SCVMM, a new VM from some template or stored VM

Once this is done you can create the new environment using the running VMs or stored images depending on the option chosen in the previous step.

So there is no huge saving in time or effort. I just wish there was a way to edit deployed environments.

Experiences with a Kindle Paperwhite

I wrote a post a while ago about ‘should I buy a Kindle’. Well, I put it off for over a year, using the Kindle app on my WP7 phone, reading the best part of 50 books and being happy enough without buying an actual Kindle. The key issue was poor battery life, but that’s phones for you.

However, I have eventually got around to getting a Kindle device. The key was that I had been waiting for something that used touch, had no keyboard, but most importantly worked in the dark without an external light. This is because I found one of the most useful features of the phone app was reading in bed without the need for a light.

This is basically the spec of the Kindle Paperwhite, so I had no excuse to delay any longer.

[photo: Kindle Paperwhite e-reader]

 

This week was my first trip away with it and it was interesting to see my usage pattern. On the train and in the hotel I used the Kindle, but standing at the railway station or generally waiting around I still pulled out my phone to read. This meant I did have to put my phone into WiFi hotspot mode so the Kindle could sync up my last read point via Whispersync when I wanted to switch back to the Kindle. This was because I had not bought the 3G version of the Paperwhite, and I still don’t think I would bother to get one, as firing up a hotspot is easy if I am on the road and the Kindle uses my home and work WiFi most of the time.

So I have had it for a few weeks now and must say I am very happy with it; I can heartily recommend it. I still have reservations over having to carry another device, but it is so much more pleasant to read on the Kindle screen. So most of the time it is worth carrying, and when it is not I just use my phone.

Minor issue on TFS 2012.3 upgrade if you are using host headers in bindings

Yesterday I upgraded our production TFS 2012.2 server to Update 3. All seemed to go OK and it completed with no errors. It was so much easier now that the update supports the use of SQL 2012 Availability Groups within the update process: there is no need to remove the DBs from the availability group prior to the update.

However, though there were no errors it did report a warning, and on a quick check users could not connect to the upgraded server on our usual HTTPS URL.

On checking the update log I saw

[Warning@09:06:13.578] TF401145: The Team Foundation Server web application was previously configured with one or more bindings that have ports that are currently unavailable.  See the log for detailed information.
[Info   @09:06:13.578]
[Info   @09:06:13.578] +-+-+-+-+-| The following previously configured ports are not currently available… |+-+-+-+-+-
[Info   @09:06:13.584]
[Info   @09:06:13.584] 1          – Protocol          : https
[Info   @09:06:13.584]            – Host              : tfs.blackmarble.co.uk
[Info   @09:06:13.584]            – Port              : 443
[Info   @09:06:13.584] port: 443
[Info   @09:06:13.585] authMode: Windows
[Info   @09:06:13.585] authenticationProvider: Ntlm

The issue appears if you use host headers, as we do for our HTTPS bindings. The TFS configuration tool does not understand these, so it sees more than one binding, in our case on 443 (our TFS server VM also hosts a NuGet server on HTTPS 443; we use host headers to separate the traffic). As the tool does not know what to do with host headers, it just deletes the bindings it does not understand.

Anyway, the fix was to manually reconfigure the HTTPS bindings in IIS, and all was OK.
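If you would rather script the repair than click through IIS Manager, appcmd can re-add a host-header binding; a sketch, where the site name and host name are assumptions based on our setup:

rem re-add the HTTPS binding, including the host header the upgrade removed
%windir%\system32\inetsrv\appcmd.exe set site /site.name:"Team Foundation Server" /+bindings.[protocol='https',bindingInformation='*:443:tfs.blackmarble.co.uk']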

On checking with Microsoft it seems this is a known issue, and on their radar to sort out in the future.

Setting SkyDrive as a trusted location in Office 2013

We use a VSTO-based Word template to make sure all our documents have the same styling and are suitably reformatted for shipping to clients, e.g. revision comments removed, contents pages up to date, etc. Normally we will create a new document using this template from our SharePoint server and all is OK. However, sometimes you are on the road when you start a document, so you just create it locally using a locally installed copy of the template. In the past this has not caused me problems; I have my local ‘My Documents’ set in Word as a trusted location and it just works fine.

However, of late, due to some SSD problems, I have taken to using the SkyDrive desktop application. So I now save to C:\Users\[username]\SkyDrive and this syncs up to my SkyDrive space whenever it gets a chance. It has certainly saved me a few times already (see older posts on my SSD failure adventures).

However, the problem is that I can create a new document OK, VSTO runs, and I save it to my local SkyDrive folder, but when I come back to open it for editing I get the error

[screenshot of the error dialog]

The problem is that my SkyDrive folder is not in my trusted locations list (Word > Options > Trust Center > Trust Center Settings (button lower right) > Trusted Locations)


So I tried adding C:\Users\[username]\SkyDrive – it did not work.

I then noticed that when I load or save from SkyDrive, the dialogs say it is copying to ‘https://d.docs.live.net/[a unique id]’. So I entered https://d.docs.live.net (and its sub-folders) as a trusted location and it worked.

Now I don’t really want to trust the whole of SkyDrive, so I needed to find my SkyDrive ID. I am sure there is an easy way to do this, but I don’t know it.

The solution I used was to

  1. Go to the browser version of SkyDrive
  2. Pick a file
  3. Use the menu option ‘Embed’ to generate the HTML to embed the file
  4. From this URL, extract the CID
  5. Add this to the base URL so you get https://d.docs.live.net/12345678/
  6. Add this new URL to the trusted locations (with sub-folders) and the VSTO application works

Simple, wasn’t it?
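As a footnote, if you want to script the change rather than clicking through the Trust Center, Word 2013 stores trusted locations in the registry. A hedged sketch, where the ‘LocationSkyDrive’ key name and the CID are placeholders:

rem add the SkyDrive URL as a Word trusted location, including sub-folders
reg add "HKCU\Software\Microsoft\Office\15.0\Word\Security\Trusted Locations\LocationSkyDrive" /v Path /t REG_SZ /d "https://d.docs.live.net/12345678/"
reg add "HKCU\Software\Microsoft\Office\15.0\Word\Security\Trusted Locations\LocationSkyDrive" /v AllowSubFolders /t REG_DWORD /d 1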