All posts by rfennell

Setting a build version in a JAR file from TFS build

Whilst helping a Java-based team (part of a larger organisation that used many sets of both Microsoft and non-Microsoft tools) to migrate from Subversion to TFS, I had to tackle their Jenkins/Ant based builds.

They could have stayed on Jenkins and switched to the TFS source provider, but they wanted to at least look at how TFS build would better allow them to trace their builds against TFS work items.

All went well; we set up a build controller and agent specifically for their team and installed Java onto it, as well as the TFS build extensions. We were very quickly able to get our test Java project building on the new build system.

One feature of their old Ant scripts was to store the build name/number in the manifest of any JAR files created; a good plan, as it is always good to know where something came from.

When asked how to do this with TFS build I thought ‘no problem, I will just use a TFS build environment variable’ and added something like the following

<property environment="env"/>

<target name="jar">
    <jar destfile="${basedir}/javasample.jar" basedir="${basedir}/bin">
        <manifest>
            <attribute name="Implementation-Version" value="${env.TF_BUILD_BUILDNUMBER}" />
        </manifest>
    </jar>
</target>


But this did not work; I just saw the literal text ${env.TF_BUILD_BUILDNUMBER} in my manifest. Basically, the environment variable could not be resolved.


After a bit more of a think I realised the problem: the Ant/Maven build extensions for TFS are based on TFS 2008 style builds, and the build environment variables are a TFS 2012 and later feature, so of course they are not set.


A quick look in the TFSBuild.proj file automatically generated for the build showed that the MSBuild $(BuildNumber) value was passed into the Ant script as a property, so it could be referenced in the Ant jar target (note the brackets change from () to {})

<target name="jar">
    <jar destfile="${basedir}/javasample.jar" basedir="${basedir}/bin">
        <manifest>
            <attribute name="Implementation-Version" value="${BuildNumber}" />
        </manifest>
    </jar>
</target>
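
As a side note, you can test this target locally, without a TFS build, by passing the property on the Ant command line with the standard -D switch (the value here is just the sample build number shown below):

ant jar -DBuildNumber=JavaSample.Ant.Manual_20141216.7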

Once this change was made I got the manifest I expected, including the build number

Manifest-Version: 1.0
Ant-Version: Apache Ant 1.9.4
Created-By: 1.8.0_25-b18 (Oracle Corporation)
Implementation-Version: JavaSample.Ant.Manual_20141216.7


Great book full of easily accessible tips to apply the concept of user stories to your team

As with many concepts it is not the idea that is hard but its application. ‘Fifty Quick Ideas to Improve Your User Stories’ by Gojko Adzic and David Evans provides some great tips for applying the concept of user stories to real world problems, highlighting where they work and where they don’t, and what you can do about it.

I think this book is well worth a read for anyone, irrespective of their role in a team; its short chapters (usually a couple of pages per idea) mean it is easy to pick up and put down when you get a few minutes. Perfect for that commute.



Can’t build SSDT projects in a TFS build

Whilst building a new TFS build agent VM using our standard scripts I hit a problem that SSDT projects would not build, though they were fine on our existing agents. The error was

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets (513): The “SqlBuildTask” task failed unexpectedly.
System.MethodAccessException: Attempt by method ‘Microsoft.Data.Tools.Schema.Sql.Build.SqlTaskHost.OnCreateCustomSchemaData(System.String, System.Collections.Generic.Dictionary`2<System.String,System.String>)’ to access method ‘Microsoft.Data.Tools.Components.Diagnostics.SqlTracer.ShouldTrace(System.Diagnostics.TraceEventType)’ failed.

The problem was fixed by doing an update via Visual Studio > Tools > Extensions and Updates. Once this was completed the build was fine.

It seems there may have been an issue with the Update 3 generation of the SSDT tools; older and newer versions seem OK. Our existing agents had already been patched.



Living with a DD-WRT virtual router – one month on

I posted a month or so ago about my ‘Experiences using a DD-WRT router with Hyper-V’. Well, I have now been living with it for over a month; how has it been going?

Like the curate’s egg, it has been ‘good in parts’. It would seem OK for a while, and then everything would get a bit slow, or stop altogether.

Just as a reminder, this is what I had ended up with

[Image: diagram of the virtual switches and DD-WRT router setup]

In essence, a pair of virtual switches: one internal, using DHCP served by the DD-WRT virtual router, and a second one connected to an active external network (usually Ethernet, as DHCP with virtual switches and WiFi in Hyper-V seems a very hit-and-miss affair).
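
For reference, this sort of setup can be scripted with the Hyper-V PowerShell module; a minimal sketch, with illustrative switch and adaptor names (not the ones from my actual setup):

# internal switch that the DD-WRT VM serves DHCP on
New-VMSwitch -Name "Internal" -SwitchType Internal

# external switch bound to the physical Ethernet adaptor
New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true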

From my Hyper-V VMs the virtual router seems to be fine; they all have a single network adaptor linked to the virtual switch that issues IP addresses via DHCP. The issues have been for the host operating system. I wanted to connect this to the internal virtual switch to allow easy access to my VMs (without the management complexity of punching holes in the router firewall), but when I did this I got inconsistent performance. This was made harder to diagnose by moving house from a fast Virgin cable based Internet connection to a slow BT ADSL based link whose performance profile varies greatly with the hour of the day; I was never sure if a problem was with my router or BT’s service.

The main problem I saw was that the first time I accessed a site it was slow, but after that it was often OK. So a lookup issue; DNS?

Reaching back into my distant memory as a network engineer (early 90s, some IP but mostly IPX and NetBIOS) I suspected a routing or DNS lookup issue. Routing you can do something about via routing tables and metrics, but DNS is harder to control with multiple network connections.

The best option to manage DNS appeared to be changing the binding order of my various physical and virtual network adaptors so the virtual switches were the lowest priority.

[Image: adaptor binding order dialog]

This at least made most DNS requests go via physical devices.

Note: I also told the host’s Virtual Network Switch adaptor not to use the DNS settings provided by the virtual router, but this seemed to have little effect; when using nslookup it still picked the virtual router, until I changed the binding order.
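
An easy way to check which DNS server is actually being consulted is nslookup; the Server/Address lines at the top of its output show where the query went (the site name here is just an example):

nslookup www.bbc.co.uk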

On the routing front, I set a manual metric on IPv4 traffic via the virtual router adaptor to a large number, to make it the least likely route anywhere. Doing this should mean only traffic to the internal 192.168.1.x network uses that adaptor.

[Image: advanced TCP/IP settings showing the manual interface metric]
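
The same metric change can be made from PowerShell rather than the adaptor properties dialog; a sketch, assuming the virtual switch adaptor is called “vEthernet (Internal)” (yours will differ):

# list the current IPv4 interface metrics to identify the right adaptor
Get-NetIPInterface -AddressFamily IPv4 | Sort-Object InterfaceMetric

# give the virtual router adaptor a deliberately high (bad) metric
Set-NetIPInterface -InterfaceAlias "vEthernet (Internal)" -AddressFamily IPv4 -InterfaceMetric 5000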

This meant the routing table on my host operating system looked as follows when the system was working OK

[Image: host routing table output]

Outstanding Issues

Routing

I did see some problems if the route via the virtual switch appeared first in the list; this can happen when you change WiFi hotspot. The fix is to delete the unwanted default route (0.0.0.0 via 192.168.1.1)

route delete 0.0.0.0 MASK 0.0.0.0 192.168.1.1

But most of the time fixing the binding order seemed enough, so I did not need to do this.
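
To check whether the stray default route is present before deleting it, list the IPv4 routing table:

route print -4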

External DHCP Refresh

If you swap networks, going from work to home, your external network will have a different IP address. You do have to restart the router VM (or manually renew DHCP) to get a new address.
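
Restarting the router VM is quick from PowerShell on the host; a one-liner, assuming the VM is named “DD-WRT” (the name is illustrative):

Restart-VM -Name "DD-WRT" -Force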

DHCP and WIFI

There is still the problem of getting DHCP working over Hyper-V virtual switches. You can do some tricks with bridging, but it is not great.

The solution I have used is Hyper-V checkpoints on my router VM: one set for DHCP and another with the static IP settings for my home network. Again not great, but workable for me most of the time. I am happier editing the router VM than many guest VMs.
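
The checkpoints themselves can also be managed from PowerShell; a minimal sketch, with illustrative VM and checkpoint names:

# take a checkpoint while the router is configured for DHCP
Checkpoint-VM -Name "DD-WRT" -SnapshotName "DHCP config"

# later, flip the router back to that configuration
Restore-VMSnapshot -VMName "DD-WRT" -Name "DHCP config" -Confirm:$false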



Why am I getting ‘cannot access outlook.ost’ issues with Office 365 Lync?

We use O365 to provide Lync messaging, so when I rebuilt my PC I thought I needed to re-install the client; I logged into the O365 web site and selected the install option. Turns out this was a mistake. I had Office 2013 installed, so I already had the client; I just had not noticed.

If you do install the O365 Lync client (as well as the Office 2013 one) you get file access errors reported against your outlook.ost files. If this occurs, just un-install the O365 client and use the one in Office 2013; the errors go away.



TFS announcements roundup

There have been a load of announcements about TFS, VSO and Visual Studio in general in the past couple of weeks, mostly at the Connect() event.

Just to touch on a few items

If you have not had a chance to look at these features, try the videos of all the sessions on Channel9; the keynotes are a good place to start. Also look, as usual, at the various posts on Brian Harry’s blog. It is a time of rapid change in ALM tooling.



Errors running tests via TCM as part of a Release Management pipeline

Whilst getting integration tests running as part of a Release Management pipeline within Lab Management I hit a problem: TCM-triggered tests failed as the tool claimed it could not access the TFS build drops location, and no .TRX (test results) files were being produced. This was strange as it used to work (the RM system had worked when on 2013.2; the issue seems to have started with 2013.3 and 2013.4, but this might be a coincidence).

The issue was twofold.

Permissions/Path Problems accessing the build drops location

The build drops location is passed into the component using the argument $(PackageLocation). This is pulled from the component properties; it is the TFS provided build drop with a \ appended on the end.

[Image: component properties showing the build drop path]

Note that the \ in the text box is there because the textbox cannot be empty; it tells the component to use the root of the drops location. This is the issue: when you are in a network isolated environment and have had to use NET USE to authenticate with the TFS drops share, the trailing \ causes a permissions error (it might occur in other scenarios too, I have not tested it).

Removing the slash, or adding a . (period) after the \, fixes the path issue, so…

  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779 – works
  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779\ – fails
  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779\. – works

So the answer is either to add a . (period) in the pipeline workflow component, so the build location is $(PackageLocation)\. as opposed to $(PackageLocation)\, or to edit the PS1 file that is run so its validation strips out any trailing \ characters. I chose the latter, making the edit

if ([string]::IsNullOrEmpty($BuildDirectory))
    {
        $buildDirectoryParameter = [string]::Empty
    } else
    {
        # make sure we remove any trailing slashes as they cause permission issues
        $BuildDirectory = $BuildDirectory.Trim()
        while ($BuildDirectory.EndsWith("\"))
        {
            $BuildDirectory = $BuildDirectory.Substring(0,$BuildDirectory.Length-1)
        }
        $buildDirectoryParameter = "/builddir:""$BuildDirectory"""
    }
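
As an aside, the whole while loop could be replaced with PowerShell’s built-in string trimming, assuming you only ever need to remove trailing backslashes:

$BuildDirectory = $BuildDirectory.Trim().TrimEnd('\')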
   

Cannot find the TRX file even though it is present


Once the tests were running I still had an issue: even though TCM had run the tests, produced a .TRX file and published its contents back to TFS, the script claimed the file did not exist and so could not pass the test results back to Release Management.


The issue was the call being used to check for the file’s existence.


[System.IO.File]::Exists($testRunResultsTrxFileName)


As soon as I swapped to the recommended PowerShell way to check for files


Test-Path($testRunResultsTrxFileName)


it all worked.
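
My assumption as to why: [System.IO.File]::Exists resolves relative paths against the process working directory, which PowerShell does not update when you Set-Location, whereas Test-Path resolves against the current PowerShell location. You can see the difference with something like:

Set-Location C:\Windows
[System.IO.File]::Exists("explorer.exe")   # can return False; .NET still uses the old working directory
Test-Path "explorer.exe"                   # True; PowerShell resolves relative to C:\Windows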



Linking VSO to your Azure Subscription and Azure Active Directory

I have a few old Visual Studio Online (VSO) accounts (dating back to TFSPreview.com days). We use them to collaborate with third parties, and it was long overdue that I tidied them up. A problem historically has been that all access to VSO used Microsoft Accounts (LiveID, MSA), and these are hard to police, especially if users mix personal and business ones.

The solution is to link your VSO instance to an Azure Active Directory (AAD). This means that only users listed in the AAD can connect to the VSO instance. As this AAD can be federated to an on-prem company AD, the VSO users can be either

  • Company domain users
  • MSA accounts specifically added to AAD

Either way it gives the AAD administrator an easy way to manage access to VSO. A user with an MSA, even if an administrator in VSO, cannot add any unknown users to VSO. For details see MSDN. All straightforward you would think, but I had a few issues.

The problem was I had set up my VSO accounts using an MSA in the form user@mycompany.co.uk, which was also linked to my MSDN subscription. As part of the VSO/AAD linking process I needed to add the MSA user@mycompany.co.uk to our AAD, but I could not. The AAD was set up for federation of accounts in the mycompany.com domain, so you would have thought I would be OK, but back in our on-prem AD (the one it was federated to) I had user@mycompany.co.uk as an email alias for user@mycompany.com. This blocked the adding of the user to AAD, and hence I could not link VSO to Azure.

The answer was to

  1. Add another MSA account to the VSO instance, one unknown to our AD even as an alias, e.g. user@live.co.uk
  2. Make this user the owner of the VSO instance.
  3. Add the user@live.co.uk MSA to the AAD directory.
  4. Make them an Azure Subscription administrator.
  5. Log in to the Azure portal as this MSA; once this was done the VSO instance could be linked to the AAD directory.
  6. I could then make an AAD user (user@mycompany.com) a VSO user and then the VSO owner.
  7. The user@live.co.uk MSA could then be deleted from VSO and AAD.
  8. I could then log in to VSO with my user@mycompany.com AAD account, as opposed to the old user@mycompany.co.uk MSA account.

Simple wasn’t it!

We still had one problem, and that was that user@mycompany.com was showing as a basic user in VSO; if you tried to set it to MSDN eligible it flipped back to basic.

The problem here was that we had not associated the AAD account user@mycompany.com with the MSA account user@mycompany.co.uk in the MSDN portal (see MSDN).

Once this was done it all worked as expected, VSO picking up that my AAD account had a full MSDN subscription.



Video card issues during install of Windows 8.1 cause very strange problems

Whilst repaving my Lenovo W520 I had some issues with video cards. During the initial setup of Windows the PC hung, so I rebooted, re-enabled the problematic video card in the BIOS, and thought all was OK; the installation appeared to pick up where it left off. However, I started to get some very strange problems.

  • My LiveID settings did not sync from my other Windows 8.1 devices
  • I could not change my profile picture
  • I could not change my desktop background
  • I could not change my screen saver
  • And most importantly Windows Update would not run

I found a few posts saying all of these problems could be seen when Windows was not activated, but that was not the issue for me: mine showed as being activated, and changing the product key had no effect.

In the end I re-paved my PC again, making sure my video cards were correctly enabled so there was no hanging, and this time I seem to have a good Windows installation.



Issues repaving the Lenovo W520 with Windows 8.1 – again

Every few months I find a PC needs to be re-paved; just too much beta code has accumulated. I reached this point again on my main four year old Lenovo W520 recently. Yes, it is getting on a bit in computer years, but it does the job; the keyboard is far nicer than the W530s or W540s we have, and until an ultrabook ships with 16GB of memory (I need local VMs; too many places I go to don’t allow me to get to VMs on Azure) I am keeping it.

I have posted in the past about the issue with the W520 (or any laptop that uses the Nvidia Optimus system); well, that struck again, with a slight twist to confuse me.

Our IT team have moved to System Center to give a self-provisioning system to our staff, so I…

  • Connected my PC (that had Windows 8.1 on it) to the LAN with Ethernet
  • Booted using PXE boot (pressed the blue ThinkVantage button, then F12 to pick the boot device)
  • As the PC was registered with our System Center it found a boot image, reformatted my disk and loaded our standard Windows 8.1 image
  • It rebooted and then it hung….

It was the old video card issue. The W520 has an Intel GPU on the i7 CPU and also a separate Nvidia Quadro GPU. Previously I had the Intel GPU disabled in the BIOS, as I have found that having both enabled makes it very hard to connect to a projector when presenting (though remember you do need both GPUs enabled if you wish to use two external monitors and the laptop display, but I don’t do this). However, you do need the Intel GPU to install Windows. The problem is Windows setup gets confused if it just sees the Nvidia for some reason; you would expect it to treat it as basic VGA until it gets drivers, but it just locks.

  • So I rebooted the PC, enabled the Intel GPU in the BIOS (leaving the Nvidia enabled too) and Windows setup picked up where it left off, and I thought I had rebuilt my PC.

Even with the problems, this was a very quick way to get a domain joined PC. I then started to install the applications using a mixture of System Center Software Center and Chocolatey.

However, I knew I would hit the same problem with projectors, so I went back into the BIOS and disabled the Intel GPU. The PC booted fine, worked for a minute or two, then hung. This was strange, as this same configuration had been working with Windows 8.1 before the re-format!

So I re-enabled the Intel GPU, and all seemed OK until I tried to use Visual Studio 2013. This loaded OK, but crashed within a few seconds. The error log showed

Faulting application name: devenv.exe, version: 12.0.30723.0, time stamp: 0x53cf6f00
Faulting module name: igdumd32.dll_unloaded, version: 9.17.10.3517, time stamp: 0x532b0b5b

The igdumd32.dll is an Intel driver, so I disabled the Intel adaptor, this time via Admin Tools > Computer Manager > Device Manager. Visual Studio now loaded OK. I found I could re-enable the Intel GPU after Visual Studio had loaded, without issue; so the problem was something to do with the extended load process.


So I had a usable system, but still had problems when using a projector.


The solution in the end was simple – remove the Intel Drivers


  • In Admin Tools > Computer Manager > Device Manager delete the Intel GPU, selecting the option to delete the drivers too (a command line alternative is sketched after this list)
  • Reboot the PC and in the BIOS disable the integrated Intel GPU
  • When the PC reboots it will just use the Nvidia GPU
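
If you prefer the command line, the driver package can also be removed from the driver store with pnputil; a sketch, where oem12.inf is an illustrative name you would need to look up in the enumeration output first:

pnputil -e                  # enumerate third-party driver packages to find the Intel one
pnputil -f -d oem12.inf     # force delete that package (oem12.inf is illustrative)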

The key here is to delete the Intel drivers; the basic fact of their presence, whether running or not, causes the problems, either in the operating system or in Visual Studio depending on your BIOS settings.

