‘Windows Phone 8.1 Update’ update

I have been running Windows Phone 8.1 Update for a couple of weeks now and have to say I like it. I have not suffered the poor battery life others seem to have suffered. Maybe this is a feature of the Nokia 820 not needing as many firmware updates from Nokia (which aren’t available yet), nor having the power-hungry features of the larger phones.

The only issue I have had is that I lost an audio channel when using a headset. Initially I was unsure whether it was a mechanical fault with the headphone socket, but I checked the headset was good; it sounded as if the balance was faded to just one side, as you could still hear something faint on the failing side. Anyway, as is often the case in IT, a reboot of the phone fixed the issue.

The return of Visual Studio Setup projects – just because you can use them should you?

A significant blocker for some of my customers moving to Visual Studio 2013 (and 2012 previously) has been the removal of Visual Studio Setup Projects; my experience has been confirmed by UserVoice. Well Microsoft have addressed this pain point by releasing a Visual Studio Extension to re-add this Visual Studio 2010 functionality to 2013. This can be downloaded from the Visual Studio Gallery.

Given this release, the question now becomes: should you use it? Or should you take the harder road in the short term and move to WiX, with the far greater flexibility that route offers going forward?

At Black Marble, when Visual Studio Setup projects were dropped, we decided to move all active projects over to WiX. The learning curve can be a pain, but in reality most Visual Studio Setup projects convert to fairly simple WiX projects. The key advantage for us is that you can build a WiX project on a TFS build agent via MSBuild; that is not something you can do with a Visual Studio Setup Project without jumping through hoops after installing Visual Studio on the build box.
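As a sketch of why this matters for automation: WiX ships MSBuild targets, so a .wixproj builds like any other project and a TFS build definition (or anyone at a command prompt) can invoke it directly. The project name here is hypothetical:

```shell
REM Build a WiX installer project with plain MSBuild - no Visual Studio
REM needs to be installed on the build agent for this to work.
msbuild MyInstaller.wixproj /p:Configuration=Release
```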

That said, I know that the upgrade cost of moving to WiX is a major blocker for many people, and this extension will remove that cost. However, please consider the extension a tool to allow a more staged transition of installer technology, not an end in itself. Don’t let your installers become a nest of technical debt.

All upgraded to the Windows Phone 8.1 Update

My Nokia 820 phone is now updated to 8.1 with the developer preview.

image

The actual upgrade was straightforward; the only issue was that the Store was down last night, so apps could not be updated until this morning. This was made more of an issue by the fact I had had to remove all my Nokia Maps and the iPodcast application (and its downloaded podcasts) to free up space on the phone to allow the upgrade. Both these apps could only store data on the phone (not the SD card) and thus blocked the upgrade. This lack of space on the phone itself has been a constant issue for me on the Nokia 820.

So what is new and immediately useful to me?

  • You can now store virtually anything on an SD card, not just music and images
  • The notification bar is great; it removes the need for the connectivity shortcuts, and it does so much more
  • And at last podcasting is built back in. The only issue is I am not sure I want the hassle of re-entering all my subscriptions; iPodcast does such a great job storing them in the cloud, making re-installation or device swaps so easy. Time will tell on that one, whether I move or not.

At this point I decided to leave my phone on UK settings, so did not get Cortana enabled, just letting others in the office map out any issues that may occur from playing with region settings and the Store.

So now to see what WP8.1 is like to live with…

Where has my picture password login sign in gone on Windows 8?

I have had a Surface 2 for about six months. It is great for watching videos on the train, or a bit of browsing, but I don’t like it for note taking in meetings. This is a shame, as that is what I got it for: a light device with good battery life to take to meetings. What I needed was something I could hand write on in OneNote, an electronic pad. The Surface 2 touch screen is just not accurate enough.

After Rik’s glowing review I have just got a Dell Venue 8 Pro and stylus. I set up the Dell with a picture password and all was OK for a while; I could sign in via a typed password or a picture, as you would expect. However, the picture password sign-in option disappeared from the lock/sign-in screen at some point during the numerous updates and application installations I needed.

I am not 100% certain, but I think the issue is that when I configured the Windows 8 Mail application to talk to our company Exchange server I was asked to accept some security settings from our domain. I think these blocked picture password sign-in for non-domain-joined devices. I joined the Dell to our domain (you can do this as it is Atom, not ARM, based, assuming you are willing to do a reinstall with Windows 8 Pro) and this seems to have fixed my problem. I have installed all the same patches and apps and I still have the picture password option.

So roll on the next meeting, to see if I can take reasonable hand written notes on it, and whether desktop OneNote manages to get them converted to text.

Handling .pubxml files with TFS MSBuild arguments

With Visual Studio 2012 there were changes in the way Web Publishing worked; the key fact being that the configuration was moved from the .csproj to a .pubxml file in the Properties folder. This allows the settings to be more easily managed under source control by a team. This does have some knock-on effects though, especially when you start to consider automated build and deployment.

Up to now we have not seen issues in this area; most of our active projects that needed web deployment packages started in the Visual Studio 2010 era, so had all the publish details in the project file, and this is still supported by later versions of Visual Studio. This meant that if we had three configurations (debug, lab and release), there were three different sets of settings stored in different blocks of the project file. So if you used the /p:DeployOnBuild=True MSBuild argument for your TFS build and built all three configurations, you got the settings for the respective configuration in each drop location.

This seems a good system, until you consider that you have built the assemblies three times; in a world of continuous deployment by binary promotion, is this what you want? Better to build the assemblies once, but have different (or transformed) configuration files for each environment/stage in the release pipeline. This is where a swap to a .pubxml file helps.

You create a .pubxml file by running the wizard in Visual Studio: right-click on a project and select Publish.

image

To get TFS build to use a .pubxml file you need to pass its name as an MSBuild argument. So where in the past we would have used the argument /p:DeployOnBuild=True, we now use /p:DeployOnBuild=True;PublishProfile=MyProfile, where there is a .pubxml file at the path

[Project]/properties/PublishProfiles/MyProfile.pubxml

Once this is done your package will be built (assuming that this is a Web Deployment Package and not some other form of deploy) and available in your drops location. The values you may wish to alter are probably in the [your package name].SetParameters.xml file, which you can alter with whichever transform technology you wish to use, e.g. SlowCheetah or Release Management workflows.
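As a sketch of that transform step, here is one way a release pipeline might rewrite a value in the SetParameters.xml file before running MSDeploy. The file content, file name and parameter are invented for illustration, and a simple text substitution stands in for whichever transform tool you actually use:

```shell
# Create a minimal SetParameters.xml like the one MSBuild drops next to the
# package (content here is illustrative, not from a real project).
cat > MyApp.SetParameters.xml <<'EOF'
<parameters>
  <setParameter name="ConnectionString" value="Server=devsql;Database=MyApp" />
</parameters>
EOF

# Swap the dev server name for the target environment's value before deploying.
sed -i 's/Server=devsql/Server=labsql/' MyApp.SetParameters.xml

cat MyApp.SetParameters.xml
```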

One potential gotcha I had whilst testing with MSBuild from the command line is that the .pubxml file contains a value for the <DesktopBuildPackageLocation> property. This will be the output path you used when you created the publish profile using the wizard in Visual Studio.

If you are testing your arguments with MSBuild.exe from the command line, this is where the output gets built to. If you want the build to behave more like TFS build (using the obj/bin folders) you can clear this value by passing the MSBuild argument /p:DesktopBuildPackageLocation="".
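Putting the arguments together, a command-line test of a publish profile might look something like this (project and profile names are hypothetical):

```shell
REM Build the package from the named publish profile, but clear the profile's
REM DesktopBuildPackageLocation so output lands in obj/bin as a TFS build would.
msbuild MyWebApp.csproj /p:DeployOnBuild=True /p:PublishProfile=MyProfile /p:DesktopBuildPackageLocation=""
```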

You don’t need to worry about this for TFS build definitions, as TFS seems to be able to work it out and gets the correctly packaged files to the drops location.

What I learnt getting Release Management running with a network Isolated environment

In my previous post I described how to get a network isolated environment up and running with Release Management; it is all to do with shadow accounts. Well, getting it running is one thing, having a useful release process is another.

For my test environment I needed to get three things deployed and tested

  • A SQL DB deployed via a DACPAC
  • A WCF web service deployed using MSDeploy
  • A web site deployed using MSDeploy

My environment was a four VM network isolated environment running on our TFS Lab Management system.

image 

The roles of the VMs were

  • A domain controller
  • A SQL 2008R2 server (Release Management deployment agent installed)
  • A VM configured as a generic IIS web server (Release Management deployment agent installed)
  • A VM configured as an SP2010 server (needed in the future, but its presence caused me issues, so it is worth a mention)

Accessing domain shares

The first issue we encountered was that the deployment agents on the VMs needed to be able to access domain shares on our corporate network, not just ones in the local network isolated domain. They need to be able to do this to download the actual deployment media. The easiest way I found to do this was to place a NET USE command at the start of the workflow for each VM I was deploying to. This allowed authentication from the test domain to our corporate domain, and hence gave the agent access to the files it needed. The alternatives would have been more shadow accounts, or cross-domain trusts, both things I did not want the hassle of managing.

image

The run command line activity runs the net command with the arguments use \\store\dropsshare [password] /user:[corpdomain\account]

I needed to use this command on each VM I was running the deployment agent on, so it appears twice in this workflow: once for the DB server and once for the web server.

Version of SSDT SQL tools

My SQL instance was SQL 2008R2; when I tried to use the standard Release Management DACPAC Database Deployer tool it failed with assembly load errors. Basically, the assemblies downloaded as part of the tool deployment did not match anything on the VM.

My first step was to install the latest SQL 2012 SSDT tools on the SQL VM. This did not help, as there was still a mismatch between the assemblies. I therefore created a new tool in the Release Management inventory; this was a copy of the existing DACPAC tool command, but using the current version of the tool assemblies from SSDT 2012.

image

Using this version of the tools worked, my DB could be deployed/updated.

Granting Rights for SQL

Using SSDT to deploy a DB (especially if you have the package set to drop the DB) does not grant any user access rights.

I found the easiest way to grant the rights the web service AppPool accounts needed was to run a SQL script. I did this by creating a component for my release with a small block of SQL to create DB owners; this is the same technique as used for the standard SQL create/drop activities shipped in the box with Release Management.

The arguments I used for sqlcmd were -S __ServerName__ -b -Q "use __DBname__; create user [__username__] for login [__username__]; exec sp_addrolemember 'db_owner', '__username__';"

image

Once I had created this component I could pass the parameters needed to add DB owners.

Creating the web sites

This was straightforward; I just used the standard components to create the required AppPools and the web sites. It is worth noting that these commands can be run against an existing site; they don’t error if the site/AppPool already exists. This seems to be the standard model with Release Management: as there is no decision (if) branching in the workflow, all tools have to either work or stop the deployment.

image

I then used the irmsdeploy.exe Release Management component to run the MSDeploy publish on each web site/service.

image

A note here: you do need to make sure you set the path to the package to the actual folder the .ZIP file is in, not the parent drop folder (in my case Lab\_PublishedWebsites\SABSTestHarness_Package, not Lab)

image

Running some integration tests

We now had a deployment that worked. It pulled the files from our corporate LAN and deployed them into a network isolated lab environment.

I now wanted to run some tests to validate the deployment. I chose to use some SQL based tests that were run via MSTest. These tests had already been added to Microsoft Test Manager (MTM) using TCM, so I thought I had all I needed.

I added the Release Management MTM component to my workflow and set the values taken from MTM for test plan and suite etc.

image

However, I quickly hit cross-domain authentication issues again. The Release Management component does all this test management via a PowerShell script that runs TCM. This must communicate with TFS, which in my system was in the other domain, so it fails.

The answer was to modify the PowerShell script to also pass some login credentials.

image

The only change in the PowerShell script was that each time the TCM command is called, the /login:$LoginCreds block is added, where $LoginCreds are the credentials passed in the form corpdomain\user,password

$testRunId = & "$tcmExe" run /create /title:"$Title" /login:$LoginCreds /planid:$PlanId /suiteid:$SuiteId /configid:$ConfigId /collection:"$Collection" /teamproject:"$TeamProject" $testEnvironmentParameter $buildDirectoryParameter $buildDefinitionParameter $buildNumberParameter $settingsNameParameter $includeParameter
   

An interesting side note: if you run the TCM command at a command prompt you only need to provide the credentials the first time it is run, as they are cached. This does not seem to be the case inside the Release Management script; TCM is run three times, and each time you need to pass the credentials.

Once this was in place, and suitable credentials added to the workflow, I expected my tests to run. They did, but 50% failed. Why?

It turns out the issue was that in my Lab Management environment setup I had set the roles of both the IIS server and the SharePoint server to Web Server.

My automated test plan in MTM was set to run automated tests on the Web Server role, so it sent 50% of the tests to each of the available servers. The tests were run by the Lab Agent (not the deployment agent), which was running as the Network Service machine accounts, e.g. Proj\ProjIIS75$ and Proj\ProjSp2010$. Only the former of these had been granted access to the SQL DB (it was the account being used for the AppPool), hence half the tests failed with DB access issues.

I had two options here: grant both machine accounts access, or alter my Lab environment. I chose the latter and put the two boxes in different roles.

image

I then had to reload the test plan in MTM so it was updated with the changes.

image

Once this was done my tests then ran as expected.

Summary

So I now have a Release Management deployment plan that works for a network isolated environment. I can run integration tests, and will soon add some Coded UI ones; it should only be a case of editing the test plan.

It is an interesting question how well Release Management, in its current form, works with Lab Management when it is SCVMM/network isolated environment based; it is certainly not its primary use case, but it can be done, as this post shows. It certainly provides more options than the TFS Lab Management build template we used to use, and does provide an easy way to extend the process to manage deployment to production.

Fix for ‘Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid’ errors on TFS 2013.2 build

When working with web applications we tend to use MSDeploy for distribution. Our TFS build box, as well as producing a _PublishedWebsite copy of the site, produces the ZIP packaged version we use to deploy to test and production servers via PowerShell or IIS Manager.

To create this package we add the MSBuild Arguments /p:CreatePackageOnPublish=True /p:DeployOnBuild=true /p:IsPackaging=True 
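For anyone wanting to reproduce the packaging step outside TFS, the same arguments can be passed to MSBuild at a command prompt; the project name here is hypothetical:

```shell
REM Build the project and produce the MSDeploy ZIP package alongside the
REM normal output, using the same arguments as the TFS build definition.
msbuild MyWebApp.csproj /p:CreatePackageOnPublish=True /p:DeployOnBuild=true /p:IsPackaging=True
```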

image

This had been working fine until I upgraded our TFS build system to 2013.2. Any build queued after this upgrade that builds MSDeploy packages gives the error

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.targets (3883): Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid.)

If I removed the /p:DeployOnBuild=true argument, the build was fine, just no ZIP package was created.

After a bit of thought I realised that I had also upgraded my PC to 2013.2 RC, where the publish options for a web project are more extensive, giving more options for Azure.

So I assumed the issue was a mismatch between MSBuild and the targets files, which were missing these new options. I replaced the contents of C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web on my build box with the version from my upgraded development PC, and my build started working again.
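One way to do that copy (run elevated on the build box, with the dev PC's admin share accessible; the machine name is hypothetical) would be something like:

```shell
REM Mirror the updated Web publishing targets from the upgraded dev PC onto
REM the build box; /MIR makes the destination an exact copy of the source.
robocopy "\\devpc\c$\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web" "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web" /MIR
```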

It seems there are some extra parameters set in the newer version of the build targets. Let’s see if it changes again when Visual Studio 2013.2 RTMs.

Upgrading a VSTO project from VS 2008 to 2013

To make sure all our Word documents are consistent we use a Word template that includes a VSTO action pane.

image

This allows us to insert standard blocks of text, T&Cs and the like, and also makes sure document revisions and reviews are correctly logged. We have used this for years without any issues, but I recently needed to make some changes to the underlying Word .dotx template, and I had to jump through a couple of hoops to get it rebuilding in Visual Studio 2013 for Office 2013 (previously it had been built against the Visual Studio 2008 generation of tools).

The old VSTO project opened in Visual Studio 2013 without a problem, doing the one way upgrade. However, when I tried to build the project (which also signs it) I got the error

The "FindRibbons" task failed unexpectedly.
System.IO.FileNotFoundException:
  Could not load file or assembly 'BMAddIn, Version=1.0.0.0, Culture=neutral, 
  PublicKeyToken=null' or one of its dependencies.
  The system cannot find the file specified.

The issue was that you need to remove the SecurityTransparent attribute from the end of the AssemblyInfo.cs file, as detailed on MSDN.


Once this error was clear, I also got a problem when I tried to sign the assembly


error CS1548: Cryptographic failure while signing assembly. Unknown error (8013141c)


This was fixed by sorting out the rights on my PC, as I am running Visual Studio under a non-admin account. You need to give your current user ‘Full Access’ to C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys, or run Visual Studio as admin.
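If you prefer a command line to the Explorer security dialog, something like this (run elevated) should grant the access. The path shown is the Vista-and-later location that the legacy path above points at, and Full control may be broader than strictly necessary:

```shell
REM Grant the current user full control over the machine key store,
REM recursing into existing key files with /T.
icacls "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys" /grant "%USERNAME%":F /T
```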

So now that it rebuilds and can be deployed, I can make my modifications and enhance our VSTO solution, a much underused technology.

TFS 2013.2 has RTM’d

TFS 2013.2 RTM’d last night and is available on MSDN; interestingly, Visual Studio 2013.2 is still only an RC, so we have to wait for that to RTM.

As we had a good proportion of our team at Build 2014, I took the chance to do the upgrade today. It went smoothly, no surprises, though the installation phase (the middle bit after the copy and before the config wizard) took a while. Our build agents all seemed to want a reboot (or two) at this point; the TFS server did not, but took a good few minutes with no progress bar movement while, I assume, it was updating libraries.

So what do we get in 2013.2?

  • Can query on Work Item Tagging
  • Backlog management improvements
  • Work item charting improvements (can pin charts to the homepage)
  • Export test plan to HTML
  • Release Management “Tags”
  • An assortment of Git improvements

I bet the charts on the home page and querying on tags will be popular.