The best way to enjoy Build on the road.

The way most big conferences now manage to live stream virtually everything is very impressive. I started watching the stream of yesterday's Microsoft Build keynote on the office's big projection screen with everyone else at Black Marble. I have always said the best way to enjoy a keynote is on a comfy sofa with a beer at the end of the day; so much better than an early morning queue and then a usually over-air-conditioned hall with 10,000 close friends.

Unfortunately I had to leave about an hour into the keynote, so I fired up my Lumia 800 Windows Phone 7.8 device (yes, an older one, but I like the size) and hit the Channel 9 site. This picked up the stream, just in the browser, and I was able to listen to the rest of the session via 3G whilst travelling home. I of course made sure the screen was off as I was driving. It was seamless.

Clever what this technology stuff can do now, isn't it?

A day of TFS upgrades

After last night's release of new TFS and Visual Studio bits at the Build conference I spent this morning upgrading my demo VMs. Firstly I upgraded to TFS 2012.3, then took a snapshot before going on to the 2013 Preview. So by switching snapshots I can now demo either version. In both cases the upgrade process was as expected, basically a rerun of the configuration wizard with all the fields bar the password prefilled. Martin Hinshelwood has done a nice post if you want more details on the process.
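As an aside, if your demo VMs are on Windows Server 2012 Hyper-V, the pre-upgrade snapshot and rollback can be scripted. A minimal sketch, with hypothetical VM and snapshot names:

# Take a snapshot of the TFS 2012.3 state before the 2013 Preview upgrade
Checkpoint-VM -Name "TFSDemo" -SnapshotName "TFS 2012.3 pre-2013 Preview"

# Roll back to that snapshot whenever you want to demo the older version again
Restore-VMSnapshot -VMName "TFSDemo" -Name "TFS 2012.3 pre-2013 Preview" -Confirm:$false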

Looking at the sessions from Build on Channel 9, there are not too many on TFS; to find out more about the new features you are probably better checking out the TechEd USA or TechEd Europe streams.

Why can't I find my build settings on a Git-based project on TFS Service?

Just wasted a bit of time trying to find the build tab on a TFS Team Project hosted on http://tfs.visualstudio.com using a Git repository. I was looking in Team Explorer expecting to see something like

[image: Team Explorer panel including the Builds option]

But all I was seeing was the Visual Studio Git Changes option (just the top bit of the left panel above).

It took me ages to realise that the issue was I had cloned the Git repository to my local PC using the Visual Studio Tools for Git. So I was using just the Git tools, not the TFS tools. As far as Visual Studio was concerned this was just some Git repository; it could have been local, GitHub, TFS Service or anything else that hosts Git.
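For example, the clone below (hypothetical account and project names) is a perfectly ordinary Git command; nothing about it, or the resulting local repository, is TFS specific, which is exactly why Visual Studio treats it as a plain Git repository:

git clone https://myaccount.visualstudio.com/DefaultCollection/_git/MyProject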

To see the full features of TFS Service you need to connect to the service using Team Explorer (the green bits), not just as a Git client (the red bits).

[image: Team Explorer with the TFS connection parts highlighted in green and the plain Git client parts in red]

Of course if you only need Git-based source code management tools, just clone the repository and use the Git tooling, whether inside or outside Visual Studio. The Git repository in TFS is just a standard Git repo, so all tools should work. From the server end TFS does not care what client you use; in fact it will still associate your commits, irrespective of client, with TFS work items if you use the #1234 syntax for work item IDs in your comments.
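So, for example, a hypothetical commit comment like this, pushed from any Git client, would be associated with work item 1234 on the server:

git commit -m "Correct the stock level rounding error #1234"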

However, if you are using hosted TFS from Visual Studio, it probably makes more sense to use a Team Explorer connection so all the other TFS features, such as build, light up. The best bit is that all the Git tools are still there, as Visual Studio knows it is still just a Git repository. Maybe doing this will mean less confusion when I come to try to use a TFS feature!

Error adding a new widget to our BlogEngine.NET 2.8.0.0 server

Background

If you use Twitter on any website you will probably have noticed that they have switched off the 1.0 API; you have to use the 1.1 version, which is stricter over OAuth. This meant the Twitter feeds into our blog server stopped working on the 10th of June. The old call of

http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=blackmarble

no longer worked, and just changing the 1 to 1.1 did not work either.
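For comparison, the nearest 1.1 equivalent is shown below (I have not wired this up myself); it returns JSON only, the RSS format having been dropped, and every request must also be OAuth signed, so a simple URL swap in a feed widget cannot work.

https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=blackmarble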

So I decided to pull down a different widget for BlogEngine.NET to do the job, choosing Recent Tweets.

The Problem

However, when I tried to access our root/parent blog site and go onto the customisation page to add the new widget I got:

Ooops! An unexpected error has occurred.

This one’s down to me! Please accept my apologies for this – I’ll see to it that the developer responsible for this happening is given 20 lashes (but only after he or she has fixed this problem).

Error Details:

Url : http://blogs.blackmarble.co.uk/blogs/admin/Extensions/default.cshtml
Raw Url : /blogs/admin/Extensions/default.cshtml
Message : Exception of type ‘System.Web.HttpUnhandledException’ was thrown.
Source : System.Web.WebPages
StackTrace : at System.Web.WebPages.WebPageHttpHandler.HandleError(Exception e)
at System.Web.WebPages.WebPageHttpHandler.ProcessRequestInternal(HttpContextBase httpContext)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
TargetSite : Boolean HandleError(System.Exception)
Message : Item has already been added. Key in dictionary: ‘displayname’ Key being added: ‘displayname’

Looking at the discussion forums it seemed we had some DB issue.

The Fix

I could see nobody with the same problem, so I pulled down the source code from CodePlex and had a look at DBBlogProvider.cs (line 2350), where the error was reported. I think the issue is that when a blog site is set as 'Is for site aggregation', as our root site where I needed to install the new widget is, the SQL query that generates the user profile list is not filtered by blog, so it sees duplicates.

I disabled ‘Is for site aggregation’ for our root blog and was then able to load the customisation page and add my widget.

Interestingly, I then switched back on ‘Is for site aggregation’ and all was still OK. I assume the act of opening the customisation page once fixes the problem.

Also worth noting…

In case you had not seen it (I hadn't), there is a patch for 2.8.0.0 that fixes a problem where the slug (the URL generated for a post) was not being created correctly, so multiple posts on the same day got grouped as one. This caused search and navigation issues. Worth installing if you are likely to write more than one post on a blog in a day.

Using SYSPREP’d VM images as opposed to Templates in a new TFS 2012 Lab Management Environment

An interesting change with Lab Management 2012 and SCVMM 2012 is that templates become a lot less useful. In the SCVMM 2008 versions you had a choice when you stored VMs in the SCVMM library:

  • You could store a fully configured VM
  • or a generalised template.

When you added the template to a new environment you could enter details such as the machine name, domain to join, product key etc. If you try this with SCVMM 2012 you just see the message 'These properties cannot be edited from Microsoft Test Manager'

[image: Microsoft Test Manager showing the message 'These properties cannot be edited from Microsoft Test Manager']

So you are meant to use SCVMM to manage everything about the templates; not great if you want to do everything from MTM. However, is that the only solution?

An alternative is to store a SYSPREP'd VM as a virtual machine in the SCVMM library. This VM can be added to an environment as many times as required (though if added more than once you are asked if you are sure).

[image: the warning shown when the same stored VM is added to an environment more than once]

This method does, however, bring problems of its own. When the environment is started, assuming it is network isolated, the second network adaptor is added as expected. However, as there is no agent on the VM it cannot be configured automatically; for a template Lab Management would sort all this out, but because the VM is SYSPREP'd it is left sitting at the mini-setup 'Pick your region' screen.

You need to manually configure the VM. The best process I have found is:

  1. Create the environment with your standard VMs and the SYSPREP'd one
  2. Boot the environment; the standard ready-to-use VMs get configured OK
  3. Manually connect to the SYSPREP'd VM and complete the mini setup. You will now have a PC on a workgroup
  4. The PC will have two network adapters, neither connected to your corporate network; both are connected to the network isolated virtual LAN. You have a choice:
    • Connect the legacy adaptor to your corporate LAN, to get at a network share via SCVMM
    • Mount the TFS Test Agent ISO
  5. Either way you need to manually install the Test Agent and run its configuration (just select the defaults; it should know where the test controller is). This will configure the network isolated adaptor onto the 192.168.23.x network
  6. Now you can manually join the isolated domain (see the sketch after this list)
  7. Reboot the VM (or the environment) and all should be OK
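If you prefer a command line to the GUI for steps 6 and 7, a PowerShell sketch like the following (hypothetical isolated domain name; run inside the VM once the test agent has configured the isolated adaptor) does the join and reboot:

# Join the network isolated domain; you will be prompted for the lab credentials
Add-Computer -DomainName "lab.local" -Credential (Get-Credential)

# Then reboot so the domain join takes effect
Restart-Computer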

All a bit long-winded, but it does mean it is easier to build generalised VMs from MTM without having to play around in SCVMM too much.

I think it would all be a good deal easier if the VM had the agents on it before the SYSPREP. I have not tried this yet, but in my opinion that is true of all VMs used for Lab Management: get the agents on as early as you can, it just speeds everything up.

Great experience moving my DotNetNuke site to PowerDNN

I posted recently about my experiences in upgrading DotNetNuke 5 to 7; what fun that was! Well, I have now had to do the move for real. I expected to follow the same process, but had problems. It turns out the key was to go 5 > 6 > 7. Once I did this the upgrade worked; it turns out this is the recommended route. Why my previous trial worked I don't know.

Anyway, I ended up with a local DNN 7 site running against SQL 2012. It was still using a DNN 5 based skin (which has problems with IE 10) that I needed to alter, but it was functional. So it was time to change ISP.

Historically I had the site running on Zen Internet, but their Windows hosting is showing its age: they do not offer .NET 4, and appeared to have no plans to change this when I last asked. Also there is no means to do a scripted/scheduled backup on their servers.

The lack of .NET 4 meant I could not use Zen for DNN 7. So I chose to move to PowerDNN, which is a DNN specialist, offers the latest Microsoft hosting and was cheaper.

I had expected the migration/setup to be awkward, but it was far from it. I uploaded my backups to PowerDNN's FTP site and the site was live within 10 minutes. I had a good few questions over backup options, virtual directories for other .NET applications etc.; all were answered via email virtually instantly. Thus far the service has been excellent; PowerDNN is looking a good choice.

Using git tf to migrate code between TFS servers retaining history

Martin Hinshelwood did a recent post on moving source code between TFS servers using git tf. He mentioned that you could use the --deep option to get the whole changeset check-in history.

Being fairly new to using Git in anything other than the simplest scenarios, it took me a while to get the commands right. This is what I used in the end (using the Brian Keller VM for sample data):

C:\tmp\git> git tf clone http://vsalm:8080/tfs/fabrikamfibercollection $/fabrikamfiber/Main oldserver --deep
Connecting to TFS…
Cloning $/fabrikamfiber/Main into C:\Tmp\git\oldserver: 100%, done.
Cloned 5 changesets. Cloned last changeset 24 as 8b00d7d

C:\tmp\git> git init newserver
Initialized empty Git repository in C:/tmp/git/newserver/.git/

C:\tmp\git> cd newserver

C:\tmp\git\newserver [master]> git pull ..\oldserver --depth=100000000
remote: Counting objects: 372, done.
remote: Compressing objects: 100% (350/350), done.
Receiving objects:  96% (358/372), 2.09 MiB | 4.14 MiB/s
Receiving objects: 100% (372/372), 2.19 MiB | 4.14 MiB/s, done.
Resolving deltas: 100% (110/110), done.
From ..\oldserver
 * branch HEAD -> FETCH_HEAD

C:\tmp\git\newserver [master]> git tf configure http://vsalm:8080/tfs/fabrikamfibercollection $/fabrikamfiber/NewLocation
Configuring repository

C:\tmp\git\newserver [master]> git tf checkin --deep --autosquash
Connecting to TFS…
Checking in to $/fabrikamfiber/NewLocation: 100%, done.
Checked in 5 changesets, HEAD is changeset 30

The key was that I had missed the --autosquash option on the final checkin.

Once this was run I could see my check-in history; the process is quick and, once you have the right command line, straightforward. However, just like the TFS Integration Platform, time is compressed; unlike the TFS Integration Platform, you also lose the ownership of the original edits.

[image: the migrated check-in history shown in the new team project]

This all said, another useful tool in the migration arsenal.

Where did my parameters go when I edited that standard TFS report?

I have been doing some editing of the standard Scrum TFS 2012 Sprint Burndown report in SQL 2012 Report Builder. When I ran the report after editing the MDX query in the dsBurndown DataSet to return an extra column I got an error:

  • On a remote PC it just said there was an error with the dsBurndown dataset
  • On the server hosting Reporting Services, or in Report Builder, I got a bit more information: it said the TaskName parameter was not defined

On checking the state of the dataset parameters before and after my edit I could see that the TaskName parameter had been lost.

[image: the dataset parameter list before and after the edit, with TaskName missing afterwards]

Manually re-adding it fixed the problem.

Interestingly, which parameters were lost seemed to depend on the MDX query edit I made; I assume something is inferring the parameters from the MDX query.

So it is certainly one to keep an eye on. I suspect this is a feature of Report Builder; maybe I am better off just using trusty Notepad to edit the .RDL file. Oh how I love editing XML in Notepad!

Nice discussion on real world issues with software projects

Just watched a good session from TechEd USA 2013. It was billed as 'Agile Software Development with Microsoft Visual Studio ALM' but has little that is specifically TFS based; no demos, just war stories from Aaron Bjork and Peter Provost.

It is a good discussion of the problems, experiences and solutions the Microsoft Visual Studio team went through when trying to move to agile development, including:

  • Sprint lengths, be consistent across teams
  • Retrospectives, do you actually act on them?
  • Technical debt, do you write features or bugs?
  • Measure what you do
  • What is the role of the product owner/manager?

Well worth a watch.