Moving Environments between TPCs when using TFS Lab Management

Background

One area of TFS Lab Management that I think people find confusing is that all environments are associated with specific Team Projects (TPs) within Team Project Collections (TPCs). This is not what you might first expect if you think of Lab Management as just a big Hyper-V server. When configured you end up with a number of TPC/TP related silos, as shown in the diagram below.

image

 

This becomes a major issue for us as each TP stores its own environment definitions in its own silo; they cannot be shared between TPs, and hence not between TPCs, so it is hard to re-use environments without recreating them.

This problem affects companies like ours, as we have many TPCs because we tend to have one per client, an arrangement not that uncommon for consultancies.

It is not just in Lab Management that this is an issue for us. The isolated nature of TPCs, a great advantage for client security, has caused us to have an ever growing number of Build Controllers and Test Controllers which we are regularly reassigning to whichever are our active TPCs. Luckily multiple Build Controllers can be run on the same VM (I discussed this unsupported hack here), but unfortunately there is no similar workaround for Test Controllers.

MTM is not your friend when storing environments for use beyond the current TP

What I want to discuss in this post is how, when you have a working environment in one TP, you can get it into another TP with as little fuss as possible.

Naively you would think that you could use the Store in Library option within MTM that is available for a stopped environment.

 image

This does store the environment in the SCVMM Library, but it is only available for the TP that it was stored from; it is stored in the A1 silo in the SCVMM Library. Now you might ask why: the SCVMM Library is just a share, so anything in it should be available to all? But it turns out it is not just a share. It is true the files are on a UNC share, and you can see the stored environments as a number of Lab_[guid] folders, but there is also a DB that stores metadata, and this is the problem. This metadata associates the stored environment with a given TP.
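You can see the file half of this split for yourself. Below is a minimal sketch that simply lists the Lab_[guid] folders on the library share; the share path is a placeholder for your own SCVMM library share, and remember the TP association lives in the SCVMM database, not in these folders.

     using System;
     using System.IO;

     class ListStoredEnvironments
     {
         static void Main()
         {
             // Placeholder UNC path to the SCVMM library share
             string libraryShare = @"\\scvmm01\MSSCVMMLibrary";

             // Each stored environment shows up as a Lab_[guid] folder,
             // but the TP it belongs to is recorded in the SCVMM database
             foreach (string folder in Directory.GetDirectories(libraryShare, "Lab_*"))
             {
                 Console.WriteLine(Path.GetFileName(folder));
             }
         }
     }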

The same is true if you choose to store just a single VM from within MTM, whether you store it as a VM or as a template.

Why is this important you might ask? Well, it is all well and good that you can build your environment from VMs and templates in the SCVMM Library, but these will not be fully configured for your needs. You will build the environment, making sure TFS agents are in place, maybe putting extra applications, tools or test data on the systems. It is all work you don’t want to have to repeat for what is in effect the same environment in another TP or TPC. This is a problem we see all the time. We do SharePoint development, so we want a standard environment (a couple of load balanced servers and a client) we can use for many client projects in different TPCs (OK, VM factory can help, but this is not my point here).

A workaround of sorts

The only way I have found to ease this problem is, when I have a fully configured environment, to clone the key VMs (the servers) into the SCVMM Library using SCVMM, NOT MTM:

  1. Using MTM, stop the environment you wish to work with.
  2. Identify the VM you wish to store; you need its Lab name. This can be found in MTM if you connect to the lab and check the system info for the VM.

    image
  3. Load SCVMM admin console, select Virtual Machines tab and find the correct VM

    image
  4. Right-click on the VM and select Clone.
  5. Give the VM a new meaningful name e.g. ‘Fully configured SP2010 Server’.
  6. Accept the hardware configuration (unless you wish to change it for some reason).
  7. IMPORTANT: On the destination tab select the option to ‘store the virtual machine in the library’. This appears to be the only means to get a VM into the library such that it can be imported into any TPC/TP.

    image
  8. Next select the library share to use
  9. And let the wizard complete.
  10. You should now have a VM in the SCVMM Library that can be imported into new environments.

 

You do have to recreate the environment in your new TP at this point, but at least the servers you import into this environment are already configured correctly. If, for example, you have a pair of SP2010 servers, a DC and an NLB, as long as you drop them into a new isolated environment they should just leap into life as they did before. You should not have to do any extra re-configuration.

The same technique could be used for workstation VMs, but it might be as quick to just use template (sysprep’d) clients. You just need to take a view on this for your environment requirements.

Debugging CodedUI Tests when launching the test as a different user

If you are working with CodedUI tests in Visual Studio you sometimes get unexpected results, such as the wrong field being selected in replays. When trying to work out what has happened, the logging features are really useful. These are probably already switched on, but you can check by following the details in this post.

Assuming you make no logging level changes from the default, if you look in the

      %Temp%\UITestLogs\LastRun

folder, you should see a log file containing warning level messages in the form:

Playback - {1} [SUCCESS] SendKeys "^{HOME}" - "[MSAA, VisibleOnly]ControlType='Edit'"

E, 11576, 113, 2011/12/20, 09:55:00.344, 717559875878, QTAgent32.exe, Msaa.GetFocusedElement: could not find accessible object of foreground window

W, 11576, 113, 2011/12/20, 09:55:00.439, 717560081047, QTAgent32.exe, Playback - {2} [SUCCESS] SendKeys "^+{END}" - "[MSAA, VisibleOnly]ControlType='Edit'"

E, 11576, 113, 2011/12/20, 09:55:00.440, 717560081487, QTAgent32.exe, Msaa.GetFocusedElement: could not find accessible object of foreground window

W, 11576, 113, 2011/12/20, 09:55:00.485, 717560179336, QTAgent32.exe, Playback - {3} [SUCCESS] SendKeys "{DELETE}" - "[MSAA, VisibleOnly]ControlType='Edit'"
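If you find yourself digging these logs out regularly, the following is a minimal sketch that just locates the newest file in that folder; it assumes nothing beyond the default log location described above.

     using System;
     using System.IO;
     using System.Linq;

     class FindLatestUiTestLog
     {
         static void Main()
         {
             // Default CodedUI playback log location: %Temp%\UITestLogs\LastRun
             string logFolder = Path.Combine(Path.GetTempPath(), @"UITestLogs\LastRun");

             // Pick the most recently written log file, if there is one
             FileInfo latest = new DirectoryInfo(logFolder)
                 .GetFiles()
                 .OrderByDescending(f => f.LastWriteTimeUtc)
                 .FirstOrDefault();

             Console.WriteLine(latest != null ? latest.FullName : "No logs found");
         }
     }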

A common problem with CodedUI tests can be who you are running the test as. It is possible to launch the application under test as a different user using the following call at the start of a test:

     ApplicationUnderTest.Launch(@"c:\my.exe",
                                 @"c:\my.exe",
                                 "",
                                 "username",
                                 securepassword,
                                 "domain");
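For completeness, here is a minimal sketch of how that call might sit inside a test, including building the SecureString it expects; the paths, user name, domain and password are all placeholders, not values from a real system.

     using System.Security;
     using Microsoft.VisualStudio.TestTools.UITesting;
     using Microsoft.VisualStudio.TestTools.UnitTesting;

     [CodedUITest]
     public class LaunchAsOtherUserTest
     {
         [TestMethod]
         public void LaunchApplicationAsTestUser()
         {
             // Build the SecureString that the Launch overload expects
             // (the password here is a placeholder, not a real value)
             SecureString securepassword = new SecureString();
             foreach (char c in "P@ssw0rd")
             {
                 securepassword.AppendChar(c);
             }

             // Launch the application under test as the specified user.
             // Remember the process running the test needs to be elevated
             // for playback to find fields reliably (see below).
             ApplicationUnderTest app = ApplicationUnderTest.Launch(
                 @"c:\my.exe",    // file to launch
                 @"c:\my.exe",    // alternate file name
                 "",              // command line arguments
                 "username",      // user to run the application as
                 securepassword,
                 "domain");

             // ... the rest of the test would drive the UI via app
         }
     }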

 

I have found that this launch mechanism can cause problems with fields not being found in the CodedUI test unless you run Visual Studio as administrator (using the right-click ‘Run as administrator’ option in Windows). This is down to who is allowed to access whose UI thread in Windows if a user is not an administrator.

So if you want to use ApplicationUnderTest.Launch to change the user for a CodedUI test, it is best that the process running the test is an administrator.

DevOps: are testers best placed to fill this role?

DevOps seems to be the new buzz role in the industry at present: people who can bridge the gap between the worlds of development and IT pros. Given my career history this could be a description of the path I took. I have done both, and now sit in the middle covering ALM consultancy where I work with both roles. You can’t avoid a bit of development and a bit of IT pro work when installing and configuring TFS with some automated build and deployment.

The growth of DevOps is an interesting move because of late I have seen the gap between IT pros and developers grow. Many developers seem to have less and less understanding of operational issues as time goes on. I fear this is due to the greater levels of abstraction that new development tools provide. This is only going to get worse as we move into the cloud; why does a developer need to care about Ops issues, AppFabric does that for them, doesn’t it?

In my view this is dangerous; we all need at least a working knowledge of what underpins the technology we use. Maybe this should hint at good subjects for informal in-house training: why not get your developers to give intro training to the IT pros and vice versa? Or encourage people to listen to podcasts on the other role’s subjects, such as Dot Net Rocks (a dev podcast) and Run As Radio (an IT pro podcast). It was always a nice feature of the TechEd conference that it had a dev and an IT pro track, so if the fancy took you, you could hear about technology from the point of view of the other role.

However, these are longer term solutions; it is all well and good promoting them, but in the short term who is best placed to bridge this gap now?

I think the answer could be testers. I wrote a post a while ago saying it was great to be a tester as you get to work with a wide range of technologies; isn’t this just an extension of that role? DevOps needs a working understanding of development and operations, as well as a good knowledge of deployment and build technologies. These are all aspects of the tester role, assuming your organisation considers a tester to be not just a person who ticks boxes on a checklist, but a software development engineer working in test.

This is not to say that DevOps and testers are the same, just that there is some commonality, so you may have more skills in house than you thought you did. DevOps is not new; someone was doing the work already, they just did not historically give it that name (or probably any name).

When you try to run a test in MTM you get a dialog ‘Object reference not set to an instance of an object’

When trying to run a newly created manual test in MTM I got the error dialog

‘You cannot run the selected tests, Object reference not set to an instance of an object’.

image

 

On checking the Windows event log I saw:

Detailed Message: TF30065: An unhandled exception occurred.

Web Request Details Url: http://……/TestManagement/v1.0/TestResultsEx.asmx

So not really that much help in diagnosing the problem!

It turns out the problem was that I had been editing the test case work item type. Though it had saved/imported without any errors (it is validated during these processes), something was wrong with it. I suspect it was to do with filtering the list of users in the ‘assigned to’ field, as this is what I last remember editing, but I might be wrong; it was on a demo TFS instance I had not used for a while.

The solution was to revert the test case work item type back to a known good version and recreate the failing test(s). It seems once a test was created from the bad template there was nothing you could do to fix it.

Once this was done MTM ran the tests without any issues.

When I have some time I will do an XML compare of the exported good and bad work item types to see what the problem really was.
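As a hint of how that comparison might be done, here is a minimal sketch using the TFS 2010 client object model to export a work item type definition to an XML file that can then be diffed against a known good copy; the collection URL, project name and output path are placeholders.

     using System;
     using Microsoft.TeamFoundation.Client;
     using Microsoft.TeamFoundation.WorkItemTracking.Client;

     class ExportTestCaseDefinition
     {
         static void Main()
         {
             // Placeholder collection URL and team project name
             TfsTeamProjectCollection collection =
                 TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                     new Uri("http://myserver:8080/tfs/MyCollection"));

             WorkItemStore store = collection.GetService<WorkItemStore>();
             Project project = store.Projects["MyProject"];

             // Export the Test Case work item type definition (false = no global lists)
             // so it can be compared against a known good copy
             WorkItemType testCase = project.WorkItemTypes["Test Case"];
             testCase.Export(false).Save(@"c:\temp\TestCase.xml");
         }
     }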

The battle of the Lenovo W520 and projectors

My Lenovo W520 is the best laptop I have owned, but I have had one major issue with it: external projectors. The problem is it does not like to duplicate the laptop screen output to a projector; it works fine if extending the desktop, just not duplicating.

Every time I have tried to use it with a projector I either end up only showing on the projector and looking over my shoulder, or fiddling for ages until it suddenly works, usually at a low resolution; by then I don’t know what I did to get to this point, so I don’t dare fiddle any more and just use it as it is. A bit of a problem given the number of presentations I do. A quick search shows I am not alone with this problem.

The issue, it seems, is down to the fact that the Lenovo has two graphics systems, an integrated (Intel) one and a discrete (Nvidia) one. The drivers in Windows 7 allow it to switch dynamically between the two to save power. This is called Nvidia Optimus switching.

The answer to the problem is to disable this Optimus feature in the BIOS. This comes at the cost of some battery life, but it is better to have a system that works as I need but has to be plugged in, than one that does not work at most client sites.

So to make the change:

  1. Reboot into BIOS (press the ThinkVantage button)
  2. Select the Discrete graphics option (the Nvidia 1000M)
  3. Disable the Optimus features
  4. Save and Reboot
  5. Windows 7 re-detects all the graphics drivers and then all seems OK (so far…)

One more point worth noting: I again fell for the problem that, as my Windows 7 partition is BitLockered, you have to enter your recovery key if you change anything in the BIOS; see my past post for details of how to fix this issue. I was a bit surprised by this as I thought BitLocker would only care about changes to the master boot record, but you live and learn.