High CPU utilisation on the data tier after a TFS 2010 to 2013 upgrade

There have been significant changes in the DB schema between TFS 2010 and 2013. This means that as part of an in-place upgrade a good deal of data needs to be moved around. Some of this is moved as part of the actual upgrade process, but to get you up and running quicker, some is moved post upgrade using SQL SPROCs. Depending on how much data there is to move this can take a while, maybe many hours. This is the cause of the SQL load.

A key factor in how long this takes is the size of your pre-upgrade tbl_attachmentContent table; this is where, amongst other things, test attachments are stored. So if you have a lot of test attachments it will take a while, as these are moved to their new home in tbl_content.
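If you want a rough idea of what you are in for before you start, you can check the size of that table. Below is a minimal C# sketch (the server and collection database names are assumptions; TFS databases should be treated as read-only, so if in doubt run this against a restored backup):

using System;
using System.Data.SqlClient;

class AttachmentSizeCheck
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=YourTfsSqlServer;Database=Tfs_DefaultCollection;Integrated Security=SSPI"))
        using (var cmd = new SqlCommand("EXEC sp_spaceused 'tbl_AttachmentContent'", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // sp_spaceused returns name, rows, reserved, data, index_size, unused
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1} rows, {2} reserved", reader["name"], reader["rows"], reader["reserved"]);
                }
            }
        }
    }
}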

If you want to minimise the time this takes it can be a good idea to remove any unwanted test attachments prior to doing the upgrade. This is done with the Test Attachment Cleaner from the appropriate version of the TFS Power Tools for your TFS server. However, beware that if you don’t have a suitably patched SQL server there can be issues with ghost files (see Terje’s post).

If you cannot patch your SQL to a suitable version to avoid this problem then it is best to clean out old test attachments only after the whole TFS migration has completed, i.e. wait until the high SQL CPU utilisation caused by the SPROC-based migration has finished. You don’t want to be trying to clean out old test attachments at the same time TFS is trying to migrate them.

A walkthrough of getting Kerberos working with a Webpart inside SharePoint accessing a WCF service

In the past I have posted on how to get Kerberos running for multi-tier applications. As usual, when I had to redeploy the application onto new hardware I found my notes were not as clear as I would have hoped. So here is what is meant to be a walkthrough for getting our application working in our TFS lab environment.

What we are building

Our lab is a four-box system, running in the test domain proj.local:

[Image: diagram of the four lab VMs]

  • ProjDC – the domain controller for the proj.local domain
  • ProjIIS75 – a web server hosting our WCF web service
  • ProjSQL2008R2 – the SQL box for the applications in the domain
  • ProjSP2010 –  a SharePoint server

The logical system we are trying to build is a SharePoint site with a webpart that calls a WCF service, which in turn makes calls to a SQL database. We need the identity the user logs into the SharePoint server with to be passed through to the WCF service via impersonation.

[Image: logical architecture – SharePoint webpart calling the WCF service, which calls the SQL database]

Though not important to this story, all of this was running as a network-isolated environment on our TFS Lab Management infrastructure.

Application Deployment

We have to deploy a number of layers for our application:


DB

  1. Using an SSDT DACPAC deployment we created a new DB for our application on ProjSQL2008R2
  2. We granted the machine account proj\ProjIIS75$ owner access to this DB (the WCF service will run as this account)
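For reference, the grant amounts to something like this T-SQL run on ProjSQL2008R2 (the database name is an assumption for this sketch; sp_addrolemember is used because ALTER ROLE ... ADD MEMBER only arrived in SQL 2012):

CREATE LOGIN [proj\ProjIIS75$] FROM WINDOWS;
USE SabsDb;
CREATE USER [proj\ProjIIS75$] FOR LOGIN [proj\ProjIIS75$];
EXEC sp_addrolemember 'db_owner', 'proj\ProjIIS75$';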


WCF Service

  1. Using MSDeploy we deployed a new copy of our WCF web site onto ProjIIS75.
  2. We bound this to port 8081.
  3. We set the AppPool to run as Network Service (i.e. the proj\ProjIIS75$ account we just granted DB access to).
  4. We made sure the web site authentication was set to enable anonymous authentication, ASP.NET impersonation and Windows authentication.
  5. We set the DB connection string in the web.config to point to the new DB on ProjSQL2008R2, along with the other server-specific AppSettings (see the snippet after this list).
  6. We made sure port 8081 was open on the firewall.
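For completeness, the relevant web.config entry looks something like this (the connection string and database names are assumptions, matching the DB sketch above):

<connectionStrings>
  <add name="CallsDb" connectionString="Server=ProjSQL2008R2;Database=SabsDb;Integrated Security=SSPI" providerName="System.Data.SqlClient" />
</connectionStrings>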


SharePoint

  1. Added the WSP solution containing our front end to the SharePoint farm (you can use STSadm or PowerShell commands to do this; a PowerShell sketch follows this list)
  2. Using SharePoint Central Admin we deployed this solution to the web application
  3. Activated the feature on the site the solution had been deployed to.
  4. Created a new web page to host the webpart e.g. http://share2010.proj.local/sitepages/mypage.aspx (Note: the name we use to access this SharePoint site is share2010, not ProjSp2010. This host name is resolved via the DNS on ProjDC in our lab environment. This lab setup has a fully configured SharePoint 2010 with a number of web applications, each with their own host name and associated service accounts; this is important later on)
  5. We added our webpart to the page and set the webpart properties to:
    • The URL for the WCF web service: http://ProjIIS75.proj.local:8081/callservice.svc
    • The SPN for the WCF web service: http/ProjIIS75.proj.local:8081
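As an aside, the add/deploy/activate steps above can be scripted rather than clicked through. From memory, something like the following SharePoint 2010 PowerShell (the solution file path, web application URL and feature name are assumptions for this sketch):

Add-SPSolution -LiteralPath C:\Drops\BlackMarble.Sabs.WcfWebParts.wsp
Install-SPSolution -Identity BlackMarble.Sabs.WcfWebParts.wsp -WebApplication http://share2010.proj.local -GACDeployment
Enable-SPFeature -Identity SabsWebParts -Url http://share2010.proj.local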

Note: we provide the URL and SPN as parameters because we build the WCF connection programmatically within the webpart. This is because it would be awkward to put this information in a web.config file on a multi-server SharePoint farm, and we don’t want to hard code the values.

Our Code

The WCF service is configured via its web.config:

<system.serviceModel>
  <bindings>
    <wsHttpBinding>
      <binding name="MyBinding">
        <security mode="Message">
          <message clientCredentialType="Windows" negotiateServiceCredential="false" establishSecurityContext="false" />
        </security>
      </binding>
    </wsHttpBinding>
  </bindings>
  <services>
    <service behaviorConfiguration="BlackMarble.Sabs.WcfService.CallsServiceBehavior" name="BlackMarble.Sabs.WcfService.CallsService">
      <endpoint address="" binding="wsHttpBinding" contract="BlackMarble.Sabs.WcfService.ICallsService" bindingConfiguration="MyBinding"></endpoint>
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="BlackMarble.Sabs.WcfService.CallsServiceBehavior">
        <serviceMetadata httpGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="true" />
        <serviceAuthorization impersonateCallerForAllOperations="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

The webpart does the same programmatically:

// Log the endpoint details passed in as webpart properties
log.Trace(String.Format("Using URL: {0} SPN: {1}", this.callServiceUrl, this.callServiceSpn));

// Build a binding to match the service's web.config: Kerberos message
// security, no credential negotiation and no secure conversation
var callServiceBinding = new WSHttpBinding();
callServiceBinding.Security.Mode = SecurityMode.Message;
callServiceBinding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
callServiceBinding.Security.Message.NegotiateServiceCredential = false;
callServiceBinding.Security.Message.EstablishSecurityContext = false;
callServiceBinding.MaxReceivedMessageSize = 2000000;
callServiceBinding.ReaderQuotas.MaxArrayLength = 2000000;

// The endpoint identity is the SPN we registered for the service host
var ea = new EndpointAddress(new Uri(this.callServiceUrl), EndpointIdentity.CreateSpnIdentity(this.callServiceSpn));

this.callServiceClient = new BlackMarble.Sabs.WcfWebParts.CallService.CallsServiceClient(callServiceBinding, ea);
this.callServiceClient.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;
this.callServiceClient.Open();
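On the service side, a quick way to prove the Kerberos hop is working is to log who WCF thinks the caller is once impersonation has happened. A minimal sketch (the operation name here is illustrative; our real service does more):

[OperationBehavior(Impersonation = ImpersonationOption.Required)]
public string WhoAmI()
{
    // With impersonateCallerForAllOperations="true" this should be the
    // SharePoint user's identity, not the AppPool's machine account
    var caller = ServiceSecurityContext.Current.WindowsIdentity;
    return String.Format("{0} ({1})", caller.Name, caller.ImpersonationLevel);
}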

Getting the Kerberos bits running

First, remember that this is a preconfigured test lab where the whole domain, including the SP2010 instance, is already set up for Kerberos authentication. These notes just detail the bits we needed to alter or check.

To make sure our new WCF service works in this environment we needed to do the following. All this editing can be done on the domain controller:

  1. Using ADSIEDIT, make sure the computer running the WCF web service, ProjIIS75, has an entry in its servicePrincipalName attribute for the correct protocol and port, i.e. HTTP/projiis75.proj.local:8081 (a command line/code alternative is sketched after this list)

    [Image: ADSIEDIT showing the servicePrincipalName attribute on the ProjIIS75 computer account]

  2. Using the Active Directory Users and Computers tool, make sure the computer running the WCF web service, ProjIIS75, is set to allow delegation

    [Image: the Delegation tab on the ProjIIS75 computer account]

  3. Using the Active Directory Users and Computers tool, make sure the service account running the SharePoint web application, in our case proj\sp2010_share, is set to allow Kerberos delegation to the SPN set in step 1, HTTP/projiis75.proj.local:8081. To do this you press the Add button, select the correct server, then pick the SPN from the list.

    [Image: the Delegation tab on the proj\sp2010_share account showing the added SPN]
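As an alternative to ADSIEDIT, the standard setspn tool can list or register these entries from a command prompt on the DC, e.g. setspn -L ProjIIS75 to list them, or setspn -S HTTP/projiis75.proj.local:8081 ProjIIS75 to add the one we need. If you would rather verify from code, here is a minimal C# sketch using System.DirectoryServices (the LDAP path is an assumption for our lab layout):

using System;
using System.DirectoryServices;

class SpnCheck
{
    static void Main()
    {
        // Bind to the computer account and list every SPN registered on it
        using (var computer = new DirectoryEntry("LDAP://CN=ProjIIS75,CN=Computers,DC=proj,DC=local"))
        {
            foreach (var spn in computer.Properties["servicePrincipalName"])
            {
                Console.WriteLine(spn);
            }
        }
    }
}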


IMPORTANT: Now you would expect that you could just set ‘Trust this user for delegation to any service’; however, we were unable to get this to work. This might just be something we set wrong, but if so I don’t know what it was.


Once this was all set we did an IIS reset on ProjSP2010, reloaded the SharePoint page, and it all leapt into life.


How to try to debug when it does not work


There is no simple answer to how to debug this type of system; if it fails it just seems not to work and you are left scratching your head. The best option is plenty of in-product logging, which I tend to surface using DebugView; WCFStorm can also be useful to check the WCF service is up.
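The in-product logging need not be anything fancy. Anything written via System.Diagnostics tracing goes through the DefaultTraceListener, which calls OutputDebugString, and that is exactly what DebugView captures. For example, inside a service operation:

// Will appear in DebugView running on the service box (no debugger needed)
System.Diagnostics.Trace.WriteLine(
    String.Format("CallsService: running as {0}",
        System.Security.Principal.WindowsIdentity.GetCurrent().Name));

This is also a handy way to confirm the impersonated identity has flowed through each tier.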




So I hope I find this post useful when I next need to rebuild this system. Maybe someone else will find it useful too.

Speaking at Gravitas’s Tech Talk #3 – “TFS & Visual Studio 2013 vs The Rest”

I am speaking at Gravitas’s Tech Talk #3 – “TFS & Visual Studio 2013 vs The Rest” on Tuesday March the 4th about:


“Microsoft’s Application Lifecycle Management product has had a lot of changes in the past couple of years. In this session we will look at how it can be used to provide a complete solution from project inception through development, testing and deployment, for projects using both Microsoft and other vendor technologies.”


Hope to see you there

GDR3 update for my Nokia 820

The GDR3 update for my Nokia 820 has at last arrived. As my phone is a developer unit it is at the end of the automated update list. I suppose I could have pulled it down by hand, but I was not in a major rush as I am not doing WP8 development at present.


The update seems to have gone on OK. The only strange thing was that I was getting low space warnings prior to the upgrade. I suspect this was because the new patch had been downloaded onto the phone’s main storage, which was a bit full of content from iPodcast (the otherwise excellent app I use for podcasts, which can’t store to my SD card). Just to be on the safe side I uninstalled iPodcast, did the GDR3 update and then reinstalled iPodcast. As I have the premium version I could easily restore my podcast lists, including what I had and had not listened to, from the cloud.

Upgraded older Build and Test Controllers to TFS 2013

All has been going well since our upgrade from TFS 2012 to 2013, no nasty surprises.


As I had a bit of time I thought it a good idea to start the updates of our build and lab/test systems. We had only upgraded our TFS 2012.3 server to 2013; we had not touched our build system (one 2012 controller and 7 agents on various VMs) or our Lab Management/test controller. Our plan, after a bit of thought, was to do a slow migration, putting in new 2013-generation build and test controllers alongside our 2012 ones. We would then decide what to do on an individual build agent VM basis, probably upgrading the build agents and connecting them to the new controller. There seems to be no good reason to rebuild the build agent VMs from scratch, given the specific SDKs and tools they each need.


So we created a new pair of Windows 2012 R2 domain-joined server VMs; on one we installed a test controller, and on the other a build controller and a single build agent.


Note: I always tend to favour a single build agent per VM, usually on a single-core VM. I tend to find most builds are IO bound not CPU bound, so having more, smaller VMs tends, I think, to be easier to manage at the VM hosting resource level.


Test Controller


Most of the use of our Test Controller is as part of our TFS Lab Management environments. If you load MTM 2013 you will see that it cannot manage a 2012 Test Controller; they appear as offline. Lab Management is meant to keep the test agents upgraded, so it should upgrade an agent from one point release to another, e.g. 2012.3 to 2012.4. However, this upgrade feature does not extend to 2012 to 2013 major release upgrades. Also, I have always found the automated deployment/upgrade of test agents as part of an environment deployment problematic at best; you often seem to suffer DNS and timeout issues. Easily the most reliable method is to make sure the correct (or at least compatible) test agents are installed on all the environment VMs prior to their configuration at the deployment/restart stage.


Given this, the process that seems to work for getting an environment’s test agents talking to the new 2013 Test Controller is:


  1. In MTM stop the environment
  2. Open the environment settings and change the controller to the new one
  3. Restart the environment; you will see the VMs show as not ready, as the test agents won’t configure.
  4. Connect to each VM
    1. Uninstall the 2012 Test Agent
    2. Install the 2013 Test Agent
  5. Stop and restart the environment and all should work – with luck the VMs will configure properly and show as ready
  6. If they don’t:
    1. Try a second restart; this sometimes sorts it.
    2. You can try a repair, re-entering the various passwords.
    3. If problems really persist, try running the Test Agent Configuration tool on each VM; press Next, Next, Next etc. and it will try to configure. It will probably fail, but hopefully it will have done enough port opening etc. to allow the next environment restart to work correctly.
    4. If it still fails you need to check the logs, but suspect a DNS issue.

Obviously you could move step 4 to the start if you make the fair assumption that manual intervention is going to be needed.


Build Controller


Swapping your build over from 2012 to 2013 will have site-specific issues. It all depends on what build activities you are using. If they are bound to the TFS 2012 API they may not work unless you rebuild them. However, from my first tests I have found my 2012 build process templates seem to work, whether I set my build controller’s ‘custom assemblies path’ to my 2012 DLL versions or their 2013 equivalents. So .NET is managing to resolve usable DLLs to get the build working.
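My working assumption as to why this works is that the 2013 build components ship assembly binding redirects mapping the 2012 (v11) object model onto the 2013 (v12) assemblies, something of this shape (illustrative only; check the actual .exe.config files on your build machine rather than trusting my memory):

<dependentAssembly>
  <assemblyIdentity name="Microsoft.TeamFoundation.Build.Workflow" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
  <bindingRedirect oldVersion="11.0.0.0" newVersion="12.0.0.0" />
</dependentAssembly>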


Obviously there is still more to do here: checking all my custom build assemblies, and maybe a look at revising the whole build process to make use of 2013 features, but that can wait.


What I have now allows me to upgrade our Windows 8.1 build agent VM so it can connect to our 2013 Build Controller, thus allowing us to run fully automated builds and tests of Windows 8.1 applications. Up to now with TFS 2012 we had only been able to get a basic build working by hacking the build process, as you need Visual Studio 2013 generation tools to fully build and test Windows 8.1 applications.




So we are going to have 2012 build and test controllers around for a while, but we have proved the migration is not going to be too bad. Maybe it just needs a bit of thought over some custom build assemblies.