
Problems re-publishing an Access site to SharePoint 2010

After applying SP1 to a SharePoint 2010 farm we found we were unable to run any macros in an Access Services site; every attempt gave a –4002 error. We had seen this error in the past, but the solutions that worked then did not help. As the site was critical, as a workaround we moved it to an unpatched SP2010 instance via a quick site collection backup and restore. This allowed us to dig into the problem at our leisure.
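For reference, a site collection move of this kind can be scripted with stsadm (the URLs and file names below are hypothetical, substitute your own):

```shell
REM On the patched farm: back up the site collection (hypothetical URL/path)
stsadm -o backup -url http://patchedfarm/sites/accessapp -filename accessapp.bak

REM Copy accessapp.bak to the unpatched farm, then restore it there
stsadm -o restore -url http://unpatchedfarm/sites/accessapp -filename accessapp.bak -overwrite
```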

Eventually we fixed the problem by deleting and recreating the Access Services application within SharePoint on the patched farm. We assume some property was changed/corrupted/deleted in the application of the service pack.
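If you need to do the same, the delete and recreate can also be scripted from the SharePoint 2010 Management Shell. The cmdlets below are the standard SP2010 ones, but the service application and application pool names are my assumptions, not from our farm:

```powershell
# Remove the broken Access Services service application (match is an assumption)
Get-SPServiceApplication | Where-Object { $_.TypeName -like "*Access*" } |
    Remove-SPServiceApplication -RemoveData -Confirm:$false

# Recreate it against an existing application pool (pool name is hypothetical)
New-SPAccessServiceApplication -Name "Access Database Service" -ApplicationPool "SharePoint Web Services Default"
```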

So we now had a working patched farm, but also a duplicate of the Access Services site with changed data. We could not just backup and restore this as other sites in the collection had also changed. It turns out getting this data back onto the production farm took more work than we expected. This is the process we used:

  1. Open the Access Services site in a browser on the duplicate server
  2. Select the ‘Open in Access’ option; we used Access 2010, which the site had originally been created in
  3. When Access had opened the site, use the ‘save as’ option to save a local copy of the DB. We now had a disconnected local copy on a PC. We thought we could just re-publish this, how wrong we were.
  4. We ran the web compatibility checker expecting no errors, but it reported a couple. In one form and one query, extra column references had been added that referenced the auto-created SharePoint library columns (date and ID stamps, basically). These had to be deleted by hand.
  5. We then could publish back to the production server
  6. We watched as the structure and data were published
  7. Then it errored. On checking the log we saw that it claimed a lookup reference had invalid data (though we could not see any offending rows and the lookup was working OK). Luckily the table in question contained temporary data we could just delete, so we tried to publish again
  8. Then it errored again. On checking the logs we saw it reported it could not copy to localhost – no idea why it was looking for localhost! Interestingly, if we tried to publish back to another site URL on the non-patched server it worked! Very strange
  9. On a whim I repeated this whole process but using Access 2013 RC, and strangely it worked

So I now had my Access Services site re-published and fully working on a patched farm. That was all a bit too complex for my tastes.

More on using the VS11 fake library to fake out SharePoint

I recently posted on how you could use the new fakes tools in VS11 to fake out SharePoint for testing purposes. I received comments on how I could make my Shim logic easier to read, so thought I would revisit the post. This led me down a bit of a complex trail – thanks to Pete Provost for pointing the way out!

When I did the previous post I had used SP2007; this was because I was comparing Microsoft Fakes with a similar sample I had written ages ago for Typemock Isolator. There was no real plan to this choice, it was just what I had to hand at the time. This time I decided to use SP2010. This is the process that actually worked (more on my mistakes later) …

  1. Using a Windows 7 PC that did not have SP2010 installed, I created a new C# Class Library project in VS11 Beta
  2. I added a reference to Microsoft.SharePoint.DLL (this was referenced from a local folder that contained all the DLLs from the SP2010 14 hive and also the GAC)
  3. THIS IS THE IMPORTANT BIT – I changed the project to target .NET 4.0, not the default 4.5. Now, I could have changed to .NET 3.5, which is what SP2010 targets, but this would mean I could not use MSTest as, since VS2010, this has targeted .NET 4.0. I could of course have changed to another testing framework that can target .NET 3.5, such as NUnit, as discussed in my previous post on the VS11 test runner.
  4. You can now right click on the Microsoft.SharePoint.DLL reference and ‘add fakes assembly’. A warning here: adding this reference is a bit slow, it took well over a minute on my PC. If you look in the VS Output window you see a message that the process is starting, then nothing until it finishes. Be patient, you only have to do it once! I understand that you can edit the .fakes XML file to reduce the scope of what is faked, which might help reduce the generation time. I have not experimented here yet.
  5. You should now see a new reference to the generated Microsoft.SharePoint fakes assembly and you can start to write your tests
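As a minimal sketch of what such a test can look like – the shimmed members and the expected title here are my own illustrative assumptions, not from a real project:

```csharp
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class WebTitleTests
{
    [TestMethod]
    public void OpenWeb_ReturnsShimmedTitle()
    {
        // All shims must live inside a ShimsContext
        using (ShimsContext.Create())
        {
            // Detour the SPSite(string) constructor so no real farm is needed
            ShimSPSite.ConstructorString = (site, url) => { };

            // Detour OpenWeb() on every SPSite instance to return a shimmed SPWeb
            ShimSPSite.AllInstances.OpenWeb = site =>
                new ShimSPWeb { TitleGet = () => "Fake Web" };

            var web = new SPSite("http://fake").OpenWeb();
            Assert.AreEqual("Fake Web", web.Title);
        }
    }
}
```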


So why did I get lost? Well, before I changed the targeted framework I had tried to keep adding extra references to DLLs that were referenced by the DLL I was trying to fake, just as mentioned in my previous post. This went on and on, adding many SharePoint and supporting DLLs, and I still ended up with errors and no Microsoft.SharePoint fakes assembly. In fact this is a really bad way to try to get out of the problem, as it does not help and you get strange warnings and errors about failures in faking that are not important or relevant e.g.

"\ShimTest\obj\Debug\Fakes\msp\f.csproj" (default target) (1) -> (CoreCompile target) -> "\ShimTest\f.cs (279923,32): error CS0544: 'Microsoft.SharePoint.ApplicationPages.WebControls.Fakes.StubAjaxCalendarView.ItemType': cannot override because 'Microsoft.SharePoint.WebControls.SPCalendarBase.ItemType' is not a property"

The key here is that you must target a framework that the thing you are trying to fake targets. For SP2010 this should really be .NET 3.5, but you seem to get away with .NET 4.0; 4.5 is certainly a step too far. If you have the wrong framework you can end up in a chain of added dependency references that you don’t need; they are confusing at best and may be causing the errors, not fixing them. In my case it seems a reference to Microsoft.SharePoint.Library.DLL stops everything working, even if you then switch to the correct framework. When all is working you don’t need to add the dependent references; this is all resolved behind the scenes, not by adding them explicitly.

So once I had my new clean project, with the correct framework targeted and just the right assemblies referenced and faked, I could write my tests. Now to experiment a bit more.

Update on using Typemock Isolator to allow webpart development without a Sharepoint server

I have in the past posted about developing SharePoint web parts without having to use a SharePoint server by using Typemock Isolator. This technique relies on using Cassini or IIS Express as the web server to host the aspx page that in turn contains the webpart. This is all well and good for SharePoint 2007, but we get a problem with SharePoint 2010 which seems to be due to 32/64bit issues.

Working with SharePoint 2007 assemblies when SharePoint 2010 assemblies are in the GAC 

I started this adventure with a SharePoint 2007 webpart solution setup as discussed in my previous post. In this solution’s web test harness I was only referencing the SharePoint 2007 Microsoft.Sharepoint.dll. This had been working fine on a PC that had never had SharePoint installed, the required DLL was loaded from a local solution folder of SharePoint assemblies.

This was until I installed SharePoint 2010 onto my Windows 7 development PC (a great way to do SharePoint development). This put the SharePoint 2010 assemblies into the GAC. So now when I ran my Sharepoint 2007 test harness I got the error

Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately.
Compiler Error Message: CS1705: Assembly 'Microsoft.SharePoint, Version=, Culture=neutral, PublicKeyToken=71e9bce111e9429c' uses 'Microsoft.SharePoint.Library, Version=, Culture=neutral, PublicKeyToken=71e9bce111e9429c' which has a higher version than referenced assembly 'Microsoft.SharePoint.Library, Version=, Culture=neutral, PublicKeyToken=71e9bce111e9429c'

The solution is fairly simple, assuming you want to work with the 2007 assemblies. All you need to do is make sure the test harness project also references the 2007 Microsoft.Sharepoint.library.dll so it does not pick up the version in the GAC.
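In practice this means giving the reference an explicit HintPath in the test harness project file, something like the following (the folder layout is my assumption – point it at wherever you keep your local SharePoint 2007 assemblies):

```xml
<!-- Hypothetical path to a local folder of SharePoint 2007 DLLs -->
<Reference Include="Microsoft.SharePoint.Library">
  <HintPath>..\SharePoint2007Assemblies\Microsoft.SharePoint.Library.dll</HintPath>
  <Private>True</Private>
</Reference>
```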


Once this is done the 2007 based test harness worked again

But what about using 2010 assemblies?

If you want to work against SharePoint 2010 assemblies there are other problems. If you just reference the 2010 Microsoft.sharepoint.dll you get the error

Could not load file or assembly 'Microsoft.Sharepoint.Sandbox' or one of its dependencies. An attempt was made to load a program with an incorrect format

As I said, on my PC I now have a local SharePoint installation, so I have the SharePoint 2010 assemblies in the GAC. It is from here that the test harness tries to load the Microsoft.Sharepoint.Sandbox.dll assembly. The problem is that this is not a standard MSIL assembly but a 64-bit one. The default Cassini development web server is 32-bit, hence the incorrect format error; the WOW64 technology behind the scenes cannot manage the loading. The only option is to use a 64-bit web server to address the problem; this rules out Cassini and IIS Express at this time as these are 32-bit only.
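If you want to confirm for yourself which assemblies are 64-bit only, the corflags tool from the Windows SDK reports this (a diagnostic aside, not something the original setup required):

```shell
REM Run from a Visual Studio / Windows SDK command prompt, in the assembly's folder.
REM "PE : PE32" means MSIL/32-bit capable; "PE : PE32+" means 64-bit only.
corflags Microsoft.SharePoint.Sandbox.dll
```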

A possible solution is to use the full IIS 7.5 installation available with Windows 7, as this must be 64-bit since it is able to run SharePoint 2010. The problem here is that when you load the test harness you get the error

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: TypeMock.TypeMockException:
*** Typemock Isolator is not currently enabled.
To enable do one of the following:
* To run Typemock Isolator as part of an automated process you can:
– run tests via TMockRunner.exe command line tool
– use 'TypeMockStart' tasks for MSBuild or NAnt
* To work with Typemock Isolator inside Visual Studio.NET:
set Tools->Enable Typemock Isolator from within Visual Studio
For more information consult the documentation (see 'Running' topic)

This is because this IIS instance is not under the control of Visual Studio, so it cannot start Isolator for you. To get round this you have to start Isolator manually; maybe you could do it in your test harness pages. However, you also have to remember that if you want to debug against this IIS instance you must run Visual Studio as administrator. OK, this will work, but I don’t like any of this. I really do try not to run as administrator these days.

So what we need is a 64-bit web server. The best option appears to be a community build of Cassini, which can be used as a direct replacement for it. This is still a 32-bit build by default, but if you pull the source down you can change this. You need to change all the projects from x86 to Any CPU, rebuild, and copy the resultant EXE and DLLs over the Cassini installation. I recommend you copy the 32-bit release build over first to get the right .config files in place. You probably don’t want to use the ones from the source code zip.

Once this is all done you have a web server that can load 32-bit and 64-bit assemblies without issue. So for my test project I referenced the SharePoint 2010 assemblies (I could maybe have referenced fewer, but this works)


So we have a workaround; once set up it is used automatically. It is just a shame that the default web servers are all built as x86 as opposed to Any CPU.

Linking a TFS work item to a specific version of a document in SharePoint

SharePoint, in my opinion, is a better home for a Word or Visio requirements document than TFS. You can use all the SharePoint document workspace features to allow collaboration in the production of the document. When you have done enough definition to create your project’s user stories or requirements, you can create them in TFS using whatever client you wish e.g. Visual Studio, Excel, Project etc.

You can add a hyperlink from each of these work items back to the SharePoint hosted document they relate to, so you still retain the single source document. The thing to note here is that you don’t have to link to the latest version of the document. If SharePoint’s revision control is enabled for the document library you can refer to any stored version, allowing the specification document to continue evolving for future releases whilst the development team are still able to reference the specific version their requirements are based on.

The process to do this is as follows:

Open your version history enabled document library, select the dropdown for a document and select version history


If you copy the hyperlink for the 4.0 version of the document you get an ordinary URL  “…/BlackMarble/SharePoint Engagement Document.docx”

If you copy the hyperlink for the 2.0 version of the document you get a URL like this with a version in it “.../_vti_history/1024/Black Marble/SharePoint Engagement Document.docx”

You can paste these into the ‘Add link to requirement’ dialog as often as required


So there is a link to each revision of the document


[More] Fun with WCF, SharePoint and Kerberos

This is a follow up to the post Fun with WCF, SharePoint and Kerberos – well it looks like fun with hindsight

When I wrote the last post I thought I had our WCF Kerberos issues sorted; I was wrong. I had not checked what happened when I tried to access the webpart from outside our TMG firewall. When I did, I was back with the error that I had no security token. To sort this we had to make some more changes.

This is the architecture we ended up with.


The problem was that the Sharepoint access rule used a listener in TMG that was set up for HTML form authentication against our AD


and the rule then tried to authenticate against our Sharepoint server via Kerberos using the negotiated setting in the rule. This worked for accessing the Sharepoint site itself, but the second hop to the WCF service failed. This was due to us transitioning between authentication methods.

The solution was to change the access rule to Constrained Kerberos (still with the same Sharepoint server web application SPN)


The TMG gateway computer (in the AD) then needed to be set to allow delegation. In my previous post we had just set up any machines requiring delegation to ‘Trust this computer for delegation to any service’. This did not work this time as we had forms authentication in the mix. We had to use ‘Trust this computer for delegation to specific services only’ AND ‘use any authentication protocol’. We then added the server hosting the WCF web service and the Sharepoint front end into the list of services that could be delegated to


So now we had it so that the firewall could delegate to the Sharepoint server SPN, but this was the wrong SPN for the webpart to use when talking to the WCF web service. To address this final problem I had to explicitly set the SPN in the programmatic creation of the WCF endpoint

this.callServiceClient = new CallService.CallsServiceClient(
    new EndpointAddress(new Uri("http://mywcfbox:8080/CallsService.svc"), EndpointIdentity.CreateSpnIdentity("http/mywcfbox:8080")));

By doing this, a different SPN is used to connect to the WCF web service (from inside the webpart hosted in Sharepoint) from the one used by the firewall to connect to the Sharepoint server itself.

Simple isn’t it! The key is that you never authenticated with the firewall using Kerberos, so it could not delegate what it did not have.

Fun with WCF, SharePoint and Kerberos – well it looks like fun with hindsight

I have been battling some WCF authentication problems for a while now; I have been migrating our internal support desk call tracking system so that it runs as a webpart hosted inside Sharepoint 2010 and uses WCF to access the backend services, all using AD authentication. This means both our staff and customers can use a single sign-on for all SharePoint and support desk operations. This replaced our older architecture using forms authentication and a complex mix of WCF and ASMX web services that had grown up over time; this call tracking system started as an Access DB with a VB6 front end well over 10 years ago!

As with most of our SharePoint development I try not to work inside a SharePoint environment when developing. For this project this was easy as the webpart is hosted in SharePoint but makes no calls to any SharePoint artefacts. This meant I could host the webpart within a test .ASPX web page for my development without the need to mock out SharePoint. This I did, refactoring my old collection of web services to the new WCF AD-secured architecture.

So at the end of this refactoring I thought I had a working webpart, but when I deployed it to our SharePoint 2010 farm it did not work. If I checked my logs I saw I had WCF authentication errors. The webpart programmatically created WCF bindings, worked in my test harness, but failed when in production.

A bit of reading soon showed the problem lay in the Kerberos double hop issues, and this is where the fun began. In this post I have tried to detail the solution not all the dead ends I went down to get there. The problem is that for this type of issue there is one valid solution, and millions of incorrect ones, and the diagnostic options are few and far between.

So you may be asking, what is the Kerberos double hop issue? Well, a look at my test setup shows the problem.

[It is worth at this point getting an understanding of Kerberos; the TechEd session ‘Kerberos with Mark Minasi’ is a good primer]


The problem with this test setup is that the browser and the web server that hosts the test web page (and hence the webpart) are on the same box and running under the same account. Hence they have full access to the credentials and can pass them onto the WCF host, so there is no double hop.

However when we look at the production SharePoint architecture


We see that we do have a double hop. The PC (browser) passes credentials to the SharePoint server. This needs to be able to pass them onto the WCF hosted services so they can use them to access data for the original client account (the one logged into the PC), but by default this is not allowed. This is a classic Kerberos double hop. The SharePoint server must be set up so that it is allowed to delegate the Kerberos tickets to the next host, and the WCF host must be set up to accept the Kerberos ticket.
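A quick way to see which hop you are on, assuming you can add temporary logging to the WCF service, is to dump the caller's token inside a service operation. This is a diagnostic sketch of my own, not part of the original code:

```csharp
using System.Diagnostics;
using System.ServiceModel;

// Inside any [OperationContract] implementation:
// Identification means the token cannot be used for a further hop;
// Impersonation or Delegation means the double hop can work.
var caller = ServiceSecurityContext.Current.WindowsIdentity;
Trace.WriteLine(string.Format("Caller {0}, level {1}",
    caller.Name, caller.ImpersonationLevel));
```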

Frankly we fiddled for ages trying to sort this in SharePoint, but were getting nowhere. The key step for me was to modify my test harness so I could get the same issues outside SharePoint. As with all technical problems, the answer is usually to create a simpler model that can exhibit the same problem. The main features of this change were that I had to have three boxes and needed to be running the web pages inside a web server where I could control the account it ran as, i.e. not Visual Studio’s default Cassini development web server.

So I built this system


Using this model I could get the same errors inside and outside of SharePoint. I could then build up to a solution step by step. It is worth noting that I found the best debugging option was to run DebugView on the middle development PC hosting the IIS server. This showed all the logging information from my webpart. I saw no errors on the WCF host as the failure was at the WCF authentication level, well before my code was reached.

Next I started from the WCF kerberos sample on Marbie’s blog. I modified the programmatic binding in the webpart to match this sample

var callServiceBinding = new WSHttpBinding();
callServiceBinding.Security.Mode = SecurityMode.Message;
callServiceBinding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
callServiceBinding.Security.Message.NegotiateServiceCredential = false;
callServiceBinding.Security.Message.EstablishSecurityContext = false;
callServiceBinding.MaxReceivedMessageSize = 2000000;
this.callServiceClient = new BlackMarble.Sabs.WcfWebParts.CallService.CallsServiceClient(
    new EndpointAddress(new Uri("http://mywcfbox:8080/CallsService")));
this.callServiceClient.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;

I then created a new console application wrapper for my web service. This again used the programmatic binding from the sample.

static void Main(string[] args)
{
    // create the service host
    ServiceHost myServiceHost = new ServiceHost(typeof(CallsService));
    // create the binding
    var binding = new WSHttpBinding();
    binding.Security.Mode = SecurityMode.Message;
    binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
    // disable credential negotiation and establishment of the security context
    binding.Security.Message.NegotiateServiceCredential = false;
    binding.Security.Message.EstablishSecurityContext = false;
    // Create a URI for the endpoint address
    Uri httpUri = new Uri("http://mywcfbox:8080/CallsService");
    // Create the Endpoint Address with the SPN for the Identity
    EndpointAddress ea = new EndpointAddress(httpUri,
        EndpointIdentity.CreateSpnIdentity("HOST/mywcfbox"));
    // Get the contract from the interface
    ContractDescription contract = ContractDescription.GetContract(typeof(ICallsService));
    // Create a new Service Endpoint
    ServiceEndpoint se = new ServiceEndpoint(contract, binding, ea);
    // Add the Service Endpoint to the service
    myServiceHost.Description.Endpoints.Add(se);
    // Open the service
    myServiceHost.Open();
    Console.WriteLine("Listening... " + myServiceHost.Description.Endpoints[0].ListenUri.ToString());
    Console.ReadLine();
    // Close the service
    myServiceHost.Close();
}
I then needed to run the console server application on the WCF host. I had made sure that the console server was using the same ports as I had been using in IIS. Next I needed to run the server as a service account. I copied this server application to the WCF server I had been running my services within IIS on; obviously I stopped the IIS-hosted site first to free up the IP port for my endpoint.

As Marbie’s blog stated, I needed to run my server console application as a service account (Network Service or Local System). To do this I used the at command to schedule it starting; this is because you cannot log in as either of these accounts and also cannot use runas as they have no passwords. So my start command was as below, where the time was a minute or two in the future.

at 15:50 cmd /c c:\tmp\WCFServer.exe

To check the server was running I used Task Manager and netstat –a to make sure something was listening on the expected account and port, in my case Local System and 8080. To stop the service I also used Task Manager.

I next needed to register the SPN of the WCF endpoint. This was done with the command

setspn -a HOST/mywcfbox mywcfbox

Note that the final parameter was mywcfbox (the server name). In effect I was saying that my service would run as a system service account (Network Service or Local System), which for me was fine. So what had this command done? It put an entry in Active Directory to say that this host and this account are running an approved service.

Note: do make sure you only declare a given SPN once; if you duplicate an SPN neither works, this is a by-design security feature. You can check the SPNs defined using

setspn –l mywcfbox

I then tried to load my test web page, but it still did not work. This was because the DevelopmentPC, hosting the web server, was not set to allow delegation. This is again set in the AD. To set it I:

  1. connected to the Domain Server
  2. selected ‘Manage users and computers in Active Directory’.
  3. browsed to the computer name (DevelopmentPC) in the ‘Computers’ tree
  4. right click to select ‘properties’
  5. selected the ‘Delegation’ tab.
  6. and set ‘Trust this computer for delegation to any service’.

I also made sure that the IIS server settings on the DevelopmentPC were set as follows, to make sure the credentials were captured and passed on.


Once all this was done it all leapt into life. I could load and use my test web page from a browser on either the DevelopmentPC itself or the other PC.

The next step was to put the programmatically declared WCF bindings into the IIS web server’s web.config, as I still wanted to host my web service in IIS. This gave me a web.config system.serviceModel section of

<system.serviceModel>
  <bindings>
    <wsHttpBinding>
      <binding name="SabsBinding">
        <security mode="Message">
          <message clientCredentialType="Windows" negotiateServiceCredential="false" establishSecurityContext="false" />
        </security>
      </binding>
    </wsHttpBinding>
  </bindings>
  <services>
    <service behaviorConfiguration="BlackMarble.Sabs.WcfService.CallsServiceBehavior" name="BlackMarble.Sabs.WcfService.CallsService">
      <endpoint address="" binding="wsHttpBinding" contract="BlackMarble.Sabs.WcfService.ICallsService" bindingConfiguration="SabsBinding" />
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="BlackMarble.Sabs.WcfService.CallsServiceBehavior">
        <serviceMetadata httpGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="true" />
        <serviceAuthorization impersonateCallerForAllOperations="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

I then stopped the EXE-based server, made sure I had the current service code in my IIS-hosted version, and restarted IIS, so my WCF web service was running as Network Service under IIS7 and .NET 4. It still worked, so I now had an end-to-end solution using Kerberos. I knew both my server and client had valid configurations in the format I wanted.

Next I upgraded my Sharepoint solution so that it included the revised webpart code and tested again, and guess what, it did not work. So it was time to think about what was different between my test harness and Sharepoint.

The basic SharePoint logical stack is as follows


The key was the account the webpart was running under. In my test rig the IIS server was running as Network Service, hence it was correct to set in the AD that delegation was allowed for the computer DevelopmentPC. On our Sharepoint farm we had allowed similar delegation for SharepointServer1 and SharepointServer2 (hence Network Service on these servers). However our webpart was not running under a Network Service account, but under a named domain account. It was this account, blackmarble\spapp, that needed to be granted delegation rights in the AD.

Still this was not the end of it; all these changes needed to be synchronised out to the various boxes, but after a repadmin on the domain controller and an IISreset on both SharePoint front end servers it all started working.

So I have the solution I was after. I can start to shut off all the old systems I was using and, more importantly, I have a simpler stable model for future development. But what have I learnt? Well, Kerberos is not as mind-bending as it first appears, but you do need a good basic understanding of what is going on. Also there are great tools like klist to help look at Kerberos tickets, but for problems like this the issue is more a complete lack of a ticket. The only solution is to build up your system step by step. Trust me, there is no quick fix, and you learn far more from failure than from success.