I have been having some binding problems when trying to use Ivonna 2.0.0 against a version of Typemock Isolator other than the 5.3.0 build it was built to run against. This is a known issue: if your versions of Ivonna and Typemock don’t match, you have to use .NET binding redirection to get around the problem.
So to track down the exact problem I used the Fusion logger shipped with the .NET SDK (via fuslogvw.exe). This in itself has been an interesting experience. A few points are worth noting:
- You cannot alter the settings (as to what it logs) from fuslogvw.exe unless you are running as administrator (because these are really just registry edits under the HKLM\SOFTWARE\Microsoft\Fusion key). However, you can use the viewer to view logs even when not an administrator, as long as the registry entries are correct.
- I could only get the Fusion log to work if I was running my ASP.NET application in Visual Studio 2008 as administrator; if I was a standard user nothing was logged.
- You have to remember to press refresh in fuslogvw.exe a lot. If you don’t, you keep thinking it is not working when it really is.
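For reference, the settings the viewer toggles are just values under that Fusion key. A sketch of a typical set as a .reg fragment (the log path here is illustrative, choose your own; the folder must already exist):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion]
; log all binds, not just failures
"ForceLog"=dword:00000001
"LogFailures"=dword:00000001
"LogResourceBinds"=dword:00000001
; illustrative path - any existing folder will do
"LogPath"="C:\\FusionLogs"
```

Setting these by hand (as administrator) and then using fuslogvw.exe purely as a viewer is a reasonable workaround when you cannot run the viewer elevated.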
Anyway, using the Fusion logger I found I had two problem assemblies, not just the one I had expected. I had guessed I needed to intercept the loading of the main Typemock assembly, but the Fusion logger showed me I also needed to intercept the Typemock.Integration assembly. I also needed to reference the Typemock.Integration assembly in my test project and make sure it was copied locally (something I had not needed to do explicitly when using Typemock 5.3.0, where I assume it had been found via the GAC).
Now it is important to remember that if you are using MSTest and Ivonna you need to point the build directory of the test project at the bin directory of the Web Application under test. This means that the .NET loader will check the web.config in the web application for any binding information, not just the app.config in the test project as I had first assumed.
So all this means that I needed to add the following to my Web Application’s web.config and app.config.
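A sketch of the shape of that binding redirection is below. The version numbers and public key token are illustrative only; take the real values from the Typemock assemblies actually installed on your machine:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- illustrative versions/token: check your own Typemock install -->
        <assemblyIdentity name="TypeMock" publicKeyToken="3dae460033b8d8e2" culture="neutral" />
        <bindingRedirect oldVersion="5.3.0.0" newVersion="5.3.1.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="TypeMock.Integration" publicKeyToken="3dae460033b8d8e2" culture="neutral" />
        <bindingRedirect oldVersion="5.3.0.0" newVersion="5.3.1.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Note that both assemblies need a redirect, matching the two binding failures the Fusion log showed.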
Once this was done all my tests loaded as expected.
I have been looking at the various install and upgrade stories for the TFS 2010 Beta. I have to say they are very nice compared to the older TFS versions. You now have a SharePoint-like model where you install the product then use a separate configuration tool to upgrade or set up the features required. There are plenty of places to verify your settings as you go along, greatly reducing the potential for mistakes.
One side effect of this model is that it is vital to get all your prerequisites in place. The lack of these has been the cause of the only upgrade scenario I have tried that has failed. This was on a VPC I used for TFS 2008 demos. This VPC used a differencing VHD in the older 2004 format that had a 16Gb limit, and this disk was virtually full. To upgrade to TFS 2010 I needed to upgrade SQL to 2008, and this in turn needed Visual Studio 2008 patched to SP1, which needed over 6Gb of free space – which was never going to happen on that VHD. So my upgrade failed, but that said this is not a realistic scenario; who has servers with just 16Gb these days!
Ok, a bit sweeping, but I think there is truth in this: if you have to resort to a mocking framework (such as Typemock, the one I use) I think it is vital to ask ‘why am I using this tool?’ I think there are three possible answers:
- I have to mock some black box that is huge and messy; if I don’t mock it, any isolated testing is impossible, e.g. SharePoint.
- I have to mock a complex object. I could write it all by hand, but it is quicker to use an auto-mocking framework – why do loads of typing when a tool can generate it for me? (The same argument as for why refactoring tools are good: they are faster than me typing and make fewer mistakes.)
- My own code is badly designed and the only way to test it is to use a mocking framework to swap out functional units via ‘magic’ at runtime.
If the bit of code I am testing falls into either of the first two categories it is OK, but if it is in the third I know I must seriously consider some refactoring. Ok, this is not always possible for technical or budgetary reasons, but I should at least consider it. Actually you could consider category 1 a special case of category 3: a more testable design may be possible, but it is out of your control.
So given this I looked at the new Typemock feature with interest: the ability to fake out DateTime.Now, something you have not been able to do in the past due to the DateTime class’s deep location in the .NET framework. OK, it is a really cool feature, but that is certainly not a good enough reason to use it. I have to ask: if I need to mock this call out, does my code stink?
Historically I would have defined an interface for a date service and used it to pass in a test or production implementation using dependency injection, e.g.
public interface IDateProvider { DateTime GetCurrentDate(); }

public class MyApplication
{
    private readonly IDateProvider dateProvider;
    public MyApplication(IDateProvider dateProvider) { this.dateProvider = dateProvider; }

    public void DoSomething()
    {
        DateTime date1 = dateProvider.GetCurrentDate(); // so we use this
        DateTime date2 = DateTime.Now;                  // as opposed to this
    }
}
So in the new world with the new Typemock feature I have three options:
- Just call DateTime.Now in my code, because now I know I can use Typemock to intercept the call and return the value I want for test purposes
- Write my own date provider and use dependency injection to swap in different versions (or, if I want to be really flexible, use an IoC framework like Castle Windsor)
- Write my own date provider class with a static GetDate method, but not use dependency injection; just call the method directly wherever I would have called DateTime.Now, and use Typemock to intercept calls to this static method in tests (the old way to get round the limitation that Typemock cannot mock classes from mscorlib)
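As a minimal sketch of the second option, a hand-written stub can stand in for the production provider in a test. The StubDateProvider name and the fixed date are illustrative, not part of Typemock or any framework:

```csharp
using System;

public interface IDateProvider { DateTime GetCurrentDate(); }

// Hypothetical stub used only by tests: always returns a known date
public class StubDateProvider : IDateProvider
{
    private readonly DateTime fixedDate;
    public StubDateProvider(DateTime fixedDate) { this.fixedDate = fixedDate; }
    public DateTime GetCurrentDate() { return fixedDate; }
}
```

In a test you would then construct the code under test with the stub, e.g. `new MyApplication(new StubDateProvider(new DateTime(2009, 1, 1)))`, so the “current” date is completely deterministic with no mocking framework involved.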
I think this brings me back full circle to my first question: does the fact I use the new feature of Typemock to mock out DateTime.Now mean my code stinks? Well, after a bit of thought, I think it does. I would always favour putting in some design patterns to aid testing, so in this case some dependency injection would appear the best option. Like all services it would allow me to centralise all date functions in one place – a good SOA pattern. With all my date services in one place I can make a sensible choice of how I want to mock them out, manually or via an auto-mocking framework.
So in summary: in mocking, as in so many things in life, just because you can do something is no reason why you should. If you can, it is better to address a code smell with good design than with a clever tool.
DD-SW in Taunton seemed to go well; a big thank you to Guy and the rest of the Bristol .NET user group for all their work getting this event up and running. It was also nice to see new faces; it is certainly a good idea to get the DDD family of events out to places beyond Reading, spreading the good work of the community across the country.
Thank you to those who attended my session on Scrum; I hope you all found it useful. You can find a virtually identical set of the slides on the Black Marble web site, and the actual stack I used will be up on the DD-SW site soon.
I actually managed to attend some sessions this time. As usual this just means more work, as I invariably realise I have to spend some time learning some new technologies. This time it was MEF and jQuery, the latter a technology I have ignored too long. It was also great to see a truly mind-bending session on Expression trees; we need to see more of these deep dive sessions at community events. I have never checked to see whether it is that they are not proposed or that they are not voted for. Can it be true the community just wants level 200 general overviews?
Anyway, another great day – a pointer to everyone that if you haven’t been to a DDD event you really should go.
It is Developer Day South West this weekend where I will be speaking on Scrum. I may also do a lunch time grok talk on SharePoint and Typemock Isolator as I did at Developer Day Scotland.
I think there are still spaces at this event, so if you can make your way down to Taunton on Saturday I think it will be well worth the trip.
Since moving to Windows 7 I have encrypted all my USB pen drives and external USB disk drives with BitLocker To Go. This has been working great for me: I have noticed no performance problems and it gives nice peace of mind. The only irritation is that when you plug an encrypted drive into an XP or Vista PC you can’t read it, but this is meant to be addressed.
However, I have seen one issue: there seems to be a timeout, and if the BitLockered device is not accessed for a while (well over an hour is my guess, I have not timed it) it relocks itself and the password has to be re-entered. I can’t find any setting anywhere to control this timeout. I suppose the workaround is to set the BitLockered device to always automatically unlock on my PC, but I do like the security of having to enter the key manually.
The other possibility is that it is not a BitLocker thing at all, and that my USB ports are resetting, in effect reattaching the device. I suppose the effect is the same.
As my external USB pod contains mostly Virtual PC images, having the underlying disk in effect removed while they are running is a bit of a problem; but as long as you know it might happen you can live with it.
When you set up a dual or multi-server TFS installation you need to specify the location of the OLAP Analysis Services instance that will be used for the reporting warehouse. As with much of the TFS installation and configuration process there is a test button to confirm your settings will work; these are always worth pressing. If there is a problem you could get a TF255048 error, and as the text says this hints that the server cannot be found or that you have no rights to access it, which may well be the case.
Well, there is another thing to consider: the firewall on the SQL server. On my default SQL 2008 install the firewall was not opened to allow incoming connections to the OLAP service. Once TCP port 2383 was opened to incoming traffic the test passed and I could move on to the next stage of the configuration.
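On Windows Server 2008 the firewall rule can be added from an elevated command prompt on the SQL server; the rule name below is arbitrary, and 2383 is the default port for a default Analysis Services instance (named instances use a different, dynamic port):

```shell
netsh advfirewall firewall add rule name="SQL Analysis Services (TCP 2383)" dir=in action=allow protocol=TCP localport=2383
```

After adding the rule, re-run the test button in the TFS configuration tool to confirm the connection now succeeds.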
TFS 2010 provides far more options for the configuration of your server than previous versions. You can now easily make use of any existing server resources you have, such as SharePoint farms or Enterprise SQL installations. Today I was looking at one of these ‘less standard’ setups using some of our test lab equipment (hence the somewhat strange mix of 32bit and 64bit hardware) and hit a problem with the TFS Beta 1 release configuration tool.
The key point to note here is that in previous versions of TFS, Reporting Services would need to be installed on the AT (though it could still use a reports DB stored on the DT). With 2010 this is no longer the case; now the Reporting Services instance on the DT can be used directly. That said, it must be remembered that the Reporting Services instance must be dedicated to the TFS install, so in most cases it is more sensible to put it on the TFS AT. You probably don’t want to be dedicating the Reporting Services instance on your enterprise SQL server to TFS alone, and you probably don’t want to expose your SQL server to web requests by having it host the Reporting Services instance.
But back to the actual problem; when I ran the TFS configuration tool and tried to configure the OLAP source for the Reporting Services I got the error that the Microsoft.AnalysisServices assemblies could not be found.
Note: if I skipped the setup of Reporting Services, the configuration tool completed without any issue; it is certainly a huge step forward in the ease of installation of TFS. However, beware: if you skip the Reporting Services setup in the initial run of the configuration tool, in Beta 1 you have no way to configure it later.
The answer is to install a single SQL component on the AT – the “Client tools connectivity” feature. Once this is done the right assemblies are in the GAC and you can proceed.
Remember that in general you will not see this issue, as the feature is installed when Reporting Services is installed on the AT.
I have been playing around with some workflow code for a demo I am doing. This has meant creating and deleting a workflow project as I refine the demo. Whilst going through the process of deleting and recreating the workflow (using the same name for the project/workflow but creating a new project from the VS file menu) I hit the problem that when the new version of the workflow was run I got a “failed to start” error.
After checking the SharePoint log in the 12 hive I found that the workflow runtime was trying to load the code-beside assembly using the old assembly name/public key, which obviously it could not find as the new version was in the GAC. Once I corrected the old PublicKeyToken for the CodeBesideAssembly in the workflow.xml file and redeployed it, all was OK.
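The attribute in question lives on the Workflow element in workflow.xml. A sketch with placeholder names (everything here except the element and attribute names is illustrative; the token must match the newly signed assembly in the GAC):

```xml
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Workflow
    Name="MyWorkflow"
    Id="GUID-OF-YOUR-WORKFLOW"
    CodeBesideClass="MyProject.MyWorkflow"
    CodeBesideAssembly="MyProject, Version=1.0.0.0, Culture=neutral, PublicKeyToken=TOKEN-OF-NEW-ASSEMBLY">
    <!-- MetaData, AssociationUrl etc. omitted -->
  </Workflow>
</Elements>
```

A quick way to get the real token is `sn -T MyProject.dll` against the newly built assembly, then paste it into the CodeBesideAssembly attribute before redeploying.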
Today the Visual Studio 2010 Team Suite Beta 1 and Visual Studio 2010 Team Foundation Server Beta 1 became available to download for MSDN subscribers and will be available to the general public on Wednesday.