Back in January I did a post, How long is my TFS 2010 to 2013 upgrade going to take? I have now done some more work with one of the clients and have more data. Specifically, the initial trial was 2010 > 2013 RTM on a single-tier test VM; we have now done a test upgrade from 2010 > 2013.2 on the same VM, and also one to a production-quality dual-tier system.
The key lessons are:
- There are around 150 more steps to go from 2013 RTM to 2013.2, so the upgrade takes a good deal longer.
- The dual-tier production hardware is nearly twice as fast at doing the upgrade, though the initial step (step 31, moving the source code) is not that much faster; it is the steps after this that are quicker. We put it down to far better SQL throughput.
DDD North is coming to the University of Leeds on Saturday 18 October.
It is now open for session submissions.
The ALM Rangers are again producing a list of useful tools and widgets for TFS. It can be found at aka.ms/widgets and should be updated regularly.
I am currently involved in moving some TFS TFVC-hosted source to a TFS Git repository. The first step was to clone the source for a team project from TFS using the command:
git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection '$My Project' localrepo1
and it worked fine. However, the next project I tried to move had no space in the source path:
git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection '$MyProject' localrepo2
This gave the error:
git-tf: A server path must be absolute.
It turns out the problem was the single quotes. Remove these and the command worked as expected:
git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection $MyProject localrepo2
It seems you should only use quotes when there is a space in the path name.
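Once the history is cloned locally, moving it into the new TFS Git repository is just standard Git. As a rough sketch, assuming a Git-based team project called MyProject has already been created in the same collection (the names and URL here are illustrative, not a real setup):
cd localrepo2
# point the local clone at the new TFS Git repo and push the converted history
git remote add origin http://tfsserver01:8080/tfs/defaultcollection/_git/MyProject
git push origin master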
Seven whole years ago I wrote about re-reading Douglas Coupland's Microserfs, how it compared to his then-new book JPod, and how they both reflected the IT world of their time. Speculative fiction always says more about the time it is written in than the future it predicts.
I have just read 'The Circle' by Dave Eggers, which in many ways is a similar book for our social-media, Big Brother-monitored age. I will leave it to you to decide if it is a utopia or a dystopia, but it is well worth a read.
After my session at Techorama last week I was asked some questions about how we built our TFS Lab Management infrastructure. Well, here is a bit more detail, with thanks to Rik for correcting what I had misremembered and providing much of the detail.
For SQL we have two physical servers with Intel processors. Each has a pair of mirrored disks for the OS and a RAID5 group of disks for data. We use SQL 2012 Enterprise Always On for replication to keep the DBs in sync. The servers are part of a Windows cluster (needed for Always On), and we use a VM to give a third server in the witness role; this is hosted on a production Hyper-V cloud. We have a number of availability groups on this platform, basically one per service we run. This allows us to split the read/write load between the two servers (unless they have failed over to a single box). If we had only one availability group for all the DBs, one node would be doing all the read/write work and the other would be read-only, so not that balanced.
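For anyone wanting the same shape of setup, the rough PowerShell equivalent looks something like the following. This is only a sketch; all the machine, group and database names are made up for illustration, and it assumes the Failover Clustering feature and the SQL Server 2012 PowerShell module are installed and the databases are already prepared for Always On:
Import-Module FailoverClusters
Import-Module SQLPS -DisableNameChecking
# two physical SQL nodes plus the witness VM to keep quorum
New-Cluster -Name SQLCLUSTER -Node SQL01, SQL02, WITNESS01 -NoStorage
# define a synchronous replica on each node, then create one availability group per service
$r1 = New-SqlAvailabilityReplica -Name "SQL01" -EndpointUrl "TCP://sql01.example.local:5022" -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
$r2 = New-SqlAvailabilityReplica -Name "SQL02" -EndpointUrl "TCP://sql02.example.local:5022" -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
New-SqlAvailabilityGroup -Name "TfsAG" -Path "SQLSERVER:\SQL\SQL01\DEFAULT" -AvailabilityReplica $r1, $r2 -Database "Tfs_Configuration", "Tfs_DefaultCollection"
Repeat the last three lines with a different primary node for each service's group and the read/write load gets spread across both boxes.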
SCVMM runs on a physical server with a pair of hardware-mirrored 2TB disks for 2TB of storage. That's split into two partitions, as you can't use data de-duplication on the OS volume of Windows. This allows us to have something like 5TB of Lab VM images stored on the SCVMM library share that's hosted on the SCVMM server. This share is for Lab Management use only.
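If you want to do the same, turning de-duplication on for the data partition is only a couple of lines of PowerShell on Windows Server 2012 R2 (the D: drive letter is just my assumption for the non-OS partition):
Import-Module Deduplication
# dedup is only supported on non-OS data volumes
Enable-DedupVolume -Volume "D:"
# kick off an optimisation pass now rather than waiting for the schedule
Start-DedupJob -Volume "D:" -Type Optimization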
We also have two physical servers that make up a Windows cluster with a Cluster Shared Volume on an iSCSI SAN. This hosts a number of SCVMM libraries for ISO images, production VM images and test stuff. Data de-duplication is again giving us an 80% space saving on the SAN (ISO images of OSes and VHDs of installed OSes dedupe really well).
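You can check what de-duplication is actually saving with something like the line below (the volume letter is again hypothetical); the SavingsRate figure is where a number like our 80% comes from:
Get-DedupVolume -Volume "E:" | Format-List Volume, SavedSpace, SavingsRate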
Our Lab cloud currently has three AMD-based servers. They use the same disk setup as the SQL boxes, with a mirrored pair for the OS and RAID5 for VM storage.
Our production Hyper-V cloud also has three servers, but this time in a Windows cluster using a Cluster Shared Volume on our other iSCSI SAN for VM storage, so it can do automated failover of VMs.
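Building that shape of cluster is again only a few lines of PowerShell. A sketch, assuming three hypothetical hosts HV01 to HV03 and an iSCSI LUN already presented to all of them and added as a cluster disk:
Import-Module FailoverClusters
New-Cluster -Name HVCLUSTER -Node HV01, HV02, HV03
# promote the shared iSCSI disk to a Cluster Shared Volume so any node can run VMs from it
Add-ClusterSharedVolume -Name "Cluster Disk 1"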
Each of the SQL servers, SCVMM servers and Lab Hyper-V servers uses Windows Server 2012 R2 NIC teaming to combine 2 x 1Gbit NICs, which gives us better throughput and failover. The lab servers have one team for VM traffic and one team for the Hyper-V management traffic that is used when deploying VMs. That means we can push VMs around pretty much as fast as the disks will move data in either direction, without needing expensive 10Gbit Ethernet.
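The teams themselves are created with the built-in 2012 R2 cmdlets; the adapter and team names below are made up for illustration:
# one team for VM traffic, bound to a Hyper-V virtual switch
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1", "NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "VM Traffic" -NetAdapterName "VMTeam" -AllowManagementOS $false
# and one for the Hyper-V management traffic used when deploying VMs
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC3", "NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic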
So I hope that answers any questions.