Why In-Place Upgrades?

In a comment on my last post, someone asked me why in-place upgrades are better than side-by-side upgrades. I thought it worth a post, but it’s an opinion piece more than a technical piece.

Honestly, if I had been on the committee that decided “side by side or in-place?” I don’t know how I would have voted. This is a really hard problem, and they didn’t invite me to the meeting. But I will summarize where I think we stand nearly two years later. If you want a review, this Scott Hanselman post and the linked Rick Strahl post cover it.

Before I tell you why in-place upgrades are better, let me tell you how they are worse. They are scary. Really, really scary. An in-place update means you can get a midnight phone call that your application in production is broken. You can get this call because a third party tool throws an error it never threw before, and only throws that error in an obscure data scenario. And the only thing standing between you and this disaster is Microsoft’s competence. Yeah, I think that’s very scary.

Scary enough that I spent weeks researching the known breaking changes and summarizing their impact in what I think is a must-watch 45 minutes in my What’s New in .NET 4.5 Pluralsight video. The language teams are the best at considering, avoiding, and communicating known breaking changes. CLR/Framework ranks a ways down, and some ancillary teams like Workflow suck at it.

So why should Microsoft ask you to take such an astounding risk? Why ask you to stake the future of your company on their (Microsoft’s) competence?

Because the world is moving at an astounding pace. The notion of “cadence” (I so hate this use of that word) is that releases will be frequent and small. Not because some big mucky-muck at Microsoft declared that releases will be frequent and small, but because that is the world our industry has created. It’s there in open source, it’s there with your phone’s OS upgrades, it’s there in the response to new security threats, and it’s there because of a world clamoring for new features.

Let’s say Microsoft updates .NET twice a year. In three years, side-by-side updates would mean six copies of .NET on users’ machines. It would mean third party tools would have to test against six scenarios (or force an update themselves, which would be really bad). Your application libraries would have to be coordinated across six versions, probably meaning many devices would carry multiple copies of the libraries used by different versions of your different apps. It would mean your libraries had to load side by side too.

And it might not be a cheap little desktop with all the memory and all the hard disk space and inexpensive power you could ever want. It might be a phone, a tablet, or a cloud system you’re paying to access.

And with rapid updates, the line quickly blurs between what’s a real release (4.5 or 4.5.1) and a stability release (like the important Jan 2013 release). You have to keep up with all the changes to know when to move forward from a technical perspective.

And if all that wasn’t enough, the concept of the .NET Framework has fundamentally shifted. Behind the scenes it’s tied up into many, many little satchels. It’s not a small set of three or six frameworks. It would be impossible for Microsoft to test every combination of the presence of different parts of the framework across a large number of in-the-wild releases. In-place releases mean things are updated as needed into a finite set of tested scenarios.

And finally, there are the security implications of keeping not just one or two versions patched with security releases, but a very large number of branched framework versions.

In the end there must have been a weighing of options: the demand that we trust Microsoft’s competence to avoid changes that break our apps, lined up against a nightmare web of multiple side-by-side framework versions. And there was a third choice: not to move so fast. For each of these three options, there are people reading this who think it’s blatantly clear that that option was the only logical one.

Not going to the fast cadence would have doomed .NET to being a historical footnote rather than the best bet for the next decade of development. Side-by-side releases offer nothing but chaos: on our hard drives, in our test strategies, and in Microsoft’s security and testing scenarios. The path they chose is the one that can have a good outcome. If they prove competent (and the language and framework teams get passing marks so far), we all get to live in a sane world.

The backward compatibility commitments Microsoft made regarding the in-place upgrades are the most important commitments Microsoft has ever made. We have to hold their feet to the fire and remind them of that every day, every year for the next decade. The best way to remind them is by testing our apps with CTPs and release candidates on test machines.
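For anyone setting up those test machines, here is a minimal C# sketch of how to tell what a box actually has. Because 4.5 and 4.5.1 are in-place updates, Environment.Version still reports the 4.0 CLR; the documented way to tell them apart is the Release value the 4.5.x installers write to the registry. The threshold numbers below are the documented minimums for 4.5 and 4.5.1, but treat this as an illustration, not a complete version map.

```csharp
using System;
using Microsoft.Win32;

class FrameworkCheck
{
    static void Main()
    {
        // In-place updates keep the CLR version number, so this prints 4.0.30319.x
        // whether the machine has 4.0, 4.5, or 4.5.1 installed.
        Console.WriteLine("CLR version: " + Environment.Version);

        // The 4.5.x installers write a Release DWORD under this key;
        // it is the documented way to distinguish the in-place versions.
        const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
        using (RegistryKey key = RegistryKey
            .OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32)
            .OpenSubKey(subkey))
        {
            object release = (key != null) ? key.GetValue("Release") : null;

            if (release == null)
            {
                Console.WriteLine("Plain .NET 4.0 (no 4.5 or later in-place update).");
            }
            else if ((int)release >= 378675)   // documented minimum for 4.5.1
            {
                Console.WriteLine(".NET 4.5.1 or later is installed.");
            }
            else if ((int)release >= 378389)   // documented value for 4.5
            {
                Console.WriteLine(".NET 4.5 is installed.");
            }
        }
    }
}
```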

6 thoughts on “Why In-Place Upgrades?”

  1. A related question is what about in-place upgrades when the newer version runs on fewer OS versions than the older version. For example, .NET 4.0 ran on XP, but 4.5 can’t be installed there even though it was an in-place upgrade.

  2. I do not agree with you that in-place upgrades are good.
    I think the .NET 4.5 and .NET 4.5.1 in-place upgrades are the worst choice MS has ever made in .NET history!
    In-place upgrades create even bigger chaos than side-by-side releases of the .NET Framework.
    Even testing is harder and more resource-intensive with in-place upgrades (you need separate machines with only .NET 4, with .NET 4.5, and with .NET 4.5.1 installed).
    With a side-by-side installation of the framework you can test your software on the same machine.
    Side-by-side is the right direction for .NET Framework evolution. It worked great with the 3.0, 3.5, and 4.0 side-by-side installations. I do not understand why MS went back to the wrong thing from a good solution.
    In-place upgrades are like the “DLL hell” of the past.

  3. Thank you for your comment.

    Differing opinions are good, but you also make a point I did not: you need to be able to reproduce, in your test environment (usually in VMs), the machines you think you will see in the field.

  4. “It would mean third party tools would have to test against six scenarios”

    With in-place upgrades, that is *exactly* the case.

    With side-by-side, as long as Vx stays untouched and reliable, you don’t need to test for Vx+0.4.3. .NET 6 should be .NET 6 (give or take careful service packs that maintain backwards compatibility). .NET 6.7 should NOT break .NET 6 (or 5, or 4) apps.

  5. Well… a forced .NET 4.5.1 “security” update has just broken several of our .NET 4.0-compiled apps in production, leaving us with several angry customers and the unbelievable solution of a registry hack to prevent 4.5.1 screwing up the rest of our 3000-kiosk user base.

    Microsoft’s testing obviously isn’t up to scratch, which is why in-place updates were the wrong choice….

  6. Graham,

    I’d like to try to get you some help to resolve the problem as quickly as possible. Staying open to a vulnerability is a short-term solution, at best.

    Can you please email me at Kathleen@mvps.org with contact information I can pass on to Microsoft? If you get a bounce (my public email gets hammered) please reply here and I’ll find a public Microsoft address for you. But letting me pass it on directly may be fastest.

    This is, of course, the nightmare side of in-place updates and I’m sorry you encountered it. But I think at the moment solving the problem is most important.

    Kathleen
