So, I’ve submitted my information for re-awarding as an MVP – we’ll see whether I’ve done enough this year to warrant being admitted again into the MVP ranks.
Next week is the MVP Summit, where I visit Microsoft in Bellevue and Redmond for a week of brainwashing and meet-n-greet. I joke about this being a bit of a junket, but in reality, I get more information out of this than from most of the other conferences I’ve attended – perhaps mostly because the content is so tightly targeted.
That’s not always the case, of course: sometimes you’re scheduled to hear a talk that you’ve already heard three times this year. For those occasions, my advice is to find another session running at the same time that you do want to hear. Talk to other MVPs outside your speciality, and find out what they’re attending. If you feel you need approval, ask your MVP lead whether it’s OK to switch to the other session.
Occasionally a talk will be so strictly NDA-restricted that you’ll be blocked from entering, but that’s rare.
Oh, and trade swag with other MVPs. Very frequently your fellow MVPs will be willing to trade swag that they got for their speciality for yours – or across regions. Make friends and talk to people – and don’t assume that the ‘industry luminaries’ aren’t willing to talk to you.
Also this week comes news that I’ve been recognised for authoring the TechNet Wiki Article of the Week, for my post on Microsoft’s excellent Elevation of Privilege Threat Modeling card game. Since that post was made two years ago, I’ve used the deck in a number of environments and with a few different game styles, but the goal each time has remained the same, and has been successfully met: to make developers think about the threats their application designs are subject to, without requiring those developers to be security experts or to have any significant experience of security issues.
This year is a special one for anniversaries – my 45th birthday, 20 years since I arrived in the USA, 10 years since beating cancer – seems like the perfect time for ISOC to honour me by switching everyone to IPv6.
It’s been quite some time since I wrote about changing passwords on a Windows service, and then provided a simple tool written in Visual Basic to propagate a password among several systems sharing the same account.
I hinted at the time that this was a relatively naïve approach, and that the requirement to bring all the services down at the same time is perhaps not what you want to do.
So now it’s finally time for me to provide a couple of notes about how this operation could be done better.
One complaint I have heard at numerous organisations is this one, or words to this effect:
“We can’t afford to cycle the service on a password rotation once every quarter, because the service has to be up twenty-four hours a day, every day.”
That’s the sort of thing that makes novice service owners feel really important, because their service is, of course, the most valuable thing in their world, and sure enough, they may lose a little in the way of business while the service is down.
So how do you update the service when the software or OS needs patching? How do you fix bugs in your service? What happens when you have to take it down because the password has escaped your grasp? [See my previous post on rotating passwords as a kind of “Business Continuity Drill”, so that you know you can rotate the password in an emergency]
All of these activities require stopping and cycling the service.
Modern computer engineering practices have taken this into consideration, and the simplest solution is to have a ‘failover’ service – when the primary instance of the service is taken offline, the secondary instance starts up and takes over providing service. Then when the primary comes back online, the secondary can go back into slumber.
This is often extended to the idea of having a ‘pool’ of services, all running all the time, and only taking out one instance at a time as you need to make changes, bringing the instance back into operation when the change is complete.
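The pool idea above can be sketched in a few lines. This is a minimal illustration, not the tool from my earlier post: the `ServiceInstance` class and the `apply_change` / `is_healthy` callables are stand-ins I’ve invented for whatever your environment actually uses to stop, reconfigure, and health-check a service. The invariant is the point: only one instance is ever out of the pool at a time, and the roll-out halts if an instance fails its check after the change.

```python
class ServiceInstance:
    """Stand-in for a real service instance (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.running = True

    def stop(self):
        self.running = False

    def start(self):
        self.running = True


def rolling_update(instances, apply_change, is_healthy):
    """Apply a change across a pool, one instance at a time."""
    for instance in instances:
        instance.stop()               # only this one instance is offline
        apply_change(instance)        # patch, reconfigure, rotate a password...
        instance.start()              # bring it back into the pool
        if not is_healthy(instance):  # verify before touching the next one
            raise RuntimeError(f"{instance.name} failed its health check")
    return instances
```

Because each instance is verified before the next one is touched, a bad change takes out one instance, not the fleet.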
Woah – heady stuff, Mr Jones!
Sure, but in the world of enterprise computing, this is basic scaling, and if your systems or applications can’t be managed this way, you will have problems as you reach large scale.
So a single-instance service that you can’t afford to take offline is a failure from the start, and an indication that you didn’t think the design through.
OK, here’s something that sounds like heresy: the old password can keep working for a while after you change it. Surely, if you’ve changed the password on an account, it shouldn’t be possible for the old password to work any more?
Well, yes and no.
Again, in an enterprise world, you have to consider scale.
Changing the password on an account isn’t an instantaneous operation. That password change has to be distributed among the authentication servers you use (in the Windows world, this means domain controllers replicating new password information).
To account for this, and for the prospect that you may have a process running that hasn’t yet had a chance to pick up the new password, most authentication schemes allow tokens and/or passwords to remain valid for some period after a password change.
By default, NTLM tokens are valid for an hour, and Kerberos tickets are valid for ten hours.
This means that if you have a pool or fleet of services whose passwords need to change, you can generally stop them one at a time, propagate the new password, and restart them, without killing the overall service that you’re providing. (Sure, you’ll kill any connections specifically tied to that one service instance, but there are other ways to handle that.)
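You can do the arithmetic for this up front. Here’s a back-of-the-envelope check, not a real scheduler – the function name and parameters are my own invention – answering one question: given how many instances you have and how long each takes to stop, update and restart, will a one-at-a-time roll-out finish inside the old credential’s grace window?

```python
def rollout_fits_in_window(instance_count, minutes_per_instance, window_hours):
    """True if restarting every instance in sequence finishes before the
    old credential's grace window closes."""
    total_minutes = instance_count * minutes_per_instance
    return total_minutes <= window_hours * 60
```

With NTLM’s one-hour window, a dozen instances at four minutes each (48 minutes) squeaks in, but at six minutes each (72 minutes) it doesn’t – at which point you’d want to lean on the ten-hour Kerberos window, or schedule the change accordingly.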
Interesting, but I can’t afford the risk that I change the password just before my token / ticket is going to expire.
Very precious of you, I’m sure.
OK, you might have a valid concern that the service startup might not be as robust as you hoped, and that you want to ensure you test the new startup of the service before allowing it to proceed and provide live service.
That’s very ‘enterprise scale’, too. There’s nothing worse than taking down a dozen servers only to find that they won’t start up again, because the startup code requires that they talk to a remote service which is currently down.
You wouldn’t believe how many systems I’ve seen where the running service is working fine, but no new instances can be started, because the startup conditions for the service can no longer be replicated.
So, to allow for the prospect that you may fail on restarting your services, here’s what I want you to do:
1. Create a second user account, with the same rights and group memberships as the existing service account.
2. Set a fresh password on that second account.
3. Stop one instance of the service, reconfigure it to run as the second account, and start it again.
4. Test that instance to make sure it’s providing correct service.
5. Repeat steps 3 and 4 for each remaining instance, one at a time.
6. Once every instance is running happily as the second account, disable the first account.
As you can probably imagine, when you next do this process, you don’t need to create the second user account for the server, because the first account is already there, but disabled. You can use this as the account to switch to.
This way, with the two accounts, every time a password change is required, you can just follow the steps above, and not worry.
You should be able to merge this process into your standard patching process, because the two follow similar routines – bring a service down, make a change, bring it up, check it for continued function, go to the next service, continue until all services are done.
So, with those techniques under your belt – and the necessary design and deployment practices to put them into place – you should be able to handle all requests to rotate passwords, as well as to handle patching of your service while it is live.
Sorry that this doesn’t come with a script to execute this behaviour, but there are some things I’m hoping you’ll be able to do for yourselves here, and the bulk of the process is specific to your environment – since it’s mostly about testing to ensure that the service is correctly functioning.