It's been quite some time since I wrote about changing passwords on a Windows service, and then provided a simple tool written in Visual Basic to propagate a password among several systems sharing the same account.
I hinted at the time that this was a relatively naïve approach, and that the requirement to bring all the services down at the same time is perhaps not what you want to do.
So now it's finally time for me to provide a couple of notes about how this operation could be done better.
One complaint I have heard at numerous organisations is this one, or words to this effect:
"We can't afford to cycle the service on a password rotation once every quarter, because the service has to be up twenty-four hours a day, every day."
That's the sort of thing that makes novice service owners feel really important, because their service is, of course, the most valuable thing in their world, and sure enough, they may lose a little in the way of business while the service is down.
So how do you update the service when the software or OS needs patching? How do you fix bugs in your service? What happens when you have to take it down because the password has escaped your grasp? [See my previous post on rotating passwords as a kind of "Business Continuity Drill", so that you know you can rotate the password in an emergency]
All of these activities require stopping and cycling the service.
Modern computer engineering practices have taken this into consideration, and the simplest solution is to have a "failover" service: when the primary instance of the service is taken offline, the secondary instance starts up and takes over providing service. Then when the primary comes back online, the secondary can go back into slumber.
This is often extended to the idea of having a "pool" of services, all running all the time, and only taking out one instance at a time as you need to make changes, bringing the instance back into operation when the change is complete.
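If it helps to see the shape of that loop, here's a minimal sketch in Python driving sc.exe against each host in a pool. The host names, the service name, and the verify step are placeholders I've made up for illustration; a real script would also wait for each stop to complete and handle failures more gracefully than simply raising.

```python
import subprocess
import time

# Hypothetical pool of hosts that all run the same Windows service under
# the same account; the host names and service name are placeholders.
POOL = [r"\\APP01", r"\\APP02", r"\\APP03"]
SERVICE = "MyLineOfBusinessService"

def run_sc(host, *args):
    """Run sc.exe against one host, raising if the command fails."""
    return subprocess.run(["sc", host, *args], check=True,
                          capture_output=True, text=True)

def rolling_change(apply_change, verify):
    """Take instances out one at a time, change them, and put them back."""
    for host in POOL:
        run_sc(host, "stop", SERVICE)   # take this instance offline
        # Note: 'sc stop' only requests the stop; a production script
        # should poll 'sc query' for STOPPED before going further.
        apply_change(host)              # whatever maintenance is needed
        run_sc(host, "start", SERVICE)  # bring the instance back
        time.sleep(5)                   # crude settling time; tune to taste
        verify(host)                    # your own "is it really serving?" check
```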
Woah – heady stuff, Mr Jones!
Sure, but in the world of enterprise computing, this is basic scaling, and if your systems or applications can't be managed this way, you will have problems as you reach large scale.
So a single instance of a service that you can't afford to have go offline is a failure from the start, and an indication that you didn't think the design through.
Here's the other thing to know: when you change the password on a service account, the old credentials don't stop working straight away. OK, so that sounds like heresy – if you've changed the password on an account, it shouldn't be possible for the old password to work any more, should it?
Well, yes and no.
Again, in an enterprise world, you have to consider scale.
Changing the password on an account isn't an instantaneous operation. That password change has to be distributed among the authentication servers you use (in the Windows world, this means domain controllers replicating the new password information).
To account for this, and for the prospect that you may have a process running that hasn't yet had a chance to pick up the new password, most authentication schemes allow tokens and/or passwords to remain valid for some period after a password change.
By default, NTLM tokens are valid for an hour, and Kerberos tickets are valid for ten hours.
This means that if you have a pool or fleet of services whose passwords need to change, you can generally follow the simple process of iteratively stopping them, propagating the new password to them, and then re-starting them, without the prospect of killing the overall service that you're providing (sure, you'll kill any connections that are specifically tied to that one service instance, but there are other ways to handle that).
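To illustrate the "propagate the new password" step, here's a small sketch under the same assumptions as the loop above (Python, sc.exe, made-up names): it stores the new logon password for one instance, and you would slot it in between the stop and the restart.

```python
import subprocess

SERVICE = "MyLineOfBusinessService"   # placeholder, as before

def set_service_password(host, account, new_password):
    """Store the new logon account and password for one service instance.

    sc.exe expects a space after 'obj=' and 'password=', which we get by
    passing them as separate arguments. The password appears on a command
    line here, so treat the machine you run this from accordingly.
    """
    subprocess.run(["sc", host, "config", SERVICE,
                    "obj=", account, "password=", new_password],
                   check=True)

# For example, plugged into the earlier loop as its apply_change step:
#   rolling_change(
#       lambda host: set_service_password(host, r"CONTOSO\svc_myapp", new_pw),
#       verify)
```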
Interesting, but I can't afford the risk that I change the password just before my token / ticket is going to expire.
Very precious of you, I'm sure.
OK, you might have a valid concern that the service startup might not be as robust as you hoped, and that you want to ensure you test the new startup of the service before allowing it to proceed and provide live service.
That's very "enterprise scale", too. There's nothing worse than taking down a dozen servers only to find that they won't start up again, because the startup code requires that they talk to a remote service which is currently down.
You wouldn't believe how many systems I've seen where the running service is working fine, but no new instances can be started, because the startup conditions for the service can no longer be replicated.
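So make the "did it really come back?" check an explicit gate before you touch the next instance. Here's one way that gate might look, polling sc query for the RUNNING state; the timeout and the suggestion of an additional health probe are my own assumptions, not a prescription.

```python
import subprocess
import time

def wait_until_running(host, service, timeout_seconds=60):
    """Poll 'sc query' until the service reports RUNNING, or give up.

    RUNNING only proves the process started; a fuller check would also
    exercise the service itself (hit a health endpoint, run a test
    transaction) before declaring this instance good and moving on.
    """
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        result = subprocess.run(["sc", host, "query", service],
                                capture_output=True, text=True)
        if "RUNNING" in result.stdout:
            return True
        time.sleep(2)
    return False
```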
So, to allow for the prospect that you may fail on restarting your services, here's what I want you to do (there's a rough sketch of the switch-over after the list):
1. Create a second service account, and give it the same rights and group memberships as the existing account.
2. Set the new password on that second account.
3. One instance at a time, reconfigure the service to log on as the second account, restart it, and test that it is providing service correctly.
4. If an instance fails to come back, switch it back to the original account (which still has its working password) while you investigate.
5. Once every instance is running happily under the second account, disable (don't delete) the original account.
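To make that concrete, here's a rough sketch of the switch-over, under the same assumptions as the earlier snippets (Python driving sc.exe and net.exe, with invented account, host, and service names). It illustrates the two-account idea rather than reproducing any particular tool.

```python
import subprocess

SERVICE = "MyLineOfBusinessService"    # placeholders throughout
POOL = [r"\\APP01", r"\\APP02", r"\\APP03"]
OLD_ACCOUNT = r"CONTOSO\svc_myapp_a"   # the account in use today
NEW_ACCOUNT = r"CONTOSO\svc_myapp_b"   # the spare account to switch to

def switch_instance(host, account, password):
    """Point one instance at the other account and restart it."""
    subprocess.run(["sc", host, "stop", SERVICE], check=True)
    subprocess.run(["sc", host, "config", SERVICE,
                    "obj=", account, "password=", password], check=True)
    subprocess.run(["sc", host, "start", SERVICE], check=True)

def rotate(new_password):
    for host in POOL:
        switch_instance(host, NEW_ACCOUNT, new_password)
        # Verify this instance here (see the startup check above); if it
        # fails, switch it back to OLD_ACCOUNT, whose password still works.
    # Only once every instance is happy on the new account:
    subprocess.run(["net", "user", OLD_ACCOUNT.split("\\")[1],
                    "/active:no", "/domain"], check=True)
```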
As you can probably imagine, when you next do this process, you don't need to create the second user account for the server, because the first account is already there, but disabled. You can use this as the account to switch to.
This way, with the two accounts, every time a password change is required, you can just follow the steps above, and not worry.
You should be able to merge this process into your standard patching process, because the two follow similar routines – bring a service down, make a change, bring it up, check it for continued function, go to the next service, continue until all services are done.
So, with those techniques under your belt, and the necessary design and deployment practices to put them into place, you should be able to handle all requests to rotate passwords, as well as patching of your service while it is live.
Sorry that this doesn't come with a script to execute this behaviour, but there are some things I'm hoping you'll be able to do for yourselves here, and the bulk of the process is specific to your environment – since it's mostly about testing to ensure that the service is correctly functioning.