Preparing for TechEd Europe



TechEd Europe will be in Berlin next week, and I’m looking forward to delivering three sessions there:

  • SIA301-IS – Under the Hood: What Really Happens During Critical Active Directory Operations
    Wednesday Nov 10, 9:00 – 10:00 AM
    Thursday Nov 11, 4:30 – 5:30 PM

    Come and discuss critical Active Directory operations.
    Are you fully aware of what “critical” operations in AD really do? In this interactive session we will talk about those operations, understand what they do, and learn how to distinguish whether operations are critical to your environment or not. Ulf has been working in the field for more than 13 years and has a lot of notes and examples to share. We will talk about how to approach challenges, and study scenarios that show how other companies managed the associated risks and prepared for rollbacks. We have some common scenarios for everyone, but please bring your own questions as well, as we want this talk to be as interactive as possible.

    Since interactive sessions “live” from the opinions discussed in the audience, the repeat will be different.

  • SIA306 – A Dozen Years AD – Discuss Previous and Future Design Decisions
    Thursday Nov 11, 2:30 – 3:30 PM

    Active Directory has evolved over the years, along with security recommendations and best practices. But has our corporate design changed that much? Is it required? What should we change, and what should we retain? Ulf B. Simon-Weidner is a long-standing, internationally recognized expert in Active Directory, and in this session he will discuss Active Directory designs of the past, present and future.

How to get more Infrastructure Masters in your domain?

Usually we have one Infrastructure Master in the domain, which is responsible for maintaining references to objects in other domains – such as users who are members of a group in a different domain – to make sure that if the target object (the user) is renamed, moved, or its distinguished name otherwise changes, it can still be found. It does this by creating phantoms (small objects which contain only the distinguished name, SID and GUID).

Actually, making it more complicated but accurate: those group memberships are not maintained by referencing the data directly (a group in the database does not contain the data of its members) but by referencing objects by their database row (via an ID called the Distinguished Name Tag, or DNT). So if we add a user to a group, a new entry is created in a link table in the database, with the forward link referencing the DNT of the user and the backward link referencing the DNT of the group. The phantoms are therefore needed so that there is a database row for the target object – otherwise there wouldn’t be a DNT to reference as the target.
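To make the DNT/link-table idea concrete, here is a small illustrative model in Python. It is a sketch of the concept only – the class, names and layout are invented for this post and do not reflect the actual on-disk format of the DIT:

```python
# Illustrative model of group membership stored via database-row IDs (DNTs)
# and a link table. Purely conceptual; not the real AD database format.

class Database:
    def __init__(self):
        self.rows = {}      # DNT -> object data
        self.links = []     # (forward: DNT of member, backward: DNT of group)
        self.next_dnt = 1

    def add_object(self, dn, sid=None, phantom=False):
        dnt = self.next_dnt
        self.next_dnt += 1
        self.rows[dnt] = {"dn": dn, "sid": sid, "phantom": phantom}
        return dnt

    def add_member(self, member_dnt, group_dnt):
        # The group row itself stores no member data; only the link table
        # references the two database rows by their DNTs.
        self.links.append((member_dnt, group_dnt))

    def members_of(self, group_dnt):
        return [fwd for fwd, back in self.links if back == group_dnt]

db = Database()
group = db.add_object("CN=Admins,DC=contoso,DC=com")
# A user from another domain has no full row here, so a phantom row
# (DN, SID, GUID only) is created to provide a DNT to reference.
user = db.add_object("CN=Jane,DC=fabrikam,DC=com", sid="S-1-5-21-...", phantom=True)
db.add_member(user, group)

# Renaming the user only updates the phantom row; the link-table entry
# (which references the DNT, not the DN) stays valid.
db.rows[user]["dn"] = "CN=Jane Doe,DC=fabrikam,DC=com"
assert db.members_of(group) == [user]
```

This is exactly why the Infrastructure Master’s job is to keep the phantom rows up to date: the links never break, but the DN/SID data in the phantom goes stale.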

The second role of the Infrastructure Master is to be a single, well-known machine in the domain, purely for operations which need to run against a specific DC – and always the same one if they run multiple times (e.g. domainprep or rodcprep).

The second role is the reason why we have one IM per application partition, see my post “How many Infrastructure Masters do you have” about it.

So, talking about reference updates – the primary reason for the IM – this is also the reason why an Infrastructure Master cannot run on a global catalog: it validates its local data against the data of the GC (which knows about the objects in other domains anyway). For more about GCs vs. the IM see “Global Catalog vs. Infrastructure Master”.

But how do we get more Infrastructure Masters (for reference updates) in the domain?


When you are running all DCs on Windows Server 2008 R2, turn on the Recycle Bin. There you go. This will enable running a reference-update task on every DC which is not a GC.

The reason behind this? When the Recycle Bin is enabled, the objects we knew before as tombstones are now deleted objects with all their data maintained, and we are able to restore them. Therefore we need to maintain reference updates for deleted objects as well, and those changes on deleted objects are not replicated to other DCs. Additionally we need to maintain links – links which point to or from deleted objects need to be “marked” as deactivated, so that it is possible to reactivate them when the object is restored.
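The link handling can be sketched like this – a purely illustrative Python model (the list layout and function names are invented), showing why deactivating links instead of deleting them makes a restore with intact group memberships possible:

```python
# Sketch: with the Recycle Bin enabled, links that point to or from a
# deleted object are marked deactivated instead of removed, so a restore
# can simply reactivate them. Illustrative only, not the real mechanism.

links = [
    # [member, group, active]
    ["jane", "admins", True],
    ["john", "admins", True],
]

def delete_object(name):
    for link in links:
        if name in (link[0], link[1]):
            link[2] = False  # deactivate, but keep the row for a possible restore

def restore_object(name):
    for link in links:
        if name in (link[0], link[1]):
            link[2] = True   # reactivate: the membership comes back automatically

delete_object("jane")
assert [l[2] for l in links] == [False, True]
restore_object("jane")
assert [l[2] for l in links] == [True, True]
```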

Actually I will cover the recycle bin among a lot of useful information at TEC – if you are there come to my session:

A DS Geek’s Notes from the Field – Active Directory Recovery Unveiled
Speaker: Ulf Simon-Weidner

You’ve got R2 and enabled the Recycle Bin, so no other actions are necessary to prepare for an AD recovery? Or you haven’t yet deployed R2 (or raised the forest functional level)? Are you aware that even with today’s possibilities you are not prepared for every scenario? You have to blend in certain features. You also have to manage them and adjust your processes accordingly! This session will give you an insight into experiences and practices from a field perspective: what can go wrong, and what you should do to manage and look after AD in a proactive way. You’ll hear experiences from the field about Active Directory disaster prevention and recovery, along with interesting thoughts, scripts and scenarios. Think beyond and get inspired. This session will distinguish you from the admins who keep their CV updated in case anything goes wrong – you’ll be one of the ones who are prepared instead.

Adjusting the Tombstone Lifetime

I just had a pretty interesting discussion via a mailing list with some other Active Directory MVPs and some members of the Active Directory Product Group in Redmond.

As we know, there is a new default for the tombstone lifetime in Active Directory. The discussion started because there is an article on TechNet which is incorrect: currently point 8 states that the tombstone lifetime, if it is <not set>, depends on the version of the operating system of the first DC in the forest. However, this is not correct, and the article is already being changed.

If you are not familiar with tombstones, I wrote Some details about Tombstones, Garbage Collection and Whitespace in the AD DB a while ago. Basically, a tombstone is an object which is deleted; however, a small part of it is maintained in AD for 60 or 180 days (by default) to make sure that all DCs receive the information that the object needs to be deleted. When the 60 or 180 days are over (this is the tombstone lifetime) every DC will delete the object locally (this is not replicated; the DC simply calculates whether “time-of-deletion + tombstone-lifetime < now”, and if yes, the object is cleaned up). This cleanup is done during garbage collection, which runs by default every 12 hours.
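The per-DC cleanup decision can be sketched like this (illustrative Python, mirroring the “time-of-deletion + tombstone-lifetime < now” check above; the function name is mine):

```python
# Each DC decides locally, during garbage collection (every 12 hours by
# default), whether a tombstone has outlived the tombstone lifetime.

from datetime import datetime, timedelta

def is_expired(time_of_deletion, tombstone_lifetime_days, now):
    return time_of_deletion + timedelta(days=tombstone_lifetime_days) < now

now = datetime(2010, 6, 1)
assert is_expired(datetime(2010, 1, 1), 60, now)      # older than 60 days: cleaned up
assert not is_expired(datetime(2010, 5, 1), 60, now)  # still within the lifetime
```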

The tombstone lifetime therefore is also the limit of the “shelf life” of a backup – if you used a backup which is older, it would reintroduce objects which were already deleted, so the maximum age of a backup is the same as the tombstone lifetime.

In Windows Server 2003 SP1 Microsoft decided to increase the tombstone lifetime to 180 days, as I wrote in Active Directory Backup? Don’t rush – you’ll get more time. However, in Windows Server 2003 R2 there was a minor slip, so this version introduced 60 days again. To clarify, this only matters if you set up a new forest, and the value will depend on the version of the operating system of that first DC.

Operating System of first DC      tombstoneLifetime (days)
Windows 2000 Server               60
Windows Server 2003 w/o SP        60
Windows Server 2003 SP1/SP2       180
Windows Server 2003 R2 (SP1)      60
Windows Server 2003 R2 SP2        180
Windows Server 2008 and higher    180


You can verify what your tombstone lifetime is by looking at the attribute "tombstoneLifetime" of the object cn=Directory Service,cn=Windows NT,cn=Services in the Configuration partition.

dsquery * "cn=directory service,cn=windows nt,cn=services,cn=configuration,dc=<forestDN>" -scope base -attr tombstonelifetime

If the attribute has a value, the tombstone lifetime is that value in days; if it has no value, it is 60 days. What changed the default to 180 is the file schema.ini, which creates the default objects in a new AD. The schema.ini of Windows Server 2003 SP1 and higher (see table above) simply sets the value 180 in the attribute tombstoneLifetime.
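In other words, the effective lifetime follows this simple rule (sketched in Python; the function name is mine):

```python
# The effective tombstone lifetime as described above: the attribute's
# value if it is set, otherwise a hard-coded default of 60 days.

def effective_tombstone_lifetime(attribute_value):
    return attribute_value if attribute_value is not None else 60

assert effective_tombstone_lifetime(None) == 60   # <not set> means 60 days
assert effective_tombstone_lifetime(180) == 180   # set by schema.ini since 2003 SP1
```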

Is it recommended to adjust the Tombstone-Lifetime to the new default?

Over the years there were many infrastructures whose DCs didn’t replicate within 60 days, leading to replication issues and lingering objects. There were many such cases within Microsoft PSS, and I’ve also seen a couple of infrastructures where I had to fix this. Therefore Microsoft decided to raise the default tombstone lifetime to 180 days, which also extends the lifetime of your backups. It is up to your company to decide whether to change the tombstone lifetime to the new default.

In the e-mail thread we were also discussing whether there are any issues with changing the tombstone lifetime.

If you lower the tombstone lifetime, there is no issue. The garbage collection process will be a bit busier (usually it only needs to clean up changes from a 12-hour timeframe 60 or 180 days ago, but if we go down from 180 to 60, garbage collection needs to clean up the changes of 120 days the next time it runs). However, this shouldn’t lead to a performance issue, and if you think it will, you can stage it (e.g. moving from 180 to 150, waiting at least for replication plus 12 hours, then going from 150 to 120, and so on).

However, if you want to raise the tombstone lifetime, e.g. from 60 to 180 to match the new default, there’s one scenario which needs to be considered:

Let’s say we have two DCs, DC-Munich and DC-LA (L.A. because that’s where The Experts Conference will be in April). On DC-Munich we change the tombstoneLifetime from <not set> (= 60) to 180. When garbage collection runs on DC-Munich it is bored – it already cleaned up all changes from 60 days ago, and we have instructed it to keep everything for 180 days now, so for the next 120 days garbage collection does not need to do anything. However, a bit later DC-LA (which hasn’t received the new tombstoneLifetime via replication yet) runs garbage collection and cleans up everything which happened in the 12-hour timespan 60 days ago.

In this scenario, DC-Munich has objects (tombstones) which were cleaned up on DC-LA, leading various detection mechanisms to identify them as lingering objects (repadmin will detect them, as will various update processes which will prevent you from doing operations like schema updates for the next 120 days). This will resolve itself after 120 days, but it is pretty inconvenient.

To increase tombstoneLifetime in big infrastructures, there is only one valid solution:

  • Make sure that garbage collection will not run immediately after you change the attribute; after changing it, force replication and make sure it has replicated everywhere.
  • To achieve this, lower the tombstone lifetime before increasing it: e.g. set it to 55 and make sure it has replicated everywhere, then wait at least 12 hours or ensure that garbage collection has run on all DCs. This ensures that there are no objects which garbage collection needs to take care of for the next couple of days. Then increase the tombstone lifetime to the value you intended, e.g. 180 days. Make sure that replication works and every DC gets the update within the next few days, and you are on the safe side.
    Thanks to Jesko who discussed this scenario with me – I was wrong – increasing always causes trouble with lingering objects. Controlling garbage collection is the only way to go.
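The problematic scenario can be simulated with a toy model (Python; no real replication or AD involved, just the local cleanup rule applied on two DCs represented as dictionaries):

```python
# Toy simulation of the two-DC scenario: raising the tombstone lifetime on
# one DC before it replicates leaves a tombstone alive on that DC after the
# other DC has already purged it - a lingering object.

def garbage_collect(dc):
    # Each DC keeps only tombstones younger than its *local* lifetime value.
    dc["tombstones"] = {
        name: age for name, age in dc["tombstones"].items()
        if age < dc["tsl"]
    }

# Both DCs hold the same 61-day-old tombstone; the default lifetime is 60.
munich = {"tsl": 60, "tombstones": {"cn=old-user": 61}}
la     = {"tsl": 60, "tombstones": {"cn=old-user": 61}}

munich["tsl"] = 180          # raised locally, not yet replicated to DC-LA
garbage_collect(munich)      # keeps the tombstone (61 < 180)
garbage_collect(la)          # purges it (61 >= 60)

# DC-Munich now holds an object DC-LA no longer knows about.
assert "cn=old-user" in munich["tombstones"]
assert "cn=old-user" not in la["tombstones"]
```

Lowering the lifetime first (e.g. to 55) and letting garbage collection run everywhere empties both tombstone sets, which is exactly why that staging step makes the later increase safe.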

I think this scenario is very interesting, so I wanted to share it.

Previous Versions in Windows Home Server

Hi there and happy new year!

Last year the server I used at home died, and since it was pretty customized it’s also pretty ugly to repair. I used it as a virtualization host and file server, with three hard drives – the first for the operating system and things I don’t need to be highly redundant, the other two mirrored with all the data I prefer to keep (photos, projects, personal stuff, music I’ve bought). Even the home drive of my laptops is just a share which is always synchronized for offline usage. Remote access was possible either using SSTP (VPN via SSL, built into Windows Server 2008+ and Vista+) or Remote Desktop Gateway (RDP via SSL, same OS requirements).

So … server dead … no money … but highly important data on it. So I did some research, got recommendations from fellow MVPs, and decided to go with Windows Home Server, and got it up the same way (OK, without virtualization and the Windows Server 2008 features, but it works for now until the budget allows me a virtualization host again – and even then I’ll keep the home server and run the virtualization separately; WHS is a great product and now the base of my home network, data backup and recovery, and home media strategy).

However, to get back to the subject…

Today I consolidated some of the data, made an error, and deleted stuff from one share (personal) which was not yet in the project share. However, I had implemented Volume Shadow Copies and should have been able to get the files back via the Previous Versions client. So I went into Previous Versions and located the files – they were still there – but I was unable to open, copy or restore them. I always got the message “Das Gerät ist nicht angeschlossen”, which translates to “The device is not connected”. Weird. After searching some German Home Server forums, I found the statement that VSS (Volume Shadow Copies, the supporting technology of Previous Versions, Windows Backup and AD snapshots) does not work on Windows Home Server shares, but is on by default because Microsoft might use it in the future. However, WHS also keeps your data redundant across multiple drives, and in the forums it was mentioned that the data in the shares is like tombstones which point to the real data in other locations.

To make the post not overly long, this is how you get previous versions back on a Windows Home Server:

  • Open up \\servername\d$\DE\shares\ (you also need to go via UNC if you do it from the WHS console; Windows Server 2003, which WHS is based on, only supports Previous Versions via UNC or mapped drives, not locally).
  • Navigate to the folders or files and use previous versions there, then copy the files back to \\server\share.

This is because:

  • \\server\share is the location where the tombstones of the data are stored; if you navigate there via Previous Versions you get the structure, but only tombstone files which you can’t access or restore.
  • \\server\d$\DE\shares is one of the locations where the real data is stored; it might vary depending on your setup (I’m not sure if it’s always d$ or if it depends on how the drives are configured) and across which volumes the data is kept redundant (which is automatically decided by WHS).
  • \\server\c$\FS\<driveletter, e.g. F>\DE\shares would work as well; however, VSS/Previous Versions apparently has issues with the mount point, so you need to create a “help share” e.g. at c:\fs\F\DE\shares and then navigate via the new share [1].

Note: There are some things to consider:

  • WHS automatically decides where to keep the data redundant, so you might have to search across the volumes (d:\de\shares, c:\fs\f\de\shares, c:\fs\g\de\shares …)
  • Shadow copies use by default 12% of a volume’s space. If the “changed data” exceeds this limit, the oldest snapshots are released. Since the volumes on your home server likely have different sizes (which is the case even with two similar hard drives in your WHS, since the first one carries the OS volume, usually 20 GB), the default storage size for Volume Shadow Copies differs per volume. Therefore it might be that you can access older data on one of the volumes which is not available on the others.
  • Since I don’t know exactly how the “redundancy algorithm” of WHS works (and I don’t need to know – that’s the beauty of WHS), I recommend not restoring the data to the original paths (d:\de, c:\fs\f\de, …) but copying it to the default shares.

I hope this is valuable information to some WHS-Users out there, it would have been valuable for me earlier today 😉


Happy weekend,



[1] The issue here is apparently that the Previous Versions client gets the information about whether Volume Shadow Copies are set up from the share it accesses. This is not the case on the C drive by default. However, even if we enable Previous Versions on the C drive, the Previous Versions client will only show the Volume Shadow Copies of the C drive and not those of the mount points, so I recommend keeping VSS turned off on the C drive (ehm – volume).

My Value of TechEd

The last day of TechEd Europe has started. It’s been great as usual. I was satisfied with my sessions, and I’m satisfied with the other sessions I’ve seen. However – what’s my value of TechEd?

  1. TechEd is inspiring: whenever you are put together with a clever bunch of folks, it’s inspiring to talk about technologies – their possibilities as well as what’s lacking – and get a lot of good ideas.
  2. TechEd is networking: it’s hard to keep up with all the people you know or should know, but TechEd is one of the major places where you meet so many people who work with the same technologies and share the same interests. A great place to keep in contact and meet new people – the only bad thing is that it’s too short [;)]
  3. TechEd is geeky: a couple of years ago I was complaining that there weren’t any real 400-level sessions at TechEd for IT professionals. Then I was able to deliver 400-level sessions over the years (“A Directory Services Geek’s View on …”), mostly at TechEd EMEA but also at TechEd US. I’m glad to see that especially TechEd Europe is providing in-depth content to IT pros (this was actually one thing we heard complaints about at TechEd US this year, however not in Europe! Hope this still improves). It’s fun to prepare those sessions, it’s fun to deliver them, and it’s great to get the feedback and hear afterwards how happy the attendees are about not getting a marketing session.
  4. TechEd is broadening horizons: especially when talking with attendees in the Technical Learning Center, after my sessions, or in the evening at parties, it broadens my horizons when they ask questions and tell me about their scenarios and ideas. Even when working as a consultant with many companies, I only get to meet a certain number of customers. At TechEd, however, I’m meeting so many people every day, and so many different scenarios – it’s just great to broaden my horizons and my knowledge!
  5. TechEd is knowledge: breakout sessions, interactive sessions, the Technical Learning Center (Ask the Experts), hands-on labs, … covering almost all Microsoft technologies – there is only one place where you can learn so much in so many different ways.
  6. TechEd is community: MVPs, MCTs, CLIP, Microsoft employees, colleagues, friends, people who share the same interests, …

… there are lots of more points …

I’m doing multiple conferences a year, and TechEd is a knowledge boost in Microsoft technologies! I love it! Too bad it’s the last day today – however, I’m also looking forward to going home and enjoying the weekend.

Using AD-Powershell to protect OUs from accidental deletion

If you use Active Directory Users and Computers from Windows Server 2008 or higher (it also ships with the Remote Server Administration Tools for Windows Vista and Windows 7), or the Active Directory Administrative Center in Windows Server 2008 R2 or the Windows 7 RSAT, newly created OUs are protected from accidental deletion. However, this does not apply to OUs which existed before (e.g. migrated) or OUs which are created another way.

Therefore, during migrations or when you still run downlevel versions of the administration tools, I recommend protecting OUs from accidental deletion – but you need a better way to do it than looking into the Object tab of each OU (with Advanced View selected).

PowerShell v2 and the new Active Directory cmdlets make this easy for us:

First you need to import the Active Directory module:

import-module ActiveDirectory

Then you query all OUs and pipe them into the Set-ADOrganizationalUnit cmdlet, specifying the “flag” to protect the OUs from accidental deletion:

Get-ADOrganizationalUnit -filter * | Set-ADOrganizationalUnit -ProtectedFromAccidentalDeletion $true

Easy, right?

If you want to put this in a scheduled task, simply use the following commandline (in one line):

powershell.exe -command "&{import-module ActiveDirectory; get-ADOrganizationalUnit -filter * | set-ADOrganizationalUnit -ProtectedFromAccidentalDeletion $true}"

djoin.exe not a Powershell command

I heard from a speaker I respect the question whether Microsoft’s strategy is consistent: they are basing everything on PowerShell, yet the djoin.exe command is not a PowerShell command.

An interesting one, but also very understandable if you think about it. Djoin.exe was created to provide the following capabilities in Windows Server 2008 R2 and Windows 7:

  • Create a computer account in the directory and store a file to support an offline join of the computer to the domain
  • Offline-join the computer to its account using the file created in the prior step

The Active Directory Domain Services product group has created a lot of PowerShell cmdlets to support management of Active Directory on Windows Server 2008 R2; you can actually download the Active Directory Management Gateway Service to support the PowerShell commands running against Windows Server 2003 (R2) or Windows Server 2008 (without R2). The Management Gateway provides the Active Directory Web Service, which is used by PowerShell and the new Administrative Center. The Web Service is automatically there if you install a Windows Server 2008 R2 domain controller, so you don’t need the Management Gateway there.

The Active Directory PowerShell cmdlets are available on Windows Server 2008 R2, or on Windows 7 with the Remote Server Administration Tools for Active Directory installed. If a system does not have the Active Directory tools installed, the cmdlets are not available.

As I said before, one of the two main purposes is to join computers offline to the domain, either in scenarios with RODCs (e.g. in the DMZ) or for mass creation/joining, e.g. if you have your hardware vendor or distributor preinstalling machines for you.

So – would we want to install the Remote Server Administration Tools for Active Directory on clients or member servers just to join them to the domain? Nope. Would we want multiple PowerShell modules for AD (e.g. one for server management, one for joining domains, one for directory data management, …)? Nope.

So I guess an exe for this purpose is OK, and I also guess that this is the reasoning behind it.

How to make your session prominent at TechEd Europe

Funny – I arrived at TechEd Europe and many people had already talked to me about my session. I figured out it’s now popular because it was rescheduled from Tuesday morning to Wednesday morning, so everyone at TechEd got a separate paper with the session updates, and mine was one of the few on it.

I’ve also heard it’s popular judging by the registrations, so if you plan on coming, come a bit early to make sure you get in. We are also doing a re-run on Thursday morning.

SIA02-IS: Active Directory: What’s New in R2

Join this interactive and open discussion about Active Directory updates in Windows Server 2008 R2 or other topics that you bring up. Join product group members and an MVP with undoubted Active Directory experience.

It’s an interactive session, so we will be there (Brjann Brekkan, Technical Product Manager for Identity Management, and I are presenting the session together), listening and talking to you about the questions you have about the new features of Active Directory Domain Services in Windows Server 2008 R2.

The session is scheduled on

  • Wednesday, 9:00, Interactive Theater 4 (green)
  • Thursday, 9:00, Interactive Theatre 6 (pink)


Powershell’s social responsibility

The world is not as polite anymore as it was years ago. People are forgetting what used to be called “good manners”. And PowerShell is entering the scene, starting to monopolize the world of scripting languages.

I think Powershell should show some level of social responsibility. And today, I’m taking action to change it:

I, Ulf B. Simon-Weidner, propose hereby that Powershell should be forced to show more social responsibility. Therefore I propose two actions:

  1. Any command executed should, by default, apply the -WhatIf parameter
    (this prevents commands from executing; they only tell us what they would do)
  2. To really execute a command, the -Please parameter must be used, which revokes the -WhatIf parameter.

Wouldn’t this be nice?

Clarifications of a stopped Active Directory

In Windows Server 2008 you are able to stop Active Directory Domain Services using the Services snap-in or by typing

net stop ntds

However, this is for servicing only, and not a state in which the DC is intended to be kept for a longer period. Stopping AD is intended for servicing tasks which need a stopped AD (as in Directory Services Restore Mode, DSRM) but don’t need completely flushed memory and stopped dependencies. So you can do things like an offline defragmentation of the database, moving the database, and so on.

I think this is a good feature. Yes, it would be great to be able to do other things. Yes, it would be great to restore AD without going into DSRM. There are things which would be nice. However … it’s better than before, and that’s what is important.

I love to do things using scripts – to use a toolbox of scripts I’ve used before. Imagine: in the past, doing an offline defrag of the Active Directory database required rebooting into Directory Services Restore Mode, logging on as the local admin (= DSRM admin), running ntdsutil with the options to do an offline defrag into new files, copying the new files over the old ones, and rebooting again into full mode.

However, in Windows Server 2008 and above it is as easy as stopping NTDS, defragmenting offline, moving the files, and starting NTDS again.

It is important to keep in mind that while you can stop NTDS, it’s not meant to stay stopped for a longer period.

However, a few things made me worry whether this feature is well understood:

  1. It’s not a state to keep for a longer period, and not a replacement for recovery DCs (which are turned off in the closet).
  2. It’s not a replacement for DSRM when it comes to System State recovery / authoritative restore from a backup. If you need to restore a System State backup, the only supported way is to do it in DSRM.
  3. Authoritatively marking objects which haven’t replicated to the DC in question is OK, as are file-management operations other than restoring a backup (the content of the dit basically needs to remain the same).
  4. You can’t log on with the DSRM admin when NTDS is stopped. This hit someone in the beta timeframe who had a single DC, stopped NTDS, stepped away for a while (the screen saver kicked in) and couldn’t log on. DSRM logon is not possible by default with a stopped NTDS when there are no other logon servers available (if there are, e.g. you have a second DC, they authenticate you on the DC with the stopped NTDS).
    DSRM admin logon (which equals local admin on a DC) with a stopped NTDS is only available on Small Business Server (by default) or if you modify the registry value DsrmAdminLogonBehavior (DWORD) under HKLM\System\CurrentControlSet\Control\Lsa:
    Value 0: DSRM logon only when in DSRM (default)
    Value 1: DSRM logon when NTDS is stopped (or in DSRM) (default on Small Business Server)
    Value 2: DSRM logon always

HTH, Ulf