Archive for the 'Windows Server' Category

“speaking 2.0” at Microsoft TechEd today

Thursday, June 14th, 2012

I’m speaking today about “The Evolution of Active Directory Recovery” at TechEd 2012 US (SIA319, 1pm in Hall N310). The session will also be streamed.

I had a great idea, and I’m looking forward to seeing how it works out. And I haven’t seen this done before ;)

I’ll be taking questions using Twitter.

If you are in the audience (in the hall or online) and have any questions, just tweet them using the hashtag #TESIA319 – this enables me to follow up with answers either during the session, or afterwards if we are short on time or have too many questions. It also enables attendees who are not sitting close to a microphone, who are watching the streamed version, or who feel more comfortable writing than speaking to ask their questions.

Two simple rules: use the #TESIA319 hashtag – I will not monitor anything else during the session – and please ask questions about the areas I’ve already covered, so that we can avoid questions which are answered in the next slides.

Looking forward to the session and hopefully seeing you there!

Ulf B. Simon-Weidner

“Active Directory” SPECIAL EDITION of the IT-Administrator published

Thursday, November 4th, 2010

MVP Florian Frommherz and I wrote a Special Edition of the IT-Administrator: almost 180 pages which provide in-depth information about Active Directory. We are discussing the Evolution of AD, Domain and Forest Strategies, Understanding the Domain/Forest Levels, LDAP Backgrounds and Application Performance testing, AD and DNS, AD Backup and Recovery, Background Information about the AD Recycle Bin, Virtualization of DCs, Replication Across Firewalls, RODCs, Delegation and MSAs, Fine Grained Password Policies and many more.
We are very happy with the result: a huge amount of in-depth information for any AD Admin or Consultant.

Sorry – just in German for now. But an interesting read.

If you got it, feel free to provide feedback!



How to get more Infrastructure Masters in your domain?

Saturday, February 13th, 2010

Usually we have one Infrastructure Master in the domain, which is responsible for maintaining references to objects in other domains – such as users which are members of a group in a different domain – to make sure that if the target object (the user) is renamed, moved, or its distinguishedName otherwise changes, it can still be found. It does this by creating phantoms (small objects which contain only the distinguishedName, SID and GUID).

Actually – making it more complicated but accurate – those group memberships are not maintained by referencing the data directly (a group in the database does not contain the data of its members) but by referencing objects by their database row (an ID called the DistinguishedNameTag, or DNT). So if we add a user to a group, a new entry is created in a link table in the database, with the forward link referencing the DNT of the user and the backward link referencing the DNT of the group. The phantoms are needed so that there is a database row for the target object – otherwise there wouldn’t be a DNT to reference as the target.

The second role of the Infrastructure Master is simply to be a single, well-known machine in the domain: whenever we need to run an operation against the domain and make sure it hits a specific DC – and always the same one if we run it multiple times – the Infrastructure Master is used (e.g. for domainprep, rodcprep, …).

The second role is the reason why we have one IM per application partition, see my post “How many Infrastructure Masters do you have” about it.

So talking about reference updates – the primary reason for the IM – this is also the reason why an Infrastructure Master cannot run on a Global Catalog: it uses the GC (which knows about the objects in other domains anyway) to validate its local data against the data of the GC. For more about GCs vs. IM see “Global Catalog vs. Infrastructure Master”.
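As a side note: if you want to check which DC currently holds the Infrastructure Master role, a minimal sketch using the AD PowerShell module (assuming RSAT is installed; netdom query fsmo works as well):

```powershell
# Requires the ActiveDirectory module (Windows Server 2008 R2 / Win7 RSAT)
Import-Module ActiveDirectory

# The InfrastructureMaster property of the domain object names the current role holder
(Get-ADDomain).InfrastructureMaster
```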

But how do we get more Infrastructure Masters (for reference updates) in the domain?


When you are running all DCs on Windows Server 2008 R2, turn on the Recycle Bin. There you go. This will enable a reference update task running on every DC which is not a GC.

The reason behind this? When the Recycle Bin is enabled, the objects we knew before as tombstones are now deleted objects with all data maintained, and we are able to restore them. Therefore we need to maintain reference updates for deleted objects as well, and those changes on deleted objects are not replicated to other DCs. Additionally we need to maintain links – links that point to or from deleted objects need to be “marked” as deactivated, so that it is possible to reactivate them when the object is restored.
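Enabling the Recycle Bin is a one-time, forest-wide (and irreversible) operation – a sketch, assuming the forest is at the Windows Server 2008 R2 forest functional level and contoso.com is your forest root:

```powershell
Import-Module ActiveDirectory

# Irreversible: once the Recycle Bin is enabled it cannot be turned off again
Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' `
    -Scope ForestOrConfigurationSet -Target 'contoso.com'
```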

Actually I will cover the recycle bin among a lot of useful information at TEC – if you are there come to my session:

A DS Geek’s Notes from the Field – Active Directory Recovery Unveiled
Speaker: Ulf Simon-Weidner

You’ve got R2 and enabled the Recycle Bin, so no other actions are necessary to prepare for an AD recovery? Or you haven’t yet deployed R2 (or switched to the forest level)? Are you aware that even with today’s possibilities you are not prepared for every scenario? You have to blend in certain features. You also have to manage them and adjust your processes accordingly! This session will give you insight into experiences and practices from a field perspective: what can go wrong, and what you should do to manage and look after AD in a proactive way. In this session, you’ll hear experiences from the field about Active Directory disaster prevention and recovery, along with interesting thoughts, scripts and scenarios. Think beyond and get inspired. This session will distinguish you from the admins who keep their CVs updated in case anything goes wrong – you’ll be one of the ones who are prepared instead.

Adjusting the Tombstone Lifetime

Wednesday, February 10th, 2010

I just had a pretty interesting discussion via a mailing list with some other Active Directory MVPs and some members of the Active Directory Product Group in Redmond.

As we know, there is a new default for the tombstone lifetime in Active Directory. The discussion initiated because there is an article on Technet which is incorrect: Currently point 8 states that the tombstone lifetime, if it is <not set>, depends on the version of the Operating System of the first DC in the forest. However this is not correct and the article is already being changed.

If you are not familiar with tombstones, I wrote Some details about Tombstones, Garbage Collection and Whitespace in the AD DB a while ago. Basically, a tombstone is an object which is deleted; however, a small part of it is maintained in AD for 60 or 180 days (by default) to make sure that all DCs receive the information that the object needs to be deleted. When the 60 or 180 days are over (this is the tombstone lifetime), every DC will delete the object locally (this is not replicated – each DC simply calculates whether “time-of-deletion + tombstone-lifetime < now” and, if so, cleans the object up). This “cleaning up” is done during garbage collection, which runs every 12 hours by default.

The tombstone lifetime therefore is also the limit of the “shelf life” of a backup – if you used a backup which is older, it would reintroduce objects which were already deleted, so the maximum age of a backup is the same as the tombstone lifetime.

In Windows Server 2003 SP1 Microsoft decided to increase the tombstone lifetime to 180 days, as I wrote in Active Directory Backup? Don’t rush – you’ll get more time. However, in Windows Server 2003 R2 there was a minor slip, so that version reintroduced 60 days. To clarify: this only changes if you set up a new forest, and the value will depend on the operating system version of that first DC.

Operating System of first DC    | tombstoneLifetime (days)
Windows 2000 Server             | 60
Windows Server 2003 w/o SP      | 60
Windows Server 2003 SP1/2       | 180
Windows Server 2003 R2 (SP1)    | 60
Windows Server 2003 R2 SP2      | 180
Windows Server 2008 and higher  | 180


You can verify what your tombstone lifetime is by looking at the attribute "tombstoneLifetime" of the object cn=directory service,cn=windows nt,cn=services in the Configuration partition:

dsquery * "cn=directory service,cn=windows nt,cn=services,cn=configuration,dc=<forestDN>" -scope base -attr tombstonelifetime

If the attribute has a value, the tombstone lifetime is that value in days; if it has no value, it is 60 days. What changed the default to 180 is the file schema.ini, which creates the default objects in a new AD. The schema.ini of Windows Server 2003 SP1 and higher (see the table above) simply sets the value 180 in the attribute tombstoneLifetime.
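If you prefer PowerShell over dsquery, the same attribute can be read with the AD module (a sketch, assuming RSAT is installed):

```powershell
Import-Module ActiveDirectory

# Locate the Directory Service object in this forest's Configuration partition
$config = (Get-ADRootDSE).configurationNamingContext
Get-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,$config" `
    -Properties tombstoneLifetime |
    Select-Object tombstoneLifetime   # no value means the 60-day built-in default
```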

Is it recommended to adjust the Tombstone-Lifetime to the new default?

Over the years there were many infrastructures whose DCs didn’t replicate within 60 days, leading to replication issues and lingering objects. There were many cases within Microsoft PSS, and I’ve also seen a couple of infrastructures where I had to fix this. Therefore Microsoft decided to raise the default tombstone lifetime to 180 days, which also extends the lifetime of your backups. It is up to your company to decide whether to change the tombstone lifetime to the new default.

In the E-Mail-Thread we were also discussing if there are any issues with changing the tombstone lifetime.

If you lower the tombstone lifetime, there is no issue. The garbage collection process will be a bit busier (usually it only needs to clean up changes from a 12-hour timeframe 60 or 180 days ago, but if we go down from 180 to 60, garbage collection needs to clean up the changes of 120 days the next time it runs). However, this shouldn’t lead to a performance issue, and if you think it’ll be an issue you can stage it (e.g. moving from 180 to 150, waiting at least for replication + 12 hours, then going from 150 to 120 and so on).

However, if you want to raise the tombstone lifetime, e.g. from 60 to 180 to match the new default, there’s one scenario which needs to be considered:

Let’s say we have two DCs, DC-Munich and DC-LA (L.A. because that’s where The Experts Conference will be in April). On DC-Munich we change the tombstoneLifetime from <not set> (=60) to 180. When garbage collection runs on DC-Munich it is bored – it already cleaned up all changes from 60 days ago, and we instructed it to keep everything for 180 days now, so for the next 120 days garbage collection does not need to do anything. However, a bit later DC-LA (which hasn’t received the replicated tombstoneLifetime change yet) runs garbage collection and cleans up everything which happened in the 12-hour timespan 60 days ago.

In this scenario, DC-Munich has objects (tombstones) which were cleaned up on DC-LA, leading various detection mechanisms to identify them as lingering objects (repadmin will detect them, as well as various update processes, which will prevent you from doing operations like schema updates for the next 120 days). This will resolve itself after 120 days, however it is pretty inconvenient.

To increase the tombstoneLifetime in big infrastructures, there is only one valid solution: control garbage collection.

  • Make sure that garbage collection does not run right after you change the attribute; after changing the attribute, force replication and make sure it has replicated everywhere.
  • To achieve that, lower the tombstone lifetime before increasing it: e.g. set it to 55 and make sure it has replicated everywhere, then wait at least 12 hours or ensure that garbage collection has run on all DCs. This ensures that there are no objects which garbage collection needs to take care of for the next couple of days. Then increase the tombstone lifetime to the value you intended, e.g. 180 days. Make sure that replication works and every DC gets the update within the next few days, and you are on the safe side.
    Thanks to Jesko, who discussed this scenario with me – I was wrong: increasing always causes trouble with lingering objects. Controlling garbage collection is the only way to go.
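For reference, the attribute itself can be changed with one call per step – a sketch, assuming the AD PowerShell module; the staging logic (replicate, wait for garbage collection on all DCs) still has to happen between the two calls:

```powershell
Import-Module ActiveDirectory

# The Directory Service object in the Configuration partition holds tombstoneLifetime
$dsObject = "CN=Directory Service,CN=Windows NT,CN=Services,$((Get-ADRootDSE).configurationNamingContext)"

# Step 1: lower the lifetime first (e.g. 55 days), then let it replicate everywhere
Set-ADObject -Identity $dsObject -Replace @{tombstoneLifetime = 55}

# ... wait for replication plus at least one garbage collection cycle (12h) on all DCs ...

# Step 2: raise it to the intended value
Set-ADObject -Identity $dsObject -Replace @{tombstoneLifetime = 180}
```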

I think this scenario is very interesting, so I wanted to share it.

Previous Versions in Windows Home Server

Saturday, January 9th, 2010

Hi there and happy new year!

Last year the server I used at home died, and since it was pretty customized it’s also pretty ugly to repair. I used it as a virtualization host and file server, with three hard drives – the first for the operating system and stuff I don’t need to be highly redundant, the other two mirrored with all the data I prefer to keep (photos, projects, personal stuff, music I’ve bought). Even the home drive of my laptops is just a share which is always synchronized for offline usage. Remote access was possible either using SSTP (VPN via SSL, built into Windows Server 2008+ and Vista+) or Remote Desktop Gateway (RDP via SSL, same OS requirements).

So … server dead … no money … but highly important data on it. So I did some research, got recommendations from fellow MVPs, and decided to go with Windows Home Server, and set it up the same way (OK, without virtualization and the Windows Server 2008 features, but it works for now until the budget allows me a virtualization host again – and even then I’ll keep the Home Server and run the virtualization separately; WHS is a great product and now the base of my home network, data backup and recovery, and home media strategy).

However, to get back to the subject…

Today I consolidated some of the data, made an error, and deleted stuff from one share (personal) which was not yet in the project share. However, I had implemented Volume Shadow Copies and should have been able to get the files back via the Previous Versions client. So I went into Previous Versions and located the files – they were still there – but I was unable to open, copy or restore them. I always got the message “Das Gerät ist nicht angeschlossen”, which translates to “The device is not connected”. Weird. After searching some German Home Server forums, I found the statement that VSS (Volume Shadow Copies, the supporting technology behind Previous Versions, Windows Backup and AD snapshots) is not supported on Windows Home Server but is on by default because MS might use it in the future. However, WHS also keeps your data redundant across multiple drives, and in the forums it was mentioned that the data in the shares is like tombstones which point to the real data in other locations.

To make the post not overly long, this is how you get previous versions back on a Windows Home Server:

  • Open up \\servername\d$\DE\shares\ (you also need to go via UNC if you do it from the WHS console – Windows Server 2003, which WHS is based on, only supports Previous Versions via UNC or mapped drives, not locally).
  • Navigate to the folders or files and use previous versions there, then copy the files back to \\server\share.

This is because:

  • \\server\share is the location where the tombstones of the data are stored; if you navigate there via Previous Versions you get the structure, but only tombstone files which you can’t access or restore.
  • \\server\d$\DE\shares is one of the locations where the real data is stored. This might vary depending on your setup (I’m not sure if it’s always d$ or if it depends on how the drives are configured) and across which volumes the data is kept redundant (which is decided automatically by WHS).
  • \\server\c$\FS\<driveletter, e.g. F>\DE\shares would work as well; however, VSS/Previous Versions apparently has issues with the mount point, so you need to create a “help share”, e.g. at c:\fs\F\DE\shares, and then navigate via the new share [1].

Note: There are some things to consider:

  • WHS automatically decides where to keep the data redundant, so you might have to search across the volumes (d:\de\shares, c:\fs\f\de\shares, c:\fs\g\de\shares, …)
  • Shadow copies use by default 12% of the volume’s space. If the “changed data” exceeds this limit, the oldest snapshots are released. Since it is likely that the volumes on your Home Server have different sizes (which is the default if you have two similar hard drives in your WHS, since the first one carries a volume for the OS, usually 20 GB), the storage reserved for Volume Shadow Copies differs per volume. Therefore you might be able to access older data on one of the volumes which is not available on the others.
  • Since I don’t know exactly how the “redundancy algorithm” of WHS works (and I don’t need to – that’s the beauty of WHS), I recommend not restoring the data to the original paths (d:\de, c:\fs\f\de, …) but copying it to the default shares.

I hope this is valuable information to some WHS-Users out there, it would have been valuable for me earlier today ;)


Happy weekend,



[1] The issue here is apparently that the Previous Versions client gets the information whether Volume Shadow Copies are set up from the share it accesses. This is not the case on the C drive by default. However, even if we enable Previous Versions on the C drive, the Previous Versions client will only show the Volume Shadow Copies of the C drive and not those of the mount points, so I recommend keeping VSS turned off on the C drive (ehm – volume).

Using AD-Powershell to protect OUs from accidental deletion

Wednesday, November 11th, 2009

If you use Active Directory Users and Computers from Windows Server 2008 or higher (it also ships with the Remote Server Administration Tools for Windows Vista and Windows 7), or the Active Directory Administrative Center in Windows Server 2008 R2 or the Win7 RSAT, newly created OUs are protected from accidental deletion. However, this does not apply to OUs which were there before (migrated) or OUs which are created another way.

Therefore, during migrations or when you still run downlevel versions of the administration tools, I recommend protecting OUs from accidental deletion – but you need a better way to do it than looking into the Object tab of each OU (with Advanced View selected).

PowerShell v2 and the new Active Directory cmdlets make this easy for us:

First you need to import the Active Directory Commandlets:

import-module ActiveDirectory

Then you query all OUs and pipe them into the Set-ADOrganizationalUnit command, specifying the “flag” which protects the OUs from accidental deletion:

Get-ADOrganizationalUnit -filter * | Set-ADOrganizationalUnit -ProtectedFromAccidentalDeletion $true

Easy, right?
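If you’d rather see which OUs are unprotected before changing anything, here is a read-only sketch (note that the property has to be requested explicitly):

```powershell
Import-Module ActiveDirectory

# List only the OUs which are not yet protected from accidental deletion
Get-ADOrganizationalUnit -Filter * -Properties ProtectedFromAccidentalDeletion |
    Where-Object { -not $_.ProtectedFromAccidentalDeletion } |
    Select-Object DistinguishedName
```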

If you want to put this in a scheduled task, simply use the following commandline (in one line):

powershell.exe -command "&{import-module ActiveDirectory; get-ADOrganizationalUnit -filter * 
| set-ADOrganizationalUnit -ProtectedFromAccidentalDeletion $true}"

djoin.exe not a Powershell command

Monday, November 9th, 2009

A speaker I respect raised the question whether Microsoft’s strategy is consistent: they are basing everything on PowerShell, yet the djoin.exe command is not a PowerShell command.

Interesting one, but also very understandable if you think about it. Djoin.exe was created to provide the following possibilities in Windows Server 2008 R2 and Windows 7:

  • Create a computer account in the directory and store a file to support an offline join of the computer to the domain
  • Offline-join the computer to its account using the file created in the prior step
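As an illustration (the domain, machine name and file path are made up), the two steps look roughly like this:

```powershell
# Step 1 - on a machine with connectivity to the domain:
# provision the computer account and save the join blob to a file
djoin.exe /provision /domain contoso.com /machine PC-0815 /savefile C:\Temp\odj-blob.txt

# Step 2 - on the offline client (e.g. during deployment):
# consume the blob; the machine is domain-joined on next boot
djoin.exe /requestODJ /loadfile C:\Temp\odj-blob.txt /windowspath $env:SystemRoot /localos
```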

The Active Directory Domain Services product group has created a lot of PowerShell cmdlets to support management of Active Directory on Windows Server 2008 R2. You can even download the Active Directory Management Gateway Service to support the PowerShell commands running against Windows Server 2003 (R2) or Windows Server 2008 (without R2). The Management Gateway provides the Active Directory Web Service, which is used by PowerShell and the new Administrative Center. The Web Service is automatically there if you install a Windows Server 2008 R2 domain controller, so you don’t need the Management Gateway there.

The Active Directory PowerShell cmdlets are available on Windows Server 2008 R2, or on Windows 7 with the Remote Server Administration Tools for Active Directory installed. If a system doesn’t have those installed, the Active Directory module and its cmdlets are simply not available.

As I said before, one of djoin’s two main responsibilities is to join computers offline to the domain, either in scenarios with RODCs (e.g. in the DMZ) or for mass creation/joining, e.g. if you have your hardware vendor or distributor preinstall machines for you.

So – would we want to install the Remote Server Administration Tools for Active Directory on Clients or member servers just to join them to the domain? Nope. Would we want to have multiple powershell-modules for AD (e.g. one for server management, one for joining domains, one for directory data management, …)? Nope.

So I guess an exe for this purpose is OK, and I also guess that this is the reasoning behind it.

Clarifications of a stopped Active Directory

Tuesday, September 15th, 2009

In Windows Server 2008 you are able to stop Active Directory-Domain Services using the services snap-in or by typing

net stop ntds

However, this is for servicing only and not a state in which the DC is intended to be kept for a longer period. Stopping AD is intended for servicing NTDS where a stopped AD is needed (such as in Directory Services Restore Mode, DSRM) but where there is no need for completely flushed memory and stopped dependencies. So what you can do are things like an offline defragmentation of the database, moving the database, and so on.

I think this is a good feature. Yes, it would be great to be able to do other things. Yes, it would be great to restore AD without going into DSRM. There are things which would be nice. However … it’s better than before, and that’s what is important.

I love to do things using scripts. I love to use a toolbox of scripts I’ve used before. Imagine – in the past, doing an offline defrag of the Active Directory database required rebooting into Directory Services Restore Mode, logging on as the local admin (= DSRM admin), then running ntdsutil with the options to do an offline defrag into new files, then copying the new files over the old ones, and rebooting again into full mode.

However, in Windows Server 2008 and above it is as easy as stopping NTDS, offline defrag, moving, starting NTDS.
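The whole procedure can be sketched in a few lines – paths are the defaults and just examples; as always, have a verified backup before compacting the database:

```powershell
# Stop AD DS (dependent services are stopped as well)
net stop ntds /y

# Compact the database into a new file in C:\Temp (offline defragmentation)
ntdsutil "activate instance ntds" files "compact to C:\Temp" quit quit

# Replace the old database with the compacted copy and remove the old log files
# (assumes the default NTDS path; ntdsutil prints the exact instructions)
Copy-Item C:\Temp\ntds.dit C:\Windows\NTDS\ntds.dit -Force
Remove-Item C:\Windows\NTDS\*.log

# Start AD DS again
net start ntds
```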

It is important to keep in mind that while you can stop NTDS, it’s not meant to stay stopped for a longer period.

However, a few things made me worry that this feature is not well understood:

  1. It’s not a state to keep for a longer period, not a replacement for recovery-DCs (which are turned off in the closet).
  2. It is not a replacement for DSRM when it comes to system state recovery / authoritative restore from a restored backup. If you need to restore a system state backup, the only supported way is to do it in DSRM.
  3. Authoritatively marking objects which haven’t been replicated to the DC in question is OK; the same goes for file-management operations other than restoring a backup (the content of the dit basically needs to remain the same).
  4. You can’t log on with the DSRM admin when NTDS is stopped. This hit someone in the beta timeframe who had a single DC, stopped NTDS, stepped away for some time (the screen saver kicked in) and couldn’t log on. DSRM logon is not possible by default with a stopped NTDS when there are no other logon servers available (if there are, e.g. you have a second DC, they authenticate you on the DC with the stopped NTDS).
    DSRM-admin logon (which equals local admin on a DC) with a stopped NTDS is only available on Small Business Server (by default) or if you modify the registry value DsrmAdminLogonBehavior (REG_DWORD) under HKLM\System\CurrentControlSet\Control\Lsa:
    Value 0: DSRM logon only when in DSRM (default)
    Value 1: DSRM logon when NTDS is stopped (or in DSRM) (default on Small Business Server)
    Value 2: DSRM logon always
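For reference, the registry value in question is DsrmAdminLogonBehavior under HKLM\System\CurrentControlSet\Control\Lsa; setting it from an elevated prompt might look like this (1 allows DSRM logon whenever NTDS is stopped):

```powershell
# Allow the DSRM admin to log on while the NTDS service is stopped
reg add HKLM\System\CurrentControlSet\Control\Lsa /v DsrmAdminLogonBehavior /t REG_DWORD /d 1 /f
```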

HTH, Ulf

Exchange 2010 RC touches AdminSDHolder

Wednesday, September 9th, 2009

I was just pointed to the blog of David Loder, who points out that the Release Candidate of Exchange 2010 changes the permissions enforced by AdminSDHolder on critical groups, to allow Exchange Organizational Admins to change the group memberships of Enterprise Admins, Schema Admins, Domain Admins and so on.

OK, one of Microsoft’s Program Managers has already responded, and I do agree that this is not a released product and pre-release versions are there for finding exactly those bugs.

I’d just like to say:

@Ross: This is not that hard to fix:

  • for existing OUs, stamp it in their ACLs (preferably at the top level – if inheritance is blocked on lower levels, check for that and point to a KB article)
  • for new OUs change the defaultNtSecurityDescriptor of the OU-Class in the schema
  • don’t touch adminSdHolder ;)

The first one will make sure that existing OUs allow Exchange Admins to control group memberships (actually, I’d even like to discuss whether this is necessary – usually group membership administration is not done in the same instance where groups are mail-enabled; the first is a generic help-desk task, the second an Exchange admin task).

I’d also prefer – if OUs are touched – that if the organization decided to block security inheritance at one point, a new version of some software shouldn’t go beyond that point, but respect the design and warn them about the consequences.

The second suggestion makes sure that new OUs will get the permissions by default when creating the OU.

The third suggestion makes me think about two things:

  • is there no process for infrastructure-critical changes such as changing the adminSdHolder (I’d think that the Active Directory product group should be involved if something like this happens – how should they ensure security if other groups are meddling with their mechanisms)?
  • why is this coming up in the RC? If a product is at Release Candidate level, it’s mostly finished, and usually not many changes are approved afterwards (unless they are critical). I hope that this will be fixed!

Thanks David for finding this one, very interesting, and I hope it’ll be fixed!

See David’s blog for his post

See my blog-post about AdminSdHolder


And since I wanted to mention this: if you are in Europe (or want to come), The Experts Conference (TEC) is in Berlin next week and it is THE place for Active Directory and Exchange.

Windows 7 and Windows Server 2008 R2 are finished

Thursday, July 23rd, 2009

Windows Server 2008 R2 and Windows 7 were RTMd yesterday (July 22nd). The products are finished now; the code won’t be changed, and preparations for making them available via download as well as producing the DVDs have started. Read the announcements on the official blogs:

The products will be available soon, depending on the channel:


  • MSDN/Technet: W7 Aug 6, R2 Aug 14
  • Partner Network: W7 Aug 16, R2 Aug 19
  • Action Pack: Aug 23 (W7 & R2)
  • Volume Licensing (SA): W7 Aug 7, R2 Aug 19

The evaluation version of Windows Server 2008 R2 will be available on August 20th on the following page:

Win7 will be available in stores on October 22nd, R2 on September 14th. The pre-order of Win7 has already been running for a couple of weeks.

Windows 7 will only be available in English on the dates above; additional languages will follow in early October. WS08R2 will be available in a couple of languages such as English, German, Spanish, …, and additional languages will be made available later in September.


I’m very excited to get the final versions of both products – I’ve loved and used them in production since the Release Candidate and can’t wait to install the final versions instead! Congrats, Microsoft!!! What a release!