The longer I work, the more aware I become of a simple fact: even the most routine and mundane thing, technology, or tool has something to teach you. You never really know what a cake is made of unless you try to bake it yourself =)
The same can be said about, say, chkdsk. Do you think you need to know anything about chkdsk beyond its command-line switches? Well, if you don’t have an inquiring mind, then probably not. But then you probably don’t know what impact it can have on your environment either. For example, imagine a fairly common situation: your file server has been growing with the company until you finally got your own very special SLA for it. That SLA was negotiated with IT, and everyone took practically everything into account:
- time to recover any subset of the information (some bits are needed ASAP, while others can wait a while)
- time required to recover broken equipment
- and so on and so forth.
But one fine day your volume (which stores about 500M small files) gets marked as “dirty” and goes into chkdsk after a reboot… Had you incorporated those 99 hours (!!!) of downtime into your SLA? I hadn’t =(
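To get a feel for what that means for other volumes, here is a minimal back-of-envelope sketch. The only data point is the one from this post (about 500M files taking roughly 99 hours); the assumption that chkdsk time scales linearly with file count is mine, so treat the numbers as a rough planning aid, not a prediction:

```python
# Rough chkdsk downtime estimate for SLA planning.
# Assumption (mine): chkdsk duration scales roughly linearly with file count.
# The reference point is the figure from this post: ~500M files -> ~99 hours.

OBSERVED_FILES = 500_000_000  # files on the volume in the story
OBSERVED_HOURS = 99           # offline chkdsk time observed for that volume

def estimated_chkdsk_hours(file_count: int) -> float:
    """Scale the observed files-per-hour rate to another file count."""
    files_per_hour = OBSERVED_FILES / OBSERVED_HOURS
    return file_count / files_per_hour

# Even a "small" 50M-file volume would mean roughly 9.9 hours offline:
print(round(estimated_chkdsk_hours(50_000_000), 1))  # -> 9.9
```

In reality the time also depends on disk speed, memory, and fragmentation, so your own measured rate on comparable hardware is a much better baseline than mine.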
Fortunately, I still have some time to think about it, all the more so because I haven’t actually run into that situation yet. And now, after reading the document titled “NTFS Chkdsk Best Practices and Performance”, I have some ideas for my future SLAs.
BTW, Windows Server 2012 will bring some big improvements to the issues described here. Read up and prepare yourself.