Small Business Susan

Expanding drives in Hyper-V

The nice thing about Hyper-V is that if you screw up and pick a lousy size for a C drive, you can easily make it bigger.


Expanding Virtual Hard Disks with Hyper-V:
http://www.petri.co.il/expanding-virtual-hard-disks-with-hyper-v.htm


When figuring out the size of a parent, there’s the C drive of the parent to worry about on the physical disk.  It should be big, but not too big.  After all, it’s just going to be the Windows OS running the Hyper-V role; nothing else will be installed.  So it’s the base OS plus patch bloat.


As a guideline for how much you kinda want to have: I have a three-year-old Win2k8 R2 parent with an 88 gig C drive and 37 gigs free.  That’s roughly 50 gigs of bloat growth over a three-year period.  These days it’s not growing that much, but it’s still kinda worrying that 50 gigs is merely for a parent for the important servers.


If you lay down an SBS 2011, remember you can move SharePoint/Exchange over to another drive (the SBS console’s Move Data wizards are there for exactly that).  I think 80-120 gigs is comfy for the C drive on SBS 2011 Standard, with as much as possible moved over to other drives.


For the exact how-to on expanding drives in Hyper-V, follow:


http://www.petri.co.il/expanding-virtual-hard-disks-with-hyper-v.htm


and


http://www.petri.co.il/extend-disk-partition-vista-windows-server-2008.htm


to give yourself more breathing room.
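
If you’d rather skip the wizards in those articles and do it from a command prompt, here’s roughly what it looks like with DISKPART.  The VHD path, size, and volume number below are just examples, sizes are in megabytes, and the VM must be shut down (with no snapshots hanging around) before you touch its VHD.  First, on the parent:

    rem -- run in diskpart on the host, with the guest shut down
    rem -- path and size are examples only; maximum= is in MB (122880 MB = 120 gigs)
    select vdisk file="D:\VMs\SBS2011\sbs-c.vhd"
    expand vdisk maximum=122880

Then boot the guest back up and, inside the VM, extend C: into the new free space:

    rem -- run in diskpart inside the guest
    list volume
    rem -- pick the C: volume from the list (assumed to be volume 1 here)
    select volume 1
    extend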




3 comments

  • Ben Krause on 09.03.12 at 6:37 am

    I’m starting to wonder why not just have a single large hard drive nowadays. I know it’s been a best practice to have a separate OS and DATA partition/drive (and I can’t remember all the reasons why other than it was how you did it), but is it still a best practice, and if so, why?

    I used to think it was for performance, but my thinking now is that if you have a RAID x and split it into two partitions, it seems you will still put the same amount of stress on the drives. Even if you have two different RAID x’s on the same controller, you would still be putting the same amount of stress on the controller. The proper way, then, would be to have two controllers with two RAIDs to take advantage of the performance benefits of two sets of drives. In most configurations I have dealt with, it is too costly to go that route, or there is not enough room for the hard drives to do two RAID 10s (my preferred RAID).

    I think it’s something that should be discussed anyway, but those are my thoughts on things so far.


  • Ben Krause on 09.03.12 at 6:48 am

    I found this discussion about one big drive vs. multiple drives for servers:

    http://community.spiceworks.com/topic/242417-best-raid-setup-for-windows-2008-r2-file-server

    I tend to take Scott Alan Miller’s advice over others when it comes to RAID and storage questions.


  • Joe Raby on 09.03.12 at 7:50 pm

    For small servers, I’ve stuck to RAID 1. RAID 10 gives you additional performance due to the striping, but the cost of SSDs has come down to a point where you could either use one as a cache drive or just use them as your primaries. I don’t think there is a valid argument for striping SSDs either. Mirroring, though? Absolutely! Or at least, if you use an SSD as a caching drive, make sure the cached data is also on the hard drive in case the SSD fails.

    For RAID 5 or 6, you’ll need a real hardware controller (make sure it has a real processor and RAM…AND a battery backup!) for the parity information, because rebuilds will be SLOW with software. For RAID 0 and RAID 1, I’ve found that hard drives are getting to a point where you just can’t trust them in an array unless they are a) the same brand, b) the same model, c) the same firmware, and d) the same manufacturing lot. Running them on a hardware controller is a waste of money; there is no benefit to it.

    So now you’re looking at whether you use a dumb controller (a BIOS-level software controller) or Windows Server. Hands down, I say stick with the software in the OS, and here’s why: Windows Server’s software mirroring has been shown to be faster than dummy controllers like Intel Matrix Storage (or its server version, which is usually run on LSI controllers). It’s also far more flexible with mixed drive types. Ever have to replace a drive in an array? The drive manufacturer requires you to send them all in. I’ve never seen a drive fail in a Windows Server mirror array just because it wasn’t the same make and model, because Windows Server’s “RAID” understands NTFS. It understands how files and folders are stored, and it can do a far better job of recovering data than a dumb controller. Dummy controllers only duplicate blocks and don’t understand the underlying filesystem.

    Just a tip, though: in Windows Server, all the way up to 2008 R2, the GUI Disk Management console only supports mirroring up to 2 volumes/partitions per disk. This means that if you, say, use 2 large drives and want to partition them further, the GUI stuff won’t work for you. It’s not a problem, though; just learn to use DISKPART. It will mirror every partition. What I like to do is mirror EVERYTHING, not just the data or OS partitions, and that includes the boot partition. Why? Say you only mirror the OS and data partitions and then one drive fails. WHOOPS! No more bootloader. Not a problem for DISKPART. If one drive fails completely, it’s not the end of the world; Windows still boots. I still make sure my drives are enterprise-ready so that they don’t lose sync, but losing sync has actually never happened to me with Windows Server’s mirroring. Dumb RAID controllers? Several times, with long downtime to back up to alternate mirrored disks before I got replacements for every drive in the mirror.
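
    For example, a bare-bones DISKPART run looks something like this (the disk and volume numbers are just examples, and both disks have to be dynamic before you can mirror):

        rem -- Windows software mirroring requires dynamic disks
        select disk 0
        convert dynamic
        select disk 1
        convert dynamic
        rem -- mirror C: onto the second disk
        select volume c
        add disk=1
        rem -- repeat select volume / add disk=1 for every other partition,
        rem -- picking the letterless system/boot partition by volume number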

    One thing I’m looking forward to is Storage Spaces. It looks extremely interesting. I think the new “Home Server” is really just going to be a Windows 8 PC with Storage Spaces that’s used as a headless storage, backup, and streaming machine. Isn’t that exactly what Home Server was all about anyway? I think Storage Spaces will also make Storage Server redundant, but I think it’ll help in the mainstream server, erm, space too.