SSDs in servers?

So if you speak French, there are two videos on Dell’s site about SBS 2011 Essentials.


http://media.zdnet.fr/partenaire/dell/article_video.php


You’ll have to pardon me as I get back into the swing of blogging on virtualization.  Right now my Hyper-V test server is in the boot of my MINI, ready to go back to the office for tomorrow’s Fresno user group meeting on Server 2012 topics.  I rebuilt my Hyper-V box over the Memorial Day weekend to be based on Windows Server 8 so we could have a hands-on server and demo box for the meeting tomorrow night.  Yes, I do realize that there’s another version slated to come out next week (according to the OEM Microsoft web site and other reports), so I may be rebuilding that OS once more. 

One thing of interest I found is that I had to use a legacy NIC for a Server 2003 guest.  So we may have support issues with Server 2003 machines inside of Hyper-V 2012 boxes.  We’ll have to see what’s up with that later on with the RC stuff.
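If you want to reproduce that setup, here’s a minimal sketch of swapping a guest over to a legacy (emulated) NIC. It’s Python shelling out to the Server 2012 Hyper-V PowerShell cmdlets; the VM and switch names are made up for the example, and the guest has to be powered off before a legacy adapter can be added.

```python
# Minimal sketch (my example, not from the post): replace a Hyper-V guest's
# synthetic NIC with a legacy (emulated) adapter by calling the Server 2012
# Hyper-V PowerShell cmdlets from Python. VM and switch names are hypothetical.
import subprocess

VM_NAME = "SRV2003-TEST"      # hypothetical Server 2003 guest
SWITCH_NAME = "External-LAN"  # hypothetical virtual switch

def run_ps(command: str) -> None:
    """Run one PowerShell command and raise if it fails."""
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        check=True,
    )

# The guest must be off to change adapters. Remove the existing network
# adapter(s), then add a legacy adapter, which the 2003 guest can drive
# without the newer integration components.
run_ps(f'Stop-VM -Name "{VM_NAME}" -Force')
run_ps(f'Remove-VMNetworkAdapter -VMName "{VM_NAME}"')
run_ps(f'Add-VMNetworkAdapter -VMName "{VM_NAME}" -IsLegacy $true -SwitchName "{SWITCH_NAME}"')
run_ps(f'Start-VM -Name "{VM_NAME}"')
```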


… so one of the questions that comes up on various listservs is about putting in an SSD as the bootable C drive.


I don’t know… I have put an SSD into my baby laptop, and I guess I’m still old-fashioned in that I want a good couple of years of burn-in and evidence before I’m ready to rip out RAID and redundant drives on servers.  I don’t reboot a server that often, so I have yet to be convinced that having an SSD as the C drive is what we should all be doing.  That said, my benchmark is a laptop.  An OLD laptop that can’t even run Windows 8, so I don’t have any real-world experience to base my blogging on, other than just a thought of… can I see the mean life of those drives in a few years and get back to you?
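In the meantime, the math behind that “mean life” question is pretty simple, even if the inputs aren’t well known yet. Here’s a back-of-the-envelope sketch; every number in it is an assumption to play with, not a vendor spec or a measurement I’ve made.

```python
# Back-of-the-envelope sketch of SSD write endurance. All four inputs below
# are assumptions for illustration only.
capacity_gb = 160            # usable capacity of the drive
pe_cycles = 5_000            # program/erase cycles per cell (MLC-era ballpark)
write_amplification = 2.0    # controller overhead: NAND writes per host write
host_writes_gb_per_day = 20  # OS volume churn: logs, patches, page file, etc.

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
years = total_host_writes_gb / host_writes_gb_per_day / 365

print(f"Endurance budget: roughly {total_host_writes_gb / 1024:.0f} TB of host writes")
print(f"At {host_writes_gb_per_day} GB/day that lasts about {years:.0f} years")
```

Swap in your own drive size, cycle rating and daily write estimate; the point is that the projected lifetime falls straight out of those few numbers, and it’s the real-world failure data we’re still waiting on.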


More reading – http://www.storagesearch.com/ssdmyths-endurance.html


http://thecoffeedesk.com/news/index.php/2009/04/26/ssd-in-the-data-center/


http://homeservershow.com/ssd-caching-on-hyper-v.html


 

10 Thoughts on “SSDs in servers?”

  1. You can get the benefit of an SSD in Dell servers without losing the capacity or reliability of your existing RAID array. I’ve done it a number of times with excellent results.

    Only a small subset of the Dell PERC controllers support the technology, known as CacheCade, but it uses the SSD as a cache for all drives (not just the boot volume) and provides a very decent performance boost (for Exchange in particular). A rough sketch of the caching idea follows this comment.

    I blogged about adding an SSD to a Dell PE R510 here:

    http://www.tachytelic.net/2012/02/adding-an-intel-520-series-ssd-as-a-cachecade-drive-to-a-dell-poweredge-r510/

    and here is a post from StorageReview which shows some of the performance benefits:

    http://www.storagereview.com/lsi_megaraid_cachecade_pro_20_review

    The RAID controller is intelligent enough to determine if the SSD is unreliable.
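The sketch below is one rough way to picture what an SSD cache in front of a RAID array buys you. It is a toy LRU model, not LSI’s actual CacheCade algorithm: frequently read blocks get promoted into a small fast tier, and everything else keeps coming off the spinning disks.

```python
# Toy model of "SSD as a read cache in front of the array". Not CacheCade's
# real logic, just a sketch: hot blocks live in a small fast tier and the
# least recently used block is evicted when the tier fills up.
from collections import OrderedDict

class SsdReadCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()   # block id -> data, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id, read_from_array):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # refresh LRU position
            self.hits += 1
            return self.cache[block_id]        # fast path: served from SSD
        self.misses += 1
        data = read_from_array(block_id)       # slow path: spinning disks
        self.cache[block_id] = data            # promote the block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the coldest block
        return data

# Example: a skewed workload (a few hot blocks) gets a very high hit rate.
cache = SsdReadCache(capacity_blocks=100)
workload = [i % 50 for i in range(10_000)]     # 50 hot blocks, re-read often
for block in workload:
    cache.read(block, read_from_array=lambda b: f"block-{b}")
print(f"hits={cache.hits} misses={cache.misses}")
```

With a skewed workload like the one above, nearly every read ends up served from the fast tier, which is the whole sales pitch for CacheCade-style caching.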

  2. Viorel on May 30, 2012 at 9:31 am said:

    For a Hyper-V box it even seems to be a supported scenario to run from USB flash – http://nullsession.com/2009/09/running-hyper-v-server-2008-r2-from-usb/
    and here is an informative article from AnandTech – http://www.anandtech.com/show/5518/a-look-at-enterprise-performance-of-intel-ssds

  3. Chris on May 30, 2012 at 12:07 pm said:

    Hi Susan,
    Please have a look at your sources before posting links that supposedly provide a more in-depth reading of the matter. The “Are SSDs Ready For The Data Center?” article at The Coffee Desk is so full of errors that it’s not even funny anymore. This is also mentioned in the comments of said article. And the worst part: no benchmarks, long-term tests, or real-life experiences to support any of his claims. That article is a rant based on false information, nothing else.

    To come back to your article – I’m also a little hesitant to use SSDs in a server environment yet. However, the Dell T110 machines (awesome small servers otherwise) only support non-registered ECC RAM, which basically means that they’re currently limited to 16GB without paying an unreasonable premium. That’s a bit small for an SBS 2011 box with 25+ users. Throw in 2 SSDs (mirrored) for the OS, and the thing just flies without having to replace the whole server.

  4. Indy on May 30, 2012 at 12:33 pm said:

    No server should ever have unregistered RAM. 16GB of memory will have 1-2 parity errors a year on average. This defeats the entire point of having a server: reliability of your data.

    The Intel SSDs we have used for 3+ years now have proved themselves: they have never died, even under heavy workstation use. As long as you RAID them you should be fine. Be sure to get a controller that is both server-approved AND SSD-approved. Be sure to get a controller with several generations of firmware/driver updates that are mature and stable. Read the support forums for the manufacturer gotchas. You don’t ever want cutting edge (newer than 6 months) for your storage subsystems.

  5. Joe Raby on May 30, 2012 at 12:59 pm said:

    If you’re doing this in a production environment with company data, there are more than a few issues:

    First, the Z68 chipset isn’t designed for servers. It’s a consumer chipset. Next, you need to install iRST, which is not designed to run on Windows Server, and I don’t trust the stability of Intel’s consumer control panel applications to run on a production server either (for those that even install).

    Also, if you’re buying a caching drive, skip the ones that are called “Solid State Cache Drives” because they use a piece of software called Dataplex which has two problems: a) it’s consumer SSD caching software, not for server use, and b) it doesn’t support UEFI or GPT-formatted drives, so it’s useless for >2TB drives (that will have to change for Windows 8, and I contacted them and they say that they will update it “after Windows 8 ships”).

    If you’re looking for a proper drive with the lifespan to last through heavy caching, like for SQL transactions and such, make sure it’s an enterprise SSD. HighPoint has some new cards designed for SSD caching with multiple drives for RAID, and they support Windows Server 2008+: http://www.highpoint-tech.com/USA_new/series_RC3240X8.htm

    Be wary of the software that a lot of caching

  6. Joe Raby on May 30, 2012 at 1:56 pm said:

    My comment got cut off for some reason…

    The last point should have said:

    Be wary of the software that a lot of caching cards use, because a lot of them will use web-based interfaces, and often that means they use MySQL or some other database system to catalog reports and stats. I don’t know about you, but I’d hate to have to install software like that on a server. Hardware cards that don’t require said software are much preferable IMO. I haven’t used any of the HighPoint RocketCache cards, but I have used the RocketHybrid desktop card (they all use Marvell controllers) and the software stinks! Luckily there’s a workaround: install the software package so that everything goes on, then just uninstall it. The drivers will remain installed on the system if the card is present, and it still provides caching functionality internally. The extra web interface software that gets removed during uninstall is just for monitoring usage or overriding the automatic caching to cache specific folders; it isn’t necessary. The RocketHybrid is an AHCI controller, but it includes some kind of funky firmware that requires a driver, and that driver is only in the web software package. The RocketCache cards are full Marvell RAID controllers though, so they require an out-of-box driver. I don’t know if they also have that same type of firmware as the RocketHybrid card.

  7. I have to giggle at some of the remarks here. Quite a lot make good sense, but they still seem to contradict each other because they’re talking about different things.

    One server isn’t another server.

    I can imagine some people saying that they want to battle-test their hardware for three years, and that servers shouldn’t have unregistered RAM. For such servers (or admins) there are very few SSD options available that suit their needs.

    However, for an SBS server, for a company that has 5-15 workstations, SSDs are great. Even in combination with “consumer” solutions like Z68 chipsets.

    Personally, I’m not a big fan of the Z68 solution, because it’s a compromise to either save a few bucks or save having to use an extra hard disk. But for a small server, if you get two Intel SSDs, even 320/330 series, and put them in RAID0, you will have an *immensely* fast system compared to using 7.2K/10K/15K mechanical disks, for only a few hundred bucks extra (compared to 7200rpm disks) or while actually saving money (compared to SCSI).

    And for quite a few servers that are small, not meant to be very expensive, and intended for a small office (which would seem to be where SBS feels at home), a few SSDs in a server can be just wonderful.

  8. Joe Raby on May 31, 2012 at 12:18 am said:

    …name some solid-state drive manufacturers that are certifying their drives for RAID use. There aren’t many, let me tell you.

    If any drives are certified for RAID, it’s going to be the enterprise ones, but check first. You should treat RAIDed SSDs like you do hard drives too: don’t RAID them unless they are certified, or else you could drop an array pretty easily. Remember that it’s altogether possible that two SSDs could be made with memory chips with different timings, or different variants of the same controller. ALWAYS match the firmware of RAIDed SSDs, and update it to the latest version, because updates always fix bugs on these things. Any deviation can cause the array to lose sync, much like standard desktop hard drives. A quick firmware-check sketch follows this comment.

    I certainly don’t buy the idea of running a hypervisor from an internal USB stick either. To me, that sounds like one of the dumbest ideas that server board manufacturers thought up. A USB stick has a very limited number of reads and writes until the memory chips degrade, and I think to myself: “gee, do I want the most important pieces of code for my consolidated server running on the most unreliable storage medium available today – which basically replaces the obsolete, yet equally buggy click-of-death-prone Zip disk?”.
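On the firmware-matching advice above, here is a minimal sketch of one way to check it, assuming smartmontools is installed and the array members are visible to the OS; the device paths are examples and the output parsing is deliberately naive.

```python
# Sketch: read the firmware revision of each would-be array member with
# smartctl and flag any mismatch. Device paths below are placeholders.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]   # hypothetical array members

def firmware_version(device: str) -> str:
    """Parse the 'Firmware Version:' line from `smartctl -i <device>`."""
    result = subprocess.run(
        ["smartctl", "-i", device],
        capture_output=True, text=True, check=False,
    )
    for line in result.stdout.splitlines():
        if line.startswith("Firmware Version:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

versions = {dev: firmware_version(dev) for dev in DEVICES}
print(versions)
if len(set(versions.values())) > 1:
    print("WARNING: firmware mismatch across array members")
```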

  9. EricE on June 2, 2012 at 10:21 am said:

    I’ve pondered this and have also decided that SSD is just too unknown at this time to use as a primary drive. Most enterprise use of SSDs is inside storage arrays where there is redundancy – and I think that’s telling.

    If I were to pursue an SSD with my SBS server I would probably go down the LSI CacheCade Pro path. It allows me to upgrade my existing LSI RAID controller and add up to four SSD drives for caching. My existing storage volumes are untouched, my data still primarily resides on rotating disk – it just gets much faster!

    If the SSD “dies” I’m not out anything. If I only have one, my performance just degrades to what it was before I added the SSD cache. Very safe.

    If I were to go all SSD, it would have to be in some sort of RAID – either internal to the server or as part of a storage array – and that gets cost-prohibitive quickly! I think solutions like LSI’s, where you can add SSD cache to an existing RAID array, are the best compromise, especially for the SMB market. To me, Intel’s chipset-based solution is interesting, but since it’s aimed at desktops and not servers I really don’t think it’s relevant. I’d be worried about driver and data reliability issues. Especially with disk IO and SBS, I’ve learned it doesn’t pay to cut corners – stick to hardware and software designed and certified for use with servers!

    I’m not sure where the one poster who expressed concern about web-based management consoles installing their own SQL server is coming from – all the enterprise tools from Dell, HP and IBM that I have used, from general server management interfaces to RAID card management interfaces (even 100% “hardware” RAID), do similar things, and it’s been working fine for decades now…

  10. EricE on June 2, 2012 at 10:23 am said:

    @Joe: “I certainly don’t buy the idea of running a hypervisor from an internal USB stick either. To me, that sounds like one of the dumbest ideas that server board manufacturers thought up. A USB stick has a very limited number of reads and writes until the memory chips degrade”

    Er, the whole point of a hypervisor environment is that all you do is get it up and running. Once up, there shouldn’t be ANY writes, and very few if any reads. It’s perfect for this application, which is exactly why you see Intel, Microsoft, VMware and motherboard manufacturers supporting it.

    All of those companies wouldn’t support it if it didn’t make sense. And even if it does fail, the hypervisor isn’t the critical part of your system – the VMs are. Hypervisors can be recreated very quickly from scratch – even more quickly if you have a backup. You do still back up, right?
