
LeftHand Networks – 256 TB LUN

What an awesome way for a geek to spend an evening.


Rod Fournier and I met with LeftHand Networks in Boulder, Colorado last night for about three hours to review their iSCSI technology. It is easy to manage, and with its multipathing capabilities it is very fast (and it will be incredible once 10GigE is out).


I left there a very happy camper, and I didn’t even get any schwag.


Why was I so happy? Well, I am glad you asked that. We found that LeftHand Networks can create, and will support, a single LUN (yep, one LUN) of up to 256 terabytes. What else made me so happy? Their storage modules include a 2U unit that holds 6 TB of raw storage. Yep, 6 TB in 2U. Yes, really!


So, we asked them to demo the ability to create really large LUNs. They created a 40 TB LUN for us, and it took about five minutes for it to format and become available. So, I am still happy.


OK, let's cluster it. Whoa, wait… it's over 2 TB, so it is a GPT disk, not an MBR disk. While it shows up as a basic disk (which it is), it uses a GPT partition table, so it can't be used. In Windows Server 2003 Enterprise (and Datacenter), you can't use GPT drives for clustering. You can use them in single-server implementations, but not as shared (yeah, they are not really shared – it is a shared-nothing model) disks in a cluster. Crap. So what next?
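
A quick way to tell which disks are GPT is diskpart's list disk output: an asterisk in the Gpt column marks a GPT disk. The output below is only illustrative (the disk numbers and sizes are made up for this example):

    C:\> diskpart
    DISKPART> list disk

      Disk ###  Status      Size     Free     Dyn  Gpt
      --------  ----------  -------  -------  ---  ---
      Disk 0    Online        68 GB      0 B
      Disk 1    Online      1024 MB      0 B
      Disk 2    Online      6144 GB      0 B        *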


Brainstorm time… As we bounced ideas off each other, I came up with an idea. I am not sure if it will really work, but it makes some sense. We did a quick test by creating a single-node cluster using the demo guy's notebook (he was running 2003 Enterprise), and it worked there. The idea is to mount the large drive inside a smaller (and supported for clustering) drive as a folder. The cluster service will control access to the smaller drive, and thus control access to the larger drive that is mounted to it. So, we tested it by doing this (a rough command sketch of these steps follows the list):



  1. Create a small 1 GB disk just to have something that can be controlled and managed by the cluster service, and expose it to both nodes of the cluster.
  2. Create a drive over 2 TB using GPT (we created a 6 TB drive) and expose it to both nodes of the cluster.
  3. Mount the 6 TB drive inside the 1 GB drive as a folder (a volume mount point).
  4. Configure the 1 GB drive in clustering as a physical disk resource.
  5. The result? It worked. You could easily access the 6 TB drive through the new physical disk resource via the cluster service.
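
For anyone who wants to try this, here is the kind of command sequence behind steps 1 through 4, run from one node. It is a sketch, not exactly what we typed during the demo: the disk numbers, the Q: drive letter, the BigLun folder name, and the cluster group name are all placeholders, and in practice the physical disk resource was added through Cluster Administrator rather than cluster.exe.

    rem Step 1: on the small 1 GB LUN, create an MBR partition and assign Q:
    rem (disk numbers, the Q: letter, and the BigLun folder are hypothetical).
    DISKPART> select disk 1
    DISKPART> create partition primary
    DISKPART> assign letter=Q
    DISKPART> exit

    format Q: /fs:ntfs /q
    md Q:\BigLun

    rem Steps 2 and 3: convert the big LUN to GPT, partition it, and mount it
    rem into the empty Q:\BigLun folder instead of giving it a drive letter.
    DISKPART> select disk 2
    DISKPART> convert gpt
    DISKPART> create partition primary
    DISKPART> assign mount=Q:\BigLun
    DISKPART> exit

    format Q:\BigLun /fs:ntfs /q

    rem Step 4: add Q: to the cluster as a Physical Disk resource
    rem (normally done in Cluster Administrator; the cluster.exe line is illustrative).
    cluster resource "Disk Q:" /create /group:"Cluster Group" /type:"Physical Disk"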

The next questions are:



  1. Will it really work with multiple nodes?
  2. Will Microsoft support it?

I can’t see Microsoft supporting this, but we shall see what happens.




Update – an answer: Thanks to some great responses, we have decided that this just won't work reliably. After all, at failover to another node, there is nothing that will make sure the large drive's cache gets properly flushed. So, the new focus will have to be on using a third-party product that provides dynamic disk support for Microsoft clustering, such as Veritas.