Storage Replica is a feature of Windows Server 2016 designed for disaster recovery purposes. The replication it provides is block-level and volume-based. Storage Replica also targets disaster avoidance, since data is replicated to a remote location.
Previously, Windows offered replication only at other levels: file-level, application-level, and VM-level. Block-level replication did exist in the storage market, but vendor lock-in made it prohibitively expensive.
Storage Replica opens new opportunities for disaster recovery and disaster preparedness in the industry. With it, data is synchronously protected at two separate sites, whether in different buildings, cities, or countries.
Storage Replica provides both synchronous and asynchronous replication. With synchronous replication, data is written to two locations before the I/O completes, which makes it the way to go for mission-critical data. Asynchronous replication gives applications faster response times, so it comes in handy when the sites are too far apart.
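The difference between the two modes can be sketched with a toy model (plain Python; the names `Site`, `write_sync`, and `write_async` are our illustrative inventions, not Storage Replica's actual API): synchronous replication acknowledges the I/O only after both sites hold the block, while asynchronous replication acknowledges after the local write and ships the block later.

```python
# Toy model of synchronous vs. asynchronous replication.
# All names here are illustrative, not a real Storage Replica API.
from collections import deque

class Site:
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

def write_sync(primary, secondary, lba, data):
    """Synchronous: both copies land before the I/O completes."""
    primary.write(lba, data)
    secondary.write(lba, data)
    return "ack"              # application waits for both writes

def write_async(primary, secondary, log, lba, data):
    """Asynchronous: ack after the local write; replicate later."""
    primary.write(lba, data)
    log.append((lba, data))   # queued for the remote site
    return "ack"              # faster response, but a small RPO window

def drain(log, secondary):
    """Ship queued writes to the remote site."""
    while log:
        lba, data = log.popleft()
        secondary.write(lba, data)

a, b, log = Site(), Site(), deque()
write_sync(a, b, 0, b"critical")
assert b.blocks[0] == b"critical"   # remote copy is already there

write_async(a, b, log, 1, b"bulk")
assert 1 not in b.blocks            # remote copy lags behind
drain(log, b)
assert b.blocks[1] == b"bulk"
```

The toy model makes the trade-off visible: synchronous mode never has a window where the remote copy is stale, while asynchronous mode accepts that window in exchange for lower write latency.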
Read the full article here: https://www.starwindsoftware.com/blog/microsoft-storage-replica-in-a-nutshell
The article compares three leading products in the Software-Defined Storage market: Microsoft Storage Spaces Direct, VMware Virtual SAN, and StarWind Virtual SAN. Several use cases are considered, based on deployment scale and architecture.
Microsoft Storage Spaces Direct and VMware Virtual SAN are a perfect choice for bigger SMBs and entry-level enterprises, because their licensing is reasonable for the infrastructure types typical of these businesses. They are a poor fit for smaller SMBs and ROBOs, being too expensive and overkill performance-wise. Per-host licensing is also too expensive for the hyperconverged environments of very big enterprises. Microsoft exposes only SMB3 reasonably well, while VMware “speaks” iSCSI and NFS, which prevents either from creating a single shared storage pool in a multi-tenant environment. For database scenarios, both Microsoft and VMware have specific financial and technical issues.
StarWind Virtual SAN requires a minimalistic two-node setup and provides 24/7 support, which makes it a perfect choice for small SMBs and ROBOs. Its licensing is flexible, covering different deployment scenarios. StarWind supports the majority of industry-standard uplink protocols, so it can serve vSphere and Hyper-V environments simultaneously and provide a single pool of storage instead of separate “islands”. For datacenters, StarWind works both in its software form, as a “data mover” that creates a virtual shared storage pool, and as complete “ready nodes” for HCI or storage-only infrastructure. It also supports non-virtualized Windows Server environments, properly supports all common storage protocols, and can provide high-performance shared storage.
In general, StarWind Virtual SAN complements Microsoft Storage Spaces Direct and VMware Virtual SAN rather than competing with them. It fills the gaps in Microsoft- and VMware-based infrastructures, providing the features to fine-tune different types of architectures.
Read the detailed comparison here: https://www.starwindsoftware.com/blog/software-defined-storage-starwind-virtual-san-vs-microsoft-storage-spaces-direct-vs-vmware-virtual-san
We are running a series of tests dedicated solely to the Resilient File System (ReFS – https://msdn.microsoft.com/en-us/library/windows/desktop/hh848060%28v=vs.85%29.aspx).
It is a relatively new proprietary file system, introduced by Microsoft in Windows Server 2012 as a successor to NTFS. Among its advantages over its predecessor, Microsoft lists enhanced protection from data corruption, both common and silent, when provided with redundant storage. ReFS is also aimed at a modern understanding of high capacity and large size: it supports files up to 16 million terabytes and a theoretical maximum volume size of 1 trillion terabytes.
The tests involve workloads typical for virtualization – you can read more about them here. Our purpose is practical: as of Windows Server 2012, the Resilient File System had major issues with virtualization.
In the first part, we are planning to see how ReFS works with the FileIntegrity option on and off. This option is responsible for the data repair process, so it is crucial that it works well under the pressure of random I/O, which dominates virtualization workloads. You can check out the test here.
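The repair idea behind FileIntegrity can be illustrated with a minimal sketch (plain Python; the class name, the CRC32 choice, and the two-copy layout are our simplifying assumptions, not ReFS internals): every block carries a checksum, each read verifies it, and a corrupt block is restored from a redundant copy.

```python
# Minimal checksum-on-read sketch (illustrative; not ReFS internals).
import zlib

class MirroredStore:
    """Two copies of each block, each stored with a CRC32 checksum."""
    def __init__(self):
        self.copies = [{}, {}]           # block_id -> (data, checksum)

    def write(self, block_id, data):
        csum = zlib.crc32(data)
        for copy in self.copies:
            copy[block_id] = (data, csum)

    def read(self, block_id):
        for copy in self.copies:
            data, csum = copy[block_id]
            if zlib.crc32(data) == csum:
                # Silent corruption detected elsewhere? Repair every
                # copy from this known-good one before returning.
                for other in self.copies:
                    other[block_id] = (data, csum)
                return data
        raise IOError("all copies corrupt")

store = MirroredStore()
store.write(7, b"payload")
# Simulate silent corruption of one copy: data changes, checksum doesn't.
_, csum = store.copies[0][7]
store.copies[0][7] = (b"garbage", csum)
assert store.read(7) == b"payload"          # detected and repaired
assert store.copies[0][7][0] == b"payload"  # bad copy restored
```

The sketch also hints at why random I/O is the stress case: every read pays the checksum verification, and every repair is an extra write.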
The second test is dedicated to performance, because there have been reports of ReFS performance troubles supposedly caused by checksumming. We will see whether the reports are true and whether checksumming has anything to do with the problems.
Development of any technology requires ideas, which need a lot of testing before they can actually work. The problem is that testing and POC typically happen before the idea can make any money, so they rely heavily on free and open-source solutions.
There is no decent free SMB3 fileserver. Those available are critically unreliable or simply don't work properly. If you need one right now and there is no other way to acquire it, there is an option: the free Microsoft Hyper-V Server 2012 R2. However, this method violates the license agreement, so it is in no way a permanent solution. You may take the risk and try it once to see whether your idea works, but we still strongly discourage you from repeating this experiment.
In any case, let's see whether you can build an SMB3 fileserver on the free Microsoft Hyper-V Server 2012 R2 and, if it works, go further and see whether we can create a failover fileserver. Check out the following posts and see:
Hyper-V: Free SMB3 File Server
Hyper-V: Free “Shared Nothing” SMB3 Failover File Server
Microsoft has always targeted mainly the SMB and ROBO space, and now they have decided to aim at the enterprise, namely datacenters, Internet Service Providers, cloud hosting, and so on. How well do they fare?
The problem was that, up to Windows Server 2016, Microsoft had no real Software-Defined Storage, meaning it relied completely on SAS hardware. Not much "software-defined" there. Enterprises couldn't live with the poor scalability of SAS, with its cables only several feet long and the literal headache of stretching the infrastructure even as far as the next building.
A shared JBOD Scale-Out File Server (Image Credit: Microsoft)
In Windows Server 2016, Microsoft introduced a technology called Storage Spaces Direct, which is real SDS. What is really good about it is that the same engine can be used for various business sizes. There is no hardware lock-in and there are no distance or topology limitations. Having dropped hardware lock-in in favor of commodity components, the new technology also cuts hardware expenses and the associated management costs. Besides, it is much easier for IT specialists to master one technology than to dig into several different ones. How? Check out the full article here https://blog.starwindsoftware.com/microsoft-storage-spaces-direct/ and see how well S2D fares.
Log-Structured File System is a relatively new idea, and the technology is clearly effective. However, it is a tool specially crafted for virtualization workloads, so it only works in certain cases; it won't work as a general-purpose file system for everyday tasks. The idea came from "transaction logs", which aggregate small random writes into a log and eventually copy them to their "final destination". Later, ZIL (the ZFS Intent Log) adopted the same principle in a file system.
Converged deployment of Storage Spaces Direct for private clouds (Image Credit: Microsoft)
What is good about a Log-Structured File System is that it was literally purpose-built for virtualization. It handles random writes like a marvel, improving performance by an order of magnitude on the same hardware configuration. It also helps avoid the read-modify-write sequence of parity RAID and offers fast failover recovery.
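The core trick can be sketched in a few lines (plain Python; the `LogFS` name and structure are illustrative, not any vendor's implementation): random writes are turned into sequential appends to a log, and an index maps each logical block to the position of its latest version.

```python
# Sketch of log-structured writes: random writes become sequential appends.
# Names and structure are illustrative, not a real file system.

class LogFS:
    def __init__(self):
        self.log = []      # append-only log, written sequentially on disk
        self.index = {}    # logical block -> offset of its latest version

    def write(self, lba, data):
        self.index[lba] = len(self.log)   # old version becomes garbage
        self.log.append(data)             # sequential append: the fast path

    def read(self, lba):
        return self.log[self.index[lba]]

    def live_ratio(self):
        """Fraction of log entries still live; the rest await GC."""
        return len(self.index) / len(self.log)

fs = LogFS()
for lba in (9, 3, 9, 7, 3):        # scattered, repeated random writes
    fs.write(lba, f"v@{lba}")
assert fs.read(9) == "v@9"          # reads follow the index
assert fs.live_ratio() == 3 / 5     # 2 stale entries wait for garbage collection
```

The sketch also shows where the known downsides come from: logically sequential data ends up scattered through the log (hurting sequential reads), and stale versions accumulate until garbage collection reclaims them.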
Most of the problems are associated with sequential reads, garbage collection, and free-space requirements. However, they all have practical solutions. Log-Structured File System is a good idea all in all, but it should be used with caution, because in terms of tasks and workloads it is not for everyone. Would you like to know more about sky-high performance in virtualization? Check out this article here https://blog.starwindsoftware.com/2015/10/26/heres-what-lsfs-wafl-casl-is-about-where-log-structuring-concept-came-from-what-its-good-for-and-why/ .
RAID 5 was great until high-capacity HDDs came into play, but SSDs restored its former glory.
New tech becomes obsolete and forgotten at astonishing speed. However, sometimes it is enough to develop another approach and the old technology becomes relevant again. This happened to RAID 5: HDD capacity grew while spindle speed hit mechanical limits, which eventually made RAID 5 too fragile.
With modern high-capacity HDDs, RAID 5 became unreliable because the array remains in a failure-prone, degraded state for a long time. Seek speed stays the same while capacity grows, so rebuild time grows as well. This raises the risk of a double failure and major data loss by an order of magnitude. Besides, with an Unrecoverable Read Error chance of 1 bit in 10^14–10^15, the risk of a failed rebuild is roughly 50%, which is a disaster.
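The rough 50% figure can be reproduced with back-of-the-envelope arithmetic (a sketch; the array size and disk capacities are assumed for illustration): rebuilding one failed disk means reading every surviving disk in full, and each bit read carries a small chance of an unrecoverable error.

```python
# Back-of-the-envelope: chance of hitting an Unrecoverable Read Error
# during a RAID 5 rebuild. Array geometry is assumed for illustration.
import math

def rebuild_failure_probability(n_disks, disk_bytes, ure_per_bit=1e-14):
    """P(at least one URE while reading all surviving disks in full)."""
    bits_read = (n_disks - 1) * disk_bytes * 8
    # 1 - (1 - p)^n, computed stably via log1p for tiny p
    return 1.0 - math.exp(bits_read * math.log1p(-ure_per_bit))

# 4 x 3 TB consumer drives, URE spec of 1 bit in 10^14:
p = rebuild_failure_probability(4, 3e12)
print(f"{p:.0%}")   # roughly 50% - the "disaster" from the text

# The same array on drives rated 1 in 10^15 is far safer:
print(f"{rebuild_failure_probability(4, 3e12, 1e-15):.0%}")
```

The formula also makes the SSD argument concrete: with smaller capacities and better URE ratings, the exponent shrinks and the rebuild-failure probability collapses toward zero.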
Using SSDs renders RAID 5 immune to these reliability issues, because flash is faster, has an aptitude for random data access, and usually comes in smaller capacities. Interested in how utilizing SSDs renders RAID 5 immune to parity RAID issues? Check out this article here https://blog.starwindsoftware.com/2015/10/26/heres-what-lsfs-wafl-casl-is-about-where-log-structuring-concept-came-from-what-its-good-for-and-why/
It seems that some people still haven't grasped the concept of web search. Too bad; it helps a lot when picking a name for your business…
LACP and MPIO serve the same purpose but work in different ways. The results cannot be exactly the same, so the idea is to compare them in a specific environment. LACP bonds physical ports into a single, bigger channel, while MPIO provides up to 32 paths for the same data. So, which one is better in a Microsoft environment with iSCSI Target and Initiator?
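The scaling difference can be illustrated with a deliberately simplified toy model (plain Python; function names are ours, and real LACP hash policies and MPIO load-balance policies vary): LACP hashes each flow or session onto one physical link, while MPIO spreads the I/O of even a single session across all paths.

```python
# Toy model: why a single iSCSI session doesn't scale over LACP
# but does over MPIO. Simplified on purpose; real hash policies vary.

LINKS = 4          # physical 1 Gb ports
LINK_GBPS = 1.0

def lacp_throughput(n_sessions):
    """LACP pins each session (flow) to one link via a hash."""
    per_link = [0] * LINKS
    for s in range(n_sessions):
        per_link[s % LINKS] += 1      # stand-in for the frame hash
    # Total usable bandwidth is limited to the links actually chosen.
    used = sum(1 for n in per_link if n)
    return used * LINK_GBPS

def mpio_throughput(n_sessions):
    """MPIO round-robins I/O of even one session across all paths."""
    return LINKS * LINK_GBPS if n_sessions else 0.0

print(lacp_throughput(1))   # 1.0 - one flow rides one 1 Gb link
print(mpio_throughput(1))   # 4.0 - one session uses all four paths
```

In other words, a single iSCSI session over LACP never exceeds one link's speed, no matter how many ports are bonded, which is exactly the behavior the test below exposes.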
We actually tested everything on a suitable setup with verified network throughput:
Client: 1x server, Intel Core i7, 16 GB RAM, 1 TB SATA HDD for OS, 2x dual-port 1 Gb NICs.
Server: 1x server, Intel Core i7, 16 GB RAM, 1 TB SATA HDD for OS, 8x 250 GB SSDs in RAID 0 for storage, 2x dual-port 1 Gb NICs.
RAID controller: LSI MR936I-8i. NICs: Intel PRO/1000 PT Dual Port.
Both technologies did their job, but one seemed to "win" the competition.
LACP has a serious drawback: it still does not support Multiple Connections per Session (MCS) in the Microsoft case, so it does not scale. Therefore, LACP gives no performance boost here unless you use MPIO alongside it. Would you like to know more? Check out the whole article with the testing process here https://blog.starwindsoftware.com/2015/03/31/lacp-vs-mpio-on-windows-platform-which-one-is-better-in-terms-of-redundancy-and-speed-in-this-case/ .
Some "wise guy" said he could store Hyper-V virtual machines on an NFS share. Sounds really convenient, but why hadn't anyone thought of that before? The problem is, they did, and they all failed. The abovementioned "wise guy" never actually told anyone how he managed it. Our curiosity outweighed everything else, so here we are, trying to do what everyone considers impossible but one "hero" claims is real.
We decided to try three different approaches – as many as there are, in fact. The first is simply creating a VM on the NFS share. The second is moving a VM previously created elsewhere to the NFS share. The third is manually copying a VM previously created elsewhere to the NFS share. If you know another way, please contact us.
Read this article here https://blog.starwindsoftware.com/2015/01/14/hyper-v-vms-on-nfs-share-why-hasnt-anyone-thought-of-that-earlier-they-did-in-fact/ and see how we failed gloriously, or actually won, in our own opinion. An NFS share was never meant to store Hyper-V virtual machines, and it won't. If anyone tells you otherwise, demand proof and send it to us. Because, you know, "wise people" talk a lot about their own genius but often fail to back their words up with facts.