We are running a series of tests dedicated to the Resilient File System (ReFS – https://msdn.microsoft.com/en-us/library/windows/desktop/hh848060%28v=vs.85%29.aspx).
It is a relatively new proprietary file system, introduced by Microsoft in Windows Server 2012 as a successor to NTFS. Among its advantages over its predecessor, Microsoft lists enhanced protection from data corruption, both common and silent, when provided with redundant storage. ReFS is also built for modern notions of capacity: it supports files up to 16 million terabytes (16 EB) and a theoretical maximum volume size of 1 trillion terabytes (1 YB).
The tests use workloads typical for virtualization – you can read more about them here. Our purpose is practical: as of Windows Server 2012, ReFS has major issues with virtualization.
In the first part, we plan to see how ReFS works with the FileIntegrity option on and off. This option controls checksumming and the data repair process, so it is crucial that it holds up under the pressure of random I/O, which dominates virtualization workloads. You can check out the test here.
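To illustrate the concept behind integrity streams (this is a toy model, not how ReFS is actually implemented): each block is stored alongside a checksum that is verified on every read, and when redundant storage is available, a corrupt copy can be repaired automatically from a good one. A minimal Python sketch:

```python
import zlib

class IntegrityStore:
    """Toy model of per-block checksums with repair from a mirror copy.
    Illustrative only -- not ReFS internals."""

    def __init__(self):
        self.blocks = {}   # block number -> (data, crc32)
        self.mirror = {}   # redundant copy, as on mirrored Storage Spaces

    def write(self, n, data):
        crc = zlib.crc32(data)
        self.blocks[n] = (data, crc)
        self.mirror[n] = (data, crc)

    def read(self, n):
        data, crc = self.blocks[n]
        if zlib.crc32(data) == crc:
            return data
        # Checksum mismatch: silent corruption detected.
        # Try to repair from the redundant copy if it still verifies.
        good, good_crc = self.mirror[n]
        if zlib.crc32(good) == good_crc:
            self.blocks[n] = (good, good_crc)  # automatic repair
            return good
        raise IOError(f"block {n}: both copies corrupt")

store = IntegrityStore()
store.write(0, b"hello")
store.blocks[0] = (b"hellx", store.blocks[0][1])  # simulate silent bit rot
assert store.read(0) == b"hello"  # corruption detected, repaired from mirror
```

The sketch also hints at the cost side: every read pays for a checksum calculation, which is exactly the overhead the next test looks at.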
The second test is dedicated to performance, because there have been reports of ReFS performance troubles apparently caused by checksumming. We are about to see whether the reports are true and whether checksumming actually has anything to do with the problems.
Log-Structured File System is a relatively new idea and the technology is clearly effective. However, it is a tool specially crafted for virtualization workloads, so it only works in certain cases; it won’t work out as a common file system for everyday tasks. The idea comes from “transaction logs”, which aggregate small random writes into a sequential log and eventually copy them to their “final destination”. The ZFS Intent Log (ZIL) later applied the same principle inside a file system.
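The principle can be sketched in a few lines (a hypothetical toy store, not any shipping implementation): every write, no matter how random its logical address, is appended sequentially to the log, while an in-memory index maps each logical block to its latest position.

```python
class LogStructuredStore:
    """Toy log-structured store: all writes become sequential appends;
    an index maps logical block numbers to offsets in the log."""

    BLOCK = 4096

    def __init__(self):
        self.log = bytearray()  # the append-only log
        self.index = {}         # logical block -> offset of latest version

    def write(self, block_no, data):
        assert len(data) == self.BLOCK
        self.index[block_no] = len(self.log)  # latest version wins
        self.log += data                      # sequential append

    def read(self, block_no):
        off = self.index[block_no]
        return bytes(self.log[off:off + self.BLOCK])

s = LogStructuredStore()
# Random writes to scattered blocks land back-to-back in the log:
for b in (7, 2, 9, 2):
    s.write(b, bytes([b]) * LogStructuredStore.BLOCK)
assert s.read(2) == b"\x02" * 4096  # the latest write to block 2
# Note the stale first copy of block 2 still sits in the log --
# reclaiming such garbage is what a real LSFS cleaner has to do.
```

The stale copies left behind are exactly why garbage collection and free space become the pain points discussed below.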
Converged deployment of Storage Spaces Direct for private clouds (Image Credit: Microsoft)
What is good about a Log-Structured File System is that it was literally purpose-built for virtualization. It handles random writes like a marvel, improving performance by an order of magnitude on the same hardware configuration. It also helps avoid the read-modify-write sequence for parity RAID and offers fast failover recovery.
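As an illustration of the parity point (a simplified RAID-5-style sketch): updating one block in place forces a read-modify-write, because the old data and old parity must be read back to recompute the parity. By aggregating writes into full stripes, a log-structured layout can compute parity purely from the new data it already holds:

```python
def xor(*blocks):
    # Bytewise XOR of equal-length blocks (how parity is computed).
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# A stripe of three data blocks plus one parity block.
d0, d1, d2 = b"\x01" * 4, b"\x02" * 4, b"\x03" * 4
parity = xor(d0, d1, d2)

# In-place update of d1 (read-modify-write): we must READ the
# old d1 and the old parity before we can write the new parity.
new_d1 = b"\x07" * 4
parity_rmw = xor(parity, d1, new_d1)

# Full-stripe write, which log aggregation makes possible:
# parity is computed from new data alone, with no reads at all.
parity_full = xor(d0, new_d1, d2)
assert parity_rmw == parity_full  # same parity, minus the extra I/O
```

Both paths end with identical parity; the difference is that the full-stripe path skips the two extra reads per small write, which is where the performance gain on parity RAID comes from.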
Most of the problems are associated with sequential reads, garbage collection, and free-space requirements. However, they all have practical solutions. All in all, Log-Structured File System is a good idea, but it should be used with caution, because in terms of tasks and workloads it is not for everyone. Would you like to know more about sky-high performance in virtualization? Check out this article: https://blog.starwindsoftware.com/2015/10/26/heres-what-lsfs-wafl-casl-is-about-where-log-structuring-concept-came-from-what-its-good-for-and-why/ .