ReFS V2 Research Overview (Windows Server 2016)

We are running a series of tests dedicated to the Resilient File System (ReFS) – a relatively new proprietary file system introduced by Microsoft in Windows Server 2012 as a successor to NTFS. Among its advantages over its predecessor, Microsoft lists enhanced protection against data corruption, both common and silent, when the file system is backed by redundant storage. ReFS is also built with modern notions of scale in mind: it supports files of up to 16 million terabytes (16 exabytes) and a theoretical maximum volume size of 1 trillion terabytes (1 yottabyte).

The tests focus on workloads typical of virtualization – you can read more about them here. Our purpose is practical: as of Windows Server 2012, ReFS had major issues with virtualization.

In the first part, we plan to see how ReFS behaves with the FileIntegrity option turned on and off. This option controls the data repair process, so it is crucial that it holds up under the pressure of random I/O, which dominates virtualization workloads. You can check out the test here.
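To make the repair idea concrete, here is a minimal Python sketch of how checksummed blocks plus a redundant (mirrored) copy let silent corruption be detected and healed on read. This is a toy model of the general technique, not ReFS internals; all names (`MirroredStore`, `BLOCK`) are invented for illustration.

```python
import zlib

BLOCK = 4096  # bytes per block in this toy model (assumption)

class MirroredStore:
    """Toy model: checksummed blocks with a redundant mirror copy,
    loosely mimicking what integrity streams do on resilient storage."""

    def __init__(self, nblocks):
        self.primary = [bytes(BLOCK) for _ in range(nblocks)]
        self.mirror = [bytes(BLOCK) for _ in range(nblocks)]
        # the checksum is stored separately from the data it covers
        self.sums = [zlib.crc32(bytes(BLOCK))] * nblocks
        self.repairs = 0

    def write(self, i, data):
        assert len(data) == BLOCK
        self.primary[i] = data
        self.mirror[i] = data
        self.sums[i] = zlib.crc32(data)

    def read(self, i):
        if zlib.crc32(self.primary[i]) != self.sums[i]:
            # silent corruption detected: repair from the mirror copy
            self.primary[i] = self.mirror[i]
            self.repairs += 1
        return self.primary[i]

store = MirroredStore(8)
store.write(3, b"x" * BLOCK)
store.primary[3] = b"y" * BLOCK       # simulate silent bit rot on the primary
assert store.read(3) == b"x" * BLOCK  # the read detects the mismatch and self-heals
print("repairs performed:", store.repairs)  # -> 1
```

Note that the repair is only possible because a second, intact copy exists – which is why Microsoft ties the stronger corruption guarantees to redundant storage.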

The second test is dedicated to performance, because there have been reports of ReFS performance troubles apparently caused by checksumming (hashsumming). We are going to see whether those reports are true and whether checksumming has anything to do with the problems.
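Why might checksumming hurt random-I/O performance in the first place? One common intuition is write amplification: if a checksum covers a region larger than a typical random write, updating any part of that region forces a read-modify-rehash-write of the whole thing. The Python sketch below is a toy model under assumed sizes (64 KB checksum granularity, 4 KB writes); it is not a description of how ReFS actually implements its checksums.

```python
import zlib

CLUSTER = 64 * 1024  # checksum granularity in this toy model (assumption)
WRITE = 4 * 1024     # typical random-write size under VM workloads

def checksummed_random_write(cluster, offset, data):
    """Apply a small write inside a checksummed cluster.

    Because the checksum covers the whole cluster, a small in-place
    update becomes: read whole cluster -> modify -> rehash -> write
    whole cluster back.
    """
    assert offset + len(data) <= len(cluster)
    updated = cluster[:offset] + data + cluster[offset + len(data):]
    return updated, zlib.crc32(updated)

cluster = bytes(CLUSTER)
cluster, checksum = checksummed_random_write(cluster, 8192, b"v" * WRITE)

# I/O amplification: bytes actually moved vs. bytes the application wrote
amplification = (2 * CLUSTER) / WRITE  # read 64 KB + write 64 KB for a 4 KB update
print(f"amplification: {amplification:.0f}x")  # -> amplification: 32x
```

If something like this effect is in play, a workload dominated by small random writes would suffer far more than a sequential one – which is exactly what the performance test is designed to check.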