What's New in File and Storage Services in Windows Server 2016 Technical Preview


Updated: December 7, 2015

Applies To: Windows Server Technical Preview

This topic explains the new and changed functionality in Storage Services in Windows Server 2016 Technical Preview.

  • Storage Spaces Direct

  • Storage Replica

  • Storage Quality of Service

  • Deduplication

Storage Spaces Direct

Storage Spaces Direct enables building highly available (HA) storage systems with local storage. It simplifies the deployment and management of software-defined storage systems and also unlocks use of new classes of disk devices, such as SATA and NVMe disk devices, that were previously not possible with clustered Storage Spaces with shared disks.

What value does this change add?

With Windows Server 2016 Technical Preview Storage Spaces Direct, you can now build HA storage systems using storage nodes with only local storage: either disk devices that are internal to each storage node, or disk devices in JBODs where each JBOD is connected to only a single storage node. This not only eliminates the need for a shared SAS fabric and its complexities, but also enables the use of devices such as SATA disks, which can further reduce cost or improve performance.
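As a sketch, a Storage Spaces Direct deployment can be stood up with a few PowerShell commands. The node, cluster, pool, and volume names below are hypothetical, and the exact cmdlet surface may differ between Technical Preview builds:

```powershell
# Create a cluster with no shared storage (node names are hypothetical).
New-Cluster -Name S2D-Cluster -Node Node01,Node02,Node03,Node04 -NoStorage

# Enable Storage Spaces Direct so the cluster claims each node's local
# (SATA/NVMe) disk devices.
Enable-ClusterStorageSpacesDirect

# Pool all disks that are eligible for pooling, then create a resilient volume.
New-StoragePool -StorageSubSystemFriendlyName "*Cluster*" -FriendlyName S2D-Pool `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-Volume -StoragePoolFriendlyName S2D-Pool -FriendlyName VDisk01 `
    -FileSystem CSVFS_ReFS -Size 1TB
```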

For more information, see Storage Spaces Direct in Windows Server 2016 Technical Preview.

What works differently?

This capability is new in Windows Server 2016 Technical Preview.

Storage Replica

Storage Replica (SR) is a new feature that enables storage-agnostic, block-level, synchronous replication between servers or clusters for disaster recovery, as well as stretching of a failover cluster between sites. Synchronous replication enables mirroring of data in physical sites with crash-consistent volumes to ensure zero data loss at the file-system level. Asynchronous replication allows site extension beyond metropolitan ranges with the possibility of data loss.

What value does this change add?

Storage Replica enables you to do the following:

  • Provide a single vendor disaster recovery solution for planned and unplanned outages of mission critical workloads.

  • Use SMB3 transport with proven reliability, scalability, and performance.

  • Stretch Windows failover clusters to metropolitan distances.

  • Use Microsoft software end to end for storage and clustering, such as Hyper-V, Storage Replica, Storage Spaces, Cluster, Scale-Out File Server, SMB3, Deduplication, and ReFS/NTFS.

  • Help reduce cost and complexity as follows:

    • Is hardware agnostic, with no requirement for a specific storage configuration like DAS or SAN.

    • Allows commodity storage and networking technologies.

    • Features ease of graphical management for individual nodes and clusters through Failover Cluster Manager.

    • Includes comprehensive, large-scale scripting options through Windows PowerShell.

  • Help reduce downtime, and increase reliability and productivity intrinsic to Windows.

  • Provide supportability, performance metrics, and diagnostic capabilities.
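A server-to-server partnership can be sketched with the Storage Replica cmdlets as follows. The computer names, replication group names, and volume letters are hypothetical; validating the topology first is recommended because synchronous replication depends on low-latency networking between the sites:

```powershell
# Validate the proposed replication topology and produce a report.
Test-SRTopology -SourceComputerName SR-SRV01 -SourceVolumeName D: `
    -SourceLogVolumeName E: -DestinationComputerName SR-SRV02 `
    -DestinationVolumeName D: -DestinationLogVolumeName E: `
    -DurationInMinutes 30 -ResultPath C:\Temp

# Create a synchronous block-level replication partnership between the servers.
New-SRPartnership -SourceComputerName SR-SRV01 -SourceRGName RG01 `
    -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SR-SRV02 -DestinationRGName RG02 `
    -DestinationVolumeName D: -DestinationLogVolumeName E:
```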

For more information, see Storage Replica in Windows Server 2016 Technical Preview.

What works differently?

This capability is new in Windows Server 2016 Technical Preview.

Storage Quality of Service

You can now use storage quality of service (QoS) to centrally monitor end-to-end storage performance and create policies using Hyper-V and Scale-Out File Servers in Windows Server 2016 Technical Preview.

What value does this change add?

You can now create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks on Hyper-V virtual machines. Storage performance is automatically readjusted to meet policies as the storage load fluctuates.

  • Each policy specifies a reserve (minimum) and a limit (maximum) to be applied to a collection of data flows, such as a virtual hard disk, a single virtual machine or a group of virtual machines, a service, or a tenant.

  • Using Windows PowerShell or WMI, you can perform the following tasks:

    • Create policies on a Scale-Out File Server.

    • Enumerate policies available on a Scale-Out File Server.

    • Assign a policy to a virtual hard disk on a server running Hyper-V.

    • Monitor the performance of each flow and status within the policy.

  • If multiple virtual hard disks share the same policy, performance is fairly distributed to meet demand within the policy minimum and maximum. Therefore, a policy can be used to represent a virtual machine, multiple virtual machines comprising a service, or all virtual machines owned by a tenant.
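The tasks above can be sketched in PowerShell. The policy name, IOPS values, and virtual machine name below are hypothetical, and the flow properties shown are a subset of what the cmdlets report:

```powershell
# On the Scale-Out File Server cluster: create a policy with a reserve
# (minimum) and a limit (maximum).
$policy = New-StorageQosPolicy -Name GoldTier -MinimumIops 500 -MaximumIops 5000

# Enumerate the policies available on the Scale-Out File Server.
Get-StorageQosPolicy

# On the Hyper-V host: assign the policy to a VM's virtual hard disks.
Get-VM -Name VM01 | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Monitor the performance and status of each flow within the policy.
Get-StorageQosFlow | Format-Table InitiatorName, InitiatorIOPS, Status
```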

What works differently?

This capability is new in Windows Server 2016 Technical Preview. It was not possible to configure centralized policies for storage QoS in previous releases of Windows Server.

For more information, see Storage Quality of Service.

Deduplication

This section describes changes in the Data Deduplication feature in Windows Server 2016 Technical Preview, including integrated support for virtualized backup workloads and major performance improvements to better support large-scale deployments. For more information about Data Deduplication, see Data Deduplication Overview.

The following list describes the changes in Data Deduplication functionality in Windows Server 2016 Technical Preview:

  • Integrated support for virtualized backup workloads (New): Virtualized backup has been added as a new usage type with integrated, tuned configuration settings to simplify deployment.

  • Support for Cluster Rolling Upgrades (New): Deduplication can run in a cluster with nodes running a mix of Windows Server 2012 R2 and Windows Server 2016 Technical Preview, providing full access to deduplicated volumes during a cluster rolling upgrade.

  • Support for Nano Server (New): Deduplication is now supported in the Nano Server installation option for Windows Server Technical Preview.

  • Improved optimization throughput for large volumes (Updated): Deduplication optimization processing is now multithreaded and can use multiple CPUs per volume to increase optimization throughput rates on volumes up to 64 TB.

  • Improved performance of very large files (Updated): Deduplication performance on large files (100 GB up to 1 TB) is vastly improved, with faster optimization throughput, better access performance, and the ability to resume optimization of large files (rather than restart it) after failover.

Integrated support for virtualized backup workloads

What value does this change add?

Integrating virtualized backup workload support with deduplication allows administrators to easily deploy deduplicated volumes for saving backup data, enabling considerable storage cost savings by storing highly optimized deduplicated versions of their backup data.

What works differently?

In Windows Server 2012 R2, the officially supported workloads for deduplication were the general file server and Virtual Desktop Infrastructure (VDI) workload types. Starting with the Windows Server 2012 R2 November 2014 update, support for deduplication of virtualized backup workloads was added, but it required manual configuration and tuning. Windows Server 2016 Technical Preview integrates virtualized backup as a new usage type, automating the configuration and tuning to allow for greatly simplified deployment via the Server Manager GUI.
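Enabling the new usage type can be sketched with the Deduplication cmdlets as follows (the volume letter is hypothetical):

```powershell
# Enable deduplication with the tuned Backup usage type on volume E:.
Enable-DedupVolume -Volume E: -UsageType Backup

# After optimization jobs have run, review the savings on the volume.
Get-DedupStatus -Volume E: | Format-List FreeSpace, SavedSpace, OptimizedFilesCount
```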

Support for Cluster Rolling Upgrades

What value does this change add?

Windows Failover Clusters running deduplication can have a mix of nodes running Windows Server 2012 R2 versions of deduplication alongside nodes running Windows Server 2016 Technical Preview versions of deduplication. This enhancement provides full data access to all deduplicated volumes during a cluster rolling upgrade, allowing for the gradual rollout of the new version of deduplication on an existing Windows Server 2012 R2 cluster without incurring downtime to upgrade all nodes at once.

What works differently?

With previous versions of Windows Server, a Windows Failover Cluster required all nodes in the cluster to be at the exact same Windows Server version. Starting with the Windows Server 2016 Technical Preview, the cluster rolling upgrade functionality allows a cluster to run in a mixed-mode. Deduplication supports this new mixed-mode cluster configuration to enable full data access during a cluster rolling upgrade.

Note

Although both Windows versions of deduplication can access the optimized data, the optimization jobs will only run on the Windows Server 2012 R2 nodes and be blocked from running on the Windows Server 2016 Technical Preview nodes until the cluster rolling upgrade is complete.

Support for Nano Server

What value does this change add?

Nano Server is a new installation option in Windows Server Technical Preview that provides a cloud-optimized Windows Server environment. Data deduplication is fully supported in Nano Server. For more information about Nano Server, see Getting Started with Nano Server.

What works differently?

Nano Server is a headless deployment option for Windows Server 2016 Technical Preview providing a deeply refactored and reduced environment optimized for cloud deployments. Data deduplication has been tuned and validated to operate in the Nano Server environment.

Note

Currently, in Windows Server 2016 Technical Preview, deduplication support for Nano Server has the following restrictions:

  • Support for Nano Server has only been validated in non-clustered configurations.

  • A deduplication job can only be canceled manually, by using the Stop-DedupJob PowerShell cmdlet.
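For example, a running job on a hypothetical volume E: could be started, inspected, and then canceled like this:

```powershell
# Start a manual optimization job on the volume.
Start-DedupJob -Volume E: -Type Optimization

# Inspect the job's progress, then cancel it manually (the only supported
# cancellation path on Nano Server in the Technical Preview).
Get-DedupJob -Volume E:
Stop-DedupJob -Volume E:
```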

Improved optimization throughput for large volumes

What value does this change add?

With a number of improvements in the optimization algorithm, deduplication can now leverage multiple processors to optimize data at a much faster rate on a single volume, reducing the time required to optimize data changes. This enables the use of very large volumes (up to 64 TB).

Supporting very large volumes allows administrators to simplify deduplication volume management by allowing consolidation of many small deduplication volumes into a single large one.

What works differently?

In Windows Server 2012 R2, deduplication optimization jobs were limited to using a single processor per volume. With Windows Server 2016 Technical Preview, multiple processors are used to optimize data on deduplication-enabled volumes.

Improved performance of very large files

What value does this change add?

Improving performance on very large files allows administrators to apply deduplication savings to a larger range of workloads, such as the very large files normally associated with backup workloads.

What works differently?

In Windows Server 2016 Technical Preview, deduplication performance on large files (100 GB up to 1 TB) is vastly improved: faster optimization throughput, better access performance, and the ability to resume optimization of large files (rather than restart it) after failover.

For more information about what's new in deduplication, see Data Deduplication in Windows Server Technical Preview 4.

See also

What's New in Windows Server 2016 Technical Preview 4