What's New in File and Storage Services in Windows Server Technical Preview

Updated: October 2014

Applies To: Windows Server Technical Preview



This topic explains the new and changed functionality in File and Storage Services in Windows Server Technical Preview.

Storage Quality of Service

You can now use storage quality of service (QoS) to centrally monitor end-to-end storage performance and create policies using Hyper-V and Scale-Out File Servers in Windows Server Technical Preview.

What value does this change add?

You can now create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks on Hyper-V virtual machines. Storage performance is automatically readjusted to meet policies as the storage load fluctuates.

  • Each policy specifies a reserve (minimum) and a limit (maximum) to be applied to a collection of data flows, such as a virtual hard disk, a single virtual machine or a group of virtual machines, a service, or a tenant.

  • Using Windows PowerShell or WMI, you can perform the following tasks (a sketch of the workflow follows this list):

    • Create policies on a Scale-Out File Server.

    • Enumerate policies available on a Scale-Out File Server.

    • Assign a policy to a virtual hard disk on a server running Hyper-V.

    • Monitor the performance and status of each flow within the policy.

  • If multiple virtual hard disks share the same policy, performance is fairly distributed to meet demand within the policy minimum and maximum. Therefore, a policy can be used to represent a virtual machine, multiple virtual machines comprising a service, or all virtual machines owned by a tenant.
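
As a minimal sketch of that workflow, the following Windows PowerShell commands use the cmdlet names from the Storage QoS preview documentation (New-StorageQosPolicy, Get-StorageQosPolicy, Set-VMHardDiskDrive, Get-StorageQosFlow); the policy name Gold and the VM name VM01 are placeholders, and exact cmdlet and parameter names may differ between preview builds:

    # Run the policy cmdlets on a node of the Scale-Out File Server cluster.
    # Create a policy with a reserve of 100 IOPS and a limit of 1000 IOPS.
    $policy = New-StorageQosPolicy -Name Gold -MinimumIops 100 -MaximumIops 1000

    # Enumerate the policies available on the Scale-Out File Server.
    Get-StorageQosPolicy

    # On the server running Hyper-V, assign the policy to each virtual hard
    # disk of a virtual machine.
    Get-VM -Name VM01 | Get-VMHardDiskDrive |
        Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

    # Monitor the performance and status of each flow within the policy.
    Get-StorageQosFlow -InitiatorName VM01 |
        Format-Table InitiatorName, Status, MinimumIops, MaximumIops -AutoSize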

What works differently?

This capability is new in Windows Server Technical Preview. It was not possible to configure centralized policies for storage QoS in previous releases of Windows Server.

For more information, see the Windows Server Technical Preview Step-by-Step Guide for Storage Quality of Service.

Storage Replica

Storage Replica (SR) is a new feature that enables storage-agnostic, block-level, synchronous replication between servers for disaster recovery, as well as stretching of a failover cluster for high availability. Synchronous replication enables mirroring of data in physical sites with crash-consistent volumes, ensuring zero data loss at the file system level. Asynchronous replication allows site extension beyond metropolitan ranges with the possibility of data loss.

What value does this change add?

Storage Replica enables you to do the following (a minimal PowerShell sketch follows this list):

  • Provide an all-Microsoft disaster recovery solution for planned and unplanned outages of mission-critical workloads.

  • Use SMB3 transport with proven reliability, scalability, and performance.

  • Stretch clusters to metropolitan distances.

  • Use Microsoft software end to end for storage and clustering, such as Hyper-V, Storage Replica, Storage Spaces, Cluster, Scale-Out File Server, SMB3, Deduplication, and ReFS/NTFS.

  • Help reduce cost and complexity as follows:

    • Is hardware agnostic, with no requirement for a specific storage configuration like DAS or SAN.

    • Allows commodity storage and networking technologies.

    • Features ease of graphical management for individual nodes and clusters through Failover Cluster Manager.

    • Includes comprehensive, large-scale scripting options through Windows PowerShell.

  • Help reduce downtime, and increase reliability and productivity intrinsic to Windows.

  • Provide supportability, performance metrics, and diagnostic capabilities.
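
As a minimal, hedged sketch of setting up server-to-server replication, the commands below follow the cmdlet names in the Storage Replica preview documentation (New-SRPartnership, Get-SRGroup); the computer names SR-SRV01 and SR-SRV02, the replication group names, and the volume letters are placeholders, and names may differ between preview builds:

    # Create a replication partnership: replicate data volume F: on SR-SRV01
    # to volume F: on SR-SRV02, with volume G: holding the replication logs
    # on each server.
    New-SRPartnership -SourceComputerName SR-SRV01 -SourceRGName RG01 `
        -SourceVolumeName F: -SourceLogVolumeName G: `
        -DestinationComputerName SR-SRV02 -DestinationRGName RG02 `
        -DestinationVolumeName F: -DestinationLogVolumeName G:

    # Check the state of replication from the source server.
    (Get-SRGroup).Replicas | Select-Object DataVolume, ReplicationStatus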

For more information, see the Windows Server Technical Preview Step-by-Step Guide for Storage Replica.

What works differently?

This capability is new in Windows Server Technical Preview.

Data Deduplication

This section describes changes in the Data Deduplication feature in Windows Server Technical Preview, including integrated support for virtualized backup workloads and major performance improvements to better support large-scale deployments.

The following list describes the changes in Data Deduplication functionality in Windows Server Technical Preview; each change is discussed in more detail below:

  • Integrated support for virtualized backup workloads (New): Virtualized backup has been added as a new usage type with integrated, tuned configuration settings to simplify deployment.

  • Support for Cluster Rolling Upgrades (New): Deduplication can run in a cluster whose nodes run a mix of Windows Server 2012 R2 and Windows Server Technical Preview, providing full access to deduplicated volumes during a cluster rolling upgrade.

  • Improved optimization throughput for large volumes (Updated): Deduplication optimization processing is now multithreaded and can use multiple CPUs per volume to increase optimization throughput on volumes up to 64 TB in size.

  • Improved performance for very large files (Updated): Deduplication of large files (100 GB up to 1 TB) is vastly improved, with faster optimization throughput, better access performance, and the ability to resume (rather than restart) optimization of a large file after a failover.

Integrated support for virtualized backup workloads

What value does this change add?

Integrating virtualized backup workload support with deduplication allows administrators to easily deploy deduplicated volumes for storing backup data, enabling considerable storage cost savings by keeping only highly optimized, deduplicated versions of that backup data.

What works differently?

In Windows Server 2012 R2, the officially supported workloads for deduplication were the general file server and Virtual Desktop Infrastructure (VDI) workload types. Starting with the Windows Server 2012 R2 November 2014 update, support for deduplication of virtualized backup workloads was added, but it required manual configuration and tuning. Windows Server Technical Preview integrates virtualized backup as a new usage type, automating the configuration and tuning to allow for greatly simplified deployment via the Server Manager GUI.
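
As a hedged sketch, assuming the new usage type is also exposed through the -UsageType parameter of Enable-DedupVolume (the volume letter E: is a placeholder):

    # Enable deduplication on a volume that stores virtualized backup data,
    # applying the tuned configuration settings for that workload.
    Enable-DedupVolume -Volume E: -UsageType Backup

    # Confirm the volume's deduplication settings.
    Get-DedupVolume -Volume E: | Format-List Volume, Enabled, UsageType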

Support for Cluster Rolling Upgrades

What value does this change add?

Windows Failover Clusters running deduplication can have a mix of nodes running Windows Server 2012 R2 versions of deduplication alongside nodes running Windows Server Technical Preview versions of deduplication. This enhancement provides full data access to all deduplicated volumes during a cluster rolling upgrade, allowing for the gradual rollout of the new version of deduplication on an existing Windows Server 2012 R2 cluster without incurring downtime to upgrade all nodes at once.

What works differently?

With previous versions of Windows Server, a Windows Failover Cluster required all nodes in the cluster to be at the exact same Windows Server version. Starting with the Windows Server Technical Preview, the cluster rolling upgrade functionality allows a cluster to run in a mixed-mode. Deduplication supports this new mixed-mode cluster configuration to enable full data access during a cluster rolling upgrade.
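
As a hedged sketch of what mixed mode looks like from Windows PowerShell, assuming the failover clustering cmdlets described in the cluster rolling upgrade preview documentation:

    # While nodes are upgraded one at a time, the reported node versions differ.
    Get-ClusterNode | Format-Table Name, State, MajorVersion, MinorVersion -AutoSize

    # The cluster functional level stays at the down-level value until the
    # upgrade is committed.
    Get-Cluster | Select-Object Name, ClusterFunctionalLevel

    # After every node runs the new version, commit the upgrade; deduplication
    # optimization jobs can then run on all nodes.
    Update-ClusterFunctionalLevel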

Note
Although both Windows versions of deduplication can access the optimized data, the optimization jobs will only run on the Windows Server 2012 R2 nodes and be blocked from running on the Windows Server Technical Preview nodes until the cluster rolling upgrade is complete.

Improved optimization throughput for large volumes

What value does this change add?

With a number of improvements in the optimization algorithm, deduplication can now leverage multiple processors to optimize data at a much faster rate on a single volume, reducing the time required to optimize data changes. This enables the use of very large volumes (up to 64 TB).

Supporting very large volumes allows administrators to simplify deduplication volume management by allowing consolidation of many small deduplication volumes into a single large one.

What works differently?

In Windows Server 2012 R2, deduplication optimization jobs were limited to using a single processor per volume. With Windows Server Technical Preview, multiple processors are used to optimize data on deduplication-enabled volumes.
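
For example, a single optimization job started as shown below can now use multiple CPUs on its volume; this sketch assumes no new per-job switch is required and uses E: as a placeholder volume:

    # Start an optimization job on a large deduplication-enabled volume.
    # In the Technical Preview, the job parallelizes across CPUs automatically.
    Start-DedupJob -Volume E: -Type Optimization -Priority High -Memory 50

    # Watch the progress of the running job.
    Get-DedupJob -Volume E: | Format-Table Type, State, Progress, Volume -AutoSize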

Improved performance for very large files

What value does this change add?

Improving performance on very large files allows administrators to apply deduplication savings to a larger range of workloads. For example, they can enable deduplication of the very large files normally associated with backup workloads.

What works differently?

In Windows Server Technical Preview, deduplication of large files (100 GB up to 1 TB) is vastly improved, with faster optimization throughput, better access performance, and the ability to resume (rather than restart) optimization of a large file after a failover.
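
As a hedged illustration, after optimization completes you can inspect how many files were optimized and how much space was saved on a volume holding large backup files (E: is a placeholder):

    # Report deduplication results for the volume.
    Get-DedupStatus -Volume E: |
        Format-List Volume, OptimizedFilesCount, OptimizedFilesSize, SavedSpace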
