Hyper-V Quality of Service (QoS)

Updated: October 17, 2012

Applies To: Windows Server 2012

Windows Server 2012 includes new Quality of Service (QoS) bandwidth management features that enable cloud hosting providers and enterprises to provide services that deliver predictable network performance to virtual machines on a server running Hyper-V. In Windows Server 2012, QoS supports the management of upper-allowed and lower-allowed bandwidth limits, referred to in this document as maximum bandwidth and minimum bandwidth. Windows Server 2012 also takes advantage of data center bridging (DCB)-capable hardware to converge multiple types of network traffic on a single network adapter with a guaranteed level of service to each type. With Windows PowerShell, you can configure all these new features manually or enable automation in a script to manage a group of servers, regardless of whether they stand alone or are joined to a domain.

For example, cloud hosting providers want to use servers running Hyper-V to host customers and still guarantee a specific level of performance based on service level agreements (SLAs). They want to ensure that no customer is impacted or compromised by other customers on their shared infrastructure, which includes computing, storage, and network resources. Likewise, enterprises have similar requirements. They want to run multiple application servers on a server running Hyper-V and be confident that each application server delivers predictable performance. Lack of performance predictability often drives administrators to put fewer virtual machines on a capable server or simply avoid virtualization, causing them to spend more money on physical equipment and infrastructure.

Furthermore, most cloud hosting providers and enterprises today use a dedicated network adapter and a dedicated subnet for a specific type of workload such as storage or live migration to achieve network performance isolation on a server running Hyper-V. Although this deployment strategy works for those using 1-gigabit Ethernet network adapters, it becomes impractical for those who are using or plan to use 10-gigabit Ethernet network adapters. Not only does one 10-gigabit Ethernet network adapter (or two for high availability) already provide sufficient bandwidth for all the workloads on a server running Hyper-V in most deployments, but 10-gigabit Ethernet network adapters and switches are considerably more expensive than their 1-gigabit Ethernet counterparts. To best utilize 10-gigabit Ethernet hardware, a server running Hyper-V requires new capabilities to manage bandwidth.

Requirements

Every edition of Windows Server 2012 includes the new QoS functionality. Minimum bandwidth that is enforced by the packet scheduler can always be enabled, and it works well on both 1-gigabit and 10-gigabit Ethernet network adapters. We do not recommend that you enable QoS in Windows Server 2012 when it is running as a virtual machine within a virtualized environment. QoS is designed for traffic management on physical networks, rather than virtual networks.

For hardware-based minimum bandwidth, you must use a network adapter that supports DCB, and the miniport driver of the network adapter must implement the Network Driver Interface Specification (NDIS) of the QoS application programming interfaces (APIs). With these requirements met, you can use the new Windows PowerShell cmdlets to configure the network adapter to provide bandwidth guarantees to multiple types of network traffic.
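As a minimal sketch of that last step, the following commands inspect and enable hardware QoS on a DCB-capable adapter. The adapter name "Ethernet 2" is a placeholder for this example; substitute the name reported by Get-NetAdapter on your server.

```powershell
# Verify that the adapter's miniport driver exposes the NDIS QoS (DCB)
# capabilities, such as the number of supported traffic classes.
Get-NetAdapterQos -Name "Ethernet 2"

# Enable hardware QoS (DCB) on the adapter.
Enable-NetAdapterQos -Name "Ethernet 2"
```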

DCB is a suite of technologies that help converge multiple subnets in a data center (such as your data and storage networks) onto a single subnet. DCB consists of the following:

  • 802.1Qaz Enhanced Transmission Selection (ETS) to support the allocation of bandwidth among various types of traffic.

  • 802.1Qbb Priority-based Flow Control (PFC) to enable flow control for a specific type of traffic.

  • 802.1Qau Congestion Notification to support congestion management of long-lived data flows within a data center.

If a network adapter supports iSCSI offload or remote direct memory access (RDMA) over Converged Ethernet (RoCE), and it is used in a data center, the network adapter must also support ETS to provide bandwidth allocation to the offload traffic. In addition, RoCE requires a lossless transport. Because Ethernet does not guarantee packet delivery, the network adapter and the corresponding switch must support PFC. For these reasons, a network adapter must support ETS and PFC to pass the NDIS QoS logo test for Windows Server 2012 certification. 802.1Qau Congestion Notification is not required to obtain the logo. Furthermore, the ETS specifications from the IEEE include a software protocol called Data Center Bridging Exchange (DCBX) to allow a network adapter and a switch to exchange DCB configurations. DCBX is also not required to obtain the logo.
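The ETS and PFC requirements described above translate into a short configuration, sketched below. The traffic class name, the 802.1p priority value (3), and the 40 percent bandwidth allocation are illustrative choices, not fixed values.

```powershell
# Install the Data Center Bridging feature.
Install-WindowsFeature -Name "Data-Center-Bridging"

# ETS: reserve 40% of the link for traffic tagged with 802.1p priority 3.
New-NetQosTrafficClass -Name "Storage" -Algorithm ETS -Priority 3 -BandwidthPercentage 40

# PFC: make delivery lossless for that same priority.
Enable-NetQosFlowControl -Priority 3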

Technical overview

In Windows Server 2008 R2, QoS supports the enforcement of maximum bandwidth. Consider a typical server running Hyper-V in which there are four types of network traffic that share a single 10-gigabit Ethernet network adapter:

  • Traffic between virtual machines and resources on other servers (virtual machine data)

  • Traffic to and from storage (storage)

  • Traffic for live migration of virtual machines between servers running Hyper-V (live migration)

  • Traffic between cluster nodes, including the cluster heartbeat and traffic to and from a cluster shared volume (CSV) (cluster)

If virtual machine data is rate-limited to 3 Gbps, this means the sum of the virtual machine data throughputs cannot exceed 3 Gbps at any time, even if the other network traffic types do not use the remaining 7 Gbps of bandwidth. However, this also means that the other types of traffic can reduce the actual amount of bandwidth that is available for virtual machine data to unacceptable levels, depending on whether or how their maximum bandwidths are defined.
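In Windows Server 2012, a rate limit like the 3 Gbps cap described above can be applied per virtual machine network adapter, as sketched below. The virtual machine name "VM01" is a placeholder for this example.

```powershell
# Cap this virtual machine's traffic at approximately 3 Gbps.
# -MaximumBandwidth is specified in bits per second.
Set-VMNetworkAdapter -VMName "VM01" -MaximumBandwidth 3000000000
```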

In Windows Server 2012, QoS introduces a new bandwidth management feature: Minimum bandwidth. In contrast to maximum bandwidth, minimum bandwidth guarantees a specific amount of bandwidth to a specific type of traffic. Figure 1 provides an example of how minimum bandwidth works for each of the four types of network traffic flow in three different time periods: T1, T2, and T3.

Figure 1   How minimum bandwidth works

The left table shows the minimum amount of bandwidth that is reserved for a specific type of network traffic and the estimated amount of bandwidth that it needs in the three time periods. For example, storage is configured to have at least 40% of the bandwidth (4 Gbps of a 10-gigabit Ethernet network adapter) at any time. In T1 and T2, it has 5 Gbps worth of data to transmit; and in T3, it has 6 Gbps worth of data. The right table shows the actual amount of bandwidth each type of network traffic gets in T1, T2, and T3. In this example, storage traffic is sent at 5 Gbps, 4 Gbps, and 6 Gbps, respectively, in the three time periods.

The characteristics of minimum bandwidth can be summarized as follows:

  • In the event of congestion, when the demand for network bandwidth exceeds the available bandwidth (such as in the T2 period in the example), minimum bandwidth ensures that each type of network traffic receives its assigned share of bandwidth. For this reason, minimum bandwidth is also known as fair sharing. This characteristic is essential for converging multiple types of network traffic on a single network adapter.

  • If there is no congestion (that is, when there is sufficient bandwidth to accommodate all network traffic, such as in the T1 and T3 periods), each type of network traffic can exceed its quota and consume as much bandwidth as is available. This characteristic makes minimum bandwidth superior to maximum bandwidth in utilizing available bandwidth.
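For minimum bandwidth on a Hyper-V virtual switch, this fair-sharing behavior can be configured by relative weight, as sketched below. The switch, adapter, and virtual machine names and the weight values are placeholders for this example.

```powershell
# Create a virtual switch that enforces minimum bandwidth by relative weight.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Ethernet 2" -MinimumBandwidthMode Weight

# Under congestion, VM01 is guaranteed twice the share of VM02; when the
# link is idle, either virtual machine can use the full bandwidth.
Set-VMNetworkAdapter -VMName "VM01" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -VMName "VM02" -MinimumBandwidthWeight 20
```

Weight-based reservation matches both characteristics above: shares are proportional only while demand exceeds capacity, and unused bandwidth remains available to any flow.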

Windows Server 2012 offers two mechanisms to enforce minimum bandwidth:

  • Through the newly enhanced packet scheduler

  • Through network adapters that support data center bridging (DCB)

In both cases, network traffic must be classified first. Windows Server 2012 either classifies a packet itself or instructs the network adapter to classify it. Classification produces a set of managed traffic flows, and a specific packet can belong to only one of them.

For example, a traffic flow can be a live migration connection, a file transfer between a server and a client computer, or a remote desktop connection. Based on how the bandwidth policies are configured, the packet scheduler in Windows Server 2012 or the network adapter sends the packets that are included in a specific traffic flow at a rate equal to or higher than the minimum bandwidth that is configured for the traffic flow.
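A classification plus a bandwidth policy can be expressed in a single cmdlet, as sketched below; the built-in -LiveMigration filter matches the live migration flow named in the example above, and the weight of 30 is an illustrative value.

```powershell
# Classify live migration traffic as one flow and guarantee it a
# minimum bandwidth weight of 30 under congestion.
New-NetQosPolicy -Name "LiveMigration" -LiveMigration -MinBandwidthWeightAction 30
```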

The two mechanisms have advantages and disadvantages:

  • The packet scheduler in Windows Server 2012 provides a fine level of detail for classification. It is a better choice if you have many traffic flows that require minimum bandwidth enforcement. A typical example is a server running Hyper-V that is hosting many virtual machines, where each virtual machine is classified as a traffic flow.

  • DCB support on the network adapter handles fewer traffic flows. However, it can classify network traffic that does not originate from the networking stack. A typical scenario involves a special network adapter called a converged network adapter (CNA) that supports iSCSI offload, in which iSCSI traffic bypasses the networking stack and is framed and transmitted directly by the CNA. Because the packet scheduler in the networking stack does not process this offloaded traffic, DCB is the only way to enforce minimum bandwidth for it.
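For the DCB mechanism, offloaded traffic is steered into a hardware traffic class by its 802.1p priority, as sketched below. The priority value (4) and the 30 percent allocation are illustrative choices for this example.

```powershell
# Tag iSCSI traffic with 802.1p priority 4, then give that priority its
# own ETS traffic class so the adapter, not the packet scheduler,
# enforces the bandwidth guarantee.
New-NetQosPolicy -Name "iSCSI" -iSCSI -PriorityValue8021Action 4
New-NetQosTrafficClass -Name "iSCSI" -Algorithm ETS -Priority 4 -BandwidthPercentage 30
```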

Both mechanisms can be employed on the same server. However, do not enable both mechanisms at the same time for a specific type of network traffic. Enabling both mechanisms at the same time for the same types of network traffic reduces performance.

In Windows Server 2012, QoS policies and settings are managed by using Windows PowerShell. The new Windows PowerShell cmdlets for QoS support the QoS functionality that is available in Windows Server 2008 R2 (such as maximum bandwidth and priority tagging) as well as new features such as minimum bandwidth. Whereas in Windows Server 2008 R2 you can configure QoS policies only manually, by using Group Policy snap-ins (gpedit.msc or gpmc.msc), in Windows Server 2012 you can script and automate QoS policies by using Windows PowerShell. Windows Server 2012 supports both static and dynamic configuration and enables you to manage virtualized servers that are connected to a converged network in your data center. Because Windows PowerShell has remote computer management capability, you can manage QoS policies for a group of servers at one time, even if these servers are not joined to a domain.
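As a sketch of that remote management capability, the same policy can be pushed to several servers in one command. The server names are placeholders for this example.

```powershell
# Apply one minimum-bandwidth policy to a group of Hyper-V servers.
Invoke-Command -ComputerName "HV01", "HV02", "HV03" -ScriptBlock {
    New-NetQosPolicy -Name "LiveMigration" -LiveMigration -MinBandwidthWeightAction 30
}
```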