
Building Your Cloud Infrastructure: Converged Data Center without Dedicated Storage Nodes

Published: April 18, 2012

Updated: December 18, 2012

Applies To: Windows Server 2012



This document contains the instructions that you need to follow to create a private or public cloud configuration that uses:

  • A converged network infrastructure for live migration, cluster, storage, management, and tenant traffic

  • Routing of all network traffic through the Hyper-V virtual switch

  • Hyper-V Virtual Switch Quality of Service (QoS)

  • Hyper-V Virtual Switch port ACLs and 802.1q VLAN tagging

  • NIC Teaming for network bandwidth aggregation and failover

  • Well-connected storage using SAS JBOD enclosures

The design pattern discussed in this document is one of three design patterns we suggest for building the core cloud network, compute, and storage infrastructure. For information about the other two cloud infrastructure design patterns, please see the companion documents in this series.

The Converged Data Center without Dedicated Storage Nodes cloud infrastructure design pattern focuses on the following key requirements in the areas of networking, compute, and storage:

  • You prefer that network traffic to and from both the host operating system and the guest operating systems running on the host move through a single network adapter team. This requirement is met by using Windows Server 2012 NIC Teaming (LBFO) and passing all traffic through the Hyper-V virtual switch.

  • You require that live migration, cluster, storage, management and tenant traffic all receive guaranteed levels of bandwidth. The requirement is met by using Hyper-V virtual switch QoS policies.

  • You require that infrastructure traffic (which includes Live Migration, cluster, storage and management traffic) and tenant traffic be isolated from each other. This requirement is met by using Hyper-V virtual switch port ACLs and 802.1q VLAN tagging.

  • You prefer to scale your cloud infrastructure by adding scale units consisting of compute and storage capacity together. This requirement is met by connecting the Hyper-V servers directly to SAS storage, without having dedicated file servers.

  • You require cost-effective storage. This requirement is met by using SAS disks in shared JBOD enclosures managed through Storage Spaces.

  • You require a resilient storage solution. This requirement is met by configuring multiple Hyper-V servers as a failover cluster with well-connected (shared JBOD) storage, so that all members of the failover cluster are directly connected to storage, and by configuring Storage Spaces as a mirrored space to protect against data loss in the case of disk failures.

  • You require that each member of the Hyper-V failover cluster be able to access the shared storage where the VHDs are located. This requirement is met by using Windows Server 2012 Failover Clustering and Cluster Shared Volumes Version 2 (CSV v2) volumes to store virtual machine files and metadata.

  • You require that the virtual machines will be continuously available and resilient to hardware failures. This requirement can be met by using Windows Server 2012 Failover Clustering together with the Hyper-V Server Role.

  • You require the highest number of virtual machines possible per host server (that is, increased density). This requirement is met by using processor offload technologies, such as Remote Direct Memory Access (RDMA), Receive Side Scaling (RSS), Receive Segment Coalescing (RSC), and Data Center Bridging (DCB). Note that in the default configuration presented here (without a dedicated storage access NIC), RDMA and DCB cannot be used, because these technologies require direct access to the hardware and must bypass much of the virtual networking stack. This is similar to the situation with Single Root I/O Virtualization (SR-IOV). For optimal performance, especially for network access to storage, a separate NIC team would be required to support these hardware offload acceleration technologies.

A Windows Server® 2012 cloud infrastructure is a high-performing and highly available Hyper-V cluster that hosts virtual machines that can be managed to create private or public clouds using the Converged Data Center without Dedicated Storage Nodes infrastructure design pattern. This document explains how to configure the basic building blocks for such a cloud. It does not cover the System Center or other management software aspects of deployments; the focus is on configuring the core Windows Server hosts that are used to build cloud infrastructure.

For background information on creating clouds using Windows Server 2012, see Building Infrastructure as a Service Clouds using Windows Server "8".

This cloud configuration consists of the following:

  • Multiple computers in a Hyper-V failover cluster.

    A Hyper-V cluster is created using the Windows Server 2012 Failover Cluster feature. The Windows Server 2012 Failover Clustering feature set is tightly integrated with the Hyper-V server role and enables a high level of availability from a compute and networking perspective. In addition, Windows Server 2012 Failover Clustering enhances virtual machine mobility which is critical in a cloud environment. For example, Live Migration is enhanced when performed in a failover cluster deployment because the cluster can automatically evaluate which node in the cluster is optimal for migrated virtual machine placement.

  • A converged networking infrastructure that supports multiple cloud traffic profiles.

    Each computer in the Hyper-V failover cluster should have at least two network adapters that will be used for the converged network. This converged network will host all traffic to and from the server, which includes both host system traffic and guest/tenant traffic. The network adapters will be teamed by using Windows Server 2012 Load Balancing and Failover (LBFO) NIC Teaming. The NICs can be either two or more 10 GbE or 1 GbE network adapters. These NICs will be used for live migration, cluster, storage, management (together referred to as "infrastructure" traffic) and tenant traffic.

  • The appropriate networking hardware to connect all of the computers in the Hyper-V cluster to each other and to a larger network from which the hosted virtual machines are available.

The following figure provides a high-level view of the scenario architecture. The teamed network adapters on each member of the failover cluster are connected to what will be referred to as a converged subnet in this document. We use the term converged subnet to make it clear that all traffic to and from the Hyper-V cluster members and the tenant virtual machines on each cluster member must flow through the teamed converged subnet network adapter. Both the host operating system and the tenants connect to the network through the Hyper-V virtual switch. The figure also shows an optional RDMA-capable network adapter that can be used for storage traffic, such as when storage is hosted on a share on a remote file server. This document does not discuss this optional configuration. For more information about this storage option, please see the document on the Converged Data Center with File Server Storage design pattern at http://technet.microsoft.com/en-us/library/hh831738.

High level overview of cluster member networking

Figure 1 High level overview of cluster member networking configuration

Note
At least one Active Directory Domain Services (AD DS) domain controller is needed for centralized security and management of the cluster member computers (not shown). It must be reachable by all of the cluster member computers, including the members of the shared storage cluster. DNS services are also required and are not depicted.

Figure 2 provides an overview of traffic flows on each member of the Hyper-V cluster. The figure calls out the following significant issues in the configuration:

  • Each cluster node member uses a virtual network adapter to connect to the Hyper-V Extensible Switch, which connects it to the physical network.

  • Each tenant virtual machine is also connected to the Hyper-V Extensible Switch using a virtual network adapter.

  • Network adapters named ConvergedNet1 and ConvergedNet2 participate in a teamed physical network adapter configuration using the Windows Server 2012 Failover and Load Balancing feature.

  • Windows Server 2012 Hyper-V virtual switch QoS is used to ensure that each traffic type (such as live migration, cluster, management, and tenant) has a predictable amount of bandwidth available.

  • Traffic isolation is enabled by 802.1q VLAN tagging so that host traffic is not visible to the tenants.

  • Windows Server 2012 Hyper-V virtual switch port ACLs can also be used for more granular access control at the network level.

It is important to note that Remote Direct Memory Access (RDMA) cannot be used on the converged network because it does not work together with the Hyper-V virtual switch. This will be an issue if you prefer to use high-performance SMB 3 connectivity to file server-based storage for virtual machine disk and configuration files. In the file server storage scenario, you can introduce additional RDMA-capable adapters to connect to storage.

Note
Virtual local area networks (VLANs) are not assigned to each tenant because VLAN-based network isolation is not a scalable solution and is not compatible with Windows Server 2012 network virtualization. VLANs are used to isolate infrastructure traffic from tenant traffic in this scenario.

Overview of cluster member traffic flows

Figure 2 Overview of cluster member traffic flows

This configuration highlights the following technologies and features of Windows Server 2012:

  • Load Balancing and Failover (LBFO): Load Balancing and Failover logically combines multiple network adapters to provide bandwidth aggregation and traffic failover to prevent connectivity loss in the event of a network component failure. Load Balancing and Failover is also known as NIC Teaming in Windows Server 2012.

  • Hyper-V Virtual Switch Quality of Service (QoS): In Windows Server 2012, QoS includes new bandwidth management features that let you provide predictable network performance to virtual machines on a server running Hyper-V.

  • Hyper-V Virtual Switch port ACLs and VLANs: In Windows Server 2012, the Hyper-V virtual switch includes new capabilities that enhance the security of the cloud infrastructure. You can now use port access control lists (port ACLs) and VLAN support to get network isolation similar to what you find when using physical network isolation.

  • Storage Spaces: Storage Spaces makes it possible for you to create cost-effective disk pools that present themselves as a single mass storage location on which virtual disks or volumes can be created and formatted.

Note
Although this configuration uses local SAS storage to meet the cost-effective storage requirement, you can easily choose to use other types of storage, such as SAN storage. You can find more information about storage configuration for a non-SAS scenario in the document Building Your Cloud Infrastructure: Non-Converged Enterprise Configuration, which describes how to configure the SAN storage.

The following sections describe how to set up this cloud configuration using UI-based tools and Windows PowerShell.

After the cloud is built, you can validate the configuration by doing the following:

  • Install and configure virtual machines

  • Migrate running virtual machines between servers in the Hyper-V cluster (live migration)

In this section, we will cover, step by step, how to configure the cloud infrastructure scale unit described in this document.

Creating this cloud infrastructure configuration consists of the following steps:

  • Step 1: Initial node configuration

  • Step 2: Initial network configuration

  • Step 3: Initial storage configuration

  • Step 4: Failover cluster setup

  • Step 5: Configure Hyper-V settings

  • Step 6: Cloud validation

The following summarizes the steps that this document describes:

 

Step 1: Initial Node Configuration (All Nodes)

  • 1.1-Add appropriate VLANs to interface ports on the physical switch for each traffic type:

    • Management (untagged, default)

    • Tenants (tagged)

    • Live migration (tagged)

    • Cluster/cluster shared volumes (CSV) (tagged)

  • 1.2-Enable BIOS settings required for Hyper-V

  • 1.3-Perform a clean operating system installation

  • 1.4-Perform post installation tasks:

    • Set Windows PowerShell execution policy

    • Enable Windows PowerShell remoting

    • Enable Remote Desktop Protocol and Firewall rule

    • Join the domain

  • 1.5-Install roles and features using default settings, rebooting as needed

    • Hyper-V (plus management tools)

    • Storage Services

    • Failover clustering (plus management tools)

    • File Sharing and storage management tools

Step 2: Initial Network Configuration (All Nodes)

  • 2.1-Disable unused and disconnected interfaces and rename active connections

  • 2.2-Create the converged network adapter team (rename as necessary) and assign IP addresses or configure DHCP as appropriate.

  • 2.3-Create the Hyper-V vSwitch and Management virtual network adapter (PowerShell)

  • 2.4-Rename Management virtual network adapter (optional)

  • 2.5-Create additional virtual network adapters and assign VLAN IDs (PowerShell)

    • Live migration

    • Cluster

  • 2.6-Rename the virtual network adapters

  • 2.7-Assign static IPs as necessary

  • 2.8-Configure QoS for different traffic types and configure the default minimum bandwidth for the switch

Step 3: Initial Storage Configuration (Single Node)

  • 3.1-Present all shared storage to relevant nodes

  • 3.2-For multipath scenarios, install and configure multipath I/O (MPIO) as necessary

  • 3.3-All shared disks: Wipe, bring online and initialize

Step 4: Failover Cluster Setup (Single Node)

  • 4.1-Run through the Cluster Validation Wizard

  • 4.2-Address any indicated warnings and/or errors

  • 4.3-Complete the Create Cluster Wizard (setting name and IP but do not add eligible storage)

  • 4.4-Create the clustered storage pool.

  • 4.5-Create the quorum virtual disk

  • 4.6-Create the virtual machine storage virtual disk.

  • 4.7-Add the virtual machine storage virtual disk to cluster shared volumes.

  • 4.8-Add folders to the cluster shared volume.

  • 4.9-Configure quorum settings

  • 4.10-Configure cluster networks to prioritize traffic.

Step 5: Hyper-V Configuration (All Nodes)

  • 5.1-Change default file locations, mapping to CSV volumes

Step 6: Cloud Validation (Single Node)

  • 6.1-Create a virtual machine, attaching an existing operating system VHD and tagging to the appropriate VLAN

  • 6.2-Test network connectivity from the virtual machine.

  • 6.3-Perform a Live Migration

  • 6.4-Perform a quick migration

In step 1, you will perform the following tasks on all nodes of the Hyper-V cluster:

  • 1.1 Add appropriate VLANS to the interface ports on the physical switch.

  • 1.2 Enable BIOS settings required for Hyper-V.

  • 1.3 Perform a clean operating system installation.

  • 1.4 Perform post-installation tasks.

  • 1.5 Install roles and features using the default settings.

Cluster nodes will be configured to use different VLAN tags for the following traffic types:

  • Management traffic – untagged/default

  • Tenant traffic – tagged

  • Live migration traffic – tagged

  • Cluster and CSV traffic – tagged

VLANs are configured to enable traffic isolation and quality of service policies. Define the VLAN tag numbers for each traffic type and then configure your switch ports with the appropriate VLAN IDs. The procedures for doing this vary with the switch make and model; refer to your switch documentation for more information. Note that management traffic is typically left untagged because tagging can interfere with a number of core host system activities. While you can tag the management traffic, you may run into problems with features such as PXE boot. Therefore, we recommend that you do not tag the management traffic.

You will need to enable virtualization support in the BIOS of each cluster member prior to installing the Hyper-V server role. The procedure for enabling processor virtualization support will vary with your processors' make and model and the system BIOS. Please refer to your hardware documentation for the appropriate procedures.

Install Windows Server 2012 using the Full Installation option.

There are several tasks you need to complete on each node after the operating system installation is complete. These include:

  • Join each node to the domain

  • Enable remote access to each node via the Remote Desktop Protocol.

  • Set the Windows PowerShell execution policy.

  • Enable Windows PowerShell remoting.

Perform the following steps to join each node to the domain:

  1. Press the Windows Key on the keyboard and then press R. Type Control Panel and then click OK.

  2. In the Control Panel window, click System and Security, and then click System.

  3. In the System window under Computer name, domain, and workgroup settings, click Change settings.

  4. In the System Properties dialog box, click Change.

  5. Under Member of, click Domain, type the name of the domain, and then click OK.
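The domain join above can also be scripted with Windows PowerShell. A minimal sketch, assuming a hypothetical domain name (contoso.com):

```powershell
# "contoso.com" is a placeholder; substitute your AD DS domain name.
# Add-Computer prompts for domain credentials, and the node restarts
# to complete the join.
Add-Computer -DomainName "contoso.com" -Credential (Get-Credential) -Restart
```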

Run the following Windows PowerShell commands on each node to enable remote access using the Remote Desktop Protocol, to enable PowerShell execution policy and enable PowerShell Remoting:

(Get-WmiObject Win32_TerminalServiceSetting -Namespace root\cimv2\terminalservices).SetAllowTsConnections(1,1)
Set-ExecutionPolicy Unrestricted -Force
Enable-PSRemoting -Force

The following roles and features will be installed on each node of the cluster:

  • Hyper-V and Hyper-V management Tools

  • Failover cluster and failover cluster management tools

  • Storage management tools

Perform the following steps on each node in the cluster to install the required roles and features:

  1. In Server Manager, click Dashboard in the console tree.

  2. In Welcome to Server Manager, click Add roles and features, and then click Next.

  3. On the Before You Begin page of the Add Roles and Features Wizard, click Next.

  4. On the Installation Type page, click Next.

  5. On the Server Selection page, click Next.

  6. On the Server Roles page, select Hyper-V from the Roles list. In the Add Roles and Features Wizard dialog box, click Add Features. Click Next.

  7. On the Features page, select Failover Clustering from the Features list. In the Add Roles and Features Wizard dialog box, click Add Features. Expand Remote Server Administration Tools, and then expand Role Administration Tools. Expand File Services Tools. Select Share and Storage Management Tool. Click Next.

    Note
    If you plan to use Multipath I/O for your storage solution, select the Multipath I/O feature while performing step 7.

  8. On the Hyper-V page, click Next.

  9. On the Virtual Switches page, click Next.

  10. On the Migration page, click Next.

  11. On the Default Stores page, click Next.

  12. On the Confirmation page, put a checkmark in the Restart the destination server automatically if required checkbox and then in the Add Roles and Features dialog box click Yes, then click Install.

  13. On the Installation progress page, click Close after the installation has succeeded.

  14. Restart the computer. This process might require restarting the computer twice. If so, the installer will trigger the multiple restarts automatically.

After you restart the server, open Server Manager and confirm that the installation completed successfully. Click Close on the Installation Progress page.
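As an alternative to the wizard, the roles and features above can be installed with Windows PowerShell. A sketch, assuming Windows Server 2012 feature names:

```powershell
# Installs Hyper-V and Failover Clustering with their management tools,
# restarting automatically if the installation requires it.
# Add the Multipath-IO feature here if you plan to use MPIO.
Install-WindowsFeature Hyper-V, Failover-Clustering -IncludeManagementTools -Restart
```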

The network configuration on each node in the cluster needs to be configured to support the converged networking scenario where all traffic, including infrastructure and tenant traffic, moves through the Hyper-V virtual switch. You will perform the following procedures on each of the nodes in the cluster to complete the initial network configuration:

  • 2.1 Disable unused and disconnected interfaces and rename active connections.

  • 2.2 Create a converged network adapter team and configure IP addressing information.

  • 2.3 Create the Hyper-V virtual switch and management virtual network adapter.

  • 2.4 Rename the management virtual network adapter (optional).

  • 2.5 Create additional virtual network adapters and assign VLAN IDs.

  • 2.6 Rename virtual network adapters (optional).

  • 2.7 Assign static IP addresses to the virtual network adapters.

  • 2.8 Configure QoS for different traffic types and configure the default minimum bandwidth for the virtual switch.

You can simplify the configuration and avoid errors when running the wizards and running PowerShell commands by disabling all network interfaces that are either unused or disconnected. You can disable these network interfaces in the Network Connections window.

For the remaining network adapters, do the following:

  1. Connect them to the converged network switch ports.

  2. To help you more easily recognize the active network adapters, rename them with names that indicate their use or their connection to the intranet or Internet (for example, ConvergedNet1 and ConvergedNet2). You can do this in the Network Connections window.

Load Balancing and Failover (LBFO) enables bandwidth aggregation and network adapter failover to prevent connectivity loss in the event of a network card or port failure. This feature is often referred to as "NIC Teaming". In this scenario you will create one team that will be connected to the ConvergedNet subnet.

To configure the network adapter teams by using Server Manager, do the following on each computer in the cluster:

Note
Several steps in the following procedure will temporarily interrupt network connectivity. We recommend that all servers be accessible over a keyboard, video, and mouse (KVM) switch so that you can check on the status of these machines if network connectivity is unavailable for more than five minutes.

  1. From Server Manager, click Local Server in the console tree.

  2. In Properties, click Disabled, which you'll find next to Network adapter teaming.

  3. In the NIC Teaming window, click the name of the server computer in Servers.

  4. In Teams, click Tasks, and then click New Team.

  5. In the New Team window, in the Team Name text box, enter the name of the network adapter team for the converged traffic subnet (example: ConvergedNet Team).

  6. In the Member adapters list select the two network adapters connected to the converged traffic subnet (in this example, ConvergedNet1 and ConvergedNet2), and then click OK. Note that there may be a delay of several minutes before connectivity is restored after making this change. To ensure that you see the latest state of the configuration, right click your server name in the Servers section in the NIC Teaming window and click Refresh Now. There may be a delay before the connection displays as Active. You may need to refresh several times before seeing the status change.

  7. Close the NIC Teaming window.

Configure a static IPv4 addressing configuration for the new network adapter team connected to the converged traffic subnet (example: ConvergedNet Team). This IP address is the one that you will use when connecting to the host system for management purposes. You can do this in the Properties of the team in the Network Connections window. You will see a new adapter where the name of the teamed network adapter is the name you assigned in step 5. You will lose connectivity for a few moments after assigning the new IP addressing information.

Note
You might need to manually refresh the display of the NIC Teaming window to show the new team and there may be a delay in connectivity as the network adapter team is created. If you are managing this server remotely, you might temporarily lose connectivity to the server.
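The teaming step above can also be performed with Windows PowerShell. A sketch, assuming the adapter names used in this document; note that the team name you choose here must match the -NetAdapterName value used when creating the virtual switch later:

```powershell
# Creates a switch-independent team from the two converged adapters.
# Hyper-V port load balancing is a common choice for a team that will
# be bound to a Hyper-V virtual switch.
New-NetLbfoTeam -Name "ConvergedNetTeam" -TeamMembers "ConvergedNet1", "ConvergedNet2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
```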

In this scenario, all traffic will flow through the Hyper-V virtual switch. This includes the host operating system traffic (cluster/CSV, management, and live migration) and guest/tenant traffic. You will create the virtual switch in Windows PowerShell instead of using the Hyper-V console, because the console does not let you specify the Minimum Bandwidth Mode: it defaults to Absolute (which requires a bits-per-second value) rather than Weight mode, which is configurable through Windows PowerShell and allows a relative weight from 1 to 100. For more information on the New-VMSwitch cmdlet, please see New-VMSwitch.

Run the following Windows PowerShell command on each member of the cluster to create the Hyper-V virtual switch and the management traffic virtual network adapter:

New-VMSwitch "ConvergedNetSwitch" -MinimumBandwidthMode weight -NetAdapterName "ConvergedNetTeam" -AllowManagementOS 1

Note that this command creates both the Hyper-V virtual switch and the management virtual network adapter.

If you are performing this action over an RDP connection, the connection may drop for a few moments.

The management virtual network adapter that was created when you created the virtual switch now appears in the Network Connections window and it was assigned a generic name such as vEthernet (ConvergedNetSwitch). You should rename this virtual network adapter to make it easier to identify in subsequent operations. Right-click the new virtual network adapter and click Rename and assign the virtual network adapter a new name (for example, Management).
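The rename can also be done with Windows PowerShell; a sketch using the names from this example:

```powershell
# Renames the management virtual network adapter that was created
# together with the virtual switch.
Rename-VMNetworkAdapter -ManagementOS -Name "ConvergedNetSwitch" -NewName "Management"
```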

The Hyper-V virtual switch now has a single virtual network adapter that will be used for hosting operating system management traffic. You now will create two additional virtual network adapters: one for live migration traffic and one for cluster traffic.

Run the following Windows PowerShell commands to create the live migration traffic virtual network adapter, the cluster traffic virtual network adapter, assign the live migration virtual network adapter a VLAN ID and assign the cluster virtual network adapter a VLAN ID:

Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedNetSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName LiveMigration -Access -VlanId 2160
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster -Access -VlanId 2161

In the preceding example, the -VMNetworkAdapterName parameter specifies the virtual network adapter to which the VLAN ID is assigned.

The live migration and cluster virtual network adapters you created in the previous step now appear in the Network Connections window, where they were assigned default names, such as vEthernet (Cluster). You should rename these virtual network adapters to make them easier to identify in subsequent operations. Right-click each of these virtual network adapters, click Rename, and assign the virtual network adapter a new name (for example, LiveMigration and Cluster).

You now need to assign IP addresses to your virtual network adapters. This can be done through DHCP, or you can assign static addresses. Make sure that each of the virtual network adapters is assigned an IP address on a different network ID; this will become important later when you configure your cluster networking. You can use the Network Connections Control Panel or Windows PowerShell to assign IP addressing information to the virtual network adapters. For example:

Set-NetIPInterface -InterfaceAlias "LiveMigration" -Dhcp Disabled
New-NetIPAddress -InterfaceAlias "LiveMigration" -IPAddress 11.0.0.x -PrefixLength 8

In this step you will configure QoS weightings that define the minimum share of bandwidth assigned to each of the virtual network adapters. You can determine the percentage of bandwidth guaranteed to a particular virtual network adapter by adding all the weight values together and then dividing the individual weight assigned to that adapter by the total.

Run the following Windows PowerShell commands to assign weight values to the cluster, management, and live migration virtual network adapters, and to set a default weight for any future virtual network adapters you create:

Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name ConvergedNetSwitch -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
Set-VMSwitch "ConvergedNetSwitch" -DefaultFlowMinimumBandwidthWeight 10
Note
VMNetworkAdapter name is listed under 'Device Name' in the Network Connections user interface.
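With the weights above, the guaranteed shares work out as follows: the total weight is 40 + 5 + 20 + 10 = 75, so under contention the cluster adapter is guaranteed roughly 40/75 (about 53%) of the bandwidth, management 5/75 (about 7%), live migration 20/75 (about 27%), and the default flow 10/75 (about 13%). You can review the assigned weights with a command such as the following (a sketch; the BandwidthSetting property of each host virtual network adapter holds the configured minimum weight):

```powershell
# Lists each host virtual network adapter with its minimum bandwidth weight.
Get-VMNetworkAdapter -ManagementOS |
    Select-Object Name, @{ Name = 'Weight'; Expression = { $_.BandwidthSetting.MinimumBandwidthWeight } }
```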

With the initial cluster node configuration complete, you are ready to perform initial storage configuration tasks on all nodes of the cluster. Initial storage configuration tasks include:

  • 3.1 Present all shared storage to relevant nodes.

  • 3.2 Install and configure MPIO as necessary for multipath scenarios.

  • 3.3 Wipe, bring online, and initialize all shared disks.

In a SAS scenario, connect the SAS adapters to each storage device. Each cluster node should have two adapters if highly available storage access is required.

If you have multiple data paths to storage (for example, two SAS cards), make sure to install the Microsoft® Multipath I/O (MPIO) feature on each node. This step might require you to restart the system. For more information about MPIO, see What's New in Microsoft Multipath I/O.
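The MPIO installation and SAS device claim can be scripted; a sketch assuming SAS-attached storage:

```powershell
# Install the Multipath I/O feature, then let the Microsoft DSM
# automatically claim SAS-attached devices. A restart may be required.
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType SAS
```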

To prevent issues with the storage configuration procedures that are detailed later in this document, confirm that the disks in your storage solution have not been previously provisioned. The disks should have no partitions or volumes. They should also be initialized so that there is a master boot record (MBR) or GUID partition table (GPT) on the disks, and then brought online. You can use the Disk Management console or Windows PowerShell to accomplish this task. This task must be completed on each node in the cluster.
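The wipe-and-initialize task can be sketched in Windows PowerShell as follows. The disk filter shown is illustrative only; scope it carefully to your shared disks, because Clear-Disk destroys all data on the targeted disks:

```powershell
# DESTRUCTIVE: brings each targeted disk online, clears existing
# partitions and data, and writes a fresh GPT partition table.
# Replace the Where-Object filter with one that selects only your
# shared SAS disks.
Get-Disk | Where-Object { $_.IsBoot -eq $false -and $_.IsSystem -eq $false } | ForEach-Object {
    Set-Disk -Number $_.Number -IsOffline $false
    Clear-Disk -Number $_.Number -RemoveData -Confirm:$false
    Initialize-Disk -Number $_.Number -PartitionStyle GPT
}
```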

To discover disks that can participate in a pool, you can use the PowerShell command Get-PhysicalDisk | ? BusType -Eq "SAS". All disks whose CanPool property reads True are eligible for pooling.

Note
If you have previously configured these disks with Windows Server 2012 Storage Spaces pools, you will need to delete these storage pools prior to proceeding with the storage configuration described in this document. Please refer to the TechNet Wiki Article, How to Delete Storage Pools and Virtual Disks Using PowerShell.

You are now ready to complete the failover cluster settings. Failover cluster setup includes the following steps:

  • 4.1 Run through the Cluster Validation Wizard.

  • 4.2 Address any indicated warnings and/or errors.

  • 4.3 Complete the Create Failover Cluster Wizard.

  • 4.4 Create the clustered storage pool.

  • 4.5 Create the quorum virtual disk.

  • 4.6 Create the virtual machine storage virtual disk.

  • 4.7 Add the virtual machine storage virtual disk to Cluster Shared Volumes.

  • 4.8 Add folders to the cluster shared volume.

  • 4.9 Configure Quorum Settings.

  • 4.10 Configure cluster networks to prioritize traffic.

The Cluster Validation Wizard will query multiple components in the intended cluster hosts and confirm that the hardware and software is ready to support failover clustering. On one of the nodes in the server cluster, perform the following steps to run the Cluster Validation Wizard:

  1. In Server Manager, click Tools, and then click Failover Cluster Manager.

  2. In the Failover Cluster Manager console, in the Management section, click Validate Configuration.

  3. On the Before You Begin page of the Validate a Configuration Wizard, click Next.

  4. On the Select Servers or a Cluster page, type the name of the local server, and then click Add. After the name appears in the Selected servers list, type the name of another Hyper-V cluster member computer, and then click Add. Repeat this step for all computers in the Hyper-V cluster. When all of the servers of the Hyper-V cluster appear in the Selected servers list, click Next.

  5. On the Testing Options page, click Next.

  6. On the Confirmation page, click Next. The time required for the validation process varies with the number of nodes in the cluster, and it can take some time to complete.

  7. On the Summary page, the summary text will indicate that the configuration is suitable for clustering. Confirm that there is a checkmark in the Create the cluster now using the validated nodes... checkbox.

Click the Reports button to see the results of the cluster validation. Address any issues that caused validation to fail, and then run the Cluster Validation Wizard again. Note that you may see errors regarding disk storage if you have not yet initialized the disks. After the cluster passes validation, click Finish and proceed to the next step.
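If you prefer Windows PowerShell, the same validation can be run with the Test-Cluster cmdlet; the node names below are placeholders for your environment:

```powershell
# Validate the intended cluster nodes; writes an HTML report to the
# current user's temporary folder and returns the report file object.
Test-Cluster -Node "HV-Node1","HV-Node2"
```

Review the generated report and resolve any warnings or errors before creating the cluster.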

After passing cluster validation, you are ready to complete the cluster configuration.

Perform the following steps to complete the cluster configuration:

  1. On the Before You Begin page of the Create Cluster Wizard, click Next.

  2. On the Access Point for Administering the Cluster page, enter a valid NetBIOS name for the cluster, select the network that you want the cluster to use (in this example, the Management network), type a static IP address for the cluster, and then click Next. Clear the checkboxes for all other networks that appear on this page.

  3. On the Confirmation page, clear Add all eligible storage to the cluster checkbox and then click Next.

  4. On the Creating New Cluster page you will see a progress bar as the cluster is created.

  5. On the Summary page, click Finish.

  6. In the console tree of the Failover Cluster Manager snap-in, open the Networks node under the cluster name.

  7. Right-click the cluster network that corresponds to the management network adapter network ID (subnet), and then click Properties. On the General tab, confirm that Allow cluster communications on this network is selected and that Allow clients to connect through this network is enabled. In the Name text box, enter a friendly name for this network (for example, ManagementNet), and then click OK.

  8. Right-click the cluster network that corresponds to the Cluster network adapter network ID (subnet) and then click Properties. On the General tab, confirm that Allow cluster communications on this network is selected and that Allow clients to connect through this network is not enabled. In the Name text box, enter a friendly name for this network (for example, ClusterNet), and then click OK.

  9. Right-click the cluster network that corresponds to the live migration network adapter network ID (subnet) and then click Properties. On the General tab, confirm that Allow cluster communications on this network is selected and that Allow clients to connect through this network is not enabled. In the Name text box, enter a friendly name for this network (for example, LiveMigrationNet), and then click OK.
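The steps above can be sketched in Windows PowerShell as follows. The cluster name, node names, IP address, and default cluster network names are placeholders; confirm the actual network names with Get-ClusterNetwork before renaming them:

```powershell
# Create the cluster without adding storage (storage is configured later).
New-Cluster -Name "HVCluster" -Node "HV-Node1","HV-Node2" `
    -StaticAddress 192.168.1.50 -NoStorage

# Rename the cluster networks and set their roles:
# 3 = cluster and client traffic, 1 = cluster traffic only.
$mgmt = Get-ClusterNetwork "Cluster Network 1"
$mgmt.Name = "ManagementNet"; $mgmt.Role = 3

$clus = Get-ClusterNetwork "Cluster Network 2"
$clus.Name = "ClusterNet"; $clus.Role = 1

$lm = Get-ClusterNetwork "Cluster Network 3"
$lm.Name = "LiveMigrationNet"; $lm.Role = 1
```

The friendly names assigned here (ManagementNet, ClusterNet, LiveMigrationNet) are reused later when setting the cluster network metrics.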

Perform the following steps on one of the members of the cluster to create the storage pool:

  1. In the left pane of the Failover Cluster Manager, expand the server name and then expand the Storage node. Click Storage Pools.

  2. In the Actions pane, click New Storage Pool.

  3. On the Before You Begin page, click Next.

  4. On the Storage Pool Name page, enter a name for the storage pool in the Name text box. Enter an optional description for the storage pool in the Description text box. In the Select the group of available disks (also known as a primordial pool) that you want to use list, select the name you assigned to the cluster (this is the NetBIOS name you assigned to the cluster when you created the cluster). Click Next.

  5. On the Physical Drives page, select the drives that you want to participate in the storage pool. Then click Next.

  6. On the Confirmation page, confirm the settings and click Create.

  7. On the Results page, you should receive the message You have successfully completed the New Storage Pool Wizard. Remove the checkmark from the Create a virtual disk when the wizard closes checkbox. Then click Close.
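A rough Windows PowerShell equivalent of the New Storage Pool Wizard is shown below. The pool name is a placeholder, and the exact friendly name of the clustered storage subsystem varies by environment, so it is discovered with a wildcard:

```powershell
# Identify the clustered storage subsystem; confirm the name with
# Get-StorageSubSystem if the wildcard does not match.
$subsys = Get-StorageSubSystem -FriendlyName "Clustered Storage*"

# Pool all disks that are eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "CloudPool" `
    -StorageSubSystemFriendlyName $subsys.FriendlyName `
    -PhysicalDisks $disks
```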

Now that you have created the storage pool, you can create virtual disks within that storage pool. A virtual disk is sometimes called a logical unit number (LUN); it represents a collection of one or more physical disks from the previously created storage pool. The layout of data across the physical disks can increase the reliability and performance of the virtual disk.

You will need to create at least two virtual disks:

  • A virtual disk that can be used as a quorum witness disk. This disk can be configured as a 1 GB virtual disk.

  • A virtual disk that will be assigned to a cluster shared volume.

Perform the following steps to create the quorum virtual disk:

  1. In the Failover Cluster Manager console, expand the Storage node in the left pane of the console. Right-click Pools, and then click Add Disk.

  2. In the New Virtual Disk Wizard on the Before You Begin Page, click Next.

  3. On the Storage Pool page, select your server name in the Server section and then select the storage pool you created earlier in the Storage pool section. Click Next.

  4. On the Virtual Disk Name page, enter a name for the virtual disk in the Name text box. You can also enter an optional description in the Description text box. Click Next.

  5. On the Storage Layout page, in the Layout section, select Mirror. Click Next.

  6. On the Resiliency Settings page, select Two-way mirror, and then click Next.

  7. On the Size page, in the Virtual disk size text box, enter a size for the new virtual disk, which in this example is 1 GB. Use the drop-down box to select GB. You can also put a checkmark in the Create the largest virtual disk possible, up to the specified size checkbox; when this option is selected, the wizard creates the largest virtual disk possible given the disks you have assigned to the pool, regardless of the number in the Virtual disk size text box. This is neither required nor desired when creating a witness disk. Click Next.

  8. On the Confirmation page, review your settings and click Create.

  9. On the Results page, put a checkmark in the Create a volume when this wizard closes checkbox. Click Close.

  10. On the Before You Begin page of the New Volume Wizard, click Next.

  11. On the Server and Disk page, select the name of the cluster from the Server list. In the Disk section, select the virtual disk you just created. You can identify this disk by looking in the Virtual Disk column, where you will see the name of the virtual disk you created. Click Next.

  12. On the Size page, accept the default volume size, and click Next.

  13. On the Drive Letter or Folder page, select Drive letter and select a drive letter. Click Next.

  14. On the File System Settings page, from the File system drop-down list, select NTFS. Use the default setting in the Allocation unit size list. Click Next.

  15. On the Confirmation page, click Create.

  16. On the Results page, click Close.
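The quorum disk steps above can be sketched in Windows PowerShell; the pool and disk names are placeholders from the earlier examples:

```powershell
# Create a small mirrored virtual disk for the quorum witness.
$vd = New-VirtualDisk -StoragePoolFriendlyName "CloudPool" `
    -FriendlyName "QuorumDisk" -ResiliencySettingName Mirror -Size 1GB

# Initialize, partition, and format the new disk, mirroring the
# New Volume Wizard steps (GPT, maximum size, NTFS).
$disk = $vd | Get-Disk
$disk | Initialize-Disk -PartitionStyle GPT
$disk | New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -Confirm:$false
```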

Perform the following steps to create the virtual machine storage virtual disk:

  1. In the Failover Cluster Manager console, expand the Storage node in the left pane of the console. Right-click Pools, and then click Add Disk.

  2. In the New Virtual Disk Wizard on the Before You Begin page, click Next.

  3. On the Storage Pool page, select your server name in the Server section and then select the storage pool you created earlier in the Storage pool section. Click Next.

  4. On the Virtual Disk Name page, enter a name for the virtual disk in the Name text box. You can also enter an optional description in the Description text box. Click Next.

  5. On the Storage Layout page, in the Layout section, select Mirror. Click Next.

  6. On the Resiliency Settings page, select Two-way mirror, and then click Next.

  7. On the Size page, in the Virtual disk size text box, enter a size for the new virtual disk. Use the drop-down box to select MB, GB, or TB. You can also put a checkmark in the Create the largest virtual disk possible, up to the specified size checkbox; when this option is selected, the wizard creates the largest virtual disk possible given the disks you have assigned to the pool, regardless of the number in the Virtual disk size text box. Click Next.

  8. On the Confirmation page, review your settings and click Create.

  9. On the Results page, put a checkmark in the Create a volume when this wizard closes checkbox. Click Close.

  10. On the Before You Begin page of the New Volume Wizard, click Next.

  11. On the Server and Disk page, select the name of the cluster from the Server list. In the Disk section, select the virtual disk you just created. You can identify this disk by looking in the Virtual Disk column, where you will see the name of the virtual disk you created. Click Next.

  12. On the Size page, accept the default volume size, and click Next.

  13. On the Drive Letter or Folder page, select Don't assign to a drive letter or folder. Click Next.

  14. On the File System Settings page, from the File system drop-down list, select NTFS. Use the default setting in the Allocation unit size list. Note that ReFS is not supported in a Cluster Shared Volume configuration. Click Next.

  15. On the Confirmation page, click Create.

  16. On the Results page, click Close.

The virtual disk you created for virtual machine storage is now ready to be added to a Cluster Shared Volume. Perform the following steps to add the virtual disk to a Cluster Shared Volume.

  1. In the Failover Cluster Manager, in the left pane of the console, expand the Storage node and click Disks. In the middle pane of the console, in the Disks section, right-click the virtual disk you created in the previous step, and then click Add to Cluster Shared Volumes.

  2. Proceed to the next step.
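From Windows PowerShell, adding the disk to the cluster and promoting it to a Cluster Shared Volume looks roughly like this; the cluster disk resource name is a placeholder that you should confirm with Get-ClusterResource:

```powershell
# Add the available (non-clustered) disk to the cluster.
Get-ClusterAvailableDisk | Add-ClusterDisk

# Promote the new cluster disk resource to a Cluster Shared Volume.
# The resource name varies; list Get-ClusterResource to confirm it.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```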

Now you need to create the folders on the virtual disk located on the Cluster Shared Volume to store the virtual machine files and the virtual machine data files.

Perform the following steps to create the folders that will store the running VMs of the Hyper-V cluster:

  1. Open Windows Explorer, navigate to the C: drive, double-click ClusterStorage, and then double-click Volume1.

  2. Create two folders in Volume1: one folder to contain the .vhd files for the virtual machines (for example, VHDdisks) and one folder to contain the virtual machine configuration files (for example, VHDsettings).
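The same folders can be created from Windows PowerShell on any one node; the folder names match the examples above:

```powershell
# Create the VM storage folders on the Cluster Shared Volume.
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\VHDdisks"
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\VHDsettings"
```

Because the CSV namespace is shared, the folders become visible to every node in the cluster.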

Perform the following steps to configure quorum settings for the cluster:

  1. In the left pane of the Failover Cluster Manager console, right-click the name of the cluster, click More Actions, and then click Configure Cluster Quorum Settings.

  2. On the Before You Begin page, click Next.

  3. On the Quorum Configuration Option page, select Use typical settings (recommended) and click Next.

  4. On the Confirmation page, click Next.
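If you prefer to configure the quorum explicitly from Windows PowerShell, a sketch using a node-and-disk-majority model is shown below; the disk witness resource name is a placeholder that you should confirm with Get-ClusterResource:

```powershell
# Configure a node and disk majority quorum using the quorum virtual disk.
# The resource name varies; confirm it with Get-ClusterResource first.
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Virtual Disk (QuorumDisk)"
```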

The cluster will use the network with the lowest metric for CSV traffic and the second lowest metric for live migration. Windows PowerShell® is the only method available to prescriptively specify the CSV network. You can set the live migration network via the Hyper-V management console, which you will do in Step 5: Configure Hyper-V settings.

Run the following Windows PowerShell commands on one node of the failover cluster to set the metrics for the cluster, live migration, and management network traffic:

(Get-ClusterNetwork "ClusterNet").Metric = 100
(Get-ClusterNetwork "LiveMigrationNet").Metric = 500
(Get-ClusterNetwork "ManagementNet").Metric = 1000

To finalize the Hyper-V configuration, you will need to take the following step:

  • 5.1 Change default file locations for virtual machine files.

On each Hyper-V cluster member, perform the following steps to change the default file locations for virtual machine files:

  1. In Server Manager, click Tools, then click Hyper-V Manager.

  2. From the console tree of the Hyper-V Manager, right-click the name of the Hyper-V server, and then click Hyper-V Settings.

  3. In the Hyper-V Settings dialog box, click Virtual Hard Disks under Server, type the file share location in Specify the default folder to store virtual hard disk files, and then click Apply. For example, c:\clusterstorage\volume1\VHDdisks.

  4. Click Virtual Machines under Server, type the file folder location in Specify the default folder to store virtual machine configuration files, and then click OK. For example, c:\clusterstorage\volume1\VHDsettings.
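The same defaults can be set from Windows PowerShell on each node; the paths match the examples above:

```powershell
# Point the Hyper-V default VM file locations at the Cluster Shared Volume.
Set-VMHost -VirtualHardDiskPath "C:\ClusterStorage\Volume1\VHDdisks" `
    -VirtualMachinePath "C:\ClusterStorage\Volume1\VHDsettings"
```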

To verify the configuration of your cloud environment, perform the following operations.

  • 6.1 Create a new virtual machine.

  • 6.2 Test network connectivity from the virtual machine.

  • 6.3 Perform a live migration.

  • 6.4 Perform a quick migration.

To create a new virtual machine in the cluster environment, perform the following steps.

  1. Open Failover Cluster Manager, click Roles under the cluster name, click Virtual Machines in the Actions pane, and then click New Virtual Machine.

  2. On the New Virtual Machine page, select the cluster node where you want to create the virtual machine, and then click OK.

  3. On the Before you Begin page of the New Virtual Machine Wizard, click Next.

  4. On the Specify Name and Location page, enter a friendly name for this virtual machine and then click Next.

  5. On the Assign Memory page, enter the amount of memory that will be used for this virtual machine (minimum for this lab is 1024 MB RAM) and then click Next.

  6. On the Configure Networking page, click Next.

  7. On the Connect Virtual Hard Disk page, leave the default options selected and click Next.

  8. On the Installation Options page, select Install an operating system from a boot CD/DVD-ROM and then select the location where the CD/DVD is located. If you are installing the new operating system based on an ISO file, make sure to select the option Image file (.iso) and browse for the file location. If you prefer to PXE boot, that option will be described in later steps. After you select the appropriate option for your scenario, click Next.

  9. On the Completing the New Virtual Machine Wizard page, review the options, and then click Finish.

  10. The virtual machine creation process starts. After it is finished, you will see the Summary page, where you can access the report created by the wizard. If the virtual machine was created successfully, click Finish.

  11. If you want to PXE boot the virtual machine, you will need to create a legacy network adapter. Right-click the new virtual machine, and then click Settings.

  12. In the Settings dialog box, select the Legacy Network Adapter option, and then click Add.

  13. In the Legacy Network Adapter dialog box, connect the adapter to the virtual switch (such as ConvergedNetSwitch), enable virtual LAN identification, and assign the appropriate network identifier.

Note
If the virtual machine continues to use the legacy network adapter, it will not be able to leverage many of the features available in the Hyper-V virtual switch. You may want to replace the legacy network adapter after the operating system is installed.
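The virtual machine creation steps above can be sketched in Windows PowerShell. The VM name, disk size, VLAN ID, and switch name are placeholders; the switch name should match the converged virtual switch you created earlier:

```powershell
# Create a VM with its files on the Cluster Shared Volume.
New-VM -Name "TestVM" -MemoryStartupBytes 1GB `
    -Path "C:\ClusterStorage\Volume1\VHDsettings" `
    -NewVHDPath "C:\ClusterStorage\Volume1\VHDdisks\TestVM.vhdx" `
    -NewVHDSizeBytes 40GB -SwitchName "ConvergedNetSwitch"

# Optional: add a legacy network adapter for PXE boot and tag its VLAN.
Add-VMNetworkAdapter -VMName "TestVM" -IsLegacy $true `
    -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapterVlan -VMName "TestVM" -Access -VlanId 10

# Make the VM highly available in the failover cluster.
Add-ClusterVirtualMachineRole -VMName "TestVM"
```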

At this point your virtual machine is created, and you should use Failover Cluster Manager to start the virtual machine and perform the installation for the operating system that you chose. For the purpose of this validation, the guest operating system can be any Windows Server version.

Once you finish installing the operating system in the virtual machine, log on and verify that the virtual machine was able to obtain an IP address from the enterprise network. Assuming a DHCP server is available on this network, the virtual machine should obtain an IP address automatically. To perform a basic network connectivity test, use the following approach:

  • Use the ping command to reach an IP address in the same subnet.

  • Use the ping command for the same destination, but this time use the fully qualified domain name of the destination host. The goal here is to test basic name resolution.

Note
If you installed Windows "8" Developer Preview in this virtual machine, you need to open Windows Firewall with Advanced Security and create a new rule to allow Internet Control Message Protocol (ICMP) before performing the previous tests. This may also be true for other hosts you want to ping; confirm that the host-based firewall on the target allows ICMP Echo Requests.

After you confirm that this basic test works properly, leave a command prompt window open and enter the command ping <Destination_IP_Address_or_FQDN> -t. The goal is to have a continuous test running while you perform the live migration to the second node.

Note
If you prefer to work with Windows PowerShell, you can use the Test-Connection cmdlet instead of the ping command. This cmdlet provides a number of connectivity testing options that exceed what is available with the simple ping command.

To perform a live migration of this virtual machine from the current cluster node to the other node in the cluster, perform the following steps.

  1. In the Failover Cluster Manager, click Roles under the cluster name. In the Roles pane, right-click the virtual machine that you created, click Move, click Live Migration, and then click Select Node.

  2. On the Move Virtual Machine page, select the node that you want to move the virtual machine to and click OK.

When the live migration starts, you will see its progress in the Status column; it may take some time for the Information column to reflect the current state of the migration. While the migration is taking place, you can go back to the virtual machine that has the ping running and observe whether there is any packet loss.

To perform the quick migration of this virtual machine from the current node to the other one, perform the following steps.

  1. On the Failover Cluster Manager, click Roles under the cluster name. In the Roles pane, right-click the virtual machine that you created, click Move, click Quick Migration and then click Select Node.

  2. On the Move Virtual Machine window, select the node that you want to move the virtual machine to, and then click OK.

You will notice that the quick migration starts faster than the live migration did. While the migration is taking place, you can go back to the virtual machine that has the ping running and observe whether there is any packet loss; because a quick migration briefly pauses the virtual machine while its state is moved, a short interruption in the ping responses is expected.
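Both migration types can also be triggered from Windows PowerShell; the VM and node names are placeholders from the earlier examples:

```powershell
# Live migration: the VM keeps running during the move.
Move-ClusterVirtualMachineRole -Name "TestVM" -Node "HV-Node2" `
    -MigrationType Live

# Quick migration: the VM is briefly paused while its state is moved.
Move-ClusterVirtualMachineRole -Name "TestVM" -Node "HV-Node1" `
    -MigrationType Quick
```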
