Creating a Hyper-V Host Cluster in VMM Prerequisites
Updated: November 1, 2013
Applies To: System Center 2012 - Virtual Machine Manager, System Center 2012 R2 Virtual Machine Manager, System Center 2012 SP1 - Virtual Machine Manager
Before you run the Create Cluster Wizard in Virtual Machine Manager (VMM) to create a Hyper-V host cluster, there are several prerequisites that must be met. These include prerequisites for host configuration and for fabric configuration.
Make sure that the hosts that you want to cluster meet the following prerequisites:
You must have two or more stand-alone Hyper-V hosts that are managed by VMM. For more information, see How to Add Trusted Hyper-V Hosts and Host Clusters in VMM.
The Hyper-V hosts must meet the requirements for failover clustering and must be running a supported operating system.
For more information, see the following:
For information about supported operating systems, see System Requirements: Hyper-V Hosts.
For information about hardware requirements for Windows Server 2008 R2, see Understanding Requirements for Failover Clusters.
For information about hardware requirements for Windows Server 2012, see Failover Clustering Hardware Requirements and Storage Options.
Important If the cluster will have three or more nodes, and the nodes are running Windows Server 2008 R2 with SP1, you must install the hotfix that is described in the article Validate SCSI Device Vital Product Data (VPD) test fails after you install Windows Server 2008 R2 SP1. Install the hotfix on each node before you run the Create Cluster Wizard. Otherwise, cluster validation may fail.
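Cluster validation can also be run ahead of time, so that any hardware or configuration problems surface before you start the Create Cluster Wizard. As a sketch (the host names below are placeholders for your own Hyper-V hosts, and the FailoverClusters module requires that the Failover Clustering feature or its management tools be installed):

```powershell
# Pre-validate the prospective cluster nodes before running the
# Create Cluster Wizard. Host names are placeholders.
Import-Module FailoverClusters
Test-Cluster -Node "HyperVHost01","HyperVHost02"
```

Test-Cluster produces a validation report; review any warnings or failures before continuing.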
The Hyper-V hosts that you want to add as cluster nodes must be located in the same Active Directory domain. The domain must be trusted by the domain of the VMM management server.
The Hyper-V hosts must belong to the same host group in VMM.
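You can quickly confirm that the hosts you plan to cluster belong to the same VMM host group from the VMM command shell. A minimal sketch (the host name filter is a placeholder):

```powershell
# In the VMM command shell: list the candidate hosts and the host group
# each belongs to. The name pattern is a placeholder for your hosts.
Get-SCVMHost | Where-Object { $_.Name -match "HyperVHost0[12]" } |
    Select-Object Name, VMHostGroup
```

All hosts listed should show the same host group before you run the Create Cluster Wizard.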
Make sure that fabric configuration meets the following prerequisites:
To use shared storage that is under VMM management, storage must already be discovered and classified in the Fabric workspace of the VMM console. Additionally, logical units that you want to use as available or shared storage must be created and allocated to the host group or parent host group where the Hyper-V hosts are located. The logical units must not be assigned to any host.
Note For information about how to discover, classify, and allocate storage, and the specific hardware and storage provider requirements, see the Configuring Storage in VMM section.
To use shared storage that is not under VMM management, disks must be available to all nodes in the cluster before you can add them. Therefore, you must provision one or more logical units to all hosts that you want to cluster, and mount and format the storage disks on one of the hosts.
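On Windows Server 2012, the mount-and-format step can be done with the Storage module cmdlets. A sketch, assuming the provisioned logical unit appears as disk number 1 (identify the correct disk number with Get-Disk first; the label is a placeholder):

```powershell
# On one of the hosts: bring the newly provisioned LUN online,
# initialize it, and format it. Disk number 1 is a placeholder;
# verify the correct disk with Get-Disk before running these commands.
Get-Disk
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterDisk1" -Confirm:$false
```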
Important VMM is agnostic regarding the use of asymmetric storage, where a workload can use disks that are shared between a subset of the cluster nodes. VMM does not support or block this storage configuration. Note that to work correctly with VMM, each cluster node must be a possible owner of the cluster disk. (Support for asymmetric storage was introduced in Windows Server 2008 R2 Service Pack 1.)
Each host that you want to cluster must have access to the storage array.
The Multipath I/O (MPIO) feature must be added on each host that will access the Fibre Channel or iSCSI storage array. You can add the MPIO feature through Server Manager. If the MPIO feature is already enabled before you add a host to VMM management, VMM will automatically enable MPIO for supported storage arrays by using the Microsoft provided Device Specific Module (DSM). If you already installed vendor-specific DSMs for supported storage arrays, and then add the host to VMM management, the vendor-specific MPIO settings will be used to communicate with those arrays.
If you add a host to VMM management before you add the MPIO feature, you must add the MPIO feature, and then manually configure MPIO to add the discovered device hardware IDs. Or, you can install vendor-specific DSMs.
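As a sketch of the manual path described above, on Windows Server 2012 you can add the MPIO feature and claim discovered devices for the Microsoft DSM as follows (on Windows Server 2008 R2, use Add-WindowsFeature from the ServerManager module instead of Install-WindowsFeature):

```powershell
# Add the MPIO feature on the host (Windows Server 2012).
Install-WindowsFeature -Name Multipath-IO

# Claim all attached multipath-capable devices for the Microsoft DSM.
# -r schedules a reboot if required; -i -a "" claims all device hardware IDs.
mpclaim.exe -r -i -a ""
```

Alternatively, install your storage vendor's DSM instead of claiming devices for the Microsoft DSM.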
If you are using a Fibre Channel storage area network (SAN), each host must have a host bus adapter (HBA) installed, and zoning must be correctly configured. For more information, see your storage array vendor’s documentation.
If you are using an iSCSI SAN, make sure that iSCSI portals have been added and that the iSCSI initiator is logged into the array. Additionally, make sure that the Microsoft iSCSI Initiator Service on each host is started and set to Automatic. For more information about how to create an iSCSI session on a host when storage is managed through VMM, see How to Configure Storage on a Hyper-V Host in VMM.
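When storage is not managed through VMM, the iSCSI service and session setup can be sketched as follows on Windows Server 2012 (the portal address is a placeholder for your array's iSCSI portal):

```powershell
# Ensure the Microsoft iSCSI Initiator Service is started and automatic.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Add the array's iSCSI target portal and log on to the discovered targets.
# The portal address is a placeholder.
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.50"
Get-IscsiTarget | Connect-IscsiTarget
```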
Important By default, when VMM manages the assignment of logical units, VMM creates one storage group per host. In a cluster configuration, VMM creates one storage group per cluster node. A storage group can contain one or more of the host’s initiator IDs (iSCSI Qualified Name (IQN) or World Wide Name (WWN)). For some storage arrays, it is preferable to use one storage group for the entire cluster, where host initiators for all cluster nodes are contained in a single storage group. To support this configuration, you must set the CreateStorageGroupsPerCluster property to $true by using the Set-SCStorageArray cmdlet in the VMM command shell. In VMM, a storage group is defined as an object that binds together host initiators, target ports, and logical units. A storage group has one or more host initiators, one or more target ports, and one or more logical units. Logical units are exposed to the host initiators through the target ports.
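As a sketch of that setting in the VMM command shell (the array name is a placeholder for your storage array):

```powershell
# In the VMM command shell: configure the array to use one storage group
# per cluster instead of one per node. The array name is a placeholder.
$array = Get-SCStorageArray -Name "ArrayName"
Set-SCStorageArray -StorageArray $array -CreateStorageGroupsPerCluster $true
```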
For all Hyper-V hosts that you want to cluster, if the hosts are configured to use static IP addresses, make sure that the IP addresses on all hosts are in the same subnet.
One or more logical networks that are common across all of the Hyper-V hosts that you want to cluster must be configured in the Fabric workspace of the VMM console. If a logical network has associated network sites, a network site must be scoped to the host group where the host cluster will reside. Additionally, the logical networks must be associated with physical network adapters on each Hyper-V host.
You do not have to create external virtual networks on the Hyper-V hosts beforehand. When you run the Create Cluster Wizard, you can configure the external virtual networks that VMM will automatically create on all cluster nodes. You can also configure virtual network settings for the cluster after cluster creation. For more information, see Configuring Hyper-V Host Cluster Properties in VMM.
For information about how to create logical networks, see How to Create a Logical Network in VMM.
For information about how to assign logical networks to physical network adapters, see How to Configure Network Settings on a Hyper-V Host in VMM.
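The association between a logical network and a host's physical network adapter can be sketched in the VMM command shell as follows (the host name, adapter connection name, and logical network name are all placeholders for your environment):

```powershell
# In the VMM command shell: associate a logical network with a physical
# network adapter on a host. All names below are placeholders.
$vmHost  = Get-SCVMHost -ComputerName "HyperVHost01"
$adapter = Get-SCVMHostNetworkAdapter -VMHost $vmHost |
    Where-Object { $_.ConnectionName -eq "Ethernet 2" }
$logical = Get-SCLogicalNetwork -Name "Backend"
Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $adapter -AddOrSetLogicalNetwork $logical
```

Repeat the association on each host you want to cluster so that the logical networks are common across all nodes.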
Important If the external virtual networks that you want to use for the cluster are already defined on each host, make sure that the names of the virtual networks are identical, and that the logical networks that are associated with each physical network adapter are identical. Otherwise, the virtual network will not be considered highly available by VMM.
For additional resources, see Information and Support for System Center 2012.
Tip: To find online documentation in the TechNet Library for System Center 2012, see Search the System Center 2012 Documentation Library.