Virtual Server Host Clustering Step-by-Step Guide for Virtual Server 2005 R2

This document provides an introduction to the methods and concepts of Virtual Server host clustering. With Virtual Server host clustering, you can provide a wide variety of services through a small number of physical servers and, at the same time, maintain availability of the services you provide. If one server requires scheduled or unscheduled downtime, another server is ready to quickly begin supporting services. Users experience minimal disruptions in service.

Virtual Server host clustering is a way of combining Microsoft® Virtual Server 2005 R2 with the server cluster feature in Microsoft Windows Server™ 2003. This document describes a simple configuration in which you use Microsoft Virtual Server 2005 R2 to configure one guest operating system, and configure a server cluster that has two servers (nodes), either of which can support the guest if the other server is down. You can create this configuration and then, by carefully following the pattern of the configuration, develop a host cluster with additional guests or additional nodes.

What Is Virtual Server Host Clustering?

Virtual Server host clustering is a way of combining two technologies, Virtual Server 2005 R2 and the server cluster feature in Windows Server 2003, so that you can consolidate servers onto one physical host server without causing that host server to become a single point of failure. To give an example, suppose you had two physical servers providing client services as follows:

  • Microsoft Windows Server 2003, Standard Edition, used as a Web server
  • Microsoft Windows NT® Server 4.0 with Service Pack 6a (SP6a), with a specialized application used in your organization

By using a configuration like the scenario in this document, you could consolidate these servers into one and, at the same time, maintain availability of services if that consolidated server failed or required scheduled maintenance. To do this, you would run each service listed above as a guest (also known as a virtual machine) on a physical server. You would also configure this server as one node in a server cluster, meaning that a second server would be ready to support the guests. If the first server failed or required scheduled maintenance, the second server would take over support of the services. You could perform any necessary work on the first server and then, as needed, have it once again resume support of the services.

The following figure shows a simple Virtual Server host cluster:

Virtual Server host cluster with 2 guests

It is important to understand that with Virtual Server host clustering, you are clustering the physical hosts, not the applications running on a physical host. Failure of a physical host would cause a second physical host to take over support of a guest, but failure of an application within a guest would not. For more information about what Virtual Server host clustering protects and what it does not protect, see Appendix A, Comparing Host Clustering to Other Types of Clustering.

Understanding Common Terms in Virtual Server Host Clustering

The following terms are important for understanding Virtual Server host clustering:

  • Host: a physical server on which a version of Virtual Server 2005 is running. For the configuration in this document, Virtual Server 2005 R2 is used.
  • Guest: an operating system running as a virtual machine in Virtual Server 2005. Multiple guests can run on one host. Each guest can run one or more applications.
  • Node: a computer system that is an active or inactive member of a cluster. In this document, a node is also a host.
  • Failover: the process of taking a group of clustered resources (such as a disk on which data is stored, plus an associated script) offline on one node and bringing them online on another node. The Cluster service ensures that this is done in a predefined, orderly fashion, so that users experience minimal disruptions in service.
  • Failback: the process of moving resources back to their preferred node after the node has failed and come back online.
  • Cluster storage: storage that is attached to all nodes of the cluster. Each disk on the cluster storage is owned by only one node of the cluster at a time. The ownership of disks moves from one node to another during failover or when the administrator moves a group of resources to another node.

What's New in the Combination of Virtual Server 2005 R2 and Server Clustering

Two recent offerings have made it easier to configure Virtual Server 2005 R2 in a Virtual Server host cluster:

  • The availability of iSCSI for use with a server cluster. With iSCSI in a server cluster, you do not need all the specialized hardware previously required for a server cluster. You only need additional network adapters to connect the storage to the cluster nodes, along with a storage unit that uses iSCSI. For more information about using iSCSI, see "Using iSCSI with Virtual Server 2005 R2" on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=55646).
  • A script that ensures that the guest functions correctly in the clustered environment. This script has been tested in server clusters to confirm its functionality. You must copy the script to each of the clustered nodes. When you configure the script as a Generic Script resource in the cluster, it ensures that the guest functions correctly when a failover or other cluster-related process occurs. The script also triggers restart of the guest if the guest stops running.

Who Should Read This Guide?

This guide is targeted at the following audiences:

  • IT planners and analysts who are evaluating the use of Virtual Server host clusters.
  • Enterprise IT planners and designers.
  • Server architects who are responsible for server consolidation and high availability.

Benefits of Virtual Server Host Clustering

The benefits of Virtual Server host clustering are a combination of certain benefits of Virtual Server 2005 R2 and the server cluster feature in Windows Server 2003:

  • Server consolidation, regardless of operating systems: With Virtual Server 2005 R2, you can consolidate multiple servers that might have been difficult to track and maintain, and place them all on one server, even if the original servers ran different operating systems.
    For more information about the benefits of Virtual Server 2005 R2, see "Microsoft Virtual Server" on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=55574).
  • Increasing the availability of a consolidated server: With the server cluster feature in Windows Server 2003 and a configuration you develop by following the guidelines in this document, you can increase the availability of the guests on a consolidated server. When a failure or scheduled downtime occurs, a second server will quickly begin providing support. Although users experience some disruptions in service, particularly if the first server had an unexpected power loss or other sudden failure in which the state of the guests could not be saved, the disruptions can be kept to a minimum.

Increasing Service Availability in Specific Situations

The preceding benefits combine to increase the availability of services that you provide through Virtual Server 2005 R2, especially in the following situations:

  • During scheduled maintenance of host hardware: Before performing hardware maintenance, you can move guests to another physical host. Alternatively, you can simply shut down the server that needs maintenance, and the guests will move to the other host automatically.

  • During scheduled updates to host software: Before you apply software updates (including service packs), you can move guests to another physical host. Alternatively, you can apply software updates and restart the server, at which point the guests will move to the other host automatically.

  • For operating systems or applications that could not previously have been run in a cluster: With Virtual Server host clustering, you can place almost any operating system or application within the context of a guest running in a server cluster. If a physical host fails or requires scheduled downtime, a second server will quickly begin providing support. This means you can increase availability of the guests even if they use legacy operating systems or run applications that are not cluster-aware, that is, applications that are not designed to work in a coordinated way with server cluster components.

    Important

    Host clustering monitors the state of the physical host server, but does not monitor applications running in guests. In other words, the clustering software does not respond if an application stops running while the host continues to run. If you use host clustering, you must find a different way to respond to application problems, such as by monitoring the application itself. For more information, see Appendix A, Comparing Host Clustering to Other Types of Clustering.
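
For scheduled maintenance, moving a guest to the other host can be done with a single command. The following sketch assumes the example names used in the scenario later in this document (the resource group Guest1Group and a second node named Node2) and uses the cluster.exe command-line tool included with Windows Server 2003:

    cluster group "Guest1Group" /moveto:Node2

Because this is an administrator-initiated move, the state of the guest is saved before the move and restored on the other node.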

In This Guide

  • Scenario for a Simple Host Clustering Configuration
  • Dependencies Between Cluster Resources in Virtual Server Host Clusters
  • Prerequisites for Virtual Server Host Clustering
  • Required Network Information, Accounts, and Administrative Credentials
  • Limitations with Virtual Server Host Clustering
  • Steps for Configuring Virtual Server Host Clustering
  • Additional Resources

Scenario for a Simple Host Clustering Configuration

This document describes a simple host clustering configuration. You can create this configuration and then, by carefully following the pattern of the configuration, develop a host cluster with additional guests or additional nodes.

The scenario described in this document has the following characteristics:

  • Uses a two-node cluster. This is fewer than the maximum number of nodes possible, which is eight. For details, see "Maximum number of supported nodes in a cluster" on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=55482).
  • Uses cluster storage (shared storage) connected to the nodes by SCSI, Fibre Channel, or iSCSI.
  • Has one guest operating system, configured as a resource group in the cluster. Because the guest is configured as a resource group in the cluster, it can fail over from one node to the other.
    In a production environment, it is likely you would want more than one guest operating system. However, a scenario with one guest provides the foundation for understanding a scenario with additional guests. If you wanted each guest to be able to fail over separately, you would configure each guest in its own resource group. If, however, you knew that you would always be moving certain guests together, you could configure them all in the same resource group.
  • Uses copies of the provided script. When you configure the script as a Generic Script resource in the cluster, it ensures that the guest functions correctly when a failover or other cluster-related process occurs. The script also triggers restart of the guest if the guest stops running.
  • Is configured with specific resource dependencies, as shown in the section "Dependencies Between Cluster Resources in Virtual Server Host Clusters" later in this document. The resource dependencies are also described in the procedures in this document. If one or more resources depend on another resource, the resource they depend on is brought online first and taken offline last. This timing coordination provides a predictable environment for each resource as it comes online and, in doing so, ensures that failover happens smoothly.

The following figures show a clustered host containing a guest operating system before and after a failure on the host.

Host cluster with Guest 1 running on Node 1
Host cluster with Guest 1 running on Node 2

We recommend that you work in your test lab to set up the scenario exactly as described in this document. Then, by carefully following the pattern of the configuration, you can develop and implement a host cluster configuration with additional guests or additional nodes.

Dependencies Between Cluster Resources in Virtual Server Host Clusters

It is very important to specify the correct resource dependencies in any kind of server cluster, including a Virtual Server host cluster. When you specify a dependency, you ensure that the Cluster service starts resources in the correct order. Each dependent resource is started after the resources that it depends on.

Before considering the dependencies, it can be useful to review the two types of resources used in a Virtual Server host cluster: Generic Script resources, each representing a script that ensures smooth functioning of a guest in the cluster, and Physical Disk resources, each representing a disk used by a guest.

The two principles in specifying the correct dependencies are as follows:

  • For guests with multiple Physical Disk resources, the principle is "operating system disk depends on data disk": The Physical Disk resource that contains the guest's operating system must depend on any Physical Disk resources that contain the guest's data.
    This ensures that all the resources associated with data are online before the guest's operating system attempts to access data on them.
  • For all guests, the principle is "script depends on disk": The Generic Script resource used for a guest must depend on the Physical Disk resource used for that guest. If the guest has more than one Physical Disk resource, the Generic Script resource must depend on the Physical Disk resource that contains the guest's operating system.
    This ensures that all of the guest's Physical Disk resources are up and available before any line of the script is run.

The following figure shows the dependency between the clustered resources in the scenario in this document.

Dependency of script resource on disk resource

If, after trying the configuration in this document, you decide to create a configuration with multiple guests in the same resource group (meaning the guests would always move together, never separately), begin by understanding the principles listed in this section. Then plan your resource dependencies, building them up into an orderly chain or tree, keeping in mind that each dependent resource is started after the resources that it depends on.
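
If you script your configuration, the two principles translate directly into cluster.exe dependency commands. The following sketch uses hypothetical resource names for a guest with two Physical Disk resources (Guest1OSDisk for the operating system and Guest1DataDisk for data) and a Generic Script resource named Guest1Script; adjust the names to match your own resources:

    rem Operating system disk depends on data disk
    cluster res "Guest1OSDisk" /adddep:"Guest1DataDisk"
    rem Script depends on the disk containing the guest's operating system
    cluster res "Guest1Script" /adddep:"Guest1OSDisk"

A dependency can be added only while the dependent resource is offline, and both resources must be in the same group.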

Prerequisites for Virtual Server Host Clustering

To use Virtual Server host clustering, you will need the following:

  • Servers: Two or more identical server cluster computers listed in Windows Server Catalog. The servers must have identical components, including identical processors of the same brand, model, and version. For more information, see Windows Server Catalog (https://go.microsoft.com/fwlink/?LinkId=4303).

    Important

    The complete set of hardware that you use must be listed in Windows Server Catalog as a qualified cluster solution for Windows Server 2003.

  • Software for host servers: Operating systems and other software for your clustered host servers. The servers should be running:

    • Windows Server 2003 with Service Pack 1 (SP1), Enterprise Edition, Windows Server 2003 with Service Pack 1 (SP1), Datacenter Edition, Windows Server 2003 R2, Enterprise Edition, or Windows Server 2003 R2, Datacenter Edition
      Either the 32-bit version or the 64-bit version of these operating systems can be used.
    • Microsoft Virtual Server 2005 R2.
      In addition, on the computer from which you will manage Virtual Server 2005 R2, you must install the Internet Information Services (IIS) component of the operating system before installing the Virtual Server management tool. To install components in Windows Server 2003 or Windows XP, in Control Panel, open Add or Remove Programs and then click Add/Remove Windows Components. In Windows Server 2003, IIS is listed under Application Server.
    • If iSCSI is used, Microsoft iSCSI Software Initiator 2.0 or higher.
      Whether you use network adapters or host bus adapters with iSCSI, you must install the Microsoft iSCSI initiator service included in the Microsoft iSCSI Software Initiator download package, which you can find on the Microsoft Web site (https://go.microsoft.com/fwlink/?linkid=44352). For a host bus adapter, you can install just the service by selecting Initiator Service at the beginning of setup.
  • Network adapters and cable (for network communication): In each clustered node, at least two network adapters dedicated to network communication (separate from any network adapter you might use for iSCSI).
    At least one adapter in each server connects server-to-server (private network) and at least one connects both server-to-server and server-to-clients (mixed network). The network adapters connected together into one network must be identical to one another. The mixed network can use teamed network adapters, but the private network cannot.

  • For the storage, device controllers or appropriate adapters:

    • For SCSI or Fibre Channel: If you are using SCSI or Fibre Channel, each node must have a mass-storage device controller dedicated to the cluster storage (in addition to the device controller for the local disk).
    • For iSCSI: If you are using iSCSI, each node must have either a network adapter or a host bus adapter dedicated to the cluster storage. If you use a network adapter, it must be dedicated to iSCSI. You cannot use one of your other network adapters for this purpose. The network adapters used for iSCSI should be identical, and we recommend that the adapters be Gigabit Ethernet.
  • Storage: Shared storage, listed in Windows Server Catalog as part of a qualified cluster solution.
    For this scenario, the storage should contain at least two separate volumes, that is, two separate logical unit numbers (LUNs). One volume will function as the quorum (disk containing configuration information needed for the cluster), and one will contain the virtual disk for the guest.
    If you want to configure additional guests, keep in mind that if each guest has its own volume (or volumes) in the storage, the guests can fail over as separate units. Otherwise, they must fail over as one unit.

  • Software for guests: Licensed copies of the operating system and other software that you will run on the guests. Review your Virtual Server documentation to find out which operating systems you can use for guests.

    Important

    After you install the operating system on a guest, you must also install Virtual Machine Additions on the guest. Virtual Machine Additions is included in Virtual Server 2005 R2. For more information, see "Setting up operating systems for virtual machines" on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=55573).

    Note that each guest is limited to one processor, so a guest generally cannot support large workloads that require symmetric multiprocessing (SMP) capabilities, for example, large enterprise-class databases. For more information, see Limitations with Virtual Server Host Clustering.

  • Script: The script, Havm.vbs. You will copy this script to each node of the cluster. For a copy of the script, see Appendix B, Script for Virtual Server Host Clustering.

  • Domain controller: An additional server that acts as the domain controller in the domain that contains your Virtual Server host cluster.

  • Clients: If you wish, you can connect one or more networked clients to the Virtual Server host cluster you create with this document, and observe the effect on a client when you move or fail over the guest.

Required Network Information, Accounts, and Administrative Credentials

In order to set up a cluster, you will need the following network information and accounts:

  • An account that you will use to log on when configuring and managing the cluster. This account must be a member of the local Administrators group on all nodes.
  • A static IP address for the cluster.
  • One static IP address for each network adapter in the nodes. Set the addresses for each linked pair of network adapters (linked node-to-node) to be on the same subnet. DHCP can be used for these addresses, but we do not recommend this.
    If you use iSCSI with network adapters in the nodes, you will also need a static IP address for each of these adapters.
  • A computer account for each cluster node.
  • A user account for the Cluster service. Do not use this account for other purposes. The New Server Cluster Wizard gives this account the necessary permissions when you set up the cluster. For more information about the permissions necessary for the Cluster service account, see the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=55161).
  • A name for the cluster, that is, the name administrators will use for connections to the cluster. (The actual applications running on the cluster can have different network names.)

Limitations with Virtual Server Host Clustering

When deploying Virtual Server host clustering, keep in mind the following limitations:

  • With any deployment of Virtual Server 2005 R2, you must review the capacity of each physical host server carefully to make sure that it can accommodate the requirements of the guests (virtual machines) you plan to create on that host.
  • Although running a Virtual Server host cluster minimizes interruptions to users, if a cluster node that is supporting guests suddenly fails (from a power failure, for example), some loss of state is unavoidable. Information that existed only in the memory of the failed server will be lost. Another node will respond quickly, but will not be able to recapture the complete state that the guests were in when the first node failed.
    In contrast, if an administrator performs a normal shutdown on a node, or moves a guest from one host (node) to another for planned maintenance or other reasons, Virtual Server saves the state of the guest before it is moved.
  • Although you can run Virtual Server on a multiprocessor computer, each virtual machine can use a maximum of one processor (although it can share the processor it is using with other virtual machines). This means that enterprise-class applications designed to use multiprocessor hardware may not provide adequate performance if you run them on a virtual machine. When deciding whether to run an application on a virtual machine, consider the virtual machine’s physical counterpart. In other words, would you run this application on a physical computer that has one processor?
  • Virtual Server affects performance. That is, in itself it requires some CPU, network, and I/O capacity. The performance you see when using Virtual Server host clustering will not be the same as on a server that does not use virtualization. To a much lesser extent, running a server as a node in a server cluster also has some effect on performance. Observe performance in a test environment before making capacity decisions for a production environment.

Steps for Configuring Virtual Server Host Clustering

For easy reference, the following table lists the names used in the procedures in this section. The procedures provide additional details about how to use the names.

Item to configure                                                   Name used in this document

Disk on shared storage                                              Disk X
Folder on disk where configuration files for the guest are placed   X:\Guest1
Clustered resource group                                            Guest1Group
Clustered Physical Disk resource                                    DiskResourceX
Clustered Generic Script resource                                   Guest1Script

Configuring Virtual Server Host Clustering

To configure a Virtual Server host cluster, you must complete the following tasks:

  1. Set up the cluster and install Virtual Server 2005 R2 on each node.
  2. Configure a shutdown script on each cluster node.
  3. Configure the disk resource, resource group, and guest control script.
  4. Create Guest1 on one of the hosts.
  5. Complete the configuration of Guest1 so it can fail over.

After configuring the host cluster, you can test failover.

Optionally, you can configure the action the cluster takes if a guest's operating system stops responding. The cluster can immediately fail the guest over to another node, or it can restart the guest a specified number of times on the same node before attempting failover.

These tasks are described in detail in the following procedures.

Note

This document describes a simple host clustering configuration. You can create this configuration and then, by carefully following the pattern of the configuration, develop a host cluster with additional guests or additional nodes.

To set up the cluster and install Virtual Server 2005 R2 on each node

  1. Find and review appropriate instructions for setting up a cluster. One set of instructions can be found on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=55162).

    Important

    Before beginning the process of setting up the cluster, ensure that when you turn on the computer and start the operating system, only one node will have access to the cluster storage. Otherwise, the cluster disks can become corrupted.

  2. As you review your instructions for setting up a cluster, be sure they cover the following requirements:

    • On each cluster node, in Network Connections, label each adapter with the same name as the corresponding adapter on the other node or nodes. We recommend you use names that show the function of each adapter, for example, "Private," "Public," and (if appropriate) "iSCSI."
    • On each cluster node, in Network Connections, you must configure the private and public adapters to use identical speed and duplex settings.
  3. Set up a two-node cluster running Windows Server 2003 with SP1 (Enterprise Edition or Datacenter Edition). On the cluster storage, make sure that in addition to the quorum disk, there is at least one additional volume with a specific drive letter or mount point assigned.

    In this procedure, the drive letter for this additional volume on the shared storage will be represented as X.

  4. If you use the Microsoft iSCSI Software Initiator, configure settings for all the clustered volumes so that they perform correctly during failover (a command-line alternative to these substeps appears after this procedure):

    1. On the storage device, use the software interface provided with your storage solution to view information about each clustered volume and to ensure that each of those clustered volumes is mounted.
    2. On one of the clustered nodes, click Start, click Programs or All Programs, click Microsoft iSCSI Initiator, and then click Microsoft iSCSI Initiator.
    3. On the Targets tab, locate one of the clustered volumes, and then click Log On.
      If Log On is unavailable, click Details, click Log off, click OK, and then click Log On.
    4. In Log On to Target, make sure that Automatically restore this connection when the system boots is selected, and then click OK.
    5. In Explorer, right-click My Computer and then click Manage. In Disk Management, make sure a drive letter has been assigned to the volume.
    6. If the Microsoft iSCSI Initiator is not still open, click Start, click Programs or All Programs, click Microsoft iSCSI Initiator, and then click Microsoft iSCSI Initiator.
    7. Click the Bound Volumes/Devices tab.
    8. Click Add and then type the drive letter of the volume, followed by a colon.
    9. Repeat this process for the other clustered volumes.
  5. Review information about installing Virtual Server 2005, available on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=55163). Decide whether to install the management interface, which is called the Virtual Server Administration Website, on the cluster nodes or on another computer.

  6. On the computer or computers on which you plan to install the Virtual Server Administration Website, make sure that the Internet Information Services (IIS) component of the operating system is installed. To install components in Windows Server 2003 or Windows XP, in Control Panel, open Add or Remove Programs and then click Add/Remove Windows Components. In Windows Server 2003, IIS is listed under Application Server.

  7. On one cluster node, stop the Cluster service and install Virtual Server 2005 R2. If, in step 5, you decided that you do not want to install the Virtual Server management interface on the cluster nodes, choose the Custom installation option and install only the Virtual Server Service on the node, and then install the Virtual Server Web Application on a different computer. After the installation on the cluster node is complete, restart the Cluster service.

    The Cluster service must be stopped to avoid network problems that can occur on the node during the time that Virtual Server 2005 R2 is installing its network driver.

  8. On the other cluster node, stop the Cluster service and install Virtual Server 2005 R2, as described in step 7. Then restart the Cluster service.
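
If you use iSCSI and prefer the command line, the Microsoft iSCSI Software Initiator also includes a command-line tool, iscsicli.exe, that can perform the volume binding described in step 4. The following is a sketch only; verify the commands against the documentation for your version of the initiator:

    rem List the available targets, then bind the volumes configured as persistent
    iscsicli ListTargets
    iscsicli BindPersistentVolumes
    rem Confirm the volumes and devices that are now bound
    iscsicli ReportPersistentDevices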

To configure shutdown on the cluster nodes

  1. Complete the previous procedure.

  2. In the root directory of the local hard disk on each node, create a batch file and call it Stop_clussvc_script.cmd.

  3. In the batch file, type the following line:

    net stop clussvc

  4. Add the batch file as a shutdown script on each node. To do this, follow these steps:

    1. Click Start, click Run, type gpedit.msc, and then press ENTER.
    2. In the left pane, click Local Computer Policy, click Computer Configuration, click Windows Settings, and then click Scripts (Startup/Shutdown).
    3. In the right pane, double-click Shutdown.
    4. In the Shutdown Properties dialog box, click Add. For the Script Name, type:
      c:\Stop_clussvc_script.cmd
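
The resulting Stop_clussvc_script.cmd needs only the single net stop line, but you may find an annotated version easier to maintain. A sketch (the rem comments are optional):

    rem Shutdown script: stop the Cluster service cleanly before the node
    rem shuts down, so that clustered groups such as Guest1Group fail over
    rem to the other node in an orderly way.
    net stop clussvc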

To configure the disk resource, resource group, and guest control script

  1. Complete the previous procedures.

  2. On a computer that contains Cluster Administrator, click Start, click Control Panel, click Administrative Tools, and then click Cluster Administrator. View the cluster that you created in an earlier procedure.

  3. In Cluster Administrator, create a new resource group and name it Guest1Group. If you want to specify a Preferred Owner for the group, specify the node on which you want the guest to run most of the time.

    The name that you give the group is for administrative purposes only. It is not the same as the name that clients would use to connect to the group.

  4. In Cluster Administrator, create a new disk resource, or use the appropriate disk resource if it has already been created. Specify the following properties for the resource:

    • Call it DiskResourceX.

    • Make sure it is a Physical Disk resource.

    • Assign the resource to Guest1Group.

    • For Possible Owners, make sure both cluster nodes are listed.

    • Do not specify any dependencies.

      Note

      In a more complex configuration, you might have multiple disk resources for each guest. If you did this, you would have to make the Physical Disk resource associated with the guest's operating system dependent on the Physical Disk resource or resources associated with the guest's data. This would ensure that all the resources associated with data are online before the guest's operating system attempts to access data on them.

    • For the disk, choose disk X.

  5. Ensure that Guest1Group is online on a node. Then, on that node, use Explorer to create a folder on disk X: called \Guest1.

  6. Copy the script called Havm.vbs, available in Appendix B, Script for Virtual Server Host Clustering, to the systemroot\Cluster folder on each node's local disk.

    Important

    The script must be copied to the correct folder on each node's local hard disk, not to a disk in the cluster storage.
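
If you prefer to script steps 3, 4, and 6, cluster.exe offers equivalents for most of them. The following sketch creates the group and the Physical Disk resource and copies the script; it assumes you run it from the folder that contains Havm.vbs, and it leaves associating DiskResourceX with disk X (and the possible-owners check) to Cluster Administrator:

    cluster group "Guest1Group" /create
    cluster res "DiskResourceX" /create /group:"Guest1Group" /type:"Physical Disk"
    rem Copy the guest control script to this node's local Cluster folder;
    rem repeat this copy on the other node.
    copy Havm.vbs %windir%\Cluster\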

To create Guest1 on one of the hosts

  1. Complete the previous procedures.

  2. On the computer that contains the management tool for Virtual Server 2005 R2, click Start, click Programs or All Programs, click Microsoft Virtual Server, and then click Virtual Server Administration Website. View the cluster node that currently owns DiskResourceX (which is in Guest1Group).

  3. In the navigation pane, under Virtual Networks, click Create.

  4. In Virtual network name, type ClusterNetwork.

  5. In Network adapter on physical computer, select the network adapter associated with the public network (not the private network) and then click OK.

  6. In the navigation pane, under Virtual Networks, click Configure, and then click View All.

  7. Point to the virtual network you just created, and then click Edit Configuration. In the line labeled .vnc file, select the path and copy it, and then paste it into a text editor such as Notepad and save it for later use.

  8. In the Virtual Server Administration Website, click Back.

  9. Point to the virtual network you just created, and then click Remove.

    The purpose of this step is not to undo the creation of the virtual network, but to clear Virtual Server of information that will prevent you from moving the configuration file for the virtual network (the .vnc file) to the cluster storage.

  10. On the cluster node on which you created the .vnc file, open Explorer, and then navigate to the path that you copied into a text file in step 7.

  11. Right-click ClusterNetwork.vnc, and then click Cut.

    Note

    You must cut and paste the file, not copy it.

  12. Navigate to X:\Guest1 and paste the .vnc file, ClusterNetwork.vnc.

  13. In the Virtual Server Administration Website, under Virtual Networks, click Add.

  14. In the box next to Existing configuration (.vnc) file, type:

    X:\Guest1\ClusterNetwork.vnc

  15. Click Add.

  16. In the navigation pane, under Virtual Machines, click Create.

  17. In Virtual machine name, instead of simply typing the name, type the following path, which not only names the virtual machine Guest1, but places the virtual machine's configuration file on the cluster storage:

    X:\Guest1\Guest1.vmc

  18. In Memory, type a value in megabytes for the amount of RAM used by the virtual machine.

    If you plan to create other virtual machines on this physical host, be sure to use only part of the physical RAM for Guest1.

  19. In Virtual hard disk, select Create a new virtual hard disk. To set the size of the virtual hard disk, specify a value in Size, and then select either MB for megabytes or GB for gigabytes.

    This size must be smaller than or equal to the size of disk X:.

  20. In Virtual network adapter, select ClusterNetwork.

  21. Click Create.

To complete the configuration of Guest1 so it can fail over

  1. Complete the previous procedures.

  2. In Cluster Administrator, move Guest1Group to the other node (not the node on which you were working in the previous procedure).

  3. For the cluster node on which Guest1Group is currently located, open the Virtual Server Administration Website.

  4. In the navigation pane, under Virtual Networks, click Add.

  5. In the box next to Existing configuration (.vnc) file, type:

    X:\Guest1\ClusterNetwork.vnc

  6. Click Add.

  7. In the navigation pane, under Virtual Machines, click Add.

  8. In Fully qualified path to file, type:

    X:\Guest1\Guest1.vmc

  9. Click Add.

  10. On either cluster node, in Cluster Administrator, create a new script resource with the properties in the following list.

    Note

    Do not bring this new resource online until you have completed step 11, which associates the script with the guest.

    • Call it Guest1Script.
    • Make it a Generic Script resource.
    • Assign the resource to Guest1Group.
    • For Possible Owners, make sure both cluster nodes are listed.
    • Add DiskResourceX to the list of resource dependencies.
    • For the Script filepath, specify the following, typing the percent character (%) as shown:
      %windir%\Cluster\Havm.vbs
  11. With Guest1Script in the Offline state, on the same node as in the previous step, click Start, click Run, type the following command, and then press ENTER:

    cluster res "Guest1Script" /priv VirtualMachineName=Guest1

    This command associates the Guest1Script resource with the guest named Guest1.

  12. In Cluster Administrator, bring Guest1Group online.

    If you use the Virtual Server Administration Website to view the node that is the owner of Guest1Group, in Master Status, Guest1 will now have a status of Running.

After completing the configuration of Guest1 so it can fail over, you can install an operating system on Guest1. After installing the operating system, you must install Virtual Machine Additions on the guest. Virtual Machine Additions is included in Virtual Server 2005 R2. For more information, see "Setting up operating systems for virtual machines" on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=55573).
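
Before testing failover, you can confirm that the association made in step 11 took effect by listing the private properties of the script resource:

    cluster res "Guest1Script" /priv

The output should include VirtualMachineName with the value Guest1.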

To test failover

  1. Complete the previous procedures.

  2. In Cluster Administrator, make sure that Guest1Group is online.

  3. In Cluster Administrator, right-click Guest1Group and then click Move Group. In the display for Guest1Group, you should see Owner change to the other node.

  4. If you want, conduct further tests. For example, you could conduct tests with an application installed in the guest, clients connected to the application, and a simulated power failure of the node that is currently the owner of Guest1Group.
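
The same test can also be scripted with cluster.exe. A possible sequence, which shows the owner of the group before and after the move:

    cluster group "Guest1Group"
    cluster group "Guest1Group" /move
    cluster group "Guest1Group"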

Note

The following procedure is an optional configuration step.

To configure the action the cluster takes if a guest's operating system stops responding

  1. Complete the previous procedures.

  2. In Cluster Administrator, right-click Guest1Script, click Properties, and then click the Advanced tab.

  3. Configure the action the cluster takes if a guest's operating system stops responding:

    • To cause the guest to immediately fail over to another node if the guest's operating system stops responding, select Restart and Affect the group and then, in Threshold, specify 0.
    • To cause the guest's operating system to be restarted on the same node a specified number of times, then fail over, select Restart and Affect the group. Then, in Threshold, specify the number of times you want the guest to be restarted on the same node. Finally, in Period, specify the amount of time in which you want the threshold number of restart attempts to take place.
      For example, suppose you decided that a guest's operating system could stop and be restarted on the same node three times, but if it stopped once more within a period of 15 minutes (900 seconds), it should fail over. You would configure this by selecting Restart and Affect the group, then specifying a Threshold of 3 and a Period of 900.
    • To cause the guest to be restarted on the same node every time that the guest's operating system stops responding, clear the Affect the group check box. Note that with this setting, the guest still fails over if the physical host fails or requires scheduled downtime.
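
These Advanced tab settings correspond to the RestartAction, RestartThreshold, and RestartPeriod common properties of the resource, so the 15-minute example above could also be set from the command line. This is a sketch under two assumptions you should verify on your own cluster: that RestartAction=2 means restart and affect the group, and that RestartPeriod is specified in milliseconds (900000 milliseconds = 900 seconds):

    cluster res "Guest1Script" /prop RestartAction=2 RestartThreshold=3 RestartPeriod=900000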

Additional Resources

The following resources provide additional information about Virtual Server host clustering: