Using Microsoft Virtual Server 2005 to Create and Configure a Two-Node Microsoft Windows Server 2003 Cluster

By Robert Larson

Abstract

Microsoft® Virtual Server 2005 enables the use of virtual machines for sophisticated computing configurations such as clustering. Clustering is valuable to businesses because it provides high availability for mission-critical applications and computing processes. Using virtual machines for clustering has the added advantage of letting a server use its full computing power by running multiple virtual machines on a single hardware device, providing redundancy without the quantity of computer hardware, and associated cost, that conventional clustering configurations can require. This guide provides step-by-step instructions for creating and configuring a typical single quorum device, two-node server cluster. The configuration uses a shared disk on servers with Microsoft Windows Server™ 2003 Enterprise Edition installed in virtual machines on Microsoft Virtual Server 2005.

On This Page

Introduction
Glossary of Terms
Microsoft Virtual Server 2005 Overview
Whitepaper Scenario
Checklists for Microsoft Virtual Server 2005 Configuration:
Checklists for Cluster Node Virtual Machine Configuration:
Virtual Server 2005 Configuration
Creating a Parent Virtual Hard Disk
Creating the Domain Controller - ClusterDC
Creating the Cluster Node Virtual Machines
Cluster Node Configuration
Cluster Installation
Creating a Cluster User Account
Setting up Shared Disks
Configuring the Cluster Service
Post-Installation Configuration
Test Installation
Appendix A
Related Links

Introduction

A server cluster is a group of independent servers working collectively via clustering software such as the Microsoft Cluster Service (MSCS). Server clusters provide high availability, failback, scalability, and manageability for resources and applications. Thus, server clusters facilitate uninterrupted client access to applications and server resources in the event of failures and planned outages. If one of the servers in the cluster is unavailable because of a failure or other downtime, clients utilize resources and applications from other available cluster nodes.

Windows clustering solutions use the term “high availability” rather than “fault tolerant” because fault-tolerance implies a high degree of hardware redundancy plus specialized software resulting in near-instantaneous recovery from any single hardware or software fault. These solutions cost significantly more than a Windows clustering solution because organizations must buy redundant hardware that runs in an idle state in anticipation of a fault.

Server clusters do not guarantee non-stop operation, but they usually provide sufficient availability for most mission-critical applications. The cluster service can monitor applications and resources and automatically recognize and recover from many failure conditions. Automatic failure recognition and recovery provides flexibility in managing the workload within a cluster. It also improves overall system availability.

There are additional benefits when you use virtual machines for clustering. Virtual machines allow multiple operating systems to run on one server computer, enabling disparate clustering solutions to share physical hardware and decreasing the total hardware required to create clustering solutions.

Microsoft Virtual Server 2005 allows for two-node clustering of virtual machines. This document provides instructions for creating and configuring a Windows Server 2003 Enterprise Edition server cluster with servers implemented as virtual machines connected to a virtual shared cluster storage device using Virtual Server 2005. The instructions are based on a defined scenario documented below where the domain controller and both nodes of the cluster are implemented as virtual machines. This document is intended to guide you through the process of installing a 2-node cluster using virtual machines. This document does not explain how to install clustered applications.

Glossary of Terms

The following is a list of terms and abbreviations that are used in this paper.

Virtual Machine

The virtual hardware environment provided by Virtual Server 2005 that provides a complete emulation of a physical computer (motherboard, BIOS, ports, memory, disk subsystem, network interface card, etc.)

Host Operating System

The operating system that is installed on the physical computer on which Virtual Server 2005 is installed.

Physical Computer

The physical hardware that is being used to host Virtual Server 2005 and virtual machines.

Guest Operating System

The operating system software that is installed in a virtual machine

Virtual Network

An emulated network segment implemented in software that can share the physical computer's Network Interface Card (NIC) to allow communications between computers (virtual or physical).

Virtual Machine Additions

Software loaded on the Guest operating system that provides functionality and performance enhancements.

Virtual CDROM

An emulated CDROM device implemented in software that can share the physical computer's CDROM device or mount ISO image files as if they were CDs.

Administration Web Site

The Web interface from which all Virtual Server administration is performed.

Master Status Page

The administration interface that lists registered virtual machines.

VMRC

Virtual Machine Remote Control (VMRC) is the remote management protocol that enables access to a virtual machine user interface.

VMRC Client

The client application that provides a stand-alone interface to access virtual machines.

VHD

The Virtual Hard Disk (VHD) is a file, stored on the host’s hard disk, which a virtual machine sees as a hard disk and uses to perform storage functions including essential disk read and write activities.

VMC

The Virtual Machine Configuration (VMC) is an XML file containing virtual machine settings including memory settings, display resolution, VHD location, default shut down options, and more.

Microsoft Virtual Server 2005 Overview

Virtual machines enable customers to run multiple operating systems concurrently on a single physical server, providing much more effective utilization of server hardware. Microsoft Virtual Server 2005 is optimized to provide this capability on a Windows Server 2003 operating system platform. Virtual Server 2005 is the most cost-effective virtual machine solution for Windows Server 2003, designed to improve operational efficiency in four key developer and server administrator scenarios: software testing and development, legacy application re-hosting, server consolidation, and testing of distributed server applications on a single server.

Virtual Server will be available to customers in two editions:

  • Microsoft Virtual Server 2005 Standard Edition

  • Microsoft Virtual Server 2005 Enterprise Edition.

Microsoft Virtual Server 2005 Standard Edition will support up to four processors, and Microsoft Virtual Server 2005 Enterprise Edition will support up to 32 physical processors. Otherwise, features across the two editions are the same.

The diagram below illustrates the basic architecture of Microsoft's virtual machine technology. Starting from the bottom of the logical stack:

  • The host operating system—Windows Server 2003—manages the host system.

  • Virtual Server 2005 provides a Virtual Machine Monitor virtualization layer that manages virtual machines, providing the software infrastructure for hardware emulation. The Virtual Machine Monitor can be relocated.

  • Each virtual machine consists of a set of virtualized devices, the virtual hardware for each virtual machine.

  • A guest operating system and applications run in the virtual machine—unaware, for example, that the network interface card (NIC) that it interacts with via Virtual Server is only a software simulation of a physical Ethernet device. When a guest operating system is running, the special-purpose Virtual Machine Monitor kernel takes mediated control over the CPU and hardware during virtual machine operations, creating an isolated environment in which the guest operating system and applications run close to the hardware at the highest possible performance.

    Note: Virtual Server Clustering is intended for test and development purposes only and is not supported in a production environment. 

    Figure 1. Virtual Server Architecture

The Virtual Server 2005 Technical Overview white paper contains a detailed discussion of the Virtual Server 2005 architecture, including specifications regarding emulated hardware. You can find the paper on the Microsoft Web site at the following link: https://go.microsoft.com/fwlink/?LinkId=34734.

Whitepaper Scenario

To provide the most complete set of reproducible instructions, this guide focuses on a specific scenario: creating a 2-node cluster of virtual machines on a single Virtual Server 2005 host. In this document we perform the following tasks:

  • Create a virtual network in Virtual Server for client communication with the cluster called PUBLIC using the 10.10.10.0/24 subnet.

  • Create a virtual network in Virtual Server for cluster heartbeat communications called PRIVATE using the 192.168.1.0/24 subnet.

  • Create the quorum disk as a fixed 500 MB NTFS formatted virtual hard disk.

  • Perform a base install of Windows Server 2003 Enterprise Edition that will be sysprep’ed. This disk will be used as the read-only parent to create differencing disks for the domain controller and both nodes of the cluster.

  • Create the Domain Controller, ClusterDC, to create a test forest with a single domain called Contoso.com.

  • Create Node1 of the cluster, configuring it for a cluster node, joining it to the Contoso domain and creating the cluster MyCluster.

  • Create Node2 of the cluster, configuring it for a cluster node, joining it to the Contoso domain, and adding this as the second node of the cluster.

  • Test the cluster after installation is complete.

You may adapt this scenario and these instructions by replacing server names and IP addresses with real names and IP addresses that conform to your corporate naming conventions. You do not have to implement the domain controller as a virtual machine, but we recommend that a domain controller be available on the same subnet as the Virtual Server host.

Figure 2 below shows the configuration of the virtual networks and virtual machines as a network diagram.

Figure 2. Logical Network diagram of the 2-node cluster scenario

Assumptions

Because this guide is not an introduction to Virtual Server 2005, we assume that you have experience installing and using the product. Please refer to the product and administration documentation that is installed with Virtual Server 2005 and available online. Additional resources, including the Microsoft Virtual Server 2005 Technical Overview, are available on the Virtual Server Web site at https://www.microsoft.com/virtualserver/.

Note: This guide assumes that you are using the default Start menu. The steps may be slightly different if you use the classic Start menu.

Checklists for Microsoft Virtual Server 2005 Configuration:

This checklist will help you prepare the physical computer to install the cluster in virtual machines using Virtual Server 2005. The requirements listed below are for the physical host computer.

Software Requirements

  • Microsoft Windows Server 2003 Standard, Enterprise, or Datacenter Edition installed as the host operating system. (Windows XP can be used as the host operating system in development and test environments.)

  • IIS installed with default permissions.

  • Microsoft Virtual Server 2005 installed on the host using port 1024 for the Virtual Server Administration Website.

Hardware Requirements

  • Processor: 2 GHz or faster

  • RAM: 1 GB or more (Virtual Server 2005 will only use non-paged memory)

  • Network interface card connected to a network with Internet access

  • CD-ROM/DVD drive installed on the host

  • Internal hard disk with 5 GB of free disk space for virtual hard disk (VHD) storage

Checklists for Cluster Node Virtual Machine Configuration:

This checklist helps you prepare for installation and creation of a 2-node cluster of virtual machines using Microsoft Virtual Server 2005. The requirements below are for virtual machine creation.  

Virtual Machine Software Requirements (for all virtual machines)

  • Microsoft Windows Server 2003 Enterprise Edition

  • Sysprep for Windows Server 2003

ClusterDC Virtual Machine Specifications

  • Dynamically expanding virtual hard disk for the guest operating system

  • One virtual NIC connected to the Public virtual network

Node1 Virtual Machine Specifications

  • One SCSI Controller configured for Shared Bus mode, SCSI ID=7

  • Dynamically expanding virtual hard disk for the guest operating system, attached to an IDE controller as the boot disk

  • Shared, fixed-size virtual hard disk for cluster quorum drive, NTFS formatted, attached to a SCSI controller as SCSI ID 0

  • One virtual NIC connected to the Public virtual network

  • One virtual NIC connected to the Private virtual network

Node2 Virtual Machine Specifications

  • One SCSI Controller configured for Shared Bus mode, SCSI ID=6

  • Dynamically expanding virtual hard disk for the guest operating system, attached to IDE controller as the boot disk

  • Shared, fixed-size virtual hard disk for the cluster quorum drive, NTFS formatted, attached to the SCSI controller as SCSI ID 0

  • One virtual NIC connected to the Public virtual network

  • One virtual NIC connected to the Private virtual network

Virtual Server 2005 Configuration

The two node scenario used here requires that two virtual networks be created to provide isolated communications. The first, a network named Public, is for communications between client machines and the cluster. The second network, named Private, is for communications between the cluster nodes and, specifically, the cluster heartbeat.  

The following section provides the steps for creating the Public and Private virtual networks in Virtual Server 2005. We assume that Virtual Server 2005 is already installed on the host system and the user has the appropriate permissions to manage the instance of Virtual Server 2005 as a local administrator.  

In addition, you will be creating a local directory to store all of the virtual hard disks and guest configuration files.

Network Configurations

  • One virtual network called Private that is used for all cluster node-to-node communications

  • One virtual network called Public that is used for all client-to-cluster communications

Storage Configurations

  • One directory for virtual disk storage

  • One directory for virtual machine configuration file storage

  • Virtual Server Search Path pointing to the virtual disk storage directory

  • Virtual Server configuration file pointing to the virtual machine configuration storage directory

Creating the Private Virtual network

We recommend that the Private network be dedicated to cluster node-to-node (heartbeat) communications only.

  1. On the host machine, click Start, point to All Programs, click Microsoft Virtual Server, then select Virtual Server Administration Website. (If you are prompted for credentials, enter the local administrator user name and password.) You should now see the Master Status Page. 

  2. Point to the Virtual Networks menu and click Create.

  3. Enter Private as the virtual network name.

  4. Select None (Guests Only) for the physical network adapter.

  5. Type Private network for cluster communications (Heartbeat) in the textbox, and then click OK.

Creating the Public Virtual network

We recommend that the Public network be on a dedicated local area network and that it be configured to allow all cluster clients to access the system. In this scenario, we will be configuring the Public network to allow traffic external to the host utilizing a host physical network adapter. This will allow us to load the latest security and driver patches.

  1. From the Virtual Server Administration Website Master Status Page, find the Virtual Networks menu on the left hand side.

  2. Point to the Virtual Networks menu, and then click Create.

  3. Enter Public as the virtual network name.

  4. Select an available physical network adapter from the physical computer network adapter combo box that is connected to the physical network. Client machines will use this to access the cluster. This can be a LAN or Wireless NIC.

  5. Type Public network for cluster communications in the textbox, and then click OK.

Creating and configuring the virtual hard disk storage directory

For this scenario, we recommend that virtual machines be stored in the same directory for easy Virtual Server configuration. The following procedure will help you create two subdirectories – one for storing the virtual hard disks and one for storing the virtual machine configuration files, and it will help you configure Virtual Server to use these values as the default storage locations.

  1. On the host machine, create a subdirectory on the root of the disk called C:\VirtualDisks to store virtual hard disks and a subdirectory called C:\VirtualConfig to store virtual machine configuration files. (Equivalent command-line steps are shown after this procedure.)

  2. From the Virtual Server Administration Website Master Status Page, find the Virtual Server menu.

  3. From the Virtual Server menu click Server Properties.

  4. From the properties page, click Search Paths.

  5. Change the Default virtual machine configuration folder to C:\VirtualConfig.

  6. Type C:\VirtualDisks in the search path text box (separating by a semicolon if existing paths are configured).

  7. Click OK.

You have now configured the default storage location for configuration files and the default search paths so that all of the pull down selection boxes throughout the Virtual Server Administration Website will show resources in the configured directories.
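
If you prefer to create the storage directories from a command prompt on the host instead of Windows Explorer (step 1 above), the commands below produce the same result. The paths are the ones used throughout this scenario; substitute your own if they differ.

    rem Folder for virtual hard disk (.vhd) files
    mkdir C:\VirtualDisks
    rem Folder for virtual machine configuration (.vmc) files
    mkdir C:\VirtualConfig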

Creating a Shared Quorum Cluster Disk

The quorum disk is used to store cluster configuration database checkpoints and log files that help manage the cluster and maintain consistency. The following quorum disk procedures are recommended for this test scenario:

  • Create a fixed size virtual hard disk of 500 MB formatted as NTFS to be used as a quorum disk and dedicate it as a quorum resource.

The quorum resource plays a crucial role in the operation of the cluster. In every cluster, a single resource is designated as the quorum resource. In this 2-node cluster of virtual machines, the quorum resource will be a 500 MB fixed-size virtual hard disk (dynamically expanding disks are not supported for shared cluster storage).

The following instructions provide the steps for creating the quorum disk.

  1. From the Virtual Server Administration Website Master Status Page, locate the Virtual Disks menu.

  2. From the Virtual Disks menu, point to Create, and then select Fixed Size Virtual Hard Disk.

  3. Type C:\VirtualDisks\Quorum.vhd for the virtual hard disk name.

  4. Select MB for disk capacity.

  5. Enter 500 as the size.

  6. Click Create. 

Creating a Parent Virtual Hard Disk

For this scenario, differencing disks are used to simplify the creation of the virtual machines and minimize the amount of disk space required. The parent virtual hard disk is a standard dynamically expanding virtual hard disk. Once the parent virtual hard disk has been loaded and sysprep’ed, it will be used for the creation of the domain controller and each of the cluster nodes.  

Creating a Blank Virtual Hard Disk

Use the following procedures to create the parent virtual hard disk.

  1. From the Virtual Server Administration Website Master Status Page, locate the Virtual Disks menu.

  2. Under the Virtual Disks menu, point to Create, and then select Dynamically Expanding Virtual Hard Disk.

  3. Type C:\VirtualDisks\ParentVM.vhd in the virtual hard disk file name text box.

  4. Accept the default disk size of 16GB. 

  5. Click Create.

    NOTE: This procedure uses IDE virtual hard disks for the boot device for simplicity of the procedures. Microsoft recommends that you use SCSI virtual hard disks as the boot device to obtain the best performance. To do this you must create the virtual hard disk in advance and attach it as SCSI ID 0 to a non-shared SCSI controller in the virtual machine.

Creating the Temporary Virtual Machine

You will create the virtual machine that will be used as the parent for the rest of the virtual machines in this scenario using the previously created dynamically expanding hard disk called ParentVM.vhd.

Use the following procedure to create the temporary virtual machine.

  1. Under the Virtual Machines menu, click Create.

  2. Enter C:\VirtualConfig\ParentVM.vmc for the virtual machine name.

  3. Enter 256 MB for the memory.

  4. Select Use an existing virtual hard disk and enter C:\VirtualDisks\ParentVM.vhd in the File name (.vhd) text box (or select it from the pulldown box).

  5. Select Public as the virtual network adapter that will be used by the cluster client machines to communicate to the cluster.

  6. Click Create.

Loading and Syspreping the Parent Virtual Hard Disk

In this section you will power on the virtual machine, load the Windows Server 2003 Enterprise Edition operating system, and prepare the machine for use as a parent differencing disk.  

Note: We recommend that you fully patch the system before placing it on a production network. In this scenario the Public virtual network is bound to a physical network adapter on the host, which allows the parent virtual machine to retrieve patches from Windows Update while it is being built.

This document assumes that you are familiar with installing Windows Server 2003, so only a general description of steps is provided.

  1. From the Virtual Server Administration Website Master Status Page you should see a virtual machine list that contains the ParentVM virtual machine.

  2. Insert the Windows Server 2003 Enterprise Edition CDROM in the physical host CDROM drive.

  3. Edit the ParentVM configuration and capture the physical CDROM to the virtual CDROM.

  4. Click the ParentVM thumbnail on the Master Status Page to power-on the virtual machine.

  5. Install Windows Server 2003 Enterprise Edition with the following settings:

    1. Member Server

    2. Computer name = ParentVM

    3. Password = blank (required for sysprep)

    4. Typical networking settings

  6. Once the guest operating system restarts, log in as Administrator with no password.

  7. Wait while the system performs plug and play detection and then install the Virtual Server Additions.

  8. Apply all Windows Server 2003 critical security patches from Windows Update.

  9. Now create your sysprep.inf file. There is a sample sysprep.inf file in Appendix A that can be used to sysprep ParentVM, and a minimal illustrative fragment is shown after this procedure. When you create your sysprep.inf file, leave the computer name blank so that you will be prompted during mini-setup for a new computer name.

  10. Sysprep the machine using the Reseal option and select Shutdown mode. This will shut down the virtual machine after sysprepping.

  11. Make the C:\VirtualDisks\ParentVM.vhd file Read-only. Now the C:\VirtualDisks\ParentVM.vhd virtual hard disk can be used as the read-only parent disk in a differencing disk configuration.
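
The complete sample sysprep.inf is in Appendix A; the fragment below is only a minimal illustrative sketch of the kind of settings it contains, using the names from this scenario. The key names follow the standard Windows Server 2003 answer-file format, but verify them against Appendix A and the Sysprep documentation on the product CD before relying on them.

    ; Minimal illustrative sysprep.inf fragment (see Appendix A for the full sample)
    [Unattended]
        OemSkipEula=Yes
    [GuiUnattended]
        OemSkipWelcome=1
        OEMSkipRegional=1
        ; TimeZone value 004 = Pacific (US); adjust for your location
        TimeZone=004
        ; AdminPassword is intentionally omitted so Mini-Setup prompts for it
    [UserData]
        FullName="Contoso"
        OrgName="Contoso"
        ; ComputerName is intentionally omitted so Mini-Setup prompts for it
    [Identification]
        JoinWorkgroup=WORKGROUP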

Creating the Domain Controller - ClusterDC

For the guide scenario, a new test domain named Contoso.com will be created in a new forest. It will be created as a virtual machine so that the entire scenario is implemented with locally available resources. The domain controller will be named ClusterDC and will be built from the parent differencing disk.

Creating the Domain Controller virtual machine involves the following steps:

  1. Create the virtual hard disk as a differencing disk

  2. Create the virtual machine

  3. Configure the virtual machine

  4. Promote the machine to a domain controller using DCPromo

In this section you will create the virtual hard disk for the domain controller; the same approach is used later for the cluster nodes. The domain controller and each node of the cluster use a “child” differencing disk. A child differencing disk must be associated with a pre-existing “parent” virtual hard disk. The parent is the read-only source for the child. The child differencing disk provides an ongoing way to save changes without altering the parent disk. You can use the differencing disk to store changes indefinitely, as long as there is enough space on the physical disk where the differencing disk is stored. The differencing disk expands dynamically as data is written to it and can grow as large as the maximum size allocated for the parent disk when the parent disk was created.

Creating the ClusterDC Virtual Hard Disk

In order to assign a differencing disk to a virtual machine, the Virtual Disk Wizard must be used prior to creating the virtual machine. Follow these steps to create the differencing virtual hard disk:

  1. From the Virtual Server Administration website Master Status Page, locate the Virtual Disks menu panel.

  2. Under the Virtual Disks menu panel, click Create.

  3. Select Differencing Virtual Hard Disk.

  4. Type C:\VirtualDisks\ClusterDC.vhd in the virtual hard disk file name box.

  5. Type C:\VirtualDisks\ParentVM.vhd in the parent hard disk file name box.

  6. Click Create.

Create ClusterDC Virtual machine

  1. From the Virtual Server Administration website Master Status Page, locate the Virtual Machine menu panel.

  2. Under the Virtual Machine menu panel, click Create.

  3. Enter C:\VirtualConfig\ClusterDC.vmc as the virtual machine name.

  4. Allocate 256 MB of memory.

  5. Select Use an existing virtual hard disk and enter C:\VirtualDisks\ClusterDC.vhd in the .vhd file box.

  6. Select Public for the virtual network adapter.

  7. Click Create.

Loading Domain Controller Virtual machine

In this section you will turn on the ClusterDC virtual machine, configure the unique settings that the Sysprep Mini-Setup Wizard requests, and run DCPromo to create a domain controller. Because you should already be familiar with these steps, only high-level instructions are provided.

  1. From the Virtual Server Administration website Master Status Page, you will see a virtual machine list that contains the ClusterDC virtual machine.

  2. Click the thumbnail of the virtual machine under the Remote View column to turn on the virtual machine.

  3. Once the Master Status Page refreshes, click the thumbnail of the virtual machine again to remotely control the ClusterDC.

  4. The Sysprep Mini-Setup Wizard will appear and prompt you for a computer name and an admin password. Enter the following:

    Computer Name: ClusterDC

    Password: Pass@word1

  5. Once the machine finishes loading, log in as Administrator with Pass@word1 as the password.

  6. Go to the LAN connection and assign the following parameters to TCP/IP:

    IP Address: 10.10.10.1

    Subnet Mask: 255.255.255.0

    DNS: 10.10.10.1

  7. Attach a Windows Server 2003 Enterprise Edition install CD to the ClusterDC virtual CDROM.

    1. From the Virtual Server Administration website Master Status Page, locate the Virtual Machines menu panel.

    2. Under the Virtual Machines menu panel, click Configure.

    3. Select the ClusterDC virtual machine.

    4. Select CD\DVD.

    5. Enable the Physical CD\DVD drive and select the correct drive letter.

    6. Click OK.

  8. Run DCPromo:

    1. Select Domain controller for a new domain.

    2. Select Domain in a new forest.

    3. Full DNS name: Contoso.com.

    4. When prompted, choose to install and configure DNS on this server.

    5. Finish the DCPromo process.

  9. Reboot the server.

  10. Verify that the ClusterDC is operating correctly by checking the event logs for any errors.

    Now the Domain Controller is configured and ready for you to complete the rest of the procedures.
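
In addition to checking the event logs, you can sanity-check the new domain controller from a command prompt. The tools below (dcdiag and netdiag) ship with the Windows Server 2003 Support Tools, which are assumed to be installed separately from the product CD; the switches shown are the defaults and are only illustrative.

    rem Verify overall domain controller health (requires Support Tools)
    dcdiag
    rem Verify the network configuration and connectivity of the domain controller
    netdiag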

Creating the Cluster Node Virtual Machines

Cluster Node Configurations

  • One virtual machine called NODE1. This is the first node in the cluster.

  • One virtual machine called NODE2. This is the second node in the cluster.

Hard disks for both nodes will use the parent virtual hard disk created in the previous procedure, C:\VirtualDisks\ParentVM.vhd. The parent disk contains a sysprep’ed Windows Server 2003 Enterprise Edition member server installation.

Create Cluster Node1 Virtual Machine

Perform the following steps to create the Node1 virtual hard disk and virtual machine.

  1. From the Virtual Server Administration website Master Status Page, locate the Virtual Disks menu panel.

  2. Under the Virtual Disks menu panel, click Create.

  3. Select Differencing Virtual Hard Disk.

  4. Type C:\VirtualDisks\NODE1.vhd for the hard disk name.

  5. Type C:\VirtualDisks\ParentVM.vhd for the name of the parent hard disk.

  6. Click Create.

  7. Under the Virtual Machine menu panel, click Create.

  8. Type C:\VirtualConfig\Node1.vmc as the virtual machine name.

  9. Allocate 256 MB for the memory.

  10. Select Use an existing virtual hard disk and type C:\VirtualDisks\Node1.vhd in the .vhd file box.

  11. Select Public for the Virtual Network Adapter.

  12. Click Create.

Create Cluster Node2 Virtual Machine

Use the following steps to create the Node2 virtual hard disk and virtual machine.

  1. From the Virtual Server Administration website Master Status Page, locate the Virtual Disks menu panel.

  2. Under the Virtual Disks menu panel, click Create.

  3. Select Differencing Virtual Hard Disk.

  4. Type C:\VirtualDisks\NODE2.vhd for the hard disk name.

  5. Type C:\VirtualDisks\ParentVM.vhd for the name of the parent hard disk.

  6. Click Create.

  7. Under the Virtual Machine menu panel, click Create.

  8. Type C:\VirtualConfig\Node2.vmc as the virtual machine name.

  9. Allocate 256 MB for the memory.

  10. Select Use an existing virtual hard disk and type C:\VirtualDisks\Node2.vhd in the .vhd file box.

  11. Select Public for the Virtual Network Adapter.

  12. Click Create.

Cluster Node Configuration

Now that the cluster nodes are created, they must be booted and configured. This involves booting each cluster node into Mini-Setup, configuring a name and administrator password, modifying network settings, loading the latest Virtual Server Additions, and then turning off the virtual machine.

Cluster Node1 Configuration

  1. From the Virtual Server Administration website Master Status Page, locate the Virtual Machines menu panel.

  2. Under the Virtual Machines menu panel, click Configure.

  3. Select Node1.

  4. Click SCSI Controllers.

  5. Click the Add Controller button to add a controller for the Quorum disk.

  6. Enable Share SCSI bus for clustering. 

  7. Set the SCSI ID = 7.

  8. Click Apply.

  9. Click on the thumbnail of Node1 under the remote view column to turn it on.

  10. Click the thumbnail again to remotely control Node1.

    Note: You may have to enable administrative Virtual Machine Remote Control (VMRC) support and\or approve the download of the VMRC ActiveX control.

  11. The Sysprep Mini-Setup Wizard will appear and prompt you for a computer name and an admin password. Enter the following:

    Computer Name: NODE1

    Password: Pass@word1

  12. Once the machine finishes booting, log in as Administrator with Pass@word1 as the password.

  13. Go to the LAN connection and assign the following parameters to TCP/IP:

    1. IP Address: 10.10.10.2

    2. Subnet Mask: 255.255.255.0

    3. DNS: 10.10.10.1

  14. Install the Virtual Server Additions.

  15. Join Node1 to the Contoso.com domain. Restart when prompted.

  16. Shutdown Node1.

  17. Now you will add the Private network to Node1. From the Virtual Server Administration website Master Status Page, locate the Virtual Machines menu panel.

  18. Under Configure select Node1. 

  19. Click Network Adapters.

  20. Click the Add Network Adapter button.

  21. Select Private for the network adapter, and then click Apply.

  22. Return to the Master Status Page.

Cluster Node2 Configuration

  1. From the Virtual Server Administration website Master Status Page, locate the Virtual Machines menu panel.

  2. Under the Virtual Machines menu panel, click Configure.

  3. Select Node2.

  4. Click SCSI Controllers.

  5. Click the Add Controller button to add a controller for the Quorum disk.

  6. Enable Share SCSI bus for clustering. 

  7. Set the SCSI ID = 6.

  8. Click Apply.

  9. Click on the thumbnail of Node2 under the remote view column to turn it on.

  10. Click the thumbnail again to remotely control Node2.

  11. The Sysprep Mini-Setup Wizard will appear and prompt you for a computer name and an admin password. Enter the following:

    Computer Name: NODE2

    Password: Pass@word1

  12. Once the machine finishes booting, log in as Administrator with Pass@word1 as the password.

  13. Go to the LAN connection and assign the following parameters to TCP/IP:

    1. IP Address: 10.10.10.3

    2. Subnet Mask: 255.255.255.0

    3. DNS: 10.10.10.1

  14. Install the Virtual Server Additions.

  15. Join Node2 to the Contoso.com domain. Restart when prompted.

  16. Shutdown Node2.

  17. Now you will add the Private network to Node2. From the Virtual Server Administration website Master Status Page, locate the Virtual Machines menu panel.

  18. Under Configure select Node2.

  19. Click Network Adapters.

  20. Click the Add Network Adapter button.

  21. Select Private for the network adapter, and then click Apply.

  22. Now return to the Master Status Page.

Cluster Installation

Installation Overview

During the installation process, some nodes will be shut down while others are being installed. This step helps guarantee that data on disks attached to the shared bus is not lost or corrupted. This can happen when multiple nodes simultaneously try to write to a disk that is not protected by the cluster software. The default behavior for mounting new disks has changed in Windows Server 2003 from the behavior in the Microsoft Windows 2000 operating system. In Windows Server 2003, logical disks that are not on the same bus as the boot partition will not be automatically mounted and assigned a drive letter. This helps ensure that the server will not mount drives that could possibly belong to another server in a complex SAN environment. Although the drives will not be mounted, it is recommended that you follow the procedures below as a precautionary measure to be certain the shared disks will not become corrupted.
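
If you want to be explicit about the mount behavior described above, you can disable automatic mounting of new volumes from a command prompt on each node before the shared disk is presented. The mountvol switches below exist in Windows Server 2003 (DiskPart's automount command can be used instead); this is only a precautionary sketch and is not strictly required for this scenario.

    rem Disable automatic mounting of new basic volumes (precaution on cluster nodes)
    mountvol /N
    rem To re-enable automatic mounting later:
    rem mountvol /E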

Use the table below to determine which nodes and storage devices should be turned on during each step.

The steps in this guide are for a two-node cluster.

  • Setting up networks – Node 1: On, Node 2: On, Storage: Off. Verify that all storage devices on the shared bus are turned off. Turn on all nodes.

  • Setting up shared disks – Node 1: On, Node 2: Off, Storage: On. Shut down all nodes. Turn on the shared storage, and then turn on the first node.

  • Verifying disk configuration – Node 1: Off, Node 2: On, Storage: On. Turn off the first node, and turn on the second node.

  • Configuring the first node – Node 1: On, Node 2: Off, Storage: On. Turn off all nodes, and then turn on the first node.

  • Configuring the second node – Node 1: On, Node 2: On, Storage: On. Turn on the second node after the first node is successfully configured.

  • Post-installation – Node 1: On, Node 2: On, Storage: On. All nodes should be on.

Several steps must be taken before configuring the Cluster service software. These steps are:

  1. Set up the network for the cluster heartbeat for each node.

  2. Set up shared SCSI controllers and quorum disks.

Perform these steps on each cluster node before proceeding with the installation of cluster service on the first node.

To configure the cluster service, you must be logged on with an account that has administrative permissions to all nodes. Each node must be a member of the same domain.

Setting up Networks

Each cluster node requires at least two network adapters, connected to two or more independent networks, to avoid a single point of failure. One adapter connects to a public network, and one connects to a private network consisting of cluster nodes only. Servers with multiple network adapters are referred to as “multi-homed.” Because multi-homed servers can be problematic, it is critical that you follow the network configuration recommendations outlined in this document.

The private network adapter is used for node-to-node communication, cluster status information, and cluster management. Each node’s public network adapter connects the cluster to the public network where clients reside and should be configured as a backup route for internal cluster communication. To do so, configure the roles of these networks as either Internal Cluster Communications Only or All Communications for the Cluster service.

To eliminate possible communication issues, remove all unnecessary network traffic from the Private network adapter that is set to Internal Cluster communications only.

For the network connections to be correct, the private network adapters must be on a different logical network (subnet) from the public adapters.

Cluster Node1 Network Configuration

  1. From the Virtual Server Administration website Master Status Page, Click on the thumbnail of the virtual machine named Node1 to power it on.

  2. Click the thumbnail again to remotely control Node1.

  3. Log on to the machine as the Administrator with Pass@word1 as the password.

  4. Allow the new NIC to be detected via Plug-n-Play.

    It is a good idea to change the names of the network connections for clarity.

  5. Click Start, open Control Panel.

  6. Double-click Network Connections. 

  7. Right-click the Local Area Connection 2 icon.

  8. Click Rename.

  9. Type Private in the textbox and then press Enter.

  10. Right-click the Local Area Connection icon.

  11. Click Rename.

  12. Type Public in the textbox, and then press Enter.

  13. On the Advanced menu, click Advanced Settings.

  14. In the Connections box, make sure that your bindings are in the following order, and then click OK.

    1. Public – Local Area Connection

    2. Private – Local Area Connection 2

    3. Remote Access Connections

  15. Right-click the network connection for your Private adapter, and then click Properties.

  16. On the General tab, make sure that only the Internet Protocol (TCP/IP) check box is selected. Click to clear the check boxes for all other clients, services, and protocols.

  17. Highlight Internet Protocol (TCP/IP), and then click Properties.

  18. Assign the following parameters to TCP/IP:

    1. IP Address: 192.168.1.2

    2. Subnet Mask: 255.255.255.0

  19. Verify that there are no values defined in the Default Gateway box or under Use the Following DNS server addresses.

  20. Click the Advanced button.

  21. On the DNS tab, verify that no values are defined. Make sure that the Register this connection's addresses in DNS and Use this connection's DNS suffix in DNS registration check boxes are cleared.

  22. On the WINS tab, verify that there are no values defined. Click Disable NetBIOS over TCP/IP.

  23. Now return to the Master Status Page by clicking on the Master Status Page link under the Navigation menu panel.

Cluster Node2 Network Configuration

  1. From the Virtual Server Administration website Master Status Page, click on the thumbnail of the virtual machine named Node2 to turn it on.

  2. Click the thumbnail again to remotely control Node2.

  3. Log on to the machine as the Administrator with Pass@word1 as the password.

  4. Allow the new NIC to be detected via Plug-n-Play.

It is a good idea to change the names of the network connections for clarity.

  5. Click Start, open Control Panel.

  6. Double-click Network Connections. 

  7. Right-click the Local Area Connection 2 icon.

  8. Click Rename.

  9. Type Private in the textbox and then press Enter.

  10. Right-click the Local Area Connection icon.

  11. Click Rename.

  12. Type Public in the textbox, and then press Enter.

  13. On the Advanced menu, click Advanced Settings.

  14. In the Connections box, make sure that your bindings are in the following order, and then click OK.

    1. Public – Local Area Connection

    2. Private – Local Area Connection 2

    3. Remote Access Connections

  15. Right-click the network connection for your Private adapter, and then click Properties.

  16. On the General tab, make sure that only the Internet Protocol (TCP/IP) check box is selected. Click to clear the check boxes for all other clients, services, and protocols.

  17. Highlight Internet Protocol (TCP/IP), and then click Properties.

  18. Assign the following parameters to TCP/IP.

    1. IP Address: 192.168.1.3

    2. Subnet Mask: 255.255.255.0

  19. Verify that there are no values defined in the Default Gateway box or under Use the Following DNS server addresses.

  20. Click the Advanced button.

  21. On the DNS tab, verify that no values are defined. Make sure that the Register this connection's addresses in DNS and Use this connection's DNS suffix in DNS registration check boxes are cleared.

  22. On the WINS tab, verify that there are no values defined. Click Disable NetBIOS over TCP/IP.

  23. Return to the Master Status Page by clicking on the Master Status Page link under the Navigation menu panel.

Configuring the Public Network Adapter

If IP addresses are obtained via DHCP, access to cluster nodes may be unavailable if the DHCP server is inaccessible. For this reason, static IP addresses are required for all interfaces on a server cluster. Keep in mind that cluster service will only recognize one network interface per subnet. If you need assistance with TCP/IP pertaining to Windows Server 2003, please refer to online help.
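
If you prefer to script the static TCP/IP settings instead of using the Network Connections dialogs, netsh can apply the same values. The commands below illustrate the Node1 settings from this scenario (Public: 10.10.10.2 with DNS 10.10.10.1; Private: 192.168.1.2 with no gateway or DNS) and assume the connections have already been renamed to Public and Private; adjust the addresses for Node2.

    rem Public adapter of Node1: static address and DNS pointing at ClusterDC
    netsh interface ip set address "Public" static 10.10.10.2 255.255.255.0
    netsh interface ip set dns "Public" static 10.10.10.1
    rem Private (heartbeat) adapter of Node1: static address only
    netsh interface ip set address "Private" static 192.168.1.2 255.255.255.0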

Verifying Connectivity and Name Resolution

To verify that the private and public networks are communicating properly, ping all IP addresses from each node. You should be able to ping all IP addresses, locally and on the remote nodes.

To verify name resolution, ping each node from a client using the node’s machine name instead of its IP address. The ping should return only the IP address of the public network. You may also want to try a ping -a command to do a reverse lookup on the IP addresses.
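
For example, from Node1 in this scenario the following commands exercise both networks and name resolution; the addresses are the ones assigned earlier in this guide.

    rem Public network: domain controller and Node2
    ping 10.10.10.1
    ping 10.10.10.3
    rem Private (heartbeat) network: Node2
    ping 192.168.1.3
    rem Name resolution: should return only the public address of Node2
    ping node2
    rem Reverse lookup of the public address of Node2
    ping -a 10.10.10.3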

Verifying Domain Membership

All nodes in the cluster must be members of the same domain and they must be able to access a domain controller and a DNS server. You should have at least one domain controller on the same network segment as the cluster. For high availability, another domain controller should also be available to remove a single point of failure. In this guide, all nodes are configured as member servers.

To verify that the nodes are properly configured as domain members, that they can interact with the domain DNS server, and that they have proper secure channels with the domain controllers, perform the following steps:

  1. Turn on each node if not already turned on. Log in as the domain administrator with the following credentials:

    User ID = Administrator@contoso.com

    Password = Pass@word1

  2. If you log in successfully, then you are properly configured.
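
If you want an additional check beyond a successful domain logon, the Windows Server 2003 Support Tools (assumed to be installed separately) include utilities that verify the secure channel to the domain; the commands below are only illustrative.

    rem Verify the secure channel from this node to the Contoso domain (Support Tools)
    nltest /sc_query:contoso.com
    rem Alternative check using netdom, run here for Node1
    netdom verify NODE1 /domain:contoso.com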

Creating a Cluster User Account

The Cluster service runs under a dedicated domain user account that must be a member of the local Administrators group on each node. Because setup requires a user name and password, this user account must be created before configuring the Cluster service. This user account should be dedicated only to running the Cluster service and should not belong to an individual.

Note: The cluster service account does not need to be a member of the Domain Administrators group. For security reasons, granting domain administrator rights to the cluster service account is not recommended.

The cluster service account requires the following permissions to function properly on all nodes in the cluster. The Cluster Configuration Wizard grants the following permissions automatically:

  • Act as part of the operating system

  • Adjust memory quotas for a process

  • Back up files and directories

  • Increase scheduling priority

  • Log on as a service

  • Restore files and directories

For additional information, see the following article in the Microsoft Knowledge Base:

269229 How to Manually Re-Create the Cluster Service Account

To create a cluster user account

  1. From the Master Status Page, click on the ClusterDC thumbnail to establish a remote control session.

  2. If required, log in as administrator@Contoso.com with Pass@word1 as the password.

  3. Click Start, point to All Programs, point to Administrative Tools, and then click Active Directory Users and Computers.

  4. Click the plus sign (+) to expand the domain if it is not already expanded.

  5. Right-click Users, point to New, and then click User.

  6. Type Cluster for the First Name.

  7. Type Service for the Last Name.

  8. Type Cluster for the user logon name.

  9. Click Next.

    Figure 7. Type the cluster name.

  10. Set the password to Pass@word1

  11. Set the password settings to User Cannot Change Password and Password Never Expires. Click Next, and then click Finish to create this user.

    Note: If your administrative security policy does not allow the use of passwords that never expire, you must renew the password and update the cluster service configuration on each node before password expiration. For additional information, see the following article in the Microsoft Knowledge Base: 305813 How to Change the Cluster Service Account Password

  12. Quit the Active Directory Users and Computers snap-in.
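
As an alternative to the Active Directory Users and Computers steps above, the same account can be created from a command prompt on ClusterDC with the dsadd tool included in Windows Server 2003. The distinguished name below assumes the account is created in the default Users container of Contoso.com; treat the exact switches as a sketch and confirm them with dsadd user /? before relying on them.

    rem Create the Cluster service account in the Contoso.com Users container
    dsadd user "CN=Cluster Service,CN=Users,DC=contoso,DC=com" -samid Cluster -fn Cluster -ln Service -pwd Pass@word1 -canchpwd no -pwdneverexpires yes
    rem On each cluster node, add the account to the local Administrators group
    net localgroup Administrators CONTOSO\Cluster /add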

Setting up Shared Disks

Warning: To avoid corrupting the cluster disks, make sure that Windows Server 2003 and the Cluster service are installed, configured, and running on at least one node before you start an operating system on another node. It is critical never to have more than one node turned on until the Cluster service is configured.
To proceed, turn off all virtual cluster nodes, but leave ClusterDC turned on and running.

Configuring Node1 for Shared Disks

In order to attach and configure the shared quorum disk to the cluster nodes, you must ensure that the other node is turned off. Follow the steps below for Node1.

To add a Quorum Disk
  1. Important: Perform this procedure with Node2 turned off.

  2. From the Virtual Server Administration website Master Status Page, locate the Virtual Machines menu panel.

  3. Click Configure and select Node1.

  4. Select Hard Disks.

  5. Click the Add Disk button.

  6. Set attachment to SCSI ID 0.

  7. Type C:\VirtualDisks\Quorum.vhd for the name of the virtual hard disk.

  8. Click Apply.

To configure shared disks
  1. From the Virtual Server Administration website Master Status Page, click on the thumbnail of the virtual machine named Node1 to turn it on. Ensure Node2 is turned off.

  2. Log in as administrator@contoso.com with Pass@word1 as the password.

  3. Right-click My Computer, click Manage, and then expand Storage.

  4. Double-click Disk Management.

  5. If you connect a new disk, then it automatically starts the Write Signature and Upgrade Disk Wizard. If this happens, do the following steps:

    1. Click Next to proceed past the welcome screen.

    2. On the disk selection page, verify that Disk 1 is selected to be initialized, and then click Next.

    3. On the conversion page, do not select Disk 1; converting it to a dynamic disk is not supported for cluster disks. Click Next.

    4. Click Finish.

  6. Right-click unallocated disk space for the quorum drive.

  7. Click New Partition.

  8. The New Partition Wizard will begin. Click Next.

  9. Select the Primary Partition partition type. Click Next.

  10. Accept the default (maximum) partition size, and then click Next.

  11. Use the drop-down box to change the drive letter to Q. Click Next. For additional information on cluster drive letter assignments, see the following article in the Microsoft Knowledge Base:

    318534 Best Practices for Drive-Letter Assignments on a Server Cluster

  12. Format the partition using NTFS. In the Volume Label box, type Quorum Disk as the name for the disk. It is critical to assign drive labels for shared disks, because this can dramatically reduce troubleshooting time in the event of a disk recovery situation.

  13. Click Finish.
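
As a command-line alternative to the Disk Management steps above, DiskPart and format can prepare the quorum disk. This is only a sketch: it assumes the shared quorum VHD appears as Disk 1 on Node1 (confirm with list disk first) and uses a single-word volume label to keep the format command simple.

    rem Run on Node1 only, with Node2 turned off
    rem At the DISKPART> prompt, enter the indented commands one at a time
    diskpart
        list disk
        select disk 1
        create partition primary
        assign letter=Q
        exit
    rem Back at the command prompt, format the quorum partition as NTFS
    format Q: /FS:NTFS /V:QuorumDisk /Q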

To verify disk access and functionality
  1. On Node1, start Windows Explorer.

  2. Right-click drive Q, point to New, and then click Text Document.

  3. Assign the file name QuorumTest.txt.

  4. Open the file and type This is a test.

  5. Save the file.

  6. Node1 is now configured.

  7. Shutdown Node1.

Configuring Node2 for Shared Disks

Follow the steps below for Node2 in order to attach and configure the shared quorum disk to the cluster nodes. Ensure Node1 is turned off before turning on Node2.

To add a Quorum Disk
  1. Important: Perform this procedure with Node1 turned off.

  2. From the Virtual Server Administration website Master Status Page, locate the Virtual Machines menu panel.

  3. Under Virtual Machines menu panel, click Configure and select Node2.

  4. Select Hard Disks.

  5. Click the Add Disk button.

  6. Set attachment to SCSI ID 0.

  7. Type C:\VirtualDisks\Quorum.vhd for the name of the virtual hard disk.

  8. Click Apply.

To configure shared disks
  1. From the Virtual Server Administration website Master Status Page, click on the Node2 thumbnail to turn it on. Ensure Node1 is turned off.

  2. Click the thumbnail again to remotely control Node2.

  3. Log in to the machine as Administrator@Contoso.com with Pass@word1 as the password.

  4. Right-click My Computer, click Manage, and then expand Storage.

  5. Double-click Disk Management.

  6. The Quorum drive is listed, but it has not been assigned a drive letter. Right-click the drive and select Change Drive Letter and Paths.

  7. Click Add.

  8. Use the drop-down box to change the drive letter to Q, and press OK.

  9. Close Computer Management.

To verify disk access and functionality
  1. On Node2, Double-click My Computer.

  2. Double-click drive Q. You should see the test document QuorumTest.txt.

  3. Open QuorumTest.txt and add more text. Click File and then Save.

  4. Select the file, and then press the Del key to delete it from the clustered disk.

    Node2 is now configured.

  5. Shutdown Node2.

Configuring the Cluster Service

You must supply all initial cluster configuration information in the first installation phase. This is accomplished using the Cluster Configuration Wizard.

Note: During Cluster service configuration on Node 1, you must turn off all other nodes. All shared storage devices should be turned on.

To configure Node1

  1. From the Virtual Server Administration website, Master Status Page, click on the Node1 thumbnail to turn it on. Ensure Node2 is turned off.

  2. Log in with the following credentials:

    User ID = Administrator@contoso.com

    Password = Pass@word1

  3. Verify that you can see the ClusterDC by pinging 10.10.10.1.

  4. Click Start, click All Programs, click Administrative Tools, and then click Cluster Administrator.

  5. When prompted by the Open Connection to Cluster Wizard, click Create new cluster in the Action drop-down list, as shown in Figure 9 below.

    Figure 9. The Action drop-down list.

  6. Verify that you have the necessary prerequisites to configure the cluster, as shown in Figure 10 below. Click Next.

    Figure 10. A list of prerequisites is part of the New Server Cluster Wizard Welcome page.

  7. Type MyCluster (a unique NetBIOS name for the cluster up to 15 characters), then click Next. Adherence to DNS naming rules is recommended. For additional information, see the following articles in the Microsoft Knowledge Base (https://support.microsoft.com/default.aspx?scid=fh;EN-US;KBJUMP):

    163409 NetBIOS Suffixes (16th Character of the NetBIOS Name)

    254680 DNS Namespace Planning

    Figure 11. Adherence to DNS naming rules is recommended when naming the cluster.

  8. If you are logged on locally with an account that is not a Domain User with Local Administrator privileges, the wizard will prompt you to specify an account. This is not the account the Cluster service will use when starting.

    Figure 12. The New Server Cluster Wizard prompts you to specify an account.

    Note: If you have appropriate credentials, the prompt mentioned in step 8 and shown in Figure 12 may not appear.

  9. Because it is possible to configure clusters remotely, you must verify or type the name of the server that is going to be used as the first node to create the cluster. In this case, type Node1 as shown in Figure 13 below.

  10. Click the Advanced button, select the Advanced (minimum) configuration option, and then click OK.

  11. Click Next.

    Figure 13. Select the name of the computer that will be the first node in the cluster.

  12. Figure 14 below illustrates that the Setup process will now analyze the node for hardware or software issues that could cause problems with the installation. Review any warnings or error messages. You can also click the Details button to get detailed information about each message.

    Note: Because we set up the cluster nodes with the OS boot disk on the IDE controller, you will get a warning that this node is not manageable; you can ignore this warning.

    Figure 14. The Setup process analyzes the node for possible hardware or software problems.

  13. Type the unique cluster IP address 10.10.10.100, and click Next.

    The New Server Cluster Wizard automatically associates the cluster IP address with one of the public networks by using the subnet mask to select the correct network. The cluster IP address should be used for administrative purposes only, not for client connections.

  14. Type CLUSTER for the user name and Pass@word1 for the password of the cluster service account.

  15. Select Contoso.com for the domain name in the Domain drop-down list, and click Next.

    At this point, the Cluster Configuration Wizard validates the user account and password.

  16. Review the Summary page shown in Figure 17 below to verify that all the information that is about to be used to create the cluster is correct.

  17. Click the Quorum button and select Disk Q: as the quorum disk, and then click OK.

    The summary information displayed on this screen can be used to reconfigure the cluster in the event of a disaster recovery situation. It is recommended that you save and print a hard copy to keep with a change management log at the server.

    Figure 17. The Proposed Cluster Configuration page.

  18. Click Next to start the cluster creation process.

  19. Review any warnings or errors encountered during cluster creation. To do this, click the plus signs to expand the message information, and then click Next. Warnings and errors appear in the Creating the Cluster page as shown in Figure 18.

    Figure 18. Warnings and errors appear on the Creating the Cluster page.

  20. Click Finish to complete the installation. Figure 19 below illustrates the final step.

    Figure 19. The final step in setting up a new server cluster.

    Note: To view a detailed summary, click the View Log button or view the text file stored in the following location: %SystemRoot%\System32\LogFiles\Cluster\ClCfgSrv.Log

Validating the Cluster Installation

Use the Cluster Administrator (CluAdmin.exe) to validate the Cluster service installation on Node1. A command-line check using Cluster.exe is also sketched after these steps.

  1. If the Cluster Administrator is not already running, click Start, point to All Programs, then point to Administrative Tools, and then select Cluster Administrator.

  2. Verify that the State of all resources is Online, as shown in Figure 20 below.

    Figure 20. The Cluster Administrator verifies that all resources are successfully online.

    Note: As a general rule, do not add resources to the cluster group, do not remove resources from it, and do not use anything in the cluster group for anything other than cluster administration. The same status check can also be made from a command prompt, as sketched below.
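
If you prefer a command prompt, roughly the same validation can be performed with Cluster.exe on Node1. This is a sketch only; the group and resource names it lists are the defaults created by the New Server Cluster Wizard.

    rem List all resource groups and their current state; the Cluster Group and the disk groups should be Online
    cluster group

    rem List the individual resources, their state, and the node that currently owns them
    cluster resource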

Configuring the Second Node

Installing the cluster service on the second node requires less time than on the first node because setup configures the cluster service network settings on the second node based on the configuration of the first node. You can also add multiple nodes to the cluster at the same time.

Note: For this section, leave Node1 and all shared disks turned on. Then turn on Node2. The cluster service will control access to the shared disks at this point to eliminate any chance of corrupting the volume.

  1. Turn on Node2 and let it boot completely.

  2. Open Cluster Administrator on Node1.

  3. Click File, click New, and then click Node.

  4. The Add Cluster Computers Wizard will start. Click Next.

  5. If you are not logged on with appropriate credentials, you will be asked to specify a domain account that has administrative rights over all nodes in the cluster.

  6. Enter Node2 as the computer name for the node you want to add to the cluster, and then click the Add button.

  7. Click the Advanced button, select the Advanced (minimum) configuration option to set the analysis mode, and then click OK.

  8. Click Next.

    Figure 21. Adding nodes to the cluster.

  9. The Setup wizard will perform an analysis of all the nodes to verify that they are configured properly.

  10. Type Pass@word1 as the password for the account used to start the cluster service, and then click Next.

  11. Review the summary information that is displayed for accuracy. The summary information will be used to configure the other nodes when they join the cluster.

  12. Review any warnings or errors encountered while the node is added to the cluster, and then click Next.

  13. Click Finish to complete the installation. You now have an operating two-node cluster.
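
As a final check, Cluster.exe can confirm from a command prompt on either node that both nodes are now members of the cluster. This is a minimal sketch using the node names from this guide.

    rem Check the state of each node; both should report a status of Up
    cluster node Node1 /status
    cluster node Node2 /status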

Post-Installation Configuration

To ensure the proper operation of the cluster, a few steps should be performed after installation.

Private Network Configuration

Now that the networks have been configured correctly on each node and the Cluster service has been configured, you need to configure the network roles to define their functionality within the cluster. Here is a list of the network configuration options available in Cluster Administrator:

  • Enable for cluster use: If this check box is selected, the cluster service uses this network. This check box is selected by default for all networks.

  • Client access only (public network): Select this option if you want the cluster service to use this network adapter only for external communication with other clients. No node-to-node communication will take place on this network adapter.

  • Internal cluster communications only (private network): Select this option if you want the cluster service to use this network only for node-to-node communication.

  • All communications (mixed network): Select this option if you want the cluster service to use the network adapter for node-to-node communication and for communication with external clients. This option is selected by default for all networks.

This paper explains how to configure one mixed network and one private network, the most common configuration, and assumes that only two networks are in use. If you have the resources available, two dedicated, redundant networks for internal-only cluster communication are recommended. The procedure below uses Cluster Administrator; an equivalent command-line sketch follows it.

To configure the private network
  1. Start the Cluster Administrator.

  2. In the left pane, click Cluster Configuration, click Networks, right-click Private, and then click Properties.

  3. Click Internal cluster communications only (private network), as shown in Figure 22.

    Figure 22. Using Cluster Administrator to configure the private network.

  4. Click OK.

  5. Right-click Public, and then click Properties (shown in Figure 23 below).

  6. Click to select the Enable this network for cluster use check box.

  7. Click the All communications (mixed network) option, and then click OK.

    Figure 23. The Public Properties dialog box.
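
The same network roles can also be viewed or set with Cluster.exe from a command prompt on either node. This is a sketch only, assuming the networks are named Private and Public as elsewhere in this guide; the Role property values are 1 (internal cluster communications only), 2 (client access only), and 3 (all communications).

    rem Display the current properties, including Role, of each cluster network
    cluster network "Private" /prop
    cluster network "Public" /prop

    rem Set the Private network to internal-only and the Public network to all communications
    cluster network "Private" /prop Role=1
    cluster network "Public" /prop Role=3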

Private Adapter Prioritization

After configuring the role specifying how the cluster service will use the network adapters, the next step is to prioritize the order in which network adapters will be used for intra-cluster communication. This is applicable only if two or more networks were configured for node-to-node communication. Priority arrows on the right side of the screen specify the order in which the cluster service will use the network adapters for communication between nodes. The cluster service always attempts to use the first network adapter listed for remote procedure call (RPC) communication between the nodes. Cluster service will only use the next network adapter in the list if it cannot communicate using the first network adapter.

  1. Start Cluster Administrator.

  2. In the left pane, right-click the cluster name (in the upper left corner), and then click Properties.

  3. Click the Network Priority tab, as shown in Figure 24 below.

    Figure 24. The Network Priority tab in Cluster Administrator.

  4. Verify that the Private network is listed first. Use the Move Up or Move Down buttons to change the priority order.

  5. Click OK.

Test Installation

There are several methods to verify a Cluster service installation after the setup process has been completed (a command-line sketch of some of these checks follows the list). These include:

  • Cluster Administrator: If installation was completed only on Node1, start Cluster Administrator, and then attempt to connect to the cluster. If a second node was installed, start Cluster Administrator on either node, connect to the cluster, and then verify that the second node is listed.

  • Services Applet: Use the services snap-in to verify that the cluster service is listed and started.

  • Event Log: Use the Event Viewer to check for ClusSvc entries in the system log. You should see entries confirming that the cluster service successfully formed or joined a cluster.

  • Cluster service registry entries: Verify that the cluster service installation process wrote the correct entries to the registry. You can find many of the registry settings under HKEY_LOCAL_MACHINE\Cluster.

  • Virtual server name: Click Start, click Run, and then type the cluster's virtual server (network) name. Verify that you can connect and view the associated resources.
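
If you prefer to script some of these checks, the following is a minimal sketch using standard Windows Server 2003 command-line tools; run it from a command prompt on either node, and substitute your own cluster name for the MyCluster placeholder.

    rem Confirm that the Cluster service (ClusSvc) is installed and running
    sc query clussvc

    rem Confirm that the cluster configuration was written to the registry
    reg query HKEY_LOCAL_MACHINE\Cluster

    rem Confirm that the cluster network name responds on the network (MyCluster is a placeholder)
    ping MyCluster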

Appendix A

Sample SYSPREP.INF file

[GuiUnattended]

    EncryptedAdminPassword=NO

    AutoLogon=No

    AutoLogonCount=1

    OEMSkipRegional=1

    TimeZone=4

[Identification]

    JoinWorkgroup=WORKGROUP

[Networking]

    InstallDefaultComponents=Yes

[UserData]

    FullName = "Cluster Parent Hard Disk"

    OrgName = "Contoso"
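
For reference, when the parent virtual hard disk is prepared as described earlier in this guide, this answer file is saved as SYSPREP.INF in the same folder as Sysprep.exe, and the image is resealed with a command along the lines of the sketch below. The -reseal, -mini, and -quiet switches are standard Windows Server 2003 Sysprep options; confirm the exact procedure against the earlier section of this guide.

    rem Run from the folder that contains Sysprep.exe and SYSPREP.INF (for example, C:\Sysprep)
    sysprep -reseal -mini -quiet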

Related Links

See the following resources for further information:

For the latest information about Windows Server 2003, see the Windows Server 2003 Web site at https://www.microsoft.com/windowsserver2003/default.mspx