Chapter 3 - Setting Up an MSCS Cluster


This chapter presents the processes and procedures for configuring an MSCS cluster. There are six key steps to installing a cluster:

  • Choosing hardware 

  • Choosing and configuring SCSI hardware

  • Choosing and configuring network hardware 

  • Gathering information for installing a cluster 

  • Installing a cluster

  • Verifying your cluster installation 

This chapter also discusses:

  • Installing Cluster Administrator on a remote computer 

  • Installing Windows NT Service Packs on cluster nodes 

  • Uninstalling a cluster 

Choosing Hardware

For best clustering results, choose similar—if not identical—hardware for both nodes. The two computers should be the same brand, and preferably the same model. The SCSI adapters used in both nodes must be identical to ensure that the adapter BIOS and other firmware in both nodes are 100-percent compatible. The network adapters used in each node need not be identical.

Depending on your cluster configuration, hardware choices, and fault tolerance requirements, you may not have enough expansion slots in your computers. For example, if each node has a local SCSI bus, three network adapters, and three SCSI adapters for three shared SCSI buses, each node requires seven expansion slots, plus slots for video, sound, and other physical devices that require adapters. For this reason, consider purchasing dual or quad SCSI adapters and network adapters. These adapters put the hardware for two or four separate adapters on one physical adapter. While this conserves expansion slots, you lose some hardware redundancy.

Note The hardware requirements discussed in this book are generic. For specific information about supported hardware, see the MSCS Hardware Compatibility List. For specific information about MSCS original-equipment-manufacturer (OEM) configurations that have been verified by Microsoft, check with your hardware vendor.

Choosing and Configuring SCSI Hardware

You need the following SCSI hardware:

  • SCSI cables 

  • SCSI terminators 

  • A SCSI adapter in each cluster server to provide a shared external bus between the nodes 

  • At least one disk drive in a storage enclosure on the shared bus 

Before you set up the bus, please review the SCSI concepts and terminology discussed next.

Note The SCSI concepts, terminology, and configuration information presented here are fairly generic. For specific information about your SCSI hardware, see your SCSI hardware documentation.

SCSI Concepts and Terminology

Devices on a SCSI bus use one of two transmission methods:

  • Single-ended 

    Devices using single-ended transmission establish a signal connection, using one data lead and one ground lead. Although this method is generally less expensive, it is prone to noise problems and requires strict adherence to maximum cable lengths. 

  • Differential 

    Devices using differential transmission establish a signal connection such that neither lead is at ground potential. Because this effectively cancels noise on the bus, you can use longer cables and faster bus speeds. However, it is generally more expensive to implement than single-ended transmission. 

Configure all devices on any one SCSI bus to use the same transmission method.

Transmission Types and Type Combinations

When using physical buses, you must connect each device to the type of bus that uses the same transmission method that the device uses. That is, you must connect single-ended devices to single-ended buses, and differential devices to differential buses.

Use only one transmission method on a single SCSI bus. However, you can configure devices that use different transmission methods by installing a signal converter between them. Signal converters convert single-ended SCSI signals to differential SCSI signals.

Important Do not connect single-ended and differential devices together unless you connect them through a signal converter. Attempting to connect them without a signal converter can damage your hardware.

Termination of the SCSI Bus

In this documentation, the term shared SCSI bus refers to the total interconnect between the server systems and the shared storage. In practice, the shared bus typically consists of multiple bus segments, each of which is an electrically complete bus.

Each physical SCSI bus must be terminated only at the ends of the bus. Some devices and adapters include internal termination that you must remove if the device is not at the end of the bus. MSCS works best if you remove all internal bus termination and use either Y cables or trilink connectors to implement external bus termination. This allows the device to be removed for maintenance without affecting the rest of the devices on the bus. These rules also apply to the individual bus segments within the shared SCSI bus.

Data-Path Sizes

Devices on a SCSI bus can have one of two data path sizes:

  • Narrow 

    Narrow devices use an 8-bit data path. They are usually single-ended devices, although narrow differential devices do exist. 

  • Wide 

    Wide devices use a 16-bit data path. They are either single-ended or differential. 

Setting Up for a Shared SCSI Bus

You must terminate the shared SCSI bus as described here, including the guidelines for terminating the shared bus and connecting cables. Each SCSI bus segment must have exactly two points of termination, and these points must be located at the two ends of the segment. If you overlook any of these requirements, the bus may not operate properly in the MSCS environment.

Terminating a bus segment

There are a number of ways to terminate a bus segment, some of which work better than others for MSCS.

  • SCSI adapters 

    Although SCSI adapters have internal termination that you can use to terminate the bus, this method is not recommended. If the server becomes disconnected from the shared bus in this type of setup, the bus will not be properly terminated and will be inoperable. 

  • Storage enclosures 

    You can use the internal termination of a storage enclosure to terminate the bus if the enclosure is at the end of the bus. 

  • Y cables 

    You can connect Y cables to some devices. If the device is at the end of the bus, you can attach a terminator to one branch of a Y cable to terminate the bus. To use this method of termination, you must remove or disable the internal terminators in the device. 

  • Trilink connectors 

    You can connect trilink connectors to certain devices. If the device is at the end of the bus, you can attach a terminator to one of the trilink connectors to terminate the bus. To use this method of termination, you must remove or disable the internal terminators in the device. 

Note If your device is not at the end of the shared bus, you must disable the internal termination of the device.

Y cables or trilink connectors are recommended for terminating the bus. Besides providing termination, Y cables and trilink connectors give you a way to isolate systems and storage enclosures from the shared bus without affecting the bus termination. This allows your shared bus to operate while you perform maintenance on a device or change the configuration.
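The termination rules above reduce to a simple invariant: each bus segment must be terminated at both ends and nowhere in between. The following Python sketch is purely an illustration of that rule (the device names are arbitrary labels, not anything MSCS itself uses):

```python
# Illustrative check of the SCSI termination rule described above: a bus
# segment is valid only when terminated at both ends and nowhere else.

def segment_properly_terminated(devices):
    """devices: ordered list of (name, has_terminator) tuples along a segment."""
    if len(devices) < 2:
        return False
    first, *middle, last = devices
    ends_terminated = first[1] and last[1]
    middle_clear = all(not has_term for _, has_term in middle)
    return ends_terminated and middle_clear

# A middle-enclosure layout (as in Figure 3.1): terminators on the Y cables
# at each host bus adapter; the enclosure's internal termination is disabled.
bus = [("node1-adapter", True), ("storage-enclosure", False), ("node2-adapter", True)]
print(segment_properly_terminated(bus))   # True

# Leaving the enclosure's internal termination enabled breaks the rule.
bad = [("node1-adapter", True), ("storage-enclosure", True), ("node2-adapter", True)]
print(segment_properly_terminated(bad))   # False
```

Note that disconnecting an end device from a Y cable leaves the terminator (and therefore the invariant) in place, which is exactly why Y cables and trilink connectors are recommended.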

Shared Bus with Middle Storage Enclosure

Figure 3.1 shows a configuration with a system at each end of the bus and the storage enclosure in the middle. A Y cable is attached to the unterminated port of each host bus adapter. The bus is terminated by terminators attached to each Y cable. The internal termination of the storage enclosure is disabled.


Figure 3.1 Middle storage enclosure

If you disconnect one of the Y cables from the adapter port (as shown in Figure 3.2), the shared bus is still properly terminated. Although the disconnected server is no longer available, the cluster can still operate with the remaining server.


Figure 3.2 Properly terminated, shared SCSI bus with disconnected Y cable

Note There is no Y cable at the storage enclosure in the configuration shown in Figure 3.2. If the shared storage is disconnected, the bus becomes inoperable. If you have more than one storage enclosure on the bus, use a Y cable or a trilink connector to connect each enclosure to the bus. This allows the bus to remain operable if one of the enclosures is disconnected.

Drive Quantities

Each physical disk or fault-tolerant disk set on the shared SCSI bus is owned by only one node of the cluster. The ownership of the disks moves from one node to another when the disk group fails over or moves to the other node. Therefore, if you intend to share resources from both nodes of a cluster, you need at least two shared storage disks.

If you are using a hardware RAID solution, you need at least two shared volumes, regardless of whether they are mirrored disks, striped sets with parity, or non-fault-tolerant volume sets.

For more information on sharing resources from both nodes of a cluster, see clustering models A and E in Chapter 2, "Planning Your Cluster Environment."

Assigning Drive Letters

The disk resources on the shared SCSI bus must have the same drive letter on both nodes. Because computers vary in the way they assign drive letters, and because MSCS makes all drive assignments permanent during setup, you should assign drive letters to all disk resources on the shared SCSI bus before installing MSCS. Use Windows NT Disk Administrator to manually assign drive letters only on the first node; MSCS Setup copies this configuration when you install MSCS on the second node.

When assigning drive letters, choose letters that do not conflict with existing drive letters on either node. For best results, choose sequential drive letters for the disks on the shared SCSI buses, starting with, at a minimum, the next sequential drive letter after the highest drive letter assigned on either node. For example, if one node has a C drive, and the other node has drives C, D, and E, then assign letters F, G, and H to the first three disk resources on the shared SCSI bus. If you want to allow for the addition of local devices on either node, use higher letters (for example, L, M, and N).
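As a sanity check, the guideline above (start at the next letter after the highest letter assigned on either node) can be sketched in Python. This is only an illustration of the guideline, not anything MSCS runs:

```python
import string

def shared_bus_letters(node1_used, node2_used, shared_disk_count):
    """Suggest sequential drive letters for shared disks, starting after the
    highest letter already assigned on either node."""
    used = {c.upper() for c in node1_used} | {c.upper() for c in node2_used}
    start = max(string.ascii_uppercase.index(c) for c in used) + 1
    candidates = string.ascii_uppercase[start:]
    if len(candidates) < shared_disk_count:
        raise ValueError("not enough drive letters remaining")
    return list(candidates[:shared_disk_count])

# The example from the text: one node has C; the other has C, D, and E.
print(shared_bus_letters(["C"], ["C", "D", "E"], 3))   # ['F', 'G', 'H']
```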

If the drive letters on the two nodes do not conflict, you need not assign permanent drive letters before installing MSCS.

Important Before installing MSCS, you must configure the SCSI controllers on each node as follows:

  • Set the SCSI controller ID to a different target ID on each node (for example, 6 on the first node and 7 on the second). 

  • Disable the boot-time SCSI reset operation on each controller. 

Some SCSI controllers must have the BIOS disabled or they will cause the computer to stop responding at boot time.

For more information on using Disk Administrator, see Disk Administrator Help, or Chapter 7 of Windows NT Server Version 4.0 Concepts and Planning. For more information on assigning drive letters and installing MSCS, see "Installing a Cluster," later in this chapter.

Drive-Partitioning Issues

Although you can partition drives that will be used on the shared SCSI bus, there are some restrictions imposed by MSCS:

  • You must partition and format all disks you will use with MSCS before running MSCS Setup. 

  • Partitions of a single physical disk cannot be members of different fault-tolerant disk sets. 

  • All partitions on one disk are managed as one resource, and move as a unit between nodes. 

  • All partitions must be formatted with NTFS (they can be either compressed or uncompressed). 

Choosing and Configuring Network Hardware

The nodes of an MSCS cluster must be connected by one or more physically independent networks (sometimes referred to as interconnects). Although MSCS clusters can function with only one interconnect, two interconnects are strongly recommended and are required for the verification of MSCS OEM systems that include both hardware and MSCS software.

Redundant, independent interconnects eliminate any single point of failure that could disrupt communication between nodes. When two nodes are unable to communicate, they are said to be partitioned. After two nodes become partitioned, MSCS automatically shuts down one node to guarantee the consistency of application data and the cluster configuration. This can lead to the unavailability of all cluster resources. For example, if each node has only one network adapter, and the network cable on one of the nodes fails, each node (because it is unable to communicate with the other) attempts to take control of the quorum resource. There is no guarantee that the node with a functioning network connection will gain control of the quorum resource. If the node with the failed network cable gains control, the entire cluster is unavailable to network clients.

Each network can have one of four roles in a cluster. The network can support:

  • Only node-to-node communication 

  • Only client-to-cluster communication 

  • Both node-to-node communication and client-to-cluster communication 

  • No cluster-related communication 

Networks that support only node-to-node communication are referred to as private networks. Networks that support client-to-cluster communication (either with or without supporting node-to-node communication) are referred to as public networks.

Before you install the MSCS software, you must configure both nodes to use the TCP/IP protocol over all interconnects. Also, each network adapter must have an assigned static IP address that is on the same network as the corresponding network adapter on the other node. Therefore, there can be no routers between two MSCS nodes. However, routers can be placed between the cluster and its clients. If all interconnects must run through a hub, use separate hubs to isolate each interconnect.
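The same-network requirement for each adapter pair can be checked with Python's standard `ipaddress` module. The addresses below are illustrative (the 10.x addresses are the chapter's own private-interconnect examples):

```python
import ipaddress

def on_same_network(addr1, addr2, netmask):
    """Return True if two static addresses fall in the same IP network
    under the given subnet mask, as required for each MSCS interconnect pair."""
    net1 = ipaddress.ip_network(f"{addr1}/{netmask}", strict=False)
    net2 = ipaddress.ip_network(f"{addr2}/{netmask}", strict=False)
    return net1 == net2

# The two interconnect adapters must see each other without a router.
print(on_same_network("10.0.0.1", "10.0.0.2", "255.0.0.0"))      # True
print(on_same_network("10.0.0.1", "192.168.0.2", "255.0.0.0"))   # False
```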

Important MSCS does not support the use of IP addresses assigned from a Dynamic Host Configuration Protocol (DHCP) server for the cluster administration address (which is associated with the cluster name) or any IP Address resources. However, you can use either static IP addresses, or IP addresses permanently leased from a DHCP server, for the Windows NT network configuration on each node.

Installing MSCS on Computers with Logically Multihomed Adapters

A logically multihomed adapter is one that has two IP addresses assigned to it. These adapters can be used for node-to-node cluster communication only if their primary addresses are on the same IP subnet. (A logically multihomed adapter's primary address is the one that appears when you run Control Panel, double-click Network, click TCP/IP on the Protocols tab, and click Properties.)

If the primary addresses on both nodes are not on the same IP subnet, reorder the IP addresses assigned to the adapter by clicking Advanced on the IP Address tab, and then removing and adding the IP addresses again. Add the IP addresses with the matching subnet first, and then add the other IP addresses.

Private Network Addressing Options

If an interconnect connects only the cluster nodes and does not support any other network clients, you can assign it a private IP network address instead of using one of your enterprise's official IP network addresses.

By agreement with the Internet Assigned Numbers Authority (IANA), several IP networks are always left available for private use within an enterprise. These reserved numbers are:

  • 10.0.0.0 through 10.255.255.255 (Class A) 

  • 172.16.0.0 through 172.31.255.255 (Class B) 

  • 192.168.0.0 through 192.168.255.255 (Class C) 

You can use any of these networks or one of their subnets to configure a private interconnect for a cluster. For example, address 10.0.0.1 can be assigned to the first node with a subnet mask of 255.0.0.0. Address 10.0.0.2 can be assigned to the second node. No default gateway or WINS servers should be specified for this network.
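You can verify that a candidate interconnect address falls inside one of these reserved blocks with a short Python check (purely illustrative; 131.107.x.x is used here only as an example of a routable address):

```python
import ipaddress

# The three address blocks reserved for private use (RFC 1918).
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address):
    """True if the address lies in a block reserved for private interconnects."""
    addr = ipaddress.ip_address(address)
    return any(addr in block for block in PRIVATE_BLOCKS)

print(is_rfc1918("10.0.0.1"))        # True  (suitable for a private interconnect)
print(is_rfc1918("172.31.255.254"))  # True
print(is_rfc1918("131.107.2.200"))   # False (a routable address)
```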

Ask your network administrator which of the private networks or subnets you may use within your enterprise before configuring your cluster.

Note These private network addresses should never be routed. For more information on private IP network addresses, see request for comments (RFC) 1918, "Address Allocation for Private Internets."

Identifying Network Adapters

Because MSCS Setup asks you to assign a network description to each adapter, you should determine, before installing MSCS, which network adapters will be used for which cluster roles.

If your cluster node uses multiple, identical network cards, it can be difficult to identify them when you run Setup. You can use the TCP/IP Ipconfig utility to display the network driver name with an index (such as E190x1 and E190x2), and the network adapter's IP address and subnet mask. Using this information, you can assign appropriate names to the networks when you run MSCS Setup. For example, if El90x1 uses an IANA private IP address such as 10.0.0.1, and El90x2 uses an IP address on a network that supports clients, assign El90x1 a description for Private Network and assign El90x2 a description for Public Network when you run MSCS Setup.
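The private/public decision in the example above can be sketched mechanically: an adapter whose address falls in a reserved private range is a candidate for the private interconnect. A hypothetical illustration (the driver names and addresses are just examples, not output of any MSCS tool):

```python
import ipaddress

def suggest_role(ip):
    """Suggest a network description for MSCS Setup from an adapter's address."""
    return "Private Network" if ipaddress.ip_address(ip).is_private else "Public Network"

# Adapter names indexed the way Ipconfig might show them.
adapters = {"El90x1": "10.0.0.1", "El90x2": "131.107.2.200"}
for name, ip in adapters.items():
    print(name, "->", suggest_role(ip))
# El90x1 -> Private Network
# El90x2 -> Public Network
```

Note that `is_private` also covers a few ranges beyond RFC 1918 (such as link-local addresses), so treat the suggestion as a starting point, not a rule.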

Gathering Information for Installing a Cluster

Before installing MSCS, make sure you have the following:

  • Appropriate permissions to install a cluster 

    You must log on to each node under a domain account that has Administrator permissions on the node. 

    Both nodes must be configured as members of the same domain (not a workgroup) and have computer accounts on that domain.

    You must also supply the user name, password, and domain for the account under which the Cluster Service will run. This account requires no special account privileges, but password restrictions, such as requiring password changes and Change password on next logon, should be turned off. 

  • The name of a folder that will store the cluster files on each node 

    The default folder is %WinDir%\cluster, where %WinDir% is your Windows NT folder. 

  • A cluster name 

    When you run MSCS Setup and install the second node, you connect to the first node by specifying the cluster name (or the computer name of node 1). The cluster name can also be used to connect to a cluster in Cluster Administrator (as can either node name). The cluster name cannot be the same as the computer name of either node and cannot conflict with any other computer name (printer, server, cluster, domain, and so forth) in use on your network. The cluster name can be changed using Cluster Administrator.

  • A static IP address and subnet mask for the cluster 

    The static IP address and associated subnet mask you give your cluster is a floating address that is paired with the cluster name.

Installing a Cluster

After you connect and configure your shared SCSI bus and assign drive letters, you install MSCS in two phases. First, you install MSCS on one node, setting up all the basic information for the cluster. Then, you install MSCS on the second node, and most of the configuration settings from the first node are automatically detected.

You can install either node first, as long as you prepare the SCSI controllers and disk resources on the shared SCSI bus as described in "Assigning Drive Letters," earlier in this chapter.

Connecting Shared SCSI Buses and Assigning Drive Letters

You must connect and configure your shared SCSI bus and assign drive letters before you can install MSCS on either node.

Before you begin, determine which server configuration you will use for the two nodes: two member servers, two BDCs, or one PDC and one BDC. For more information on domain models and their impact on clustering and server applications, see "Choosing a Domain Model" in Chapter 2, "Planning Your Cluster Environment."

To connect the shared SCSI buses and assign drive letters
  1. Install Windows NT Server, Enterprise Edition on both nodes, following the instructions provided in the Windows NT Server documentation. 

  2. Prepare the SCSI controllers for the shared SCSI bus, following the instructions in your SCSI bus owner's manual.

    • Install the SCSI controllers for the shared SCSI bus.

    • Ensure the SCSI controllers for the shared SCSI bus are using different SCSI IDs. 

    Note Do not connect the shared SCSI buses to both computers while configuring the two systems. Install the controllers, but do not connect them to the shared SCSI bus. 

  3. Install and configure all network adapters to use TCP/IP, following the manufacturer's instructions. Verify that you have connectivity on all networks. 

  4. Connect the shared SCSI devices to the shared buses, and then connect the shared SCSI buses to both nodes. 

    Important Install MSCS on at least one node before you run Windows NT Server, Enterprise Edition simultaneously on both nodes. 

  5. Start Windows NT Server, Enterprise Edition on the first node on which you intend to install MSCS. Turn on the second node, but do not allow Windows NT Server, Enterprise Edition to start. To do this, turn the node on, and then press the SPACEBAR when the OS Loader screen appears, allowing you to select an operating system. 

  6. If the drive letters in use on the two nodes do not match, or if you want to allow for the addition of local devices on either node, run Windows NT Disk Administrator and assign drive letters to the disks available on the shared SCSI buses. 

You are now ready to install MSCS on the first node.

Installing MSCS on the First Node

After you connect and configure your shared SCSI bus and assign drive letters, you are ready to install MSCS on the first node.

Note If you are not installing the first node on a primary domain controller (PDC), the PDC of the domain to which the server belongs must be online. If it is not, you cannot install MSCS.

If you have not yet installed Service Pack 3 as part of your Windows NT Server, Enterprise Edition installation, install it on both nodes before you begin installing MSCS.

To install MSCS on the first node
  1. Start Windows NT Server, Enterprise Edition, and log on using a domain account that has administrator permissions on the node. 

    If you previously disabled the Installer, click Start, click Run, and type: nhloader.exe

  2. In the Microsoft Windows NT Server, Enterprise Edition Installer dialog box, click Continue; then, select the Microsoft Cluster Server (MSCS) check box, and click Start Installation

    If you are installing from your network, establish a network connection to the MSCS Setup files; then switch to the Cluster\I386 or Cluster\Alpha folder (depending on your platform), and run Setup.exe. 

  3. At the welcome screen, click Next

  4. Ensure your hardware configuration is compatible with MSCS, click I Agree, and then click Next.

  5. Click Form a New Cluster, and then click Next

  6. Type the name of the new cluster in Enter the name of the cluster to join or form, and then click Next

  7. Enter the path to the folder you want to contain the MSCS files, or click Browse to specify the path, and then click Next.

    By default, MSCS installs the files in a \cluster folder within the Windows NT folder (typically C:\Winnt\Cluster). 

  8. Enter the user name, password, and domain for the account the Cluster Service will run under, and then click Next

    This account requires no special account privileges, but password restrictions, such as requiring password changes and Change password on next logon, should be turned off. 

  9. Click to add or remove the disks on the shared SCSI bus that you will use with your cluster, and then click Next

    By default, all SCSI disks on SCSI buses other than the system SCSI bus appear in Shared cluster disks. If you have more than one local SCSI bus, some drives in Shared cluster disks will not be on the shared SCSI bus. Click these drives, and then click Remove 

  10. Click the disk resource on the shared SCSI bus on which you want to store the quorum resource, and then click Next.

    You can store the quorum resource on any Physical Disk resource on the shared SCSI bus. If necessary at a later time, you can move the quorum resource to another disk or folder using Cluster Administrator. 

  11. Click Next to allow Setup to identify all network resources that are available in your computer. 

    For each network adapter installed in the node, specify:

    • A name that describes the network, using a meaningful description so you can identify the networks when working in Cluster Administrator.

    • Whether the network is enabled for cluster use, and if so, what type of communication the network will be used for. 

  12. Click a name, and then click Up and Down to prioritize the networks available for communication between nodes; then click Next when you are finished.

    MSCS will attempt to use the first network listed for all communication between nodes. MSCS uses other networks only if that network fails. 

  13. Enter the static IP address and subnet mask that you want to use to administer the cluster; then in Network, click the network over which clients will detect the cluster, and click Next when you are finished.

  14. Click Finish

For more information about naming and describing networks, see "Identifying Network Adapters" earlier in this chapter.

Installing MSCS on the Second Node

After you install MSCS on the first node, you are ready to install MSCS on the second node.

Note To install MSCS on the second node of a cluster, the first node must be online. Also, the PDC of the domain to which the server belongs must be online. If either server is not online, you cannot install MSCS.

To install MSCS on the second node
  1. Start Windows NT Server, Enterprise Edition and log on, using the same domain account that you used to install MSCS on the first node.

  2. In the Microsoft Windows NT Server, Enterprise Edition Installer dialog box, click Continue; then, select the Microsoft Cluster Server (MSCS) check box, and click Start Installation

    If you are installing from your network, establish a network connection to the MSCS setup files; then switch to the Cluster\I386 or Cluster\Alpha folder (depending on your platform), and run Setup.exe. 

  3. At the welcome screen, click Next

  4. Ensure your hardware configuration is compatible with MSCS, click I Agree, and then click Next.

  5. Click Join an existing cluster, and then click Next

  6. Type the name of the cluster you established on the first node (in step 6 of the previous procedure) in Enter the name of the cluster to join or form, and then click Next

  7. Enter the path to the folder that you want to contain the MSCS files, and then click Next

    By default, MSCS installs the files in a \cluster folder within the Windows NT folder (typically C:\Winnt\Cluster). 

  8. Enter the password for the domain user account you specified when installing the first node, and then click Next.

  9. Click Finish

Microsoft includes a utility for backing up your cluster configuration. For more information, see the MSCS Release notes.

Verifying Your Cluster Installation

You can verify the installation of your cluster by starting Cluster Administrator and checking that both nodes in your cluster are detected.

To start Cluster Administrator

  1. On either node, click Start, point to Programs, point to Administrative Tools (Common), and then click Cluster Administrator

  2. In Cluster or Server Name, type the name of the cluster. 

    Or, type either the name or IP address of one of the nodes. 

If installation is successful, the computer names of both nodes appear on the left side of the Cluster Administrator window.

Installing Cluster Administrator on a Remote Computer

You can install Cluster Administrator on any computer running Service Pack 3 with version 4.0 of either Windows NT Workstation or Windows NT Server. You can also install Cluster Administrator on any computer running Windows NT Server, Enterprise Edition (which includes Service Pack 3).

Note When you install Cluster Administrator, Cluster.exe is also installed (in the Windows NT\System32 folder). For more information on Cluster.exe, see "Administering Clusters from the Command Line" in Chapter 4, "Managing MSCS."

To install Cluster Administrator

  1. Run MSCS Setup from the Microsoft Windows NT Server, Enterprise Edition 4.0 Components CD (in the Mscs\Cluster\I386 or Mscs\Cluster\Alpha folder, depending on your platform).

    Or, if you are installing from your network, establish a network connection to the MSCS Setup files, switch to the Cluster\I386 or Cluster\Alpha folder (depending on your platform), and run Setup.exe. 

  2. If prompted, click Install Cluster Administrator

  3. Specify the folder you want to contain the MSCS files, and click Next.

    By default, MSCS installs the files in C:\Winnt\Cluster. 

  4. Click Finish

Installing Windows NT Service Packs on Cluster Nodes

MSCS requires Windows NT Server, Enterprise Edition. You can install Windows NT Server Service Packs on MSCS nodes using the following procedure. Always install any Service Packs on both nodes.

To install a later Service Pack

  1. On one node (referred to here as node A), take all groups offline, or move them to the other node (referred to here as node B). 

  2. Install the Service Pack on node A and restart the computer. 

  3. Bring the node A groups back online, or move them back to node A from node B. 

  4. Take any remaining groups on node B offline, or move them to node A. 

  5. Install the Service Pack on node B, and then restart the computer. 

  6. Bring the node B groups back online, or move them back to node B from node A. 

Uninstalling a Cluster

You can uninstall a node of a cluster at any time. Before uninstalling a node, you should take all groups offline or move them to the other node. You should also close Cluster Administrator.

To uninstall MSCS

  1. Click Start, point to Settings, and click Control Panel

  2. In Control Panel, double-click Add/Remove Programs

  3. On the Install/Uninstall tab, click Microsoft Cluster Server, and then click Add/Remove

Note If you have installed only Cluster Administrator, you cannot uninstall it. You must manually delete the cluster folder within the Windows NT folder (typically C:\Winnt\Cluster) and remove Cluster Administrator from the Administrative Tools (Common) folder (typically C:\Winnt\Profiles\All Users\Start Menu\Programs\Administrative Tools (Common)).
