Step 1: Prepare for Your Deployment

Applies To: Windows HPC Server 2008

The first step in deploying your HPC cluster is to make important decisions, such as how you will add nodes to your cluster and which network topology you will use. The following checklist describes the tasks involved in preparing for your deployment.

Checklist: Prepare for your deployment


1.1. Review initial considerations and system requirements

Review the list of initial considerations and system requirements to ensure that you have all the necessary hardware and software components to deploy an HPC cluster.

1.2. Decide how to add compute nodes to your cluster

Decide whether you will add compute nodes to your cluster from bare metal, as preconfigured nodes, or by importing a node XML file.

1.3. Choose the Active Directory domain for your cluster

Choose the Active Directory® domain to which you will join the head node and compute nodes of your HPC cluster.

1.4. Choose a user account for installation and diagnostics

Choose an existing domain account with enough privileges to perform installation and diagnostics tasks.

1.5. Choose a network topology for your cluster

Choose how the nodes in your cluster will be connected, and how the cluster will be connected to your enterprise network.

1.6. Prepare for multicast (optional)

If you will be deploying nodes from bare metal and want to multicast the operating system image that you will be using during deployment, configure your network switches appropriately.

1.7. Prepare for the integration of scriptable power control tools (optional)

If you want to use your own power control tools to start, shut down, and reboot compute nodes remotely, obtain and test all the necessary components of your power control tools.

1.1. Review initial considerations and system requirements

The following sections list some initial considerations that you need to review, as well as hardware and software requirements for Windows HPC Server 2008.

Initial considerations

Review the following initial considerations before you deploy your HPC cluster.

Compatibility with previous versions

The following list describes compatibility between Windows HPC Server 2008 and Windows Compute Cluster Server 2003:

  • Windows HPC Server 2008 provides application programming interface (API)-level compatibility for applications that are integrated with Windows Compute Cluster Server 2003. These applications might, however, require changes to run on Windows Server® 2008. If you encounter problems running your application on Windows Server 2008, you should consult your software vendor.

  • Windows HPC Server 2008 supports job submission from Windows Compute Cluster Server 2003 clients, including jobs that are submitted by using the command-line tools, the Compute Cluster Job Manager, and the COM APIs.

  • The Windows HPC Server 2008 client tools, including the cluster administration console (HPC Cluster Manager), the job scheduling console (HPC Job Manager), the command-line tools, and the APIs, cannot be used to manage or submit jobs to a Windows Compute Cluster Server 2003 cluster.

  • Clusters that have both Windows Compute Cluster Server 2003 nodes and Windows HPC Server 2008 nodes are not supported.

  • A side-by-side installation of Windows HPC Server 2008 and Windows Compute Cluster Server 2003 on the same computer is not supported. This includes the Windows HPC Server 2008 client utilities.

  • The upgrade of a Windows Compute Cluster Server 2003 head node to a Windows HPC Server 2008 head node is not supported.

Server roles added during installation

The installation of HPC Pack 2008 adds the following server roles to the head node:

  • Dynamic Host Configuration Protocol (DHCP) Server, to provide IP addresses and related information for compute nodes.

  • Windows Deployment Services, to deploy compute nodes remotely.

  • File Services, to manage shared folders.

  • Network Policy and Access Services, which enables Routing and Remote Access so that network address translation (NAT) services can be provided to the cluster nodes.

Hardware requirements

Hardware requirements for Windows HPC Server 2008 are very similar to those for the 64-bit editions of Windows Server 2008.

Note

For more information about installing Windows Server 2008, including system requirements, see Installing Windows Server 2008 (https://go.microsoft.com/fwlink/?LinkID=119578).

Processor (x64-based):

  • Minimum: 1.4 GHz

  • Recommended: 2 GHz or faster

RAM:

  • Minimum: 512 MB

  • Recommended: 2 GB or more

Available disk space:

  • Minimum: 50 GB

  • Recommended: 80 GB or more

Drive:

  • DVD-ROM drive

Network adapters:

  • The number of network adapters on the head node and on the compute nodes depends on the network topology that you choose for your cluster. For more information about the different HPC cluster network topologies, see Appendix 1: HPC Cluster Networking.

Software requirements

The following list outlines the software requirements for the head node and the compute nodes in a Windows HPC Server 2008 cluster:

  • Windows Server 2008 HPC Edition, or another 64-bit edition of Windows Server 2008

  • Microsoft HPC Pack 2008

Important

Microsoft HPC Pack 2008 cannot be installed on any edition of Windows Server 2008 R2. It can only be installed on Windows Server 2008 HPC Edition, or another 64-bit edition of Windows Server 2008.

To enable users to submit jobs to your HPC cluster, you can install the utilities included with Microsoft HPC Pack 2008 on client computers. Those client computers must be running one of the following operating systems:

  • Windows XP Professional with Service Pack 3 or later (x86- or x64-based)

  • Windows Vista® Enterprise, Windows Vista Business, Windows Vista Home, or Windows Vista Ultimate

  • Windows Server 2003 Standard Edition or Windows Server 2003 Enterprise Edition with Service Pack 2 or later (x86- or x64-based)

  • Windows Server 2003, Compute Cluster Edition

  • Windows Server 2003 R2 Standard Edition or Windows Server 2003 R2 Enterprise Edition (x86- or x64-based)

1.2. Decide how to add compute nodes to your cluster

There are three ways to add compute nodes to your cluster:

  • From bare metal. The operating system and all the necessary HPC cluster components are automatically installed on each compute node as it is added to the cluster. No manual installation of the operating system or other software is required.

  • Add preconfigured compute nodes. The compute nodes are already running Windows Server 2008 HPC Edition, or another 64-bit edition of Windows Server 2008, and Microsoft HPC Pack 2008 is manually installed on each node.

  • Import a node XML file. An XML file that contains a list of all the nodes that will be deployed is used. This XML file can be used to add nodes from bare metal or from preconfigured nodes. For more information about node XML files, see Appendix 2: Creating a Node XML File.

Consider the following details when choosing how to add nodes to your HPC cluster:

  • When deploying nodes from bare metal, Windows HPC Server 2008 automatically generates computer names for your compute nodes. During the configuration process, you will be required to specify the naming convention to use when automatically generating computer names for the new nodes.

  • Compute nodes are assigned computer names in the order in which they are deployed.

  • If you want to add compute nodes from bare metal and assign computer names in a different way, you can use a node XML file. For more information about node XML files, see Appendix 2: Creating a Node XML File.

  • If you want to add preconfigured nodes to your cluster, you will need to install Windows Server 2008 HPC Edition, or another 64-bit edition of Windows Server 2008 on each node (if not already installed), as well as Microsoft HPC Pack 2008.
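As a rough illustration of the node XML approach described above, a node XML file lists the nodes to be deployed, typically with a computer name and, for bare metal deployment, a MAC address that identifies each machine. The fragment below is a hypothetical sketch only; the actual element names, attributes, and required schema are defined in Appendix 2: Creating a Node XML File.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative fragment only. The element and attribute names shown here
     are assumptions; see Appendix 2 for the authoritative schema. -->
<Nodes>
  <Node Name="COMPUTENODE001" Domain="CONTOSO">
    <!-- For bare metal deployment, the MAC address identifies the machine. -->
    <MacAddress>00155D000001</MacAddress>
  </Node>
  <Node Name="COMPUTENODE002" Domain="CONTOSO">
    <MacAddress>00155D000002</MacAddress>
  </Node>
</Nodes>
```

Because each node is listed explicitly, a node XML file lets you assign computer names in any order you choose, rather than in deployment order.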

1.3. Choose the Active Directory domain for your cluster

The head node and the compute nodes in your HPC cluster must be members of an Active Directory domain. Before deploying your cluster, you must choose the Active Directory domain that you will use for your HPC cluster.

If you do not have an Active Directory domain to which you can join your cluster, or if you prefer not to join an existing domain, you can install the Active Directory Domain Services role on the head node and then configure a domain controller on that node. For more information about installing the Active Directory Domain Services role on a computer that is running Windows Server 2008, see the AD DS Installation and Removal Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=119580).

Warning

If you choose to install and configure an Active Directory domain controller on the head node, consult with your network administrator about the correct way to isolate the new Active Directory domain from the enterprise network, or how to join the new domain to an existing Active Directory forest.

1.4. Choose a user account for installation and diagnostics

During the configuration of your HPC cluster, you must provide credentials for a domain user account that will be used for installation and diagnostics. You must choose an existing account or create a new account before starting your cluster deployment.

Consider the following details when choosing the user account:

  • The user account that you choose must be a domain account with enough privileges to create Active Directory computer accounts for the compute nodes. Alternatively, you can create the computer accounts manually or ask your domain administrator to create them for you.

  • If part of your deployment requires access to resources on the enterprise network, the user account must have the necessary permissions to access those resources—for example, installation files that are available on a network server.

  • If you want to restart nodes remotely from the cluster administration console (HPC Cluster Manager), the account must be a member of the local Administrators group on the head node. This requirement is only necessary if you do not have scriptable power control tools that you can use to remotely restart the compute nodes.

1.5. Choose a network topology for your cluster

Windows HPC Server 2008 supports five cluster topologies. These topologies are distinguished by how the compute nodes in the cluster are connected to each other and to the enterprise network. The five supported cluster topologies are:

  • Topology 1: Compute Nodes Isolated on a Private Network

  • Topology 2: All Nodes on Enterprise and Private Networks

  • Topology 3: Compute Nodes Isolated on Private and Application Networks

  • Topology 4: All Nodes on Enterprise, Private, and Application Networks

  • Topology 5: All Nodes on an Enterprise Network

For more information about each network topology, see Appendix 1: HPC Cluster Networking.

When you are choosing a network topology, you must take into consideration your existing network infrastructure:

  • Decide which network in your chosen topology will serve as the enterprise network, which as the private network, and which as the application network.

  • Ensure that the network adapter on the head node that is connected to the enterprise network is not using an automatic configuration (that is, its IP address must not start with 169.254). That adapter must have a valid IP address, assigned either dynamically (DHCP) or manually (static).

  • If you choose a topology that includes a private network, and you are planning to add nodes to your cluster from bare metal:

    • Ensure that there are no Pre-Boot Execution Environment (PXE) servers on the private network.

    • If you want to use an existing DHCP server for your private network, ensure that it is configured to recognize the head node as the PXE server in the network.

  • If you want to enable the DHCP server role on your head node for the private or application networks and there are other DHCP servers connected to those networks, you must disable those other DHCP servers.

  • If you have an existing Domain Name System (DNS) server connected to the same network as the compute nodes, no action is necessary; the compute nodes will be automatically registered with that DNS server.

  • Contact your system administrator to determine whether Internet Protocol security (IPsec) is enforced on your domain through Group Policy. If it is, you may experience issues during deployment. As a workaround, make your head node an IPsec boundary server so that compute nodes can communicate with the head node during PXE boot.
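The automatic-configuration check in the list above (an adapter address starting with 169.254 indicates that the adapter did not obtain a valid address) can also be scripted. The sketch below is illustrative only and is not part of Windows HPC Server 2008; Python is used here for brevity.

```python
import ipaddress

# APIPA (automatic configuration) addresses fall in the 169.254.0.0/16 range.
APIPA_NETWORK = ipaddress.ip_network("169.254.0.0/16")

def has_valid_address(ip: str) -> bool:
    """Return True if the adapter address is usable for the enterprise
    network, that is, not an APIPA (169.254.x.x) address."""
    return ipaddress.ip_address(ip) not in APIPA_NETWORK

# An APIPA address means the adapter failed to get a DHCP lease and has
# no manually assigned (static) address.
print(has_valid_address("169.254.10.7"))   # False: automatic configuration
print(has_valid_address("192.168.1.20"))   # True: valid address
```

On the head node itself, you would compare the output of ipconfig for the enterprise-facing adapter against this range.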

1.6. Prepare for multicast (optional)

If you will be deploying nodes from bare metal and want to multicast the operating system image that you will be using during deployment, we recommend that you prepare for multicast by:

  • Enabling Internet Group Management Protocol (IGMP) snooping on your network switches, if this feature is available. This will help to reduce multicast traffic.

  • Disabling Spanning Tree Protocol (STP) on your network switches, if this feature is enabled.

Note

For more information about these settings, contact your network administrator or your networking hardware vendor.

1.7. Prepare for the integration of scriptable power control tools (optional)

The cluster administration console (HPC Cluster Manager) includes actions to start, shut down, and reboot compute nodes remotely. These actions are linked to a script file (CcpPower.cmd) that performs these power control operations using operating system commands. You can replace the default operating system commands in that script file with your own power control scripts, such as Intelligent Platform Management Interface (IPMI) scripts that are provided by your cluster solution vendor.

In preparation for this integration, you must obtain all the necessary scripts, dynamic-link library (DLL) files, and other components of your power control tools. After you have obtained all the necessary components, test them independently and ensure that they work as intended on the computers that you will be deploying as compute nodes in your cluster.

For information about modifying CcpPower.cmd to integrate your own scriptable power control tools, see Appendix 5: Scriptable Power Control Tools.
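Conceptually, the integration described above maps each power operation (start, shut down, reboot) to an invocation of your vendor's tool instead of the default operating system command. The sketch below illustrates that dispatch pattern only; Python is used for illustration, the tool name and its arguments are hypothetical placeholders, and the actual integration is done by editing CcpPower.cmd as described in Appendix 5.

```python
# Illustrative dispatch pattern only; this is NOT the actual CcpPower.cmd.
# "vendor-power.cmd" and its argument layout are hypothetical placeholders
# for your own power control scripts (for example, IPMI-based tools).

def build_power_command(operation: str, node_name: str) -> list:
    """Map a cluster power operation to a (hypothetical) vendor tool call."""
    commands = {
        "start":    ["vendor-power.cmd", "on", node_name],
        "shutdown": ["vendor-power.cmd", "off", node_name],
        "reboot":   ["vendor-power.cmd", "cycle", node_name],
    }
    if operation not in commands:
        raise ValueError("Unsupported power operation: " + operation)
    return commands[operation]

# Example: the command that would be run for a remote reboot request.
print(build_power_command("reboot", "COMPUTENODE001"))
```

Testing each mapped command independently on the target hardware, before wiring it into the cluster, is the preparation step this task calls for.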