Deployment Roadmap for Windows HPC Server 2008 R2

Updated: March 2011

Applies To: Windows HPC Server 2008 R2

This topic provides a high-level roadmap for deploying Windows® HPC Server 2008 R2 in several common configurations, with links to detailed deployment guidance. Windows HPC Server 2008 R2 supports a wide range of HPC workloads and network environments. You can adapt or combine these common configurations to meet your specific computing needs and environment.

For each cluster configuration, the following information is provided:

  • Features of the configuration

  • Example deployment options

  • Key deployment steps

  • Additional considerations

Important
  • This roadmap is not a replacement for the step-by-step deployment guides in the Deployment section of the Windows HPC Server 2008 R2 Technical Library (https://go.microsoft.com/fwlink/?LinkId=214556). Refer to those guides for detailed procedures.

  • The deployment options shown for each configuration are only examples. Each configuration supports other network topologies and node deployment options.

  • If you are planning to use an application on the HPC cluster that is developed by an independent software vendor (ISV), consult the vendor’s documentation for information about the HPC cluster configurations that are recommended for the application. For more information about applications for Windows HPC Server 2008 R2, see Applications for Windows HPC Server (https://go.microsoft.com/fwlink/?LinkId=214407).

In this topic:

  • General prerequisites

  • Small HPC cluster   A small yet fully functional on-premises HPC cluster that is especially useful for pre-production and proof-of-concept deployments

  • Basic on-premises HPC cluster   A medium-size on-premises cluster that can run a variety of HPC jobs

  • SOA-enabled on-premises HPC cluster   A medium-size on-premises cluster that can run the full range of HPC jobs, including large service-oriented architecture (SOA) jobs

  • High availability cluster   An on-premises cluster that can scale to more than 1000 nodes and that enhances the availability of the head node and Windows Communication Foundation (WCF) broker node services for unscheduled and scheduled outages

  • Workstation cluster   An on-premises HPC cluster that is made up of Windows® 7 workstations that are not dedicated cluster nodes

  • Windows Azure cloud cluster   An HPC cluster that is made up of an on-premises head node and Windows Azure nodes in the cloud that can be added or removed as needed to change the capacity of the cluster

General prerequisites

For information about preparing and planning for an HPC cluster, see Prepare for Your Deployment in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=201563).

For each configuration shown in this topic, you will need the following, at a minimum:

  • A computer for the head node of the cluster and, in most cases, one or more computers for cluster nodes. The computers must meet the System Requirements for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=212269).

  • Optionally, one or more client computers on the enterprise network on which you can deploy the client utilities for Windows HPC Server 2008 R2 (HPC PowerShell, HPC Cluster Manager, and HPC Job Manager). These utilities allow remote management of the HPC cluster and job submission from client computers. You can also run them on the head node, where they are installed when you install HPC Pack 2008 R2 (see the example after this list).

  • Network switches, network adapters, and connections for the cluster nodes. If your HPC applications require an application network with high bandwidth and low latency, you may require specialized hardware. For example, to run certain message passing interface (MPI) jobs, you may want to consider using an InfiniBand network for your application network.

  • An existing Active Directory domain that the nodes of the HPC cluster will join. Generally, the domain controller is a separate computer on the enterprise network, but for a small HPC cluster in a test environment you can optionally install the Active Directory Domain Services role on the head node.

  • One or more user accounts in the Active Directory domain with sufficient permissions to deploy the head node and to add nodes to the HPC cluster.
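
For example, after the client utilities are installed, you can confirm that a client computer can reach the cluster by using HPC PowerShell. The following is a minimal sketch; HEADNODE is a placeholder for the name of your head node.

    Add-PSSnapin Microsoft.HPC

    # Query the head node for a summary of node, job, and core counts.
    # A successful call confirms that the client can reach the cluster.
    Get-HpcClusterOverview -Scheduler HEADNODE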

Small HPC cluster

Figure 1   A small Windows HPC Server cluster

Features of the sample configuration

  • Supports a small (around 5 nodes) on-premises cluster that can run and test a variety of HPC jobs, including parametric sweep, message passing interface (MPI), and task flow jobs (see the job submission example after this list).

  • Useful for pre-production and proof-of-concept deployments.

  • Can use the compute node role that is installed and enabled on the head node for additional computing power.

  • Can run small service-oriented architecture (SOA) jobs because of the Windows Communication Foundation (WCF) broker node role that is installed and enabled on the head node.

  • Can use but does not require a connection to the enterprise network infrastructure.

  • Adds preconfigured compute nodes to a private network.
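
As an illustration of the job types listed above, the following HPC PowerShell sketch submits a small parametric sweep job. HEADNODE and myapp.exe are placeholders, and the asterisk in the command line is replaced by the sweep index at run time; verify the exact parameter names with Get-Help Add-HpcTask for your version.

    Add-PSSnapin Microsoft.HPC

    # Create a job, add a parametric sweep task that runs ten instances
    # (one per index value), and submit the job to the scheduler.
    $job = New-HpcJob -Name "SweepTest" -Scheduler HEADNODE
    Add-HpcTask -Job $job -Type ParametricSweep -Start 1 -End 10 `
        -CommandLine "myapp.exe *" -Scheduler HEADNODE
    Submit-HpcJob -Job $job -Scheduler HEADNODE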

Example deployment options

Item Description

HPC Pack 2008 R2 edition

Enterprise edition

Note
If you will not be using HPC Services for Excel, you can install the Express edition instead.

HPC databases

Installed with SQL Server 2008 Express edition on the head node (default)

Network adapters

  • 2 on the head node, 1 on each compute node

Network configuration

  • Topology 1: Compute nodes isolated on a private network.

  • DHCP and network address translation (NAT) enabled on the private network (default)
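
After you run the network configuration wizard, you can confirm the active topology from HPC PowerShell. This sketch assumes the Get-HpcNetworkTopology cmdlet; HEADNODE is a placeholder.

    Add-PSSnapin Microsoft.HPC

    # Returns the HPC network topology that is currently configured (for
    # this configuration, compute nodes isolated on a private network).
    Get-HpcNetworkTopology -Scheduler HEADNODE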

Key deployment steps

Note
If you are new to Windows HPC and want the simplest path for setting up a small cluster, see DIY supercomputing: How to build a small Windows HPC cluster (https://go.microsoft.com/fwlink/?LinkId=214585).
Step Reference

Deploy the head node

  • Install Windows Server 2008 R2

  • Install HPC Pack 2008 R2 Enterprise edition, selecting the option to create a new HPC cluster by creating a head node, and using default settings

Deploy the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=214560)

Configure the head node

  • Configure HPC cluster networking with topology 1

  • Provide domain credentials to add nodes to the cluster

  • Specify how nodes will be named automatically

  • Create a compute node template, selecting the option to add nodes without an operating system image

  • Run a set of diagnostic tests to ensure that you can add nodes in your environment

Configure the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=198319)

Pre-configure the compute node computers

  • Install a supported edition of Windows Server 2008 R2 or a 64-bit edition of Windows Server 2008 on each computer

  • Join each computer to the domain

  • Install HPC Pack 2008 R2 Enterprise edition on each computer, selecting the option to join an existing HPC cluster by creating a new compute node

See the section “Pre-configure the compute nodes” in DIY supercomputing: How to build a small Windows HPC cluster (https://go.microsoft.com/fwlink/?LinkId=214585).

Add compute nodes, using the compute node template

Add Preconfigured Nodes (https://go.microsoft.com/fwlink/?LinkId=214588)
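
The final step can also be performed from HPC PowerShell. The following sketch assigns the compute node template to nodes that are awaiting approval and then brings them online; the template and node names are placeholders, and the exact parameter names should be verified with Get-Help Assign-HpcNodeTemplate.

    Add-PSSnapin Microsoft.HPC

    # Assign the compute node template to the preconfigured nodes, which
    # appear with Unapproved health until a template is assigned.
    Get-HpcNode -Health Unapproved |
        Assign-HpcNodeTemplate -Name "ComputeTemplate"

    # Bring the nodes online so that they can accept jobs.
    Set-HpcNodeState -Name CN01,CN02 -State Online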

Additional considerations

  • These configuration and deployment options may not scale well for production deployments. For example, if you need to deploy a larger number of nodes, see Basic on-premises HPC cluster in this topic.

  • Optionally, you can run Active Directory Domain Services on the head node instead of on a separate domain controller. However, this can adversely affect cluster performance.

  • If your computers each have additional network adapters, you can configure other network topologies, such as those with a dedicated application network.

  • If you want to use the head node as a compute node or a WCF broker node, ensure that you bring the head node online in HPC Cluster Manager.
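
You can bring the head node online in HPC Cluster Manager (in Node Management) or from HPC PowerShell, as in this minimal sketch; HEADNODE is a placeholder.

    Add-PSSnapin Microsoft.HPC

    # Bring the head node online so that its compute node and
    # WCF broker node roles can accept work.
    Set-HpcNodeState -Name HEADNODE -State Online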

Basic on-premises HPC cluster

Figure 2   A basic Windows HPC Server cluster

Features of the sample configuration

  • Supports a medium-size (up to 256 nodes) on-premises cluster that can run a variety of HPC jobs, including parametric sweep, message passing interface (MPI), and task flow jobs.

  • Can be used to run small service-oriented architecture (SOA) jobs because of the WCF broker node role that is installed and enabled on the head node. However, large SOA jobs may need additional broker nodes to be deployed in the HPC cluster. For more information, see SOA-enabled on-premises HPC cluster later in this topic.

  • Supports a cluster with more than 256 nodes; at that scale, consider deploying the HPC databases on one or more servers running Microsoft SQL Server, which requires additional configuration steps.

  • Deploys nodes to the cluster from bare metal by using the Windows HPC Server 2008 R2 features to automatically install an operating system and the HPC Pack 2008 R2 components on the nodes, name them, and join them to the domain.

  • Supports additional node deployment options that are available in Windows HPC Server 2008 R2, depending on the network topology selected.

Example deployment options

Item Description

HPC Pack 2008 R2 edition

Enterprise edition

Note
If you will not be using HPC Services for Excel, you can install the Express edition instead.

HPC databases

Installed with SQL Server 2008 Express edition on the head node (default)

Important
If you will be deploying a cluster with more than 256 nodes, consider using one or more remote instances of Microsoft SQL Server for the HPC databases. To do this, you must preconfigure the SQL Server databases before you deploy the head node. For more information, see Deploying an HPC Cluster with Remote Databases Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=186534).

Network adapters

  • 3 on the head node, 2 on each compute node

  • The network adapter on each compute node that connects to the private network must be configured to boot from pre-boot execution environment (PXE)

Network configuration

  • Topology 3: Compute nodes isolated on private and application networks

  • DHCP is enabled on the private and application networks (default)

  • NAT is disabled on the private and application networks (default)

Key deployment steps

Step Reference

Deploy the head node

  • Install Windows Server 2008 R2

  • Join the computer to the Active Directory domain

  • Install HPC Pack 2008 R2 Enterprise edition, selecting the option to create a new HPC cluster by creating a head node, and using default settings

Deploy the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=214560)

Configure the head node

  • Configure HPC cluster networking with topology 3

  • Provide domain credentials to add nodes to the cluster

  • Specify how nodes will be named automatically

  • Create a compute node template, selecting the option to add nodes with an operating system image

  • Add an operating system image that will be deployed using the node template

  • Run a set of diagnostic tests to ensure that you can add nodes in your environment

Configure the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=198319)

(Optional) Add drivers to the operating system image

Add Drivers for Operating System Images (https://go.microsoft.com/fwlink/?LinkId=214592)

Add compute nodes, using the compute node template

Deploy Nodes from Bare Metal (https://go.microsoft.com/fwlink/?LinkId=214594)
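
Several of these steps can also be scripted. The following HPC PowerShell sketch adds an operating system image from a Windows image (.wim) file and creates a node template for deployment with an image; the path and names are placeholders, the association between template and image can be completed in the node template editor in HPC Cluster Manager, and parameter names should be verified with Get-Help for your version.

    Add-PSSnapin Microsoft.HPC

    # Add the operating system image that bare-metal nodes will receive.
    Add-HpcImage -Path "\\fileserver\images\install.wim"

    # Create a compute node template for deployment with an image.
    New-HpcNodeTemplate -Name "BareMetalCompute" -Type ComputeNode `
        -Description "Deploys Windows Server 2008 R2 from an image"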

Additional considerations

This configuration also supports the following node deployment methods:

Item Reference

Add nodes on which Windows Server 2008 R2 and HPC Pack 2008 R2 are already installed

Add Preconfigured Nodes (https://go.microsoft.com/fwlink/?LinkID=214588)

Use an XML file that specifies attributes of the nodes that are added to the HPC cluster

Add Nodes by Importing a Node XML File

Deploy nodes from bare metal using an iSCSI connection to a network-attached storage array

Deploying iSCSI Boot Nodes Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=194674)

SOA-enabled on-premises HPC cluster

Figure 3   A Windows HPC Server cluster configured for SOA applications

Features of the sample configuration

  • Supports a medium-size (up to 256 nodes) on-premises cluster that can run a variety of parallel computing jobs, including parametric sweep, message passing interface (MPI), and task flow jobs, as well as service-oriented architecture (SOA) and Microsoft Excel calculation offloading jobs.

  • Supports a cluster with more than 256 nodes; at that scale, consider deploying the HPC databases on one or more servers running Microsoft SQL Server, which requires additional configuration steps.

  • Supports communication of cluster nodes with SOA clients that are on the enterprise network.

  • Deploys additional broker nodes to the cluster to handle SOA jobs. Because a broker node role is installed on the head node by default, additional broker nodes are typically necessary only for large SOA workloads.

  • Adds compute nodes to the cluster from bare metal by using the Windows HPC Server 2008 R2 features to automatically install an operating system and the HPC Pack 2008 R2 components on the nodes, name them, and join them to the domain.

  • Supports additional node deployment options that are available in Windows HPC Server 2008 R2, depending on the network topology selected.

Example deployment options

Item Description

HPC Pack 2008 R2 edition

Enterprise edition

Note
If you will not be using HPC Services for Excel, you can install the Express edition instead.

HPC databases

Installed with SQL Server 2008 Express edition on the head node (default)

Important
If you will be deploying a cluster with more than 256 nodes, consider using one or more remote instances of Microsoft SQL Server for the HPC databases. To do this, you must preconfigure the SQL Server databases before you deploy the head node. For more information, see Deploying an HPC Cluster with Remote Databases Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=186534).

Network adapters

  • 3 on the head node, 3 on each compute node and each broker node

  • The network adapter on each compute node and each broker node that connects to the private network must be configured to boot from pre-boot execution environment (PXE)

Network configuration

  • Topology 4: All nodes on enterprise, private, and application networks

  • DHCP is enabled on the private and application networks (default)

  • NAT is disabled on the private and application networks (default)

Key deployment steps

Step Reference

Deploy the head node

  • Install Windows Server 2008 R2

  • Join the computer to the Active Directory domain

  • Install HPC Pack 2008 R2 Enterprise edition, selecting the option to create a new HPC cluster by creating a head node, and using default settings

Deploy the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=214560)

Configure the head node

  • Configure HPC cluster networking with topology 4

  • Provide domain credentials to add nodes to the cluster

  • Specify how nodes will be named automatically

  • Create a compute node template, selecting the option to add nodes with an operating system image

  • Add an operating system image that will be deployed using the node template

  • Create a broker node template, selecting the option to add nodes with an operating system image

  • Run a set of diagnostic tests to ensure that you can add nodes in your environment

Configure the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=198319)

(Optional) Add drivers to the operating system image

Add Drivers for Operating System Images (https://go.microsoft.com/fwlink/?LinkId=214592)

Deploy nodes to the cluster

  • Add compute nodes, using the compute node template

  • Add broker nodes, using the broker node template

  • (Optional) Configure additional WCF broker nodes, if they are required to handle the SOA jobs
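
Additional broker nodes can be created from HPC PowerShell as well as from HPC Cluster Manager. This is a minimal sketch; the template and node names are placeholders, and the exact parameter names should be verified with Get-Help New-HpcNodeTemplate.

    Add-PSSnapin Microsoft.HPC

    # Create a broker node template, assign it to the nodes that
    # should run the WCF broker role, and bring them online.
    New-HpcNodeTemplate -Name "BrokerTemplate" -Type BrokerNode
    Assign-HpcNodeTemplate -Name "BrokerTemplate" -NodeName BN01,BN02
    Set-HpcNodeState -Name BN01,BN02 -State Online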

Additional considerations

This configuration also supports the following node deployment methods:

Item Reference

Add nodes on which Windows Server 2008 R2 and HPC Pack 2008 R2 are already installed

Add Preconfigured Nodes (https://go.microsoft.com/fwlink/?LinkID=214588)

Use an XML file that specifies attributes of the nodes that are added to the HPC cluster

Add Nodes by Importing a Node XML File

Deploy nodes from bare metal using an iSCSI connection to a network-attached storage array

Deploying iSCSI Boot Nodes Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=194674)

High availability cluster

Figure 4   A Windows HPC Server cluster configured for high availability of the head node and WCF broker nodes

For detailed, step-by-step procedures for this configuration, see Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=198300).

Features of the sample configuration

  • Creates a cluster that can scale to more than 1000 nodes and that enhances the availability of the head node and WCF broker node services for unscheduled and scheduled outages.

  • Deploys the head node in the context of a preconfigured two-node failover cluster.

  • Deploys one or more WCF broker nodes, each in the context of a two-node failover cluster.

  • Is well suited to run a variety of parallel computing jobs, including parametric sweep, message passing interface (MPI), and task flow jobs, as well as SOA and Microsoft Excel calculation offloading jobs.

  • Deploys nodes to the cluster from bare metal by using the Windows HPC Server 2008 R2 features to automatically install an operating system and the HPC Pack 2008 R2 components on the nodes, name them, and join them to the domain.

  • Supports additional node deployment options that are available in Windows HPC Server 2008 R2, depending on the network topology selected.

Example deployment options

Item Description

HPC Pack 2008 R2 edition

Enterprise edition

Note
If you will not be using HPC Services for Excel, you can install the Express edition instead.

HPC databases

Installed on one or more remote instances of SQL Server 2008 SP1 or later that are preconfigured for Windows HPC Server 2008 R2

Important
Optionally, you can deploy the remote instances of SQL Server as failover cluster instances. For more information, see Installing a SQL Server 2008 Failover Cluster (https://go.microsoft.com/fwlink/?LinkId=182594).

Network adapters

  • 4 on each head node and each WCF broker node, 3 on each compute node

  • The network adapter on each compute node that connects to the private network must be configured to boot from pre-boot execution environment (PXE)

Network configuration

  • Topology 4: All nodes on enterprise, private, and application networks

  • Each server in a two-node failover cluster must additionally connect on a separate network to shared storage for the failover cluster

  • DHCP is enabled on the private and application networks (default)

  • NAT is disabled on the private and application networks (default)

Key deployment steps

Step Reference

Install a supported operating system on the servers for the head nodes, the WCF broker nodes, and the remote servers for each SQL Server instance (if you will be installing SQL Server in a failover cluster)

Install Windows Server 2008 R2 on Multiple Servers, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=201566)

Set up shared storage for the servers in each failover cluster

Configure failover clustering and file services on the servers for the head node

Install HPC Pack 2008 R2 on the first server that will run head node services

Install HPC Pack 2008 R2 on a Server that Will Run Head Node Services, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=201570)

Configure the head node on the first server

Configure the Head Node on the First Server, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=201571)

Install HPC Pack 2008 R2 and configure the head node on the second server that will run head node services

Install and Configure HPC Pack 2008 R2 on the Other Server that Will Run Head Node Services, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=201572)

Install HPC Pack 2008 R2 on the WCF broker nodes, and then add the broker nodes to the HPC cluster

Create WCF Broker Nodes Running Windows HPC Server 2008 R2, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=201574)

Set up shared storage for the WCF broker nodes

Set Up Shared Storage for WCF Broker Nodes, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=214602)

Create failover clusters using the broker nodes

Create Failover Clusters Using WCF Broker Nodes, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkId=214604)

Add an operating system image that will be deployed to the nodes

Add an Operating System Image (https://go.microsoft.com/fwlink/?LinkId=214590)

(Optional) Add drivers to the operating system image

Add Drivers for Operating System Images (https://go.microsoft.com/fwlink/?LinkId=214592)

Create a compute node template, selecting the option to add nodes with an operating system image

Create a Node Template (https://go.microsoft.com/fwlink/?LinkId=214589)

Add compute nodes, using the compute node template

Deploy Nodes from Bare Metal (https://go.microsoft.com/fwlink/?LinkId=214594)
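
The failover clustering steps use the Failover Clustering feature of Windows Server 2008 R2. As a sketch of the step that configures failover clustering for the head node, the following commands validate the two servers and create the failover cluster; the server names and static address are placeholders.

    # Run on one of the two servers that will host head node services.
    Import-Module FailoverClusters

    # Validate the servers and their shared storage, then create the
    # two-node failover cluster into which the head node is installed.
    Test-Cluster -Node HN-A,HN-B
    New-Cluster -Name HN-CLUSTER -Node HN-A,HN-B -StaticAddress 10.0.0.10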

Additional considerations

This configuration also supports the following node deployment methods:

Item Reference

Add nodes on which Windows Server 2008 R2 and HPC Pack 2008 R2 are already installed

Add Preconfigured Nodes (https://go.microsoft.com/fwlink/?LinkID=214588)

Use an XML file that specifies attributes of the nodes that are added to the HPC cluster

Add Nodes by Importing a Node XML File

Deploy nodes from bare metal using an iSCSI connection to a network-attached storage array

Deploying iSCSI Boot Nodes Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=194674)

Workstation cluster

Figure 5   A Windows HPC Server cluster of workstations

Features of the sample configuration

  • Creates an HPC cluster that is made up of domain-joined Windows® 7 workstations (running Windows 7 Enterprise, Windows 7 Professional, or Windows 7 Ultimate). The workstation nodes do not need to be dedicated cluster computers, and can be used for other tasks.

  • Makes Windows 7 workstations available to the HPC cluster to run jobs according to a time-based or activity-based availability policy, or manually. For example, you can configure the cluster to use workstations only on nights and weekends, or when keyboard or mouse activity has not been detected for a certain time.

  • Can run a variety of HPC jobs, but is ideal for short-running jobs that can be interrupted and that do not require internode communication (see the example after this list).

  • Can be adapted to include dedicated on-premises nodes in addition to workstation nodes. For information about deploying on-premises nodes, see Basic on-premises HPC cluster earlier in this topic.

  • Can support a large number of nodes; at that scale, consider deploying the HPC databases on one or more servers running Microsoft SQL Server, which requires additional configuration steps.
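
For example, a job can be restricted to workstation nodes so that it runs only when workstations are available under their policy. This HPC PowerShell sketch assumes the default WorkstationNodes node group; HEADNODE and the command line are placeholders.

    Add-PSSnapin Microsoft.HPC

    # Create a job that is limited to the WorkstationNodes node group,
    # add a single short-running task, and submit it.
    $job = New-HpcJob -NodeGroups "WorkstationNodes" -Scheduler HEADNODE
    Add-HpcTask -Job $job -CommandLine "myapp.exe data.in" -Scheduler HEADNODE
    Submit-HpcJob -Job $job -Scheduler HEADNODE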

Example deployment options

Item Description

HPC Pack 2008 R2 edition

Enterprise edition

Note
To use workstation nodes, you must install HPC Pack 2008 R2 Enterprise edition on the head node and on each workstation node.

HPC databases

Installed with SQL Server 2008 Express edition on the head node (default)

Important
If you will be deploying a cluster with more than 256 nodes, consider using one or more remote instances of Microsoft SQL Server for the HPC databases. To do this, you must preconfigure the SQL Server databases before you deploy the head node. For more information, see the Deploying an HPC Cluster with Remote Databases Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=186534).

Network adapters

  • 1 on the head node, 1 on each workstation node

Network configuration

  • Topology 5: All nodes only on an enterprise network

Key deployment steps

Step Reference

Deploy the head node

  • Install Windows Server 2008 R2

  • Join the computer to the Active Directory domain

  • Install HPC Pack 2008 R2 Enterprise edition, selecting the option to create a new HPC cluster by creating a head node, and using default settings

Deploy the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=214560)

Configure the head node

  • Configure HPC cluster networking with topology 5

  • Provide domain credentials to add nodes to the cluster

  • Specify how nodes will be named automatically

  • Create a workstation node template that will be used to define the availability policy of the workstation nodes that are added to the HPC cluster

  • Run a set of diagnostic tests to ensure that you can add nodes in your environment

  • Configure the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=198319)

  • Create a Workstation Node Template, in the Adding Workstation Nodes in Windows HPC Server 2008 R2 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkId=214605)

Install HPC Pack 2008 R2 Enterprise edition on each Windows 7 computer that you will use as a workstation node, selecting the option to join an existing HPC cluster by creating a new workstation node

Note
The administrator of workstations in your organization can use any of a variety of deployment methods to install HPC Pack 2008 R2 on the workstations.

Install HPC Pack 2008 R2 on the Workstation Computers, in the Adding Workstation Nodes in Windows HPC Server 2008 R2 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkId=214606)

Assign the node template to add the workstation nodes to the cluster

Assign a Workstation Node Template, in the Adding Workstation Nodes in Windows HPC Server 2008 R2 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkId=214607)
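
After the template is assigned, you can confirm from HPC PowerShell that the workstations have joined the cluster and observe their state change as the availability policy takes effect; HEADNODE is a placeholder.

    Add-PSSnapin Microsoft.HPC

    # List the workstation nodes and their current state (for example,
    # Online during the configured availability window, Offline outside it).
    Get-HpcNode -GroupName WorkstationNodes -Scheduler HEADNODE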

Windows Azure cloud cluster

Figure 6   A Windows HPC Server cluster using Windows Azure nodes

Important
To deploy Windows Azure worker nodes, you must be running Windows HPC Server 2008 R2 Service Pack 1 or later. For more information and release notes for the service pack, see Release Notes for Microsoft HPC Pack 2008 R2 Service Pack 1 (https://go.microsoft.com/fwlink/?LinkID=202812).

Features of the sample configuration

  • Creates an HPC cluster that uses minimal on-premises infrastructure, and that adds or removes Windows Azure nodes in the cloud as needed to change the capacity of the cluster.

  • Makes Windows Azure computational resources available according to a time-based availability policy, or manually. You pay for Windows Azure nodes only when they are made available.

  • Can run a variety of parallel jobs, but is ideal for small, service-oriented architecture (SOA) jobs that do not process large amounts of data. You can run SOA jobs by using the broker node role that is installed and enabled by default on the head node. For larger SOA jobs, additional on-premises broker nodes may need to be deployed in the HPC cluster. For more information, see SOA-enabled on-premises HPC cluster earlier in this topic.

    Important
    If you want to run an ISV application in the cloud, check with the vendor about the availability of the application in Windows Azure.
  • Requires a Windows Azure subscription in which a hosted service and a storage account are preconfigured.

  • Can be adapted to include dedicated on-premises compute nodes in addition to Windows Azure worker nodes. For information about deploying compute nodes, see Basic on-premises HPC cluster earlier in this topic.

  • Can support a large number of nodes, depending on your Windows Azure subscription and the on-premises configuration; at that scale, consider deploying the HPC databases on one or more servers running Microsoft SQL Server, which requires additional configuration steps.

Example deployment options

Item Description

HPC Pack 2008 R2 edition

Express edition

HPC databases

Installed with SQL Server 2008 Express edition on the head node (default)

Important
If you will be deploying a cluster with more than 256 nodes, consider using one or more remote instances of Microsoft SQL Server for the HPC databases. To do this, you must preconfigure the SQL Server databases before you deploy the head node. For more information, see the Deploying an HPC Cluster with Remote Databases Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=186534).

Network adapters

  • 1 on the head node

    Note
    To deploy Windows Azure worker nodes, the head node must have Internet connectivity.

Network configuration

  • Topology 5: All nodes only on an enterprise network

  • Internet connection for the head node

  • TCP ports open on the external firewall: 80, 443, 5901, 5902, 7998, 7999

    Note
    These are needed so that the head node can communicate with the Windows Azure deployment services and with the Windows Azure nodes.

Key deployment steps

Step Reference

Deploy the head node

  • Install Windows Server 2008 R2

  • Join the computer to the Active Directory domain

  • Install HPC Pack 2008 R2 Express edition, selecting the option to create a new HPC cluster by creating a head node, and using default settings

Deploy the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=214560)

Configure the head node

  • Configure HPC cluster networking with topology 5

  • Provide domain credentials to add nodes to the cluster

  • Specify how nodes will be named automatically

  • Run a set of diagnostic tests to ensure that you can add nodes in your environment

Configure the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=198319)

Prepare to deploy Windows Azure worker nodes

  • Configure the management certificate in the Windows Azure subscription and on the head node

  • Collect information about your Windows Azure subscription to enable a connection from the Windows HPC Server 2008 R2 cluster

  • Configure a proxy client (optional)

Create a Windows Azure worker node template to define the availability policy of the nodes

Step 4: Create a Windows Azure Worker Node Template, in the Deploying Windows Azure Worker Nodes in Windows HPC Server 2008 R2 SP1 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=200496)

Add the Windows Azure worker nodes to the cluster

Add Windows Azure Worker Nodes to the HPC Cluster, in the Deploying Windows Azure Worker Nodes in Windows HPC Server 2008 R2 SP1 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkId=214615)

Start the Windows Azure worker nodes, which provisions the worker role nodes in Windows Azure

Start the Windows Azure Worker Nodes, in the Deploying Windows Azure Worker Nodes in Windows HPC Server 2008 R2 SP1 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkId=214616)
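
These last steps can also be scripted. The sketch below assumes the Start-HpcAzureNode and Stop-HpcAzureNode cmdlets described in the SP1 deployment guidance (verify the exact cmdlet names in the step-by-step guide); the node names are placeholders.

    Add-PSSnapin Microsoft.HPC

    # Provision the Windows Azure worker nodes (this deploys the worker
    # role instances in Windows Azure) and bring them online for jobs.
    Start-HpcAzureNode -Name AzureCN-0001,AzureCN-0002
    Set-HpcNodeState -Name AzureCN-0001,AzureCN-0002 -State Online

    # When the capacity is no longer needed, take the nodes offline and
    # stop them to release the Windows Azure instances.
    Set-HpcNodeState -Name AzureCN-0001,AzureCN-0002 -State Offline
    Stop-HpcAzureNode -Name AzureCN-0001,AzureCN-0002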

Additional considerations

  • Before using Windows Azure worker nodes, consider your organization’s policies and other limitations for storing or processing sensitive data in the cloud.

  • The performance of Windows Azure worker nodes may be less than that of dedicated on-premises compute nodes.

  • Windows Azure worker nodes cannot access on-premises nodes or file shares directly.