Understanding HPC Cluster Network Topologies

Applies To: Windows HPC Server 2008

Windows HPC Server 2008 supports five cluster topologies designed to meet a wide range of performance, scaling, and access requirements. These topologies are distinguished by how many networks the cluster is connected to, and in what manner.

Cluster networks

The following list describes the networks to which a cluster can be connected.

Enterprise network

An organizational network connected to the head node and, optionally, to the cluster compute nodes. The enterprise network is often the business or organizational network to which most users log on to perform their work. All intra-cluster management and deployment traffic is carried on the enterprise network unless a private network (and optionally an application network) also connects the cluster nodes.

Private network

A dedicated network that carries intra-cluster communication between nodes. If present, this network carries management and deployment traffic, and it also carries application traffic when no application network exists.

Application network

A dedicated network, preferably with high bandwidth and low latency. This network carries parallel Message Passing Interface (MPI) application communication between cluster nodes. If the jobs that you intend to submit to the cluster do not use MPI libraries, no application traffic will be generated and an application network is not required.
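The application network exists to carry MPI traffic such as the point-to-point exchange sketched below. This is a minimal illustration only; it assumes an MPI runtime (for example, MS-MPI) and the mpi4py Python package, neither of which is part of the topology description in this topic.

    # Minimal sketch of MPI point-to-point traffic; on a cluster with an
    # application network, this communication is carried over that network.
    # Assumes an MPI runtime (for example, MS-MPI) and the mpi4py package.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # Rank 0 sends a small Python object to rank 1.
        comm.send({"payload": 42}, dest=1, tag=11)
    elif rank == 1:
        data = comm.recv(source=0, tag=11)
        print("Rank 1 received:", data)

Run the sketch with at least two processes, for example: mpiexec -n 2 python mpi_example.py (the script name is hypothetical).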

Cluster topologies

The following list describes the five cluster network topologies supported by Windows HPC Server 2008.

1. Compute nodes isolated on a private network

  • Network traffic between compute nodes and resources on the enterprise network (such as databases and file servers) passes through the head node. Depending on the amount of traffic, this might impact cluster performance.

  • The private network carries all communication between the head node and the compute nodes, including deployment, management, and application traffic (for example, MPI communication). This offers more consistent cluster performance because intra-cluster communication is routed onto the private network.

  • A possible drawback is that compute nodes are not directly accessible by users on the enterprise network. This has implications when developing and debugging parallel applications for use on the cluster.

2. All nodes on enterprise and private networks

  • Communication between nodes, including deployment, management, and application traffic, is carried on the private network. This offers more consistent cluster performance because intra-cluster communication is routed onto a private network.

  • Traffic from the enterprise network can be routed directly to a compute node.

  • This topology is well suited for developing and debugging applications because all compute nodes are connected to the enterprise network.

  • This topology provides users on the enterprise network with direct access to compute nodes.

  • This topology provides compute nodes with faster access to enterprise network resources.

3. Compute nodes isolated on private and application networks

  • The private network carries deployment and management communication between the head node and the compute nodes. This offers more consistent cluster performance because intra-cluster communication is routed onto a private network.

  • MPI jobs running on the cluster use the high-performance application network for cross-node communication.

  • A possible drawback is that compute nodes are not directly accessible by users on the enterprise network. This has implications when developing and debugging parallel applications for use on the cluster.

4. All nodes on enterprise, private, and application networks

  • The private network carries deployment and management communication between the head node and the compute nodes. This offers more consistent cluster performance because intra-cluster communication is routed onto a private network.

  • MPI jobs running on the cluster use the high-performance application network for cross-node communication.

  • Traffic from the enterprise network can be routed directly to a compute node.

  • This topology is well suited for developing and debugging applications because all compute nodes are connected to the enterprise network.

  • This topology provides users on the enterprise network with direct access to compute nodes.

  • This topology provides compute nodes with faster access to enterprise network resources.

5. All nodes only on an enterprise network

  • All traffic, including enterprise, intra-cluster, and application traffic, is carried over the enterprise network. This maximizes access to the compute nodes by users and developers on the enterprise network.

  • This topology provides users on the enterprise network with direct access to compute nodes.

  • This topology provides compute nodes with faster access to enterprise network resources.

  • This topology is well suited for developing and debugging applications because all cluster nodes are connected to the enterprise network.

  • Because all nodes are connected only to the enterprise network, you cannot use Windows Deployment Services to deploy compute node images using the new deployment tools in Windows HPC Server 2008.
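A practical difference between these topologies is which networks each node is actually attached to: in topologies 1 and 3 the compute nodes have no enterprise address, while in topologies 2, 4, and 5 they do. The following is a minimal sketch, using only the Python standard library, that lists the IPv4 addresses assigned to the local node so you can see which cluster networks it is connected to. The subnet prefixes are hypothetical examples; substitute the ones used at your site.

    # Minimal sketch: list the IPv4 addresses assigned to this node and label
    # them against example subnet prefixes. The prefixes below are hypothetical;
    # replace them with the enterprise, private, and application subnets used
    # in your cluster.
    import socket

    KNOWN_NETWORKS = {
        "192.168.": "enterprise",   # hypothetical enterprise subnet
        "10.0.0.": "private",       # hypothetical private subnet
        "10.0.1.": "application",   # hypothetical application subnet
    }

    hostname = socket.gethostname()
    _, _, addresses = socket.gethostbyname_ex(hostname)

    for addr in addresses:
        label = next(
            (name for prefix, name in KNOWN_NETWORKS.items() if addr.startswith(prefix)),
            "unknown",
        )
        print(hostname, addr, label)

On a compute node in topology 1 or 3, no enterprise address appears in the output; in topologies 2, 4, and 5, one does.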
