Understanding HPC Cluster Network Topologies
Updated: January 13, 2014
Applies To: Microsoft HPC Pack 2008 R2, Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2
Microsoft® HPC Pack supports five cluster topologies designed to meet a wide range of user needs and performance, scaling, and access requirements. These topologies are distinguished by how the nodes in the cluster are connected to each other and to the enterprise network.
The nodes in an HPC cluster can be connected to the following networks:

Enterprise network. An organizational network connected to the head node and, in some cases, to other nodes in the cluster. The enterprise network is often the public or organization network that most users log on to perform their work. All intra-cluster management and deployment traffic is carried on the enterprise network unless a private network (and optionally an application network) also connects the cluster nodes.

Private network. A dedicated network that carries intra-cluster communication between nodes. If it exists, this network carries management and deployment traffic, and also carries application traffic if no application network exists.

Application network. A dedicated network, preferably with high throughput and low latency. This network is normally used for parallel Message Passing Interface (MPI) application communication between cluster nodes.
The following are the five cluster network topologies that are supported by HPC Pack:
1. Compute nodes isolated on a private network
2. All nodes on enterprise and private networks
3. Compute nodes isolated on private and application networks
4. All nodes on enterprise, private, and application networks
5. All nodes only on an enterprise network
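The differences between the topologies can be summarized by which networks each topology uses and whether compute nodes connect directly to the enterprise network. The sketch below is illustrative only; the names and data structure are hypothetical and are not part of HPC Pack.

```python
# Illustrative summary of the five HPC Pack topologies (not an HPC Pack API).
# Each entry: (networks present in the topology,
#              whether compute nodes are connected to the enterprise network).
TOPOLOGIES = {
    1: ({"enterprise", "private"}, False),                # compute nodes isolated on private
    2: ({"enterprise", "private"}, True),                 # all nodes on enterprise and private
    3: ({"enterprise", "private", "application"}, False), # compute nodes on private + application
    4: ({"enterprise", "private", "application"}, True),  # all nodes on all three networks
    5: ({"enterprise"}, True),                            # all nodes only on enterprise
}

def compute_node_networks(topology: int) -> set:
    """Networks that a compute node is attached to in a given topology."""
    networks, on_enterprise = TOPOLOGIES[topology]
    if on_enterprise:
        return set(networks)
    # Isolated compute nodes have no enterprise connection.
    return networks - {"enterprise"}

print(compute_node_networks(1))  # {'private'}
print(compute_node_networks(5))  # {'enterprise'}
```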
If you want to add broker nodes, workstation nodes, or unmanaged server nodes to your cluster, you must choose a network topology that will work with the type of jobs and services that these nodes will be running. You must also connect the nodes to the HPC networks of the topology that you choose so that they can communicate with all the nodes that they need to interact with.
Unmanaged server nodes are supported starting in HPC Pack 2008 R2 with Service Pack 3 (SP3).
For example, broker nodes must be connected to the network where the clients that start service-oriented architecture (SOA) sessions are connected (usually the enterprise network), and to the network where the compute nodes that run the SOA services are connected (if different from the network where the clients are connected). In most cases, a private network, and if possible also a high-throughput, low-latency application network, makes broker nodes more efficient, because the communication between the broker nodes and the compute nodes that run the SOA services does not have to occur over the enterprise network, which is a busy network in most organizations.
In the case of workstation nodes and unmanaged server nodes, topology 5 (all nodes on an enterprise network) is the recommended topology because in that topology the nodes (usually already connected to the enterprise network) are able to communicate with all other types of nodes in the cluster. Although other topologies are supported for workstation nodes and unmanaged server nodes, depending on the type and the scope of the jobs that you want to run, there might be important limitations that you need to consider. For example, if you choose topology 1 (compute nodes isolated on a private network) or topology 3 (compute nodes isolated on private and application networks), and workstation nodes are already connected to the enterprise network, communication between compute nodes and workstation nodes will not be possible.
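The limitation in the last example follows from a simple reachability rule: two nodes can communicate directly only if they share at least one network. A minimal sketch of that rule, with hypothetical names (not an HPC Pack API):

```python
def can_communicate(networks_a, networks_b):
    """Two nodes can talk directly only if they share at least one network."""
    return bool(set(networks_a) & set(networks_b))

# Topology 1: compute nodes are isolated on the private network,
# while workstation nodes are connected only to the enterprise network.
compute_node = {"private"}
workstation = {"enterprise"}
print(can_communicate(compute_node, workstation))  # False

# Topology 5: every node is on the enterprise network, so all
# node types can reach each other.
print(can_communicate({"enterprise"}, {"enterprise"}))  # True
```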
For detailed information about network topologies, as well as information about advanced network configurations, see HPC Cluster Networking.