Compute Cluster Network Topology: Scenario 3

Updated: June 6, 2006

Applies To: Windows Compute Cluster Server 2003

Compute nodes isolated on private and MPI networks

Cluster topology scenario 3 is similar to topology scenario 1. The public network is attached only to the head node, which acts as a gateway between the compute nodes on the private network and the resources and users on the public network. The key difference in this scenario is that the cluster is configured with an MPI network connecting all nodes. In this topology, the head node has a third network interface, a high-speed adapter connected to the MPI network. When an MPI network is present, jobs running on the cluster use the high-speed network for cross-node communication.
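When both a private network and an MPI network are present, MPI traffic can be steered onto the MPI network by netmask. A minimal sketch, assuming the MPI network is 10.0.2.0/24 and that the installed MS-MPI version honors the MPICH_NETMASK environment variable (both are assumptions; substitute your own subnet):

```shell
:: Sketch: direct MS-MPI cross-node traffic onto the MPI network by netmask.
:: Assumes the MPI network is 10.0.2.0/255.255.255.0 -- substitute your subnet.
:: myapp.exe is a placeholder for your MPI application.
mpiexec -env MPICH_NETMASK 10.0.2.0/255.255.255.0 myapp.exe
```

With this setting, MPI ranks bind their interconnect traffic to the interface whose address matches the given netmask, leaving the private network free for management and file traffic.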

Each compute node in this scenario has two network adapters: one for the private network and another for the MPI network. The MPI network isolates latency-sensitive MPI traffic from other cluster traffic.

Network Topology Scenario 3
Considerations when using this topology
  • Improved cluster response because internal cluster network traffic is routed onto the private and MPI networks.

  • Cluster compute nodes are not directly accessible by users on the public network.

Network Configuration

Enabling Internet Connection Sharing (ICS) on the head node is recommended for this cluster topology. ICS provides network address translation (NAT) so that the compute nodes can access network services and resources on the public network, and it also provides DNS and DHCP services to the private network interface of each compute node.

ICS does not provide dynamic IP addresses to the MPI network interfaces of the cluster nodes; those interfaces must be addressed by other means, such as static assignment or a separate DHCP scope on the MPI network.
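One option is to assign a static address to the MPI interface on each node. A sketch using the Windows Server 2003 netsh syntax, where "MPI" is an assumed connection name and 10.0.2.11/24 is an example address on a hypothetical MPI subnet (substitute your own):

```shell
:: Assign a static address to the MPI interface on one compute node.
:: "MPI" is an assumed connection name; 10.0.2.11 is an example address.
netsh interface ip set address name="MPI" static 10.0.2.11 255.255.255.0
```

Each node needs a unique address in the same MPI subnet; no default gateway is required on this interface because the MPI network carries only intra-cluster traffic.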
