Best Practices for Large Deployments of Azure Nodes with Microsoft HPC Pack

Updated: April 16, 2014

Applies To: Microsoft HPC Pack 2008 R2, Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2, Windows HPC Server 2008 R2

Starting with HPC Pack 2008 R2 with Service Pack 1, Windows HPC cluster administrators and developers can increase the power of the on-premises cluster by adding computational resources on-demand in Azure. This HPC cluster “burst” scenario with Azure nodes enables larger HPC workloads, sometimes requiring thousands of cores in addition to or in place of on-premises cluster resources. This topic provides guidance and best practices recommendations to assist in planning and implementing a large deployment of Azure nodes from an on-premises HPC Pack cluster. These recommended best practices should help minimize the occurrence of Azure deployment timeouts, deployment failures, and loss of live instances.

Note
  • These best practices include recommendations for both the Azure environment and the configuration of the on-premises head node. Most recommendations will also improve the behavior of smaller deployments of Azure nodes. Exceptions to these guidelines are test deployments on which performance and reliability of the head node services may not be critical and very small deployments, where the head node services will not be highly stressed.

  • Many of the considerations for configuring an on-premises head node for a large Azure deployment also apply to clusters that contain comparably large numbers of on-premises compute nodes.

  • These recommendations supplement the cluster, networking, and other requirements to add Azure nodes to a Windows HPC cluster. For more information, see Requirements for Azure Nodes.

  • These general recommendations may change over time and may need to be adjusted for your HPC workloads.

Applicable versions of HPC Pack and Azure SDK for .NET

These recommendations are generally based on HPC Pack 2012 R2 and HPC Pack 2012, but they are also useful for large deployments performed with HPC Pack 2008 R2.

The following table lists the versions of HPC Pack and the related versions of Azure SDK for .NET that these guidelines apply to.

 

HPC Pack version                             Azure SDK version

HPC Pack 2012 R2                             Azure SDK for .NET 2.2
HPC Pack 2012 with Service Pack 1 (SP1)      Azure SDK for .NET 2.0
HPC Pack 2012                                Azure SDK for .NET 1.8
HPC Pack 2008 R2 with Service Pack 4 (SP4)   Azure SDK for .NET 1.7
HPC Pack 2008 R2 with Service Pack 3 (SP3)   Azure SDK for .NET 1.6

Thresholds for large deployments of Azure nodes

A deployment of Azure nodes for an HPC cluster is considered “large” when it becomes necessary to consider the configuration of the head node and when the deployment will demand a significant percentage of the resources in an Azure cluster that can be used by a single cloud service. Larger deployments risk deployment timeouts and the loss of live instances.

Important
Each Azure subscription is allocated a quota of cores and other resources, which also affects your ability to deploy large numbers of Azure nodes. At this time, the default quota of CPU cores per subscription is 20. To be able to deploy a large number of Azure nodes, you might first need to contact Microsoft Support to request a core quota increase for your subscription. Note that a quota is a credit limit, not a guarantee of availability of resources.

The following table lists practical threshold numbers of role instances for a large deployment of Azure nodes in a single cloud service. The threshold depends on the virtual machine size (predefined in Azure) that is chosen for the Azure role instances.

 

Role instance size                                      Number of role instances

A9 (supported starting with HPC Pack 2012 R2)           125
A8 (supported starting with HPC Pack 2012 R2)           250
A7 (supported starting with HPC Pack 2012 with SP1)     250
A6 (supported starting with HPC Pack 2012 with SP1)     250
Extra Large                                             250
Large                                                   500
Medium                                                  800
Small                                                   1000

For details about each virtual machine size, including the number of CPU cores and memory for each size, see Virtual Machine and Cloud Service Sizes for Azure.

To deploy more than these threshold numbers of role instances in one service with high reliability usually requires the manual involvement of the Azure operations team. To initiate this, contact your Microsoft sales representative, your Microsoft Premier Support account manager, or Microsoft Support. For more information about support plans, see Azure Support.

Although there is no hard, enforceable limit that applies to all Azure node deployments, 1000 instances per cloud service is a practical production limit.

Best practices for using Azure for large deployments

The following are general guidelines to successfully create and use large Azure deployments with your HPC cluster.

Provide early signals to the Azure operations team

Unless you have made arrangements to deploy to a dedicated Azure cluster in a data center, the most important recommendation is to communicate your need for a large amount of capacity to the Azure operations team (through a Microsoft Support channel) ahead of time, and to plan deployments accordingly so that capacity does not become a bottleneck. This is also an opportunity to obtain additional guidance about deployment strategies beyond the ones that are described in this topic.

Spread out deployments to multiple cloud services

We recommend splitting large deployments into several smaller-sized deployments, by using multiple cloud services, for the following reasons:

  • To allow flexibility in starting and stopping groups of nodes.

  • To make it possible to stop idle instances after jobs have finished.

  • To facilitate finding available nodes in the Azure clusters, especially when Extra Large instances are used.

  • To enable the use of multiple Azure data centers for disaster recovery or business continuity scenarios.

There is no fixed limit on the size of a cloud service, but the general guidance is fewer than 500 to 700 virtual machine instances, or fewer than 1000 cores, per cloud service. Larger deployments risk deployment timeouts, the loss of live instances, and problems with virtual IP address swapping.

The maximum tested number of cloud services for a single HPC cluster overall is 32.

Note
You may encounter limitations in the number of cloud services and role instances that you can manage through HPC Pack or the Azure Management Portal.

Be flexible with location

Dependencies on other services and other geographic requirements may be inevitable, but it helps if your Azure deployment is not tied to a specific region or geography. However, we do not recommend placing multiple deployments in different geographic regions unless they have external dependencies in those regions or unless you have high availability and disaster recovery requirements.

Be flexible with virtual machine size

Having strict dependencies on a certain virtual machine size (for example, Extra Large) can impact the success of deployments at a large scale. Having flexibility to adjust or even mix-and-match virtual machine sizes to balance instance counts and cores can help.

Use multiple Azure storage accounts for node deployments

We recommend using different Azure storage accounts for simultaneous large Azure node deployments and for custom applications. For applications that are constrained by I/O, use several storage accounts. Additionally, as a best practice, a storage account that is used for an Azure node deployment should not be used for purposes other than node provisioning. For example, if you plan to use Azure storage to move job and task data to and from the head node or the Azure nodes, configure a separate storage account for that purpose.
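As an illustration, separate accounts can be created with the classic Azure PowerShell module that was current for these HPC Pack versions. This is a hedged sketch: the account names and the location are hypothetical examples, not values from this topic.

```powershell
# Sketch, assuming the classic Azure PowerShell module (New-AzureStorageAccount).
# Account names must be 3-24 lowercase letters and digits; these are examples.

# One account dedicated to Azure node provisioning...
New-AzureStorageAccount -StorageAccountName "hpcnodeprovision01" -Location "North Europe"

# ...and a separate account for moving job and task data.
New-AzureStorageAccount -StorageAccountName "hpcjobdata01" -Location "North Europe"
```

Keeping provisioning traffic and application data in separate accounts prevents the two workloads from competing for the same storage account throughput targets.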

Note
You incur charges for the total amount of data stored and for storage transactions on your Azure storage accounts, independent of the number of storage accounts. However, each subscription limits the total number of storage accounts. If you need additional storage accounts in your subscription, contact Azure Support.

Adjust the number of proxy node instances to support the deployment

Proxy nodes are Azure worker role instances that are automatically added to each Azure node deployment from an HPC cluster to facilitate communication between on-premises head nodes and the Azure nodes. The demand for resources on the proxy nodes depends on the number of nodes deployed in Azure and the jobs running on those nodes. You should generally increase the number of proxy nodes in a large Azure deployment.

Note
  • The proxy role instances incur charges in Azure along with the Azure node instances.

  • The proxy role instances consume cores that are allocated to the subscription and reduce the number of cores that are available to deploy Azure nodes.

HPC Pack 2012 introduced HPC management tools for you to configure the number of proxy nodes in each Azure node deployment (cloud service). (In HPC Pack 2008 R2, the number is automatically set at 2 proxy nodes per deployment.) The number of role instances for the proxy nodes can also be scaled up or down by using the tools in the Azure Management Portal, without redeploying nodes. The recommended maximum number of proxy nodes for a single deployment is 10.

Larger or heavily used deployments may require more proxy nodes than the numbers listed in the following table. The table assumes CPU utilization on the proxy nodes below 50 percent and bandwidth usage below the quota.

 

Azure nodes per cloud service   Number of proxy nodes

<100                            2
100 - 400                       3
400 - 800                       4
800 - 1000                      5

For more information about proxy node configuration options, see Set the Number of Azure Proxy Nodes.

Best practices to configure the head node for large deployments

Large deployments of Azure nodes can place significant demands on the head node (or head nodes) of a cluster. The head node performs several tasks to support the deployment:

  • Accesses proxy node instances that are created in an Azure deployment to facilitate communication with the Azure nodes (see Adjust the number of proxy node instances to support the deployment, in this topic).

  • Accesses Azure storage accounts for blob (such as runtime packages), queue, and table data.

  • Manages the heartbeat interval and responses, the number of proxies (starting with HPC Pack 2012), the number of deployments, and the number of nodes.

As Azure deployments grow in size and throughput, the stress put on the HPC cluster head node increases. In general, the key elements necessary to ensure your head node can support the deployment are:

  • Sufficient RAM

  • Sufficient disk space

  • An appropriately sized, well-maintained SQL Server database for the HPC cluster databases

Hardware specifications for the head node

The following are suggested minimum specifications for a head node to support a large Azure deployment:

  • 8 CPU cores

  • 2 disks

  • 16 GB of RAM

Configure remote SQL Server databases

For large deployments we recommend that you install the cluster databases on a remote server that is running Microsoft SQL Server, instead of installing the cluster databases on the head node. For general guidelines to select and configure an edition of SQL Server for the cluster, see Database Capacity Planning and Tuning for Microsoft HPC Pack.

Do not configure the head node for additional cluster roles

As a general best practice for most production deployments, we recommend that head nodes are not configured with an additional cluster role (compute node role or WCF broker node role). Having the head node serve more than one purpose may prevent it from successfully performing its primary management role. To change the roles performed by your head node, first take the node offline by using the action in Node Management in HPC Cluster Manager. Then, right-click the head node, and click Change Role.

Additionally, moving cluster storage off of the head node helps ensure that the head node does not run out of disk space and can operate effectively.

Use the HPC Client Utilities to connect remotely to the head node

When the head node is operating under a heavy load, its performance can be negatively impacted by having many users connected with remote desktop connections. Rather than having users connect to the head node by using Remote Desktop Services (RDS), users and administrators should install the HPC Pack Client Utilities on their workstations and access the cluster by using these remote tools.

Disable performance counter collection and event forwarding

For large deployments, performance counter collection and event forwarding can put a large burden on the HPC Management Service and SQL Server. For these deployments, it may be desirable to disable these capabilities by using the HPC cluster management tools. For example, set the CollectCounters cluster property to false by using the Set-HpcClusterProperty HPC PowerShell cmdlet. Consider the tradeoff between improved performance and the loss of metrics that can help you troubleshoot issues that arise.
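As a sketch, the CollectCounters property named above can be set from HPC PowerShell on the head node (run as a cluster administrator):

```powershell
# Disable performance counter collection to reduce load on the
# HPC Management Service and SQL Server.
Set-HpcClusterProperty -CollectCounters $false

# Confirm the change by listing the property and its current value.
Get-HpcClusterProperty | Where-Object { $_.Name -eq "CollectCounters" }
```

Set the property back to $true if you later need counter data to troubleshoot the cluster.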

Disable unneeded head node services

To ensure a minimal hardware footprint from the operating system, and as a general HPC cluster best practice, disable any operating system services that are not required for operation of the HPC cluster. We especially encourage disabling any desktop-oriented features that may have been enabled.

Do not run NAT on the head node

Although HPC Pack allows quick configuration of the Routing and Remote Access service (RRAS) running on the head node to provide network address translation (NAT) and to allow compute nodes to reach the enterprise network, this may make the head node a significant bottleneck for network bandwidth and may also affect its performance. As a general best practice for larger deployments or deployments with significant traffic between compute nodes and the public network, we recommend one of the following alternatives:

  • Provide a direct public network connection to each compute node.

  • Provide a dedicated NAT router, such as a separate server that runs a Windows Server operating system, is dual-homed on the two networks, and runs RRAS.

Ensure a reasonable period of storage for completed jobs

The TtlCompletedJobs property of the cluscfg command and the Set-HpcClusterProperty HPC cmdlet control how long completed jobs remain in the SQL Server database for the HPC cluster. Setting a large value for this property ensures that job information is maintained in the system for a long time, which may be desirable for reporting purposes. However, a large number of jobs in the system will increase the storage and memory requirements of the system, since the database (and queries against it) will generally be larger.
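For example, the retention period can be set with either tool. This sketch assumes the TtlCompletedJobs value is specified in days:

```powershell
# Keep completed jobs for 5 days before they are purged from the HPC
# cluster databases (raise this only if reporting needs require it).
Set-HpcClusterProperty -TtlCompletedJobs 5

# Equivalent setting by using the cluscfg command:
cluscfg setparams TtlCompletedJobs=5
```

A shorter retention period keeps the job tables, and queries against them, smaller on a busy cluster.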

Configure a reasonable number of missed heartbeats before marking nodes unreachable

HPC Pack uses a heartbeat signal to verify node availability. A compute node that fails to respond to this periodic health probe from the HPC Job Scheduler Service is eventually marked as unreachable. By configuring heartbeat options in Job Scheduler Configuration in HPC Cluster Manager, or by using the cluscfg command or the Set-HpcClusterProperty HPC cmdlet, the cluster administrator can set the frequency of the heartbeats (HeartbeatInterval) and the number of heartbeats that a node can miss (InactivityCount) before it is marked as unreachable. For example, the default HeartbeatInterval of 30 seconds could be increased to 2 minutes when the cluster includes a large Azure deployment. The default InactivityCount is 3, which is suitable for some on-premises deployments, but it should be increased to 10 or more when Azure nodes are deployed.

Note
Starting with HPC Pack 2012 with SP1, the number of missed heartbeats is configured separately for on-premises nodes and Azure nodes. The InactivityCountAzure cluster property configures the number of missed heartbeats after which worker nodes that are deployed in Azure are considered unreachable by the cluster. The default value of InactivityCountAzure is set to 10. Starting with HPC Pack 2012 with SP1, the InactivityCount property applies exclusively to on-premises nodes.
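The heartbeat settings discussed above can be applied with the Set-HpcClusterProperty cmdlet. The following is a minimal sketch using the values suggested in this section (HeartbeatInterval is specified in seconds):

```powershell
# Run in HPC PowerShell on the head node as a cluster administrator.

# Increase the heartbeat interval from the 30-second default to 2 minutes.
Set-HpcClusterProperty -HeartbeatInterval 120

# Allow more missed heartbeats before an on-premises node is marked
# unreachable (default is 3).
Set-HpcClusterProperty -InactivityCount 10

# On HPC Pack 2012 with SP1 and later, Azure nodes are governed by a
# separate property (default is 10); 15 here is an example value.
Set-HpcClusterProperty -InactivityCountAzure 15
```

Longer intervals and higher miss counts reduce false "unreachable" transitions caused by network latency between the head node and Azure.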

If the head node or WCF broker nodes are configured for high availability in a failover cluster, you should also consider the heartbeat signal used by each failover cluster computer to monitor the availability of the other computer (or computers) in the failover cluster. By default, if a computer misses five heartbeats, once every second, communication with that computer is considered to have failed. You can use Failover Cluster Manager to decrease the frequency of heartbeats, or increase the number of missed heartbeats, in a cluster with a large Azure deployment.

If you are running service-oriented architecture (SOA) jobs on the Azure nodes, you may need to adjust monitoring timeout settings in the service registration file to manage large sessions. For more information about the SOA service configuration file, see SOA Service Configuration Files in Windows HPC Server 2008 R2.

Configure a registry key to improve the performance of file staging operations

Starting with HPC Pack 2008 R2 with SP2, you can set a registry key on the head node computer to improve the performance of diagnostic tests, clusrun operations, and the hpcfile utility on large deployments of Azure nodes. To do this, add a new DWORD value called FileStagingMaxConcurrentCalls in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\HPC. We recommend that you configure a value that is between 50% and 100% of the number of Azure nodes that you plan to deploy. To complete the configuration, after you set the FileStagingMaxConcurrentCalls value, you must stop and then restart the HPC Job Scheduler Service.

Caution
Incorrectly editing the registry may severely damage your system. Before making changes to the registry, you should back up any valued data on the computer.
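The registry change described above can be scripted. This sketch assumes a planned deployment of about 1,000 Azure nodes (so a value of 500, within the 50 to 100 percent guideline); the HPC Job Scheduler Service name HpcScheduler is an assumption here:

```powershell
# Run elevated on the head node. Back up the registry before editing.

# Create the DWORD value that raises the number of concurrent file
# staging calls (500 assumes roughly 1,000 planned Azure nodes).
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\HPC" `
    -Name "FileStagingMaxConcurrentCalls" `
    -PropertyType DWord -Value 500

# Restart the HPC Job Scheduler Service so the setting takes effect
# (the service name HpcScheduler is assumed; verify with Get-Service).
Restart-Service -Name "HpcScheduler"
```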
