Requirements to Add Azure Nodes with Microsoft HPC Pack

Updated: December 21, 2016

Applies To: HPC Pack 2016, Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2

This section describes the requirements to add Azure nodes to your on-premises HPC cluster.

To deploy Azure nodes on your Windows HPC cluster, you must be running at least Microsoft® HPC Pack 2008 R2 with Service Pack 1 (SP1), or a later version of HPC Pack. For information about the Azure features that are supported by the version of HPC Pack running on your cluster, see Azure Feature Compatibility with Microsoft HPC Pack.

For installation instructions for HPC Pack and its service packs, see the installation documentation for your version of HPC Pack.

The head node computer (or computers) where HPC Pack is installed must be fully configured (that is, all the steps required in the Deployment To-do List have been completed). Your HPC cluster can be configured in any cluster network topology (1-5) that is supported by HPC Pack. The head node must be able to connect over the Internet to Azure services. In most cases, this Internet connectivity is provided by the connection of the head node to the enterprise network. You might need to contact your network administrator to configure this connectivity.

For more information about the cluster network topologies that are supported by HPC Pack, see Configure the HPC Cluster Network.

If you are considering deploying a large number of Azure nodes, be aware that large deployments can place significant demands on your head node and the HPC cluster databases. You may need additional RAM or disk space on the head node computer, and you might need to install the cluster databases on a remote server that is running Microsoft SQL Server. For more information, see Best Practices for Large Deployments of Azure Nodes with Microsoft HPC Pack.

Important

When adding Azure nodes to an on-premises cluster, the name of the head node must adhere to the following naming rules:

  • Contains only alphanumeric characters
  • Does not begin with a numeric character

If a network firewall is running on your enterprise network, the firewall must allow TCP communication on port 443 from your head node to Azure services. Depending on the version of HPC Pack that is installed, and whether you use features such as remote desktop connections to Azure nodes, you may need to configure connectivity over additional ports. If necessary, contact your network administrator to open the necessary firewall ports. For detailed information about the ports in any internal or external firewalls that must be open by default for the deployment and operation of Azure nodes, see Firewall ports used for communication with Azure nodes.
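As a quick check of this connectivity from the head node, you can test outbound TCP port 443 to an Azure management endpoint. The sketch below is illustrative only: it assumes the Test-NetConnection cmdlet (available in Windows PowerShell 4.0 and later) and uses the classic management endpoint management.core.windows.net as an example target.

```powershell
# Test outbound TCP 443 from the head node to an Azure management endpoint.
# The endpoint name is an example for classic (service management) deployments.
Test-NetConnection -ComputerName management.core.windows.net -Port 443

# On systems without Test-NetConnection, .NET can be used directly:
$client = New-Object System.Net.Sockets.TcpClient
$client.Connect('management.core.windows.net', 443)
$client.Connected   # True if the firewall allows outbound TCP 443
$client.Close()
```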

Note

By default, the HPC Job Scheduler Service on the head node communicates with proxy nodes in Azure over the Net.TCP protocol through port 443. However, some enterprise networks do not allow Net.TCP communication through port 443, which prevents communication with Azure node deployments. If you are using at least HPC Pack 2012, you can configure the HPC Job Scheduler Service to communicate over the HTTPS protocol through port 443, which is typically allowed in an enterprise network. To do this, run the following HPC PowerShell cmdlet to change the value of the NettcpOver443 cluster property:

Set-HpcClusterProperty -NettcpOver443:$false

For more information, see Set-HpcClusterProperty.

Be aware that HTTPS communication will be slower than Net.TCP communication and may affect the performance of your cluster.

You can verify that the necessary firewall ports are open by running the Azure Firewall Ports Test, which is a diagnostic test installed in HPC Pack starting with HPC Pack 2008 R2 with SP2. This test verifies general communication from the head node to Azure through any existing internal and external firewalls. For more information, see Running Diagnostic Tests.
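The diagnostic test can also be started from HPC PowerShell. The following is a sketch, assuming the diagnostics cmdlets (Get-HpcTest, Invoke-HpcTest) available starting with HPC Pack 2008 R2 with SP2; the exact test name, parameter names, and the node name HEADNODE01 are assumptions to verify against your cluster.

```powershell
# List the diagnostic tests to find the exact name of the
# Azure Firewall Ports Test on your cluster.
Get-HpcTest | Where-Object { $_.Name -like '*Firewall*' }

# Run the test against the head node. Check Get-Help Invoke-HpcTest
# for the exact parameter set in your version of HPC Pack.
Invoke-HpcTest -Name 'Azure Firewall Ports Test' -NodeName HEADNODE01
```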

Advanced firewall and proxy client configuration (optional)

If your enterprise network uses a proxy server or network firewall device that manages Internet traffic, you may need to perform additional configuration steps on the head node, or on your proxy server or network firewall device, to allow the HPC Pack services to communicate with Azure. This is necessary only in some cluster and network environments.

To deploy and use the Azure nodes, the following services that run under the system account on an HPC Pack head node must be able to communicate over the Internet with Azure services:

  • HPCManagement

  • HPCScheduler

  • HPCBrokerWorker

Because these services run under the system account, they may be blocked by certain proxy servers or network firewalls unless those devices are configured to allow their traffic. Depending on your network environment, you may also need to configure client software on the head node to associate specific user credentials with the services.

Important

  • You should consult with your network administrator and the vendor of your proxy server or network firewall to find out if a proxy server or network firewall on your enterprise network will block the traffic for the HPCManagement, HPCScheduler, and HPCBrokerWorker services for HPC Pack. If additional configuration is needed, the specific configuration steps will depend on factors such as your specific network and security policies, your proxy server or network firewall, and whether and what type of firewall client software is running on the head node.
  • The Azure Firewall Ports Test can help detect this issue. If all of the firewall ports required for communication between HPC Pack and Azure are open, but the diagnostic test fails, this can indicate a problem with the configuration of a proxy server or network firewall.

You must obtain or have access to an Azure subscription account. At a minimum, an Azure cloud service, an Azure storage account, and a management certificate must be configured to support a deployment of Azure nodes. Depending on the version of HPC Pack that is installed on your cluster and the subscription terms, you may be able to configure or use other Azure features or services from a subscription in your deployment. For more information, see Azure Feature Compatibility with Microsoft HPC Pack.

Note

Each subscription limits the number of role instances that can be provisioned in a cloud service, as well as the number of cloud services and storage accounts. If you are planning a large deployment of Azure nodes, you may need multiple subscriptions or multiple cloud services, and you may need to request an increase in the quota of role instances. For more information, see Best Practices for Large Deployments of Azure Nodes with Microsoft HPC Pack.

Azure management certificate

Before you can deploy Azure nodes in your Windows HPC cluster, a management certificate must be uploaded to your Azure subscription. A corresponding certificate must be configured on the head node computer (or head node computers, if the head node is configured for high availability). For certain scenarios with some versions of HPC Pack, a certificate must also be configured on a client computer that is used to manage the cluster and that needs a connection to Azure. The management certificate must be a valid X.509 v3 certificate with a key size of at least 2048 bits and is required to authenticate access from the HPC cluster to resources in the Azure subscription.

Note

The same management certificate can be used for more than one Azure node deployment from a subscription.

If you do not already have a management certificate configured in your Azure subscription, you have the following options to obtain one:

  • In versions of HPC Pack before HPC Pack 2016, use the Default Microsoft HPC Azure Management certificate that is generated automatically on the head node when HPC Pack is installed. This self-signed certificate is unique to your installation of HPC Pack on the head node and is intended only for testing purposes and proof-of-concept deployments. The certificate file is located on the head node computer at %CCP_HOME%\bin\hpccert.cer.

  • Obtain a certificate from a public or enterprise certification authority.

  • Create a self-signed X.509 v3 certificate. For example, to create the management certificate by using the Certificate Creation Tool (makecert.exe) in Visual Studio, see Create a Management Certificate for Azure.

  • Reuse an existing certificate that is configured in the Azure subscription.
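The self-signed option above can be sketched with makecert.exe (from the Windows SDK or Visual Studio tools), following the pattern documented for Azure management certificates; "AzureMgmtCert" and the output file name are placeholders.

```powershell
# Create a self-signed X.509 v3 certificate with a 2048-bit key, place it
# in the current user's Personal store, and export the public .cer file.
# "AzureMgmtCert" is a placeholder name.
makecert -sky exchange -r -n "CN=AzureMgmtCert" -pe -a sha1 -len 2048 -ss My "AzureMgmtCert.cer"

# On newer systems, New-SelfSignedCertificate can be used instead
# (parameter support varies by Windows version):
New-SelfSignedCertificate -Subject "CN=AzureMgmtCert" -KeyLength 2048 `
  -CertStoreLocation Cert:\CurrentUser\My
```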

If you obtain or use a new management certificate or the Default Microsoft HPC Azure Management certificate, upload the .cer file to your Azure subscription by using the Azure Management Portal.

For information and procedures to import the management certificate to the required certificate stores on the head node or head nodes and on client computers (if required), see Options to Configure the Azure Management Certificate for Azure Burst Deployments.
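As an illustration of the import step, the PKI module cmdlets (available on Windows Server 2012 and later) can place the certificate in the required stores on a head node. The file paths and password prompt below are placeholders, and which stores are required depends on your HPC Pack version and scenario, as described in the topic referenced above.

```powershell
# Import the certificate with its private key into the local computer's
# Personal store on the head node (file path is a placeholder).
$pfxPassword = Read-Host -AsSecureString -Prompt 'PFX password'
Import-PfxCertificate -FilePath C:\certs\AzureMgmtCert.pfx `
  -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword

# If the public certificate must also be trusted, import the .cer file
# into the Trusted Root Certification Authorities store:
Import-Certificate -FilePath C:\certs\AzureMgmtCert.cer `
  -CertStoreLocation Cert:\LocalMachine\Root
```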

Azure cloud service and storage account

If you have not already done so, create a cloud service and a storage account in your Azure subscription to add Azure nodes to your Windows HPC cluster. You can perform these procedures by using the classic portal, or other methods to create resources in the classic deployment model.

  • You must configure a separate cloud service for each Azure node template that you create. However, you can configure a storage account that is used in multiple node templates.

  • As a best practice, the storage account that is used for an Azure node deployment should not be used for purposes other than node provisioning. If you plan to use Azure storage to move job and task data to and from the head node or to and from the Azure nodes, configure a separate storage account for that purpose.

  • To optimize performance, for each Azure node template, configure the cloud service, the storage account (or accounts), and any geographically bound features (such as an Azure virtual network) in the same region or affinity group.

  • If you have business continuity requirements for your Azure node deployments, you should plan to create cloud services and storage accounts in more than one geographic region to deploy Azure nodes.

  • Do not deploy a separate, custom cloud service package to a cloud service that is used to add Azure nodes to a Windows HPC cluster. A cloud service package will be automatically deployed by HPC Pack when the Azure nodes are provisioned.
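Creating these classic resources can also be scripted with the Azure Service Management (classic) PowerShell module instead of the portal. The following is a sketch; the service name, storage account name, and region are placeholders, and cmdlet availability depends on the module version installed.

```powershell
# Classic (service management) Azure PowerShell module.
# Names and region below are placeholders.
Add-AzureAccount                       # sign in to the subscription
New-AzureService -ServiceName 'hpcburst01' -Location 'West US'
New-AzureStorageAccount -StorageAccountName 'hpcburststorage01' -Location 'West US'
```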

Pricing considerations

  • The Azure subscription will be charged for the time that the Azure nodes in a deployment are available, as well as for the compute and storage services that are used. For more information, review the terms of the subscription for Azure. For general information, see Azure Pricing Overview.

  • Each time that you start (provision) a set of Azure nodes by using HPC Pack, additional proxy role instances are automatically configured in Azure to facilitate communication between the head node and the Azure nodes. Depending on your version of HPC Pack, this number is either fixed (2 proxy nodes per deployment with HPC Pack 2008 R2) or configurable (starting with HPC Pack 2012). The proxy role instances incur charges in Azure along with the Azure node instances, and they consume cores that are allocated to the subscription (and thus reduce the number of cores that are available to deploy Azure nodes). For more information, see Set the Number of Azure Proxy Nodes.
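Starting with HPC Pack 2012, the proxy node count is exposed as a cluster property that can be inspected and set from HPC PowerShell. The property name AzureProxyNodeCount below is an assumption; verify the exact name on your cluster as described in Set the Number of Azure Proxy Nodes.

```powershell
# Inspect cluster properties to find the proxy node setting
# (the property name is an assumption; verify on your cluster).
Get-HpcClusterProperty | Where-Object { $_.Name -like '*Proxy*' }

# Example: set the number of proxy role instances per deployment.
Set-HpcClusterProperty -AzureProxyNodeCount 3
```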

Burst to Azure Worker Instances with Microsoft HPC Pack
Best Practices for Large Deployments of Azure Nodes with Microsoft HPC Pack