Release Notes for Microsoft HPC Pack 2012 R2

Updated: March 5, 2014

Applies To: Microsoft HPC Pack 2012 R2

These release notes address late-breaking issues and information for the high-performance computing (HPC) cluster administrator about Microsoft® HPC Pack 2012 R2. You can install HPC Pack 2012 R2 to upgrade an existing HPC cluster that is currently running HPC Pack 2012 with Service Pack 1 (SP1). You should upgrade all head nodes, compute nodes, Windows Communication Foundation (WCF) broker nodes, workstation nodes, unmanaged server nodes, and computers that are running the HPC Pack client utilities. For important information about new features in HPC Pack 2012 R2, including updated system requirements, see What's New in Microsoft HPC Pack 2012 R2.

To download the upgrade packages for HPC Pack 2012 R2, go to the Microsoft Download Center.

Note
In addition to upgrade packages for existing HPC Pack 2012 with SP1 clusters, you can download an HPC Pack 2012 R2 installation package for a new cluster installation. To get started with a new cluster installation, see the Getting Started Guide for Microsoft HPC Pack 2012 R2 and HPC Pack 2012. If you are migrating from an HPC Pack 2008 R2 cluster, see Migrate a Cluster to HPC Pack 2012 R2 or HPC Pack 2012.

In this topic:

  • Before you upgrade to HPC Pack 2012 R2

  • Upgrade the head node to Microsoft HPC Pack 2012 R2

  • Upgrade compute nodes, WCF broker nodes, workstation nodes, and unmanaged server nodes to HPC Pack 2012 R2

  • Redeploy existing Windows Azure nodes

  • Upgrade client computers to HPC Pack 2012 R2

  • Uninstall HPC Pack 2012 R2

  • Known issues

Before you upgrade to HPC Pack 2012 R2
Important
The upgrade installation package for HPC Pack 2012 R2 does not support uninstallation back to HPC Pack 2012 with SP1. After you upgrade, if you want to downgrade to HPC Pack 2012 with SP1, you must completely uninstall the HPC Pack 2012 R2 features from the head node computer and the other computers in your cluster. If you want, you can reinstall HPC Pack 2012 with SP1 and restore the data in the HPC databases.

Perform the following actions before you upgrade to HPC Pack 2012 R2:

  • Take all compute nodes, workstation nodes, and unmanaged server nodes offline and wait for all current jobs to drain.

  • If you have a node template in which an Automatic availability policy is configured, set the availability policy to Manual.

  • Stop all existing Windows Azure nodes so that they are in the Not-Deployed state. If you do not stop them, you may be unable to use or delete the nodes from HPC Cluster Manager after the upgrade, but charges for their use will continue to accrue in Windows Azure. You must redeploy (provision) the Windows Azure nodes after you upgrade the head node.

    Note
    Under certain conditions, the upgrade installation program might prompt you to stop Windows Azure nodes before it upgrades the head node, even if you have already stopped all Windows Azure nodes (or do not have any Windows Azure nodes deployed). In this case, you can safely continue the installation.

  • Ensure that all diagnostic tests have finished or are canceled.

  • Close any HPC Cluster Manager and HPC Job Manager applications that are connected to the cluster head node.

  • After all active operations on the cluster have stopped, back up all HPC databases by using a backup method of your choice.

  • If you added drivers to an operating system image on an HPC Pack 2012 with SP1 head node, or plan to add drivers after installation of HPC Pack 2012 R2, you must install an updated version of the Deployment Image Servicing and Management (DISM) tool on the head node. If you do not do this, you will be unable to add drivers to operating system images after the upgrade, or deploy nodes using an operating system image to which drivers were previously added, until the updated DISM tool is configured. For more information and instructions to install the DISM tool from the Windows ADK for Windows 8.1, see Adding drivers to operating system images on an HPC Pack 2012 R2 head node running on Windows Server 2012 could fail in this topic.
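The pre-upgrade steps above can be scripted from HPC PowerShell on the head node. The following is a hedged sketch, not a definitive procedure: the node group names, the SQL Server instance name, and the backup path are examples that you must adjust for your cluster.

```powershell
# Sketch of the pre-upgrade checklist, assuming the HPC PowerShell snap-in
# is available on the head node. Group names below are examples.
Add-PSSnapin Microsoft.HPC

# Take on-premises nodes offline so that running jobs can drain.
Get-HpcNode -GroupName ComputeNodes,WorkstationNodes | Set-HpcNodeState -State Offline

# Wait until no jobs remain in the Running state before continuing.
while (Get-HpcJob -State Running -ErrorAction SilentlyContinue) {
    Start-Sleep -Seconds 60
}

# Stop any Windows Azure nodes so they return to the Not-Deployed state.
Get-HpcNode -GroupName AzureNodes -ErrorAction SilentlyContinue | Stop-HpcAzureNode

# Back up the HPC databases with a method of your choice; for example, with
# sqlcmd against the SQL Server instance that hosts them (instance name and
# path are illustrative assumptions).
sqlcmd -S .\COMPUTECLUSTER -Q "BACKUP DATABASE HPCScheduler TO DISK='C:\Backup\HPCScheduler.bak'"
```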

Additional considerations for installing the upgrade

  • When you upgrade, several settings that are related to HPC services are reset to their default values, including the following:

    • Firewall rules

    • Event log settings for all HPC services

    • Service configuration properties such as dependencies and startup options

    • Service configuration files for the HPC services (for example, HpcSession.exe.config)

    • If the head node or WCF broker nodes are configured for high availability in a failover cluster, the HPC Pack related resources that are configured in the failover cluster

    After you upgrade, you may need to re-create settings that you have customized for your cluster or restore them from backup files.

    Note
    You can find more installation details in the following log file after you upgrade the head node: %windir%\temp\HPCSetupLogs\hpcpatch-DateTime.txt

  • When you upgrade the head node, the files that the head node uses to deploy a compute node or a WCF broker node from bare metal are also updated. Later, if you install a new compute node or WCF broker node from bare metal or if you reimage an existing node, the upgrade is automatically applied to that node.


Upgrade the head node to Microsoft HPC Pack 2012 R2
To upgrade the head node to Microsoft HPC Pack 2012 R2
  1. Download the x64 version of the HPC Pack 2012 R2 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a network location.

  2. Run the installation program as an administrator from the location where you saved it.

  3. Read the informational message that appears. If you are ready to upgrade, click OK.

Note
  • After the installation completes, if you are prompted, restart the computer.

  • You can confirm that the head node is upgraded to HPC Pack 2012 R2. To view the version number in HPC Cluster Manager, on the Help menu, click About. The server version number and the client version number that appear will be similar to 4.2.4400.

If you have set up a head node for high availability in the context of a failover cluster, use the following procedure to apply the update.

To upgrade a high-availability head node to HPC Pack 2012 R2
  1. Download the x64 version of the HPC Pack 2012 R2 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a network location.

  2. Take the following high-availability HPC services offline by using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, hpcsession, and hpcsoadiagmon.

  3. Upgrade the active head node by running the installation program as an administrator from the location where you saved it.

    After you upgrade the active head node, in most cases, the active head node restarts and fails over to the second head node.

    Note
    Because the second head node is not upgraded, Failover Cluster Manager might report a failed status for the resource group and the HPC services.

  4. Use the following procedure to upgrade the second head node:

    1. Take the following high-availability HPC services offline by using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, hpcsession, and hpcsoadiagmon.

    2. Verify that the second head node is the active head node. If it is not, use Failover Cluster Manager to make the second head node the active head node.

    3. Upgrade the second head node.

      If you have additional head nodes in the cluster, move the current active head node to passive. After failover occurs, upgrade the current active head node according to the preceding steps.

Important
While you are upgrading each head node that is configured for high availability, leave the Microsoft SQL Server resources online.
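Instead of using Failover Cluster Manager, you can take the high-availability HPC services offline with the Failover Clustering PowerShell module. This sketch assumes the resource names match the service names listed in step 2 of the procedure above; verify the actual resource names in your failover cluster first.

```powershell
# Sketch: take the high-availability HPC services offline before upgrading
# a head node. Resource names follow step 2 above; adjust to your cluster.
Import-Module FailoverClusters

$hpcServices = "hpcscheduler","hpcsdm","hpcdiagnostics",
               "hpcreporting","hpcsession","hpcsoadiagmon"

foreach ($svc in $hpcServices) {
    Stop-ClusterResource -Name $svc
}

# As noted above, leave the Microsoft SQL Server resources online.
```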


Upgrade compute nodes, WCF broker nodes, workstation nodes, and unmanaged server nodes to HPC Pack 2012 R2

To work with a head node that is upgraded to HPC Pack 2012 R2, you must also upgrade existing compute nodes and WCF broker nodes. You can optionally upgrade your existing workstation nodes and unmanaged server nodes. Depending on the type of node, you can use one of the following methods to upgrade to HPC Pack 2012 R2:

  • Upgrade existing nodes that are running HPC Pack 2012 with SP1, either manually or by using a clusrun command.

    Note
    If you do not have administrative permissions on workstation nodes and unmanaged server nodes in the cluster, the clusrun command might not be able to upgrade those nodes. In these cases, the administrators of the workstations and unmanaged servers should perform the upgrade.

  • Reimage an existing compute node or broker node that was deployed by using an operating system image. For more information, see Reimage Compute Nodes.

    Note
    • Ensure that you edit the node template to add a step to copy the MS-MPI installation program to each node. Starting in HPC Pack 2012 R2, MPISetup.exe is installed in a separate folder in the REMINST share on the head node, and it is not automatically installed when you deploy a node using a template created in an earlier version of HPC Pack. For more information, see Node templates for bare metal deployment must include a step to copy MS-MPI setup files in this topic.

    • After you upgrade the head node to HPC Pack 2012 R2, if you install a new node from bare metal or if you reimage an existing node, HPC Pack 2012 R2 is automatically installed on that node.

To use clusrun to upgrade existing nodes to HPC Pack 2012 R2
  1. Download the appropriate version of the HPC Pack 2012 R2 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a shared folder such as \\headnodename\install.

  2. In HPC Cluster Manager, view nodes by node group to identify a group of nodes that you want to upgrade; for example, ComputeNodes.

  3. Take the nodes in the node group offline.

  4. Open an elevated command prompt and type the appropriate clusrun command to install the upgrade. The following command is an example to install the x64 version.

    clusrun /nodegroup:ComputeNodes \\headnodename\install\HPC2012R2-x64.exe -unattend -SystemReboot
    Note
    The SystemReboot parameter is required. It causes the nodes to restart after the upgrade completes.

    After the upgrade and the nodes in the group restart, bring the nodes online.
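Steps 3 and 4 of the clusrun procedure can be combined in a short HPC PowerShell sketch. The share path and node group name are the examples used above; substitute your own values.

```powershell
# Sketch of steps 3 and 4: take the node group offline, run the upgrade
# through clusrun, then bring the nodes back online after they restart.
Add-PSSnapin Microsoft.HPC

Get-HpcNode -GroupName ComputeNodes | Set-HpcNodeState -State Offline

clusrun /nodegroup:ComputeNodes \\headnodename\install\HPC2012R2-x64.exe -unattend -SystemReboot

# After the upgrade completes and the nodes have restarted, bring them online.
Get-HpcNode -GroupName ComputeNodes | Set-HpcNodeState -State Online
```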

To upgrade individual nodes manually to HPC Pack 2012 R2, you can copy the installation program to a shared folder on the head node. Then, access the existing nodes by making a remote connection to run the upgrade installation program from the shared folder.

Important
If you have WCF broker nodes that are configured for high availability in a failover cluster, upgrade the high-availability broker nodes as follows:

  1. Upgrade the active broker node to HPC Pack 2012 R2.

  2. Fail over the passive broker node to the active broker node.

  3. Upgrade the active broker node (that is not yet updated).


Redeploy existing Windows Azure nodes

If you previously added Windows Azure nodes to your HPC Pack 2012 with SP1 cluster, you must start (provision) those nodes again to install the updated HPC Pack components. If you previously changed the availability policy of the nodes from Automatic to Manual, you can reconfigure an Automatic availability policy in the Windows Azure node template so that the nodes come online and offline at scheduled intervals. For more information, see Steps to Deploy Windows Azure Nodes with Microsoft HPC Pack.
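Starting the stopped Windows Azure nodes again can also be done from HPC PowerShell. This is a hedged sketch; the group name AzureNodes and the state filter are illustrative assumptions.

```powershell
# Sketch: provision the stopped Windows Azure nodes again after the head
# node upgrade, then bring them online. Group name is an example.
Add-PSSnapin Microsoft.HPC

$azureNodes = Get-HpcNode -GroupName AzureNodes
$azureNodes | Start-HpcAzureNode

# Once provisioning completes, bring the nodes online.
$azureNodes | Set-HpcNodeState -State Online
```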


Upgrade client computers to HPC Pack 2012 R2

To update computers on which the HPC Pack 2012 with SP1 client utilities are installed, ensure that any HPC client applications, including HPC Cluster Manager and HPC Job Manager, are stopped. Then, upgrade the computers to HPC Pack 2012 R2.

To upgrade client computers to HPC Pack 2012 R2
  1. Download the appropriate version of the HPC Pack 2012 R2 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a network location.

  2. Run the installation program as an administrator from the location where you saved it.

  3. Read the informational message that appears. If you are ready to upgrade, click OK.


Uninstall HPC Pack 2012 R2
Important
The upgrade installation package for HPC Pack 2012 R2 does not support uninstallation back to HPC Pack 2012 with SP1. After you upgrade, if you want to downgrade to HPC Pack 2012 with SP1, you must completely uninstall the HPC Pack 2012 R2 features from the head node computer and the other computers in your cluster. If you want, you can reinstall HPC Pack 2012 with SP1 and restore the data in the HPC databases.

To uninstall HPC Pack 2012 R2, uninstall the features in the following order:

  • HPC Pack 2012 R2 Web Components (if they are installed)

  • HPC Pack 2012 R2 Key Storage Provider (if it is installed)

  • HPC Pack 2012 R2 Services for Excel 2010

  • HPC Pack 2012 R2 Server Components

  • HPC Pack 2012 R2 Client Components

  • HPC Pack 2012 R2 Microsoft MPI

Important
Not all features are installed on all computers. For example, HPC Pack 2012 R2 Server Components is not installed when you choose to install only the client components.

When HPC Pack is installed on the head node, other programs are installed with it. After uninstalling HPC Pack 2012 R2, you can remove the following programs if they will no longer be used:

  • Microsoft Report Viewer Redistributable 2010 SP1

  • Microsoft SQL Server 2012 Express

    Note
    This program also includes Microsoft SQL Server Setup Support Files.

  • Microsoft SQL Server 2012 Native Client

Additionally, the following server roles and features might have been added when HPC Pack was installed, and they can be removed if they will no longer be used:

  • Dynamic Host Configuration Protocol (DHCP) Server server role

  • File Services server role

  • File Server Resource Manager role service

  • Routing and Remote Access Service server role

  • Windows Deployment Services server role

  • Microsoft .NET Framework feature

  • Message Queuing feature


Known issues


The HPC Pack iSCSI provider is not automatically updated during the upgrade

If you plan to deploy on-premises compute nodes from bare metal over Internet SCSI (iSCSI) from a network-attached storage array, ensure that you manually install the iSCSI provider that is available for HPC Pack 2012 R2. The iSCSI provider is not automatically updated during the upgrade.

If iSCSI storage is on Windows Server 2012 R2, a compute node might fail to deploy as a base node

Under certain conditions, if iSCSI storage is on a server running Windows Server 2012 R2, a compute node will fail to be deployed as an iSCSI base node. The problem occurs when all of the following are true:

  • The node was previously deployed as a base node.

  • iSCSI boot nodes were deployed with this base node image.

  • Those iSCSI boot nodes were later redeployed with a different base node image.

Under these conditions, a DISKPART error occurs during base node deployment on this node. Redeployment will fail because the related disk on the iSCSI storage server is in an error state.

Workaround

Manually delete the disk for the node on the iSCSI storage server, and deploy the base node again.

To delete the base node disk on the iSCSI storage server
  1. On the iSCSI storage server, navigate to the iSCSI management console. In Server Manager, click File and Storage Services, and then click iSCSI.

  2. Locate the iSCSI virtual disk for the base node that is in the Error state. For example, if the base node name is iscsi-cn5, the virtual disk name will be iscsi-cn5-base.vhdx.

  3. Manually delete the disk.

You can now redeploy the base node. It will recreate a clean disk for this node.
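The Server Manager steps above can alternatively be performed with the iSCSI Target Server cmdlets, if that module is installed on the storage server. This is a sketch under assumptions: the virtual disk path and target name follow the iscsi-cn5 example from step 2 and must be adjusted to your environment.

```powershell
# Sketch: remove the failed base node disk with the iSCSI Target cmdlets
# instead of Server Manager. Path and target name follow the example above.
Import-Module IscsiTarget

# Remove any target mapping for the virtual disk first, then delete it.
Remove-IscsiVirtualDiskTargetMapping -TargetName "iscsi-cn5" -Path "E:\iSCSIVirtualDisks\iscsi-cn5-base.vhdx"
Remove-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\iscsi-cn5-base.vhdx"
```

You can now redeploy the base node as described above.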

Running the hpcpack.exe command after upgrade to HPC Pack 2012 R2 causes an unhandled exception

If you attempt to run the hpcpack.exe command after upgrading from HPC Pack 2012 with SP1 to HPC Pack 2012 R2, you will see an error message similar to Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 'Microsoft.WindowsAzure.Storage.DataMovement, Version=2.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The problem occurs because the binary file Microsoft.WindowsAzure.Storage.DataMovement.dll is not replaced correctly during the upgrade. The hpcpack command runs normally following a new installation of HPC Pack 2012 R2.

Workaround

To work around this problem, obtain a version of Microsoft.WindowsAzure.Storage.DataMovement.dll (version 1.0.8696.442) that is compatible with HPC Pack 2012 R2. Use this version to replace %CCP_HOME%\bin\Microsoft.WindowsAzure.Storage.DataMovement.dll (version 1.0.8698.18) on the upgraded nodes where you intend to run the hpcpack command.

You can obtain the updated version of Microsoft.WindowsAzure.Storage.DataMovement.dll in one of three ways.

  1. Perform a complete installation of HPC Pack 2012 R2 (not an upgrade) on another computer. The updated version of Microsoft.WindowsAzure.Storage.DataMovement.dll is located in the %CCP_HOME%\bin folder.

  2. Add and deploy one or more Windows Azure nodes from the upgraded head node. The updated version of Microsoft.WindowsAzure.Storage.DataMovement.dll can be found in the %CCP_HOME% folder on the Windows Azure nodes.

  3. On the upgraded node, extract the updated version of Microsoft.WindowsAzure.Storage.DataMovement.dll from %CCP_HOME%\bin\HpcAzureRuntime.cspkg.

To extract Microsoft.WindowsAzure.Storage.DataMovement.dll from HpcAzureRuntime.cspkg
  1. On the upgraded head node, copy %CCP_HOME%\bin\HpcAzureRuntime.cspkg to a temporary folder such as C:\temp.

  2. Rename C:\temp\HpcAzureRuntime.cspkg to C:\temp\HpcAzureRuntime.zip.

  3. Extract the file C:\temp\HpcAzureRuntime.zip\HpcWorkerRole1_ad046378-8951-4cdb-989c-5143f734d58c.cssx to the C:\temp folder.

  4. Rename HpcWorkerRole1_ad046378-8951-4cdb-989c-5143f734d58c.cssx to HpcWorkerRole1_ad046378-8951-4cdb-989c-5143f734d58c.zip.

  5. Extract the file C:\temp\HpcWorkerRole1_ad046378-8951-4cdb-989c-5143f734d58c.zip\approot\Microsoft.WindowsAzure.Storage.DataMovement.dll to the C:\temp folder.

  6. Replace the file %CCP_HOME%\bin\Microsoft.WindowsAzure.Storage.DataMovement.dll (version 1.0.8698.18) on the node with C:\temp\Microsoft.WindowsAzure.Storage.DataMovement.dll (version 1.0.8696.442).

  7. Replace the file on all remaining upgraded nodes (including client computers, compute nodes, broker nodes, workstation nodes, and unmanaged server nodes) where hpcpack will be used.

    Note
    On Windows Azure nodes you do not need to manually update Microsoft.WindowsAzure.Storage.DataMovement.dll. The correct version is automatically installed on these nodes.

  8. Remove the temporary files that you generated under C:\temp.
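Steps 1 through 6 of the extraction procedure can be scripted. The following is a hedged sketch that uses the .NET 4.5 ZipFile class (available on the head node); it assumes the default %CCP_HOME% location and the package names shown in the steps above.

```powershell
# Sketch of steps 1-6: extract the updated DataMovement DLL from the .cspkg.
Add-Type -AssemblyName System.IO.Compression.FileSystem

$temp = "C:\temp"
New-Item -ItemType Directory -Path $temp -Force | Out-Null

# Copy the package and extract it as a .zip archive.
Copy-Item "$env:CCP_HOME\bin\HpcAzureRuntime.cspkg" "$temp\HpcAzureRuntime.zip"
[System.IO.Compression.ZipFile]::ExtractToDirectory("$temp\HpcAzureRuntime.zip", "$temp\cspkg")

# Rename the inner .cssx package to .zip and extract it as well.
$cssx = Get-ChildItem "$temp\cspkg" -Filter "HpcWorkerRole1_*.cssx"
Rename-Item $cssx.FullName ($cssx.BaseName + ".zip")
[System.IO.Compression.ZipFile]::ExtractToDirectory("$temp\cspkg\$($cssx.BaseName).zip", "$temp\cssx")

# Replace the DLL in %CCP_HOME%\bin with the extracted copy (step 6).
Copy-Item "$temp\cssx\approot\Microsoft.WindowsAzure.Storage.DataMovement.dll" `
          "$env:CCP_HOME\bin\Microsoft.WindowsAzure.Storage.DataMovement.dll" -Force
```

As in step 7, repeat the final copy on every upgraded node where hpcpack will be used, and then remove C:\temp.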

Node templates for bare metal deployment must include a step to copy MS-MPI setup files

If you are upgrading to HPC Pack 2012 R2 and have an existing node template that deploys nodes from bare metal or by using iSCSI boot nodes, node deployments that use the existing node template will fail. This is because, starting in HPC Pack 2012 R2, the MS-MPI installation file (MSMPISetup.exe) is installed in a separate folder in the REMINST share on the head node and MS-MPI is not properly installed using the existing node templates. The following node templates are affected:

  • Compute node templates with an operating system specified

  • Broker node templates with an operating system specified

  • iSCSI base node templates

Workaround

You can manually delete any affected node template and create a new node template with HPC Pack 2012 R2. A new node template that you create automatically includes the necessary steps to copy MSMPISetup.exe and the HPC Pack setup files from the head node to the new node. For more information about creating the template, see Create a Node Template.

Alternatively, use Node Template Editor to manually edit each existing node template to add a Unicast Copy task to copy MSMPISetup.exe to the node, before the step to install HPC Pack.

To add a Unicast Copy task to the node template
  1. In Node Template Editor, click Add Task, click Deployment, and then click Unicast Copy.

  2. Set the following values:

    • Destination: %INSTALLDRIVE%\HpcInstall\MPI

    • Source: z:\MPI

    • Directory: True

  3. Click Move Up or Move Down to ensure that the task runs after the Mount Share task and before the Install HPC Pack task.

  4. Click Save.

For more information about editing the template, see Edit a Node Template.

Adding drivers to operating system images on an HPC Pack 2012 R2 head node running on Windows Server 2012 could fail

The following actions related to drivers for operating system images will show an error message or fail when you upgrade to or install an HPC Pack 2012 R2 head node on a typical installation of Windows Server 2012:

  • Adding drivers to an operating system (WIM) image that is used for bare metal or iSCSI deployment.

  • Configuring the cluster network in topology 1, 2, 3, or 4, if drivers are present on the head node after a previous version of HPC Pack was uninstalled.

  • Deploying nodes with an operating system image to which drivers were added previously in HPC Pack 2012 with SP1.

You might see an error message similar to: The operation on the Windows PE image failed. Install an updated DISM tool on the head node, and then try again.

The problem occurs because these actions require a version of the Deployment Image Servicing and Management (DISM) tool on the head node that is compatible with Windows Server 2012 R2. The version of DISM that is installed by default in Windows Server 2012 is not compatible with later versions of Windows Server, including Windows Server 2012 R2. For more information about the compatibility of the DISM tool, see DISM Overview.

Workaround

On the head node, configure the DISM tool from the Windows Assessment and Deployment Kit (Windows ADK) for Windows 8.1, and then try the action again.

Note
If you already upgraded to HPC Pack 2012 R2, and drivers were previously added to operating system images by using HPC Pack 2012, the drivers must be removed. Then, after the Windows ADK 8.1 DISM tool is configured, add the drivers again. If you configure the Windows ADK 8.1 DISM tool before upgrading, you do not need to add drivers again.

To configure the DISM tool from the Windows ADK 8.1
  1. On a computer running Windows Server 2012 R2 or Windows 8.1, download the Windows ADK 8.1 from the Microsoft Download Center.

  2. Run adksetup.exe.

  3. In the installation wizard, on the Select the features you want to install page, select Deployment Tools. Complete the wizard.

    By default, the Windows ADK 8.1 DISM tool is installed in the following folder: C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\amd64\DISM.

  4. Copy the entire Windows ADK 8.1 DISM folder to the head node, to a folder such as C:\DISM.

  5. On the head node, configure the DISMDIR environment variable to point to the location of the new DISM tool. For example, type the following command at a command prompt:

    setx /m DISMDIR "C:\DISM"
  6. If the head node is configured for high availability in a failover cluster, perform the preceding two steps on each head node in the cluster.
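Steps 4 and 5 can be combined into a short script to run on each head node. This sketch assumes the default Windows ADK 8.1 installation path and the C:\DISM destination folder shown above.

```powershell
# Sketch of steps 4 and 5: copy the Windows ADK 8.1 DISM folder to the head
# node and point the DISMDIR environment variable at it.
$source = "C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\amd64\DISM"
Copy-Item $source "C:\DISM" -Recurse

# Set the machine-wide DISMDIR environment variable (step 5).
setx /m DISMDIR "C:\DISM"
```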

Bare metal deployment of some nodes may pause or fail to complete

Under certain conditions, some nodes in a bare metal deployment will pause or fail to deploy because of a corruption of data related to the node deployment tasks. Symptoms include:

  • Some nodes fail to download and boot into the Windows Preinstallation Environment (Windows PE)

  • Some nodes pause for a long time at the step “Sending PXE command to boot node to the current OS”

Workarounds

If nodes fail to boot into Windows PE, cancel the deployment of the affected nodes. Stop and restart the HPC Management Service, and then redeploy the nodes.

If nodes pause while booting to the current OS, restart the nodes. If the deployment step has already timed out, stop and restart the HPC Management Service, and then redeploy the nodes.

New location of MS-MPI executable files may introduce breaking changes in existing MPI applications

When installed with HPC Pack 2012 R2, MS-MPI executable files such as mpiexec.exe are installed in the MSMPI_BIN folder on a cluster node (by default, %PROGRAMFILES%\Microsoft MPI\Bin), not the %CCP_HOME%\Bin folder used in previous versions of HPC Pack. Because of this change, existing scripts or configuration files for MPI applications that specify the %CCP_HOME%\Bin location for MS-MPI executable files will fail.

Workaround

Uninstall MS-MPI, and then reinstall MS-MPI by using a version of the MS-MPI Redistributable Package that is supported by HPC Pack 2012 R2 (at least version 4.2.4400). To maintain MPI application compatibility with previous versions of HPC Pack, install the MS-MPI executable files to the same folder used to install other HPC Pack program files (such as C:\Program Files\Microsoft HPC Pack 2012\). You can download the MS-MPI Redistributable Package from the Microsoft Download Center.

Alternatively, update any existing scripts or configuration files to use the MSMPI_BIN environment variable to specify the location of the MS-MPI executable files on an HPC Pack 2012 R2 node.
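For example, a launch script can reference the MSMPI_BIN environment variable instead of a hard-coded %CCP_HOME%\Bin path. The application name and share path below are placeholders, not part of HPC Pack.

```powershell
# Example: launch an MPI application through MSMPI_BIN rather than a
# hard-coded %CCP_HOME%\Bin path. myapp.exe is a placeholder name.
& "$env:MSMPI_BIN\mpiexec.exe" -n 8 \\headnodename\apps\myapp.exe
```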

Configure setting for improved SOA broker throughput on Windows Azure nodes

You might experience performance problems with service oriented architecture (SOA) services that are running on Windows Azure nodes. This can occur because the value of the serviceRequestPrefetchCount setting in the SOA service registration file does not permit sufficient broker throughput. For SOA services that are running on Windows Azure nodes, a suggested value for this setting is 100.

To configure the setting, add serviceRequestPrefetchCount in the loadBalancing tag in the Microsoft.Hpc.broker node in the SOA service registration file. For example:

<microsoft.Hpc.broker>
  <loadBalancing messageResendLimit="3"
                 serviceRequestPrefetchCount="100"
                 maxConnectionCountPerAzureProxy="64"/>
</microsoft.Hpc.broker>

For more information about the service registration file, see SOA Service Configuration Files.

Before general availability, the A8 or A9 instance size will cause Windows Azure burst deployment failures

Before the Windows Azure A8 and A9 instance sizes are generally available in selected geographic regions in early 2014, you will not be able to deploy Windows Azure nodes configured to use these sizes. When you attempt to start (provision) the nodes that are added using either of these sizes, the deployment will fail because the size is not recognized by Windows Azure services. You will see an error message in the Provisioning Log similar to: Windows Azure deployment failure (BadRequest): Value 'A8' specified for parameter 'RoleSize' is invalid.

Until the A8 and A9 sizes are available, delete any nodes you configured with these sizes. Then deploy Windows Azure nodes using a supported size such as Small, Medium, Large, ExtraLarge, A5, A6, or A7. For more information about available sizes, see Virtual Machine and Cloud Service Sizes for Windows Azure.