

Release Notes for Microsoft HPC Pack 2008 R2 Service Pack 3

Updated: November 2011

Applies To: Windows HPC Server 2008 R2

These release notes address late-breaking issues and information about Service Pack 3 (SP3) for Microsoft® HPC Pack 2008 R2. You can install HPC Pack 2008 R2 SP3 to upgrade an existing Windows® HPC Server 2008 R2 with SP2 cluster.

In this topic:

  • Before you install HPC Pack 2008 R2 SP3

  • Compatibility of Windows HPC Server 2008 R2 with SP3 with previous versions of Windows HPC Server 2008 R2

  • Install Microsoft HPC Pack 2008 R2 SP3 on the head node

  • Install HPC Pack 2008 R2 SP3 on a high availability head node

  • Install HPC Pack 2008 R2 SP3 on compute nodes, WCF broker nodes, and workstation nodes

  • Redeploy existing Windows Azure VM nodes using the SP3 components

  • Install the LINQ to HPC components (Preview)

  • Install the Microsoft HPC Pack web components

  • Install the HPC soft card key storage provider

  • Uninstall HPC Pack 2008 R2 with SP3

  • Known issues

Before you install HPC Pack 2008 R2 SP3

Be aware of the following items and recommendations before you install HPC Pack 2008 R2 SP3 on the head node:

  • When you install the service pack, new indexes and new parameters for some procedures are added to the HPC databases. To prevent potential data loss, create backups of the following databases before installing the service pack:

    • Cluster management database

    • Job scheduling database

    • Reporting database

    • Diagnostics database

    You can use a backup method of your choice to back up the HPC databases. For more information and an overview of the backup and restore process, see Backing Up and Restoring a Windows HPC Server Cluster. A minimal command-line sketch for this step appears after this list.

  • When you install the service pack, several settings related to HPC services are reset to their default values, including the following:

    • Firewall rules

    • Event log settings for all HPC services

    • Service properties such as dependencies and startup options

    • If the head node is configured for high availability in a failover cluster, the related resources that are configured in the failover cluster

    Other details can be found in the following log file after you install the service pack: %windir%\temp\HPCSetupLogs\hpcpatch-DateTime.txt

  • When you install the service pack on the head node, the files that are used to deploy a compute node or a Windows Communication Foundation (WCF) broker node from bare metal are also updated. Later, if you install a new compute node or WCF broker node from bare metal or reimage an existing node, the service pack is automatically applied to that node.

  • Verify the following before installing the service pack:

    • Close all open windows for applications related to HPC Pack 2008 R2, such as HPC Cluster Manager.

    • Ensure that all diagnostic tests have finished, or are canceled.

    • Do not apply the service pack during critical times or while a long-running job is still running. When you upgrade a head node or another node in a cluster, you may be prompted to reboot the computer to complete the installation.

    • Ensure that you stop Windows Azure nodes that you previously deployed so that they are in the Not-Deployed state. If you do not do this, you may be unable to use or delete the nodes after the service pack is installed, but charges will continue to accrue in Windows Azure. To use the existing Windows Azure worker nodes in Windows HPC Server 2008 R2 with SP3, you must restart them after the service pack is installed. To use existing Windows Azure VM nodes, you must redeploy the nodes using the HPC Pack 2008 R2 with SP3 components. For more information, see Redeploy existing Windows Azure VM nodes using the SP3 components.
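For the database backup recommendation earlier in this list, the following is a minimal sketch that backs up the four HPC databases by calling sqlcmd against the SQL Server instance on the head node. The database names (HPCManagement, HPCScheduler, HPCReporting, and HPCDiagnostics), the named instance (COMPUTECLUSTER), and the backup folder are assumptions about a default installation; adjust them for your cluster, and prefer your existing backup solution if one is in place.

    # Hedged sketch: back up the HPC databases before installing SP3.
    # Database names, instance name, and backup path are assumptions; verify them for your cluster.
    $instance  = ".\COMPUTECLUSTER"          # default SQL Server instance used by HPC Pack (assumption)
    $backupDir = "D:\HpcDbBackups"           # any folder with enough free space
    $databases = "HPCManagement", "HPCScheduler", "HPCReporting", "HPCDiagnostics"

    New-Item -Path $backupDir -ItemType Directory -Force | Out-Null

    foreach ($db in $databases) {
        $file = Join-Path $backupDir ("{0}_{1:yyyyMMdd}.bak" -f $db, (Get-Date))
        # sqlcmd ships with SQL Server on the head node; -E uses Windows authentication
        sqlcmd -S $instance -E -Q "BACKUP DATABASE [$db] TO DISK = N'$file' WITH INIT"
    }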

Compatibility of Windows HPC Server 2008 R2 with SP3 with previous versions of Windows HPC Server 2008 R2

The following list summarizes the supported compatibility of Windows HPC Server 2008 R2 with SP3 with previous versions of Windows HPC Server 2008 R2.

For information about new features in SP3, see What's New in Windows HPC Server 2008 R2 Service Pack 3.

  • Feature: HPC Cluster Manager or HPC PowerShell in Windows HPC Server 2008 R2 with SP3

    Compatibility: Can manage a head node in a previous version of Windows HPC Server 2008 R2. However, functionality that has been added in SP3 cannot be used.

  • Feature: HPC Cluster Manager or HPC PowerShell in Windows HPC Server 2008 R2, Windows HPC Server 2008 R2 with SP1, or Windows HPC Server 2008 R2 with SP2

    Compatibility: Cannot manage a head node in Windows HPC Server 2008 R2 with SP3.

  • Feature: Job scheduler in Windows HPC Server 2008 R2 with SP3

    Compatibility: Can manage jobs in Windows HPC Server 2008 R2, Windows HPC Server 2008 R2 with SP1, or Windows HPC Server 2008 R2 with SP2.

  • Feature: Job scheduler in Windows HPC Server 2008 R2, Windows HPC Server 2008 R2 with SP1, or Windows HPC Server 2008 R2 with SP2

    Compatibility: Can manage jobs in a Windows HPC Server 2008 R2 with SP3 cluster. However, the jobs cannot use job scheduling features that are new in SP3.

  • Feature: Service-oriented architecture (SOA) client applications that are written to run SOA jobs in Windows HPC Server 2008 R2, Windows HPC Server 2008 R2 with SP1, or Windows HPC Server 2008 R2 with SP2

    Compatibility: Can run SOA jobs in Windows HPC Server 2008 R2 with SP3. However, the SOA clients cannot set features that are new to Windows HPC Server 2008 R2 with SP3.

  • Feature: SOA services that run in a Windows HPC Server 2008 R2, Windows HPC Server 2008 R2 with SP1, or Windows HPC Server 2008 R2 with SP2 cluster

    Compatibility: Can run in Windows HPC Server 2008 R2 with SP3.

  • Feature: Compute nodes and Windows Communication Foundation (WCF) broker nodes in a cluster with the head node updated to SP3

    Compatibility: Must be updated with SP3.

  • Feature: Workstation nodes in a cluster with the head node updated to SP3

    Compatibility: Can run HPC Pack 2008 R2, HPC Pack 2008 R2 with SP1, or HPC Pack 2008 R2 with SP2. However, user activity detection or other configuration settings for workstation nodes that can be configured only in SP3 are ignored without warning on workstations running a previous version of HPC Pack 2008 R2.

  • Feature: Workstation nodes running HPC Pack 2008 R2 with SP3 in a cluster with the head node running HPC Pack 2008 R2, HPC Pack 2008 R2 with SP1, or HPC Pack 2008 R2 with SP2

    Compatibility: Can only be managed if a version of HPC Pack 2008 R2 compatible with the head node is installed.

  • Feature: Windows Azure nodes provisioned in a Windows HPC Server 2008 R2 with SP1 or Windows HPC Server 2008 R2 with SP2 cluster

    Compatibility: Worker nodes must be updated with SP3 by stopping and restarting the nodes. VM nodes deployed using the SP2 components must be redeployed using the SP3 components.

Install Microsoft HPC Pack 2008 R2 SP3 on the head node

To install Microsoft HPC Pack 2008 R2 SP3 on the head node computer

  1. Obtain the installation program for HPC Pack 2008 R2 SP3 (HPC2008R2_SP3-x64.exe) by downloading and extracting the update package from the Microsoft Download Center. Save the service pack installation program to installation media or to a network location.

  2. Run HPC2008R2_SP3-x64.exe as an administrator from the location where you saved the service pack.

  3. Read the informational message that appears. If you are ready to apply the service pack, click OK.

  4. Continue to follow the steps in the installation wizard.

Note
  • After you install the service pack, the computer restarts.

  • You can confirm that HPC Pack 2008 R2 SP3 is installed on the head node by viewing the version number in HPC Cluster Manager: on the Help menu, click About. If SP3 is installed, the server version number and the client version number shown are similar to 3.3.xxxx.x.
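As an alternative to checking the About dialog box, the following is a minimal sketch that reads the file version of an HPC service binary on the head node. The CCP_HOME environment variable and the HpcScheduler.exe file name are assumptions about a default head node installation; if they differ on your computer, use HPC Cluster Manager instead.

    # Hedged sketch: report the installed HPC Pack file version (3.3.xxxx.x corresponds to SP3).
    # CCP_HOME and the binary name are assumptions about a default head node installation.
    $binary = Join-Path $env:CCP_HOME "Bin\HpcScheduler.exe"
    (Get-Item $binary).VersionInfo.FileVersion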

To help ensure that bare metal deployment of nodes functions properly after the service pack is installed, it is recommended that you run the UpdateHpcWinPE.cmd script on the head node. This script upgrades the Windows PE image (boot.wim).

To run the UpdateHpcWinPE.cmd script

  1. Open an elevated command prompt on the head node. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

  2. Type the following command, and then press Enter:

    UpdateHpcWinPE.cmd
    

    The script performs the update of boot.wim.

Install HPC Pack 2008 R2 SP3 on a high availability head node

If you have set up a head node for high availability in the context of a failover cluster, use the following procedure to apply the service pack.

To install HPC Pack 2008 R2 SP3 on a high availability head node

  1. Obtain the installation program for HPC Pack 2008 R2 SP3 (HPC2008R2_SP3-x64.exe) by downloading and extracting the update package from the Microsoft Download Center. Save the service pack installation program to installation media or to a network location.

  2. Take the following high availability HPC services offline using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, and hpcsession (a PowerShell sketch for this step appears after this procedure).

  3. Install the service pack on the active head node by running HPC2008R2_SP3-x64.exe.

    After you install the service pack on the active head node, in most cases, the active head node restarts and fails over to the second head node.

    Note
    Because the second head node is not upgraded, Failover Cluster Manager might report a failed status for the resource group and the HPC services.
  4. To install the service pack on the second head node, do the following:

    1. Take the following high availability HPC services offline using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, and hpcsession.

    2. Verify that the second head node is the active head node. If it is not, make the second head node the active head node using Failover Cluster Manager.

    3. Install the service pack on the second head node.

      After you install the service pack on the second head node, the high availability HPC services are brought online automatically.

Important
During the installation of the service pack on each head node configured for high availability, leave the SQL Server resources online.
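The Failover Cluster Manager steps above (taking the high availability HPC resources offline and, after installing the service pack, moving the group to the other head node) can also be performed from PowerShell on Windows Server 2008 R2. The following is a minimal sketch; the resource names come from this topic, but the group and node names are placeholders, so confirm the actual names with Get-ClusterResource and Get-ClusterGroup first.

    # Hedged sketch: take the high availability HPC services offline before installing SP3.
    Import-Module FailoverClusters

    # Resource names as listed in step 2 of this procedure.
    $hpcResources = "hpcscheduler", "hpcsdm", "hpcdiagnostics", "hpcreporting", "hpcsession"
    foreach ($name in $hpcResources) {
        Stop-ClusterResource -Name $name        # leave the SQL Server resources online
    }

    # After installing the service pack on the active head node, move the HPC group to the
    # other head node so that it becomes active (replace the placeholder names with yours):
    # Move-ClusterGroup -Name "<HPC head node group>" -Node "<second head node>"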

To help ensure that bare metal deployment of nodes functions properly after the service pack is installed, it is recommended that you run the UpdateHpcWinPE.cmd script on each head node configured for high availability. This script upgrades the Windows PE image (boot.wim).

To run the UpdateHpcWinPE.cmd script

  1. Open an elevated command prompt on the head node. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

  2. Type the following command, and then press Enter:

    UpdateHpcWinPE.cmd
    

    The script performs the update of boot.wim.

  3. Repeat the previous steps on the other head node in the failover cluster.

Install HPC Pack 2008 R2 SP3 on compute nodes, WCF broker nodes, and workstation nodes

To work properly with a head node that is upgraded with HPC Pack 2008 R2 SP3, existing compute nodes and WCF broker nodes must also be upgraded. You can optionally upgrade your existing workstation nodes. Depending on the type of node, you can use one of the following methods to install HPC Pack 2008 R2 SP3:

  • Reimage an existing compute node or broker node that was deployed using an operating system image. For more information, see Reimage Compute Nodes.

    Note
    After the head node is upgraded with HPC Pack 2008 R2 SP3, if you install a new node from bare metal or reimage an existing node, HPC Pack 2008 R2 with SP3 is automatically installed on that node.
  • Install HPC Pack 2008 R2 SP3 on existing nodes that are running HPC Pack 2008 R2 with SP2, either manually or using a clusrun command.

To use clusrun to install HPC Pack 2008 R2 SP3 on existing nodes

  1. Copy the appropriate version of the service pack installation program (HPC2008R2_SP3-x64.exe or HPC2008R2_SP3-x86.exe) to a shared folder such as \\headnodename\SP3.

  2. In HPC Cluster Manager, view nodes by node group to identify a group of nodes that you want to update – for example, ComputeNodes.

  3. Take the nodes in the node group offline.

  4. Open an elevated command prompt and type the clusrun command, specifying the version of the service pack installation program that matches the operating system of the nodes. For example:

    clusrun /nodegroup:ComputeNodes \\headnodename\SP3\HPC2008R2_SP3-x64.exe -unattend -SystemReboot
    
    Note
    The -SystemReboot parameter is required and causes the updated nodes to restart after the service pack is installed.

    After the service pack is installed and the nodes in the group restart, bring the nodes online.
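The offline, update, and online steps in the previous procedure can also be scripted. The following is a minimal HPC PowerShell sketch, assuming the ComputeNodes group and the share name used in the example above; adjust both for your cluster.

    # Hedged sketch: take the ComputeNodes group offline, apply SP3, and bring the nodes back online.
    Add-PSSnapin Microsoft.HPC                  # loads the HPC cmdlets if you are not in HPC PowerShell

    Get-HpcNode -GroupName ComputeNodes | Set-HpcNodeState -State offline

    # Run the service pack installer on every node in the group (external clusrun command).
    clusrun /nodegroup:ComputeNodes \\headnodename\SP3\HPC2008R2_SP3-x64.exe -unattend -SystemReboot

    # After the nodes have restarted, bring them back online.
    Get-HpcNode -GroupName ComputeNodes | Set-HpcNodeState -State online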

To run the HPC Pack 2008 R2 SP3 installation program manually on individual nodes that are currently running HPC Pack 2008 R2 with SP2, you can copy HPC2008R2_SP3-x64.exe or HPC2008R2_SP3-x86.exe to a shared folder on the head node. Then access the existing nodes by remote desktop connection to install the service pack from the shared folder.

Important
If you have WCF broker nodes that are configured for high availability in a failover cluster, you should install HPC Pack 2008 R2 SP3 on the high availability broker nodes as follows:
  1. Install HPC Pack 2008 R2 SP3 on the passive broker node

  2. Fail over the active broker node to the passive broker node

  3. Install HPC Pack 2008 R2 SP3 on the passive broker node (which has not yet been upgraded)

Note
For more information about how to perform maintenance on WCF broker nodes that are configured in a failover cluster, see Performing Maintenance on WCF Broker Nodes in a Failover Cluster with Windows HPC Server 2008 R2.

Redeploy existing Windows Azure VM nodes using the SP3 components

VM nodes that were deployed on Windows Azure by using the HPC Pack 2008 R2 with SP2 components must be redeployed using the SP3 components, not simply restarted, after the head node of the cluster is updated to SP3. If you do not redeploy them, the VM nodes will fail to become available in HPC Cluster Manager after they are restarted.

The following procedure outlines the necessary steps to redeploy the VM nodes after the head node is updated. For detailed steps to deploy VM nodes, see Deploying Windows Azure VM Nodes in Windows HPC 2008 R2 Step-by-Step Guide.

To redeploy the VM nodes using the HPC Pack 2008 R2 with SP3 components

  1. Stop the VM nodes.

  2. Depending on how you previously created the VHD for the VM nodes, either create a new VHD using the HPC Pack 2008 R2 with SP3 components, or create a differencing VHD. Ensure that you install the following SP3-compatible components:

    • Windows Azure SDK for .NET 1.6 integration components. The components are included with the Windows Azure SDK for .NET, which is available in the Microsoft Download Center.

    • Windows HPC integration components for Windows Azure. The installation program (HPCAzureVM.msi) is included with the SP3 update. To download the SP3 update packages, see HPC Pack 2008 R2 Service Pack 3 in the Microsoft Download Center.

    • Microsoft MPI (MS-MPI). To download the installation program, see HPC Pack 2008 R2 MS-MPI Redistributable Package in the Microsoft Download Center.

    Important
    If you update the Windows Azure SDK for .NET Integration Components to version 1.6 on an existing virtual machine, a WA Drive Miniport driver from the previous version of the Windows Azure SDK for .NET may remain. This legacy driver can interfere with mounting a VHD snapshot on a VM node. To work around this problem, use Device Manager on the virtual machine to uninstall the legacy WA Drive Miniport storage controller device. Then, use the Add legacy hardware action in Device Manager to manually install the updated WA Drive Miniport driver (wadrive.inf) from the folder where you installed the Windows Azure SDK for .NET 1.6 Integration Components.
  3. Add the new or differencing VHD to the image store on the head node.

  4. Upload the VHD to your Windows Azure subscription.

  5. Delete the VM nodes that you deployed using the SP2 components.

  6. Edit the node template that you used previously to deploy the VM nodes, and specify the new or differencing VHD in the template. Save the revised template.

  7. Add the VM nodes using the revised node template.

  8. Start the VM nodes.
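Steps 1 and 5 of this procedure can also be performed from HPC PowerShell. The following is a minimal sketch; whether the Windows Azure node cmdlets shown apply to your VM nodes, and the node name pattern used to select them, are assumptions that you should adjust for your deployment.

    # Hedged sketch: stop the SP2-deployed VM nodes, then remove them before redeploying with SP3.
    Add-PSSnapin Microsoft.HPC

    # The name pattern is a placeholder; select the VM nodes however fits your naming scheme.
    $vmNodes = Get-HpcNode | Where-Object { $_.NetBiosName -like "AZUREVM-*" }
    $names   = $vmNodes | ForEach-Object { $_.NetBiosName }

    # Step 1: stop the VM nodes so that they return to the Not-Deployed state.
    Stop-HpcAzureNode -Name $names

    # Steps 2 through 4: create and upload the new or differencing VHD (not shown here).

    # Step 5: delete the VM nodes that were deployed with the SP2 components.
    Remove-HpcNode -Name $names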

Install the LINQ to HPC components (Preview)

If you want to install the HPC Pack 2008 R2 LINQ to HPC components (Preview), you must separately run the installation program (DISC_x64.msi or DISC_x86.msi) appropriate for the operating system on each computer. The installation programs are included in the HPC Pack 2008 R2 SP3 download packages available at the Microsoft Download Center, or you can locate the file on the full installation media for HPC Pack 2008 R2 with SP3 or later.

To run LINQ to HPC jobs on the cluster, you must install the LINQ to HPC components on the following computers:

  • The head node of your cluster

  • The compute nodes of your cluster that will be used to run LINQ to HPC jobs

  • Client computers that are used to submit LINQ to HPC jobs

For additional information and step-by-step procedures, see Deploying a Windows HPC Server Cluster to Run Jobs Using the LINQ to HPC Components (Preview).

Install the Microsoft HPC Pack web components

To install the Microsoft HPC Pack web components, you must separately run an installation program (HpcWebComponents.msi) and the included configuration script (Set-HPCWebComponents.ps1). The installation program is included in the HPC Pack 2008 R2 SP3 download packages available at the Microsoft Download Center, or you can locate the file on the full installation media for HPC Pack 2008 R2 with SP3 or later.

Important
  • The HPC Pack 2008 R2 web components can only be installed on the head node of the cluster.

  • If you installed the HPC Pack 2008 R2 web components using the installation program provided in HPC Pack 2008 R2 SP2, you must uninstall the components on the head node before installing the components in HPC Pack 2008 R2 SP3.

For additional information and step-by-step procedures, see Install the Microsoft HPC Pack Web Components.
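As an outline of those steps, the following is a minimal sketch that installs the MSI and then runs the configuration script. The script location under %CCP_HOME%Bin and its -Service and -enable parameters are assumptions based on the linked topic, which has the authoritative syntax.

    # Hedged sketch: install the web components and run the included configuration script.
    msiexec /i .\HpcWebComponents.msi /passive

    # Script location and parameters are assumptions; see Install the Microsoft HPC Pack Web Components.
    Set-Location (Join-Path $env:CCP_HOME "Bin")
    .\Set-HPCWebComponents.ps1 -Service REST -enable       # enable the HTTP web service (REST) interface
    .\Set-HPCWebComponents.ps1 -Service Portal -enable     # enable the job submission web portal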

Install the HPC soft card key storage provider

To enable soft card authentication when submitting jobs to the Windows HPC Server 2008 R2 with SP3 cluster, you must install the HPC soft card key storage provider (KSP) on the following computers:

  • The head node of your cluster

  • The compute nodes and workstation nodes of your cluster

To install the KSP, you must separately run the version of the installation program that is appropriate for the operating system on each computer: HpcKsp_x64.msi or HpcKsp_x86.msi.

Important
You can only install the HPC soft card KSP on an edition of Windows® 7 or Windows Server® 2008 R2.

The installation programs are included in the HPC Pack 2008 R2 SP3 download packages available at the Microsoft Download Center, or you can locate the files on the full installation media for HPC Pack 2008 R2 with SP3 or later.

Important
If you installed the HPC soft card key storage provider using the installation program provided in HPC Pack 2008 R2 SP2, you must uninstall the provider on each node before installing the provider in HPC Pack 2008 R2 SP3.
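Because the KSP must be installed on the head node and on every compute node and workstation node, a clusrun command can save time. The following is a minimal sketch, assuming the installer has been copied to a share such as \\headnodename\SP3 (as in the earlier clusrun example) and that the target nodes run 64-bit Windows; use HpcKsp_x86.msi for 32-bit nodes.

    # Hedged sketch: install the HPC soft card KSP on all nodes in the ComputeNodes group.
    # The share path and node group are assumptions; msiexec /quiet performs an unattended install.
    clusrun /nodegroup:ComputeNodes msiexec /i \\headnodename\SP3\HpcKsp_x64.msi /quiet

    # Install it locally on the head node as well.
    msiexec /i \\headnodename\SP3\HpcKsp_x64.msi /quiet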

Uninstall HPC Pack 2008 R2 with SP3

You can uninstall the service pack on the head node to revert to HPC Pack 2008 R2 with SP2, while preserving the data in the HPC databases, or you can completely uninstall HPC Pack 2008 R2.

To uninstall HPC Pack 2008 R2 SP3, uninstall the updates in the following order:

  1. Update for HPC Pack 2008 R2 Services for Excel 2010

  2. Update for HPC Pack 2008 R2 Server Components

  3. Update for HPC Pack 2008 R2 Client Components

  4. Update for HPC Pack 2008 R2 MS-MPI Redistributable Pack

To completely uninstall HPC Pack 2008 R2, uninstall the features in the following order:

  1. Microsoft HPC Pack 2008 R2 LINQ to HPC Components (Preview) (if they are installed)

  2. Microsoft HPC Pack 2008 R2 Web Components (if they are installed)

  3. Microsoft HPC Pack 2008 R2 Key Storage Provider (if it is installed)

  4. Microsoft HPC Pack 2008 R2 Services for Excel 2010

  5. Microsoft HPC Pack 2008 R2 Server Components

  6. Microsoft HPC Pack 2008 R2 Client Components

  7. Microsoft HPC Pack 2008 R2 MS-MPI Redistributable Pack

Important
Not all features are installed on all computers. For example, Microsoft HPC Pack 2008 R2 Services for Excel 2010 is only installed on the head node when the HPC Pack 2008 R2 Enterprise and HPC Pack 2008 R2 for Cycle Harvesting edition is installed.
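Before uninstalling, you can check which of these features are actually present on a computer. The following is a minimal sketch using the Win32_Product WMI class; note that this query can be slow, and the name filter is an assumption about how the features appear in Programs and Features.

    # Hedged sketch: list the HPC Pack 2008 R2 features installed on this computer.
    Get-WmiObject -Class Win32_Product |
        Where-Object { $_.Name -like "*HPC Pack 2008 R2*" } |
        Sort-Object Name |
        Select-Object Name, Version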

When HPC Pack 2008 R2 is installed on the head node, other programs are installed with it. After uninstalling HPC Pack 2008 R2, you can remove the following programs if they will no longer be used:

  • Microsoft Report Viewer Redistributable 2008

  • Microsoft SQL Server 2008 (64-bit)

    Note
    This program also includes: Microsoft SQL Server 2008 Browser, Microsoft SQL Server 2008 Setup Support Files, and Microsoft SQL Server VSS Writer.
  • Microsoft SQL Server 2008 Native Client

Note
Microsoft SQL Server 2008 R2 is installed with HPC Pack 2008 R2 with SP2 or later.

Additionally, the following server roles and features might have been added when HPC Pack 2008 R2 was installed, and can be removed if they will no longer be used:

  • Dynamic Host Configuration Protocol (DHCP) Server server role

  • File Services server role

  • File Server Resource Manager role service

  • Network Policy and Access Services server role

  • Windows Deployment Services server role

  • Microsoft .NET Framework feature

  • Message Queuing feature
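On Windows Server 2008 R2, these roles and features can be reviewed and removed with the ServerManager module. The following is a minimal sketch; the feature names in the commented example are assumptions, so confirm them with Get-WindowsFeature, and remove a role only if nothing else on the server depends on it.

    # Hedged sketch: review and remove roles and features that were added for HPC Pack 2008 R2.
    Import-Module ServerManager

    # See which roles and features are currently installed.
    Get-WindowsFeature | Where-Object { $_.Installed }

    # Example removal (uncomment only after confirming the names and that nothing else uses them):
    # Remove-WindowsFeature DHCP, FS-Resource-Manager, NPAS, WDS, MSMQ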

Known issues

HPC Basic Profile Web Service is no longer supported

The HPC Basic Profile Web Service is no longer supported as of Windows HPC Server 2008 R2 with SP3, and further updates are not planned. Instead, use the Web Service Interface, which was introduced with HPC Pack 2008 R2 SP2. For information about the Web Service Interface and the enhancements available in Windows HPC Server 2008 R2 with SP3, see Working with the Web Service Interface.

Misleading error message when creating environment variables via REST interface

If you attempt to create multiple environment variables in a single request using the REST interface and the combined size of the variables exceeds the message size limit of 64 KB, you will see an error message similar to Error 400: BadRequest. No additional information appears about the cause of the error.

If you see this error, instead of creating the environment variables in a single request, split the variable creation into multiple requests.
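To apply the workaround programmatically, split the environment variables into batches whose serialized size stays safely under the 64 KB limit and send one request per batch. The following is a minimal sketch; Submit-EnvVariableBatch is a hypothetical placeholder for whatever REST call your client already uses to set environment variables.

    # Hedged sketch: split environment variables into batches under the 64 KB request limit.
    # Submit-EnvVariableBatch is a hypothetical placeholder for the caller's existing REST request.
    function Split-EnvVariables {
        param(
            [hashtable] $Variables,
            [int] $MaxBatchBytes = 60KB     # stay below the 64 KB message size limit
        )
        $batches = @()
        $current = @{}
        $size = 0
        foreach ($name in $Variables.Keys) {
            $entrySize = ($name.Length + $Variables[$name].Length) * 2   # rough UTF-16 estimate
            if ($size + $entrySize -gt $MaxBatchBytes -and $current.Count -gt 0) {
                $batches += ,$current
                $current = @{}
                $size = 0
            }
            $current[$name] = $Variables[$name]
            $size += $entrySize
        }
        if ($current.Count -gt 0) { $batches += ,$current }
        return $batches
    }

    # foreach ($batch in (Split-EnvVariables -Variables $allVariables)) {
    #     Submit-EnvVariableBatch $batch    # one REST request per batch (placeholder)
    # }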

SMTP server validation in HPC Cluster Manager can fail

If you enable e-mail notifications in the E-mail Notifications tab in Job Scheduler Configuration, type an erroneous SMTP server name, and click Validate Server, validation of the server name can sometimes take up to 30 seconds or longer. During validation, HPC Cluster Manager appears unresponsive and cannot perform other tasks. If you attempt to perform other actions with HPC Cluster Manager during this time, the validation error bubble for the server name may later appear behind HPC Cluster Manager when validation is complete.

To avoid this problem, do not interact with HPC Cluster Manager while the SMTP server name is being validated.

Pool guarantees may not be met when running multiple exclusive jobs

In a cluster where resource pools are enabled, the guaranteed pool allocation may not be met when all of the following conditions are true: you run multiple exclusive jobs that have resource pool guarantees, the pool weights do not divide the nodes available in the cluster into whole numbers, and all resources of the cluster are in use. Because the pool weights do not divide the nodes evenly, the number of resources (cores) available is reduced and a resource pool boundary can straddle a node.

Example 1: A homogeneous cluster has 31 nodes, each with 8 cores, and the pool weights are based on percentages, for example, PoolA's weight is 75, PoolB's weight is 25, and the Default Pool's weight is 0. The resources are evenly balanced relative to the number of cores, but the guaranteed allocation is between 23 and 24 nodes for PoolA (31 × 0.75 = 23.25) and between 7 and 8 nodes for PoolB (31 × 0.25 = 7.75).

Example 2: The same cluster configuration as in Example 1, but with 32 nodes. If all nodes are online, PoolA represents 24 potential nodes and PoolB represents 8 potential nodes, so pool guarantees are met. But if one of the nodes goes offline or becomes unreachable, the problem occurs again.

Workaround

To avoid the problem, set the pool weights so that they sum to the total number of nodes in a homogeneous cluster. For example, if there are 37 nodes available in the cluster, set PoolA's weight to 20, PoolB's weight to 17, and the Default Pool's weight to 0. Exclusive jobs will then be allocated resources appropriately and will not overlap a pool boundary.
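The arithmetic behind the workaround can be checked quickly: multiply the node count by each pool's share of the total weight and confirm that the result is a whole number. The following is a minimal sketch; the commented Set-HpcPool lines use parameter names that are assumptions, so verify the cmdlet syntax before using them.

    # Hedged sketch: check whether the pool weights divide the cluster into whole nodes.
    $nodeCount = 37
    $weights = @{ PoolA = 20; PoolB = 17; Default = 0 }
    $totalWeight = ($weights.Values | Measure-Object -Sum).Sum

    foreach ($pool in $weights.Keys) {
        $nodes = $nodeCount * $weights[$pool] / $totalWeight
        "{0}: {1} nodes" -f $pool, $nodes       # whole numbers mean no pool straddles a node
    }

    # To apply new weights (parameter names are assumptions; verify before use):
    # Set-HpcPool -Name PoolA -Weight 20
    # Set-HpcPool -Name PoolB -Weight 17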

TaskImmediatePreemption can preempt too many tasks when an exclusive job is used

Setting TaskImmediatePreemptionEnabled to True enables the job scheduler to preempt tasks instead of entire jobs when a higher priority job needs resources that are already allocated. Preemption is performed in task order, with the most recently started tasks preempted first until the needed resources are acquired. Because of this, some tasks on a node that has been allocated to an exclusive job may be preempted and then restarted.

Leaving TaskImmediatePreemptionEnabled=False causes entire jobs (instead of individual tasks) to be preempted and restarted.
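If you decide to keep job-level preemption, the property can be set from the command line. The following is a minimal sketch using cluscfg; the property name is taken from this topic, but the exact value syntax is an assumption, so confirm the current value after setting it.

    # Hedged sketch: turn off task-level preemption so that entire jobs are preempted instead.
    cluscfg setparams TaskImmediatePreemptionEnabled=False

    # Confirm the current scheduler configuration values.
    cluscfg listparams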

Import of a VHD can fail if the VHD is on virtual drives or certain file shares

In certain cases, import of a VHD to the image store on the head node can fail with an error message similar to I/O error occurred. The problem occurs when HPC Cluster Manager attempts to import a selected VHD from a virtual disk drive in the following scenarios:

  • The DOS subst command is used to associate a drive letter with a folder

  • The net use command is used to associate a drive letter with a network share

In these scenarios, the problem occurs because the virtual disk drive mappings exist only in the user context in which HPC Cluster Manager runs, but the VHD import is performed by the HPC Management Service, which runs in the services context.

For similar reasons, VHD import can fail if you attempt to import a VHD from a file share such as \\ComputerName\ShareName and the share permissions are not configured to allow the HPC Management Service to access the VHD.

Workaround

To work around this problem, copy the VHD file to a local disk drive and then try to import it again. In the case where the VHD import failed because of a problem with the permissions on a file share, grant access to the share (or the VHD file itself) to the computer account of the computer where HPC Cluster Manager is running.
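For the file share case, note that the account that needs access is a computer account rather than the logged-on user. The following is a minimal sketch that uses icacls to grant read access on the VHD file; the domain, computer, and path names are placeholders (computer accounts end with a $ sign).

    # Hedged sketch: grant a computer account read access to the VHD on the file share.
    # DOMAIN, HEADNODE, and the path are placeholders; adjust them for your environment.
    icacls 'D:\Shares\VHDs\ComputeNode.vhd' /grant 'DOMAIN\HEADNODE$:(R)'

    # Alternatively, copy the VHD to a local disk on the head node and import it from there.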

Windows Azure Report diagnostic test does not provide the names of Windows Azure nodes

After installation of HPC Pack 2008 R2 SP3, the Windows Azure Report diagnostic test does not provide the names of the role instances for the Windows Azure nodes. To work around this problem, you can run the following command on each node for which you want to see the name:

set COMPUTERNAME

You can also use a clusrun command, or create a new diagnostic test, to run this command on a group of nodes.
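For example, the following clusrun command runs it on every node in the AzureNodes group; the group name is an assumption, so substitute any node group that contains your Windows Azure nodes.

    clusrun /nodegroup:AzureNodes set COMPUTERNAME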
