
Release Notes for Microsoft HPC Pack 2008 R2 Service Pack 4

Updated: April 17, 2013

Applies To: Windows HPC Server 2008 R2

These release notes address late-breaking issues and information about Service Pack 4 (SP4) for HPC Pack 2008 R2. You can install HPC Pack 2008 R2 SP4 to upgrade an existing HPC cluster that is currently running Windows HPC Server 2008 R2 SP3. To download the packages for the SP4 update, see Microsoft HPC Pack 2008 R2 SP4 in the Microsoft Download Center.

Important
  • HPC Pack 2008 R2 service packs must be applied sequentially to existing cluster nodes. Before you apply SP4 on a cluster node, ensure that SP3 is already installed. To confirm this, on a node where HPC Cluster Manager is installed, on the Help menu, click About. The version numbers shown will be similar to 3.3.xxxx.x.

  • If you are planning to perform multiple new cluster installations, and you have the original release-to-manufacturing (RTM) media for HPC Pack 2008 R2, an SP4 Media Integration Package is also provided. For more information, see the instructions that accompany Microsoft HPC Pack 2008 R2 SP4 in the Microsoft Download Center. Use this package to create an installation point that integrates the HPC Pack 2008 R2 service packs released to date with the RTM setup files. For an example of how to do this, see Creating a service pack integrated installation point on the HPC team blog.


Be aware of the following issues and recommendations before you install HPC Pack 2008 R2 SP4 on the head node:

  • As a precaution, to prevent potential data loss, create backups of the following databases before you install the service pack:

    • Cluster management database

    • Job scheduling database

    • Reporting database

    • Diagnostics database

    You can use a backup method of your choice to back up the HPC databases. For more information and an overview of the backup and restore process, see Backing Up and Restoring a Windows HPC Server Cluster.
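    If you use SQL Server tools for the backup, the commands might look like the following sketch. The instance name .\COMPUTECLUSTER, the database names, and the backup folder D:\HpcBackup reflect a default HPC Pack 2008 R2 installation and are assumptions; verify them against your cluster before running anything.

    ```shell
    rem Sketch only: back up each HPC database with sqlcmd from an elevated
    rem prompt on the head node. Adjust the instance name, database names,
    rem and target folder to match your installation.
    sqlcmd -S .\COMPUTECLUSTER -E -Q "BACKUP DATABASE HPCManagement TO DISK = 'D:\HpcBackup\HPCManagement.bak'"
    sqlcmd -S .\COMPUTECLUSTER -E -Q "BACKUP DATABASE HPCScheduler TO DISK = 'D:\HpcBackup\HPCScheduler.bak'"
    sqlcmd -S .\COMPUTECLUSTER -E -Q "BACKUP DATABASE HPCReporting TO DISK = 'D:\HpcBackup\HPCReporting.bak'"
    sqlcmd -S .\COMPUTECLUSTER -E -Q "BACKUP DATABASE HPCDiagnostics TO DISK = 'D:\HpcBackup\HPCDiagnostics.bak'"
    ```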

  • When you install the service pack, several settings that are related to HPC services are reset to their default values, including the following:

    • Firewall rules

    • Event log settings for all HPC services

    • Service properties such as dependencies and startup options

    • If the head node is configured for high availability in a failover cluster, the Windows HPC server-related resources that are configured in the failover cluster

    Note
    You can find more installation details in the following log file after you install the service pack: %windir%\temp\HPCSetupLogs\hpcpatch-DateTime.txt

  • When you install the service pack on the head node, the files that are used to deploy a compute node or a Windows Communication Foundation (WCF) broker node from bare metal are also updated. Later, if you install a new compute node or WCF broker node from bare metal or if you reimage an existing node, the service pack is automatically applied to that node.

  • In addition, you should take the following actions before you install the service pack:

    • Close all open windows for applications that are related to HPC Pack 2008 R2, such as HPC Cluster Manager.

    • Ensure that all diagnostic tests have finished, or are canceled.

    • Do not apply the service pack during critical times or while a long running job is still running. When you upgrade a head node or other node in a cluster, you may be prompted to reboot the computer to complete the installation.

    • Ensure that you stop Windows Azure nodes that you previously deployed so that they are in the Not-Deployed state. If you do not stop them, you may be unable to use or delete the nodes after the service pack is installed, but charges for their use will continue to accrue in Windows Azure. You must restart the existing Windows Azure worker nodes in Windows HPC Server 2008 R2 with SP4 after the service pack is installed. If you previously deployed Windows Azure VM nodes, you must redeploy the nodes by using the Windows Azure SDK 1.7 for .NET integration components and the HPC Pack 2008 R2 with SP4 components. For more information, see Redeploy existing Windows Azure VM nodes by using the SP4 components later in this topic.


The following table summarizes how Windows HPC Server 2008 R2 with SP4 interoperates with previous versions of Windows HPC Server 2008 R2. As noted, some functionality is not officially supported, although it is not actively prevented by Windows HPC Server 2008 R2.

For information about new features in SP4, see What's New in Windows HPC Server 2008 R2 Service Pack 4.

 

Feature Compatibility

Administrative functionality in HPC Cluster Manager or HPC PowerShell management cmdlets

  • Administrative functionality in Windows HPC Server 2008 R2 with SP4 is compatible with a head node in a previous version of Windows HPC Server 2008 R2. However, functionality that is available in SP4 but is not available in the previous version cannot be used. For example, HPC Cluster Manager in Windows HPC Server 2008 R2 with SP4 can run diagnostic tests on a head node in a previous version of Windows HPC Server 2008 R2. However, only the tests that are implemented in the previous version are available to run, and certain node selection criteria are not supported in the previous versions.

  • Administrative functionality in Windows HPC Server 2008 R2 or Windows HPC Server 2008 R2 with SP1, SP2, or SP3 cannot be used with a head node in Windows HPC Server 2008 R2 with SP4.

Job scheduling functionality that uses HPC Job Manager, HPC PowerShell, command-line commands, or the job submission API

  • Job scheduling functionality in Windows HPC Server 2008 R2 with SP4 can submit and manage jobs on a head node in a previous version of Windows HPC Server 2008 R2. However, attempting to use features that are available in SP4 but are not available in the previous version of Windows HPC Server 2008 R2 results in an error.

  • Job scheduling functionality in Windows HPC Server 2008 R2 or Windows HPC Server 2008 R2 with SP1, SP2, or SP3 can submit and manage jobs on a head node in Windows HPC Server 2008 R2 with SP4. However, job submission functionality that is available in SP4 but not in the previous version cannot be used.

Service-oriented architecture (SOA) and Microsoft MPI (MS-MPI) client applications and runtime API

  • Runtime functionality in Windows HPC Server 2008 R2 with SP4 can run SOA and MS-MPI applications that are written for previous versions of Windows HPC Server 2008 R2. However, attempting to use features that are available in SP4 but are not available in the previous version results in an error.

  • Runtime functionality in Windows HPC Server 2008 R2 or Windows HPC Server 2008 R2 with SP1, SP2, or SP3 can run SOA and MS-MPI applications written for Windows HPC Server 2008 R2 with SP4. However, features that are available in SP4 but not in the previous version cannot be used.

Compute nodes and Windows Communication Foundation (WCF) broker nodes

  • A head node in Windows HPC Server 2008 R2 with SP4 is only supported to manage and run jobs on compute nodes and WCF broker nodes that are updated to SP4. However, connections to nodes that were deployed in a previous version are not prevented.

    Note
    You can run a clusrun command on a head node that is updated with SP4 to update compute nodes or WCF broker nodes on which HPC Pack 2008 R2 with SP3 is installed. For more information, see Install HPC Pack 2008 R2 SP4 on compute nodes, WCF broker nodes, and workstation nodes later in this topic.

  • A head node in Windows HPC Server 2008 R2 or Windows HPC Server 2008 R2 with SP1, SP2, or SP3 is not supported to manage or run jobs on compute nodes and WCF broker nodes that are updated to SP4. However, connections to these nodes are not prevented.

Workstation nodes and unmanaged server nodes

  • A head node in Windows HPC Server 2008 R2 with SP4 can run jobs on workstation nodes and unmanaged server nodes that were added by using a previous version of Windows HPC Server 2008 R2. However, some user activity detection or other configuration settings that are available in SP4 but not in the previous version cannot be used.

  • A head node in Windows HPC Server 2008 R2 or Windows HPC Server 2008 R2 with SP1, SP2, or SP3 is not supported to manage or run jobs on workstation nodes and unmanaged server nodes that are updated to SP4. However, connections to these nodes are not prevented.

Windows Azure nodes

  • Windows Azure nodes that are provisioned in a cluster running Windows HPC Server 2008 R2 with SP4 are not compatible with Windows HPC Server 2008 R2 with SP1, SP2, or SP3.

  • Windows Azure worker nodes that are provisioned in a cluster running Windows HPC Server 2008 R2 with SP1, SP2, or SP3 must be updated with SP4 components by stopping and restarting the nodes. VM nodes that were previously deployed by using SP3 components must be redeployed by using the Windows Azure SDK 1.7 for .NET integration components and the HPC Pack 2008 R2 with SP4 components. For more information, see Redeploy existing Windows Azure VM nodes by using the SP4 components later in this topic.


  1. Download the installation program for HPC Pack 2008 R2 SP4 (HPC2008R2_SP4-x64.exe) from HPC Pack 2008 R2 Service Pack 4 in the Microsoft Download Center. Save the service pack installation program to installation media or to a network location.

  2. Run HPC2008R2_SP4-x64.exe as an administrator from the location where you saved the service pack.

  3. Read the informational message that appears. If you are ready to apply the service pack, click OK.

  4. Continue to follow the steps in the installation wizard.

Note
  • After you install the service pack, the computer restarts.

  • You can confirm that HPC Pack 2008 R2 SP4 is installed on the head node. To view the version number in HPC Cluster Manager, on the Help menu, click About. The server version number and the client version number shown will be similar to 3.4.xxxx.x.

To help ensure that bare metal deployment of nodes functions properly after the service pack is installed, we recommend that you run the UpdateHpcWinPE.cmd script on the head node. This script upgrades the Windows PE image (boot.wim).

  1. Open an elevated command prompt on the head node. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

  2. Type the following command, and then press ENTER:

    UpdateHpcWinPE.cmd
    

    The script performs the update of boot.wim.


If you have set up a head node for high availability in the context of a failover cluster, use the following procedure to apply the service pack.

  1. Download the installation program for HPC Pack 2008 R2 SP4 (HPC2008R2_SP4-x64.exe) from HPC Pack 2008 R2 Service Pack 4 in the Microsoft Download Center. Save the service pack installation program to installation media or to a network location.

  2. Take the following high availability HPC services offline by using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, and hpcsession.
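    If you prefer the command line, this step can be sketched with the cluster.exe tool that ships with failover clustering in Windows Server 2008 R2. The resource names below repeat the service names from the step above and are assumptions; verify the actual clustered resource names in Failover Cluster Manager before running these commands from an elevated prompt.

    ```shell
    rem Sketch only: take each high availability HPC service offline.
    rem The resource names may differ in your failover cluster.
    cluster.exe resource hpcscheduler /offline
    cluster.exe resource hpcsdm /offline
    cluster.exe resource hpcdiagnostics /offline
    cluster.exe resource hpcreporting /offline
    cluster.exe resource hpcsession /offline
    ```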

  3. Install the service pack on the active head node by running HPC2008R2_SP4-x64.exe.

    After you install the service pack on the active head node, in most cases, the active head node restarts and fails over to the second head node.

    Note
    Because the second head node is not upgraded, Failover Cluster Manager might report a failed status for the resource group and the HPC services.

  4. Use the following procedure to install the service pack on the second head node:

    1. Take the following high availability HPC services offline using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, and hpcsession.

    2. Verify that the second head node is the active head node. If it is not, use Failover Cluster Manager to make the second head node the active head node.

    3. Install the service pack on the second head node.

      After you install the service pack on the second head node, the high availability HPC services are brought online automatically.

Important
During the installation of the service pack on each head node that is configured for high availability, leave the SQL Server resources online.

To help ensure that bare metal deployment of nodes functions properly after the service pack is installed, we recommend that you run the UpdateHpcWinPE.cmd script on each head node that is configured for high availability. This script upgrades the Windows PE image (boot.wim).

  1. Open an elevated command prompt on the head node. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

  2. Type the following command, and then press ENTER:

    UpdateHpcWinPE.cmd
    

    The script performs the update of boot.wim.

  3. Repeat the previous steps on the other head node in the failover cluster.


Install HPC Pack 2008 R2 SP4 on compute nodes, WCF broker nodes, and workstation nodes

To work properly with a head node that is upgraded with HPC Pack 2008 R2 SP4, existing compute nodes and WCF broker nodes must also be upgraded. You can optionally upgrade your existing workstation nodes. Depending on the type of node, you can use one of the following methods to install HPC Pack 2008 R2 SP4:

  • Reimage an existing compute node or broker node that was deployed by using an operating system image. For more information, see Reimage Compute Nodes.

    Note
    After the head node is upgraded with HPC Pack 2008 R2 SP4, if you install a new node from bare metal or if you reimage an existing node, HPC Pack 2008 R2 with SP4 is automatically installed on that node.

  • Install HPC Pack 2008 R2 SP4 on existing nodes that are running HPC Pack 2008 R2 with SP3, manually or by using a clusrun command.

  1. Copy the appropriate version of SP4 (HPC2008R2_SP4-x64.exe or HPC2008R2_SP4-x86.exe) to a shared folder such as \\headnodename\SP4.

  2. In HPC Cluster Manager, view nodes by node group to identify a group of nodes that you want to update, for example, ComputeNodes.

  3. Take the nodes in the node group offline.

  4. Open an elevated command prompt and type the clusrun command for the patch that matches the operating system of the nodes, for example:

    clusrun /nodegroup:ComputeNodes \\headnodename\SP4\HPC2008R2_SP4-x64.exe -unattend -SystemReboot
    
    Note
    The SystemReboot parameter is required. This causes the updated nodes to restart after the service pack is installed.

    After the service pack is installed and the nodes in the group restart, bring the nodes online.

To run the HPC Pack 2008 R2 SP4 installation program manually on individual nodes that are currently running HPC Pack 2008 R2 with SP3, you can copy HPC2008R2_SP4-x64.exe or HPC2008R2_SP4-x86.exe to a shared folder on the head node. Then access the existing nodes by making a remote connection to install the service pack from the shared folder.
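For example, after connecting remotely to a node that is running HPC Pack 2008 R2 with SP3, you might run the installer unattended directly from the shared folder. The share name \\headnodename\SP4 repeats the example used earlier in this topic; choose the x64 or x86 package to match the node's operating system.

```shell
rem Sketch only: run on the node itself, from an elevated prompt.
rem The -SystemReboot flag restarts the node after the service pack installs.
\\headnodename\SP4\HPC2008R2_SP4-x64.exe -unattend -SystemReboot
```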

Important
If you have WCF broker nodes that are configured for high availability in a failover cluster, install HPC Pack 2008 R2 SP4 on the high availability broker nodes as follows:

  1. Install HPC Pack 2008 R2 SP4 on the passive broker node.

  2. Fail over the active broker node to the passive broker node.

  3. Install HPC Pack 2008 R2 SP4 on the passive broker node that is not yet upgraded.


Redeploy existing Windows Azure VM nodes by using the SP4 components

Note
The Windows Azure VM Role feature (beta) is being retired in 2013. For more information, see The Windows Azure VM role is retired.

VM nodes that are deployed by using the Windows Azure SDK for .NET 1.6 integration components and that are running with HPC Pack 2008 R2 with SP3 components must be redeployed by using SP4-compatible components. You cannot simply restart these nodes; you must redeploy them after the head node of the cluster is updated to SP4. Otherwise, the VM nodes will not be available in HPC Cluster Manager after they are restarted.

The following high-level procedure outlines the necessary steps to redeploy the VM nodes after the head node is updated. For detailed steps to deploy VM nodes, see Deploying Windows Azure VM Nodes in Windows HPC 2008 R2 Step-by-Step Guide.

  1. Stop the VM nodes.

  2. Depending on how you previously created the VHD for the VM nodes, create a new VHD by using the HPC Pack 2008 R2 with SP4 components, or create a differencing VHD. Ensure that you install the SP4-compatible components: the Windows Azure SDK 1.7 for .NET integration components and the HPC Pack 2008 R2 with SP4 components.

    Important
    If you update the Windows Azure SDK for .NET Integration Components to version 1.7 on an existing virtual machine, a WA Drive Miniport driver from the previous version of the Windows Azure SDK for .NET may remain. This legacy driver can interfere with mounting a VHD snapshot on a VM node. To work around this problem, use Device Manager on the virtual machine to uninstall the legacy WA Drive Miniport storage controller device. Then, use the Add legacy hardware action in Device Manager to manually install the updated WA Drive Miniport driver (wadrive.inf) from the folder where you installed the Windows Azure SDK for .NET 1.7 Integration Components.

  3. Add the new or differencing VHD to the image store on the head node.

  4. Upload the VHD to your Windows Azure subscription.

  5. Delete the VM nodes that you deployed by using the SP3 components.

  6. Edit the node template that you previously used to deploy the VM nodes, and specify the new or differencing VHD in the template. Save the revised template.

  7. Add the VM nodes by using the revised node template.

  8. Start the VM nodes.

To install the Microsoft HPC Pack web components, you must separately run an installation program (HpcWebComponents.msi) and the included configuration script (Set-HPCWebComponents.ps1). The installation program is included in the SP4 update. To download the SP4 update packages, see HPC Pack 2008 R2 Service Pack 4 in the Microsoft Download Center. You can also locate the file in the full installation media for HPC Pack 2008 R2 with SP4.

Important
  • The HPC Pack 2008 R2 web components can only be installed on the head node of the cluster.

  • If you installed the HPC Pack 2008 R2 web components by using the installation program that is provided in HPC Pack 2008 R2 SP3, you must uninstall the components on the head node before you install the components in HPC Pack 2008 R2 SP4.

For additional information and step-by-step procedures, see Install the Microsoft HPC Pack Web Components.


To enable soft card authentication when submitting jobs to the Windows HPC Server 2008 R2 with SP4 cluster, you must install the HPC soft card key storage provider (KSP) on the following computers:

  • The head node of your cluster

  • The compute nodes and workstation nodes of your cluster

To install the KSP, you must separately run the version of the installation program that is appropriate for the operating system on each computer: HpcKsp_x64.msi or HpcKsp_x86.msi.

Important
You can only install the HPC soft card KSP on an edition of Windows 7 or Windows Server 2008 R2.

The installation programs are included in the SP4 updates. To download the SP4 update packages, see HPC Pack 2008 R2 Service Pack 4 in the Microsoft Download Center. You can also locate the files on the full installation media for HPC Pack 2008 R2 with SP4.

Important
If you installed the HPC soft card key storage provider by using the installation program that is provided in HPC Pack 2008 R2 SP3, you must uninstall the provider on each node before you install the provider in HPC Pack 2008 R2 SP4.


You can uninstall the service pack on the head node to revert to HPC Pack 2008 R2 with SP3, and preserve the data in the HPC databases, or you can completely uninstall HPC Pack 2008 R2.

To uninstall HPC Pack 2008 R2 SP4, uninstall the updates in the following order:

  1. Update for HPC Pack 2008 R2 Services for Excel 2010

  2. Update for HPC Pack 2008 R2 Server Components

  3. Update for HPC Pack 2008 R2 Client Components

  4. Update for HPC Pack 2008 R2 MS-MPI Redistributable Pack

Note
If the head node is configured for high availability in the context of a failover cluster, first uninstall the updates for SP4 on the passive head node. Then, uninstall the updates on the active head node.

To completely uninstall HPC Pack 2008 R2, uninstall the features in the following order:

  1. HPC Pack 2008 R2 Web Components (if they are installed)

  2. HPC Pack 2008 R2 Key Storage Provider (if it is installed)

  3. HPC Pack 2008 R2 Services for Excel 2010

  4. HPC Pack 2008 R2 Server Components

  5. HPC Pack 2008 R2 Client Components

  6. HPC Pack 2008 R2 MS-MPI Redistributable Pack

Important
Not all features are installed on all computers. For example, HPC Pack 2008 R2 Services for Excel 2010 is only installed on the head node when the HPC Pack 2008 R2 Enterprise and HPC Pack 2008 R2 for Cycle Harvesting edition is installed.

When HPC Pack 2008 R2 is installed on the head node, other programs are installed with it. After uninstalling HPC Pack 2008 R2, you can remove the following programs if they will no longer be used:

  • Microsoft Report Viewer Redistributable 2008

  • Microsoft SQL Server 2008 (64-bit)

    Note
    This program also includes: Microsoft SQL Server 2008 Browser, Microsoft SQL Server 2008 Setup Support Files, and Microsoft SQL Server VSS Writer.

  • Microsoft SQL Server 2008 Native Client

Note
Microsoft SQL Server 2008 R2 is installed with HPC Pack 2008 R2 with SP2, SP3, or SP4.

Additionally, the following server roles and features might have been added when HPC Pack 2008 R2 was installed, and they can be removed if they will no longer be used:

  • Dynamic Host Configuration Protocol (DHCP) Server server role

  • File Services server role

  • File Server Resource Manager role service

  • Network Policy and Access Services server role

  • Windows Deployment Services server role

  • Microsoft .NET Framework feature

  • Message Queuing feature


The VM Role feature (beta) in Windows Azure is being retired on May 15, 2013. Also now deprecated are the settings in Microsoft HPC Pack 2008 R2 and Microsoft HPC Pack 2012 to deploy a custom VHD to VM role nodes from a Windows HPC cluster. After the retirement date, VM role deployments from an HPC cluster will fail or be inaccessible. To add Windows Azure nodes to an HPC cluster, use the Windows Azure worker role.

A head node that has a name that contains non-alphanumeric characters will cause the deployment of Windows Azure nodes to fail. This is because the name of the head node is used for certain data structures in Windows Azure, and Windows Azure does not allow non-alphanumeric characters in the names of those structures. The name of the head node is the NetBIOS name of the head node computer, or in a cluster configured for high availability of the head node, it is the name of the clustered instance of the head node.

When adding Windows Azure nodes to an on-premises cluster, the name of the head node must adhere to the following naming rules:

  • Must contain only alphanumeric characters

  • Cannot begin with a numeric character
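As a quick sanity check, the two rules above can be expressed as a small script. This sketch is not part of HPC Pack, and the function name is hypothetical; it only illustrates the naming rules.

```shell
#!/bin/sh
# Return success (0) only if the name follows the head node naming rules:
# alphanumeric characters only, and it must not begin with a numeric character.
valid_headnode_name() {
  case "$1" in
    [0-9]*) return 1 ;;            # cannot begin with a numeric character
    *[!A-Za-z0-9]*|"") return 1 ;; # must contain only alphanumeric characters
    *) return 0 ;;
  esac
}

valid_headnode_name HEADNODE01 && echo "HEADNODE01: ok"
valid_headnode_name HEAD-NODE  || echo "HEAD-NODE: invalid (hyphen)"
valid_headnode_name 1HEADNODE  || echo "1HEADNODE: invalid (leading digit)"
```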

Because of a known issue, when Windows Azure nodes are deployed by using the same node names that were used in a previous deployment, the Cluster Utilization Report may erroneously indicate utilization that exceeds 100 percent of the available cores on the Windows Azure nodes. No workaround is available.

In the following scenario, the existing Windows Azure nodes of a deployment that uses a specific Windows Azure node template enter the Error health state, and they can no longer be used to run HPC jobs:

  • New Windows Azure nodes are added by using the same node template.

  • A deployment failure occurs when you try to start the new nodes. For example, this occurs if the number of new nodes added exceeds the Windows Azure subscription limit for cores.

Workaround   To use the existing Windows Azure nodes of the deployment that are in the Error health state to run jobs, first stop the nodes. Start the nodes to redeploy them, and then bring the nodes online.

In a cluster that is configured for high availability of the head node, if the primary head node (the head node that owns the clustered file server) fails or becomes unavailable under certain conditions while Windows Azure nodes are running jobs, the nodes may not respond to actions to be taken offline or to stop.

This problem can occur in the following scenarios:

  1. You attempt to take the nodes offline while the job is running. If the head node fails while the nodes are draining, some nodes do not go offline. Nodes that do go offline cannot be stopped; they remain offline.

  2. You attempt to stop the nodes while the job is running. If the head node fails while the nodes are draining, the stop operation fails, and the nodes revert to the Online state.

  3. The head node fails while the job is running, and the job that is running on the nodes completes. If you then attempt to stop the nodes, the stop operation fails, and the nodes revert to the Online state.

Workarounds   When the failed head node comes back online, you can perform actions to stop or take offline the Windows Azure nodes. The following workarounds are also possible before the failed head node becomes available:

  • If you are unable to stop the nodes, you can first take the nodes offline, and then try to stop the nodes.

  • If you are unable to take the Windows Azure nodes offline, you can use the Windows Azure Portal to delete the nodes.

To specify a file path properly when passing a file or folder parameter to hpcfile, ensure that you use a fully qualified path name that begins with a drive letter (for example, C:\logs), not a relative path. If you use a relative path, hpcfile prepends the path with its working directory, E:\approot.

You may encounter failures when you use certain HPC API functions (such as IScheduler.GetNodesInGroup) or HPC PowerShell commands from a non-domain-joined client computer. These functions expect fully qualified domain names, which are not created on non-domain-joined computers.

Workaround   You can set the DNS suffix for the client computer to match the domain of the head node, or join the client computer to a trusted domain. For more information, see To change the Domain Name System (DNS) suffix of your computer.

The node template settings to enable Windows Azure Connect for IPsec-protected connections between your on-premises computers and Windows Azure nodes are deprecated as of this release. They will be removed in a later release of HPC Pack.
