What's New in Microsoft HPC Pack 2012 Service Pack 1 (SP1)
Updated: January 13, 2014
Applies To: Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2
This document lists the new features and changes that are available in Microsoft® HPC Pack 2012 Service Pack 1 (SP1). For information about HPC Pack 2012, see What's New in Microsoft HPC Pack 2012.
In this topic:
Additional configuration options for Windows Azure virtual networks Service Pack 1 supports connectivity between Windows Azure and an on-premises network without requiring a virtual private network (VPN) hardware device. You can now use the Routing and Remote Access service (RRAS), configured in an on-premises server that is running Windows Server 2012, to connect to a Windows Azure virtual network. To use this service, you must create a dynamic-routing virtual network gateway. Additionally, when you are configuring Windows Azure node deployments in which a Windows Azure virtual network is available, you now specify a connection to either an on-premises network (via a hardware or software VPN gateway) or a subnet in the virtual network. You can use the latter configuration to connect to worker role instances from a head node that is deployed in a virtual machine that is hosted in Windows Azure.
At this time, HPC Pack 2012 does not support configuration of a point-to-site VPN. Point-to-site VPN is currently available in Windows Azure as a preview.
Simplified configuration of the Windows Azure management certificate for burst deployments The Windows Azure management certificate for a burst deployment only needs to be installed in the Windows Azure subscription and in the Local Computer\Personal and Local Computer\Trusted Root Certification Authority certificate stores on the head node. You no longer need to install the certificate in the Current User certificate stores or on any client computers. In addition, if you choose to use the Default HPC Azure Management Certificate with HPC Pack 2012 with SP1, you only need to upload the certificate to the Windows Azure subscription to complete the certificate configuration. For more information, see Options to Configure the Azure Management Certificate for Azure Burst Deployments.
Additional virtual machine sizes for Windows Azure node deployments You can now use the recently introduced A6 and A7 virtual machine sizes to add Windows Azure nodes to a cluster. These sizes are in addition to the Small, Medium, Large, and Extra Large virtual machine sizes that were previously supported in HPC Pack. These virtual machine sizes provide additional combinations of CPU cores, memory, and other properties for your workloads in Windows Azure. For details about the virtual machine sizes, see Virtual Machine and Cloud Service Sizes for Windows Azure.
Additional logging options for Windows Azure node deployments Service Pack 1 introduces additional options for storing trace log information from Windows Azure nodes to persistent Windows Azure storage. You can now use Windows Azure blob storage to automatically capture trace log files from Windows Azure proxy nodes, compute nodes, or both. You might choose to do this to ensure that you have log data available for troubleshooting after a Windows Azure node deployment is stopped, and log files can no longer be accessed directly on the role instances. To enable saving the log files in the Windows Azure storage account that you configure for the deployment, in HPC Cluster Manager, on the Options menu, click Windows Azure Deployment Configuration. You can also configure a new cluster property, AzureLogstoBlob, by using the Set-HpcClusterProperty HPC PowerShell cmdlet. For more information, see Troubleshoot Deployments of Azure Nodes with Microsoft HPC Pack.
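As a minimal sketch of the PowerShell path described above (the property name is taken from this document; the value is a placeholder, since the accepted values are not listed here — check the cmdlet help on the head node):

```powershell
# Illustrative only: enable saving Azure node log files to blob storage.
# Run in an HPC PowerShell session on the head node.
# Valid values for this property (for example, capturing proxy node logs,
# compute node logs, or both) are documented in the cmdlet help.
Set-HpcClusterProperty -AzureLogstoBlob <value>

# Review the current setting:
Get-Help Set-HpcClusterProperty -Detailed
```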
Saving log files to blob storage uses storage space and generates storage transactions on the storage account that is associated with each deployment. The storage space and the storage transactions will incur charges according to the subscription terms. Log files that were already saved are not deleted automatically if you later disable saving the files to blob storage.

Option to collect troubleshooting data for Microsoft Customer Service and Support You can opt to collect on the head node, and upload to Microsoft, certain data about the availability, connectivity, and performance of your Windows Azure node deployments. You might choose to do this if you need to open a support incident that is related to a Windows Azure node deployment. Microsoft Customer Service and Support will use the data for troubleshooting purposes and for feature improvement. To enable data collection, in HPC Cluster Manager, on the Options menu, click Windows Azure Support Data Collection. You can also configure a new cluster property, AzureMetricsCollectionEnabled, by using the Set-HpcClusterProperty HPC PowerShell cmdlet. For more information about the data collection, see the Microsoft HPC Pack Privacy Statement.
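The PowerShell alternative mentioned above can be sketched as follows (the property name comes from this document; the Boolean value is an assumption based on the property's "Enabled" naming — verify with the cmdlet help):

```powershell
# Illustrative sketch: turn on collection of Azure deployment support data
# on the head node. The $true value is an assumption; confirm the accepted
# values with Get-Help Set-HpcClusterProperty.
Set-HpcClusterProperty -AzureMetricsCollectionEnabled $true
```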
Removal of VM role deployment option Because the Windows Azure VM Role feature is now retired, HPC Pack 2012 no longer includes the earlier HPC Pack settings that allowed VM role nodes in deployments of Windows Azure nodes. Only worker role instances can be added as Windows Azure compute resources to an HPC Pack cluster.
Windows Azure HPC Scheduler SDK not updated The Windows Azure HPC Scheduler SDK is not updated with this release. The most recent version of the Windows Azure HPC Scheduler SDK is version 1.8, which is compatible with HPC Pack 2012 and also requires version 1.8 of the Windows Azure SDK for .NET x64. Support for the Windows Azure HPC Scheduler SDK is based on the support lifecycle of the prerequisite version of the Windows Azure SDK for .NET x64. For more information, see Windows Azure Cloud Services Support Lifecycle Policy.
Re-enabling of Internet SCSI (iSCSI) deployment HPC Pack 2012 with SP1 again supports deployment of iSCSI boot nodes. This functionality was disabled in the release to manufacturing (RTM) version of HPC Pack 2012. Service Pack 1 supports an HPC Pack 2012 storage provider that is available from the Microsoft Download Center.
Ability to configure number of missed heartbeats separately for on-premises nodes and Windows Azure nodes A new cluster property, InactivityCountAzure, is available to configure the number of missed heartbeats after which the cluster considers worker nodes that are deployed in Windows Azure unreachable. This property supplements the InactivityCount property that is available in earlier versions of HPC Pack and that now is used specifically to determine whether on-premises nodes are reachable. Because of possible latency when the cluster is attempting to reach Windows Azure nodes, the default value of InactivityCountAzure (10) is greater than the default value of InactivityCount (3). You can set the properties by using the cluscfg command or the Set-HpcClusterProperty HPC PowerShell cmdlet.
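A minimal sketch of adjusting the two thresholds independently (the property names and defaults are from this document; the value 15 is purely illustrative):

```powershell
# Illustrative: raise the missed-heartbeat threshold for Windows Azure
# nodes (default 10) while leaving the on-premises threshold (default 3)
# unchanged. Run in an HPC PowerShell session on the head node.
Set-HpcClusterProperty -InactivityCountAzure 15

# The same setting can be made with the cluscfg command, for example:
#   cluscfg setparams InactivityCountAzure=15
```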
Enhanced HPC service logging To help with troubleshooting HPC Pack services and cluster processes, trace logs for the services are now collected in a compact binary format, and they replace the text service log files that previous versions of HPC Pack generated. For more information, see Using Service Log Files for HPC Pack.
Additional cluster roles for job submission and management Service Pack 1 introduces two new cluster roles, in addition to the existing cluster user and administrator roles, that allow specified members of the cluster to perform only the following job-related functions:
Job administrator Can perform all the functions of any job owner on the cluster, including create, submit, view, cancel, finish, modify, re-queue, export or import, and copy any job; and view, add, cancel, and modify any task.
Job operator Can view, cancel, finish, or re-queue any job.
You can add members to the cluster in specific roles by using HPC Cluster Manager (in Configuration, click Users). You can also specify the roles when you are using the HPC PowerShell cmdlets Add-HpcMember, Get-HpcMember, and Remove-HpcMember or when you are using the HPC Pack application programming interfaces (APIs). Additionally, members can be added in more than one role and are granted the union of the privileges of the roles.
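As one possible sketch of the cmdlet-based path described above (the user name is hypothetical, and the exact role value strings should be confirmed with Get-Help Add-HpcMember):

```powershell
# Illustrative: grant the new job administrator role to a domain user,
# then review cluster membership. CONTOSO\jsmith is a placeholder account.
Add-HpcMember -Name CONTOSO\jsmith -Role JobAdministrator

# List all members and their roles:
Get-HpcMember
```

Because members can hold more than one role, running Add-HpcMember again with a different role for the same account grants the union of the two roles' privileges, as noted above.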
Graceful preemption setting in Balanced mode To permit the graceful (not immediate) preemption of tasks when jobs are submitted in Balanced scheduling mode, you can set a new cluster property, PreemptionBalancedMode, to Graceful. This is an advanced setting that is intended only for certain cluster scenarios—for example, when you are running many service-oriented architecture (SOA) jobs that consist of many long-running tasks. Immediate and Graceful are the only valid options for this property. By default, PreemptionBalancedMode is set to Immediate, matching the preemption behavior in Balanced mode in previous releases.
The PreemptionType cluster parameter remains from previous releases and configures only preemption behavior in Queued scheduling mode.
Additionally, Service Pack 1 changes the way HPC Pack 2012 preempts tasks in SOA jobs when graceful preemption is configured in Queued scheduling mode. In Service Pack 1, a SOA job ends its tasks as soon as their current requests are finished, even if additional requests remain to be calculated. In previous versions of HPC Pack, a SOA job ended its tasks to release resources for other jobs only after all the requests were calculated.
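The Balanced-mode setting described above can be sketched as follows (the property name and both valid values, Immediate and Graceful, are stated in this document):

```powershell
# Illustrative: switch preemption in Balanced scheduling mode from the
# default (Immediate) to Graceful, so running tasks finish their current
# work before releasing resources. Run on the head node.
Set-HpcClusterProperty -PreemptionBalancedMode Graceful
```

Note that this property affects only Balanced mode; preemption behavior in Queued mode continues to be governed by the PreemptionType parameter.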
Compatibility of HPC Services for Excel with Microsoft Excel 2013 In Service Pack 1, HPC Services for Excel in HPC Pack 2012 is compatible with Excel 2013 in addition to Microsoft Excel 2010.
Update of HPC cluster debugger add-ins for SOA and Message Passing Interface (MPI) applications The HPC debugger add-ins for Microsoft Visual Studio for SOA and MPI applications are updated for HPC Pack 2012 and for Microsoft Visual Studio 2012. For more information, see the download details at the Microsoft Download Center.