
What's New in Windows HPC Server 2008 R2 Service Pack 3

Updated: May 2, 2012

Applies To: Windows HPC Server 2008 R2

This document lists the new features and changes that are available in Service Pack 3 (SP3) for Microsoft® HPC Pack 2008 R2. For information about downloading and installing SP3, see Release Notes for Microsoft HPC Pack 2008 R2 Service Pack 3.

The following features are new for integration with Windows Azure:

  • Expanded file-moving capabilities with the hpcpack and hpcsync command line tools. In SP2, the hpcpack command line tool provided the ability to package and copy files to a Windows Azure storage account from an on-premises head node or a client computer (hpcpack upload), and hpcsync provided the ability to copy packages from storage to a set of Azure nodes. In SP3, the hpcpack and hpcsync command line tools include the following additional functionality to help move files in and out of Windows Azure (for more information, run hpcpack /? on a computer that has the SP3 HPC client utilities installed):

    • Hpcpack download. Copy files from a storage account by using hpcpack download.
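
A command along the following lines (the account name, key, and file name are placeholders; run hpcpack download /? to confirm the exact parameters) might retrieve a package from the default container:

```shell
rem Download a package from the Windows Azure storage account to the
rem current directory. The account name, key, and file name below are
rem placeholders.
hpcpack download myPackage.zip /account:mystorageaccount /key:StorageAccountPrimaryKey
```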

    • Hpcpack installed on Windows Azure VM nodes automatically. Hpcpack is installed automatically on Windows Azure VM nodes that you deploy (but not currently on Windows Azure worker nodes), so you can copy files from the storage account to an Azure node, or from an Azure node to the storage account. For example, you can run hpcpack upload or hpcpack download as part of a startup script for your Windows Azure deployment, or as part of a job.

    • Upload to or download from a specific storage container. Packages that are uploaded to the default storage container (hpcpackages) can be copied to all Azure nodes by running the hpcsync command. Hpcsync runs automatically when new nodes are deployed. You can upload files to a different storage container if you want to stage or persist files that should not be deployed to each Azure node. Use the /container parameter in the hpcpack upload and hpcpack download commands to specify a different storage container.
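
For example, a sketch along these lines (all names and the key are placeholders) uploads data to a custom container so that hpcsync does not deploy it to every Azure node:

```shell
rem Upload staged data to a custom container instead of the default
rem hpcpackages container. All names and the key are placeholders.
hpcpack upload inputdata.zip /account:mystorageaccount /key:StorageAccountPrimaryKey /container:stagingdata
```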

    • Finer control over what gets copied with hpcsync. When you run hpcsync manually or as part of a script or job, you can target a specific container, folder, or package in the storage account. You can specify a target folder as an optional argument for hpcsync (hpcsync <storageAccountName> <primaryKey> <targetFolder>), or use the /container or /packagename parameters for hpcsync. The /packagename parameter accepts the wildcard character (*), for example /packagename:"myFiles*.dat". If you do not specify any of these options, hpcsync will copy files from the default target container (hpcpackages).
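
As a sketch (the account name, key, and container are placeholders):

```shell
rem Copy only the packages that match a wildcard pattern, from a
rem specific container, to this Azure node. Names are placeholders.
hpcsync mystorageaccount StorageAccountPrimaryKey /container:stagingdata /packagename:"myFiles*.dat"
```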

    • Hpcpack and hpcsync can be run by cluster users. Hpcpack and hpcsync no longer require cluster administrator credentials. To run these commands, the cluster user must have the storage account name and primary access key. Job owners can now stage files to the storage account, and include hpcpack upload or hpcpack download commands in their job descriptions. For example, they can use hpcpack download to copy a specific input file for a task, and then use hpcpack upload to save output files back to storage.
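
For example, a task command line along these lines (myapp.exe, the container, and all names are hypothetical placeholders) could stage input, run the application, and then save the output:

```shell
rem Hypothetical task command line: fetch the input file, run the
rem application, then save the result back to the storage account.
hpcpack download input.dat /account:mystorageaccount /key:StorageAccountPrimaryKey /container:jobdata && myapp.exe input.dat output.dat && hpcpack upload output.dat /account:mystorageaccount /key:StorageAccountPrimaryKey /container:jobdata
```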

    • Mount a VHD to your Windows Azure VM nodes directly from the storage account. You can upload a VHD to your Windows Azure storage account by using hpcpack upload (use the /container parameter to specify a new container instead of using the default container that is handled by hpcsync). Then, to mount the VHD file as a drive directly from the storage account, you can run hpcpack mount on your Azure VM nodes (but not currently on Windows Azure worker nodes). You can use hpcpack unmount to unmount the VHD.
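
The workflow might look like the following sketch (the names, key, and container are placeholders; run hpcpack mount /? to confirm the exact parameters):

```shell
rem Upload the VHD to its own container (not the default hpcpackages
rem container, which hpcsync deploys to every node).
hpcpack upload tools.vhd /account:mystorageaccount /key:StorageAccountPrimaryKey /container:vhds

rem On an Azure VM node (for example, in a startup script), mount the
rem VHD as a drive directly from the storage account; unmount it when done.
hpcpack mount tools.vhd /account:mystorageaccount /key:StorageAccountPrimaryKey /container:vhds
hpcpack unmount tools.vhd
```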

    • Extract files automatically after download. Hpcpack download can extract zipped files after downloading, so there is no need to install unzipping tools. This is particularly useful if you are downloading files to Azure nodes. To extract files, specify the /unpack parameter.
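
As a sketch (the account name, key, and file name are placeholders):

```shell
rem Download a zipped package and extract its contents in one step,
rem without installing a separate unzip tool on the node.
hpcpack download results.zip /account:mystorageaccount /key:StorageAccountPrimaryKey /unpack
```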

    • Upload or download any file type. Hpcpack now supports any file type, not only files that are packaged using an Open Packaging Conventions (OPC) format such as zip files.

    • Parallel upload and download. Hpcpack upload and download operations are parallelized to saturate the network and boost performance.

    • File transport over a secured https channel. The default channel is https, so transport is secured by default. If necessary, you can force the communication to http over port 80 by including the /usehttp parameter in the hpcsync command and in most of the hpcpack commands. Hpcpack create, mount, and unmount do not support the /usehttp parameter.
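
For example (the account name and key are placeholders):

```shell
rem Force the transfer over http on port 80; omit /usehttp to keep the
rem default https channel. Account name and key are placeholders.
hpcsync mystorageaccount StorageAccountPrimaryKey /usehttp
```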

    • Compression options for hpcpack create. Hpcpack create includes parameters to specify the compression level to use when creating a package. You can specify /0 (no compression), /1 (compression optimized for a balance between size and performance), or /9 (compression optimized for size).
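
As a sketch (the package name and folder path are placeholders; run hpcpack create /? to confirm the syntax):

```shell
rem Package a folder with maximum compression (/9). Use /0 for no
rem compression, or /1 for a balance between size and speed.
hpcpack create myPackage.zip C:\MyAppFolder /9
```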

  • Port 443 for HPC communication with Azure Nodes. In SP3, most HPC communication between the head node and Azure Nodes is done over port 443. This simplifies firewall settings when adding Azure Nodes to your HPC cluster. This includes communication related to deployment, job scheduling, SOA brokering, and file staging. This does not affect Remote Desktop connections to your Azure Nodes, which will continue to use port 3389. For more information, see Configure the network firewall.

  • Security updates in usage of management certificates for Azure Nodes. SP3 supports a best practices configuration of the Windows Azure management certificate on the head node and on client computers that need to connect to Windows Azure. Each cluster administrator now configures the management certificate and its private key in the Current User\Personal store. This helps restrict access to the private key, providing a more secure configuration than in previous versions of Windows HPC Server 2008 R2. In previous versions, the management certificate with the private key is configured in the Trusted Root Certification Authorities store of the computer, which makes it available to any user on the computer. The certificate configuration in SP2 is still supported, but it is recommended that you move your management certificates to the proper stores now supported in SP3. For more information, see Step 1: Configure the Management Certificate for Windows Azure.
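
On a client computer, a certificate file that includes the private key can be imported into the Current User\Personal store with certutil, for example (the file name and password are placeholders):

```shell
rem Import the management certificate, including its private key, into
rem the Current User\Personal (My) store. The .pfx file name and the
rem password are placeholders.
certutil -user -p CertPassword -importpfx AzureManagement.pfx
```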

The following features are new in job scheduling:

  • Configure task level preemption to minimize unnecessary task cancellation. In SP3, you can configure the immediate preemption policy so that preemption happens at the task level, rather than at the job level. Preemption allows higher priority jobs to take resources away from lower priority jobs. With the default immediate preemption settings, the scheduler will cancel an entire job if any of its resources are needed for a higher priority job. When you enable task level preemption, the scheduler will cancel individual tasks instead. For example, if a Normal priority job is running 100 tasks on 1 core each, and a High priority job is submitted that requires 10 cores, task level preemption will cancel 10 tasks, rather than canceling the entire job. This option can improve job throughput by minimizing the amount of rework that must be done due to preemption. For more information about scheduling policy, see Understanding Policy Configuration.

The following features are new in cluster management:

  • Harvest cycles from servers on your network. In SP3, you can harvest extra cycles from servers on your network that are running the Windows Server 2008 R2 operating system. These servers are not dedicated compute nodes, and can be used for other tasks. This process is much like adding workstation nodes to your cluster. They can automatically become available to run cluster jobs according to configurable usage thresholds, according to a weekly availability policy (for example, every night on weekdays and all day on weekends), or they can be brought online manually.

    Note
    The edition previously known as HPC Pack 2008 R2 for Workstations has been renamed HPC Pack 2008 R2 for Cycle Harvesting. If you have already deployed HPC Pack 2008 R2 for Workstations, you can continue to use it, and patches and service packs still apply to it. Installation on a server requires new media, which you can create either by downloading the new disc from your volume licensing website or by downloading the SP3 integration package and following the instructions that accompany it.

The following features are new for runtime and development:

  • Get node information through the HTTP web service APIs. SP3 adds new APIs to the HTTP web service that can be used to get information about nodes and node groups in the cluster. For more information about using the web service, see Creating and Managing Jobs with the Web Service Interface.
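
As a hypothetical sketch, the node list might be requested with an HTTP client such as curl; the URL path, cluster name, and credentials shown here are assumptions, so check the web service reference for the actual endpoint and authentication options:

```shell
rem Hypothetical request for the cluster's node list over the REST API.
rem Head node name, cluster name, and credentials are placeholders, and
rem the URL path is an assumption based on the web service documentation.
curl -k -u CONTOSO\user:password "https://myheadnode/WindowsHPC/myheadnode/Nodes"
```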

  • Package your HPC applications as a Windows Azure service. The Windows Azure HPC Scheduler SDK enables developers to define a Windows Azure deployment that includes built-in job scheduling and resource management, runtime support for MPI, SOA, parametric sweep, and LINQ to HPC applications, web-based job submission interfaces, and persistent state management of the job queue and resource configuration. Applications that have been built using the on-premises job submission API in Windows HPC Server 2008 R2 can use very similar job submission interfaces in the Windows Azure HPC Scheduler. For more information, see Windows Azure HPC Scheduler.

  • Preview release of the LINQ to HPC programming model for data-parallel applications. LINQ to HPC and the Distributed Storage Catalog (DSC) help developers write programs that use cluster-based computing to manipulate and analyze very large data sets. LINQ to HPC and the DSC include services that run on a Windows HPC cluster, as well as client-side features that are invoked by applications. Code samples are available in the SP3 SDK code sample download, and the programmer's guide is available in the LINQ to HPC section on MSDN.

    Important
    This will be the final preview of LINQ to HPC and we do not plan to move forward with a production release. In line with our announcement in October at the PASS conference, we will focus our effort on bringing Apache Hadoop to both Windows Server and Windows Azure. For more information, see Microsoft to develop Hadoop distributions for Windows Server and Azure and Microsoft Expands Data Platform to Help Customers Manage the ‘New Currency of the Cloud’.
