New Feature Evaluation Guide for Windows HPC Server 2008 R2 SP1

Updated: April 2011

Applies To: Windows HPC Server 2008 R2

This guide provides scenarios and steps to try new features in HPC Pack 2008 R2 Service Pack 1.

Before following the steps in this guide, review the Release Notes for Microsoft HPC Pack 2008 R2 Service Pack 1. The release notes also include instructions and information for installing SP1.

This guide includes the following scenarios:

Integration with Windows Azure

  • Upload a SOA service to a Windows Azure storage account

  • Upload an XLL file to a Windows Azure storage account

  • Upload batch job assemblies to a Windows Azure storage account

  • Manually deploy uploaded packages to Windows Azure nodes

Cluster management

  • Configure availability of workstation nodes based on detected user activity

Job scheduling

  • Mark a task as critical

  • Submit new jobs from a running job

Integration with Windows Azure

The scenarios in this section help you try new Windows Azure integration features in HPC Pack 2008 R2 SP1.

For information about deploying Windows Azure nodes, see Steps for Deploying Windows Azure nodes in a Windows HPC Server Cluster.

Note
Sessions that are running on Azure nodes are more likely to reach the message resend limits than sessions that are running on on-premises nodes, especially if the HPC Job Scheduler Service is running in Balanced mode. If you see messages fail on your Azure nodes, try increasing the message resend limit (the default value is 3). This attribute can be set in the load balancing settings in the service configuration file (the <loadBalancing> element is in the <Microsoft.Hpc.Broker> section). For more information, see the broker settings section in SOA Service Configuration Files in Windows HPC Server 2008 R2.
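
For example, the following is a minimal sketch of the corresponding excerpt from a service configuration file. The messageResendLimit attribute name follows the broker settings documentation referenced above; confirm it against your own service configuration file before editing:

    <Microsoft.Hpc.Broker>
      <!-- Raise the resend limit from the default of 3 for Azure nodes -->
      <loadBalancing messageResendLimit="5" />
    </Microsoft.Hpc.Broker>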

Upload a SOA service to a Windows Azure storage account

Scenario

You have deployed, or plan to deploy, a set of Windows Azure nodes in your HPC cluster, and you want to upload SOA service files to the Windows Azure storage account.

Note
When you provision a set of Windows Azure nodes from HPC Cluster Manager, any applications or files that are on the storage account are automatically deployed to the Azure nodes. If you upload file packages to storage after the Azure nodes are started, you can use clusrun and HpcSync to manually deploy the files to the Azure nodes. For more information, see Manually deploy uploaded packages to Windows Azure nodes.

Goal

Create a package and upload SOA service files to a Windows Azure storage account.

Requirements

  • A head node with Windows HPC Server 2008 R2 SP1 installed.

  • A Windows Azure subscription.

  • An Azure Worker Role node template created.

Important considerations

  • Azure nodes cannot access on-premises nodes or shares directly.

  • Services that get data through message requests can run on Azure with no changes to the service or the client. Services that require access to databases or other external data sources must include code that uses the Azure APIs to access data (see the sketch after this list).

  • Tasks executing on an Azure node cannot instantiate a connection with the head node.

  • To submit SOA jobs to the cluster, you must have a WCF Broker Node configured and Online.
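
As an illustration of the data access consideration above, the following is a minimal PowerShell sketch of a service or task that pulls its input from Windows Azure blob storage instead of an on-premises share. The SDK assembly path, the connection string variable, and the container and blob names are all assumptions for the sketch; adjust them for your deployment.

    # Load the Windows Azure StorageClient library (path is an assumption; use the
    # assembly that ships with your version of the Windows Azure SDK).
    Add-Type -Path "C:\Program Files\Windows Azure SDK\v1.4\ref\Microsoft.WindowsAzure.StorageClient.dll"

    # Parse the storage account from a connection string that the task supplies.
    $account = [Microsoft.WindowsAzure.CloudStorageAccount]::Parse($env:MyStorageConnectionString)

    # Create a blob client and download the input file to the local node.
    $client = New-Object Microsoft.WindowsAzure.StorageClient.CloudBlobClient($account.BlobEndpoint, $account.Credentials)
    $blob = $client.GetContainerReference("inputdata").GetBlobReference("input.dat")
    $blob.DownloadToFile("$env:TEMP\input.dat")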

Steps

To package and upload your SOA service files to the Windows Azure storage account:

  1. If the SOA service is not already registered and deployed to the on-premises cluster, register the SOA service by placing a copy of the service configuration file in the service registration folder on the head node (typically this is C:\Program Files\Microsoft HPC Pack 2008 R2\ServiceRegistration). For detailed information, see Deploy and Edit the Service Configuration File.

  2. Copy the service configuration file, the service assembly, and any dependent DLLs to an empty folder; for example, a folder named C:\AzurePackages\myServiceFiles.

  3. At a command prompt window, run hpcpack create, specifying a name for your package and the folder that contains the files that you want to package.

    Important
    The name of the package must be the name of the SOA service (that is, the service name that the SOA client specifies in the SessionStartInfo constructor; see the sketch after these steps).
    For example, to package the content of C:\AzurePackages\myServiceFiles as myServiceName.zip:

    hpcpack create C:\AzurePackages\myServiceName.zip C:\AzurePackages\myServiceFiles

  4. Run hpcpack upload to upload the package to your Windows Azure storage account. You can specify an account by using the node template name, and optionally the name of the head node (if there is no default head node specified on your computer). For example:

    hpcpack upload C:\AzurePackages\myServiceName.zip /nodetemplate:myAzureNodeTemplate /scheduler:myHeadNode

  5. Run hpcpack list to verify that the package was uploaded. For example:

    hpcpack list /nodetemplate:myAzureNodeTemplate /scheduler:myHeadNode

If your Azure nodes are already started, then you must deploy the uploaded packages before you can run your SOA jobs. See Manually deploy uploaded packages to Windows Azure nodes.
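
To illustrate the package-name requirement in step 3: the SOA client identifies the service by the same name when it creates a session. The following is a minimal HPC PowerShell sketch; the assembly path is an assumption, so adjust it for your installation.

    # Load the HPC session API (path is an assumption).
    Add-Type -Path "C:\Program Files\Microsoft HPC Pack 2008 R2\Bin\Microsoft.Hpc.Scheduler.Session.dll"

    # The second argument is the service name; it must match the package name
    # (myServiceName.zip) that you created and uploaded in the steps above.
    $startInfo = New-Object Microsoft.Hpc.Scheduler.Session.SessionStartInfo("myHeadNode", "myServiceName")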

Expected results

  • You can run the service client with no changes, just as you would if the service were running on-premises.

  • When you run hpcpack list, you see information about the package that you uploaded.

Upload an XLL file to a Windows Azure storage account

Scenario

You have deployed, or plan to deploy, a set of Windows Azure nodes in your HPC cluster, and you want to upload XLL files to the Windows Azure storage account so that you can run your UDF offloading jobs on Azure nodes.

Note
When you provision a set of Windows Azure nodes from HPC Cluster Manager, any applications or files that are on the storage account are automatically deployed to the Azure nodes. If you upload file packages to storage after the Azure nodes are started, you can use clusrun and HpcSync to manually deploy the files to the Azure nodes. For more information, see Manually deploy uploaded packages to Windows Azure nodes.

Goal

Create a package and upload an XLL file to a Windows Azure storage account.

Requirements

  • A head node with Windows HPC Server 2008 R2 SP1 installed.

  • A Windows Azure subscription.

  • An Azure Worker Role node template created.

  • A cluster-enabled XLL file.

Important considerations

  • Azure nodes cannot access on-premises nodes or shares directly.

  • To submit UDF offloading jobs to the cluster, you must have a WCF Broker Node configured and Online.

Steps

To package and upload your XLL files to the Windows Azure storage account:

  1. At a command prompt window, run hpcpack create to package your XLL. The name of the package must be the same as the name of the XLL file.

    For example, to package C:\myFiles\myXLL.xll as myXLL.zip (and save the package to a folder named C:\AzurePackages):

    hpcpack create C:\AzurePackages\myXLL.zip C:\myFiles\myXLL.xll

    Note
    If the XLL has dependencies on DLLs or other files, copy the XLL and its dependencies to a folder, and specify the name of the folder instead of the .xll file in the hpcpack create command. For example: hpcpack create C:\AzurePackages\myXLL.zip C:\myFiles\myXLLFolder

  2. Run hpcpack upload to upload the package to your Windows Azure storage account. You can specify an account by using the node template name, and optionally the name of the head node (if there is no default head node specified on your computer). For example:

    hpcpack upload C:\AzurePackages\myXLL.zip /nodetemplate:myAzureNodeTemplate /scheduler:myHeadNode

  3. Run hpcpack list to verify that the package was uploaded. For example:

    hpcpack list /nodetemplate:myAzureNodeTemplate /scheduler:myHeadNode

If your Azure nodes are already started, then you must deploy the uploaded packages before you can run your UDF jobs. See Manually deploy uploaded packages to Windows Azure nodes.

Expected results

  • You can offload UDFs to the cluster from Excel 2010 with no changes, just as you would if the XLLs were deployed on-premises.

  • When you run hpcpack list, you see information about the package that you uploaded.

Upload batch job assemblies to a Windows Azure storage account

Scenario

You have deployed, or plan to deploy, a set of Windows Azure nodes in your HPC cluster, and you want to upload batch job assembly files to the Windows Azure storage account.

Note
When you provision a set of Windows Azure nodes from HPC Cluster Manager, any applications or files that were copied to the storage account by using hpcpack upload are automatically deployed to the Azure nodes. If you upload file packages to storage after the Azure nodes are started, you can use clusrun and HpcSync to manually deploy the files to the Azure nodes. For more information, see Manually deploy uploaded packages to Windows Azure nodes.

Goal

Create a package and upload files to a Windows Azure storage account.

Requirements

  • A head node with Windows HPC Server 2008 R2 SP1 installed.

  • A Windows Azure subscription.

  • An Azure Worker Role node template created.

Important considerations

  • Azure nodes cannot access on-premises nodes or shares directly. For example, an executable in a task cannot write data to a file on a share or redirect data to a file on a share. You can package input data with your executable and upload it to the worker nodes. Task output of up to 4 MB can go to the job scheduler database, where it is accessible in the task's output field (see the sketch after this list).

  • Tasks executing on an Azure node cannot instantiate a connection with the head node. For example, a task cannot contact the job scheduler to get information about the job, and a task cannot spawn a new task or job.
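
As a hedged illustration of reading that output, the following HPC PowerShell sketch lists the captured output of each task in a job; it assumes that Get-HpcTask accepts a job from the pipeline and that the task objects expose the output field as an Output property.

    # List each task's captured output for an example job with ID 42.
    Get-HpcJob -Id 42 -Scheduler myHeadNode | Get-HpcTask | Format-Table TaskId, Output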

Steps

To package and upload your files to the Windows Azure storage account:

  1. Create a folder on your head node for your packages, such as C:\AzurePackages.

  2. At a command prompt window, run hpcpack create, specifying a name for your package and the folder or files that you want to package. For example:

    hpcpack create C:\AzurePackages\packageName.zip C:\myExeFolder

    or

    hpcpack create C:\AzurePackages\packageName.zip C:\myExeFolder\myExe.exe, C:\myExeFolder\myExe.config

  3. Run hpcpack upload to upload the package to your Windows Azure storage account. You can specify an account by using the node template name, and optionally the name of the head node (if there is no default head node specified on your computer). For example:

    hpcpack upload C:\AzurePackages\packageName.zip /nodetemplate:myAzureNodeTemplate /scheduler:myHeadNode /relativePath:myExeDir

    Note
    When you specify a relative path, you can use that path in your command line when you submit the batch job. For example: job submit %CCP_PACKAGE_ROOT%\<relativePath>\myExe.exe.

  4. Run hpcpack list to verify that the package was uploaded. For example:

    hpcpack list /nodetemplate:myAzureNodeTemplate /scheduler:myHeadNode

If your Azure nodes are already started, then you must deploy the uploaded packages before you can run your batch jobs. See Manually deploy uploaded packages to Windows Azure nodes.

Expected results

  • You can run a batch job on Azure Worker Role nodes in the same way that you run jobs on on-premises nodes. For example, job submit %CCP_PACKAGE_ROOT%\<relativePath>\myExe.exe.

  • When you run hpcpack list, you see information about the package that you uploaded.

Manually deploy uploaded packages to Windows Azure nodes

Scenario

You have uploaded new batch assembly, XLL, or service file packages to the Azure storage account associated with a set of running Windows Azure nodes. You want to deploy the new files to the worker nodes.

Note
The worker nodes automatically run HpcSync when they are restarted. HpcSync copies all packages from the storage account to the worker nodes.

Goal

Deploy packages to the worker nodes.

Requirements

  • A head node with Windows HPC Server 2008 R2 SP1 installed.

  • A set of Windows Azure nodes joined to your HPC cluster.

  • One or more packages uploaded to your Windows Azure storage account.

Steps

You can use clusrun and HpcSync to deploy the files from the Windows Azure storage account to the Windows Azure nodes.

For example:

clusrun /nodegroup:AzureWorkerNodes hpcsync

To see a list of folders or files that have been deployed to the Windows Azure nodes, you can run the following command:

clusrun /nodegroup:AzureWorkerNodes dir %CCP_PACKAGE_ROOT% /s

Expected results

The files are successfully deployed to the worker nodes.

Cluster management

The scenarios in this section help you try new management features in HPC Pack 2008 R2 SP1.

Configure availability of workstation nodes based on detected user activity

Scenario

A portion of your HPC workload consists of small, short-running jobs that can be stopped and restarted. These jobs are ideal to run on workstation nodes. You want to use the workstations on your network during working and non-working hours, but only when the workstation users are not actively using them. You don't want a workstation to come online for HPC jobs if user activity is detected, and you want HPC jobs to vacate the workstation immediately when a user becomes active again.

Goal

Add workstation nodes to your HPC cluster and configure the availability policy so that nodes come online when there is no user activity detected. In the availability policy, define the following:

  • The days and times during which you want to try to use workstations for HPC jobs.

  • The number of minutes without keyboard or mouse input after which a workstation is considered idle.

  • The CPU usage threshold (that is, usage must be under the threshold for the workstation to be considered idle).

Requirements

  • A cluster with HPC Pack 2008 R2 SP1 installed.

  • One or more workstation computers running the Windows 7 operating system.

  • The workstation computers and the head node computer must be joined to the same domain.

  • Administrative permissions on the cluster.

Steps

To configure availability of workstation nodes based on detected user activity:

  1. In HPC Cluster Manager, create a node template for the workstations:

    1. In Configuration, click Node Templates, and then click New.

    2. Use the Create Node Template Wizard to create a Workstation node template.

    3. On the Configure Availability Policy page, select Bring workstation nodes online and offline automatically and then click Configure Availability Policy.

    4. In the availability policy dialog box, select the days and times during which you want to try to use the workstations for HPC jobs.

    5. In the user activity detection options:

      • Select the checkbox and configure the number of minutes without keyboard or mouse input after which the workstation can be brought online for jobs.

      • Select the checkbox and configure the threshold for CPU usage. For example, if you select 30%, a workstation with a CPU usage over 30% will not be brought online for jobs. (This option is only available if the keyboard input option is selected.)

  2. Install HPC Pack 2008 R2 SP1 on the workstations and select the Join an existing HPC cluster by creating a new workstation node option.

  3. The nodes appear in Node Management as Unapproved. If the nodes do not appear, verify the network connection between the head node and the workstations.

  4. Assign the workstation template that you created to the nodes. The workstation node state should change from Unapproved to Offline. If you have an availability policy set, then the workstation will be brought online according to the policy.

Important
If the workstation nodes have HPC Pack 2008 R2 but do not have SP1 installed, the activity detection settings in the node template cannot be applied. The workstations will be brought online according to the days and times that you selected in the node template. If you have some workstation nodes with only HPC Pack 2008 R2 (version 3.0.xxxx.x) and some with SP1 installed (version 3.1.xxxx.x), you can create separate node templates with different availability policies. In HPC Cluster Manager, you can add the Version column in the node list view, and then sort the nodes by version number.
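
If you prefer to check versions from HPC PowerShell, the following sketch lists the workstation nodes sorted by version; it assumes that the node objects returned by Get-HpcNode expose the same Version value as the node list view.

    # Sort workstation nodes by HPC Pack version to decide which template each should get.
    Get-HpcNode -GroupName WorkstationNodes -Scheduler myHeadNode | Sort-Object Version | Format-Table NetBiosName, NodeState, Version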

Expected results

  • Workstation nodes become available according to the configured availability policy and user activity detection options.

  • The HPC workload exits quickly when a workstation user becomes active (no noticeable delay for the workstation user).

  • Workstation nodes that do not have SP1 installed become available only according to the days and times that you selected in the node template. The user activity detection settings are not applied.

Related Resources

Adding Workstation Nodes in Windows HPC Server 2008 R2 Step-by-Step Guide

Job scheduling

The scenarios in this section help you try new job scheduling features in HPC Pack 2008 R2 SP1.

Mark a task as critical

Scenario

One or more specific tasks in your job are critical. If those tasks fail, you want the entire job to stop running and be marked as Failed.

Alternatively, you might be running a SOA job (or other job with sub-tasks), and it is okay if a few instances of the service fail (this can happen if a particular sub-task is preempted or stopped while running for any reason). However, if enough sub-tasks fail, this could indicate a problem with the service itself, so you want the SOA job to stop and be marked as Failed.

Goal

Create a task with the FailJobOnFailure task property set to true. This marks the task as critical, and causes the job to stop and be marked as Failed if the task fails.

Create a task with the FailJobOnFailureCount task property set, to specify how many sub-tasks can fail before the task reaches a critical failure level and causes the job to stop and be marked as Failed.

Note
When a job is failed due to the failure of a critical task, the tasks in the job will be canceled (running tasks will be marked as Failed, queued tasks will remain marked as Queued), and the job will be marked as Failed.

Requirements

  • A cluster with HPC Pack 2008 R2 SP1 installed.

  • The SP1 client utilities installed on the client computer.

  • User permissions on the cluster.

Steps

To mark a task as critical, you can set the FailJobOnFailure task property to true. For Parametric Sweep and Service tasks, you can additionally set the FailJobOnFailureCount property to specify how many sub-tasks can fail before the job should be stopped and failed.

For example, you can use one of the following methods to mark a task as critical:

Using a command prompt window:

  • To mark a basic task as critical:

    job add <jobID> /type:basic /failJobOnFailure:true myApp.exe

  • To mark a parametric task as critical, and specify that up to 5 sub-tasks can fail before the job should be stopped and failed:

    job add <jobID> /parametric:1:100 /failJobOnFailure:true /failJobOnFailureCount:5 myApp.exe

Using HPC PowerShell:

  • To mark a basic task as critical:

    Add-HpcTask -jobID <jobID> -type Basic -failJobOnFailure $true -command "myApp.exe"

  • To mark a parametric task as critical, and specify that up to 5 sub-tasks can fail before the job should be stopped and failed:

    Add-HpcTask -jobID <jobID> -type ParametricSweep -start 1 -end 100 -failJobOnFailure $true -failJobOnFailureCount 5 -command "myApp.exe"

You can mark tasks as critical through the API with the ISchedulerTask.FailJobOnFailure and ISchedulerTask.FailJobOnFailureCount properties.
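
For the API route, the following is a rough HPC PowerShell sketch that creates and submits a job with a critical task (the assembly path is an assumption, and error handling is omitted):

    # Load the scheduler API (path is an assumption; adjust for your installation).
    Add-Type -Path "C:\Program Files\Microsoft HPC Pack 2008 R2\Bin\Microsoft.Hpc.Scheduler.dll"

    $scheduler = New-Object Microsoft.Hpc.Scheduler.Scheduler
    $scheduler.Connect("myHeadNode")

    $job = $scheduler.CreateJob()
    $task = $job.CreateTask()
    $task.CommandLine = "myApp.exe"
    $task.FailJobOnFailure = $true    # mark this task as critical
    $job.AddTask($task)

    # Submit the job; passing $null for the user name and password prompts for,
    # or reuses, cached credentials.
    $scheduler.SubmitJob($job, $null, $null)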

Note
If a job should fail when any of its tasks fail, you can use the FailOnTaskFailure job property instead.

Expected results

If the critical task fails, the entire job stops and is marked as Failed.

Related Resources

Windows HPC Server 2008 R2 Cmdlets for Windows PowerShell

Submit new jobs from a running job

Scenario

You want to build a dependency tree of jobs where a task in one job configures and submits new jobs.

Goal

Submit a job that configures and submits new jobs.

Requirements

  • A cluster with HPC Pack 2008 R2 SP1 installed.

  • The SP1 client utilities installed on the client computer.

  • User permissions on the cluster.

  • Credential reuse enabled on the cluster.

Note
HPC Pack 2008 R2 SP1 includes a cluster property named DisableCredentialReuse. By default, this is set to false (that is, credential reuse is enabled).
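
To check or change this property from a PowerShell or command prompt window, a hedged sketch follows; it assumes that DisableCredentialReuse appears in the cluster-wide parameter list that cluscfg manages.

    # Show the current cluster-wide parameters, including DisableCredentialReuse.
    cluscfg listparams /scheduler:myHeadNode

    # Explicitly re-enable credential reuse if it has been disabled.
    cluscfg setparams DisableCredentialReuse=false /scheduler:myHeadNode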

Steps

In previous releases, to enable a job to submit new jobs, you had to either set your credentials on all compute nodes or provide your password through the task that submitted the new jobs. In this release, if you choose to save your password (or set credentials by using cluscfg setcreds, as in the sketch below), your credentials can be reused by your job.
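
For example, to cache your password on the cluster ahead of time (a minimal sketch; cluscfg prompts you to enter the password):

    # Cache the credentials for your account so that jobs submitted from within
    # tasks can reuse them.
    cluscfg setcreds /scheduler:myHeadNode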

To submit a job that submits new jobs:

  1. Create and submit a job that configures and submits new jobs.

    For example, to submit a parametric sweep that creates and submits four new jobs, type the following at a command prompt window:

    job submit /parametric:1:4 echo hello1 ^& job submit echo hello2

  2. When prompted, enter your credentials and select the option to remember your password.

Expected results

The job that you submitted successfully submits new jobs.

See Also

Windows HPC Server 2008 R2