Managing HPC Services for Excel on the Cluster


Applies To: Windows HPC Server 2008 R2

This topic provides guidelines and procedures that a cluster administrator can use to support HPC Services for Excel on a Windows HPC Server 2008 R2 cluster. To review system requirements, see Requirements for Running Excel Jobs on a Windows HPC 2008 R2 Cluster.


There are several requirements for the nodes that you can use to run Excel jobs. Depending on the type of offloading and on your cluster configuration, some nodes in your cluster might not meet the requirements. You can manage the subsets of nodes in your cluster by creating node groups and job templates to route Excel jobs to the correct set of nodes.

Consider the following requirements for running Excel jobs:

  • You can run Excel jobs on compute nodes with the Enterprise Edition of HPC Pack 2008 R2 installed or on workstation nodes.

  • Microsoft Excel 2010 must be installed on the nodes that will run workbook offloading jobs. (Excel 2010 is not required for UDF offloading jobs.)

  • 32-bit nodes cannot run 64-bit XLL files or workbooks that have 64-bit dependencies (such as XLLs or add-ins).

  • The nodes must be able to access the workbooks or XLL files and any dependencies that they have (such as DLLs, add-ins, or databases).

A node group is a named collection of nodes. Nodes can belong to more than one group. There are four default groups: HeadNodes, WCFBrokerNodes, ComputeNodes, and WorkstationNodes. Which of these groups a node belongs to is determined by the role of the node. You can create custom node groups to help monitor, manage, and diagnose all nodes in a group at once and to direct job resource allocation.

For example, the following list describes some node groups that you can create to help support Excel jobs on your cluster:

  • HaveExcelServices: This group contains compute nodes with the Enterprise Edition of HPC Pack 2008 R2 installed and workstation nodes.

  • HaveExcel2010: This group contains nodes that have Microsoft Excel 2010 installed.

  • 32bit: This group contains nodes with a 32-bit operating system.

  • 64bit: This group contains nodes with a 64-bit operating system.

The following list provides some examples of how you can use the suggested node groups:

  • Run all Excel jobs on nodes that belong to HaveExcelServices.

  • Run workbook offloading jobs on nodes that belong to both HaveExcelServices and HaveExcel2010.

  • Include the 32bit or the 64bit node group in the job description or job template to ensure that the allocated nodes meet the job requirements, or to ensure that 32-bit jobs do not occupy 64-bit nodes.

  • Run the Excel Workbook Configuration Test only on nodes that belong to HaveExcel2010 to avoid expected test failures (the test fails on nodes that do not have Excel 2010 installed).

  • Disable 64-bit service testing on nodes that belong to 32bit when you run the UDF Service Loading Test to avoid expected test failures (the 64-bit test fails on 32-bit nodes).

To create node groups

  1. In HPC Cluster Manager, click Node Management.

  2. In the Navigation Pane, click Nodes.

  3. In Heat Map or List view, select one or more nodes.

  4. Right-click your selection, point to Groups, then click New Group.

  5. In the Add Group dialog box, type a name and a description for the new group.

  6. Click OK to join the selected nodes to the new group. The new group appears in the Navigation Pane under By Group.
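The same groups can be created from HPC PowerShell on the head node. The following is a minimal sketch; the group names, descriptions, and membership shown are examples from this topic, not defaults, and you should add only the nodes that actually meet each requirement:

```powershell
# Sketch: create the suggested custom node groups from HPC PowerShell
# instead of HPC Cluster Manager. Group names and membership are examples.
Add-PSSnapin Microsoft.HPC   # loads the HPC Pack 2008 R2 cmdlets

# Create empty groups with a descriptive label.
New-HpcGroup -Name HaveExcelServices -Description "Nodes that can run Excel jobs"
New-HpcGroup -Name HaveExcel2010 -Description "Nodes with Excel 2010 installed"

# Add nodes to a group; adding every compute node is only an example.
Get-HpcNode -GroupName ComputeNodes | Add-HpcGroup -Name HaveExcelServices
```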

Nodes must be able to access the workbooks or XLL files and any dependencies that they have (such as DLLs, add-ins, or databases). You can copy files locally to each node or to a central file share that is accessible to all nodes. When you select a deployment strategy, consider the following:

  • Start-up time: Locally deployed files generally result in faster load times. Centrally deployed files can result in longer load times, especially on large clusters, busy networks, or when the files are large. For short-running Excel jobs, a longer start-up time decreases the performance improvements that Excel users experience. For long-running Excel jobs, the start-up time is amortized across the length of the job.

  • Frequency of updates: Updating the files that are locally deployed can be time consuming in a large cluster, especially if all the nodes are not online at the same time. Centrally deployed files are easier to update.

  • Whether users need to update the files: Files that are copied locally to compute nodes must be deployed by a cluster administrator. If cluster users need to make updates to files, you can set up a central file share and grant cluster users permission to access the appropriate folders.

To deploy Excel workbooks to the cluster, copy the workbooks and any dependencies to a central file share (for example, \\fileshare\workbooks\myWorkbook.xlsm) or to a local directory on each node (for example, C:\workbooks\myWorkbook.xlsm). If the workbook has software dependencies, ensure that the software is installed on each node. Optionally, specify security settings on the workbooks or their parent folders to define which users can access the files and what level of permissions they have. For more information, see Managing Permissions.
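For local deployment, the clusrun command that is included with HPC Pack can push the files to every node in a group from the head node. The following is a sketch; the share path, local path, and group name are examples:

```powershell
# Sketch: copy workbooks from a central share to the same local directory on
# every node in the HaveExcel2010 group (paths and group name are examples).
clusrun /nodegroup:HaveExcel2010 xcopy /y /i \\fileshare\workbooks C:\workbooks
```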


You must communicate the location of the workbooks to the Excel users. When sending the calculation request, the Excel user must specify the path to the workbook. The Excel Service on the cluster looks for the workbook in the location that the user specifies.

To deploy XLL files to the cluster, copy the XLL file and any of its dependencies (such as DLLs) into the following folders on each node:

  • 32-bit XLL: %CCP_HOME%Bin\XLL32

  • 64-bit XLL: %CCP_HOME%Bin\XLL64

If the XLL file has a software dependency, ensure that the software is installed on each compute node.
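As with workbooks, clusrun can copy an XLL into the expected container folder on each node. The following is a sketch; the share and file names are examples:

```powershell
# Sketch: deploy a 64-bit XLL to the container folder on each node. The file
# and share names are examples. %CCP_HOME% is left unexpanded by PowerShell
# and is resolved on each node, because clusrun runs the command remotely.
clusrun /nodegroup:HaveExcelServices xcopy /y \\fileshare\xlls\myXLL64.xll "%CCP_HOME%Bin\XLL64"
```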

Optionally, specify security settings on the XLL files, or their parent folders, to define which users can access the files. For more information, see Managing Permissions.

The expected location of the XLL files is defined by the path variable setting in the configuration files for the XLL container services. If you want to organize your XLL files into multiple folders or deploy XLL files to a central file share, you can add more folder paths to the path variable. For more information, see Advanced Service Configuration for HPC Services for Excel.

HPC Services for Excel includes two diagnostic tests: the UDF Service Loading Test and the Excel Workbook Configuration Test. These tests verify that UDF and workbook services can be loaded, initialized, and started on the specified nodes. The UDF diagnostic can also verify that a specified XLL file and its detected dependencies are present.

The following list describes the Excel tests:

  • UDF Service Loading Test: Verifies that the 32-bit and 64-bit container services can be loaded.

    Parameters: You can disable testing on the 32-bit service or on the 64-bit service. For example, if you are running the test on 32-bit nodes, you can disable 64-bit service testing to avoid expected test failures (the 64-bit test fails on 32-bit nodes). You can also specify an XLL file to verify that the XLL was installed successfully. This loads the XLL container service, checks for the presence of the specified XLL, and lists the UDFs that are in the XLL.

  • Excel Workbook Configuration Test: Verifies that Excel 2010 is installed and activated on the specified nodes, that the Excel Service can be loaded, and that the Excel Service can launch Excel. This test fails if Excel 2010 is not installed on the selected nodes.

As an example of how to run the tests, the following procedure walks through how to run the UDF Service Loading Test and configure the test parameters.

To run the UDF Service Loading Test

  1. In HPC Cluster Manager, click Diagnostics.

  2. In the Navigation Pane, expand Tests, expand Microsoft, and then select Excel. The view pane lists the available Excel tests.

  3. Right-click UDF Service Loading Test, and then click Run. This opens the Run Diagnostic Tests dialog box.

  4. Select the nodes to test. For example, you can click Nodes in this group, and then select a node group from the drop-down list.

  5. Click Configure Test Parameters, and then in XLL file name, type the name of the XLL file that you want to verify.

  6. Optionally, enable or disable 32-bit or 64-bit service testing as appropriate.

  7. Click Run.

  8. In the navigation pane, click Test Results, and then select the UDF Service Loading Test to see the results.
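The diagnostic tests can also be started from HPC PowerShell. The following is a sketch only: the test alias shown is an assumption, so list the real test names and aliases with Get-HpcTest before running it:

```powershell
# Sketch: run the UDF Service Loading Test from HPC PowerShell instead of
# HPC Cluster Manager. The "udfloading" alias is an assumption; check the
# actual alias in the Get-HpcTest output first.
Add-PSSnapin Microsoft.HPC
Get-HpcTest                       # lists the available tests and aliases
$nodes = Get-HpcNode -GroupName HaveExcelServices |
    ForEach-Object { $_.NetBiosName }
Invoke-HpcTest -Alias udfloading -Nodes $nodes
```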

The following screenshot illustrates the dialog box for running the Excel diagnostic tests in HPC Cluster Manager. In the screenshot, the Configure Test Parameters tab is selected, and an XLL file named myXLL has been specified as a parameter for the UDF Service Loading Test:

[Screenshot: HPC Services for Excel Diagnostic Test dialog box.]

UDF and workbook calculations run on the cluster as Service tasks. When a user first submits a calculation request from an Excel client, a job is submitted to the cluster with the Excel user’s credentials, and a session is started with a broker node. The session ID corresponds to the job ID. When the job starts running, the Excel client begins receiving calculation results. The session remains open until the user closes Excel, or until the session times out.

If the Excel job waits in the queue for a long time before it starts, the Excel user does not experience performance improvements in the speed of their spreadsheet calculations. You can optimize resource allocation for interactive service scheduling by configuring the HPC Job Scheduler to run in Balanced mode. In Balanced mode, the job scheduler attempts to start all incoming jobs as soon as possible at their minimum resource requirements. After all the jobs in the queue have their minimum resources, additional cluster resources are allocated to jobs based on their load and priority. Resource allocation is periodically rebalanced to fill idle resources and accommodate new jobs.

To run the job scheduler in Balanced mode

  1. In HPC Cluster Manager, in the Options menu, click Job Scheduler Configuration.

  2. In the Job Scheduler Configuration dialog box, select the Policy Configuration tab. (For more information about policy configuration, click the help link on the tab).

  3. In Scheduling Mode, select Balanced.

  4. Click OK to save your policy changes.
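The same policy change can be made from HPC PowerShell. The following sketch uses the Set-HpcClusterProperty cmdlet; verify the parameter name on your cluster with Get-Help Set-HpcClusterProperty:

```powershell
# Sketch: switch the HPC Job Scheduler to Balanced mode from HPC PowerShell
# (equivalent to the dialog-box procedure above).
Add-PSSnapin Microsoft.HPC
Set-HpcClusterProperty -SchedulingMode Balanced
Get-HpcClusterProperty            # review the SchedulingMode value
```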

Excel users can specify a head node and a job template when they submit offloading jobs to the cluster. You can create specific job templates to manage resource allocation, job priority levels, and cluster access. Each job template consists of a list of job properties and associated value settings (defaults and value constraints), and an access control list that defines which users have permission to use the job template to submit jobs.

To ensure that Excel jobs are routed to the correct nodes, you can require one or more node groups in the Node Groups job template property. Any job that is submitted with that template can run only on nodes that belong to all the listed node groups. For example, the following procedure describes how to create a job template for workbook offloading jobs. This job template lists two custom node groups that are named HaveExcelServices and HaveExcel2010 as required values for the Node Groups property.

To create a job template for workbook offloading jobs

  1. In Configuration, in the Navigation Pane, click Job Templates.

  2. In the Actions pane, click New. The Generate Job Template Wizard appears.

  3. Type a name for the template, for example: WorkbookOffloading 

  4. On the Limit Node Groups tab, click Allow only the selected node groups, and then select the node groups named HaveExcelServices and HaveExcel2010.

  5. To define additional property values and constraints, select the check box on the last page of the wizard to open the job template in the template editor.

  6. After configuring the desired properties, right-click the job template in the views pane, and then click Set Permissions. Add or remove group or user names as appropriate, and then click OK.


If a property is not specified in the job template, the HPC Job Scheduler Service applies the defaults and constraints from the Default job template.
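Job templates can also be managed from HPC PowerShell by exporting a template to XML, editing it, and importing it. The following is a sketch only; the parameter shapes are from memory, so verify them with Get-Help Export-HpcJobTemplate before use:

```powershell
# Sketch: build the WorkbookOffloading template by exporting the Default
# template, editing the XML, and importing it under the new name.
Add-PSSnapin Microsoft.HPC
Get-HpcJobTemplate -Name Default |
    Export-HpcJobTemplate -Path .\WorkbookOffloading.xml
# ...edit the XML: rename the template and require the HaveExcelServices
#    and HaveExcel2010 node groups...
Import-HpcJobTemplate -Path .\WorkbookOffloading.xml
```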