Understanding Job and Task Properties

Applies To: Windows HPC Server 2008

The following sections list all of the job and task properties that you can set. These properties define how jobs and tasks run.

Default values for properties are defined by the job template.

  • Job properties

  • Task properties

Job properties

Job ID

The numeric ID of the job. The HPC Job Scheduler Service assigns this number when a job is created.

Job name

The name of the job.

Template

The name of the job template used to submit the job. A job template is a custom submission policy created by the cluster administrator to define the job parameters for an application. For more information, see Job Templates.

Project

The name of the project to which the job belongs.

Priority

The priority of the job. The value options are:

  • Lowest

  • BelowNormal

  • Normal

  • AboveNormal

  • Highest

Run time

The amount of time (dd:hh:mm) the job is allowed to run. If the job is still running after the specified run time is reached, it is automatically canceled by the HPC Job Scheduler Service.

Run until canceled

If True, the job runs until it is canceled or until its run time expires. It does not stop when there are no tasks remaining.

Fail on task failure

If True, the failure of any task in the job causes the entire job to fail immediately.

Number of cores

The number of cores required by the job. You can set minimum and maximum values, or select Auto calculate to have the HPC Job Scheduler Service automatically calculate the minimum and maximum number of required cores based on the job’s tasks.

Number of sockets

The number of sockets required by the job. You can set minimum and maximum values, or select Auto calculate to have the HPC Job Scheduler Service automatically calculate the minimum and maximum number of required sockets based on the job’s tasks.

Number of nodes

The number of nodes required by the job. You can set minimum and maximum values, or select Auto calculate to have the HPC Job Scheduler Service automatically calculate the minimum and maximum number of required nodes based on the job’s tasks.

Exclusive

If True, no other jobs can run on a compute node at the same time as this job.

Node groups

A list of node groups. The job can only run on nodes that are members of all listed groups. For example, if you list the groups “Have Application X” and “Have Big Memory”, the node must belong to both groups.

Requested nodes

A list of nodes. The job can only run on nodes that are in this list.

Memory

The minimum amount of memory (in MB) that must be present on any node that the job is run on.

Cores per node

The minimum number of cores that must be present on any node that the job is run on.

Node ordering

The order to use when selecting nodes for the job. This property gives preference to nodes based on their available memory or core resources. The value options are:

  • More memory

  • More cores

  • Less memory

  • Less cores

Licenses

A list of licenses that are required for the job. Values in this list can be validated by a job activation filter that is defined by the cluster administrator.

Preemptable

If True, the job can be preempted by a higher priority job. If False, the job cannot be preempted.
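
As an example of how these properties are used, the following commands sketch one way to create and submit a job from a command prompt by using the job command-line tool. The option names shown here are assumptions based on the properties described above; run job new /? on your cluster to confirm the exact options and syntax that your version supports.

  REM Create a job named NightlyRun that requests 4 to 8 cores, runs at
  REM AboveNormal priority, is limited to 12 hours of run time, and can only
  REM use nodes in the "Have Big Memory" node group (option names assumed).
  job new /jobname:NightlyRun /priority:AboveNormal /numcores:4-8 /runtime:0:12:00 /nodegroup:"Have Big Memory"

  REM Add a task to the job and submit it. Replace <jobID> with the job ID
  REM that the job new command returns.
  job add <jobID> myapp.exe
  job submit /id:<jobID>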

Task properties

Task ID

The numeric ID of the task. The HPC Job Scheduler Service assigns this number when a task is created.

Task name

The name of the task.

Command line

The command that runs for the task. The path to the executable file is relative to the working directory for the task. For more information, see Understanding Application and Data Files.

Parallel tasks that use Microsoft Message Passing Interface (MS-MPI) must be started through the mpiexec command, so the command line for a parallel task must be in the following format: mpiexec [mpi_options] <myapp.exe> [arguments], where myapp.exe is the name of the application to run.
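
For example, a parallel task that runs a hypothetical MPI application named solver.exe might use one of the following command lines. When a task is started by the HPC Job Scheduler Service, mpiexec typically starts one process on each core that is allocated to the task; the -n option can be used to request a specific number of processes instead.

  REM Start one MPI process per allocated core (typical case).
  mpiexec solver.exe input.dat

  REM Explicitly request 8 MPI processes.
  mpiexec -n 8 solver.exe input.dat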

Working directory

The working directory to be used while the task runs. For more information, see Understanding Application and Data Files.

Standard input

The path (relative to the working directory for the task) to the file from which the input of the task should be read. For more information, see Understanding Application and Data Files.

Standard output

The path (relative to the working directory for the task) to the file to which the output of the task should be written. For more information, see Understanding Application and Data Files.

Standard error

The path (relative to the working directory for the task) to the file to which the errors of the task should be written. For more information, see Understanding Application and Data Files.

Number of cores

The number of cores required by the task. You can set minimum and maximum values for this property.

Exclusive

If True, no other tasks can be run on a compute node at the same time as the task.

Rerunnable

If True, the HPC Job Scheduler Service attempts to rerun the task if the task fails. If False, the task fails after the first failed run attempt.

Run time

The amount of time (dd:hh:mm) the task is allowed to run. If the task is still running after the specified run time is reached, it is automatically canceled by the HPC Job Scheduler Service.

Environment variables

Specifies the environment variables to set in the task's run-time environment. Specify each variable in the form name=value, and separate multiple variables with commas.
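
For example, if the application that the task runs reads two hypothetical variables named INPUT_DIR and ITERATIONS, the value of this property would look like the following:

  INPUT_DIR=\\headnode\data\run7,ITERATIONS=500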

Required nodes

Lists the nodes that must be assigned to the task and its job in order for the task to run. Each node in this list is entirely assigned to this task. That is, if a node has eight cores, all eight cores are assigned to the task.

Sweep start index

The starting index for a parametric sweep task. The index can apply to the instances of your application, to your working directory, and to your input, output, and error files, if specified. For the index to be applied, you must include the wildcard character (*) in the command line and in the file names. For example: myTask.exe * and myInput*.dat.

Sweep end index

The ending index for a parametric sweep task. The index can apply to the instances of your application, to your working directory, and to your input, output, and error files, if specified. For the index to be applied, you must include the wildcard character (*) in the command line and in the file names. For example: myTask.exe * and myInput*.dat.

Sweep increment

The amount to increment the parametric sweep index at each step of the sweep. The index can apply to the instances of your application, to your working directory, and to your input, output, and error files, if specified. For the index to be applied, you must include the wildcard character (*) in the command line and in the file names. For example: myTask.exe * and myInput*.dat.
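
For example, consider a sweep task with a start index of 1, an end index of 5, an increment of 2, and the command line myTask.exe myInput*.dat (hypothetical names). The HPC Job Scheduler Service replaces each wildcard character with the current index value, so the sweep runs three instances of the command:

  myTask.exe myInput1.dat
  myTask.exe myInput3.dat
  myTask.exe myInput5.dat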

Additional references