job submit

Submits the specified job to run on an HPC cluster.

For examples of how to use this command, see Examples.

Syntax

job submit /id:<jobID> [/password:<password>]   
[/user:[<domain>\]<user_name>] [/scheduler:<name>]   
[/holduntil:[{<DateTime>|<minutes>}]]  
  
job submit /jobfile:<path>\<file_name> [/password:<password>]   
[/user:[<domain>\]<user_name>] [/scheduler:<name>] [/holduntil:[{<DateTime>|<minutes>}]]  
  
job submit [/askednodes:<node_list>] [/corespernode:<min>[-<max>]]   
[/customproperties:<property_list>] [/emailaddress:<address>]  
[/estimatedprocessmemory:<memory>] [/env:<variable_and_value_list>]   
[/exclusive[:{true|false}]]   
[/faildependenttasks[:{true|false}]] [/failontaskfailure[:{true|false}]]   
[/holduntil:[{<DateTime>|<minutes>}]] [/jobenv:<variable_and_value_list>]   
[/jobname:<job_name>] [/jobtemplate:<job_template_name>]   
[/license:<license_list>] [/memorypernode:<min>[-<max>]]   
[/name:<task_name>] [/nodegroup:<node_group_list>] [/nodegroupop:{Intersect|Uniform|Union}]  
[/notifyoncompletion[:{true|false}]] [/notifyonstart[:{true|false}]]  
{[/numcores:<min>[-<max>]] | [/numnodes:<min>[-<max>]] |   
[/numprocessors:<min>[-<max>]] | [/numsockets:<min>[-<max>]]}   
[/orderby:<primary>[,<secondary>]] [/parametric:<index_specification>]   
[/parentjobids:<jobID_list>] [/password:<password>]   
[/priority:<priority>] [/progress:<percent_progress>]   
[/progressmsg:<message>] [/projectname:<name>] [/requestednodes:<node_list>]   
[/rerunnable[:{true|false}]] [/runtime:{<time> | Infinite}]   
[/rununtilcanceled[:{true|false}]] [/singlenode[:{true|false}]] [/stderr:[<path>\]<file_name>]   
[/stdin:[<path>\]<file_name>] [/stdout:[<path>\]<file_name>]   
[/taskexecutionfailureretrylimit:<retry_limit>]   
[/type:<type_name>] [/workdir:<folder>] [/password:<password>]   
[/user:[<domain>\]<user_name>] [/scheduler:<name>] <command> [<arguments>]  
[/validexitcodes:int|intStart..intEnd[,int|intStart..intEnd]*]  
  
job submit {/? | /help}  
  

Parameters

Parameter Description
/id:<jobID> Specifies the job identifier of the job that you want to submit. Use this parameter to submit a job that already exists and contains tasks. You can only submit jobs that are in the Configuring state.
/jobfile:<path>\<file_name> Specifies the file name and path for a job XML file that contains settings to use for the job that you want to submit. Use this parameter to create a new job with the settings in the job XML file and submit it immediately.
/askednodes:<node_list> Deprecated. Use the /requestednodes parameter instead.
/corespernode:<min>[-<max>] Specifies the minimum and, optionally, the maximum number of cores that a node can have for the HPC Job Scheduler Service to consider the node as a candidate node on which to run the job. The job will not run on a node that has fewer cores than the minimum value or more cores than the maximum value that this parameter specifies. If all of the nodes in the cluster have a number of cores that falls outside the range that you specify for this parameter, an error occurs when you submit the job.
/customproperties:<property_list> Specifies the custom properties of the job in a format of <name1>=<value1>[;<name2>=<value2>...]. Custom properties are case insensitive, and will reflect the case used when they were first defined.

This parameter can be used only with a single-task job; otherwise, the properties will not be visible. If the job requires multiple tasks, use this parameter with the job new command instead.

This parameter was introduced in HPC Pack 2012 and is not supported in previous versions.
/emailaddress:<address> Sends notifications for this job to this address.

This parameter was introduced in HPC Pack 2012 and is not supported in previous versions.
/env:<variable_and_value_list> Specifies a list of environment variables to set in the run-time environment of the task and the values to assign to those environment variables. The list should have a format of <variable_name1>=<value1> [;<variable_name2>=<value2>...].

Alternatively, you can set multiple environment variables by including multiple /env parameters. Each must be a different argument with a format of <variable_name>=<value>.

To unset an environment variable, do not specify a value. For example, <variable_to_unset_name>=.
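
For example, a command like the following (MyApp.exe and the variable names are placeholders) would set one variable and unset another by using two /env arguments:

job submit /env:LOG_LEVEL=2 /env:TEMP_OVERRIDE= MyApp.exe  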
/estimatedprocessmemory:<memory> The maximum amount of memory in megabytes (MB) that each process in this job is estimated to consume.
/exclusive[:{true|false}] Specifies whether the HPC Job Scheduler Service should ensure that no other job runs on the same node as this job while this job runs.

A value of True indicates that the HPC Job Scheduler Service should ensure that no other job runs on the same node while this job runs.

A value of False indicates that this job can share compute nodes with other jobs.

When you specify the /exclusive parameter without a value, the job submit command behaves as if you specified a value of True. If you do not specify the /exclusive parameter, the job submit command behaves as if you specified a value of False.
/faildependenttasks[:{true|false}] Specifies that if a task fails or is canceled, all dependent tasks will fail.

If /faildependenttasks is declared but no value is given, True is assumed. If /faildependenttasks is not declared, False is assumed.

This parameter was introduced in HPC Pack 2012 and is not supported in previous versions.
/failontaskfailure[:{true|false}] Specifies whether the HPC Job Scheduler Service should stop the job and fail the entire job immediately when a task in the job fails.

A value of True indicates that the HPC Job Scheduler Service should stop the job and fail the entire job immediately when a task in the job fails.

A value of False indicates that the HPC Job Scheduler Service should continue running the rest of the tasks in the job after any task in the job fails.

When you specify the /failontaskfailure parameter without a value, the job submit command behaves as if you specified a value of True. If you do not specify the /failontaskfailure parameter, the job submit command behaves as if you specified a value of False.
/holduntil:[{<DateTime>|<minutes>}] Specifies the date and time in local time or number of minutes until which the HPC Job Scheduler Service should wait before trying to start the job. If this parameter is not set, the job can start when resources are available.

The HPC Job Scheduler Service only runs the job at the date and time that this parameter specifies if the resources needed for the job are available. If the resources needed for the job are not available at that date and time, the job remains queued until the necessary resources become available.

You can specify the date and time in any format that the .NET Framework can parse for the current operating system culture. For information about how the .NET Framework parses date and time strings, see Parsing Date and Time Strings.

You can specify the /holduntil parameter for a job as long as the job is not running or completed.

The time specified using /holduntil is converted internally to UTC, and will not reflect local Daylight Savings Time.

If the minutes value is used, it must be an integer. The minutes to hold are converted to UTC at the time that the job submit command is applied.

If the value for /holduntil is empty, any current holduntil value is erased and the job is no longer pending due to that parameter.
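
For example, commands like the following (MyApp.exe is a placeholder application, and the date string is only one of the formats that .NET can parse) would hold the job until a specific local date and time, or for 90 minutes:

job submit /holduntil:"2025-04-01 18:00" MyApp.exe  
job submit /holduntil:90 MyApp.exe  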

This parameter was introduced in HPC Pack 2012 and is not supported in previous versions.
/jobenv:<variable_and_value_list> Specifies the environment variables that you want to set in the run-time environment of the job and the values to which you want to set those environment variables. The list should have a format of <variable_name1>=<value1> [;<variable_name2>=<value2>...].

Alternatively, you can set multiple environment variables by including multiple /jobenv parameters, each with a different argument with the format <variable_name>=<value>.

To unset an environment variable, do not specify a value. For example, <variable_to_unset_name>=.

If you set or unset an environment variable for a job, that environment variable is also set or unset for each task in the job unless you override that environment variable setting for the task by specifying a new setting with the /env parameter.
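
For example, a command like the following (the application name, variable name, and paths are placeholders) would set a job-wide variable and then override it for the task by using the /env parameter:

job submit /jobenv:DATA_ROOT=D:\Data /env:DATA_ROOT=D:\Scratch MyApp.exe  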

This parameter was introduced in HPC Pack 2008 R2 and is not supported in previous versions.
/jobname:<job_name> Specifies a name to use for this job in command output and in the user interface.

The maximum length for the name of a job is 80 characters.
/jobtemplate:<job_template_name> Specifies the name of the job template to use for the job.

The maximum length for the name of a job template is 80 characters. By default, the job submit command uses the Default job template for the new job.
/license:<license_list> Specifies a list of features for which the job requires licenses, and the number of licenses required for each. Use a format of <license_name1>:<number1>[,<license_name2>:<number2>…] for this list. Each <number> can be any positive integer, or an asterisk (*), which requests the same number of licenses as the number of cores, sockets, or nodes assigned to the job.

For example, /license:license1:10,license2:* requests 10 licenses of the license1 feature and N licenses of the license2 feature, where N is the number of cores, nodes, or sockets associated with the job being submitted.

The list has a maximum length of 160 characters.
/memorypernode:<min>[-<max>] Specifies the minimum and, optionally, the maximum amount of memory in megabytes (MB) that a node can have for the HPC Job Scheduler Service to consider the node as a candidate node on which to run the job. The job will not run on a node that has less memory than the minimum value or more memory than the maximum value that this parameter specifies. If all of the nodes in the cluster have an amount of memory that falls outside the range that you specify for this parameter, an error occurs when you submit the job.
/name:<task_name> Specifies a name to use for this task in command output and in the user interface.

The maximum length for the name of a task is 80 characters.
/nodegroup:<node_group_list> Specifies the list of node groups on which this job can run in the format <node_group1_name>[,<node_group2_name>…]. The HPC Job Scheduler Service allocates resources to the job from nodes that belong to all of the node groups in the list by default, or to the nodes resulting from the operation of the /nodegroupop parameter, if specified, on the list of groups.

If you specify values for the /nodegroup and the /requestednodes parameters, the job runs only on the nodes in the list for the /requestednodes parameter that also belong to the set of nodes defined by the /nodegroup and /nodegroupop parameters.

The /nodegroup parameter ensures that the job is allocated nodes only from the resulting node list. If the resource requirements of the job cannot be met from within that node list, the job fails during submission.

Similarly, if you specify the /requestednodes parameter and the requested nodes are not among the nodes defined by the /nodegroup and /nodegroupop parameters, the job fails during submission.
/nodegroupop:{Intersect|Uniform|Union} Specifies the operator for the list specified by the /nodegroup parameter. Valid values are:

Intersect - Creates the list of nodes that are in all of the listed node groups.

Uniform - Causes the HPC Job Scheduler Service to try the node groups in order. If there are enough resources within the first node group, they are used. If not, the Scheduler tries each following node group until it finds one with enough resources. If enough resources are not found, the job remains queued.

Union - Creates the list of nodes that are in any of the node groups.

The default value for this parameter is Intersect.
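
For example, a command like the following (the node group names and application name are placeholders for groups and programs defined on your cluster) would make the HPC Job Scheduler Service try the GpuNodes group first and fall back to the BigMemNodes group only if GpuNodes does not have enough resources:

job submit /nodegroup:GpuNodes,BigMemNodes /nodegroupop:Uniform /numcores:16 MyApp.exe  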

This parameter was introduced in HPC Pack 2012 and is not supported in previous versions.
/notifyoncompletion[:{true|false}] Specifies whether or not the HPC Job Scheduler Service should send email notification when the job ends.

A value of True indicates that the HPC Job Scheduler Service should send email notification when the job ends.

A value of False indicates that the HPC Job Scheduler Service should not send email notification when the job ends.

A job ends and notification is sent when the state of the job changes to Finished, Failed, or Canceled.

A cluster administrator must configure notification for the HPC cluster before you can receive notification about a job.

When you specify the /notifyoncompletion parameter without a value, the job submit command behaves as if you specified a value of True. If you do not specify the /notifyoncompletion parameter, the job submit command behaves as if you specified a value of False.

This parameter was introduced in HPC Pack 2008 R2 and is not supported in previous versions.
/notifyonstart[:{true|false}] Specifies whether or not the HPC Job Scheduler Service should send email notification when the job starts.

A value of True indicates that the HPC Job Scheduler Service should send email notification when the job starts.

A value of False indicates that the HPC Job Scheduler Service should not send email notification when the job starts.

A cluster administrator must configure notification for the HPC cluster before you can receive notification about a job.

When you specify the /notifyonstart parameter without a value, the job submit command behaves as if you specified a value of True. If you do not specify the /notifyonstart parameter, the job submit command behaves as if you specified a value of False.

This parameter was introduced in HPC Pack 2008 R2 and is not supported in previous versions.
/numcores:<min>[-<max>] Specifies the overall number of cores across the HPC cluster that the job requires in the format <minimum>[-<maximum>]. The job runs on at least the minimum number of cores and on no more than the maximum.

If you specify only one value, this command sets the maximum and minimum number of cores to that value.

If you specify a minimum value that exceeds the total number of cores available across the cluster, an error occurs when you submit the job.

The minimum and maximum values can only be positive integers or an asterisk (*). If you specify the minimum or maximum value as an asterisk, the HPC Job Scheduler Service automatically calculates the minimum or maximum number of cores at run time based on the minimum and maximum number of cores for the tasks in the job.

You cannot specify the /numcores parameter if you also specify the /numnodes, /numprocessors, or /numsockets parameter.
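
For example, the first of the following commands (MyApp.exe is a placeholder application) would request at least 8 and at most 16 cores, and the second would let the HPC Job Scheduler Service calculate the number of cores from the tasks in the job:

job submit /numcores:8-16 MyApp.exe  
job submit /numcores:* MyApp.exe  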
/numnodes:<min>[-<max>] Specifies the overall number of nodes across the HPC cluster that the job requires in the format <minimum>[-<maximum>]. The job runs on at least the minimum number of nodes and on no more than the maximum.

If you specify only one value, this command sets both the maximum and minimum number of nodes to that value.

If you specify a minimum value that exceeds the total number of nodes available across the cluster, an error occurs when you submit the job.

The minimum and maximum values can only be positive integers or an asterisk (*). If you specify the minimum or maximum value as an asterisk, the HPC Job Scheduler Service automatically calculates the minimum or maximum number of nodes at run time based on the minimum and maximum number of nodes for the tasks in the job.

You cannot specify the /numnodes parameter if you also specify the /numcores, /numprocessors, or /numsockets parameter.
/numprocessors:<min>[-<max>] Deprecated. Use the /numcores parameter instead.
/numsockets:<min>[-<max>] Specifies the overall number of sockets across the HPC cluster that the job requires in the format <minimum>[-<maximum>]. The job runs on at least the minimum number of sockets and on no more than the maximum.

If you specify only one value, this command sets both the maximum and minimum number of sockets to that value.

If you specify a minimum value that exceeds the total number of sockets available across the cluster, an error occurs when you submit the job.

The minimum and maximum values can only be positive integers or an asterisk (*). If you specify the minimum or maximum value as an asterisk, the HPC Job Scheduler Service automatically calculates the minimum or maximum number of sockets at run time based on the minimum and maximum number of sockets for the tasks in the job.

You cannot specify the /numsockets parameter if you also specify the /numcores, /numprocessors, or /numnodes parameter.
/orderby:<primary>[,<secondary>] Specifies the order that the HPC Job Scheduler Service should use to allocate nodes to the job in the format <primary_order>[,<secondary_order>]. The primary_order and secondary_order portions of the value can each be one of the following values:

memory - The HPC Job Scheduler Service sorts the nodes by the amount of memory they have available and allocates the job to nodes with more memory first.

-memory - The HPC Job Scheduler Service sorts the nodes by the amount of memory they have available and allocates the job to nodes with less memory first.

cores - The HPC Job Scheduler Service sorts the nodes by the number of cores they have available and allocates the job to nodes with more cores first.

-cores - The HPC Job Scheduler Service sorts the nodes by the number of cores they have available and allocates the job to nodes with fewer cores first.

When you specify a secondary order, the HPC Job Scheduler Service sorts the nodes according to the primary order first. For subsets of nodes that have the same amount of the resource that the primary order specifies, the HPC Job Scheduler Service then sorts the nodes within the subset using the secondary sort order. For example, if you specify memory,-cores, the HPC Job Scheduler Service sorts the nodes from the highest amount of memory to the lowest. Then, for subsets of nodes that have the same amount of memory, the HPC Job Scheduler Service uses the number of cores to break the tie, and sorts the nodes that have the same amount of memory from the fewest number of cores to the most.

The primary order and secondary order must refer to different types of resources. For example, memory,-cores is a valid combination of primary and secondary sort orders. Combinations such as memory,-memory and -cores,-cores are not valid.

The default order that the HPC Job Scheduler Service uses to allocate nodes to a job is cores,memory.
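
For example, a command like the following (MyApp.exe is a placeholder application) would allocate the nodes with the most available memory first and, among nodes with the same amount of memory, prefer nodes with fewer cores available:

job submit /orderby:memory,-cores /numnodes:4 MyApp.exe  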
/parametric:<index_specification> Indicates that the new task is a parametric task. A parametric task runs the specified command multiple times, substituting the current index value for any asterisks (*) in the command line. The asterisk is also substituted when specified in the /stdin, /stdout, and /stderr parameters.

The index specification for this parameter defines the behavior of the index value. The format for the index specification is [<start>-]<end>[:<increment>]. The current index value starts at the starting index, and increases by the increment value each subsequent time that the command runs. When the current index exceeds the ending index, the task stops running the command. The starting index must be less than the ending index, and the increment value must be a positive integer.
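
For example, a command like the following (MyApp.exe and the data file name pattern are placeholders) would run the application with index values 1, 3, 5, 7, and 9, substituting the current index for the asterisk in the file name:

job submit /parametric:1-10:2 MyApp.exe data*.txt  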
/parentjobids:<jobID_list> Specifies the list of job IDs that the job will depend on in a format of <jobID1>[,<jobID2>...].

The job IDs must already exist.

The HPC Job Scheduler Service will schedule the job only when its parent jobs have completed and are all in a Finished state. If any parent job has not completed or has completed but is in a Canceled or Failed state, the job remains queued.
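
For example, a command like the following (the job IDs and application name are placeholders) would submit a job that does not start until jobs 102 and 103 both finish successfully:

job submit /parentjobids:102,103 MyApp.exe  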

This parameter was introduced in HPC Pack 2012 and is not supported in previous versions.
/password:<password> Specifies the password for the account under which the job should run. If you specify the /user parameter but not the /password parameter, the job submit command prompts you for the password and whether to store the password.
/priority:<priority> Specifies the priority for scheduling the job. For Windows HPC Server 2008, the priority value can only be one of the following named values: Highest, AboveNormal, Normal, BelowNormal, or Lowest.

For Windows HPC Server 2008 R2, you can use any of the five named priority values that you could use in Windows HPC Server 2008. You can also use any number between 0 and 4000, with 0 as the lowest priority and 4000 as the highest. You can also specify the priority value as named_value+offset or named_value-offset. For the purpose of these final formats, the named priorities have the values in the following table, and the combination of the named value and offsets cannot be less than 0 or greater than 4000.

Highest - 4000

AboveNormal - 3000

Normal - 2000

BelowNormal - 1000

Lowest - 0

The job template that the job uses specifies permissions that affect who can specify elevated priorities.

The HPC Job Scheduler Service places jobs with the same priority into the job queue in the order that users submit the jobs, unless a user requeues a job. If a user requeues a job, the HPC Job Scheduler Service places that job first among the jobs with the same priority.

The default priority for a job is Normal or 2000.
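
For example, both of the following commands (MyApp.exe is a placeholder application) would submit a job at a priority of 2500:

job submit /priority:2500 MyApp.exe  
job submit /priority:Normal+500 MyApp.exe  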
/progress:<percent_progress> Specifies the percentage of the job that is complete. This value must be between 0 and 100.

If you do not set the value of this property, the HPC Job Scheduler Service calculates the progress based on the percentage of tasks that are complete for the job. When you set this property for a job, the HPC Job Scheduler Service does not continue to update this property, so you must continue to update the property by using the job modify command.

This parameter was introduced in HPC Pack 2008 R2 and is not supported in previous versions.
/progressmsg:<message> Specifies a custom status message that you want to display for the job. The maximum length for this string is 80 characters.

To specify a status message that includes spaces, enclose the status message in quotation marks (").

This parameter was introduced in HPC Pack 2008 R2 and is not supported in previous versions.
/projectname:<name> Specifies a project name for the job that you can use for tracking jobs.

The maximum length for the project name is 80 characters.
/requestednodes:<node_list> Specifies a list of names for the nodes on which the job can run, in a format of <node1_name>[,<node2_name>…]. These nodes are candidates for the job, but not all of the nodes will necessarily run the job if the available resources on these nodes exceed the resources that the job requires. The HPC Job Scheduler Service allocates the top nodes according to the value of the /orderby parameter until the allocated nodes meet the value that you specified with the /numcores, /numsockets, /numprocessors, or /numnodes parameter.

If you do not specify the /requestednodes parameter, the HPC Job Scheduler Service considers all nodes as candidates that the HPC Job Scheduler Service can allocate to the job.

If you specify values for the /requestednodes and the /nodegroup parameters, the job runs only on the nodes in the list for the /requestednodes parameter that also belong to the set of nodes defined by the /nodegroup and /nodegroupop parameters.
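
For example, a command like the following (the node names and application name are placeholders) would restrict the job to two named compute nodes and let it run on one or both of them:

job submit /requestednodes:ComputeNode01,ComputeNode02 /numnodes:1-2 MyApp.exe  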
/rerunnable[:{true|false}] Specifies whether the HPC Job Scheduler Service attempts to rerun the task if the task runs and fails.

A value of True indicates that the HPC Job Scheduler Service can attempt to rerun the task if the task is preempted or if it fails due to a cluster issue, such as a node becoming unreachable. The job scheduler does not attempt to rerun tasks that run to completion and return an unsuccessful exit code.

A value of False indicates that the HPC Job Scheduler Service should not attempt to rerun the task if the task begins but does not complete due to preemption or cluster issues. Instead it should move the task to the failed state immediately.

The cluster administrator can configure the number of times that the HPC Job Scheduler Service tries to rerun a rerunnable task before moving the task to the failed state.

If you do not specify the /rerunnable parameter, the command behaves as if you specified a value of True.
/runtime:{<time> | Infinite} Specifies the maximum amount of time the job should run. After the job runs for this amount of time, the HPC Job Scheduler Service cancels the job. You specify the amount of time in the format [[<days>:]<hours>:]<minutes>. You can also specify Infinite to indicate that the job can run for an unlimited amount of time.

If you specify only one part of the [[<days>:]<hours>:]<minutes> format, the command interprets the specified value as the number of minutes. For example, 12 indicates 12 minutes.

If you specify two parts of the format, the command interprets the left part as hours and the right part as minutes. For example, 10:30 indicates 10 hours and 30 minutes.

You can use one or more digits for each part of the format. The maximum value for each part is 2,147,483,647. If you do not specify the /runtime parameter, the default value is Infinite.
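
For example, a command like the following (MyApp.exe is a placeholder application) would cancel the job if it runs for longer than 1 day, 6 hours, and 30 minutes:

job submit /runtime:1:6:30 MyApp.exe  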
/rununtilcanceled[:{true|false}] Specifies whether the job continues to run and hold resources until the run-time limit expires or someone cancels the job.

A value of True indicates that the job continues to run and hold resources until the run-time limit expires or someone cancels the job. If you specify a value of True, you must specify minimum and maximum values for the /numcores, /numnodes, or /numsockets parameter, or an error occurs when you submit the job.

A value of False indicates that the job should stop and release its resources when all of the tasks in the job are complete.

When you specify the /rununtilcanceled parameter without a value, the job submit command behaves as if you specified a value of True. If you do not specify the /rununtilcanceled parameter, the job submit command behaves as if you specified a value of False.
/scheduler:<name> Specifies the host name or IP address of the head node for the cluster to which you want to submit the job. The value must be a valid computer name or IP address. If you do not specify the /scheduler parameter, this command uses the scheduler on the head node that the CCP_SCHEDULER environment variable specifies.
/singlenode[:{true|false}] Specifies that all resources will be allocated on one node.

If /singlenode is declared but no value is given, True is assumed. If /singlenode is not declared, False is assumed.

This parameter was introduced in HPC Pack 2012 and is not supported in previous versions.
/stderr:[<path>\]<file_name> Specifies the name of the file to which the task should redirect the standard error stream. Include the full path, or a path relative to the working directory, if the file should not be located in the working directory. If you specify a path that does not exist, the task fails.

If you do not specify the /stderr parameter, the task stores up to 4 kilobytes (KB) of output in the Output property for the task in the database for the HPC Job Scheduler Service. Any output that exceeds 4 KB is lost.

The maximum length of value for this parameter is 160 characters.
/stdin:[<path>\]<file_name> Specifies the name of the file from which the task should read standard input. Include the full path, or a path relative to the working directory, if the file is not located in the working directory. If you specify a file or path that does not exist, the task fails.

The maximum length of value for this parameter is 160 characters.
/stdout:[<path>\]<file_name> Specifies the name of the file to which the task should redirect standard output. Include the full path, or a path relative to the working directory, if the file should not be located in the working directory. If you specify a path that does not exist, the task fails.

If you do not specify the /stdout parameter, the task stores up to 4 kilobytes (KB) of output in the Output property for the task in the database for the HPC Job Scheduler Service. Any output that exceeds 4 KB is lost.

The maximum length of value for this parameter is 160 characters.
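
For example, a command like the following (the share path and application name are placeholders) would redirect standard output and standard error to files on a share that the compute nodes can reach:

job submit /stdout:\\HeadNode\Results\myjob.out /stderr:\\HeadNode\Results\myjob.err MyApp.exe  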
/taskexecutionfailureretrylimit:<retry_limit> Specifies the maximum number of times a task in this job other than a node preparation or node release task will be automatically requeued after an application execution failure occurs.

This parameter was introduced in HPC Pack 2012 R2 Update 1. It is not available in previous versions.
/type:<type_name> Specifies a type for the task, which defines how to run the command for the task. The following are the types that you can specify:

Basic -
Runs a single instance of a serial application or a Message Passing Interface (MPI) application. An MPI application typically runs concurrently on multiple cores and can span multiple nodes.

NodePrep -
Runs a command or script on each compute node as it is allocated to the job. The Node Preparation task runs on a node before any other task in the job. If the Node Preparation task fails to run on a node, that node is not added to the job.

NodeRelease -
Runs a command or script on each compute node as it is released from the job. Node Release tasks run when the job is canceled by the user or by graceful preemption. Node Release tasks do not run when the job is canceled by immediate preemption.

ParametricSweep -
Runs a command a specified number of times as indicated by the Start, End, and Increment values, generally across indexed input and output files. The steps of the sweep may or may not run in parallel, depending on the resources that are available on the HPC cluster when the task is running. When you specify the ParametricSweep type, you should use the /parametric parameter to specify the start, end, and increment values for the index. If you do not use the /parametric parameter, the command runs once with an index of 0.

Service -
Runs a command or service on all the resources that are assigned to the job. New instances of the command start when the new resources are added to the job, or if a previously running instance exits and the resource that the previously running instance used is still allocated to the job. A service task continues to start new instances until the task is canceled, the maximum run time expires, or the maximum number of instances is reached. A service task can create up to 1,000,000 subtasks. Tasks that you submit through a service-oriented architecture (SOA) client run as service tasks. You cannot add a basic task or a parametric sweep task to a job that contains a service task.

The default value for this parameter is Basic unless you also specify the /parametric parameter. If you specify the /parametric parameter, the default value of the /type parameter is ParametricSweep.

If you specify the /type parameter with a value other than ParametricSweep, you cannot also specify the /parametric parameter.
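
For example, a command like the following (MyApp.exe is a placeholder MPI application) would submit a Basic task that runs the application under mpiexec on 16 cores:

job submit /type:Basic /numcores:16 mpiexec MyApp.exe  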

This parameter was introduced in HPC Pack 2008 R2 and is not supported in previous versions.
/workdir:<folder> Specifies the working directory under which the task should run. The maximum length of value for this parameter is 160 characters.
<command> [<arguments>] Specifies the command line for the task, including the command or application name and any necessary arguments.

Unless the command is defined within a job XML file that you specify with the /jobfile parameter, a command must be specified for the task to be added. The command runs relative to the working directory unless it contains a fully qualified path.
/user:[<domain>\]<user_name> Specifies the user name and, optionally, the domain of the account under which the job should run. If you do not specify this parameter, the job runs under the account used to submit the job.
/validexitcodes:{int|intStart..intEnd}[,{int|intStart..intEnd}]* Specifies the exit codes to be used for checking whether tasks in the job exited successfully. Specify the exit codes as discrete integers and integer ranges separated by commas.

min or max can be used as the start or end of an integer range. For example, 0..max represents all nonnegative integers.

This parameter can be overridden by declaring the /validexitcodes parameter specific to a task. All tasks that do not have this parameter explicitly declared will inherit the parameter from the job.

If /validexitcodes is not defined, 0 is the default valid exit code.
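
For example, a command like the following (MyApp.exe is a placeholder application) would treat 0, 1, and any exit code from 100 through 110 as a successful exit:

job submit /validexitcodes:0,1,100..110 MyApp.exe  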
/? Displays Help at the command prompt.
/help Displays Help at the command prompt.

Remarks

  • The job submit command has three major forms. The /password, /user, and /scheduler parameters are common to all three forms.

  • The first form of the command also includes the /id parameter, and submits the existing job that has the specified job identifier.

  • The second form of the command includes the /jobfile parameter in addition to the common parameters. This form creates a new job with the settings and tasks that the job file specifies, and immediately submits that job.

  • The third form of the command includes the remaining parameters, except for those that display a Help message at the command prompt. This form of the command creates a new job with a single task to run the specified command, and then submits that job.

  • After you submit the job, the HPC Job Scheduler Service validates the job and enters the job in the job queue. The HPC Job Scheduler Service waits to start the job until sufficient resources become available.

  • Starting in Windows HPC Server 2008 R2, you can specify that asterisks (*) in the command line for a parametric task should not be replaced with the current value of the parametric index by preceding the asterisk with three caret (^) characters. For example, if you use the job submit command to submit a job with one parametric task and specify the command line as echo *, the task prints the values of the parametric index. If you instead use the job add command to create a parametric task and specify the command line as echo ^^^*, the task prints an asterisk for each value of the parametric index. A command that illustrates the first case follows this list.
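
For example, the following command (the index range is arbitrary) submits a parametric job whose task prints the current index value at each of five steps:

job submit /parametric:1-5 echo *  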

Examples

To submit the job with a job identifier of 38, use the following command:

job submit /id:38  

To create a new job by using the settings in the file at C:\Jobs\MyJob.xml and then immediately submit the job, use the following command:

job submit /jobfile:C:\Jobs\MyJob.xml  

To create and submit a new single-task job with a job type of Basic, which sends email notification when the job starts, and has a status message of "Submitted", and which runs the vol command, use the following command:

job submit /notifyonstart:true /progressmsg:"Submitted" /type:Basic vol  

To create a new job, add a task to the job, and then submit the job, run a batch file that includes the following command, which uses the for command to get the identifier of the new job:

for /f "usebackq tokens=4 delims=. " %%i in (`job new`) do (  
job add %%i echo Hello World  
job submit /id:%%i  
)  

Additional references