
Appendix: Sample Commands

Updated: May 20, 2009

Applies To: Windows HPC Server 2008

The following tables provide sample commands for submitting a variety of jobs, and explain how the HPC Job Scheduler Service runs each job.

Sample commands for simple and parametric jobs

Command: job submit myapp.exe
Result: Runs myapp.exe on a single processor. Output that is not redirected to a file is available in the Output property of the task.

Command: job submit /workdir:\\headnode\MyFolder /stdout:data.out /stdin:data.in myapp.exe
Result: Runs myapp.exe on a single processor, using a file share on the head node as the working directory. Standard input is read from data.in and standard output is written to data.out in that directory.

Command: job submit /numcores:4-8 /parametric:100 /stdin:input*.dat myapp.exe
Result: Runs between four and eight simultaneous instances of myapp.exe. myapp.exe is executed 100 times, reading input1.dat, input2.dat, input3.dat, and so on through input100.dat. The asterisk in the file name is replaced by the instance number of each step in the parametric sweep.

Command: job submit /exclusive:true /requestednodes:MyNode01,MyNode02 myapp.exe
Result: Runs myapp.exe on either MyNode01 or MyNode02. Because the job is exclusive, no other job can run on that node while this job is running.

Command: job submit /nodegroups:TestGroup myapp.exe
Result: Runs myapp.exe on a node in the node group named TestGroup.
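The parametric sweep above depends on the scheduler substituting each instance number for the asterisk in /stdin:input*.dat. A minimal Python sketch of that substitution (the helper name expand_parametric is illustrative only, not part of any HPC API):

```python
def expand_parametric(pattern, count):
    """Replace the asterisk placeholder with each instance number,
    mimicking how a /parametric:count sweep expands /stdin:input*.dat."""
    return [pattern.replace("*", str(i)) for i in range(1, count + 1)]

# A /parametric:100 sweep over input*.dat produces 100 input file names.
files = expand_parametric("input*.dat", 100)
print(files[0])    # input1.dat
print(files[-1])   # input100.dat
```

Each of the 100 task instances then reads its own input file, and between four and eight of them run at a time within the /numcores:4-8 allocation.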

Sample commands for MPI jobs

Command: job submit /numcores:6 /stdout:myOutput.txt mpiexec myapp.exe
Result: Runs myapp.exe on six processors, with standard output redirected to the file myOutput.txt in the user's home directory on the head node (as defined by the %USERPROFILE% environment variable on the head node).

Command: job submit /numnodes:2 mpiexec myapp.exe
Result: Runs one myapp.exe process on two compute nodes. The standard output is stored in the Output field for the task in the HPC Job Scheduler Service database.

Command: job submit /numnodes:2 /requestednodes:Node1,Node2 mpiexec myapp.exe
Result: Runs one myapp.exe process on two compute nodes: Node1 and Node2.

Command: job submit /numnodes:16 mpiexec -cores 4 myapp.exe
Result: Runs four myapp.exe processes on each of the 16 nodes assigned to the job, regardless of the number of physical cores on each node.

Command: job submit /numcores:24 /workdir:\\headnode\MyFolder mpiexec myapp.exe
Result: Specifies a single, shared working directory for all 24 myapp.exe processes in the job.

Command: job submit /numcores:128 mpiexec -env MPICH_NETMASK 157.59.0.0/255.255.0.0 myapp.exe
Result: Runs 128 myapp.exe processes and routes the Message Passing Interface (MPI) data traffic to a specific network in the cluster (157.59.x.x with mask 255.255.0.0 in this example) by setting the MPICH_NETMASK environment variable for the MPI processes.

Command: job submit /numcores:16 mpiexec -affinity myapp.exe
Result: The Windows operating system optimizes the use of multicore systems by dynamically shifting work to underutilized cores, but this behavior can hurt the performance of some HPC applications. The -affinity flag prevents the operating system from moving MPI processes between cores; each process runs exclusively on the core on which it started.
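The MPICH_NETMASK example selects the network whose addresses match 157.59.0.0 under the mask 255.255.0.0; the underlying test is a bitwise AND of address and mask. A small Python sketch of that matching rule, using only the standard library (the helper name matches_netmask is illustrative):

```python
import ipaddress

def matches_netmask(addr, network, mask):
    """Return True if addr is on the network selected by a
    network/mask pair such as 157.59.0.0/255.255.0.0 (bitwise AND test)."""
    a = int(ipaddress.IPv4Address(addr))
    n = int(ipaddress.IPv4Address(network))
    m = int(ipaddress.IPv4Address(mask))
    return (a & m) == (n & m)

# An interface on the 157.59.x.x network matches; one on 10.x.x.x does not.
print(matches_netmask("157.59.12.34", "157.59.0.0", "255.255.0.0"))  # True
print(matches_netmask("10.0.0.5", "157.59.0.0", "255.255.0.0"))      # False
```

MPI ranks apply a check like this to each local network interface and carry their traffic over the interface that matches.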
