
Environment Variables for the mpiexec Command

Updated: February 19, 2014

Applies To: Microsoft HPC Pack 2008, Microsoft HPC Pack 2008 R2, Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2, Windows HPC Server 2008, Windows HPC Server 2008 R2

You can use a number of built-in environment variables to affect how a Message Passing Interface (MPI) application runs when you start the MPI application with the mpiexec command. These environment variables are classified in the following groups, depending on how the environment variable is set:

  • MPIEXEC_ environment variables

  • MPICH_ environment variables

  • MSMPI_ environment variables

  • PMI_ environment variables

The environment variables in these groups have names that start with a prefix that matches the group name.

The environment variables with a prefix of MPIEXEC_ correspond to command-line parameters for the mpiexec command. You can use these environment variables to simplify the mpiexec command when you always want the effect of a particular parameter. For example, if you always want to set a timeout for jobs that run mpiexec commands, you can set the MPIEXEC_TIMEOUT environment variable to always impose a timeout without having to specify the /timeout parameter for each mpiexec command.

You normally set these environment variables before you run mpiexec, rather than using the /env or /genv parameter to set them. You can set them in several ways, including the following methods:

  • Running the cluscfg setenvs command.

  • Using a set command as part of the command line for the task that runs the mpiexec command.

  • Specifying the environment variable as a job environment variable or a task environment variable when you create the job and task that run the mpiexec command. For example, you could set them by using the /jobenv parameter of the job new command or the /env parameter of the job add command.

  • Setting the environment variable in a job template that users specify when creating jobs that include a task that runs the mpiexec command.

If one of these variables is set in the run-time environment of the mpiexec command, and you do not specify the corresponding parameters as part of the command line for the mpiexec command, the command behaves as if you specified the corresponding command-line parameters as part of the command line. If you explicitly specify the corresponding parameters as part of the command line for the mpiexec command, the explicit command line settings override the values of the environment variables.
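For example, assuming a hypothetical application named MyApp.exe, the following task command lines have the same effect; the first relies on the environment variable, and the second passes the parameter explicitly (an explicit parameter would override the variable):

```shell
rem Set the environment variable as part of the task command line (600-second timeout).
set MPIEXEC_TIMEOUT=600 & mpiexec MyApp.exe

rem Equivalent invocation that specifies the parameter explicitly.
mpiexec /timeout 600 MyApp.exe
```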

 

Name Description

MPIEXEC_AFFINITY

For HPC Pack 2008 and HPC Pack 2008 R2: Specifies whether to set the affinity mask for each of the processes that the mpiexec command starts to a single core.

This environment variable corresponds to the /affinity parameter of the mpiexec command.

A value of 1 indicates that the affinity mask should be set for each of the processes that the mpiexec command starts to a single core. A value of 0 indicates that the affinity mask should not be set for each of the processes that the mpiexec command starts to a single core.

For more information about affinity, see Performance Tuning a Windows HPC Cluster for Parallel Applications.

For HPC Pack 2012: Specifies the algorithm and, optionally, the target (in the form <algorithm>[:<target>]) that the mpiexec command uses to distribute rank processes to the compute cores.

This environment variable corresponds to the /affinity_layout parameter of the mpiexec command.

The following table contains values for <algorithm>.

 

Algorithm Description

Disabled = 0

Does not assign affinity to any process

Spread = 1

Distributes the processes as far apart as possible (default)

Sequential = 2

Distributes the processes per core sequentially

Balanced = 3

Distributes the processes over the available NUMA nodes

The following table contains values for <target>.

 

Target Description

L

Assigns each process to a logical core

P

Assigns each process to a physical core

N

Assigns each process to a NUMA node
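For example, to combine an algorithm with a target for a hypothetical application MyApp.exe, you might set the variable as follows. This sketch uses the numeric algorithm value from the table above; check the /affinity_layout documentation for the exact accepted forms:

```shell
rem Sequential distribution (algorithm 2), one rank per physical core (target P).
set MPIEXEC_AFFINITY=2:P
mpiexec -n 8 MyApp.exe
```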

MPIEXEC_TIMEOUT

Sets the amount of time, in seconds, that the job that runs the mpiexec command can run before timing out.

This environment variable corresponds to the /timeout parameter of the mpiexec command.

MPIEXEC_TRACE

Traces Microsoft Message Passing Interface (MS-MPI) events for the application. You can specify trace filters to enable tracing only for events of interest. List the event filter names or their equivalent hexadecimal values in a comma-separated list enclosed in parentheses.

By default, trace logs are written to the directory for the user profile on each node. Use the MPIEXEC_TRACEFILE environment variable to specify an alternative trace file.

The following table shows the event filter names and the equivalent hexadecimal values that you can include in the list of filters:

 

Name Hexadecimal value Description

all

0xffffffff

All API and communication events

api

0x00007fff

All API events

pt2pt

0x00000001

Point-to-point APIs

poll

0x00000002

Point-to-point polling APIs, such as MPI_Iprobe and MPI_TestXXX

coll

0x00000004

Collective APIs

rma

0x00000008

One-sided APIs

comm

0x00000010

Communication APIs

errh

0x00000020

Error handler APIs

group

0x00000040

Group APIs

attr

0x00000080

Attribute APIs

dtype

0x00000100

Data type APIs

io

0x00000200

Input/output APIs

topo

0x00000400

Topology APIs

spawn

0x00000800

Dynamic process APIs

init

0x00001000

Initialization APIs

info

0x00002000

Information APIs

misc

0x00004000

Miscellaneous APIs

interconn

0x000f8000

All interconnectivity communication

icsock

0x00008000

Socket interconnectivity communication

icshm

0x00010000

Shared memory interconnectivity communication

icnd

0x00020000

NetworkDirect interconnectivity communication

This environment variable corresponds to the /trace parameter of the mpiexec command.
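For example, to trace only point-to-point and collective API events for a hypothetical application MyApp.exe, you can list the filter names or the combined hexadecimal value (0x00000001 plus 0x00000004 equals 0x00000005):

```shell
rem Enable tracing for point-to-point and collective APIs by name...
set MPIEXEC_TRACE=(pt2pt,coll)
mpiexec MyApp.exe

rem ...or by the equivalent combined hexadecimal value.
set MPIEXEC_TRACE=(0x00000005)
mpiexec MyApp.exe
```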

Note
This environment variable is deprecated as of HPC Pack 2012. You can control MPI tracing with an Event Tracing for Windows (ETW) tool such as Xperf or Logman.

MPIEXEC_TRACEFILE

Specifies the name of the file to use for the trace log, including the path. The default file is %USERPROFILE%\mpi_trace_<job_identifier>.<task_identifier>.<subtask_identifier>.etl.

This environment variable corresponds to the /tracefile parameter of the mpiexec command.

Note
This environment variable is deprecated as of HPC Pack 2012. You can control MPI tracing with an Event Tracing for Windows (ETW) tool such as Xperf or Logman.

MPIEXEC_TRACE_MAX

Specifies the maximum size of the trace log file, in megabytes. You must have at least the specified number of megabytes of free space available on the drive that holds the trace file.

The binary tracing data is written by using a circular buffer, so when the data exceeds the maximum size of the file, the tracing data is overwritten starting at the beginning of the file. As a result, the trace log file always contains the most recent tracing data from the MPI job, up to the maximum size.

Each binary record in the trace log file has a time stamp, so that log file viewers can display the information in chronological order regardless of the wrapping that occurs when the tracing data is overwritten.

The default value for this environment variable is 10240. Specify 0 to allow the creation of a trace log file of unrestricted size.

This environment variable corresponds to the /tracemax parameter of the mpiexec command.

This environment variable is supported only in HPC Pack 2008 R2. It is deprecated as of HPC Pack 2012.

The environment variables with a prefix of MPICH_ are environment variables that you can set by using the /env, /genv, or /genvlist parameter of the mpiexec command. You can also set them before the mpiexec command runs by using the same methods that you use to set the MPIEXEC_ environment variables.

 

Name Description

MPICH_DISABLE_ND

Specifies whether to turn off the use of Network Direct for connections between ranks.

A value of 1 turns off the use of Network Direct for connections between ranks. A value of 0 does not turn off the use of Network Direct for connections between ranks.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_DISABLE_ND instead.

MPICH_DISABLE_SHM

Specifies whether to turn off the use of shared memory for connections between ranks.

A value of 1 turns off the use of shared memory for connections between ranks. A value of 0 does not turn off the use of shared memory for connections between ranks.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_DISABLE_SHM instead.

MPICH_DISABLE_SOCK

Specifies whether to turn off the use of sockets for connections between ranks.

A value of 1 turns off the use of sockets for connections between ranks. A value of 0 does not turn off the use of sockets for connections between ranks.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_DISABLE_SOCK instead.

MPICH_NETMASK

Limits network communication that uses sockets or Network Direct to only those connections that match the specified network mask. You can use this setting to send MPI messaging traffic to the highest performing network for the HPC cluster.

Specify the network mask in the form IP_mask/subnet_mask. For example, if you specify a value of 10.0.0.5/255.255.255.0 for this environment variable, the MPI application only uses networks that match an IP address of 10.0.0.x.
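For example, to restrict MPI traffic from a hypothetical application MyApp.exe to the 10.0.0.x network described above:

```shell
rem Only connections that match the 10.0.0.x network are used for socket and Network Direct traffic.
mpiexec /env MPICH_NETMASK 10.0.0.5/255.255.255.0 MyApp.exe
```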

MPICH_PROGRESS_SPIN_LIMIT

Sets the fixed limit of the spin count for the progress engine. The possible values are between 0 and 65536. A value of 0, the default, uses an adaptive limit for the spin count. For oversubscribed cores, use a low value for this setting, such as 16.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_PROGRESS_SPIN_LIMIT instead.

MPICH_SHM_EAGER_LIMIT

Specifies the message size in bytes above which the mpiexec command should use the rendezvous protocol for shared memory communication.

The range of possible values for this environment variable is 1500 to 2000000000. The default value is 128000.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_SHM_EAGER_LIMIT instead.

MPICH_SOCK_EAGER_LIMIT

Specifies the message size in bytes above which the mpiexec command should use the rendezvous protocol for socket communication.

The range of possible values for this environment variable is 1500 to 2000000000. The default value is 128000.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_SOCK_EAGER_LIMIT instead.

MPICH_SOCKET_BUFFER_SIZE

Specifies the sockets send and receive buffer sizes in bytes (SO_SNDBUF and SO_RCVBUF).

The default value is 32768.

MPICH_SOCKET_RBUFFER_SIZE

Specifies the sockets receive buffer size in bytes (SO_RCVBUF). Overrides any value specified by MPICH_SOCKET_BUFFER_SIZE.

The default value is 32768.

MPICH_SOCKET_SBUFFER_SIZE

Specifies the sockets send buffer size in bytes (SO_SNDBUF). Overrides any value specified by MPICH_SOCKET_BUFFER_SIZE.

The default value is 32768.

For more information about the size of this buffer, see the description of SO_SNDBUF in SOL_SOCKET Socket Options.

MPICH_ND_EAGER_LIMIT

Specifies the message size in bytes above which the mpiexec command should use the rendezvous protocol for Network Direct communication.

The range of possible values for this environment variable is 1500 to 2000000000. The default value is 128000.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_ND_EAGER_LIMIT instead.

MPICH_ND_ENABLE_FALLBACK

Specifies whether to use sockets for connections between ranks when the use of Network Direct connections is turned on, but the Network Direct connection fails.

A value of 1 indicates that sockets should be used for connections between ranks when the use of Network Direct connections is turned on, but the Network Direct connection fails.

A value of 0 indicates that sockets should not be used for connections between ranks when the use of Network Direct connections is turned on, but the Network Direct connection fails.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_ND_ENABLE_FALLBACK instead.

MPICH_ND_ZCOPY_THRESHOLD

Specifies the message size in bytes above which the mpiexec command should transfer data directly from the buffer for an application to the buffer for a remote application without an intermediate operating system copy on either side. This capability is known as zero copy, or zCopy.

Specify 0 to use the threshold that is specified by the Network Direct provider. Specify -1 to turn off zCopy transfers. The default value is 0.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_ND_ZCOPY_THRESHOLD instead.

MPICH_ND_MR_CACHE_SIZE

Specifies the size in megabytes of the memory registration cache for Network Direct communication. The default value equals half of the size of physical memory divided by the number of cores.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_ND_MR_CACHE_SIZE instead.

MPICH_CONNECT_RETRIES

Specifies the number of times to retry a connection between ranks when the connection uses sockets. The default value is 5.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_CONNECT_RETRIES instead.

MPICH_PORT_RANGE

Specifies the range of ports that the MPI application should use for socket communication. The format for the value is minimum_port_number,maximum_port_number. The default value is 0,65535.

MPICH_ABORT_ON_ERROR

Specifies whether the MPI application should exit when the first error occurs.

A value of 1 indicates that the MPI application should exit when the first error occurs. A value of 0 indicates that the MPI application should not exit when the first error occurs.

MPICH_INIT_BREAK

Sets a breakpoint that is relative to MPI initialization for debugging the application.

 

Value Description

preinit

Adds a breakpoint before MPI is initialized on all ranks

all

Adds a breakpoint after MPI is initialized on all ranks

<list_of_ranks_or_ranges>

Adds a breakpoint after MPI is initialized on the ranks in the specified list. The list is a comma-separated list of ranks or ranges of ranks, such as 1,5-8.

Note
This environment variable is deprecated as of HPC Pack 2012. Use MSMPI_INIT_BREAK instead.

MPICH_CONNECTIVITY_TABLE

Specifies whether to add a connectivity table to the standard output of the job that runs the mpiexec command. A connectivity table shows the mode of communication used between each of the ranks in the job.

A value of 1 indicates that the standard output of the job that runs the mpiexec command should include additional information about the mode of communication that is used between each of the ranks. This additional information includes a list of associations between ranks and nodes and a connectivity map.

A value of 0 indicates that the standard output of the job that runs the mpiexec command should not include additional information about the mode of communication that is used between each of the ranks.

The list of associations between ranks and nodes lists the ranks in ascending numeric order and indicates the name of the node that contains the core to which each rank corresponds.

The connectivity map is a table that is formatted as ASCII text. The columns in the table correspond to the target rank for the connection, and the column headings indicate the rank number of each target rank. The rows in the table correspond to the source rank for the connection, and the row labels indicate the rank number of each source rank.

Each entry in the table contains a character that indicates the mode of communication that is used during the job that ran the mpiexec command between the source rank and the target rank that correspond to the row and column for the entry. The following table shows the characters that the connectivity map uses and the modes of communication that they represent.

 

Character Mode of connectivity

+

Shared memory

@

Network Direct

S

Winsock, either TCP or Winsock Direct

.

None. The MPI job did not attempt to establish a connection between the source rank and target rank.

Note
This environment variable was introduced in HPC Pack 2008 R2. It is deprecated as of HPC Pack 2012. Use MSMPI_CONNECTIVITY_TABLE instead.

The environment variables with a prefix of MSMPI_ are environment variables that you can set by using the /env, /genv, or /genvlist parameter of the mpiexec command. You can also set them before the mpiexec command runs by using the same methods that you use to set the MPIEXEC_ environment variables.

 

Name Description

MSMPI_DISABLE_ND

Specifies whether to turn off the use of Network Direct for connections between ranks.

A value of 1 turns off the use of Network Direct for connections between ranks. A value of 0 does not turn off the use of Network Direct for connections between ranks.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_DISABLE_SHM

Specifies whether to turn off the use of shared memory for connections between ranks.

A value of 1 turns off the use of shared memory for connections between ranks. A value of 0 does not turn off the use of shared memory for connections between ranks.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_DISABLE_SOCK

Specifies whether to turn off the use of sockets for connections between ranks.

A value of 1 turns off the use of sockets for connections between ranks. A value of 0 does not turn off the use of sockets for connections between ranks.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_PROGRESS_SPIN_LIMIT

Sets the fixed limit of the spin count for the progress engine. The possible values are between 0 and 65536. A value of 0, the default, uses an adaptive limit for the spin count. For oversubscribed cores, use a low value for this setting, such as 16.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_SHM_EAGER_LIMIT

Specifies the message size in bytes above which the mpiexec command should use the rendezvous protocol for shared memory communication.

The range of possible values for this environment variable is 1500 to 2000000000. The default value is 128000.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_SOCK_EAGER_LIMIT

Specifies the message size in bytes above which the mpiexec command should use the rendezvous protocol for socket communication.

The range of possible values for this environment variable is 1500 to 2000000000. The default value is 128000.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_ND_EAGER_LIMIT

Specifies the message size in bytes above which the mpiexec command should use the rendezvous protocol for Network Direct communication.

The range of possible values for this environment variable is 1500 to 2000000000. The default value is 128000.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_ND_ENABLE_FALLBACK

Specifies whether to use sockets for connections between ranks when the use of Network Direct connections is turned on, but the Network Direct connection fails.

A value of 1 indicates that sockets should be used for connections between ranks when the use of Network Direct connections is turned on, but the Network Direct connection fails.

A value of 0 indicates that sockets should not be used for connections between ranks when the use of Network Direct connections is turned on, but the Network Direct connection fails.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_ND_ZCOPY_THRESHOLD

Specifies the message size in bytes above which the mpiexec command should transfer data directly from the buffer for an application to the buffer for a remote application without an intermediate operating system copy on either side. This capability is known as zero copy, or zCopy.

Specify 0 to use the threshold that is specified by the Network Direct provider. Specify -1 to turn off zCopy transfers. The default value is 0.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_ND_MR_CACHE_SIZE

Specifies the size in megabytes of the memory registration cache for Network Direct communication. The default value equals half of the size of physical memory divided by the number of cores.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_CONNECT_RETRIES

Specifies the number of times to retry a connection between ranks when the connection uses sockets. The default value is 5.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_INIT_BREAK

Sets a breakpoint that is relative to MPI initialization for debugging the application.

 

Value Description

preinit

Adds a breakpoint before MPI is initialized on all ranks

all

Adds a breakpoint after MPI is initialized on all ranks

*

Adds a breakpoint after MPI is initialized on all ranks

<list_of_ranks_or_ranges>

Adds a breakpoint after MPI is initialized on the ranks in the specified list. The list is a comma-separated list of ranks or ranges of ranks, such as 1,5-8.
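For example, to pause ranks 0 and 4 through 7 of a hypothetical application MyApp.exe after MPI initialization so that a debugger can attach to them:

```shell
rem Ranks 0 and 4-7 break after MPI is initialized; the remaining ranks run normally.
mpiexec /env MSMPI_INIT_BREAK 0,4-7 MyApp.exe
```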

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_CONNECTIVITY_TABLE

Specifies whether to add a connectivity table to the standard output of the job that runs the mpiexec command. A connectivity table shows the mode of communication used between each of the ranks in the job.

A value of 1 indicates that the standard output of the job that runs the mpiexec command should include additional information about the mode of communication that is used between each of the ranks. This additional information includes a list of associations between ranks and nodes and a connectivity map.

A value of 0 indicates that the standard output of the job that runs the mpiexec command should not include additional information about the mode of communication that is used between each of the ranks.

The list of associations between ranks and nodes lists the ranks in ascending numeric order and indicates the name of the node that contains the core to which each rank corresponds.

The connectivity map is a table that is formatted as ASCII text. The columns in the table correspond to the target rank for the connection, and the column headings indicate the rank number of each target rank. The rows in the table correspond to the source rank for the connection, and the row labels indicate the rank number of each source rank.

Each entry in the table contains a character that indicates the mode of communication that is used during the job that ran the mpiexec command between the source rank and the target rank that correspond to the row and column for the entry. The following table shows the characters that the connectivity map uses and the modes of communication that they represent.

 

Character Mode of connectivity

+

Shared memory

@

Network Direct

S

Winsock, either TCP or Winsock Direct

.

None. The MPI job did not attempt to establish a connection between the source rank and target rank.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_SOCK_COMPRESSION_THRESHOLD

Specifies a threshold in bytes. The MPI library attempts to compress all messages communicated over the sockets channel that are larger than this threshold. Threshold values that are below the minimum are rounded up to the minimum threshold of 512.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_HA_COLLECTIVE

Makes the MPI library aware of the hierarchy of the interconnect. This awareness facilitates better performance on all or a subset of the collective operations.

 

Value Description

all

Enable hierarchy awareness for the Bcast, Barrier, Reduce, and Allreduce operations.

<collective>

Enable hierarchy awareness for one or more of the Bcast, Barrier, Reduce, and Allreduce operations. Specify the collectives in the form a[,b]*, where each of a and b is one of Bcast, Barrier, Reduce, or Allreduce. Any combination of operations may be specified.
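For example, to enable hierarchy awareness for only the Bcast and Allreduce operations of a hypothetical application MyApp.exe:

```shell
rem Hierarchy-aware algorithms are used for Bcast and Allreduce; other collectives are unaffected.
mpiexec /env MSMPI_HA_COLLECTIVE Bcast,Allreduce MyApp.exe
```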

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_TUNE_COLLECTIVE

Specifies a series of trials that the MPI library runs to determine the data sizes at which to use the various algorithms that make up a collective operation.

 

Value Description

all

Tune all collective operations that have multiple algorithms

<collective>

Tunes the specified collective operations to optimize their performance. Specify the collectives in the form a[,b]*, where each of a and b is one of Bcast, Reduce, Allreduce, Gather, Allgather, Reducescatter, or Alltoall. Any combination of operations may be specified.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_TUNE_PRINT_SETTINGS

When set in concert with MSMPI_TUNE_COLLECTIVE, directs the MPI library to print the values that it determines to be optimal for selecting among the available collective algorithms, in the specified format.

 

Value Description

optionfile

Prints the values determined for optimal performance in <var> <value> format, one per line. The resulting file can be used with the /optionfile parameter of the mpiexec command.

cluscfg

Prints the values determined for optimal performance as a script that sets the environment variables by using the cluscfg command.

mpiexec

Prints the values determined for optimal performance as a block of /env flags that you can pass to the mpiexec command.
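For example, to tune all collective operations of a hypothetical application MyApp.exe and print the resulting settings as a block of /env flags:

```shell
rem Run the tuning trials and emit the optimal settings in mpiexec /env format.
mpiexec /env MSMPI_TUNE_COLLECTIVE all /env MSMPI_TUNE_PRINT_SETTINGS mpiexec MyApp.exe
```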

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_TUNE_SETTINGS_FILE

When used in concert with MSMPI_TUNE_COLLECTIVE, writes the output of tuning to the specified file. By default, the output is written to the console. The output is always written by rank 0.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_TUNE_TIME_LIMIT

When set in concert with MSMPI_TUNE_COLLECTIVE, changes the default time limit, in seconds, for running the trials that optimize the collective operations. This time limit is a suggestion to the MPI library and is not enforced as a hard limit. Every collective that is tuned is run a minimum of five times for each data size. The default time limit is 60 seconds.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_TUNE_ITERATION_LIMIT

Changes the default maximum number of trials for each data size and algorithm when set in concert with MSMPI_TUNE_COLLECTIVE. The default iteration limit is 10000. The minimum value is five (5).

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_TUNE_SIZE_LIMIT

When set in conjunction with MSMPI_TUNE_COLLECTIVE, changes the default maximum data size, in bytes, to attempt for time trials of collective algorithms. Every data size that is a power of two and less than the size limit is tested. The default size limit is 16777216. The minimum value is one (1).

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_TUNE_VERBOSE

Directs the MPI Library to produce verbose output while running the trials to optimize the collective operations. This value is used with MSMPI_TUNE_COLLECTIVE. Verbose output is off by default. All output is written to the console by rank 0.

 

Value Description

0

Verbose output is turned off.

1

Print data tables.

2

Debug output.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_PRECONNECT

Directs the MPI library to establish connections between the specified processes when MPI is initialized, rather than waiting until a connection is first needed.

 

Value Description

all

All processes will be fully connected after MPI is initialized

*

All processes will be fully connected after MPI is initialized

<range>

Each process in <range> is connected to all other processes after MPI is initialized. The rank range is in the form a,c-e, where a, c, and e are decimal integers.

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_DUMP_MODE

Directs the MPI library to generate dump files for processes that encounter MPI errors. A process that encounters multiple errors overwrites its dump file, so only the state at the time of the last error is recorded.

 

Value Description

0

No dump files are generated when MPI errors are encountered

1

Processes that encounter MPI errors generate a minidump

2

All processes in the job generate a minidump when any process terminates due to an MPI error

3

Processes that encounter an MPI error generate a full memory dump

4

All processes in the job generate a full memory dump when any process terminates due to an MPI error

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_DUMP_PATH

When set in conjunction with MSMPI_DUMP_MODE, sets the path for the dump files. If rank is the rank of the process in MPI_COMM_WORLD, the dump files are stored at the specified path with names of the form mpi_dump_<rank>.dmp.

If the environment variable is not set, the dump files are stored at the default path of %USERPROFILE%.

When used in a Windows HPC environment, jobid.taskid.taskinstanceid is added before the rank.
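For example, to request a minidump from any rank of a hypothetical application MyApp.exe that encounters an MPI error, stored under an illustrative directory C:\dumps:

```shell
rem Mode 1: only the ranks that encounter MPI errors write minidumps, under C:\dumps.
mpiexec /env MSMPI_DUMP_MODE 1 /env MSMPI_DUMP_PATH C:\dumps\ MyApp.exe
```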

Note
This environment variable was introduced in HPC Pack 2012 and is not supported in previous versions.

MSMPI_JOB_CONTEXT

Identifies this instance of mpiexec to a node manager entity so that the entity can authorize the launch of the MPI job on the requested set of resources. The value of the string is passed to the entity as part of startup.

You can specify this environment variable when Microsoft MPI is used in a managed setting outside of Microsoft® HPC Pack.

Note
This environment variable was introduced in HPC Pack 2012 R2 and is not supported in previous versions.

MSMPI_PRINT_ENVIRONMENT_BLOCK

Directs the MPI library to write the contents of its environment block to the standard output stream or to the file specified in the MSMPI_PRINT_ENVIRONMENT_BLOCK_FILE environment variable.

 

Value Description

0

The contents of the environment block are not written

1

The contents of the environment block are written

Note
This environment variable was introduced in HPC Pack 2012 R2 and is not supported in previous versions.

MSMPI_PRINT_ENVIRONMENT_BLOCK_FILE

When set in concert with MSMPI_PRINT_ENVIRONMENT_BLOCK, specifies the prefix of a file name to which the environment block is written instead of to the standard output stream.

The file name is the specified prefix with the PID appended: prefix_<PID>.txt, where <PID> is the process identifier assigned by the operating system. The file is written to the current working directory of the process unless the prefix contains a path or an environment variable that expands to a path.

Note
This environment variable was introduced in HPC Pack 2012 R2 and is not supported in previous versions.

Environment variables with the PMI_ prefix are environment variables that the smpd.exe process sets in the run-time environment of the MPI application, rather than environment variables that you set in the run-time environment yourself or that you specify when you run the mpiexec command.

 

Name Description

PMI_APPNUM

Indicates the zero-based position in which the current MPI application was specified in the mpiexec command. You can specify that mpiexec should start multiple MPI applications by separating the application names and their associated parameters with colons (:).
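For example, the following command starts two applications (the names master.exe and worker.exe are illustrative). The single master process sees PMI_APPNUM set to 0, and the four worker processes see PMI_APPNUM set to 1:

```shell
rem A colon separates the application blocks on the mpiexec command line.
mpiexec -n 1 master.exe : -n 4 worker.exe
```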

PMI_RANK

Indicates the rank of the current process among all of the processes that are created by the mpiexec command that created the current process.

PMI_SIZE

Indicates the total number of processes that are created by the mpiexec command that created the current process.

PMI_SMPD_KEY

Indicates the local identifier that smpd.exe uses for the current process.

PMI_SPAWN

Indicates whether the current MPI application was started by another MPI application.

A value of 1 indicates that the current MPI application was started by another MPI application. A value of 0 indicates that the current MPI application was not started by another MPI application.

© 2014 Microsoft