Chapter 2. Getting Started

This chapter provides procedures for building MPI applications on Linux and IRIX systems, examples of using the mpirun(1) command to launch MPI jobs, and procedures for building and running SHMEM applications.

Compiling and Linking IRIX MPI Programs

To use the 64-bit MPI library, choose one of the following commands, specifying the MPI library with the -l option on the compiler command line:

% CC -64 compute.C -lmpi++ -lmpi
% cc -64 compute.c -lmpi
% f77 -LANG:recursive=on -64 compute.f -lmpi
% f90 -LANG:recursive=on -64 compute.f -lmpi

To use the 32-bit MPI library, choose one of the following commands:

% CC -n32 compute.C -lmpi++ -lmpi
% cc -n32 compute.c -lmpi
% f77 -n32 compute.f -lmpi
% f90 -n32 compute.f -lmpi
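
For reference, a minimal MPI program in C that the cc commands above could build might look like the following sketch (the file name compute.c and the message text are illustrative only):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* initialize the MPI library */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down MPI before exiting */
    return 0;
}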

If the Fortran 90 compiler version 7.2.1 or later is installed, you can add the -auto_use option to enable compile-time checking of MPI subroutine calls, as follows:

% f90 -auto_use mpi_interface -LANG:recursive=on -64 compute.f -lmpi
% f90 -auto_use mpi_interface -n32 compute.f -lmpi

If your program does not perform MPI-2 one-sided operations like put and get to a local Fortran variable or array with the SAVE attribute, you can omit the -LANG:recursive=on option. Note that MPI-2 one-sided communication is not supported for the 32-bit MPI library, and so -LANG:recursive=on is not needed with -n32.
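
In C, MPI-2 one-sided put operations of the kind mentioned above follow the pattern in this sketch (the buffer names are illustrative, error checking is omitted, and the job is assumed to run with at least two processes):

#include <stdio.h>
#include <mpi.h>

/* Sketch: each process exposes a local buffer in a window, and process 0
 * writes into the buffer on process 1 with MPI_Put. */
int main(int argc, char *argv[])
{
    int rank, local_buf[4] = { 0, 0, 0, 0 };
    int values[4] = { 1, 2, 3, 4 };
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Expose local_buf to remote put/get operations. */
    MPI_Win_create(local_buf, 4 * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0)
        MPI_Put(values, 4, MPI_INT, 1, 0, 4, MPI_INT, win);
    MPI_Win_fence(0, win);        /* completes the put on origin and target */

    if (rank == 1)
        printf("rank 1: local_buf[0] = %d\n", local_buf[0]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}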

If the MPI application also uses OpenMP directives, you should link the application with the libraries listed in the following order:

% CC -mp -64 compute.C -lmp -lmpi++ -lmpi
% cc -mp -64 compute.c -lmp -lmpi
% f77 -mp -64 compute.f -lmp -lmpi
% f90 -mp -64 compute.f -lmp -lmpi

This order is not required, but in certain cases it leads to better application performance. For further information about using hybrid applications, see “Tuning MPI/OpenMP Hybrid Codes” in Chapter 6.
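
A hybrid program of this kind typically confines MPI calls to the serial parts of the code and threads the computational work with OpenMP. The following minimal C sketch is illustrative only:

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI calls are kept outside the parallel region; only the
     * computational work is threaded. */
    #pragma omp parallel
    {
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}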

If the MPI application uses the SGI pthreads library, use the following library order when linking the application:

% CC -64 compute.C -lmpi++ -lmpi -lpthread
% cc -64 compute.c -lmpi -lpthread

This order is necessary because the SGI MPI library contains internal initialization routines that might need to run before other libraries' initialization routines. The SGI libpthread.so library contains one such initialization routine that can conflict with the MPI routines; the link order shown above ensures that they do not conflict.
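
The following minimal C sketch (illustrative only) shows the common pattern of confining MPI calls to the main thread while a pthread performs purely local computation:

#include <stdio.h>
#include <pthread.h>
#include <mpi.h>

/* Worker thread: local work only; no MPI calls are made from this thread. */
static void *worker(void *arg)
{
    return arg;
}

int main(int argc, char *argv[])
{
    pthread_t tid;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);

    printf("rank %d finished\n", rank);
    MPI_Finalize();
    return 0;
}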

Compiling and Linking Linux MPI Programs

Once the MPT RPM is installed in its default location, the compilers find the include files, the .so files, the .a files, and the mpirun command automatically. The commands to build an MPI-based application using the .so files are as follows:

  • To use the 64-bit MPI library on Linux systems, choose one of the following commands:

    % g++ -o myprog myprog.C -lmpi++ -lmpi
    % gcc -o myprog myprog.c -lmpi
    % g77 -I/usr/include -o myprog myprog.f -lmpi 

  • To compile programs on Linux with the Intel compiler, use the following commands:

    % efc -o myprog myprog.f -lmpi        (Fortran)
    % ecc -o myprog myprog.c -lmpi        (C)

    On Linux, the libmpi++.so library is not binary compatible with code generated by g++ 3.0 compilers. For this reason, an additional library, libg++3mpi++.so, is provided for g++ 3.0 users as well as Intel C++ 8.0 users; link it by specifying -lg++3mpi++ instead of -lmpi++.


    Note: You must use the Intel compiler to compile Fortran 90 programs on Linux systems.


  • To compile Fortran programs on Linux with the Intel compiler and enable compile-time checking of MPI subroutine calls, insert a USE MPI statement near the beginning of each subprogram to be checked and use the following command:

    % efc -I/usr/include -o myprog myprog.f -lmpi        


    Note: The above command line assumes a default installation; if you have installed MPT into a non-default location, replace /usr/include with the name of the relocated directory.



    Note: At the time this manual was written, the MPI.mod file included in MPT 1.9 was unusable by Intel efc compiler versions 8 and beyond. The supplied MPI.mod file is generated with efc version 7.1, build 20030605, and is accepted only by efc version 7 compilers.


Using mpirun to Launch an MPI Application

You must use the mpirun command to start MPI applications. For complete specification of the command line syntax, see the mpirun(1) man page. This section summarizes the procedures for launching an MPI application.

Launching a Single Program on the Local Host

To run an application on the local host, enter the mpirun command with the -np argument. Your entry must include the number of processes to run and the name of the MPI executable file.

The following example starts three instances of the mtest application, which is passed an argument list (arguments are optional):

% mpirun -np 3 mtest 1000 "arg2"
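
In general, each instance of the application receives the arguments that follow the executable name on the mpirun line. The following C sketch prints those arguments from every process (mtest's actual behavior is not documented here; the code is illustrative only):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every process sees the argument list given after the executable name. */
    for (i = 1; i < argc; i++)
        printf("rank %d: arg %d = %s\n", rank, i, argv[i]);

    MPI_Finalize();
    return 0;
}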

Launching a Multiple Program, Multiple Data (MPMD) Application on the Local Host

You are not required to use a different host in each entry that you specify on the mpirun command. You can launch a job that has multiple executable files on the same host. In the following example, one copy of prog1 and five copies of prog2 are run on the local host. Both executable files use shared memory.

% mpirun -np 1 prog1 : 5 prog2

Note that on IRIX systems, the executable files in an MPMD application must all be compiled as 32-bit applications or all as 64-bit applications; you cannot mix 32-bit and 64-bit executable files in the same MPI job.
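
In an MPMD job such as this one, the processes of prog1 and prog2 share a single MPI_COMM_WORLD, so either program can locate itself with the usual calls. The following C sketch is illustrative only:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank within the whole MPMD job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* 6 in the example above */

    /* Both prog1 and prog2 processes appear in the same MPI_COMM_WORLD. */
    printf("%s is rank %d of %d\n", argv[0], rank, size);

    MPI_Finalize();
    return 0;
}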

Launching a Distributed Application

You can use the mpirun command to launch a program that consists of any number of executable files and processes, and you can distribute the program to any number of hosts. A host is usually a single machine, but it can be any accessible computer running Array Services software. For a list of available nodes on systems running Array Services software, see the /usr/lib/array/arrayd.conf file.

You can list multiple entries on the mpirun command line. Each entry contains an MPI executable file and a combination of hosts and process counts for running it. This gives you the ability to start different executable files on the same or different hosts as part of the same MPI application.

The examples in this section show various ways to launch an application that consists of multiple MPI executable files on multiple hosts.

The following example runs ten instances of the a.out file on host_a:

% mpirun host_a -np 10 a.out

When specifying multiple hosts, you can omit the -np option and list the number of processes directly. The following example launches ten instances of fred on three hosts. fred has two input arguments.

% mpirun host_a, host_b, host_c 10 fred arg1 arg2

The following example launches an MPI application on different hosts with different numbers of processes and executable files:

% mpirun host_a 6 a.out : host_b 26 b.out
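
One simple way to confirm where the processes of a distributed job were placed is to have each rank report the host it is running on, as in the following C sketch (illustrative only):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char name[MPI_MAX_PROCESSOR_NAME];
    int rank, len;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &len);     /* host on which this rank runs */

    printf("rank %d is running on %s\n", rank, name);

    MPI_Finalize();
    return 0;
}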

Using MPI-2 Spawn Functions to Launch an Application

To use the MPI-2 process creation functions MPI_Comm_spawn or MPI_Comm_spawn_multiple, you must set the universe size by specifying the -up option on the mpirun command line. For example, the following command starts three instances of the mtest MPI application in a universe of size 10:

% mpirun -up 10 -np 3 mtest

By using one of the above MPI spawn functions, mtest can start up to seven more MPI processes.
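
For example, mtest might query the universe size attribute and spawn the remaining processes, as in the following C sketch (the worker executable name is illustrative only):

#include <mpi.h>

int main(int argc, char *argv[])
{
    int size, *usize, flag, nspawn;
    MPI_Comm children;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* MPI_UNIVERSE_SIZE reflects the -up value given to mpirun. */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE, &usize, &flag);

    if (flag && *usize > size) {
        nspawn = *usize - size;   /* 7 in the example above */
        /* Collective call: all processes in MPI_COMM_WORLD take part. */
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, nspawn, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);
    }

    MPI_Finalize();
    return 0;
}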

When running MPI applications that use the MPI-2 MPI_Comm_spawn or MPI_Comm_spawn_multiple functions on partitioned Altix systems, it may be necessary to specify explicitly the partitions on which additional MPI processes can be launched. See the section "Launching Spawn Capable Jobs on Altix Partitioned Systems" in the mpirun(1) man page.

Compiling and Running SHMEM Applications on IRIX Systems

To compile a 64-bit SHMEM application on IRIX systems, choose one of the following commands:

% CC -64 compute.C -lsma
% cc -64 compute.c -lsma
% f77 -LANG:recursive=on -64 compute.f -lsma
% f90 -LANG:recursive=on -64 compute.f -lsma
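
For reference, a minimal SHMEM program in C that the cc command above could build might look like the following sketch (the file name compute.c and the data movement shown are illustrative only):

#include <stdio.h>
#include <mpp/shmem.h>

long target;                      /* global, so it is symmetric on every PE */

int main(void)
{
    int me, npes;
    long source;

    start_pes(0);                 /* initialize SHMEM; NPES sets the count */
    me   = _my_pe();
    npes = _num_pes();

    /* Each PE writes its PE number into 'target' on the next PE. */
    source = (long)me;
    shmem_long_put(&target, &source, 1, (me + 1) % npes);

    shmem_barrier_all();          /* wait until all puts are complete */
    printf("PE %d: target = %ld\n", me, target);

    return 0;
}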

To use the 32-bit SHMEM library, choose one of the following commands:

% CC -n32 compute.C -lsma
% cc -n32 compute.c -lsma
% f77 -LANG:recursive=on -n32 compute.f -lsma
% f90 -LANG:recursive=on -n32 compute.f -lsma


Note: It is generally not recommended to compile SHMEM applications as 32-bit executable files.


If the Fortran 90 compiler version 7.2.1 or later is installed, you can add the -auto_use option to enable compile-time checking of SHMEM subroutine calls, as follows:

% f90 -auto_use shmem_interface -LANG:recursive=on -64 compute_shmem.f -lsma
% f90 -auto_use shmem_interface -LANG:recursive=on -n32 compute_shmem.f -lsma

If your program does not perform SHMEM one-sided operations like put and get to a local Fortran variable or array with the SAVE attribute, you can omit the -LANG:recursive=on option. This option prevents the compiler from holding these variables in registers across a subroutine call.

You do not need to use mpirun to launch SHMEM applications unless the MPI library was also linked with the application. Use the NPES environment variable to specify the number of SHMEM processes to use when running a SHMEM executable file. For example, the following commands run shmem_app with 32 processes:

% setenv NPES 32
% ./shmem_app

If MPI is also used in the executable file, you must use mpirun to launch the application, as if it were an MPI application.

Compiling and Running SHMEM Applications on Linux Systems

To use the 64-bit SHMEM library on Linux systems, choose one of the following commands:

% g++ compute.C -lsma
% gcc compute.c -lsma
% g77 -I/usr/include compute.f -lsma

To compile SHMEM programs on Linux systems with the Intel compiler, use the following commands:

% ecc compute.C -lsma
% ecc compute.c -lsma
% efc compute.f -lsma

Unlike on IRIX systems, on Linux systems you must use mpirun to launch SHMEM applications. The NPES environment variable has no effect on SHMEM programs running on Linux; to request the desired number of processes, specify the -np option on the mpirun command line.

On Linux, the SHMEM programming model supports single-host SHMEM applications as well as SHMEM applications that span multiple partitions. To launch a SHMEM application on more than one partition, use the multiple-host mpirun syntax, such as the following:

% mpirun hostA, hostB -np 16 ./shmem_app

For more information, see the intro_shmem(3) man page.