- Objectives of this Tutorial
- Introduces you to the fundamentals of MPI by way of F77, F90 and C examples
- Shows you how to compile, link and run MPI code
- Covers additional MPI routines that deal with virtual topologies
- Cites references
- What is MPI?
- MPI stands for Message Passing Interface and its standard is set by the Message Passing Interface Forum
- It is a library of subroutines/functions, NOT a language
- MPI subroutines are callable from Fortran and C
- The programmer writes Fortran/C code with the appropriate MPI library calls, compiles it with a Fortran/C compiler, then links it against the MPI library
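The compile/link/run cycle above typically looks like the following. The wrapper and launcher names are the common ones (mpicc, mpif77, mpirun) but vary by MPI implementation and site, so treat these commands as a sketch rather than the exact invocations for any particular system:

```shell
# Compile a C source and link it against the MPI library via the compiler wrapper
mpicc -o example example.c

# Fortran equivalent (the wrapper may be named mpif77, mpif90, or mpifort)
mpif77 -o example example.f

# Launch 4 copies of the same executable (SPMD)
mpirun -np 4 ./example
```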
- Why MPI?
- For large problems that demand better turn-around time (and access to more memory)
- For Fortran “dusty deck” codes, it would often be very time-consuming to rewrite them to take advantage of parallelism. Even on SMP machines, such as the SGI PowerChallengeArray and Origin2000, an automatic parallelizer might not be able to detect the parallelism.
- For distributed-memory machines, such as a cluster of Unix workstations or a cluster of NT/Linux PCs.
- To maximize portability; MPI works on both distributed- and shared-memory architectures.
- In user code, the appropriate header file must be included wherever MPI library calls occur: mpi.h for C, mpif.h for Fortran.
- MPI is initialized by a call to MPI_Init. This routine must be called before any other MPI routine, and it must be called only once in the program.
- MPI processing ends with a call to MPI_Finalize.
- Essentially the only difference between MPI subroutines (called from Fortran) and MPI functions (called from C) is the error-reporting flag. In Fortran, it is returned as the last argument of the subroutine; in C, the integer error flag is the function’s return value. Consequently, MPI Fortran routines always take one more argument than their C counterparts.
- C’s MPI function names start with “MPI_”, followed by a character string whose leading character is an upper-case letter and whose remaining characters are lower case. Fortran subroutines bear the same names but are case-insensitive.
There are essentially two paradigms in MPI programming: SPMD (Single Program Multiple Data) and MPMD (Multiple Programs Multiple Data). The example programs shown below employ the SPMD paradigm, i.e., an identical copy of the same program runs on each of the processes.
While each example below is self-contained, readers are strongly encouraged to follow them in order, so that the finer points can be demonstrated and explained in progression.
- Example 1. Basics of Numerical Integration
- You can download the above examples, along with the appropriate makefiles and batch scripts for the four parallel systems maintained by SCV.
The Research Computing Services group (RCS) at Boston University maintains the Shared Computing Cluster (SCC), a large Linux cluster located in Holyoke, MA. Provided below are the links to the instructions on compilation and running jobs for the SCC.
In addition to the basic MPI routines demonstrated above, there are many other routines for various applications. Some of the more frequently used routines, grouped according to their functionalities, are discussed below:
There are a number of MPI references available.
- Books :
- Parallel Programming with MPI by P. S. Pacheco, Morgan Kaufmann, 1997
- Using MPI by W. Gropp, E. Lusk and A. Skjellum, The MIT Press, 1994
- Online Documents:
- MPI: The Complete Reference by M. Snir et al., The MIT Press, 1996
- MPI: A Message-Passing Interface Standard, Version 4.0, MPI Forum, June 2021.
A complete list of MPI routines and constants is available from Argonne National Laboratory.
Your suggestions and comments are welcome; please send them to firstname.lastname@example.org.