Many computational science and engineering problems, whether formulated with differential, integral, or other methods, ultimately reduce to a series of matrix or grid operations. The dimensions of the matrices or grids are usually determined by the physical problem. For multiprocessing purposes, these matrices or grids are frequently partitioned, or domain-decomposed, so that each partition (or subdomain) is assigned to a process. One such example is an m×n matrix decomposed into p submatrices of size q×n (with q = m/p), each assigned to one of the p processes. In this case, each process represents one distinct submatrix in a straightforward manner.

However, an algorithm might dictate that the matrix be decomposed into a p×q logical grid whose elements are themselves r×s matrices. This requirement might arise for a number of reasons: efficiency considerations, ease of code implementation, and code clarity, to name a few. While it is still possible to refer to each of these p×q subdomains by a linear rank number, mapping the linear process rank onto a 2D virtual rank numbering clearly yields a more natural computational representation. To address this and other topological layouts, the MPI library provides two types of topology routines: cartesian and graph topologies. Only the cartesian topology and its associated routines are discussed here.

Some of the MPI cartesian topology routines are:

- MPI_Cart_create – creates a cartesian grid representation for the participating processes
- MPI_Cart_coords – maps a linear rank number to its equivalent multi-dimensional cartesian grid rank representation
- MPI_Cart_rank – performs the reverse mapping of MPI_Cart_coords
- MPI_Cart_shift – finds the two neighbors of the calling process along a specified cartesian direction
- MPI_Cart_sub – splits, for example, a 2D virtual process grid into individual rows or columns
- MPI_Cart_get – retrieves pertinent information about a cartesian communicator

#### Virtual Topology Examples

- Example 3. Laplace Solver (in F90) using MPI virtual topology routines
- Example 4. Matrix Transpose (in F90) using MPI virtual topology routines