Introduction to numerical linear algebra in parallel

by Dr Susana Gómez

Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (IIMAS), México

Linear algebra is at the core of most scientific computing applications.  Solving linear systems of equations, either with dense or sparse coefficient matrices, can be by far the most time-consuming aspect of solving a problem computationally.  Thus, since the advent of parallel computers, the efficient parallel programming of linear algebra operations has been a central concern of many researchers.

Gaussian elimination is the most basic technique, whereby one equation is repeatedly used to eliminate an unknown from the others. The process generates a triangular system that can be solved immediately by substitution.  The crucial question is how to perform these operations efficiently in parallel.
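As a point of reference before considering parallelism, the serial algorithm can be sketched as follows (a minimal illustration, not the course's implementation; partial pivoting is included for numerical stability):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting,
    followed by back substitution on the resulting triangular system."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: use row k to cancel the k-th unknown below it.
    for k in range(n):
        # Swap in the row with the largest pivot (partial pivoting).
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution: the system is now upper triangular.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

The doubly nested elimination loop is where the parallelism question arises: the row updates within one step are independent of each other, but the steps themselves are sequential.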

Figure: Directions of steepest descent in the conjugate gradient method

Iterative methods of solution are required when Gaussian elimination takes too much time due to the size of the system. This is often the case in scientific computing.  Roughly in increasing order of effectiveness, the basic methods are:  Jacobi iteration, Gauss-Seidel, successive overrelaxation, and Krylov subspace methods such as conjugate gradient.
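To make the contrast with direct solution concrete, here is a sketch of the simplest of these, Jacobi iteration (an illustrative serial version; convergence is guaranteed, for example, when the matrix is strictly diagonally dominant):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Jacobi iteration for Ax = b: with D the diagonal of A and
    R = A - D, iterate x_new = D^{-1} (b - R x) until the update
    is smaller than tol."""
    D = np.diag(A)            # diagonal entries of A
    R = A - np.diag(D)        # off-diagonal part
    x = np.zeros(len(b))
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```

Each sweep touches only the previous iterate, so all components of `x_new` can be computed independently; this is what makes Jacobi naturally parallel, in contrast to Gauss-Seidel, which uses updated components as soon as they are available.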

This course will cover the efficient parallel programming of:

  • inner products and matrix-vector products
  • Gaussian elimination
  • Jacobi and Gauss-Seidel iterative methods
  • Conjugate gradient methods
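These topics build on one another: the conjugate gradient method, for instance, spends almost all of its time in exactly the inner products and matrix-vector products listed first, so parallelising those kernels parallelises the whole solver. A minimal serial sketch for a symmetric positive definite system (illustrative only, without preconditioning):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Conjugate gradient for symmetric positive definite A.
    Each iteration costs one matrix-vector product and a few
    inner products -- the kernels to parallelise."""
    n = len(b)
    max_iter = max_iter if max_iter is not None else n
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                  # matrix-vector product
        alpha = rs / (p @ Ap)       # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # new conjugate direction
        rs = rs_new
    return x
```

In exact arithmetic the method terminates in at most n steps, but in practice it is used as an iterative method stopped once the residual is small.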

Supplementary material