Introduction to MPI, Part One
On contemporary computers, computations are most often sped up by running many processes concurrently, either on shared-memory multi-core nodes or on distributed-memory clusters. MPI (the Message Passing Interface) is a library of communication functions that enables and coordinates multiprocessing on these architectures. This tutorial introduces many of the basic MPI functions through practical examples. Working knowledge of C or Fortran is required to attend the course, and basic knowledge of Unix/Linux will be helpful. Please remember to sign up for all three sessions to complete the tutorial.
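To give a taste of what the course covers, here is a minimal MPI "hello world" in C, of the kind typically shown in a first session. It only uses the core startup and inquiry calls (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Finalize); the exact examples used in the tutorial may differ.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```

With a typical MPI installation this is compiled with the mpicc wrapper and launched with mpirun, for example `mpicc hello.c -o hello` followed by `mpirun -np 4 ./hello`; command names can vary between MPI distributions.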
If you try to sign up for this tutorial and find that it is full, please send an email to firstname.lastname@example.org and we will do our best to accommodate you.