Introduction to MPI (Hands-on)
Many programs can be sped up by using additional CPU cores. To do this, the execution needs to be parallelized and distributed across multiple cores. While "shared-memory" approaches like OpenMP allow you to use many cores on a single machine, if the program can still benefit from additional cores, then a "distributed-memory" approach like MPI is needed to use multiple machines/nodes. MPI provides a way to communicate between machines and distribute work/data so that they can work cooperatively. This tutorial will take a hands-on approach to writing several simple MPI programs and along the way demonstrate basic MPI functionality.
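As a taste of the style of program covered, a minimal MPI "hello world" in Fortran might look like the following sketch. (This is an illustrative example only; the exact programs written in the tutorial may differ.)

```fortran
! Minimal MPI "hello world": each process (rank) reports its ID.
! Compile with an MPI wrapper, e.g.:  mpif90 hello.f90 -o hello
! Run across processes with, e.g.:    mpirun -np 4 ./hello
program hello_mpi
    use mpi
    implicit none
    integer :: ierr, rank, nprocs

    call MPI_Init(ierr)                              ! start the MPI runtime
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr) ! total number of processes
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)   ! this process's ID (0..nprocs-1)

    print '(a,i0,a,i0)', 'Hello from rank ', rank, ' of ', nprocs

    call MPI_Finalize(ierr)                          ! shut down MPI
end program hello_mpi
```

Each copy of the program runs as an independent process, possibly on different machines; the rank returned by `MPI_Comm_rank` is what lets each process decide which portion of the work or data is its own.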
Some prior parallel programming experience is important for attendees. Programs will be written in Fortran, so prior Fortran experience is helpful, but the syntax is straightforward enough that C/C++ experience can be sufficient. Those new to parallel software development are strongly encouraged to take the “Introduction to Parallel Programming Concepts” tutorial first.
If you try to sign up for this tutorial and find that it is full, please send email to rcs-tutorial@bu.edu and we will do our best to accommodate you.