On contemporary computers, computations are most often sped up by employing multiple processors concurrently, either on multi-core shared-memory nodes or on distributed-memory multiprocessor clusters. MPI (Message Passing Interface) is a library of communication functions that enables and enhances multiprocessing on these architectures. This tutorial introduces many of the basic MPI functions through practical examples. A working knowledge of C or Fortran is required to attend the course; basic knowledge of Unix/Linux will be helpful. Please remember to sign up for all four sessions to complete the tutorial.
Speaker(s): Kadin Tseng
When
Tuesday, Jun 17, 2014, 1:00pm–3:00pm
Register by 6/17/2014.
Where
111 Cummington Mall (MCS B27)
Who
Admission is free
Contact
IS&T RCS