This course provides a basic working knowledge of the Message Passing Interface (MPI), the standard library for distributed-memory parallel computations.
Program: basic and intermediate-level constructs, such as basic send/receive communications, global reduction operations, blocking and collective calls, MPI datatypes, and MPI I/O.
Prerequisites: good knowledge of Unix- or Linux-based operating systems, and C or C++ programming basics (for loops, conditionals, I/O).
Timetable: Lectures will be held in Aula B (the first room on your right as you enter the building) on the following dates:
- Wednesday, March 1 11:00-13:00
- Thursday, March 2 11:00-13:00
- Wednesday, March 8 11:00-13:00
- Thursday, March 9 11:00-13:00
- Wednesday, March 15 11:00-13:00
- Thursday, March 16 11:00-13:00
The lecture schedule is subject to change. Please bring your laptop with you.
Lecture material:
- Lecture I: Introduction to the Message Passing Interface (MPI) (.pdf) Online recorded lecture: here
- Lecture II: Point-to-Point Communications & Application Examples (.pdf) [codes: ring.c] Online recorded lecture: here
- Lecture III: Collective Communications & Application to the 1D Heat Equation (.pdf) [codes: pi.c, heat_eq.c] Online recorded lecture: here
- Lecture IV: Derived datatypes (.pdf) [codes: tools.c, subarray.c] Online recorded lecture: here
- Lecture V: MPI I/O (.pdf) [codes: write_arr1D.c; write_arr2D.c] Online recorded lecture: here
- Lecture VI: Laplace Equation in 2D (.pdf) (serial code: laplace2D_serial.c; parallel code: laplace2D.c; Gnuplot script: laplace2D.gp) Online recorded lecture: here