Introduction to Parallel Programming Using MPI

Europe/Warsaw
Online

Description

In collaboration with NCC Latvia and RTU HPC Center, we organize an introductory course on the Message Passing Interface (MPI). This session introduces the fundamentals of parallel programming using MPI, a standard for writing programs that run on distributed-memory systems. Attendees will explore the Single Program Multiple Data (SPMD) model, core MPI concepts such as communicators and ranks, and essential communication techniques including point-to-point, collective, and non-blocking operations. The training also covers launching MPI applications on HPC systems and presents hybrid approaches that integrate MPI with other parallel paradigms. Practical exercises, including a distributed inner product and a halo exchange, provide hands-on experience with the key concepts.

By the end of this session, participants will be able to:

  • Explain the need for message passing in parallel computing and the basics of the SPMD model.

  • Understand and use key MPI constructs: communicators, ranks, and messages.

  • Implement point-to-point and collective communication patterns in MPI programs.

  • Apply non-blocking communication to overlap computation with communication.

  • Integrate MPI with other parallel programming models in hybrid architectures.

  • Compile and execute MPI applications on high-performance computing (HPC) systems.

  • Develop simple distributed-memory parallel algorithms through hands-on MPI exercises.

This course is organized by the HPC Competence Centres in Latvia and Poland within the EuroCC2 project, in collaboration with RTU HPC Center.

The course will take place online. The link to the streaming platform will be provided only to registered participants.

The exercises can be followed during the hands-on part in two ways:

  • By remotely accessing the ICM UW computational facility. Interested participants should apply for an ICM account at https://granty.icm.edu.pl/account_applications/new (please do this by April 20 at the latest, to allow us time to set up your access credentials and a training allocation; in the ID verification field, please indicate the training title).
  • Using your own computer with a GCC (g++) compiler and an MPI library installed.

Prerequisites: basic C++ programming; access to a GNU/Linux system with MPI installed.

Time & date: Wednesday, April 23, 2025, 9:30-13:30 CEST

Instructor: Jakub Gałecki, ICM UW

PLEASE REGISTER HERE

    • 09:30-13:30
      Introduction to Parallel Programming Using MPI (4h)
      • Introduction: the need for message passing, the SPMD model, working with distributed memory
      • Basic concepts: communicator, rank, message
      • Point-to-point communication: the basic building block of MPI programs
      • Collective communication: expressing distributed parallel algorithms and common communication patterns
      • Non-blocking communication: how to overlap computation and communication
      • Hybrid parallelism: leveraging MPI to scale other parallel paradigms
      • Launching MPI programs on HPC machines
      • Hands-on exercise: distributed inner product
      • Hands-on exercise: halo exchange
      Speaker: Jakub Gałecki (ICM UW)