
About MPI and Distributed Programming

So far, you have been working with parallel programming, in which multiple threads execute instructions in parallel and concurrently, all accessing the same memory space within a single machine with multiple processors.

We can extend parallel programming by using multiple machines connected within a network, an approach known as distributed programming.

Unlike parallel programming, distributed programming has no concept of shared memory. This raises a question: how can one machine know what data another machine in the network holds? The solution is message passing between the machines (nodes) of the network. Message passing serves two purposes:

  • communication: a node (sender) sends data through a communication channel to another node (receiver)
  • synchronization: a message cannot be received until it has been sent

MPI (Message Passing Interface) is a standard for message passing, developed by the MPI Forum, based on a model of processes that communicate by exchanging messages.

A process is a running program and can be defined as the basic unit capable of executing one or more tasks within an operating system. Unlike a thread, a process has its own address space (its own memory area), and multiple threads can run within it, sharing the process's resources.
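
To make the two roles of message passing concrete, below is a minimal sketch in C of two processes exchanging a message: process 0 sends an integer to process 1, which blocks until the message arrives. The file name send_recv.c, the payload value 42, and the printed message are illustrative assumptions; the MPI calls themselves (MPI_Init, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Finalize) are part of the standard interface. How to compile and run such a program is covered next.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[]) {
      int rank;

      MPI_Init(&argc, &argv);               /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id (rank) */

      if (rank == 0) {
          int data = 42; /* illustrative payload */
          /* communication: rank 0 sends one int to rank 1, with tag 0 */
          MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          int data;
          /* synchronization: MPI_Recv blocks until the message has been sent */
          MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("Rank 1 received %d from rank 0\n", data);
      }

      MPI_Finalize(); /* shut down the MPI runtime */
      return 0;
  }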

In the context of working with C/C++, MPI is a library whose interface is declared in the mpi.h header. For compilation, MPI provides dedicated compiler wrappers:

  • mpicc, for working with C
  • mpic++, for working with C++

In both languages, an MPI program is run with the mpirun command, together with the -np parameter, which specifies the number of processes the distributed program runs with.
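
As a reference point, here is a minimal sketch of what the hello program from the example below might look like: each process prints its rank (its identifier within the group of processes) and the total number of processes. The exact message is an assumption, not necessarily the original hello.c.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[]) {
      int rank, size;

      MPI_Init(&argc, &argv);               /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
      MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

      printf("Hello from process %d of %d\n", rank, size);

      MPI_Finalize(); /* shut down the MPI runtime */
      return 0;
  }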

Example:

  • Compilation:
    • C: mpicc hello.c -o hello
    • C++: mpic++ hello.cpp -o hello
  • Execution: mpirun -np 4 ./hello - running with 4 processes
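
Assuming the hello.c sketch above, an execution with 4 processes might print the lines below. Their order can differ between runs, since the processes execute independently:

  Hello from process 2 of 4
  Hello from process 0 of 4
  Hello from process 3 of 4
  Hello from process 1 of 4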
caution

If you try to run the mpirun command with a number of processes greater than the number of physical cores available on your processor, you may receive an error indicating that you don't have enough available slots. You can avoid this error by adding the --oversubscribe parameter when running mpirun.
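
For example, on a machine with 4 physical cores, the hello binary from above could be run with 8 processes like this:

  mpirun --oversubscribe -np 8 ./hello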

caution

MPI Installation: To work with MPI on Debian/Ubuntu-based Linux distributions, you need to install the MPI library using the following command:

sudo apt install openmpi-bin openmpi-common openmpi-doc libopenmpi-dev
caution

If you are working with MPI on WSL, you may encounter a warning, which can be resolved using the hints provided here.