Other messaging functions

MPI_Ssend and MPI_Issend

The MPI_Ssend function is the synchronous variant of MPI_Send. MPI_Send may return before the destination process has confirmed that it received all the data sent by the source process.

MPI_Ssend, on the other hand, completes only after the destination process has started receiving the message (that is, after the matching receive has been posted), so MPI_Ssend can be considered a safer variant of MPI_Send.
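For reference, MPI_Ssend has exactly the same prototype as MPI_Send; only the completion semantics differ:

int MPI_Ssend(const void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm);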

Note that most implementations make MPI_Send behave like MPI_Ssend for large messages, because the data no longer fits in the library's internal buffers and the send falls back to a rendezvous protocol.

A disadvantage of MPI_Ssend compared to MPI_Send is that it is more prone to deadlock.

An example of a deadlock is the following situation:

if (rank == 0) {
    // both processes send first and only then receive
    MPI_Send(&num1, SIZE, MPI_INT, 1, 0, MPI_COMM_WORLD);
    // MPI_Ssend(&num1, SIZE, MPI_INT, 1, 0, MPI_COMM_WORLD);
    MPI_Recv(&num2, SIZE, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
} else {
    MPI_Send(&num2, SIZE, MPI_INT, 0, 0, MPI_COMM_WORLD);
    // MPI_Ssend(&num2, SIZE, MPI_INT, 0, 0, MPI_COMM_WORLD);
    MPI_Recv(&num1, SIZE, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
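One way to break the cycle (a minimal sketch, reusing the variables above) is to reverse the order of operations on one of the ranks, so the sends and receives pair up without a circular wait:

if (rank == 0) {
    MPI_Ssend(&num1, SIZE, MPI_INT, 1, 0, MPI_COMM_WORLD);
    MPI_Recv(&num2, SIZE, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
} else {
    // receive first, then send back
    MPI_Recv(&num1, SIZE, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Ssend(&num2, SIZE, MPI_INT, 0, 0, MPI_COMM_WORLD);
}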

MPI_Issend is the non-blocking variant of the MPI_Ssend function.
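A minimal MPI_Issend sketch (reusing the buffers from the fragment above): the call returns immediately, and the request must later be completed with MPI_Wait or MPI_Test, at which point the matching receive is guaranteed to have been posted.

MPI_Request request;
MPI_Issend(&num1, SIZE, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
// other work that does not touch num1
MPI_Wait(&request, MPI_STATUS_IGNORE); // completes only after the receive has started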

MPI_Bsend and MPI_Ibsend

The MPI_Bsend function is the buffered variant of MPI_Send: the user attaches a buffer (with the MPI_Buffer_attach function) through which outgoing messages are staged. Only messages sent with MPI_Bsend (or MPI_Ibsend) pass through this buffer.

This function is useful in situations where deadlock can occur and when sending large amounts of data, similarly to non-blocking communication. The difference from non-blocking communication is that when MPI_Bsend returns, the data is guaranteed to have been copied completely into the attached buffer (so the send buffer can be reused immediately), whereas MPI_Isend gives no such guarantee until its request completes.

MPI_BSEND_OVERHEAD is a constant that accounts for the bookkeeping overhead MPI_Bsend or MPI_Ibsend adds for each message placed in the attached buffer; it must be included when computing the buffer size.
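A common way to size the attached buffer (a sketch; the message parameters are illustrative) is to ask MPI for the packed size of the data and add MPI_BSEND_OVERHEAD for each message that may be buffered at the same time:

int packed_size;
MPI_Pack_size(size, MPI_INT, MPI_COMM_WORLD, &packed_size); // bytes needed for the data itself
int buffer_attached_size = packed_size + MPI_BSEND_OVERHEAD; // plus the per-message overhead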

MPI_Ibsend is the non-blocking variant of the MPI_Bsend function.
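A minimal MPI_Ibsend sketch (assuming a buffer has already been attached, as in the example below): the call returns immediately, and the request should still be completed with MPI_Wait or MPI_Test before the buffer is detached.

MPI_Request request;
MPI_Ibsend(arr, size, MPI_INT, 1, 1, MPI_COMM_WORLD, &request);
// other work
MPI_Wait(&request, MPI_STATUS_IGNORE); // completes once the message has been copied into the buffer
MPI_Buffer_detach(&buffer_attached, &buffer_attached_size);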

Example of use:

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char *argv[])
{
int numtasks, rank;
int size = 100;
int arr[size];

MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

if (rank == 0) {
for (int i = 0; i < size; i++) {
arr[i] = i;
}

printf("Process with rank [%d] has the following array:\n", rank);
for (int i = 0; i < size; i++) {
printf("%d ", arr[i]);
}
printf("\n");

// declare the buffer size
int buffer_attached_size = MPI_BSEND_OVERHEAD + size * sizeof(int);
// create the space used for the buffer
char* buffer_attached = malloc(buffer_attached_size);
// create the MPI buffer used for sending messages
MPI_Buffer_attach(buffer_attached, buffer_attached_size);

MPI_Bsend(&arr, size, MPI_INT, 1, 1, MPI_COMM_WORLD);
printf("Process with rank [%d] sent the array.\n", rank);

// the buffer used for sending messages is detached and destroyed
MPI_Buffer_detach(&buffer_attached, &buffer_attached_size);
free(buffer_attached);
} else {
MPI_Status status;
MPI_Recv(&arr, size, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
printf("Process with rank [%d], received array with tag %d.\n",
rank, status.MPI_TAG);

printf("Process with rank [%d] has the following array:\n", rank);
for (int i = 0; i < size; i++) {
printf("%d ", arr[i]);
}
printf("\n");
}

MPI_Finalize();
}

MPI_Rsend and MPI_Irsend

MPI_Rsend ("ready send") is a variant of MPI_Send that may be called only when the matching receive has already been posted by the destination process (MPI_Send can also be used in that situation, but MPI_Rsend lets the implementation skip the initial handshake).

An example use case is a process A posting a non-blocking receive with MPI_Irecv before a barrier, while another process B calls MPI_Rsend after the barrier to send data to A; the barrier guarantees that A's receive is already posted when B sends.

MPI_Irsend is the non-blocking variant of the MPI_Rsend function.
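A minimal MPI_Irsend sketch (same precondition: the matching receive must already be posted; the value variable and the ranks follow the example below):

MPI_Request request;
MPI_Irsend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
// other work
MPI_Wait(&request, MPI_STATUS_IGNORE);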

Example of use:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int size, rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Barrier(MPI_COMM_WORLD);

        value = 12345;
        printf("[P0] MPI process sends value %d.\n", value);
        MPI_Rsend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else {
        MPI_Request request;
        MPI_Status status;
        int flag;
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);

        // the barrier guarantees that the receive above is posted
        // before process 0 calls MPI_Rsend
        MPI_Barrier(MPI_COMM_WORLD);

        MPI_Test(&request, &flag, &status);

        if (flag) {
            printf("[P1] The receive operation is over\n");
        } else {
            printf("[P1] The receive operation is not over yet\n");
            MPI_Wait(&request, &status);
        }
        printf("[P1] MPI process received value %d.\n", value);
    }

    MPI_Finalize();
}

MPI_Sendrecv

The MPI_Sendrecv function combines a send and a receive, both blocking, in a single call: a message is sent to one process and another message is received from a process (which may be the same process the message was sent to, or a different one).
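For reference, the prototype (the send half and the receive half each have their own buffer, count, datatype, rank and tag):

int MPI_Sendrecv(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
                 int dest, int sendtag,
                 void *recvbuf, int recvcount, MPI_Datatype recvtype,
                 int source, int recvtag,
                 MPI_Comm comm, MPI_Status *status);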

MPI_Sendrecv is useful in situations where deadlock can occur, for example chained or ring (message-cycle) communication, where each process first sends and then receives; written with plain MPI_Send and MPI_Recv this creates a cyclic data dependency and can deadlock, whereas MPI_Sendrecv lets the library order the two operations safely.
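A minimal ring-exchange sketch of that pattern (assuming rank and the number of processes size were obtained with MPI_Comm_rank and MPI_Comm_size): every process sends to its right neighbour and receives from its left neighbour in a single call, so no cycle of blocking sends can form.

int right = (rank + 1) % size;
int left = (rank - 1 + size) % size;
int send_value = rank, recv_value;

MPI_Sendrecv(&send_value, 1, MPI_INT, right, 0,
             &recv_value, 1, MPI_INT, left, 0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);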

Example of use:

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    int numtasks, rank, dest, source, count, tag = 1;
    char inmsg, outmsg;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // this example assumes the program is run with exactly two processes
    if (rank == 0) {
        // process 0 sends to process 1 and then waits for the response
        dest = 1;
        source = 1;

        outmsg = '0';
        MPI_Sendrecv(&outmsg, 1, MPI_CHAR, dest, tag,
                     &inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
    } else if (rank == 1) {
        // process 1 waits for the message from process 0, then sends its response
        dest = 0;
        source = 0;

        outmsg = '1';
        MPI_Sendrecv(&outmsg, 1, MPI_CHAR, dest, tag,
                     &inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
    }

    // use the status variable to find out details about the data exchange
    MPI_Get_count(&status, MPI_CHAR, &count);
    printf("Process %d received %d char(s) from process %d with tag %d: %c\n",
           rank, count, status.MPI_SOURCE, status.MPI_TAG, inmsg);

    MPI_Finalize();
}