HPC - MPI Datatypes

Moreno Marzolla

Last updated: 2022-11-08

When you need to decompose a rectangular domain of size \(R \times C\) among \(P\) MPI processes, you usually employ a block decomposition by rows (Block, *): the first process handles the first \(R / P\) rows, the second handles the next \(R / P\) rows, and so on. Since the C language stores matrices in row-major order, a (Block, *) decomposition makes each partition a contiguous sub-array, which greatly simplifies data transfers.
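
As a quick illustration (not part of the exercise code, and with illustrative variable names), the following sketch computes the contiguous range of rows owned by each rank under a (Block, *) decomposition; the formula keeps the blocks balanced even when \(R\) is not a multiple of \(P\):

    /* Sketch: row ranges of a (Block, *) decomposition; plain C, no MPI. */
    #include <stdio.h>

    int main( void )
    {
        const int R = 10;   /* number of rows      */
        const int P = 4;    /* number of processes */
        for (int rank = 0; rank < P; rank++) {
            const int row_start = R * rank / P;        /* first row (inclusive) */
            const int row_end   = R * (rank + 1) / P;  /* last row (exclusive)  */
            printf("Process %d handles rows [%d, %d)\n", rank, row_start, row_end);
        }
        return 0;
    }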

In this exercise, on the other hand, we consider a column-wise decomposition, so that MPI derived datatypes are needed to exchange the (non-contiguous) columns.

Write a program in which processes 0 and 1 each keep a local matrix of size \(\textit{SIZE} \times (\textit{SIZE}+2)\) that includes two columns of ghost cells (also called a halo). In mpi-send-col.c we set SIZE=4, but the program must work with any value of SIZE.
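
For reference, a minimal layout sketch (the element type and the name of the matrix are assumptions, not necessarily those used in mpi-send-col.c):

    #define SIZE 4
    /* SIZE rows, SIZE+2 columns, stored in row-major order:
       column 0        -> left ghost column
       columns 1..SIZE -> local data
       column SIZE+1   -> right ghost column */
    int m[SIZE][SIZE + 2];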

Processes 0 and 1 initialize their local matrices as follows.

Process 0 must send the rightmost column to process 1, where it is inserted into the leftmost ghost area:

Similarly, process 1 must send the leftmost column to process 0, where it is placed into the rightmost ghost area.

You should define a suitable datatype to represent a matrix column, and use two MPI_Sendrecv() operations to exchange the data.
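
A possible sketch of the whole program is shown below. It assumes an int matrix and a purely illustrative initialization (the actual mpi-send-col.c skeleton may differ); the key points are the MPI_Type_vector() column datatype, whose stride is the row length SIZE+2, and the single MPI_Sendrecv() executed by each of the two processes.

    /* Sketch of a possible solution; the matrix type and the
       initialization are assumptions, not taken from mpi-send-col.c. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    #define SIZE 4

    int main( int argc, char *argv[] )
    {
        int m[SIZE][SIZE + 2];  /* columns 0 and SIZE+1 are ghost columns */
        int my_rank, comm_sz;
        MPI_Datatype column_t;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);

        if (comm_sz != 2) {
            if (my_rank == 0) fprintf(stderr, "This program requires exactly 2 processes\n");
            MPI_Finalize();
            return EXIT_FAILURE;
        }

        /* Illustrative initialization: data columns get the owner's rank,
           ghost columns get -1 */
        for (int i = 0; i < SIZE; i++) {
            for (int j = 0; j < SIZE + 2; j++) {
                m[i][j] = (j == 0 || j == SIZE + 1) ? -1 : my_rank;
            }
        }

        /* A column is SIZE blocks of 1 int each; consecutive blocks are
           SIZE+2 ints apart (one full row), because the matrix is stored
           in row-major order. */
        MPI_Type_vector(SIZE, 1, SIZE + 2, MPI_INT, &column_t);
        MPI_Type_commit(&column_t);

        if (my_rank == 0) {
            /* Send the rightmost data column (index SIZE) to process 1;
               receive its leftmost data column into the rightmost ghost
               column (index SIZE+1). */
            MPI_Sendrecv(&m[0][SIZE],     1, column_t, 1, 0,
                         &m[0][SIZE + 1], 1, column_t, 1, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            /* Send the leftmost data column (index 1) to process 0;
               receive its rightmost data column into the leftmost ghost
               column (index 0). */
            MPI_Sendrecv(&m[0][1], 1, column_t, 0, 0,
                         &m[0][0], 1, column_t, 0, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        /* Quick check: print the ghost column that was just filled */
        const int jghost = (my_rank == 0 ? SIZE + 1 : 0);
        printf("Process %d ghost column:", my_rank);
        for (int i = 0; i < SIZE; i++) {
            printf(" %d", m[i][jghost]);
        }
        printf("\n");

        MPI_Type_free(&column_t);
        MPI_Finalize();
        return EXIT_SUCCESS;
    }

With this illustrative initialization, process 0 should print a ghost column of 1s and process 1 a ghost column of 0s.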

To compile:

    mpicc -std=c99 -Wall -Wpedantic mpi-send-col.c -o mpi-send-col

To execute:

    mpirun -n 2 ./mpi-send-col

Files