/****************************************************************************
*
* mpi-send-col.c - MPI Datatypes
*
* Copyright (C) 2017--2022 by Moreno Marzolla
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
****************************************************************************/
/***
% HPC - MPI Datatypes
% Moreno Marzolla
% Last updated: 2022-11-08
When you need to decompose a rectangular domain of size $R \times C$
among $P$ MPI processes, you usually employ a block decomposition by
rows (Block, \*): the first process handles the first $R / P$ rows;
the second handles the next $R / P$ rows, and so on. Indeed, in the C
language matrices are stored in row-major order, so in a (Block, \*)
decomposition each partition is a contiguous sub-array and data
transfers are greatly simplified.
In this exercise, on the other hand, we consider a column-wise
decomposition in order to use MPI derived datatypes.
Write a program where processes 0 and 1 each keep a local matrix of
size $\textit{SIZE} \times (\textit{SIZE}+2)$ that includes two
columns of _ghost cells_ (also called _halo_). In
[mpi-send-col.c](mpi-send-col.c) we set _SIZE_=4, but the program must
work with any value.
Processes 0 and 1 initialize their local matrices as follows.
![](mpi-send-col1.svg)
Process 0 must send the _rightmost_ column to process 1, where it is
inserted into the _leftmost_ ghost area:
![](mpi-send-col2.svg)
Similarly, process 1 must send the _leftmost_ column to process 0,
where it is placed into the _rightmost_ ghost area.
![](mpi-send-col3.svg)
You should define a suitable datatype to represent a matrix column,
and use two `MPI_Sendrecv()` operations to exchange the data.
To compile:
mpicc -std=c99 -Wall -Wpedantic mpi-send-col.c -o mpi-send-col
To execute:
mpirun -n 2 ./mpi-send-col
## Files
- [mpi-send-col.c](mpi-send-col.c)
***/
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
/* Matrix size */
#define SIZE 4
/* Initialize matrix m with the values k, k+1, k+2, ..., from left to
   right, top to bottom. m must point to an already allocated block of
   size*(size+2) integers. The first and last columns of m are the
   halo, which is set to -1. */
void init_matrix( int *m, int size, int k )
{
    int i, j;
    for (i=0; i<size; i++) {
        for (j=0; j<size+2; j++) {
            /* the first and last columns are the halo */
            m[i*(size+2)+j] = (j==0 || j==size+1) ? -1 : k++;
        }
    }
}