MPI - Distributed Memory Parallelism

Last updated on 2024-02-06

Overview

Questions

  • How do you use more than one shared memory node?

Objectives

  • Demonstrate how to submit a job on multiple nodes
  • Demonstrate that a program with distributed memory parallelism can be run on a shared memory node

Introduction

Distributed memory parallelism runs many cooperating processes (MPI ranks) that communicate by message passing, which is how a program can use more than one node. The pbdMPI package brings this model to R in SPMD (single program, multiple data) style: every rank executes the same script. This episode starts with a Hello World example, run here on a single shared memory node to show that a distributed memory program does not require multiple nodes.

Hello World!

The script hello_world.R, used by the batch scripts below, reports each rank's identity:

R

suppressMessages(library(pbdMPI))  # loading pbdMPI also initializes MPI

my_rank = comm.rank()  # this process's rank (numbered from 0)
nranks = comm.size()   # total number of ranks
msg = paste0("Hello World! My name is Rank ", my_rank,
             ". We are ", nranks, " identical siblings.")
cat(msg, "\n")

finalize()             # shut down MPI cleanly
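
Because pbdMPI runs in SPMD style, every rank executes this whole script. If pbdMPI and an MPI launcher are installed locally, the script can be tested interactively before submitting a batch job; the rank count here is just an example:

BASH

mpirun -np 4 Rscript hello_world.R

Each rank prints one line, such as "Hello World! My name is Rank 0. We are 4 identical siblings.", with the line order depending on how the ranks are scheduled.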

A Slurm batch script for this example (the module setup is specific to andes.olcf.ornl.gov):

BASH

#!/bin/bash
#SBATCH -J hello
#SBATCH -A CSC143
#SBATCH -p batch
#SBATCH --nodes=1
#SBATCH -t 00:40:00
#SBATCH --mem=0                # request all memory on the node
#SBATCH -e ./hello.e
#SBATCH -o ./hello.o
#SBATCH --open-mode=truncate   # overwrite output files on resubmission

cd ~/R4HPC/code_5
pwd

## modules are specific to andes.olcf.ornl.gov
module load openblas/0.3.17-omp
module load flexiblas
flexiblas add OpenBLAS $OLCF_OPENBLAS_ROOT/lib/libopenblas.so
export LD_PRELOAD=$OLCF_FLEXIBLAS_ROOT/lib64/libflexiblas.so
module load r
echo "loaded R with FlexiBLAS"
module list

## launch 32 MPI ranks: 32 per node on 1 node
mpirun --map-by ppr:32:node Rscript hello_world.R
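
Assuming the script is saved as hello_slurm.sh (any filename works), it is submitted with sbatch and monitored with squeue:

BASH

sbatch hello_slurm.sh
squeue -u $USER   # check the job's state in the queue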

An equivalent batch script for a PBS scheduler:

BASH

#!/bin/bash
#PBS -N hello
#PBS -l select=1:ncpus=32   # one chunk with 32 cores
#PBS -l walltime=00:05:00
#PBS -q qexp                # express queue
#PBS -e hello.e
#PBS -o hello.o

cd ~/R4HPC/code_5
pwd

module load R
echo "loaded R"

## launch 32 MPI ranks: 32 per node on 1 node
mpirun --map-by ppr:32:node Rscript hello_world.R
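
With PBS, submission uses qsub (again, the filename is an assumption):

BASH

qsub hello_pbs.sh
qstat -u $USER   # check the job's state in the queue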

Key Points

  • A distributed memory (MPI) program can run on a single shared memory node
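
To answer the opening question, scaling beyond one shared memory node requires no change to the R code, only to the resource request. A minimal sketch for the Slurm script above, assuming the same 32 ranks per node:

BASH

## request 2 nodes instead of 1; mpirun still maps 32 ranks per node,
## so the same command now launches 64 ranks
#SBATCH --nodes=2

mpirun --map-by ppr:32:node Rscript hello_world.R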