MPI & MPICH
Presenter: Naznin Fauzia CSE 788.08 Winter 2012
Outline: MPI-1 standards; MPICH-1; MPI-2; MPICH-2; MPI-3

Overview
MPI (Message Passing Interface): a specification for a standard library for message passing
- Designed to run on distributed-memory machines, shared-memory machines, and workstation clusters.
- MPI is a specification, not a particular implementation (vendors supply the implementation library).
- Allows overlap of computation and communication and offload to communication co-processor, where available.
- Assumes reliable message delivery: the user need not cope with communication failures. Such failures are dealt with by the underlying communication subsystem.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* Initialize MPI */
    MPI_Init(&argc, &argv);

    /* Find out my identity in the default communicator */
    int my_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int number;
    if (my_rank == 0) {
        number = -1;
        /* Requires at least two processes: rank 0 sends to rank 1 */
        MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (my_rank == 1) {
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received number %d from process 0\n", number);
    }

    /* Shut down MPI */
    MPI_Finalize();
    return 0;
}
Collective communication operations (e.g., broadcast, scatter, gather, reduce)
Describes how process groups are formed and communication contexts are obtained, and how the two are bound together into a communicator (e.g., MPI_COMM_WORLD)
Process topologies: map process groups (a linearly ordered set) to richer topological structures such as multi-dimensional grids.
- Language bindings: C and Fortran bindings for all functions, constants, and types.
- Environmental management: inquiries of the current MPI environment.
- Profiling interface: tools can intercept MPI calls without the need for access to the MPI source code.
Developed at Argonne National Laboratory and Mississippi State University
Massively parallel processors: CM-5, nCUBE-2, Cray T3D
Shared-memory machines: the Convex Exemplar, the Sequent Symmetry
Served as the basis for proprietary implementations (by multiple vendors)
The ADI (Abstract Device Interface) is implemented as macros and functions and sits between the MPI layer and the lower-level message passing library or hardware. It provides four sets of functions:
- specifying a message to be sent or received
- moving data between the API and the message-passing h/w
- managing lists of pending messages (both sent and received)
- providing basic information about the execution environment (i.e., how many tasks are there)
same structure
Communicators can be intra (communication within one group) or inter (communication between two disjoint groups)
Vendor implementations derived from MPICH (e.g., Intel and Convex)
mpirun provides a portable mechanism for starting jobs:
mpirun -np 12 myprog
mpirun -np 64 myprog -myarg 13 < data.in > results.out
mpirun -np 64 -stdin data.in myprog -myarg 13 > results.out
Compilation uses the mpicc wrapper script:
mpicc -c myprog.c
MPE provides parallel graphics routines for processes with access to a shared X display
rank order
standard
customers
datasets (~MB)
datasets (~TB)
All processes write their portions of a common file.
[Figure: processes P0, P1, P2, ..., P(n-1) accessing a single shared FILE]
A process exposes a window of its memory, making it available for RMA by other processes
Window creation (MPI_Win_create) is collective over a communicator
Two-sided: MPI_Send / MPI_Recv; one-sided (RMA): MPI_Get / MPI_Put
Dynamic processes: spawn new processes from a running parallel application, detect when they die, establish communication between two processes
routine
any MPI routine
“global rank” (i.e., rank in MPI_COMM_WORLD)
possible virtual connections to processes
allocation or collective window creation) on memory
storage) must be permitted, even if the behavior is undefined
should be flexible
needed for efficient algorithms
Standard - W. Gropp et al
bonn.de/teaching/seminare/technum/pdfs/iseringhausen_mpi2.pdf