

SLIDE 1


A quick introduction to MPI (Message Passing Interface)

Julien Braine, Laureline Pinault

École Normale Supérieure de Lyon, France

M1IF - APPD 2019-2020

SLIDE 2

Introduction

SLIDE 3

Standardized and portable message-passing system. Started in the 1990s, still used today in research and industry. Good theoretical model. Good performance on HPC networks (InfiniBand, ...).

SLIDE 4

De facto standard for communications in HPC applications.

SLIDE 5

APIs:

C and Fortran APIs. The C++ API was deprecated in MPI-2.2 (2009) and removed in MPI-3 (2012).

Environment:

- Many implementations of the standard (mainly OpenMPI and MPICH)
- Compiler (wrappers around gcc)
- Runtime (mpirun)

SLIDE 6

Basics

SLIDE 7

Compiling:

mpicc -std=c99 <file.c> -o <executable>

Executing:

mpirun -n <num procs> <executable> <args>

Exercise: Write a hello world program. Compile it and execute it with MPI with 8 processes. What do you get? (A sketch follows.)
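A minimal sketch of such a program (the file name hello.c is an assumption):

#include <stdio.h>

int main(void)
{
    /* No MPI calls are needed yet: mpirun simply launches N copies. */
    printf("Hello world\n");
    return 0;
}

Compiled with mpicc -std=c99 hello.c -o hello and executed with mpirun -n 8 ./hello, this prints "Hello world" eight times, once per process.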

SLIDE 9

Program structure

#include <mpi.h>

int main(int argc, char *argv[])
{
    // Serial code

    MPI_Init(&argc, &argv);

    // Parallel code

    MPI_Finalize();

    // Serial code
}

SLIDE 10

Rank and number of processes

Getting the number of processes:

int MPI_Comm_size(MPI_Comm comm, int *size);

Getting the rank of a process:

int MPI_Comm_rank(MPI_Comm comm, int *rank);


For now: comm will always be MPI_COMM_WORLD

SLIDE 12

Hello World

Recap of basic MPI

#include <mpi.h>

int MPI_Init(int *argc, char ***argv);
int MPI_Finalize(void);
int MPI_Comm_size(MPI_Comm comm, int *size);
int MPI_Comm_rank(MPI_Comm comm, int *rank);
MPI_Comm MPI_COMM_WORLD;

Exercise: Write a program such that each process prints:

Hello from process <rank>/<number>

Test it. In what order do they print?
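A minimal sketch for this exercise:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from process %d/%d\n", rank, size);
    MPI_Finalize();
    return 0;
}

The print order is nondeterministic: the processes run concurrently and nothing synchronizes their output.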

SLIDE 13

Point-to-point communication

SLIDE 14

Introduction

Communication between two identified processes: a sender and a receiver.

- a process performs a sending operation
- the other process performs a matching receive operation

There are different types of send and receive routines, used for different purposes:

- Synchronous send
- Blocking send / blocking receive
- Non-blocking send / non-blocking receive
- Combined send/receive

SLIDE 16

Sending data (blocking asynchronous send)

int MPI_Send(const void *data, int count, MPI_Datatype datatype, int destination, int tag, MPI_Comm communicator);

- data: address of the data to send, in the sending process's address space
- count: number of data elements of a particular type to be sent
- datatype: type of the data, such as MPI_CHAR, MPI_INT, ...
- destination: the rank of the receiving process
- tag: identifies a message (for us: use 0 most of the time)
- communicator: use MPI_COMM_WORLD

SLIDE 17

Examples

int MPI_Send(const void *data, int count, MPI_Datatype datatype, int destination, int tag, MPI_Comm communicator);

Examples: Send your rank to process 0:

MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);

Send a float array A of size n to process 1:

MPI_Send(A, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);

SLIDE 18

Receiving data (blocking asynchronous receive)

int MPI_Recv(void *data, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm communicator, MPI_Status *status);

- data, count, datatype, communicator: as for MPI_Send
- source: rank of the originating process (or MPI_ANY_SOURCE)
- tag: identifier of the message you are waiting for (or MPI_ANY_TAG)
- status: a predefined MPI_Status structure that contains some information about the received message

SLIDE 19

Example

int MPI_Recv(void *data, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm communicator, MPI_Status *status);

Example: Receive an integer array of size n from process number 0 and store it into a buffer:

MPI_Recv(buffer, n, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

SLIDE 20

Exchange data

Recap of basic MPI

int MPI_Recv(void *data, int n, MPI_Datatype t, int src, int tag, MPI_Comm comm, MPI_Status *s);
int MPI_Send(const void *data, int n, MPI_Datatype t, int dest, int tag, MPI_Comm comm);
MPI_Datatype MPI_INT;
MPI_Status *MPI_STATUS_IGNORE;
int MPI_ANY_SOURCE;
int MPI_ANY_TAG;

Exercise

- Let each process generate a random number and print "<process rank> : <random value>".
- Have each process receive from the previous process the sum of the random values of all previous processes (i.e. process 0 sends its random value to process 1, process 1 sends the sum of the value received from process 0 and its own random value, ...).
- The last process prints the total sum.

Remark: there is a simpler, more efficient way to do this in MPI. One possible point-to-point solution is sketched below.
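A sketch of the chain (the per-rank seeding and the modulus 100 are arbitrary illustration choices):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    srand(rank + 1);                  /* different seed on each process */
    int value = rand() % 100;
    printf("%d : %d\n", rank, value);

    int sum = value;
    if (rank > 0) {                   /* add the partial sum from the previous process */
        int partial;
        MPI_Recv(&partial, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        sum += partial;
    }
    if (rank < size - 1)              /* forward the running sum to the next process */
        MPI_Send(&sum, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);
    else
        printf("total sum: %d\n", sum);

    MPI_Finalize();
    return 0;
}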

SLIDE 21

Other functions

- MPI_Ssend: synchronous blocking send
- MPI_Isend: asynchronous non-blocking send
- MPI_Irecv: asynchronous non-blocking receive
- MPI_Sendrecv: simultaneous send and receive
- MPI_Wait: blocks until a specified non-blocking send or receive operation has completed
- MPI_Probe: performs a blocking test for a message
- MPI_Get_count: returns the number of elements of a given datatype that were received (the source and tag are read from the MPI_Status)
- ...
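To illustrate the non-blocking pair, a minimal sketch using MPI_Isend and MPI_Wait (needs at least 2 processes; the value 42 is arbitrary):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;
        MPI_Request req;
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);  /* returns immediately */
        /* ... computation can overlap the transfer here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* 'value' may only be reused after this */
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}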

SLIDE 22

Collective communications

SLIDE 23

Introduction

Types of collective operations:

- Synchronization
- Data movement (e.g. broadcast, scatter, gather)
- Collective computation (e.g. reduce)

ALL processes within a communicator must participate in any collective operation (programmer's responsibility).

SLIDE 24

Synchronization

int MPI_Barrier(MPI_Comm communicator);

Blocks until all processes have reached this routine.

Exercise: Write a program that produces the following output when executed with 4 processes (i.e. each process prints "step 1" and "step 2", but every "step 2" comes after the "step 1" of all processes):

[0] step 1
[3] step 1
[2] step 1
[1] step 1
[3] step 2
[0] step 2
[2] step 2
[1] step 2
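A minimal sketch for this exercise:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("[%d] step 1\n", rank);
    fflush(stdout);                 /* make sure step 1 is emitted before the barrier */
    MPI_Barrier(MPI_COMM_WORLD);    /* nobody proceeds until everyone got here */
    printf("[%d] step 2\n", rank);

    MPI_Finalize();
    return 0;
}

The barrier orders the printf calls themselves; the launcher's output forwarding can in principle still interleave lines, but in practice this produces the expected output.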

SLIDE 26

Collective operations

SLIDE 27

Broadcast

int MPI_Bcast(void *data, int count, MPI_Datatype datatype, int root, MPI_Comm communicator);

Broadcasts a message from the process with rank root to all other processes of the group.
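For instance, a minimal sketch broadcasting one integer from process 0 (the value 42 is arbitrary):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, data = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        data = 42;                  /* only the root holds the value so far */
    MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("[%d] data = %d\n", rank, data);  /* every process now prints 42 */

    MPI_Finalize();
    return 0;
}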

SLIDE 28

Scatter

int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm communicator);

Distributes distinct messages from a single source task to each task in the group.

SLIDE 29

Gather

int MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm communicator);

Gathers distinct messages from each task in the group to a single destination task.
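A minimal sketch combining scatter and gather, one element per process (the cap of 64 processes and the local "+1" are arbitrary illustration choices):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int send[64], recv[64], piece;  /* assumes at most 64 processes */
    if (rank == 0)
        for (int i = 0; i < size; i++)
            send[i] = i * i;        /* data to distribute */

    /* each process receives one element of the root's array */
    MPI_Scatter(send, 1, MPI_INT, &piece, 1, MPI_INT, 0, MPI_COMM_WORLD);
    piece += 1;                     /* local work on the received piece */
    /* the root collects one element from each process, in rank order */
    MPI_Gather(&piece, 1, MPI_INT, recv, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++)
            printf("recv[%d] = %d\n", i, recv[i]);

    MPI_Finalize();
    return 0;
}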

SLIDE 30

Reduce

int MPI_Reduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op operator, int root, MPI_Comm communicator);

Applies a reduction operation on all tasks in the group and places the result in one task.

Operators: MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD, MPI_LAND, MPI_BAND, ...
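This is the "simpler, more efficient way" alluded to in the point-to-point exercise: the whole chain of sends collapses into a single call. A minimal sketch delivering the sum to the last process:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    srand(rank + 1);
    int value = rand() % 100;
    int sum;
    /* sum all 'value's and place the result on the last process */
    MPI_Reduce(&value, &sum, 1, MPI_INT, MPI_SUM, size - 1, MPI_COMM_WORLD);
    if (rank == size - 1)
        printf("total sum: %d\n", sum);

    MPI_Finalize();
    return 0;
}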

SLIDE 31

Other functions

- MPI_Allgather: each task in the group, in effect, performs a one-to-all broadcast within the group
- MPI_Allreduce: applies a reduction operation and places the result in all tasks in the group
- MPI_Reduce_scatter: same, but the result is split among the tasks
- MPI_Alltoall: each task in the group performs a scatter operation, sending a distinct message to all the tasks in the group
- ...

SLIDE 32

To go further: https://computing.llnl.gov/tutorials/mpi/
