A quick introduction to MPI (Message Passing Interface)


  1. A quick introduction to MPI (Message Passing Interface). M1IF AlgoPar. Hadrien Croubois, Aurélien Cavelan. École Normale Supérieure de Lyon, France.

  2. Introduction

  3. What is MPI? A standardized and portable message-passing system. Started in the 1990s, still used today in research and industry. Minimalistic: a good theoretical model, and good performance on HPC networks (InfiniBand, ...).

  4. What is MPI? The de facto standard for communication in HPC applications.

  5. Programming model
  APIs: C and Fortran APIs. The C++ API has been deprecated (and removed in MPI-3).
  Environment: many implementations of the standard (mainly OpenMPI and MPICH), a compiler (wrappers around gcc), and a runtime (mpirun).

  6. Programming model
  Compiling:
    gcc      → mpicc
    g++      → mpic++ / mpicxx
    gfortran → mpifort
  Executing:
    mpirun -n <nb procs> <executable> <args>
    e.g. mpirun -n 10 ./a.out
  Note: mpiexec and orterun are synonyms of mpirun; see man mpirun for more details.

  7. MPI context
  Context limits: all MPI calls must be nested in the MPI context delimited by MPI_Init and MPI_Finalize.

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        // ...

        MPI_Finalize();

        return 0;
    }

  8. MPI context
  Context awareness:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from proc %d / %d\n", rank, size);

        MPI_Finalize();

        return 0;
    }

  9. Synchronization
  Code:

    printf("[%d] step 1\n", rank);
    MPI_Barrier(MPI_COMM_WORLD);
    printf("[%d] step 2\n", rank);

  Output:

    [0] step 1
    [1] step 1
    [2] step 1
    [3] step 1
    [3] step 2
    [0] step 2
    [2] step 2
    [1] step 2

  10. Point-to-point communication

  11. Send and Receive
  Sending data:

    int MPI_Send(const void *data,
                 int count,
                 MPI_Datatype datatype,
                 int destination,
                 int tag,
                 MPI_Comm communicator);

  Receiving data:

    int MPI_Recv(void *data,
                 int count,
                 MPI_Datatype datatype,
                 int source,
                 int tag,
                 MPI_Comm communicator,
                 MPI_Status *status);

  12. Example

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int number;
    switch (rank)
    {
        case 0:
            number = -1;
            MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            break;
        case 1:
            MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("received number: %d\n", number);
            break;
    }

  13. Asynchronous communications
  Sending data:

    int MPI_Isend(const void *data,
                  int count,
                  MPI_Datatype datatype,
                  int destination,
                  int tag,
                  MPI_Comm communicator,
                  MPI_Request *request);

  Receiving data:

    int MPI_Irecv(void *data,
                  int count,
                  MPI_Datatype datatype,
                  int source,
                  int tag,
                  MPI_Comm communicator,
                  MPI_Request *request);
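  The following sketch is not from the slides; it illustrates how these non-blocking calls are usually paired with a wait. It assumes two processes (ranks 0 and 1, with rank obtained as on slide 8) that exchange one int each.

    // Illustrative sketch: assumes exactly 2 processes; rank comes from
    // MPI_Comm_rank as shown earlier.
    int other = (rank == 0) ? 1 : 0;
    int sendval = rank, recvval = -1;
    MPI_Request reqs[2];

    MPI_Isend(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);

    // ... useful work can overlap with communication here ...

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("[%d] received %d\n", rank, recvval);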

  14. Other functions
  MPI_Probe, MPI_Iprobe
  MPI_Test, MPI_Testany, MPI_Testall
  MPI_Cancel
  MPI_Wtime, MPI_Wtick
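  As an illustration (not from the slides), MPI_Wtime and MPI_Wtick can be used to time a code section; the barriers are a common convention so that all processes measure comparable intervals.

    // Illustrative timing sketch; rank comes from MPI_Comm_rank.
    MPI_Barrier(MPI_COMM_WORLD);             // line up all processes
    double start = MPI_Wtime();

    // ... work to be timed ...

    MPI_Barrier(MPI_COMM_WORLD);
    double elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("elapsed: %f s (clock resolution %g s)\n", elapsed, MPI_Wtick());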

  15. Simple datatypes
  MPI_SHORT              short int
  MPI_INT                int
  MPI_LONG               long int
  MPI_LONG_LONG          long long int
  MPI_UNSIGNED_CHAR      unsigned char
  MPI_UNSIGNED_SHORT     unsigned short int
  MPI_UNSIGNED           unsigned int
  MPI_UNSIGNED_LONG      unsigned long int
  MPI_UNSIGNED_LONG_LONG unsigned long long int
  MPI_FLOAT              float
  MPI_DOUBLE             double
  MPI_LONG_DOUBLE        long double
  MPI_BYTE               char

  16. Complex datatypes
  Composed structures: structs, arrays. Possibilities are almost limitless... but sometimes difficult to set up.
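  A minimal sketch (not from the slides) of one way to build such a derived datatype, here for a hypothetical struct, using MPI_Type_create_struct; the struct and its field layout are assumptions for illustration.

    #include <stddef.h>   // offsetof

    // Hypothetical struct to be sent as a single MPI message.
    typedef struct { int id; double value; } item_t;

    MPI_Datatype item_type;
    int          blocklens[2] = { 1, 1 };
    MPI_Aint     displs[2]    = { offsetof(item_t, id), offsetof(item_t, value) };
    MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };

    MPI_Type_create_struct(2, blocklens, displs, types, &item_type);
    MPI_Type_commit(&item_type);

    // item_type can now be used like any predefined datatype, e.g. in MPI_Send.
    MPI_Type_free(&item_type);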

  17. Collective communications

  18. One to All Broadcast

  19. One to All Broadcast

    int MPI_Bcast(void *data,
                  int count,
                  MPI_Datatype datatype,
                  int root,
                  MPI_Comm communicator);
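  A short usage sketch (not from the slides): rank 0 fills a buffer and every rank ends up with the same copy; the buffer contents are an arbitrary illustration.

    // Illustrative broadcast of 4 ints from rank 0 to all ranks.
    int buf[4];
    if (rank == 0) {
        for (int i = 0; i < 4; i++) buf[i] = i * i;
    }
    MPI_Bcast(buf, 4, MPI_INT, 0, MPI_COMM_WORLD);
    // After the call, buf holds {0, 1, 4, 9} on every rank.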

  20. One to All Scatter

  21. One to All Scatter

    int MPI_Scatter(const void *sendbuf,
                    int sendcount,
                    MPI_Datatype sendtype,
                    void *recvbuf,
                    int recvcount,
                    MPI_Datatype recvtype,
                    int root,
                    MPI_Comm communicator);
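  An illustrative sketch (not from the slides): rank 0 hands one int to each rank; note that sendcount is the count sent to each receiver, not the total.

    // Illustrative scatter: rank 0 sends one int to each of the size ranks.
    int *sendbuf = NULL;
    if (rank == 0) {
        sendbuf = malloc(size * sizeof(int));   // needs <stdlib.h>
        for (int i = 0; i < size; i++) sendbuf[i] = 100 + i;
    }
    int mine;
    MPI_Scatter(sendbuf, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("[%d] got %d\n", rank, mine);   // rank i prints 100 + i
    free(sendbuf);                          // no-op on non-root ranks (NULL)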

  22. All to One Reduce

  23. All to One Reduce

    int MPI_Reduce(const void *sendbuf,
                   void *recvbuf,
                   int count,
                   MPI_Datatype datatype,
                   MPI_Op operator,
                   int root,
                   MPI_Comm communicator);
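  A usage sketch (not from the slides): every rank contributes one value and rank 0 receives their sum via the built-in MPI_SUM operator.

    // Illustrative reduction: sum of all ranks gathered on rank 0.
    int local = rank, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks = %d\n", total);   // size * (size - 1) / 2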

  24. All to One Gather

  25. All to One Gather

    int MPI_Gather(const void *sendbuf,
                   int sendcount,
                   MPI_Datatype sendtype,
                   void *recvbuf,
                   int recvcount,
                   MPI_Datatype recvtype,
                   int root,
                   MPI_Comm communicator);
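  An illustrative sketch (not from the slides): each rank sends its rank and rank 0 collects them in rank order; recvcount is the count received from each sender.

    // Illustrative gather: rank 0 ends up with {0, 1, ..., size-1}.
    int mine = rank;
    int *all = NULL;
    if (rank == 0)
        all = malloc(size * sizeof(int));   // needs <stdlib.h>
    MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        for (int i = 0; i < size; i++) printf("%d ", all[i]);
        printf("\n");
        free(all);
    }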

  26. All to All Allreduce

  27. All to All Allreduce

    int MPI_Allreduce(const void *sendbuf,
                      void *recvbuf,
                      int count,
                      MPI_Datatype datatype,
                      MPI_Op operator,
                      MPI_Comm communicator);
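  A short sketch (not from the slides): like the MPI_Reduce example above, but every rank receives the result, here the global maximum.

    // Illustrative allreduce: every rank gets the largest rank number.
    int local = rank, maxrank = 0;
    MPI_Allreduce(&local, &maxrank, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD);
    printf("[%d] max rank is %d\n", rank, maxrank);   // size - 1 everywhere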

  28. All to All Allgather

  29. All to All Allgather

    int MPI_Allgather(const void *sendbuf,
                      int sendcount,
                      MPI_Datatype sendtype,
                      void *recvbuf,
                      int recvcount,
                      MPI_Datatype recvtype,
                      MPI_Comm communicator);
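  An illustrative sketch (not from the slides): same as the MPI_Gather example, except that every rank receives the full array.

    // Illustrative allgather: every rank ends up with {0, 1, ..., size-1}.
    int mine = rank;
    int *all = malloc(size * sizeof(int));   // needs <stdlib.h>
    MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);
    printf("[%d] first = %d, last = %d\n", rank, all[0], all[size - 1]);
    free(all);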

  30. All to All Alltoall

  31. All to All Alltoall

    int MPI_Alltoall(const void *sendbuf,
                     int sendcount,
                     MPI_Datatype sendtype,
                     void *recvbuf,
                     int recvcount,
                     MPI_Datatype recvtype,
                     MPI_Comm communicator);
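  A usage sketch (not from the slides): every rank sends one distinct int to every rank (a personalized exchange); sendcount and recvcount are per partner, and the values chosen are an arbitrary illustration.

    // Illustrative all-to-all: rank r sends r*100 + i to rank i,
    // so rank r receives i*100 + r from each rank i.
    int *sendbuf = malloc(size * sizeof(int));   // needs <stdlib.h>
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) sendbuf[i] = rank * 100 + i;
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);
    printf("[%d] from rank 0: %d\n", rank, recvbuf[0]);   // equals rank
    free(sendbuf);
    free(recvbuf);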

  32. Custom communicators

  33. MPI_COMM_WORLD can be split into smaller, more appropriate communicators.

    int MPI_Comm_split(MPI_Comm communicator,
                       int color,
                       int key,
                       MPI_Comm *newcommunicator);

  34. Example

    ...
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int hrank, vrank;
    int hsize, vsize;
    MPI_Comm hcomm, vcomm;
    // p: width of the process grid, defined elsewhere (not shown in this excerpt)
    MPI_Comm_split(MPI_COMM_WORLD, rank % p, rank, &vcomm);
    MPI_Comm_split(MPI_COMM_WORLD, rank / p, rank, &hcomm);
    MPI_Comm_rank(hcomm, &hrank);
    MPI_Comm_size(hcomm, &hsize);
    MPI_Comm_rank(vcomm, &vrank);
    MPI_Comm_size(vcomm, &vsize);
    ...
