Message Passing Programming with MPI: What is MPI?



MPI Forum
❑ First message-passing interface standard.
❑ Sixty people from forty different organisations.
❑ Users and vendors represented, from the US and Europe.
❑ Two-year process of proposals, meetings and review.
❑ Message Passing Interface document produced.

Goals and Scope of MPI
❑ MPI's prime goals are:
  - To provide source-code portability.
  - To allow efficient implementation.
❑ It also offers:
  - A great deal of functionality.
  - Support for heterogeneous parallel architectures.

Header Files
❑ C:
      #include <mpi.h>
❑ Fortran:
      include 'mpif.h'

MPI Function Format
❑ C:
      error = MPI_Xxxxx(parameter, ...);
      MPI_Xxxxx(parameter, ...);
❑ Fortran:
      CALL MPI_XXXXX(parameter, ..., IERROR)

Handles
❑ MPI controls its own internal data structures.
❑ MPI releases 'handles' to allow programmers to refer to these.
❑ C handles are defined typedefs.
❑ Fortran handles are INTEGERs.

Initialising MPI
❑ C:
      int MPI_Init(int *argc, char ***argv)
❑ Fortran:
      MPI_INIT(IERROR)
      INTEGER IERROR
❑ Must be the first MPI procedure called.
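To make the C calling convention concrete: every C MPI function returns an integer error code, which can be compared against MPI_SUCCESS. A minimal sketch (the error-handling style and message are illustrative, not from these slides; by default MPI aborts on error anyway):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* Every C MPI call returns an int error code. */
        int error = MPI_Init(&argc, &argv);
        if (error != MPI_SUCCESS) {
            fprintf(stderr, "MPI_Init failed\n");
            return 1;
        }
        MPI_Finalize();    /* covered on a later slide */
        return 0;
    }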

Communicators
❑ [Diagram: the predefined communicator MPI_COMM_WORLD, containing processes with ranks 0 to 6.]

Rank
❑ How do you identify different processes in a communicator?
❑ C:
      MPI_Comm_rank(MPI_Comm comm, int *rank)
❑ Fortran:
      MPI_COMM_RANK(COMM, RANK, IERROR)
      INTEGER COMM, RANK, IERROR
❑ The rank is not the PE number.

Size
❑ How many processes are contained within a communicator?
❑ C:
      MPI_Comm_size(MPI_Comm comm, int *size)
❑ Fortran:
      MPI_COMM_SIZE(COMM, SIZE, IERROR)
      INTEGER COMM, SIZE, IERROR

Exiting MPI
❑ C:
      int MPI_Finalize()
❑ Fortran:
      MPI_FINALIZE(IERROR)
      INTEGER IERROR
❑ Must be the last MPI procedure called.
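Taken together, the four routines above form the skeleton of every MPI program. A minimal sketch in C (the printed message is our own choice), anticipating the exercise on the next slide:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* first MPI call  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* my rank         */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total processes */

        printf("hello world from rank %d of %d\n", rank, size);

        MPI_Finalize();                          /* last MPI call   */
        return 0;
    }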

Exercise: Hello World - the minimal MPI program
❑ Write a minimal MPI program which prints "hello world".
❑ Compile it.
❑ Run it on a single processor.
❑ Run it on several processors in parallel.
❑ Modify your program so that only the process ranked 0 in MPI_COMM_WORLD prints out.
❑ Modify your program so that the number of processes is printed out.

Messages
❑ A message contains a number of elements of some particular datatype.
❑ MPI datatypes:
  - Basic types.
  - Derived types.
❑ Derived types can be built up from basic types (a sketch follows the table below).
❑ C types are different from Fortran types.

MPI Basic Datatypes - C

    MPI Datatype         C datatype
    MPI_CHAR             signed char
    MPI_SHORT            signed short int
    MPI_INT              signed int
    MPI_LONG             signed long int
    MPI_UNSIGNED_CHAR    unsigned char
    MPI_UNSIGNED_SHORT   unsigned short int
    MPI_UNSIGNED         unsigned int
    MPI_UNSIGNED_LONG    unsigned long int
    MPI_FLOAT            float
    MPI_DOUBLE           double
    MPI_LONG_DOUBLE      long double
    MPI_BYTE             (no C equivalent)
    MPI_PACKED           (no C equivalent)
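The slides state that derived types can be built up from basic types but do not show how. As a hypothetical illustration, assuming the standard MPI_Type_contiguous and MPI_Type_commit routines (these exist in MPI but are not covered in this deck):

    /* Build a derived type describing 10 contiguous MPI_INT elements. */
    MPI_Datatype block;                       /* handle for the new type */
    MPI_Type_contiguous(10, MPI_INT, &block);
    MPI_Type_commit(&block);                  /* commit before using it  */
    /* ... 'block' may now be passed as the datatype of a send/receive ... */
    MPI_Type_free(&block);                    /* release the handle      */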

MPI Basic Datatypes - Fortran

    MPI Datatype           Fortran datatype
    MPI_INTEGER            INTEGER
    MPI_REAL               REAL
    MPI_DOUBLE_PRECISION   DOUBLE PRECISION
    MPI_COMPLEX            COMPLEX
    MPI_LOGICAL            LOGICAL
    MPI_CHARACTER          CHARACTER(1)
    MPI_BYTE               (no Fortran equivalent)
    MPI_PACKED             (no Fortran equivalent)

Point-to-Point Communication
❑ [Diagram: a communicator containing ranks 0 to 5, with a message passing from a source process to a destination process.]
❑ Communication between two processes.
❑ Source process sends message to destination process.
❑ Communication takes place within a communicator.
❑ Destination process is identified by its rank in the communicator.

Communication Modes

    Sender mode        Notes
    Synchronous send   Only completes when the receive has completed.
    Buffered send      Always completes (unless an error occurs), irrespective of receiver.
    Standard send      Either synchronous or buffered.
    Ready send         Always completes (unless an error occurs), irrespective of whether the receive has completed.
    Receive            Completes when a message has arrived.

MPI Sender Modes

    Operation          MPI call
    Standard send      MPI_SEND
    Synchronous send   MPI_SSEND
    Buffered send      MPI_BSEND
    Ready send         MPI_RSEND
    Receive            MPI_RECV

Sending a Message
❑ C:
      int MPI_Ssend(void *buf, int count,
                    MPI_Datatype datatype,
                    int dest, int tag,
                    MPI_Comm comm)
❑ Fortran:
      MPI_SSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
      <type> BUF(*)
      INTEGER COUNT, DATATYPE, DEST, TAG
      INTEGER COMM, IERROR

Receiving a Message
❑ C:
      int MPI_Recv(void *buf, int count,
                   MPI_Datatype datatype,
                   int source, int tag,
                   MPI_Comm comm, MPI_Status *status)
❑ Fortran:
      MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
      <type> BUF(*)
      INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM,
              STATUS(MPI_STATUS_SIZE), IERROR

Synchronous Blocking Message-Passing
❑ Processes synchronise.
❑ Sender process specifies the synchronous mode.
❑ Blocking: both processes wait until the transaction has completed.
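A minimal sketch of a synchronous send/receive pair using the two calls above (the value sent and the tag are our own choices; run with at least two processes):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value, tag = 17;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* Synchronous send: does not complete until the matching
               receive is under way on rank 1. */
            MPI_Ssend(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }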

For a Communication to Succeed
❑ Sender must specify a valid destination rank.
❑ Receiver must specify a valid source rank.
❑ The communicator must be the same.
❑ Tags must match.
❑ Message datatypes must match.
❑ Receiver's buffer must be large enough.

Wildcarding
❑ Receiver can wildcard.
❑ To receive from any source: MPI_ANY_SOURCE.
❑ To receive with any tag: MPI_ANY_TAG.
❑ Actual source and tag are returned in the receiver's status parameter.

Communication Envelope
❑ [Diagram: a message drawn as a letter. The envelope carries the sender's address and the destination address ("For the attention of: ..."); the contents are the data items (Item 1, Item 2, Item 3).]

Communication Envelope Information
❑ Envelope information is returned from MPI_RECV as status.
❑ Information includes:
  - Source: status.MPI_SOURCE or status(MPI_SOURCE)
  - Tag: status.MPI_TAG or status(MPI_TAG)
  - Count: MPI_Get_count or MPI_GET_COUNT
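A sketch of wildcard receiving in C, combining MPI_ANY_SOURCE, MPI_ANY_TAG and the status fields above (the gather-to-rank-0 pattern and the tag choice are our own):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, value, i;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank != 0) {
            /* Every other rank sends its rank number, tagged with it too. */
            MPI_Ssend(&rank, 1, MPI_INT, 0, rank, MPI_COMM_WORLD);
        } else {
            for (i = 1; i < size; i++) {
                /* Wildcard receive: any source, any tag. */
                MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &status);
                /* The envelope reveals who actually sent it, and how. */
                printf("got %d from rank %d (tag %d)\n",
                       value, status.MPI_SOURCE, status.MPI_TAG);
            }
        }

        MPI_Finalize();
        return 0;
    }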

Received Message Count
❑ C:
      int MPI_Get_count(MPI_Status *status,
                        MPI_Datatype datatype, int *count)
❑ Fortran:
      MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)
      INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR

Message Order Preservation
❑ [Diagram: processes 0 to 5 in a communicator exchanging several messages.]
❑ Messages do not overtake each other.
❑ This is true even for non-synchronous sends.

Exercise: Ping Pong
❑ Write a program in which two processes repeatedly pass a message back and forth.
❑ Insert timing calls to measure the time taken for one message.
❑ Investigate how the time taken varies with the size of the message.
❑ Modify your program to measure its execution time and print it out.

Timers
❑ C:
      double MPI_Wtime(void);
❑ Fortran:
      DOUBLE PRECISION MPI_WTIME()
❑ Time is measured in seconds.
❑ Time to perform a task is measured by consulting the timer before and after.
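One possible shape for the ping-pong timing loop, using MPI_Wtime as described above (the message size, repetition count and variable names are our own choices; run with two processes):

    #include <mpi.h>
    #include <stdio.h>

    #define NREPS 1000
    #define COUNT 1024            /* message size in ints; vary this */

    int main(int argc, char **argv)
    {
        int rank, i;
        int buf[COUNT] = {0};
        double t0, t1;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        t0 = MPI_Wtime();                 /* timer before... */
        for (i = 0; i < NREPS; i++) {
            if (rank == 0) {
                MPI_Ssend(buf, COUNT, MPI_INT, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, COUNT, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
            } else if (rank == 1) {
                MPI_Recv(buf, COUNT, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                MPI_Ssend(buf, COUNT, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();                 /* ...and after */

        if (rank == 0)                    /* 2 messages per iteration */
            printf("time per message: %g s\n", (t1 - t0) / (2.0 * NREPS));

        MPI_Finalize();
        return 0;
    }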

Deadlock
❑ [Diagram: processes 0 to 5 in a communicator, each blocked in a send to another process, so that no receive is ever posted and no send can complete.]

Non-Blocking Communications
❑ Separate communication into three phases:
  1. Initiate non-blocking communication.
  2. Do some work (perhaps involving other communications?).
  3. Wait for non-blocking communication to complete.

Non-Blocking Send
❑ [Diagram: ranks 0 to 5 in a communicator; the sender initiates the message ("out") and continues working while it is delivered ("in").]
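A sketch of the kind of code the deadlock diagram warns about; the cyclic exchange pattern here is our reading of the diagram. Every process starts with a synchronous send, so no process ever reaches its receive:

    #include <mpi.h>

    /* WARNING: this program deliberately deadlocks when run on   */
    /* two or more processes; it exists only to show the problem. */
    int main(int argc, char **argv)
    {
        int rank, size, in, out;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        out = rank;

        /* Each synchronous send blocks until its receive is posted,
           but every process is stuck here, so none ever is. */
        MPI_Ssend(&out, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
        MPI_Recv(&in, 1, MPI_INT, (rank + size - 1) % size, 0,
                 MPI_COMM_WORLD, &status);

        MPI_Finalize();
        return 0;
    }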

Non-Blocking Receive
❑ [Diagram: ranks 0 to 5 in a communicator; the receiver initiates the receive ("in") and continues working until the message arrives.]

Handles Used for Non-Blocking Comms
❑ datatype: same as for blocking (MPI_Datatype or INTEGER).
❑ communicator: same as for blocking (MPI_Comm or INTEGER).
❑ request: MPI_Request or INTEGER.
❑ A request handle is allocated when a communication is initiated.

Non-Blocking Receive
❑ C:
      int MPI_Irecv(void *buf, int count,
                    MPI_Datatype datatype, int src,
                    int tag, MPI_Comm comm,
                    MPI_Request *request)

      int MPI_Wait(MPI_Request *request,
                   MPI_Status *status)
❑ Fortran:
      MPI_IRECV(buf, count, datatype, src,
                tag, comm, request, ierror)
      MPI_WAIT(request, status, ierror)

Non-Blocking Synchronous Send
❑ C:
      int MPI_Issend(void *buf, int count,
                     MPI_Datatype datatype, int dest,
                     int tag, MPI_Comm comm,
                     MPI_Request *request)

      int MPI_Wait(MPI_Request *request,
                   MPI_Status *status)
❑ Fortran:
      MPI_ISSEND(buf, count, datatype, dest,
                 tag, comm, request, ierror)
      MPI_WAIT(request, status, ierror)
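With these calls, the deadlocked ring exchange sketched earlier can be fixed by initiating the receive first; the three phases (initiate, work, wait) are marked in the comments. A minimal sketch, again assuming the cyclic exchange pattern:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, in, out;
        MPI_Request request;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        out = rank;

        /* Phase 1: initiate a non-blocking receive from the left neighbour. */
        MPI_Irecv(&in, 1, MPI_INT, (rank + size - 1) % size, 0,
                  MPI_COMM_WORLD, &request);

        /* Phase 2: do some work - here, the send to the right neighbour,
           which can now be matched because every receive is already posted. */
        MPI_Ssend(&out, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);

        /* Phase 3: wait for the non-blocking receive to complete. */
        MPI_Wait(&request, &status);

        printf("rank %d received %d\n", rank, in);
        MPI_Finalize();
        return 0;
    }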
