

SLIDE 1

Introduction to Parallel Computing

Irene Moulitsas

Programming using the Message-Passing Paradigm

SLIDE 2

MPI Background

MPI: the Message Passing Interface. The standardization effort began at Supercomputing '92 and brought together:

Vendors: IBM, Intel, Cray

Library writers: PVM

Application specialists: national laboratories and universities

SLIDE 3

Why MPI?

One of the oldest message-passing libraries, with wide-spread adoption. Portable, with minimal requirements on the underlying hardware.

Parallelization is explicit: intellectually demanding for the programmer, but it achieves high performance and scales to large numbers of processors.

SLIDE 4

MPI Programming Structure

Asynchronous: all tasks execute independently, which is hard to reason about and can behave non-deterministically.

Loosely synchronous: tasks synchronize only to perform interactions, which is easier to reason about.

SPMD: Single Program Multiple Data; every process runs the same program on its own share of the data.

SLIDE 5

MPI Features

Communicator information
Point-to-point communication
Collective communication
Topology support
Error handling

SLIDE 6

Six Golden MPI Functions

The MPI standard defines about 125 functions, but six of them suffice for most programs.
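The deck does not list them on this slide, but the six are almost certainly the classic minimal subset, shown here with their C prototypes:

    #include "mpi.h"

    int MPI_Init(int *argc, char ***argv);       /* start the MPI environment     */
    int MPI_Finalize(void);                      /* shut the MPI environment down */
    int MPI_Comm_size(MPI_Comm comm, int *size); /* number of processes in comm   */
    int MPI_Comm_rank(MPI_Comm comm, int *rank); /* this process's id in comm     */
    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm);
    int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                 int source, int tag, MPI_Comm comm, MPI_Status *status);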

SLIDE 7

MPI Functions: Initialization

MPI_Init must be called by all processes before any other MPI routine; like the other MPI functions, it returns MPI_SUCCESS on success.

All MPI types and prototypes come from the header "mpi.h".

SLIDE 8

MPI Functions: Communicator

A communicator (type MPI_Comm) identifies a group of processes that may communicate with one another.

MPI_COMM_WORLD is the predefined communicator containing all of the program's processes.

SLIDE 9

Hello World!
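The slide's code listing did not survive extraction. A minimal sketch of the usual first MPI program, assuming the standard Init/rank/size/Finalize pattern, is:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);               /* must precede any other MPI call */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which process am I?             */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* how many processes are there?   */
        printf("Hello world! I am %d of %d\n", rank, size);
        MPI_Finalize();                       /* must be the last MPI call       */
        return 0;
    }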

SLIDE 10

Hello World! (correct)
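The corrected listing is also lost, so the exact fix shown is unclear; a common version of this exercise observes that the naive program's output lines can appear in any order, since all processes print concurrently. One standard remedy, sketched here under that assumption, funnels every greeting to process 0, which prints them in rank order:

    #include <stdio.h>
    #include <string.h>
    #include "mpi.h"

    int main(int argc, char *argv[]) {
        char msg[64];
        int rank, size, i;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank != 0) {            /* every worker sends its greeting to rank 0 */
            sprintf(msg, "Hello world! I am %d of %d", rank, size);
            MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        } else {                    /* rank 0 prints all greetings in rank order */
            printf("Hello world! I am %d of %d\n", rank, size);
            for (i = 1; i < size; i++) {
                MPI_Recv(msg, 64, MPI_CHAR, i, 0, MPI_COMM_WORLD, &status);
                printf("%s\n", msg);
            }
        }
        MPI_Finalize();
        return 0;
    }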

SLIDE 11

MPI Functions: Send, Recv

MPI_Recv's source argument may name a specific rank, or be MPI_ANY_SOURCE to accept a message from any sender.

A receive also fills in an MPI_Status structure, whose MPI_SOURCE, MPI_TAG, and MPI_ERROR fields record where the message came from, its tag, and any error.
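A small sketch of the pairing, assuming a run with at least two processes and the usual Init/rank setup: rank 1 receives with the wildcards and then inspects the status fields.

    int value;
    MPI_Status status;

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        printf("got %d from rank %d, tag %d\n",
               value, status.MPI_SOURCE, status.MPI_TAG);
    }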

SLIDE 12

MPI Functions: Datatypes
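The datatype table on this slide did not survive extraction; the standard correspondence between the basic MPI datatypes and C types is:

    MPI_CHAR       signed char
    MPI_SHORT      signed short int
    MPI_INT        signed int
    MPI_LONG       signed long int
    MPI_UNSIGNED   unsigned int
    MPI_FLOAT      float
    MPI_DOUBLE     double
    MPI_BYTE       raw bytes (no conversion)
    MPI_PACKED     data packed with MPI_Pack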

SLIDE 13

Send/Receive Examples

SLIDE 14

Blocking Non-Buffered Communication
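In this protocol a blocking send does not return until the matching receive has started, so the order of calls matters. The classic hazard, presumably what this slide's diagram illustrates, is both processes sending first (a, b, n, rank, and status are assumed declared as in the earlier fragments):

    /* Deadlocks under blocking non-buffered sends: each      */
    /* process sits in MPI_Send waiting for the other's Recv. */
    if (rank == 0) {
        MPI_Send(a, n, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(b, n, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
    } else if (rank == 1) {
        MPI_Send(a, n, MPI_INT, 0, 0, MPI_COMM_WORLD);
        MPI_Recv(b, n, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    }
    /* Fix: reverse the two calls on one of the ranks. */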

SLIDE 15

Send/Receive Examples

SLIDE 16

Blocking Buffered Communication
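With buffering, a blocking send copies the outgoing message into a buffer (dedicated communication hardware, or user space attached via MPI_Buffer_attach for MPI_Bsend) and returns without waiting for the matching receive, so the exchange above completes; a program can still stall if buffer space runs out.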

SLIDE 17

Send/Receive Examples

SLIDE 18

MPI Functions: SendRecv
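MPI_Sendrecv combines the send and the receive into a single call and lets the library schedule the two halves, so the circular-wait pattern above cannot deadlock. A sketch of a pairwise exchange (partner, a, b, n, and status assumed declared):

    MPI_Sendrecv(a, n, MPI_INT, partner, 0,   /* what to send, and to whom       */
                 b, n, MPI_INT, partner, 0,   /* where to receive, and from whom */
                 MPI_COMM_WORLD, &status);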

SLIDE 19

MPI Functions: ISend, IRecv

MPI_Isend and MPI_Irecv are non-blocking: they start the transfer and return immediately, handing back an MPI_Request handle for tracking completion.
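Their C prototypes; the extra MPI_Request output parameter is the handle to the pending operation:

    int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest,
                  int tag, MPI_Comm comm, MPI_Request *request);
    int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source,
                  int tag, MPI_Comm comm, MPI_Request *request);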

SLIDE 20

MPI Functions: Test, Wait

MPI_Test checks, without blocking, whether the operation behind a request has finished; MPI_Wait blocks until it has.
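A sketch of the usual overlap idiom, again assuming the fragment runs between MPI_Init and MPI_Finalize with b, n, and partner declared: post the receive early, compute, then wait.

    MPI_Request request;
    MPI_Status status;
    int flag = 0;

    MPI_Irecv(b, n, MPI_INT, partner, 0, MPI_COMM_WORLD, &request);
    /* ... computation that does not touch b overlaps the transfer ... */
    MPI_Test(&request, &flag, &status);   /* non-blocking: flag != 0 if done */
    if (!flag)
        MPI_Wait(&request, &status);      /* block until the message arrives */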

SLIDE 21

Non-Blocking Non-Buffered Communication

SLIDE 22

Example

SLIDE 23

Example

SLIDE 24

Example

SLIDE 25

MPI Functions: Synchronization
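The synchronization routine meant here is presumably MPI_Barrier, which returns only after every process in the communicator has entered it:

    MPI_Barrier(MPI_COMM_WORLD);   /* no process continues until all arrive */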

SLIDE 26

Collective Communications

One-to-all broadcast
All-to-one reduction
All-to-all broadcast & reduction
All-reduce & prefix-sum
Scatter and gather
All-to-all personalized

SLIDE 27

MPI Functions: Broadcast
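All ranks make the same call; MPI_Bcast copies the root's buffer to every process. A sketch assuming rank 0 is the root:

    int value = 0;
    if (rank == 0) value = 42;                        /* only the root has it */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD); /* now every rank does  */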

SLIDE 28

MPI Functions: Scatter & Gather
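A sketch of the pair, assuming the root's array length divides evenly among the processes: MPI_Scatter deals equal chunks of the root's array out to all ranks, and MPI_Gather collects them back.

    /* root (rank 0) hands `chunk` elements to each process ... */
    MPI_Scatter(sendbuf, chunk, MPI_INT,
                recvbuf, chunk, MPI_INT, 0, MPI_COMM_WORLD);
    /* ... and later collects each process's chunk back again   */
    MPI_Gather(recvbuf, chunk, MPI_INT,
               sendbuf, chunk, MPI_INT, 0, MPI_COMM_WORLD);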

SLIDE 29

MPI Functions: All Gather
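MPI_Allgather is the gather whose result lands on every process: each rank contributes one chunk and receives the concatenation of all chunks in rank order.

    MPI_Allgather(sendbuf, chunk, MPI_INT,
                  recvbuf, chunk, MPI_INT, MPI_COMM_WORLD);
    /* recvbuf on every rank now holds size * chunk elements */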

SLIDE 30

MPI Functions: All-to-All Personalized
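In the all-to-all personalized exchange each process sends a distinct block to every other process; MPI_Alltoall performs this transpose-like shuffle in one call.

    MPI_Alltoall(sendbuf, chunk, MPI_INT,
                 recvbuf, chunk, MPI_INT, MPI_COMM_WORLD);
    /* block i of sendbuf goes to rank i; block j of recvbuf came from rank j */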

SLIDE 31

MPI Functions: Reduction
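A sketch: MPI_Reduce combines one contribution per process under the given operation and leaves the result on the root, here assumed to be rank 0.

    int local = rank, sum = 0;
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    /* on rank 0, sum == 0 + 1 + ... + (size-1); elsewhere sum is unchanged */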

SLIDE 32

MPI Functions: Operations
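The predefined reduction operations this slide presumably tabulates are:

    MPI_MAX, MPI_MIN              maximum, minimum
    MPI_SUM, MPI_PROD             sum, product
    MPI_LAND, MPI_LOR, MPI_LXOR   logical and, or, exclusive or
    MPI_BAND, MPI_BOR, MPI_BXOR   bitwise and, or, exclusive or
    MPI_MAXLOC, MPI_MINLOC        max/min value together with its location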

SLIDE 33

MPI Functions: All-reduce

Same as MPI_Reduce, except that all processes, not just the root, receive the result of the MPI_Op operation.
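A sketch mirroring the reduction example; the only differences from MPI_Reduce are the missing root argument and that every rank gets the total.

    int local = rank, sum = 0;
    MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    /* every rank now holds the same sum */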

SLIDE 34

MPI Functions: Prefix Scan
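MPI_Scan computes a running (inclusive) prefix reduction: rank i receives the combination of the values from ranks 0 through i. A sketch:

    int local = rank + 1, prefix = 0;
    MPI_Scan(&local, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    /* rank i now holds 1 + 2 + ... + (i + 1) */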

SLIDE 35

MPI Names

SLIDE 36

MPI Functions: Topology
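The topology routines presumably covered here arrange the processes of a communicator into a virtual grid; for example, MPI_Cart_create builds a 2-D mesh (size and the non-periodic dims are assumptions of this sketch):

    int dims[2] = {0, 0}, periods[2] = {0, 0};
    MPI_Comm grid;
    MPI_Dims_create(size, 2, dims);            /* factor size into a 2-D grid */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims,
                    periods, 1, &grid);        /* reorder ranks if it helps   */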

SLIDE 37

Performance Evaluation

Elapsed (wall-clock) time
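MPI's portable wall-clock timer is MPI_Wtime, which returns seconds as a double; the usual pattern brackets the measured region with two readings:

    double t0, t1;
    MPI_Barrier(MPI_COMM_WORLD);       /* start all processes together */
    t0 = MPI_Wtime();
    /* ... code being measured ... */
    t1 = MPI_Wtime();
    if (rank == 0)
        printf("elapsed: %f seconds\n", t1 - t0);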

SLIDE 38

Matrix/Vector Multiply