A View on MPI's Recent Past, Present, and Future



SLIDE 1

spcl.inf.ethz.ch @spcl_eth

25 Years of MPI Symposium

A View on MPI's Recent Past, Present, and Future

Argonne National Lab/EuroMPI/USA Conference, Chicago, IL

Torsten Hoefler (on behalf of nobody, neither my institution, nor myself, nor the MPI collectives WG!)

Thanks for organizing this!

“Abstract is good, but ... a bit much like a technical talk?”

SLIDE 2

My personal journey with MPI

[Figure: map of disposable income distribution – <17k, <20k, <23k, <26k, <29k, <33k]

SLIDE 3

Nonblocking collective operations – first discussed in MPI-1!

▪ MPI_I<collective>(args, MPI_Request *req);

[Figure: speedup for Jacobi/CG (EuroPVM/MPI’06) – six years later, ten years later]
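A minimal usage sketch (my illustration, not from the slides; the function name cg_step and the buffer sizes are invented): start the nonblocking collective, overlap independent computation, then complete it with MPI_Wait.

#include <mpi.h>

void cg_step(const double *local, int n, double *global_dot, MPI_Comm comm)
{
    double local_dot = 0.0;
    for (int i = 0; i < n; i++)            /* local part of the dot product */
        local_dot += local[i] * local[i];

    MPI_Request req;
    /* MPI_I<collective> returns immediately; communication proceeds in the background */
    MPI_Iallreduce(&local_dot, global_dot, 1, MPI_DOUBLE, MPI_SUM, comm, &req);

    /* ... independent computation here overlaps with the reduction ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);     /* complete the nonblocking collective */
}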

SLIDE 4

But wait, nonblocking barriers, seriously?

… turns out to be very useful after all:

Dynamic Sparse Data Exchange

MPI_Ibarrier() MPI_Issend()
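A sketch of how these two calls combine for a dynamic sparse data exchange (the single-int payload and the function name are illustrative assumptions): MPI_Issend completes only once the receiver has matched the message, so when a rank's sends are done it enters MPI_Ibarrier, and when that barrier completes every rank knows all messages have arrived.

#include <mpi.h>
#include <stdlib.h>

/* Each rank sends one int to each destination in dests[0..ndests-1];
   the set of ranks sending to this rank is unknown in advance. */
void sparse_exchange(const int *dests, int ndests, int payload, MPI_Comm comm)
{
    MPI_Request *sreq = malloc(ndests * sizeof(MPI_Request));
    for (int i = 0; i < ndests; i++)        /* synchronous send: completes only when matched */
        MPI_Issend(&payload, 1, MPI_INT, dests[i], 0, comm, &sreq[i]);

    MPI_Request barrier = MPI_REQUEST_NULL;
    int done = 0;
    while (!done) {
        int flag;
        MPI_Status st;
        MPI_Iprobe(MPI_ANY_SOURCE, 0, comm, &flag, &st);    /* anything incoming? */
        if (flag) {
            int data;
            MPI_Recv(&data, 1, MPI_INT, st.MPI_SOURCE, 0, comm, MPI_STATUS_IGNORE);
        }
        if (barrier == MPI_REQUEST_NULL) {
            int sent;
            MPI_Testall(ndests, sreq, &sent, MPI_STATUSES_IGNORE);
            if (sent)                        /* all my sends matched: announce I am done */
                MPI_Ibarrier(comm, &barrier);
        } else {
            MPI_Test(&barrier, &done, MPI_STATUS_IGNORE);   /* all ranks announced: exchange is over */
        }
    }
    free(sreq);
}

Because only matched sends allow a rank to join the barrier, no rank can leave the loop before every message destined for it has been received.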

SLIDE 5

Neighborhood Collectives

▪ Just datatypes for collectives – default collectives are “contiguous”, neighbor collectives are user-defined

[Timeline: 1994 → 2004, 2004 → 2014]

We need to focus on optimizing MPI-3 now (similar issues for RMA etc.)
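A small sketch of a neighborhood collective (my example; the 2-D periodic grid and the one-element halos are assumptions): the Cartesian topology defines the communication pattern, and user-defined datatypes could describe strided halos in place of the contiguous doubles used here.

#include <mpi.h>

/* sendbuf and recvbuf each hold one double per Cartesian neighbor (4 in 2-D). */
void halo_exchange(MPI_Comm comm, const double *sendbuf, double *recvbuf)
{
    int size, dims[2] = {0, 0}, periods[2] = {1, 1};
    MPI_Comm cart;

    MPI_Comm_size(comm, &size);
    MPI_Dims_create(size, 2, dims);              /* pick a balanced 2-D grid */
    MPI_Cart_create(comm, 2, dims, periods, 0, &cart);

    /* exchange one element with each neighbor defined by the topology */
    MPI_Neighbor_alltoall(sendbuf, 1, MPI_DOUBLE,
                          recvbuf, 1, MPI_DOUBLE, cart);

    MPI_Comm_free(&cart);
}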

SLIDE 6

State of MPI today – programming has changed dramatically

[Figures: programming until 10 years ago vs. today’s programming]

And the domain scientists?

SLIDE 7

HPC community codes towards the end of Moore’s law (i.e., age of acceleration)

‘07: Fortran + MPI
‘12: Fortran + MPI + C++ (DSL) + CUDA
‘13: Fortran + MPI + C++ (DSL) + CUDA + OpenACC
‘??: Fortran + MPI + C++ (DSL) + CUDA + OpenACC + XXX

What about the MPI community, and how can we help?

SLIDE 8

MPI’s own Innovator’s Dilemma

MPI+X

▪ We should have a bold research strategy to go forward!

▪ Distributed CUDA – Run MPI right on your GPU (SC’16)

▪ Streaming Processing in Network – CUDA for Network Cards (SC’17)

▪ Rethink MPI! Replace MPI?

▪ Data-Centric Parallel Programming – Turn MPI’s principles into a language!

▪ MPI for Big Data – Distributed Join Algorithms on Thousands of Cores (VLDB’17)

SLIDE 9

Let’s move MPI to new heights!

https://spcl.inf.ethz.ch/Jobs/ Torsten