
LRTS: A Portable High Performance Low-level Communication Interface
Yanhua Sun, Laxmikant (Sanjay) V. Kalé
University of Illinois at Urbana-Champaign, sun51@illinois.edu
April 15, 2013


  1. Title: LRTS: A Portable High Performance Low-level Communication Interface. Yanhua Sun, Laxmikant (Sanjay) V. Kalé, University of Illinois at Urbana-Champaign.

  2. Motivation
     What the vendors provide: modern supercomputers, and especially their networks, are complicated.
     What the programming models require:
       - Global address space models
       - Message passing models
       - Message-driven (active message) models
     Wanted: a minimum set of functions to implement these runtime systems.

  5. Outline
     Goals of LRTS
     Charm++ architecture on LRTS
     Core APIs and extended APIs
     Performance of micro-benchmarks and NAMD
     Future work

  6. Goals of LRTS
     Goal = Completeness + Productivity + Portability + Performance
     Completeness: sufficient to run Charm++.
     Productivity: porting requires no knowledge of Charm++ internals; Charm++ developers can easily add new features (e.g. Replica).
     Portability: functions must not depend on specific machines.
     Performance: leaves room for machine-specific optimization.

  8. Charm++ Architecture, Based on LRTS
     [Layer diagram, built up over slides 8-12:]
     Applications: NAMD, ChaNGa, OpenAtom, and others.
     Libraries and languages: Charm++, MSA, Charisma, all libraries.
     Charm++ programming model: chares, chare arrays, SDAG, entry methods.
     Converse runtime system: message scheduler, threads, Converse queues, seed load balancer, load balancing, projections, Converse initialization, communication.
     LRTS: non-SMP/SMP implementation, common broadcast, machine-specific init and communication.
     Machine layers: DCMF, TCP/IP, MPI, uGNI.

  13. Charm++ Naming Rules
     CkFoo: used by most Charm++ programmers.
     CmiFoo: used by Converse programs.
     LrtsFoo: implemented only by vendors/machine layers.
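A hypothetical two-line illustration of how the naming tiers stack: CmiBarrier is the Converse-level name and forwards to the machine-level LrtsBarrier; the stub bodies below are illustrative, not the real implementations.

```c
/* Naming-tier sketch: Charm++ code sees Ck*, Converse provides Cmi*,
 * and only the machine layer implements Lrts*. Stub bodies only. */
static int lrts_barrier_calls = 0;

void LrtsBarrier(void) { lrts_barrier_calls++; }  /* vendor implements */
void CmiBarrier(void)  { LrtsBarrier(); }         /* Converse forwards */
```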

  14. Messaging Flow
     Non-SMP mode: one process per core (hardware thread).
     SMP mode: one thread per core (hardware thread); intra-node communication by passing pointers; a dedicated communication thread per process.
     [Diagram, slides 14-16: on each node, worker threads (Thread 0, Thread 1, ...) have per-thread message queues; the communication thread drains a sending-message queue to the network and receives incoming messages on behalf of the workers.]
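The pointer-passing intra-node path in the messaging-flow diagram can be sketched with a plain linked queue: "sending" within a node just enqueues the message pointer, so no copy is made. The names and the single-threaded queue are illustrative; the real SMP build uses concurrent queues (one per worker thread) plus the communication thread.

```c
#include <stdlib.h>

/* Sketch of a per-thread message queue: intra-node delivery passes
 * the message pointer itself, never the bytes. */
typedef struct QNode { struct QNode *next; void *msg; } QNode;
typedef struct { QNode *head, *tail; } MsgQueue;

void MsgQueuePush(MsgQueue *q, void *msg) {
    QNode *n = malloc(sizeof *n);
    n->next = NULL; n->msg = msg;
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
}

void *MsgQueuePop(MsgQueue *q) {
    if (!q->head) return NULL;           /* queue empty */
    QNode *n = q->head;
    void *m = n->msg;
    q->head = n->next;
    if (!q->head) q->tail = NULL;
    free(n);
    return m;                            /* same pointer, zero copy */
}
```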

  17. Core APIs Required to Run Charm++
     Startup and shutdown:
       void LrtsInit(int *argc, char ***argv, int *numNodes, int *myNodeID)
       void LrtsExit()
       void LrtsBarrier()
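The three startup/shutdown calls can be sketched as a minimal stub machine layer; a real layer (e.g. the MPI one) would initialize the network here, and the single-node behaviour below is purely illustrative.

```c
#include <stddef.h>

/* Minimal single-node stub of the startup/shutdown calls. A real
 * machine layer would bring up the network here (e.g. MPI_Init on the
 * MPI layer) and report the real node count and node ID. */
static int lrts_running = 0;

void LrtsInit(int *argc, char ***argv, int *numNodes, int *myNodeID) {
    (void)argc; (void)argv;   /* a real layer strips its own flags here */
    *numNodes = 1;            /* stub: a single logical node            */
    *myNodeID = 0;
    lrts_running = 1;
}

void LrtsBarrier(void) {
    /* no-op with one node; the MPI layer would call MPI_Barrier */
}

void LrtsExit(void) {
    lrts_running = 0;         /* a real layer tears the network down */
}
```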

  18. Core APIs - P2P Communication
     Sending messages:
       CmiCommHandle LrtsSendFunc(int destNode, int destPE, int size, char *msg, int mode);
     Different protocols are selected by message size; the machine layer may buffer messages.
     Making progress:
       void LrtsAdvanceCommunication(int whileidle);
     sends buffered messages and polls the network.
     Delivery upcall into the runtime:
       void handleOneRecvedMsg(int size, char *msg)
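The send/progress/delivery trio can be sketched with a loopback "network": LrtsSendFunc only buffers the message, and LrtsAdvanceCommunication later drains the buffer and hands each message to handleOneRecvedMsg. The queue layout, the void* handle, and the delivery bookkeeping are hypothetical; the sketch only makes the buffering scheme and polling loop visible.

```c
#include <stdlib.h>
#include <string.h>

typedef void *CmiCommHandle;       /* assumed opaque handle type */

/* hypothetical send buffer: a singly linked FIFO of pending messages */
typedef struct Buffered {
    struct Buffered *next;
    int size;
    char *msg;
} Buffered;

static Buffered *send_q_head = NULL, *send_q_tail = NULL;

/* test hooks: record what was delivered */
static int delivered_count = 0;
static char last_msg[64];

void handleOneRecvedMsg(int size, char *msg) {
    int n = size < 63 ? size : 63;
    memcpy(last_msg, msg, n);
    last_msg[n] = '\0';
    delivered_count++;
    free(msg);                      /* runtime owns the message now */
}

CmiCommHandle LrtsSendFunc(int destNode, int destPE, int size, char *msg, int mode) {
    (void)destNode; (void)destPE; (void)mode;
    Buffered *b = malloc(sizeof *b);
    b->next = NULL; b->size = size; b->msg = msg;
    if (send_q_tail) send_q_tail->next = b; else send_q_head = b;
    send_q_tail = b;                /* buffered, not yet on the wire */
    return NULL;                    /* NULL handle: nothing to wait on */
}

void LrtsAdvanceCommunication(int whileidle) {
    (void)whileidle;
    while (send_q_head) {           /* drain buffered sends */
        Buffered *b = send_q_head;
        send_q_head = b->next;
        if (!send_q_head) send_q_tail = NULL;
        handleOneRecvedMsg(b->size, b->msg);  /* loopback "network" */
        free(b);
    }
}
```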

  21. Extended APIs - Memory
     Memory management:
       void *LrtsAlloc(int n_bytes)
       void LrtsFree(void *msg)
     Pinned memory pool (uGNI); L2Atomic queues for freed messages.
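A minimal sketch of the pooling idea behind LrtsAlloc/LrtsFree: freed small messages are kept on a freelist and reused, so pinned memory is not repeatedly registered and deregistered. The header layout, slab size, and the single global freelist are all illustrative assumptions.

```c
#include <stdlib.h>

/* Hypothetical fixed-size freelist pool echoing the uGNI pinned pool. */
#define POOL_SLAB 4096                    /* illustrative slab size */

typedef struct Hdr { int from_pool; struct Hdr *next; } Hdr;
static Hdr *free_list = NULL;

void *LrtsAlloc(int n_bytes) {
    Hdr *h;
    if (n_bytes + (int)sizeof(Hdr) <= POOL_SLAB) {
        if (free_list) {                  /* reuse a freed slab */
            h = free_list;
            free_list = h->next;
        } else {
            h = malloc(POOL_SLAB);        /* pool slabs are one size */
        }
        h->from_pool = 1;
    } else {
        h = malloc(n_bytes + sizeof(Hdr)); /* large: plain allocation */
        h->from_pool = 0;
    }
    return h + 1;                          /* user data after header */
}

void LrtsFree(void *msg) {
    Hdr *h = (Hdr *)msg - 1;
    if (h->from_pool) {                    /* back onto the freelist */
        h->next = free_list;
        free_list = h;
    } else {
        free(h);
    }
}
```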

  22. Extended APIs - Persistent Messages
     Persistent messages: communication partners and message sizes do not change, so buffers can be set up once and reused.
     RDMA support (uGNI, PAMI, Ibverbs):
       void LrtsSendPersistentMsg(PersistentHandle h, int destNode, int size, void *msg)
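A sketch of the persistent-message idea: because partners and sizes are fixed, the destination buffer is registered once and later sends copy straight into it, standing in for an RDMA put. The handle representation, the CreatePersistent setup helper, and the table are hypothetical; only LrtsSendPersistentMsg's signature comes from the slide.

```c
#include <string.h>

#define MAX_PERSIST 8

/* hypothetical registration record for one persistent channel */
typedef struct { int destNode; int size; char *destBuf; } PersistentRec;
typedef int PersistentHandle;      /* assumed handle representation */

static PersistentRec table[MAX_PERSIST];
static int nhandles = 0;

/* one-time setup: register the destination buffer for this channel
 * (hypothetical helper, not part of the slide's API) */
PersistentHandle CreatePersistent(int destNode, int size, char *destBuf) {
    table[nhandles].destNode = destNode;
    table[nhandles].size = size;
    table[nhandles].destBuf = destBuf;
    return nhandles++;
}

void LrtsSendPersistentMsg(PersistentHandle h, int destNode, int size, void *msg) {
    (void)destNode;
    /* "RDMA put" directly into the pre-registered buffer: no handshake,
     * no per-message registration */
    memcpy(table[h].destBuf, msg, size);
}
```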

  24. Extended APIs - Collectives
     void LrtsBroadcast(): a common implementation plus machine-specific ones.
     Topologies: spanning tree, hypercube.
     All functions are asynchronous.
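The spanning-tree option can be sketched with the usual array-embedded k-ary tree, where node i forwards the broadcast to children k*i+1 through k*i+k. The function name and branching factor are illustrative, not the runtime's actual code.

```c
/* k-ary spanning-tree fan-out sketch for the common broadcast:
 * returns how many children node `me` forwards to, writing their
 * IDs into `children`. */
int SpanningTreeChildren(int me, int numNodes, int k, int *children) {
    int n = 0;
    for (int c = k * me + 1; c <= k * me + k && c < numNodes; c++)
        children[n++] = c;
    return n;
}
```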

  26. Status of LRTS
     Cray machines with uGNI: XE, XK, XC.
       Sun et al., "A uGNI-Based Asynchronous Message-driven Runtime System for Cray Supercomputers with Gemini Interconnect," IPDPS 2012.
       Sun et al., "Optimizing Fine-grained Communication in a Biomolecular Simulation Application on Cray XK6," SC 2012.
     IBM machines: Blue Gene/P with DCMF; Blue Gene/Q with PAMI.
       Kumar et al., "Acceleration of an Asynchronous Message Driven Programming Paradigm on IBM Blue Gene/Q," IPDPS 2013.
     Machines supporting MPI.
     InfiniBand clusters.

  27. Performance - Latency on BG/Q
     [Chart: one-way latency in microseconds for 32-byte, 1024-byte, and 8192-byte messages across four Charm++ builds: PAMI, PAMI-LRTS, PAMI SMP, PAMI-LRTS SMP.]

  28. Performance - Bandwidth on BG/Q
     [Chart: bandwidth in GB/s for 1024-byte, 32 KB, and 1 MB messages across the PAMI, PAMI-LRTS, PAMI SMP, and PAMI-LRTS SMP builds.]

  29. Application Performance
     NAMD ApoA1 (92K atoms) with PME every 4 steps on BG/Q.
     [Chart: timestep in ms/step on 32 nodes (2048 hardware threads) and 64 nodes (4096 hardware threads) for the PAMI, PAMI-LRTS, PAMI SMP, and PAMI-LRTS SMP builds.]

  30. 100M-atom Simulation on State-of-the-Art Machines
     Best performance on Blue Waters: 8.9 ms/step with 25K nodes.
     Titan: 13 ms/step with 18K nodes.
     Blue Gene/Q: 17.9 ms/step with 16K nodes.

  31. Conclusion and Future Work
     Conclusion: the LRTS interface simplifies implementing the runtime on new hardware while maintaining good performance.
     Future work: message buffering and scheduling; a fault tolerance interface; implementing other runtime systems on LRTS (Unistack).
