

  1. Service-Oriented Programming in MPI. Sarwar Alam, Humaira Kamal and Alan Wagner, University of British Columbia, Network Systems Security Lab

  2. Overview. Problem: how to provide data structures to MPI? • Fine-Grain MPI • Service-Oriented Programming • Performance Tuning

  3. Issues • Composition: abstraction, cohesion, low coupling • Properties: hierarchical communication, scalability, slackness, load-balancing

  4. Fine-Grain MPI

  5. MPI • Advantages: efficient over many fabrics; rich communication library • Disadvantages: bound to OS processes; SPMD programming model; coarse-grain

  6. Fine-Grain MPI. Program: OS processes with co-routines (fibers). • Full-fledged MPI “processes” • Combination of OS-scheduled and user-level lightweight processes inside each OS process. [Diagram: MPI processes inside a multicore node.]

  7. Fine-Grain MPI • One model, inside and between nodes • Interleaved concurrency • Parallel: within the same node and between nodes. [Diagram: Node 1 and Node 2.]

  8. Integrated into MPICH2. [Roadmap: Composition (abstraction, cohesion, low coupling); Properties (hierarchical communication, scalability, slackness, load-balancing)]

  9. System Details

  10. Executing FG-MPI Programs. mpiexec -nfg 2 -n 8 myprog • Example of an SPMD MPI program with 16 MPI processes, assuming two nodes with quad-core CPUs: 8 pairs of processes execute in parallel, and each pair interleaves execution. A minimal program skeleton is sketched below.
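  To make this concrete, here is a minimal FG-MPI "hello world" sketch following the boilerplate pattern from the FG-MPI distribution (the FGmpiexec entry point and the FG_ProcessPtr_t/FG_MapPtr_t types are taken from that codebase; check them against your installed version). The map function binds every co-located MPI process to the same main routine:

      #include <stdio.h>
      #include "fgmpi.h"

      int my_main(int argc, char** argv);

      /* Bind every co-located MPI process (fiber) to the same main routine. */
      FG_ProcessPtr_t binding_func(int argc, char** argv, int rank)
      {
          return (&my_main);
      }

      FG_MapPtr_t map_lookup(int argc, char** argv, char* str)
      {
          return (&binding_func);
      }

      int main(int argc, char* argv[])
      {
          /* FGmpiexec spawns the co-located MPI processes requested by -nfg. */
          FGmpiexec(&argc, &argv, &map_lookup);
          return 0;
      }

      int my_main(int argc, char** argv)
      {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          printf("MPI process %d of %d\n", rank, size);
          MPI_Finalize();
          return 0;
      }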

  11. Decoupled from Hardware. mpiexec -nfg 350 -n 4 myprog • Fit the number of processes to the problem rather than to the number of cores.

  12. Flexibility. Move the boundary between lightweight, user-scheduled concurrency and processes running in parallel:
      mpiexec -nfg 1000 -n 4 myprog
      mpiexec -nfg 500 -n 8 myprog
      mpiexec -nfg 750 -n 4 myprog : -nfg 250 -n 4 myprog

  13. Scalability.
      mpiexec -nfg 30000 -n 8 myprog
      • Can have hundreds of thousands of MPI processes.
      mpiexec -nfg 16000 -n 6500 myprog
      • 100 million processes on 6500 cores.
  [Roadmap: Composition (abstraction, cohesion, low coupling); Properties (hierarchical communication, scalability, slackness, load-balancing)]

  14. Service-Oriented Programming • Linked-list structure • Keys in sorted order • Similar to distributed hash tables and Linda tuple spaces

  15. Ordered Linked-List. [Diagram: one MPI process in the ordered list. Each process stores one or more keys and the data associated with those key values, the rank of the next MPI process in the ordered list (the process holding the next-larger key values), the rank of the previous MPI process, and the minimum key value of the items stored in the next MPI process.] A sketch of this per-process state follows.
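  The slide's diagram implies that each list process holds roughly the following state. This struct is an illustrative sketch; the field names and the MAX_KEYS bound are assumptions, not from the talk:

      #define MAX_KEYS 16   /* assumed per-process capacity */

      /* Illustrative state for one list process; the slide describes the
       * information held, not a concrete layout. */
      typedef struct {
          int   next_rank;        /* rank of the next MPI process in the ordered list */
          int   prev_rank;        /* rank of the previous MPI process */
          int   next_min_key;     /* minimum key value stored in the next process */
          int   num_keys;         /* number of keys stored locally (one or more) */
          int   keys[MAX_KEYS];   /* locally stored keys, in sorted order */
          void* values[MAX_KEYS]; /* data associated with each key */
      } list_node_state;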

  16. Ordered Linked-List. [Diagram: list processes L0, L12, L18, L21, L28, L43, L56, L75 linked in key order, with requests A38, A45, A3 in flight.]

  17. Ordered Linked-List

  18. INSERT

  19. DELETE

  20. FIND
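  A list process can be written as a small service loop: receive an operation, satisfy it locally if the key falls in this process's range, otherwise forward it down the list. The sketch below shows FIND only, reusing the list_node_state struct from the slide-15 sketch; the message tags, the request format, and the local_search helper (sketched after slide 24 below) are all assumptions:

      #include <mpi.h>

      #define TAG_FIND   1   /* assumed tag for FIND requests */
      #define TAG_RESULT 2   /* assumed tag for replies to application processes */

      typedef struct { int key; int reply_rank; } find_req;

      int local_search(const list_node_state *state, int key); /* sketched below */

      /* Service loop of one list process (FIND only). */
      void list_service_loop(list_node_state *state)
      {
          find_req req;
          MPI_Status status;
          for (;;) {
              MPI_Recv(&req, sizeof(req), MPI_BYTE, MPI_ANY_SOURCE,
                       TAG_FIND, MPI_COMM_WORLD, &status);
              if (req.key < state->next_min_key) {
                  /* Key belongs to this process's range: search locally and
                   * reply directly to the requesting application process. */
                  int found = local_search(state, req.key);
                  MPI_Send(&found, 1, MPI_INT, req.reply_rank,
                           TAG_RESULT, MPI_COMM_WORLD);
              } else {
                  /* Key lies further down the list: forward the request. */
                  MPI_Send(&req, sizeof(req), MPI_BYTE, state->next_rank,
                           TAG_FIND, MPI_COMM_WORLD);
              }
          }
      }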

  21. Ordered Linked-List. [Diagram: list processes L0, L12, L18, L21, L28, L43, L56, L75 servicing requests F65, F30, and A12.]

  22. Shortcuts. [Diagram: a local process ecosystem with a shortcut table mapping key values to ranks (pointers), a pool of free ranks, a manager process M10, list processes L15, L28, L34, and requests F24, F30, A12.] Local non-communication operations are ATOMIC.

  23. Re-incarnation. [Diagram: the local process ecosystem with a pool of free ranks (24, 28, 30); a deleted list process send()s its rank to the pool and blocks in recv() until reused; manager M10, list processes L15, L28, L34, request A12.] Local non-communication operations are ATOMIC. A sketch of this idle state follows. [Roadmap: Composition (abstraction, cohesion, low coupling); Properties (hierarchical communication, scalability, slackness, load-balancing)]
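  One way to picture re-incarnation in code: when its last key is deleted, a list process reports its rank to the manager's free pool and blocks in MPI_Recv until it is re-initialized with fresh state. The tags, message formats, and the manager protocol below are assumptions:

      #define TAG_FREE        3   /* assumed tag: report a rank to the free pool */
      #define TAG_REINCARNATE 4   /* assumed tag: re-initialize a free process */

      /* After its last key is deleted, a list process returns its rank to the
       * manager's free pool and waits to be reused under a new key range. */
      void become_free(list_node_state *state, int manager_rank)
      {
          int my_rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

          /* Tell the manager this rank is now free. */
          MPI_Send(&my_rank, 1, MPI_INT, manager_rank, TAG_FREE, MPI_COMM_WORLD);

          /* Block until re-incarnated with fresh node state. */
          MPI_Recv(state, sizeof(*state), MPI_BYTE, manager_rank,
                   TAG_REINCARNATE, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      }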

  24. Granularity • Added the ability for each process to manage a collection of consecutive items. • INSERT changes into a SPLIT operation. • DELETE changes on deletion of the last item. • List traversal consists of: jumping between processes; jumping between co-located processes; searching inside a process (sketched below).
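  The "search inside a process" step is an ordinary binary search over the process's sorted keys. This defines the local_search helper assumed in the FIND sketch above:

      /* Binary search over a process's locally stored, sorted keys.
       * Returns 1 if key is present, 0 otherwise. */
      int local_search(const list_node_state *state, int key)
      {
          int lo = 0, hi = state->num_keys - 1;
          while (lo <= hi) {
              int mid = lo + (hi - lo) / 2;
              if (state->keys[mid] == key)
                  return 1;
              else if (state->keys[mid] < key)
                  lo = mid + 1;
              else
                  hi = mid - 1;
          }
          return 0;
      }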

  25. Properties • Totally ordered: operations are ordered by the order in which they arrive at the root. • Sequentially consistent: each application process keeps a hold-back queue to return results in order (sketched below). • No consistency: operations can occur in any order. [Roadmap: Composition (abstraction, cohesion, low coupling); Properties (hierarchical communication, scalability, slackness, load-balancing)]
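  The hold-back queue for the sequentially consistent mode can be sketched as follows: the application stamps each operation with a sequence number and buffers replies until the next expected number arrives. All names here are illustrative, and MAX_PENDING must be at least the workload bound W:

      #define MAX_PENDING 64   /* must be >= the workload bound W */

      void deliver(int value);   /* assumed application callback */

      /* Hold-back queue: deliver results in issue order even if replies
       * from the list arrive out of order. */
      typedef struct {
          int next_to_deliver;      /* sequence number expected next */
          int ready[MAX_PENDING];   /* nonzero once result for that slot arrived */
          int result[MAX_PENDING];  /* buffered results, indexed by seq % MAX_PENDING */
      } holdback_queue;

      void on_reply(holdback_queue *q, int seq, int value)
      {
          q->result[seq % MAX_PENDING] = value;
          q->ready[seq % MAX_PENDING]  = 1;
          /* Release every result that is now in order. */
          while (q->ready[q->next_to_deliver % MAX_PENDING]) {
              int slot = q->next_to_deliver % MAX_PENDING;
              deliver(q->result[slot]);
              q->ready[slot] = 0;
              q->next_to_deliver++;
          }
      }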

  26. Performance Tuning • G (granularity): the number of keys stored in each process. • K (asynchrony): the number of messages in the channel between list processes. • W (workload): the number of outstanding operations.

  27. Steady-State Throughput. Fixed list size, evenly distributed over the cores. [Chart: measured throughput ranges from 5,793 to 16,000 operations/sec.]

  28. Granularity (G). Fixed-size machine (176 cores), fixed list size (2^20). Moving work from INSIDE a process to BETWEEN processes. [Chart: throughput of Sequentially Consistent vs. No-consistency modes as G varies; up to 10X difference.]

  29. W and K. W: number of outstanding requests (workload). K: degree of asynchrony. [Roadmap: Composition (abstraction, cohesion, low coupling); Properties (hierarchical communication, scalability, slackness, load-balancing)]

  30. Conclusions • Reduced coupling and increased cohesion • Scalability within clusters of multicores • Performance-tuning controls • Adapts to the hierarchical network fabric • Distributed-systems properties pertaining to consistency

  31. Thank You
