Optimizing Collective Communication on Multicores
Rajesh Nishtala, Katherine Yelick, University of California, Berkeley (2009)


  1. Optimizing Collective Communication on Multicores. Rajesh Nishtala, Katherine Yelick, University of California, Berkeley (2009) 1 / 57

  2. Algorithms for Scalable Synchronization on Shared-Memory Multiprocessors. John M. Mellor-Crummey, Michael L. Scott (1991) 2 / 57

  3. PGAS Languages ◮ Focus on Partitioned Global Address Space languages 3 / 57

  4. Partitioned Address Space (figure: one address space partitioned among threads T 1, T 2, ..., T n) 4 / 57

  5. One-Sided Communication (figure: threads T 1, ..., T n read and write each other's partitions directly) 5 / 57
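
One-sided communication means a thread reads or writes another thread's partition without the remote side taking part in the transfer. A minimal sketch, assuming plain shared memory stands in for the partitioned address space; the names `one_sided_put` and `one_sided_get` are illustrative, not an API from the talk:

```c
/* Hypothetical sketch: a partitioned global address space emulated with
 * one shared array; slice i belongs to thread i. */
#define NTHREADS 4
#define SLICE 8
static double global_space[NTHREADS * SLICE];

/* One-sided put: the caller writes into thread `target`'s partition
 * without any action by the target thread. */
void one_sided_put(int target, int offset, double value) {
    global_space[target * SLICE + offset] = value;
}

/* One-sided get: read from a remote partition, again without the owner. */
double one_sided_get(int target, int offset) {
    return global_space[target * SLICE + offset];
}
```

In a real PGAS runtime the put/get would traverse the network when the target partition is remote; on a single shared-memory node it degenerates to a load or store, which is why multicores are an interesting target for these languages.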

  6. PGAS Languages ◮ UPC, Unified Parallel C ◮ CAF, Co-array Fortran ◮ Titanium, a Java dialect 6 / 57

  7. Context ◮ The gap between processors and memory systems is still enormous 7 / 57

  8. http://images.bit-tech.net/content_images/2007/11/the_secrets_of_pc_memory_part_1/hei.png 8 / 57

  9. ◮ Today: processors don’t get faster, but we see more and more processors on a single chip 9 / 57

  10. Table: Experimental Platforms

      Processor          GHz    Cores (Threads)   Sockets
      Intel Clovertown   2.66   8 (8)             2
      AMD Barcelona      2.3    32 (32)           8
      Sun Niagara 2      1.4    32 (256)          4

      Nishtala, R., Yelick, K. Optimizing Collective Communication on Multicores 10 / 57

  11. Sun Niagara 2 http://www.rz.rwth-aachen.de/aw/cms/rz/Themen/hochleistungsrechnen/ rechnersysteme/beschreibung_der_hpc_systeme/ultrasparc_t2/ rba/ultrasparc_t2_architectural_details/?lang=de 11 / 57

  12. ◮ The number of processors on a chip grows at an exponential pace 12 / 57

  13. Intel Single-Chip Cloud Computer (48 Cores) http://techresearch.intel.com/ProjectDetails.aspx?Id=1 13 / 57

  14. ◮ Communication, in its most general form, is the movement of data within cores, between cores, or within memory systems 14 / 57

  15. (figure: multiple multicore processors and their RAM connected by a communication network) 15 / 57

  16. Collective Communication ◮ Communication-intensive problems often involve global communication 16 / 57

  17. Broadcast (figure: thread 1 sends one value to threads 2, 3, and 4) 17 / 57

  18. Gather (figure: threads 2, 3, and 4 send their values to thread 1) 18 / 57

  19. ◮ These operations are known as collective communication operations 19 / 57
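
On a shared-memory multicore, the broadcast and gather shown above reduce to copies through shared buffers. A hand-rolled sketch with synchronization omitted; `bcast_slot`, `gathered`, and the function names are invented for illustration, not an API from the talk:

```c
#define T 4                  /* number of participating threads */
static double bcast_slot;    /* root writes, everyone reads     */
static double gathered[T];   /* slot i belongs to thread i      */

/* Broadcast: the root publishes one value; after a barrier (omitted
 * here), all threads read the same slot. */
void broadcast_put(double v) { bcast_slot = v; }
double broadcast_get(void) { return bcast_slot; }

/* Gather: thread `id` deposits its value in its own slot; the root
 * later reads the whole array. */
void gather_put(int id, double v) { gathered[id] = v; }
double gather_read(int id) { return gathered[id]; }
```

The interesting part, and the subject of the rest of the talk, is the synchronization elided here: when may the readers start, and when may the writers reuse the buffer.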

  20. Example: Sum of Vector Elements 1 2 3 4 5 6 7 8 9 10 20 / 57

  21. Example: Sum of Vector Elements ◮ Create workers 1 2 3 4 5 6 7 8 9 10 W 1 W 2 W 3 W 4 W 5 21 / 57

  22. Example: Sum of Vector Elements ◮ Every worker sums up its part of the vector 1 2 3 4 5 6 7 8 9 10 3 7 11 15 19 22 / 57

  23. Example: Sum of Vector Elements ◮ The main thread gathers the partial results and sums them up: 3 + 7 + 11 + 15 + 19 = 55 23 / 57

  24. Example: Sum of Vector Elements Pseudocode (main thread):

      double[] vector = read_vector();
      Thread[] workers = spawn_workers();
      start_workers(workers);
      double result = calculate_result(workers);

      24 / 57

  25. Example: Sum of Vector Elements Pseudocode (main thread):

      double[] vector = read_vector();
      Thread[] workers = spawn_workers();
      start_workers(workers);
      wait_until_everything_finished(workers);
      double result = calculate_result(workers);

      25 / 57
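
A runnable version of the pseudocode above, assuming POSIX threads; `pthread_join` plays the role of `wait_until_everything_finished`, and names like `parallel_sum` and `worker_sum` are made up for this sketch:

```c
#include <pthread.h>

#define NWORKERS 5

struct task { const double *v; int lo, hi; double partial; };

/* Each worker sums up its part of the vector. */
static void *worker_sum(void *arg) {
    struct task *t = arg;
    double s = 0.0;
    for (int i = t->lo; i < t->hi; i++)
        s += t->v[i];
    t->partial = s;
    return NULL;
}

double parallel_sum(const double *v, int n) {
    pthread_t th[NWORKERS];
    struct task tasks[NWORKERS];
    int chunk = (n + NWORKERS - 1) / NWORKERS;
    for (int w = 0; w < NWORKERS; w++) {
        tasks[w].v = v;
        tasks[w].lo = w * chunk;
        tasks[w].hi = (w + 1) * chunk < n ? (w + 1) * chunk : n;
        tasks[w].partial = 0.0;
        pthread_create(&th[w], NULL, worker_sum, &tasks[w]);
    }
    /* pthread_join doubles as "wait until everything finished". */
    double result = 0.0;
    for (int w = 0; w < NWORKERS; w++) {
        pthread_join(th[w], NULL);
        result += tasks[w].partial;  /* main thread gathers partials */
    }
    return result;
}
```

Joining every worker is the simplest way to get the synchronization the buggy slide 24 lacked; the next slides replace it with a reusable barrier.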

  26. Barrier ◮ Synchronization method for a group of threads ◮ A thread can only continue its execution after every thread has called the barrier 26 / 57

  27. (figure: partial sums 3, 7, 11, 15, 19 combined into 55 after the barrier) 27 / 57

  28. Collective Communication Operation “ ... group of threads works together to perform a global communication operation ... ” 28 / 57

  29. Reduce ◮ Divide a problem into smaller subproblems ◮ Every thread contributes its part to the solution ◮ Example: Calculate the smallest entry of a vector 29 / 57
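
For the smallest-entry example, the operator each thread applies to its slice, and that the combine phase applies to the partial results, could look like this (a sequential sketch of just the reduction operator):

```c
/* Reduce operator for the "smallest entry" example: min over a slice. */
double reduce_min(const double *v, int n) {
    double m = v[0];
    for (int i = 1; i < n; i++)
        if (v[i] < m)
            m = v[i];
    return m;
}
```

Because min is associative, the partial results can be combined in any order and in any topology, which is what the next slides explore.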

  30. Flat vs. Tree ◮ For communication among threads, different topologies can be used 30 / 57

  31. Flat Topology ◮ Example: a reduce operation ◮ In the end, the main thread W main has to wait for every worker thread W 1, ..., W 7 (figure: W main, W 1, ..., W 7) 31 / 57

  32.–38. (animation: W main collects the contributions of W 1 through W 7 one after another, needing seven sequential steps) 32–38 / 57
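
The flat combine shown step by step above amounts to a single loop: W main alone visits every worker's contribution, so seven workers cost seven sequential steps. A sketch in which `partial[0]` is assumed to hold W main's own value:

```c
/* Flat-topology combine: the main thread sweeps over all contributions,
 * one sequential step per worker. */
double flat_reduce(const double *partial, int nworkers) {
    double acc = partial[0];              /* W main's own value */
    for (int i = 1; i <= nworkers; i++)   /* wait on W 1 .. W n in turn */
        acc += partial[i];
    return acc;
}
```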

  39. Tree Topology ◮ Example: a reduce operation ◮ In the end, the main thread W main has to wait for every worker thread W 1, ..., W 7 (figure: W main, W 1, ..., W 7) 39 / 57

  40.–42. (animation: workers combine pairwise; after round one only W main, W 2, W 4, W 6 still hold values, after round two W main and W 4, and after round three W main holds the result: three rounds instead of seven steps) 40–42 / 57
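
The tree combine above is pairwise merging: each round halves the number of active values, so 8 participants finish in log2 8 = 3 rounds instead of 7 sequential steps. A sketch assuming the participant count is a power of two (at most 64 here):

```c
#include <string.h>

/* Tree-topology combine: even slots absorb their odd neighbours,
 * doubling the stride each round. */
double tree_reduce(const double *vals, int n) {
    double buf[64];                          /* n must be <= 64 */
    memcpy(buf, vals, n * sizeof(double));
    for (int stride = 1; stride < n; stride *= 2)
        for (int i = 0; i + stride < n; i += 2 * stride)
            buf[i] += buf[i + stride];
    return buf[0];
}
```

In a threaded implementation each round is a parallel step, so the critical path shrinks from O(n) to O(log n), which is why the tree wins on large thread counts.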

  43. Analysis (Figure: Barrier Performance) 43 / 57

  44. Barrier Implementation

      #define N 4
      pthread_t threads[N];
      volatile int ready[N];
      volatile int go[N];

      void barrier(int id) {
          if (id == 0) {
              // wait for each thread
              for (int i = 1; i < N; i++)
                  while (ready[i] == 0);
              // reset the ready flags
              for (int i = 0; i < N; i++)
                  ready[i] = 0;
              // signal each thread
              for (int i = 0; i < N; i++)
                  go[i] = 1;
          } else {
              ready[id] = 1;
              // wait until thread is signalled
              while (go[id] == 0);
              go[id] = 0;
          }
      }

      44 / 57

  45. Experiment: Barrier Implementation 45 / 57

  46. ◮ Strict synchronization: Data movement can only start after all threads have entered the collective and must be completed before the first thread exits the collective 46 / 57

  47. Strict Synchronization (figure: values v 1, ..., v 8 and threads T 1, ..., T 7; no data moves outside the collective) 47 / 57

  48. Loosening Synchronization Requirements ◮ Loose synchronization: Data movement can begin as soon as any thread has entered the collective and continue until the last thread leaves the collective 48 / 57

  49.–52. Loose Synchronization (animation: values v 1, ..., v 8 stream between threads T 1, ..., T 7 while threads are still entering and leaving the collective) 49–52 / 57

  53. (results plot: 32 cores, 256 threads) 53 / 57

  54. (results plot: 8 cores, 8 threads) 54 / 57

  55. (results plot: 32 cores, 32 threads) 55 / 57

  56. Summary ◮ The best strategy depends on the hardware and on the problem ◮ A library that automatically adapts to a given situation can bring great performance improvements, since hand tuning every platform takes far too long 56 / 57

  57. Remarks on the Paper ◮ Very high level ◮ Describes the problem without a concrete solution ◮ No implementation ◮ Plots aren't always clear and precise 57 / 57
