  1. Vorlesung Datenstrukturen und Algorithmen (Data Structures and Algorithms), Last Lecture 2018. Felix Friedrich, 30.5.2018. Topics: Map/Reduce, Sorting Networks, Exam

  2. MAP AND REDUCE AND MAP/REDUCE

  3. Summing a Vector: accumulator vs. divide and conquer. [figure: a sequential chain of + operations vs. a binary tree of + operations] Q: Why is the result the same? A: Associativity: (a+b) + c = a + (b+c)

  4. Summing a Vector: is this correct? [figure: a divide-and-conquer summation tree whose leaves take the operands in a different order] Only if the operation is commutative: a + b = b + a

  5. Reductions. Simple examples: sum, max. Reductions over programmer-defined operations: the operation's properties (associativity / commutativity) determine which executions are correct; supported in most parallel languages / frameworks; a powerful construct.

  6. C++ Reduction. std::accumulate is a sequential left fold; parallelizing it as a tree reduction requires an associative operation. std::reduce (C++17) may additionally reorder the operands, so the operation should also be commutative; an execution policy can be specified.

  std::vector<double> v;
  ...
  double result = std::accumulate(
      v.begin(), v.end(), 0.0,
      [](double a, double b) { return a + b; });
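  As a complement to the accumulate call above, a minimal sketch of the std::reduce variant with a parallel execution policy (assumes a C++17 standard library with parallel algorithm support; parallel_sum is an illustrative name, not from the slides):

  #include <execution>  // std::execution::par (C++17 parallel algorithms)
  #include <numeric>    // std::reduce
  #include <vector>

  double parallel_sum(const std::vector<double>& v) {
      // std::reduce may regroup and reorder the additions, which is why the
      // operation has to be associative and commutative.
      return std::reduce(std::execution::par, v.begin(), v.end(), 0.0);
  }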

  7. Elementwise Multiplication. Map: multiply the vectors element by element. [figure: two vectors combined elementwise with x]

  8. Scalar Product. Map: multiply elementwise (x). Reduce: accumulate the products (+). [figure: the elementwise multiplications followed by a summation tree]

  9. C++ Scalar Product (map + reduce)

  // example data
  std::vector<double> v1(1024, 0.5);
  auto v2 = v1;
  std::vector<double> result(1024);

  // map
  std::transform(v1.begin(), v1.end(), v2.begin(), result.begin(),
                 [](double a, double b) { return a * b; });

  // reduce
  double value = std::accumulate(result.begin(), result.end(), 0.0); // = 256
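  As a side note not on the slide: C++17's std::transform_reduce fuses the map and the reduce into one call, avoiding the intermediate result vector. A minimal sketch under that assumption (scalar_product is an illustrative name):

  #include <numeric>   // std::transform_reduce (C++17)
  #include <vector>

  double scalar_product(const std::vector<double>& v1,
                        const std::vector<double>& v2) {
      // defaults: elementwise multiplication (map), summation (reduce)
      return std::transform_reduce(v1.begin(), v1.end(), v2.begin(), 0.0);
  }
  // with the example data above: scalar_product(v1, v2) == 256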

  10. Map & Reduce = MapReduce Combination of two parallelisation patterns result = 𝑔 in 1 ⊕ 𝑔 in 2 ⊕ 𝑔(in 3 ) ⊕ 𝑔(in 4 ) 𝑔 = map ⊕ = reduce (associative) Examples: numerical reduction, word count in document, (word, document) list, maximal temperature per month over 50 years (etc.) 10

  11. Motivating Example: Maximal Temperature per Month for 50 Years. Input: 50 * 365 day / temperature pairs. Output: 12 month / max temperature pairs. Assume we (you and I) had to do this together: how would we distribute the work? What is the generic model? How would we be ideally prepared for different reductions (min, max, avg)?

  12. Maximal Temperature per Month: Map. [figure: blocks of day/temperature pairs such as 01/5, 05/20, 03/14, ...] Each map process gets day/temperature pairs and maps them to month/temperature pairs.

  13. Maximal Temperature per Month: Shuffle. [figure: the month/temperature pairs grouped into columns Jan through Dec, e.g. 01/-5, 01/-8, 01/-8 under Jan] The data gets sorted / shuffled by month.

  14. Maximal Temperature per Month: Reduce. [figure: each month column is reduced to a single maximum value] Each reduce process gets its own month with its values and applies the reduction (here: max value) to it.
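  To make the three stages concrete, here is a small sequential C++ sketch of the temperature example. The record layout, the date format, and the names Reading and maxPerMonth are assumptions for illustration, not part of the slides.

  #include <algorithm>
  #include <map>
  #include <string>
  #include <vector>

  struct Reading { std::string date; double temp; };   // date assumed as "YYYY-MM-DD"

  std::map<int, double> maxPerMonth(const std::vector<Reading>& input) {
      // map + shuffle: emit (month, temperature) pairs and group them by month
      std::map<int, std::vector<double>> byMonth;
      for (const auto& r : input) {
          int month = std::stoi(r.date.substr(5, 2));   // "05" -> 5
          byMonth[month].push_back(r.temp);
      }
      // reduce: maximum per month
      std::map<int, double> result;
      for (const auto& [month, temps] : byMonth)
          result[month] = *std::max_element(temps.begin(), temps.end());
      return result;
  }

  Swapping max_element for a minimum or a sum is all it takes to be prepared for the other reductions asked about on slide 11.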

  15. Map/Reduce: a strategy for implementing parallel algorithms. map: a master worker takes the problem input, divides it into smaller sub-problems, and distributes the sub-problems to workers (threads). reduce: the master worker collects sub-solutions from the workers and combines them in some way to produce the overall answer.

  16. Map/Reduce. Frameworks and tools have been written to perform map/reduce: the MapReduce framework by Google; the Hadoop framework by Yahoo! (now an Apache project); related to the ideas of Big Data and Cloud Computing; also related to functional programming (and actually not that new), available with the Streams concept in Java (>= 8). Map and reduce are user-supplied plug-ins; the rest is provided by the framework. Jeffrey Dean and Sanjay Ghemawat. 2008. MapReduce: simplified data processing on large clusters. Commun. ACM 51, 1 (January 2008), 107-113. DOI: 10.1145/1327452.1327492, http://doi.acm.org/10.1145/1327452.1327492

  17. MapReduce on Clusters. You may have heard of Google's "map/reduce" or Apache's Hadoop. Idea: perform maps/reduces on data using many machines. The system takes care of distributing the data and managing fault tolerance; you just write code to map one element (key-value pair) and to reduce elements (key-value pairs) to a combined result. This separates how to do recursive divide-and-conquer from what computation to perform, an old idea from higher-order functional programming transferred to large-scale distributed computing.

  18. Example: count word occurrences in a very large file (file = GBytes). "how are you today do you like the weather outside I do I do I wish you the very best for your exams."

  19. Mappers. [figure: a huge file is split into parts; part 1 goes to mapper 1, part 2 to mapper 2, part 3 to mapper 3 (distributed). Each mapper receives key/value pairs, e.g. position / string: <0, "how are you today">, <15, "do you like ...">, <35, "I do">, <39, "I do">, <43, "I wish you the very best">, <70, "for your exams">]

  20. Mappers. [figure: each mapper turns its input key/value pairs (position / string) into output key/value pairs (word, count): mapper 1 emits <"how",1>, <"are",1>, <"you",1>, ...; mapper 2 emits <"I",1>, <"do",1>, <"I",1>, <"do",1>; mapper 3 emits <"I",1>, <"wish",1>, <"you",1>, ...]

  21. Shuffle / Sort. [figure: the mappers' key/value pairs (word, count) are shuffled and grouped into unique key/value pairs (word, counts) and distributed to the reducers: reducer 1 gets <"are",1>, <"best",1>, ...; reducer 2 gets <"do",1,1,1>, <"for",1>, ...]

  22. Reduce. [figure: each reducer turns its unique key/value pairs (word, counts) into entries of the target file(s): reducer 1 produces are 1, best 1, ..., you 3; reducer 2 produces do 3, for 1, ..., I 3]
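  A minimal sequential sketch of the whole word-count pipeline from slides 19 to 22; the function name wordCount and the single-process setup are illustrative assumptions, a real framework distributes the three stages:

  #include <iostream>
  #include <map>
  #include <sstream>
  #include <string>

  std::map<std::string, int> wordCount(const std::string& text) {
      std::map<std::string, int> counts;   // plays the role of the shuffle: groups by key
      std::istringstream in(text);
      std::string word;
      while (in >> word)        // map: emit (word, 1)
          counts[word] += 1;    // reduce: sum the counts per word
      return counts;
  }

  int main() {
      auto counts = wordCount("how are you today do you like the weather outside "
                              "I do I do I wish you the very best for your exams");
      std::cout << "do: " << counts["do"] << "\n";   // prints do: 3
  }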

  23. SORTING NETWORKS

  24. Lower bound on sorting.
  Horrible algorithms, Ω(n²): Bogo Sort, Stooge Sort.
  Simple algorithms, O(n²): Insertion sort, Selection sort, Bubble Sort, Shell sort, ...
  Fancier algorithms, O(n log n): Heap sort, Merge sort, Quick sort (avg), ...
  Comparison sorting lower bound: Ω(n log n)
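  The lower bound in the last line follows from the standard decision-tree argument, which the slide leaves implicit: a comparison sort must distinguish all $n!$ input orderings, so its decision tree has at least $n!$ leaves and therefore height at least $\log_2(n!) = \Theta(n \log n)$.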

  25. Comparator. [figure: a comparator takes inputs x and y and outputs min(x,y) on the x wire and max(x,y) on the y wire; a shorter notation draws it as a vertical connection between the two wires]

  26.
  void compare(int& a, int& b, bool dir) {
      // swap whenever the order of a and b does not match the direction dir
      if (dir == (a > b)) {
          std::swap(a, b);
      }
  }
  [figure: the two comparator directions, a > b and a < b]

  27. Sorting Networks. [figure: an example network on four wires; the values 1, 3, 4, 5 enter in some order, flow through the comparators, and leave the network in sorted order]

  28. Sorting Networks are Oblivious (and Redundant). [figure: the oblivious comparison tree for four inputs, starting with the comparisons 1:2 and 3:4; several of its branches are redundant cases]

  29. Recursive Construction: Insertion. [figure: a sorting network for the inputs x_1 ... x_n, followed by a chain of comparators that inserts x_{n+1} into the sorted output]

  30. Recursive Construction: Selection. [figure: a chain of comparators first moves the extreme element of x_1 ... x_{n+1} to one end; the remaining n elements are then sorted by a sorting network]

  31. Applied recursively, the insertion construction yields insertion sort and the selection construction yields bubble sort. With parallelism: insertion sort = bubble sort!

  32. Question: how many steps does a computer with an infinite number of processors (comparators) require in order to sort using parallel bubble sort? Answer: 2n - 3. Can this be improved? How many comparisons? Answer: n(n-1)/2. How many comparators are required (at a time)? Answer: n/2. Reusable comparators: n - 1.

  33. Improving parallel Bubble Sort. Odd-Even Transposition Sort:

  Step 0: 9 8 2 7 3 1 5 6 4
  Step 1: 8 9 2 7 1 3 5 6 4
  Step 2: 8 2 9 1 7 3 5 4 6
  Step 3: 2 8 1 9 3 7 4 5 6
  Step 4: 2 1 8 3 9 4 7 5 6
  Step 5: 1 2 3 8 4 9 5 7 6
  Step 6: 1 2 3 4 8 5 9 6 7
  Step 7: 1 2 3 4 5 8 6 9 7
  Step 8: 1 2 3 4 5 6 8 7 9
  Result: 1 2 3 4 5 6 7 8 9 (sorted)

  34.
  template <typename T>
  void oddEvenTranspositionSort(std::vector<T>& v, bool dir) {
      const int n = v.size();
      for (int i = 0; i < n; ++i) {
          // alternate between even and odd phases; compare adjacent pairs
          for (int j = i % 2; j + 1 < n; j += 2)
              compare(v[j], v[j + 1], dir);
      }
  }
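  A small usage sketch for the routine above, fed with the input sequence from the previous slide. It relies on the compare and oddEvenTranspositionSort definitions from slides 26 and 34; the main function and the convention that dir = true means ascending are assumptions:

  #include <algorithm>
  #include <cassert>
  #include <vector>

  int main() {
      std::vector<int> v = {9, 8, 2, 7, 3, 1, 5, 6, 4};
      oddEvenTranspositionSort(v, true);            // true: ascending (assumed)
      assert(std::is_sorted(v.begin(), v.end()));   // 1 2 3 4 5 6 7 8 9
  }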

  35. Improvement? Same number of comparators (at a time), same number of comparisons, but fewer parallel steps (depth): n. In a massively parallel setup, bubble sort is thus not too bad. But it can be done better...

  36. Parallel sorting. [figure: two sorting networks on four inputs, one of depth 3 and one of depth 4] Prove that the two networks above sort four numbers. Easy?
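  One way to convince yourself is brute force. The sketch below checks a standard depth-3 network for four inputs, with comparator layers (0,1),(2,3), then (0,2),(1,3), then (1,2), on all permutations of four distinct values; the exact wiring of the networks in the figure is an assumption, since the figure is not reproduced here.

  #include <algorithm>
  #include <array>
  #include <cassert>
  #include <utility>

  // one comparator: the smaller value ends up on the lower-indexed wire
  void comp(std::array<int, 4>& a, int i, int j) {
      if (a[i] > a[j]) std::swap(a[i], a[j]);
  }

  int main() {
      std::array<int, 4> p = {1, 2, 3, 4};
      do {
          auto a = p;
          comp(a, 0, 1); comp(a, 2, 3);   // layer 1 (comparators run in parallel)
          comp(a, 0, 2); comp(a, 1, 3);   // layer 2
          comp(a, 1, 2);                  // layer 3
          assert(std::is_sorted(a.begin(), a.end()));
      } while (std::next_permutation(p.begin(), p.end()));
  }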
