
HaLoop: Efficient Iterative Data Processing On Large Scale Clusters



  1. HaLoop: Efficient Iterative Data Processing On Large Scale Clusters
     Yingyi Bu, UC Irvine; Bill Howe, UW; Magda Balazinska, UW; Michael Ernst, UW
     Horizon: http://clue.cs.washington.edu/ and http://escience.washington.edu/
     Award IIS 0844572, Cluster Exploratory (CluE)
     VLDB 2010, Singapore

  2. Thesis in one slide
      Observation: MapReduce has proven successful as a common runtime for non-recursive declarative languages
        HIVE (SQL)
        Pig (RA with nested types)
      Observation: Many people roll their own loops
        Graphs, clustering, mining, recursive queries
        Iteration managed by an external script
      Thesis: With minimal extensions, we can provide an efficient common runtime for recursive languages
        Map, Reduce, Fixpoint (see the sketch below)
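
     As a rough illustration of what "Map, Reduce, Fixpoint" as first-class loop constructs buys over a hand-rolled driver script, here is a minimal plain-Java sketch of the control flow the thesis argues the runtime itself should own. Everything here (IterativeJob, runLoopBody, distanceToPrevious, the threshold) is a hypothetical name for illustration, not HaLoop's actual API.

         import java.util.Map;

         // Hypothetical shape of an iterative job: one map+reduce pass as the loop body,
         // plus a fixpoint metric the runtime can evaluate, instead of an external script.
         interface IterativeJob {
             Map<String, Double> runLoopBody(Map<String, Double> current);
             double distanceToPrevious(Map<String, Double> previous, Map<String, Double> next);
         }

         public class FixpointDriver {
             static Map<String, Double> runToFixpoint(IterativeJob job, Map<String, Double> initial,
                                                      double threshold, int maxIterations) {
                 Map<String, Double> current = initial;
                 for (int i = 0; i < maxIterations; i++) {
                     Map<String, Double> next = job.runLoopBody(current);                   // "Map, Reduce"
                     boolean converged = job.distanceToPrevious(current, next) < threshold; // "Fixpoint"
                     current = next;
                     if (converged) break;
                 }
                 return current;
             }
         }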

  3. Related Work: Twister [Ekanayake HPDC 2010]
      Redesigned evaluation engine using pub/sub
      Termination condition evaluated by main():

         while (!complete) {
             monitor = driver.runMapReduceBCast(cData);
             monitor.monitorTillCompletion();
             DoubleVectorData newCData =
                 ((KMeansCombiner) driver.getCurrentCombiner()).getResults();
             totalError = getError(cData, newCData);   // O(k)
             cData = newCData;
             if (totalError < THRESHOLD) {
                 complete = true;
                 break;
             }
         }

  4. In Detail: PageRank (Twister)

         while (!complete) {
             // start the pagerank map reduce process  (run MR)
             monitor = driver.runMapReduceBCast(new BytesValue(tmpCompressedDvd.getBytes()));
             monitor.monitorTillCompletion();
             // get the result of process
             newCompressedDvd = ((PageRankCombiner) driver.getCurrentCombiner()).getResults();
             // decompress the compressed pagerank values: O(N) in the size of the graph
             newDvd = decompress(newCompressedDvd);
             tmpDvd = decompress(tmpCompressedDvd);
             // get the difference between new and old pagerank values
             totalError = getError(tmpDvd, newDvd);
             if (totalError < tolerance) {   // termination condition
                 complete = true;
             }
             tmpCompressedDvd = newCompressedDvd;
         }

  5. Related Work: Spark [Zaharia HotCloud 2010]
      Reduction output collected at driver program
      “…does not currently support a grouped reduce operation as in MapReduce”

         val spark = new SparkContext(<Mesos master>)
         var count = spark.accumulator(0)
         for (i <- spark.parallelize(1 to 10000, 10)) {
             val x = Math.random * 2 - 1
             val y = Math.random * 2 - 1
             if (x*x + y*y < 1) count += 1     // all output sent to the driver
         }
         println("Pi is roughly " + 4 * count.value / 10000.0)

  6. Related Work: Pregel [Malewicz PODC 2009]
      Graphs only
        Not a natural fit for clustering: k-means, canopy, DBScan
      Assumes each vertex has access to its outgoing edges
        So an edge representation Edge(from, to) requires offline preprocessing, perhaps using MapReduce

  7. Related Work: Piccolo [Power OSDI 2010]
      Partitioned table data model, with user-defined partitioning
      Programming model: message-passing with global synchronization barriers
      User can give locality hints, e.g. GroupTables(curr, next, graph)
      Worth exploring a direct comparison

  8. Related Work: BOOM [cf. Alvaro EuroSys 2010]
      Distributed computing based on Overlog (Datalog + temporal logic + more)
      Recursion supported naturally
        app: API-compliant implementation of MapReduce
      Worth exploring a direct comparison

  9. Details
      Architecture
      Programming Model
      Caching (and Indexing)
      Scheduling

  10. Example 1: PageRank

      Rank table R_0 (url, rank):
          www.a.com 1.0; www.b.com 1.0; www.c.com 1.0; www.d.com 1.0; www.e.com 1.0

      Linkage table L (url_src, url_dest):
          (www.a.com, www.b.com), (www.a.com, www.c.com), (www.c.com, www.a.com),
          (www.e.com, www.c.com), (www.d.com, www.b.com), (www.c.com, www.e.com),
          (www.e.com, www.c.om), (www.a.com, www.d.com)

      Rank table R_3 (url, rank):
          www.a.com 2.13; www.b.com 3.89; www.c.com 2.60; www.d.com 2.60; www.e.com 2.13

      Loop body (relational algebra):
          R_{i+1} = π(url_dest, γ_{url_dest} SUM(rank)) ( R_i ⋈_{R_i.url = L.url_src} L ),
          where the rank carried into the aggregation is R_i.rank / (γ_url COUNT(url_dest)),
          i.e. each source url's rank divided by its out-degree.

  11. A MapReduce Implementation
      Each iteration runs a chain of MapReduce jobs:
        Job 1 (join & compute rank): mappers read R_i plus the linkage splits L-split0 and L-split1; reducers join on url and compute each source's rank contributions
        Job 2 (aggregate): sums the contributions per destination url to produce R_{i+1}
        Job 3 (fixpoint evaluation): checks "Converged?"; the client either sets i = i+1 and repeats, or declares the computation done
      (an in-memory sketch of this loop body appears below)
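
     To make the loop body concrete, here is a minimal in-memory sketch (plain Java, not actual Hadoop code) of one iteration: the "join" of R_i with L on url = url_src, followed by the "aggregate" that sums contributions per url_dest. The toy tables loosely follow the example slide, and no damping factor is used, matching the relational-algebra form shown there.

         import java.util.HashMap;
         import java.util.List;
         import java.util.Map;

         public class PageRankStep {
             // One loop body: new_rank(dest) = SUM over links (src -> dest) of rank(src) / outdegree(src)
             static Map<String, Double> step(Map<String, Double> rank, Map<String, List<String>> links) {
                 Map<String, Double> next = new HashMap<>();
                 for (String url : rank.keySet()) next.put(url, 0.0);          // every url starts at 0
                 for (Map.Entry<String, List<String>> e : links.entrySet()) {  // "join" on url = url_src
                     double share = rank.get(e.getKey()) / e.getValue().size();
                     for (String dest : e.getValue()) next.merge(dest, share, Double::sum);  // "aggregate"
                 }
                 return next;
             }

             public static void main(String[] args) {
                 Map<String, Double> r0 = new HashMap<>();
                 for (String u : List.of("www.a.com", "www.b.com", "www.c.com", "www.d.com", "www.e.com"))
                     r0.put(u, 1.0);                                           // uniform initial ranks
                 Map<String, List<String>> l = new HashMap<>();
                 l.put("www.a.com", List.of("www.b.com", "www.c.com", "www.d.com"));
                 l.put("www.c.com", List.of("www.a.com", "www.e.com"));
                 l.put("www.d.com", List.of("www.b.com"));
                 l.put("www.e.com", List.of("www.c.com"));
                 System.out.println(step(r0, l));                              // R_1 for this toy graph
             }
         }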

  12. What’s the problem?
      Same dataflow as the previous slide. L is loop invariant, but:
        1. L is loaded on each iteration
        2. L is shuffled on each iteration
      plus
        3. The fixpoint is evaluated as a separate MapReduce job per iteration

  13. Example 2: Transitive Closure
      Find all transitive friends of Eric over the Friend relation (semi-naïve evaluation):
        R_0 = {(Eric, Eric)}
        R_1 = {(Eric, Elisa)}
        R_2 = {(Eric, Tom), (Eric, Harry)}
        R_3 = {}   (nothing new, so stop)
      (a small in-memory sketch of semi-naïve evaluation appears below)
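
     A minimal in-memory sketch of semi-naïve evaluation for this query, in plain Java: each round joins only the newly discovered friends (the delta) against the Friend relation and discards tuples already seen, stopping when the delta is empty. The tiny friend graph is illustrative and only loosely mirrors the Eric/Elisa/Tom/Harry example.

         import java.util.HashMap;
         import java.util.HashSet;
         import java.util.Map;
         import java.util.Set;

         public class TransitiveFriends {
             static Set<String> transitiveFriends(String start, Map<String, Set<String>> friend) {
                 Set<String> known = new HashSet<>();                                      // all friends found so far
                 Set<String> delta = new HashSet<>(friend.getOrDefault(start, Set.of()));  // first generation
                 while (!delta.isEmpty()) {                                                // fixpoint: "anything new?"
                     known.addAll(delta);
                     Set<String> next = new HashSet<>();
                     for (String person : delta)                                           // join only the delta with Friend
                         next.addAll(friend.getOrDefault(person, Set.of()));
                     next.removeAll(known);                                                // duplicate elimination
                     delta = next;
                 }
                 return known;
             }

             public static void main(String[] args) {
                 Map<String, Set<String>> friend = new HashMap<>();
                 friend.put("Eric", Set.of("Elisa"));
                 friend.put("Elisa", Set.of("Tom", "Harry"));
                 System.out.println(transitiveFriends("Eric", friend));  // contains Elisa, Tom, Harry
             }
         }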

  14. Example 2 in MapReduce
      Each iteration runs two MapReduce jobs over S_i and the Friend splits (Friend0, Friend1):
        Join: compute the next generation of friends
        Dupe-elim: remove the ones we’ve already seen
      The client asks "Anything new?"; if yes, i = i+1 and repeat, otherwise done.

  15. What’s the problem?
      Same dataflow as the previous slide. Friend is loop invariant, but:
        1. Friend is loaded on each iteration
        2. Friend is shuffled on each iteration

  16. Example 3: k-means
      k_i = the k centroids at iteration i
      Each iteration is a single MapReduce job: mappers read the point partitions P0, P1, P2 together with the current centroids k_i; reducers emit the new centroids k_{i+1}.
      The client checks |k_i - k_{i+1}| < threshold; if not, i = i+1 and repeat, otherwise done.
      (an in-memory sketch of this loop appears below)
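
     A minimal in-memory sketch of this loop in plain Java (not Hadoop code): the "map" side assigns each point to its nearest centroid, the "reduce" side averages the points assigned to each centroid, and the driver stops once the centroids move by less than a threshold. The one-dimensional points and data values are purely illustrative.

         import java.util.Arrays;

         public class KMeansLoop {
             // One iteration: assign points to nearest centroid ("map"), then recompute centroids ("reduce").
             static double[] step(double[] points, double[] centroids) {
                 double[] sum = new double[centroids.length];
                 int[] count = new int[centroids.length];
                 for (double p : points) {
                     int best = 0;
                     for (int c = 1; c < centroids.length; c++)
                         if (Math.abs(p - centroids[c]) < Math.abs(p - centroids[best])) best = c;
                     sum[best] += p;
                     count[best]++;
                 }
                 double[] next = centroids.clone();
                 for (int c = 0; c < centroids.length; c++)
                     if (count[c] > 0) next[c] = sum[c] / count[c];
                 return next;
             }

             public static void main(String[] args) {
                 double[] points = {1.0, 1.2, 0.8, 9.7, 10.1, 10.4};   // loop-invariant point set P
                 double[] k = {0.0, 5.0};                              // k_0: initial centroids
                 for (int i = 0; i < 20; i++) {                        // driver: |k_i - k_{i+1}| < threshold?
                     double[] next = step(points, k);
                     double moved = 0;
                     for (int c = 0; c < k.length; c++) moved += Math.abs(next[c] - k[c]);
                     k = next;
                     if (moved < 1e-6) break;                          // fixpoint reached
                 }
                 System.out.println(Arrays.toString(k));               // roughly [1.0, 10.07]
             }
         }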

  17. What’s the problem?
      Same dataflow as the previous slide (k_i = the k centroids at iteration i). The point set P is loop invariant, but:
        1. P is loaded on each iteration

  18. Approach: Inter-iteration caching
      The loop body (a pipeline of map and reduce steps) is instrumented with four caches:
        Mapper input cache (MI)
        Mapper output cache (MO)
        Reducer input cache (RI)
        Reducer output cache (RO)

  19. RI: Reducer Input Cache
       Provides:
         Access to loop-invariant data without a map/shuffle phase
       Used by:
         The reducer function
       Assumes:
        1. Mapper output for a given table is constant across iterations
        2. Static partitioning (implies: no new nodes)
       PageRank
         Avoids shuffling the linkage table over the network at every step
       Transitive Closure
         Avoids shuffling the graph at every step
       K-means
         No help
      (a sketch of the idea appears below)
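
     A minimal in-memory analogue (not real HaLoop/Hadoop code) of the reducer input cache idea: the loop-invariant side of the join, e.g. the linkage table L or the Friend table, is shuffled once and kept at the reduce side, so later iterations ship only the loop-varying table. The class and method names are illustrative, not HaLoop's API.

         import java.util.HashMap;
         import java.util.List;
         import java.util.Map;

         public class ReducerInputCache<K, V> {
             private Map<K, List<V>> invariant;                 // filled once, reused every iteration

             boolean isLoaded() { return invariant != null; }

             void load(Map<K, List<V>> invariantReducerInput) { // iteration 0: pay the map/shuffle cost once
                 invariant = new HashMap<>(invariantReducerInput);
             }

             // Later iterations: only the loop-varying table arrives over the network; the reducer
             // joins it against the locally cached invariant values for the same key.
             List<V> cachedValuesFor(K key) {
                 return invariant.getOrDefault(key, List.of());
             }

             public static void main(String[] args) {
                 ReducerInputCache<String, String> cache = new ReducerInputCache<>();
                 if (!cache.isLoaded()) {                       // first iteration only
                     cache.load(Map.of("www.a.com", List.of("www.b.com", "www.c.com", "www.d.com")));
                 }
                 // A later iteration: the shuffled R_i tuple for www.a.com meets the cached L tuples.
                 System.out.println(cache.cachedValuesFor("www.a.com"));  // [www.b.com, www.c.com, www.d.com]
             }
         }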

  20. Reducer Input Cache Benefit
      Transitive Closure; Billion Triples Dataset (120GB); 90 small instances on EC2
      [Figure: overall run time]

  21. Reducer Input Cache Benefit
      Transitive Closure; Billion Triples Dataset (120GB); 90 small instances on EC2; Livejournal, 12GB
      [Figure: join step only]

  22. Reducer Input Cache Benefit
      Transitive Closure; Billion Triples Dataset (120GB); 90 small instances on EC2; Livejournal, 12GB
      [Figure: reduce and shuffle of the join step]

  23. [Figure: the PageRank dataflow from slide 11 (join & compute rank, aggregate, fixpoint evaluation over R_i and the L-splits), with the total cost highlighted]

  24. RO: Reducer Output Cache
       Provides:
         Distributed access to the output of previous iterations
       Used by:
         Fixpoint evaluation
       Assumes:
        1. Partitioning is constant across iterations
        2. The reducer output key functionally determines the reducer input key
       PageRank
         Allows distributed fixpoint evaluation
         Obviates the extra MapReduce job
       Transitive Closure
         No help
       K-means
         No help
      (a sketch of the idea appears below)
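
     A minimal in-memory analogue of the reducer output cache idea, again illustrative rather than HaLoop's actual API: each reduce task keeps its previous iteration's output locally and computes its partition's contribution to the fixpoint metric as it writes the new output, so the master only has to sum these per-partition numbers instead of launching a separate MapReduce job over the whole result.

         import java.util.HashMap;
         import java.util.Map;

         public class ReducerOutputCache {
             private Map<String, Double> previous = new HashMap<>();   // this partition's last output

             // Called as the reducer emits the current iteration's output for this partition:
             // returns the local contribution to the global fixpoint metric and updates the cache.
             double updateAndMeasure(Map<String, Double> current) {
                 double localDelta = 0.0;
                 for (Map.Entry<String, Double> e : current.entrySet()) {
                     localDelta += Math.abs(e.getValue() - previous.getOrDefault(e.getKey(), 0.0));
                 }
                 previous = new HashMap<>(current);
                 return localDelta;   // the master sums these per-partition deltas and compares to a threshold
             }

             public static void main(String[] args) {
                 ReducerOutputCache partition = new ReducerOutputCache();
                 partition.updateAndMeasure(Map.of("www.a.com", 1.0, "www.b.com", 1.0));
                 double delta = partition.updateAndMeasure(Map.of("www.a.com", 0.9, "www.b.com", 1.2));
                 System.out.println(delta);   // about 0.1 + 0.2 = 0.3 for this partition
             }
         }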

  25. Reducer Output Cache Benefit
      [Figures: fixpoint evaluation time (s) vs. iteration #; left: Livejournal dataset on 50 EC2 small instances, right: Freebase dataset on 90 EC2 small instances]
