
MapReduce: Simplified Data Processing on Large Clusters, J. Dean, S. Ghemawat



  1. MapReduce: Simplified Data Processing on Large Clusters J. Dean, S. Ghemawat, OSDI, 2004. Review by Mariana Marasoiu for R212

  2. Motivation: Large scale data processing
     We want to:
     - Extract data from large datasets
     - Run on big clusters of computers
     - Be easy to program

  3. Solution: MapReduce
     A new programming model: Map & Reduce
     Provides:
     - Automatic parallelization and distribution
     - Fault tolerance
     - I/O scheduling
     - Status and monitoring

  4. Map
     map(in_key, in_value) → list(out_key, intermediate_value)
     (1, "you are in Cambridge") → (you, 1) (are, 1) (in, 1) (Cambridge, 1)
     (2, "I like Cambridge")     → (I, 1) (like, 1) (Cambridge, 1)
     (3, "we live in Cambridge") → (we, 1) (live, 1) (in, 1) (Cambridge, 1)

  5. Map output: the full stream of intermediate pairs
     (you, 1) (are, 1) (in, 1) (Cambridge, 1) (I, 1) (like, 1) (Cambridge, 1) (we, 1) (live, 1) (in, 1) (Cambridge, 1)

  6. Partition
     Intermediate pairs are grouped by key:
     (you, 1)
     (are, 1)
     (in, 1) (in, 1)
     (Cambridge, 1) (Cambridge, 1) (Cambridge, 1)
     (I, 1)
     (like, 1)
     (we, 1)
     (live, 1)

  7. Partition → Reduce
     reduce(out_key, list(intermediate_value)) → list(out_value)
     (you, [1])            → (you, 1)
     (are, [1])            → (are, 1)
     (in, [1, 1])          → (in, 2)
     (Cambridge, [1, 1, 1]) → (Cambridge, 3)
     (I, [1])              → (I, 1)
     (like, [1])           → (like, 1)
     (we, [1])             → (we, 1)
     (live, [1])           → (live, 1)
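The word-count example on slides 4-7 can be sketched directly in Python. This is a minimal single-process sketch, not the paper's distributed implementation; the names `map_fn` and `reduce_fn` are illustrative stand-ins for the user-supplied functions.

```python
from collections import defaultdict

def map_fn(in_key, in_value):
    """map(in_key, in_value) -> list(out_key, intermediate_value)"""
    return [(word, 1) for word in in_value.split()]

def reduce_fn(out_key, intermediate_values):
    """reduce(out_key, list(intermediate_value)) -> list(out_value)"""
    return [sum(intermediate_values)]

# The three input records from slide 4:
inputs = [(1, "you are in Cambridge"),
          (2, "I like Cambridge"),
          (3, "we live in Cambridge")]

# Map phase: emit intermediate (word, 1) pairs (slides 4-5).
intermediate = []
for k, v in inputs:
    intermediate.extend(map_fn(k, v))

# Partition/group phase: collect values by key (slide 6).
groups = defaultdict(list)
for word, count in intermediate:
    groups[word].append(count)

# Reduce phase: sum the counts per word (slide 7).
result = {word: reduce_fn(word, counts)[0] for word, counts in groups.items()}
print(result)  # e.g. 'Cambridge' -> 3, 'in' -> 2, everything else -> 1
```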

  8.–14. Execution overview (a diagram built up step by step across slides 8–14):
     - Input files (File 1–3) are divided into M splits (split 0 … split 4).
     - The user program forks a master and a set of worker processes.
     - The master assigns map tasks and reduce tasks to idle workers.
     - Map phase: map workers read their input splits.
     - Map output is written to intermediate files on the map workers' local disks.
     - Reduce phase: reduce workers fetch intermediate data via remote reads.
     - Reduce workers write R output files (Output File 1, Output File 2).
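The execution flow in the diagram can be sketched as a single-process simulation: M input splits go through map tasks, output is partitioned into R buckets, and one reduce task handles each bucket. The partitioning by `hash(key) % R` follows the paper's default partitioning function; everything else (function names, in-memory "files") is an illustrative simplification.

```python
from collections import defaultdict

def run_mapreduce(splits, map_fn, reduce_fn, R=2):
    # Map phase: each of the M splits is processed by a map task, and its
    # output is partitioned into R buckets via hash(key) mod R.
    partitions = [defaultdict(list) for _ in range(R)]
    for split_id, record in enumerate(splits):
        for key, value in map_fn(split_id, record):
            partitions[hash(key) % R][key].append(value)

    # Reduce phase: each reduce task sorts its partition by key and applies
    # reduce, producing one output "file" per partition.
    outputs = []
    for part in partitions:
        outputs.append({k: reduce_fn(k, vs) for k, vs in sorted(part.items())})
    return outputs

# Word count over the slide-4 inputs, treating each line as one split:
splits = ["you are in Cambridge", "I like Cambridge", "we live in Cambridge"]
outs = run_mapreduce(splits,
                     map_fn=lambda k, line: [(w, 1) for w in line.split()],
                     reduce_fn=lambda k, vs: sum(vs))
merged = {k: v for out in outs for k, v in out.items()}
print(merged)  # counts split across R=2 output partitions, merged here
```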

  15. Fine task granularity
     - Choose M so that each input split is between 16MB and 64MB
     - Choose R as a small multiple of the number of worker machines
     - E.g. M = 200,000 and R = 5,000 on 2,000 workers
     Advantages: dynamic load balancing, faster recovery from failures
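The sizing rule above can be checked with quick arithmetic (a sketch; the 64MB figure is the upper bound of the split-size range):

```python
# Granularity arithmetic for the example configuration on slide 15.
split_size = 64 * 2**20      # 64MB upper bound per map split
M = 200_000                  # map tasks
R = 5_000                    # reduce tasks
workers = 2_000

total_input = M * split_size          # input covered at the 64MB bound
print(total_input / 2**40)            # ~12.2 TiB of input
print(M / workers)                    # 100 map tasks per worker: fine-grained
print(R / workers)                    # 2.5 reduce tasks per worker
```

Having ~100 map tasks per worker is what makes dynamic load balancing work: fast machines simply pick up more tasks, and a failed worker's tasks can be spread across many others.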

  16. Fault tolerance
     Workers:
     - Master detects failure via periodic heartbeat
     - Re-execute completed and in-progress map tasks of a failed worker
       (map output lives on its local disk, so completed work is lost too)
     - Re-execute only in-progress reduce tasks
     - Task completion committed through master
     Master:
     - Not handled; failure of the single master is considered unlikely

  17. Refinements
     - Locality optimization
     - Backup tasks
     - Ordering guarantees
     - Combiner function
     - Skipping bad records
     - Local execution
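One of the refinements, the combiner function, can be sketched: it runs on the map worker and partially merges map output before it is written for the reducers, cutting intermediate data and network traffic. For word count the combiner does the same summing as the reducer; the function name below is illustrative.

```python
from collections import Counter

def map_with_combiner(line):
    """Map a line to (word, 1) pairs, then pre-sum them locally (combiner)."""
    pairs = [(w, 1) for w in line.split()]   # raw map output
    combined = Counter()
    for word, count in pairs:                # combiner: local partial sums
        combined[word] += count
    return list(combined.items())            # fewer pairs cross the network

# A split where a word repeats: 4 raw pairs shrink to 3 combined pairs.
print(map_with_combiner("Cambridge is in Cambridge"))
```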

  18. Performance
     Tests run on a cluster of 1800 machines, each with:
     - Dual 2GHz Intel Xeon processors with Hyper-Threading enabled
     - 4GB of memory
     - Two 160GB IDE disks
     - Gigabit Ethernet link
     2 benchmarks:
     - MR_Grep: scan 10^10 100-byte records for a rare pattern (~92k matches)
     - MR_Sort: sort 10^10 100-byte records (~1TB of data)

  19. MR_Grep
     Completes in ~150 seconds, including startup overhead of ~60 seconds

  20. MR_Sort
     Three executions compared:
     - Normal execution
     - No backup tasks
     - 200 tasks killed

  21. Experience
     - Rewrite of the indexing system for Google web search
     - Large scale machine learning
     - Clustering for Google News
     - Data extraction for Google Zeitgeist
     - Large scale graph computations

  22. Conclusions
     MapReduce:
     - a useful abstraction
     - simplifies large-scale computations
     - easy to use
     However:
     - expensive for small applications: long startup time (~1 min)
     - chaining of map-reduce phases?
