Drizzle: Fast and Adaptable Stream Processing at Scale

  1. Drizzle: Fast and Adaptable Stream Processing at Scale. Shivaram Venkataraman, Aurojit Panda, Kay Ousterhout, Michael Armbrust, Ali Ghodsi, Michael Franklin, Benjamin Recht, Ion Stoica

  2. STREAMING WORKLOADS

  3. Streaming trends: low latency. Results power decisions by machines. Examples: suspicious user logins, ask security questions; slow video load, direct user to a new CDN; credit card fraud, disable account.

  4. Streaming requirements: high throughput. Detect suspicious logins, disable stolen accounts, dynamically adjust application behavior. As many as tens of millions of updates per second: we need a distributed system.

  5. Distributed Execution Models

  6. Execution models: continuous operators. Example job: group by user, run anomaly detection.

  7. Execution models: continuous operators. Long-running operators keep mutable local state and produce low-latency output.

  8. Execution models: continuous operators. Systems: Naiad, Google MillWheel; streaming databases such as Borealis and Flux.

  9. Execution models: micro-batch. Example job: group by user, run anomaly detection. Tasks output state on completion; output is produced at task granularity.

  10. Execution models: micro-batch. Dynamic task scheduling brings adaptability: straggler mitigation, elasticity, and fault tolerance. Systems: Google FlumeJava, Microsoft Dryad.
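The micro-batch model on this slide can be sketched in a few lines. This is an illustrative toy (all names are made up, not Drizzle's or Spark's API): a centralized driver carves the stream into small batches and launches a fresh set of tasks for every batch, so each batch pays one scheduler round-trip.

```python
def schedule_tasks(batch, num_workers):
    """Centralized scheduler: assign one task (a slice of the batch) per worker."""
    return [(worker, batch[worker::num_workers]) for worker in range(num_workers)]

def run_task(records):
    """Each task processes its slice and outputs its state on completion."""
    return sum(records)

def micro_batch_driver(stream, batch_size, num_workers):
    results = []
    for start in range(0, len(stream), batch_size):
        batch = stream[start:start + batch_size]
        # The scheduler is contacted once per micro-batch -- the overhead
        # that Drizzle later targets.
        tasks = schedule_tasks(batch, num_workers)
        results.append(sum(run_task(slice_) for _, slice_ in tasks))
    return results

print(micro_batch_driver(list(range(10)), batch_size=5, num_workers=2))
# [10, 35]  (sum of 0..4, then sum of 5..9)
```

Because every batch is a fresh set of tasks, the scheduler is free to place them anywhere, which is what enables straggler mitigation and elasticity.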

  11. Failure recovery

  12. Failure recovery: continuous operators. On a failure, all machines replay from a checkpoint (asynchronous Chandy-Lamport checkpoints of operator state).

  13. Failure recovery: micro-batch. Task output is periodically checkpointed, and task boundaries capture task interactions.

  14. Failure recovery: micro-batch. Replay only the tasks from the failed machine, and parallelize the replay across the cluster.
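The parallel-recovery idea of slides 13-14 can be sketched as follows. This is a toy with hypothetical names: since checkpointed task boundaries tell us exactly which tasks a failed machine owned, only those tasks are replayed, and they are spread round-robin over the survivors rather than replayed by a single replacement.

```python
def recover(tasks_by_machine, failed, survivors):
    """Reassign the failed machine's tasks round-robin to surviving machines."""
    lost = tasks_by_machine[failed]
    replay_plan = {m: list(tasks_by_machine[m]) for m in survivors}
    for i, task in enumerate(lost):
        # Each survivor replays a share of the lost tasks in parallel.
        replay_plan[survivors[i % len(survivors)]].append(task)
    return replay_plan

plan = recover({"A": ["t1", "t2"], "B": ["t3"], "C": ["t4"]},
               failed="A", survivors=["B", "C"])
print(plan)   # {'B': ['t3', 't1'], 'C': ['t4', 't2']}
```

Contrast this with the continuous-operator model, where every machine rolls back to the checkpoint and replays.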

  15. Execution models compared. Continuous operators: static scheduling and low latency, but inflexible, with slow failover. Micro-batch: scheduling granularity equals processing granularity, hence higher latency, but dynamic scheduling makes it adaptable, with parallel recovery and straggler mitigation.

  16. Execution models. Drizzle: dynamic scheduling at coarse granularity, with the low latency of fine-grained processing. It sits between continuous operators (static scheduling, low latency) and micro-batch (dynamic scheduling at coarse granularity, higher latency from coarse-grained processing).

  17. Inside the scheduler. For each batch, the scheduler must (1) decide how to assign tasks to machines (data locality, fair sharing, ...) and (2) serialize and send the tasks.

  18. Scheduling overheads. [Chart: median task-time breakdown in ms (compute + data transfer, task fetch, scheduler delay) as the cluster grows from 4 to 128 machines.] Cluster: 4-core r3.xlarge machines. Workload: sum of 10k numbers per core.

  19. Inside the scheduler. Key idea: reuse scheduling decisions! Steps (1) deciding how to assign tasks to machines and (2) serializing and sending tasks need not be repeated for every micro-batch.

  20. Drizzle. Goal: remove frequent scheduler interaction. (1) Pre-schedule reduce tasks; (2) group-schedule micro-batches.

  21. Goal: remove scheduler involvement for reduce tasks. Approach: (1) pre-schedule reduce tasks.

  22. Goal: remove scheduler involvement for reduce tasks. The problem: normally, reduce tasks cannot be launched until the scheduler is contacted again after the map stage completes.

  23. Coordinating shuffles: existing systems. Reducers fetch data from remote machines; the scheduler supplies the metadata that describes where the shuffle data is located.

  24. Coordinating shuffles: pre-scheduling. (1) Pre-schedule the reducers; (2) mappers get the metadata about reducer locations; (3) mappers trigger the reducers.
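The three pre-scheduling steps above can be sketched as a toy (the classes and functions here are illustrative, not Drizzle's actual API): reduce tasks are placed on workers before the map stage runs, mappers receive the reducer locations as metadata, and map output flows to the reducers directly, with no scheduler round-trip between the stages.

```python
class Reducer:
    """A pre-scheduled reduce task waiting for pushed map output."""
    def __init__(self):
        self.inbox = []

    def receive(self, value):            # (2)-(3) mappers push data directly
        self.inbox.append(value)

    def run(self):                       # triggered once all map data arrives
        return sum(self.inbox)

def pre_schedule(num_reducers):
    # (1) Launch reducers before any mapper runs.
    return [Reducer() for _ in range(num_reducers)]

def map_stage(records, reducers):
    # Mappers already know the reducer locations (the metadata),
    # so they can route and trigger without contacting the scheduler.
    for record in records:
        reducers[record % len(reducers)].receive(record)
    return [r.run() for r in reducers]

reducers = pre_schedule(2)
print(sorted(map_stage([1, 2, 3, 4], reducers)))   # [4, 6]
```

The key inversion relative to slide 23: instead of reducers pulling data after a scheduler round-trip, mappers push to already-running reducers.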

  25. Drizzle. Goal: avoid waiting to return to the scheduler. (1) Pre-schedule reduce tasks; (2) group-schedule micro-batches.

  26. Group scheduling. Schedule a group of micro-batches at once (e.g., groups of 2). Fault tolerance and scheduling decisions happen at group boundaries.
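Group scheduling can be sketched as a toy calculation (hypothetical function, not Drizzle's API): instead of one scheduler round-trip per micro-batch, one round-trip ships the task descriptions for a whole group, and scheduling or recovery decisions are revisited only at group boundaries.

```python
def group_schedule(num_batches, group_size):
    """Return (scheduler_trips, groups): one trip launches group_size batches."""
    groups = [list(range(start, min(start + group_size, num_batches)))
              for start in range(0, num_batches, group_size)]
    # One scheduler interaction per group instead of one per micro-batch.
    return len(groups), groups

trips, groups = group_schedule(num_batches=100, group_size=10)
print(trips)       # 10 scheduler round-trips instead of 100
print(groups[0])   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The trade-off is visible in the arithmetic: larger groups amortize scheduling overhead further, but decisions (and thus adaptation) only happen every group_size batches.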

  27. Micro-benchmark: 2 stages, 100 iterations. [Chart: breakdown of pre-scheduling and group scheduling; time per iteration (ms) on 4 to 128 machines for Baseline, Only Pre-Scheduling, Drizzle-10, and Drizzle-100.] In the paper: group-size auto-tuning.
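The slide points to group-size auto-tuning in the paper. One plausible controller, shown here purely as an assumption (this is not necessarily the paper's exact scheme), is an AIMD-style rule: grow the group multiplicatively while scheduling overhead eats too large a fraction of each group, and shrink it additively when overhead is negligible, to regain adaptability.

```python
def tune_group_size(size, overhead_frac, low=0.05, high=0.2):
    """AIMD-style sketch: overhead_frac is the fraction of group time
    spent on scheduling; low/high are illustrative thresholds."""
    if overhead_frac > high:        # scheduling too costly: amortize more
        return size * 2             # multiplicative increase
    if overhead_frac < low:         # overhead negligible: prefer adaptability
        return max(1, size - 2)     # additive decrease
    return size                     # within the target band: keep the group size

print(tune_group_size(4, 0.5))    # 8  (overhead too high, double the group)
print(tune_group_size(10, 0.01))  # 8  (overhead tiny, shrink for adaptability)
```

Any real tuner would also need to smooth the measured overhead across groups; the thresholds here are made up for illustration.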

  28. Evaluation questions. 1. Latency: can Drizzle match continuous operators (static scheduling, low latency)? 2. Adaptability: can it retain the dynamic scheduling of micro-batch systems (higher latency, coarse granularity)?

  29. Evaluation: latency. Yahoo! Streaming Benchmark. Input: JSON events of ad-clicks. Compute: number of clicks per campaign. Window: update every 10s. Comparing Spark 2.0, Flink 1.1.1, and Drizzle on 128 Amazon EC2 r3.xlarge instances.
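The benchmark computation on this slide is easy to render as a toy. The field names below (campaign_id, event_type, time) are assumptions for illustration, not necessarily the benchmark's exact schema: parse JSON ad events and count clicks per campaign per 10-second window.

```python
import json
from collections import Counter

def clicks_per_campaign(event_lines, window_secs=10):
    """Count click events per (campaign, window) bucket."""
    counts = Counter()
    for line in event_lines:
        event = json.loads(line)
        if event["event_type"] == "click":
            window = int(event["time"]) // window_secs
            counts[(event["campaign_id"], window)] += 1
    return dict(counts)

events = [
    '{"campaign_id": "c1", "event_type": "click", "time": 3}',
    '{"campaign_id": "c1", "event_type": "view",  "time": 4}',
    '{"campaign_id": "c1", "event_type": "click", "time": 12}',
    '{"campaign_id": "c2", "event_type": "click", "time": 5}',
]
print(clicks_per_campaign(events))
# {('c1', 0): 1, ('c1', 1): 1, ('c2', 0): 1}
```

At the benchmark's scale (20M events/second on the next slide), this aggregation is exactly the kind of job that is distributed as map and reduce tasks across the cluster.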

  30. Streaming benchmark: performance. Yahoo Streaming Benchmark: 20M JSON ad-events per second, 128 machines. Event latency: the difference between window end and processing end. [Chart: CDF of event latency (ms), 0 to 3000 ms, for Spark, Drizzle, and Flink.]

  31. Adaptability: fault tolerance. Yahoo Streaming Benchmark: 20M JSON ad-events per second, 128 machines. A machine failure is injected at 240 seconds. [Charts: latency (ms) over time, 190 to 290 s, for Spark, Flink, and Drizzle.]

  32. Execution models. Drizzle: dynamic scheduling (coarse granularity), low latency (fine-grained processing), and optimization of batches. Continuous operators: static scheduling, low latency. Micro-batch: dynamic scheduling, higher latency, optimization of batches.

  33. Intra-batch query optimization. Yahoo Streaming Benchmark: 20M JSON ad-events per second, 128 machines. Optimize the execution of each micro-batch by pushing down the aggregation. [Chart: CDF of event latency (ms) for Spark, Drizzle, Flink, and Drizzle-Optimized.]
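The push-down optimization on this slide can be sketched as a toy (the function names are illustrative): aggregate on the map side first, so each partition ships one partial count per key across the shuffle instead of one record per event.

```python
from collections import Counter

def naive(partitions):
    """Unoptimized: every record crosses the shuffle."""
    shuffled = [record for part in partitions for record in part]
    return Counter(shuffled), len(shuffled)          # counts, records shuffled

def pushed_down(partitions):
    """Optimized: per-partition partial aggregates cross the shuffle instead."""
    partials = [Counter(part) for part in partitions]
    shuffled = [kv for counter in partials for kv in counter.items()]
    total = Counter()
    for key, count in shuffled:                      # reduce side merges partials
        total[key] += count
    return total, len(shuffled)                      # counts, pairs shuffled

parts = [["c1", "c1", "c2"], ["c1", "c2", "c2"]]
print(naive(parts))        # (Counter({'c1': 3, 'c2': 3}), 6)
print(pushed_down(parts))  # (Counter({'c1': 3, 'c2': 3}), 4)
```

The two paths produce identical counts, but the pushed-down version shuffles 4 items instead of 6 here; at 20M events/second with few distinct campaigns, that reduction is what shrinks per-batch latency.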

  34. Evaluation in the paper. End-to-end latency: Yahoo Streaming Benchmark, fault tolerance, query optimization. Throughput: synthetic micro-benchmarks, video analytics. Elasticity. In Shivaram's thesis: iterative ML algorithms, group-size tuning.

  35. Conclusion. Drizzle combines dynamic scheduling at coarse granularity, low latency with fine-grained processing, and optimization of batches; continuous operators offer static scheduling with low latency, while micro-batch offers dynamic scheduling at coarse granularity with higher latency. Source code: https://github.com/amplab/drizzle-spark. Shivaram is answering questions on sli.do.
