
Hadoop Platform @ Uber



  1. Service 1, Service 2, Service 3, each writing to Postgresql/MySQL. Data size: ~100 GB to a few TB. Latency: very fast, since the data was in a real DB.

  2. Generation 1 (2014-2015): data from the sources (sharded Key-Val DBs, RDBMS DBs, Kafka 7, Amazon S3/EMR) flows through ETL into Vertica (Data Warehouse). Applications: ETL/Modelling, City Ops, Machine Learning, Experiments. Ad hoc analytics: City Ops, Data Scientists. Data size: ~10s of TB. Latency: 24 hrs - 48 hrs.

  3. Generation 2 (2015-2016): schema-enforced ingestion (EL) from the sources (sharded Key-Val DBs, RDBMS DBs, Kafka 8) into Hadoop, queried through Hive/Spark/Presto/Notebooks; ETL builds flattened/modelled tables, and Vertica (Data Warehouse) keeps flattened/modelled tables for recent data only. Applications: ETL/Modelling, City Ops, Machine Learning, Experiments. Ad hoc analytics: City Ops, Data Scientists. Data size: ~10 PB. Latency: 24 hrs.

  4. Generation 2 (2015-2016) ingestion: upsert ingestion (streaming) from the sharded Key-Val DBs into HBase, followed by snapshot ingestion (batch) into Hive/Spark/Presto/Notebooks and ETL into flattened/modelled tables. Snapshot-based ingestion of the >100 TB Trips table: Jan 2016: 6 hrs (500 executors); Aug 2016: 10 hrs (1000 executors). Batch recompute: 8-10 hrs. Data size: ~10 PB; latency: 24 hrs. E2E data latency: 18-24 hours.

  5. Our largest datasets are stored in key-value sharded DBs. An incremental pull ingestion (batch, every 30 min) brings trip data into Hadoop, where it is partitioned by trip start date at day-level granularity (2010-2014, 2015/xx/xx, 2016/xx/xx, 2017/xx/xx, 2018/xx/xx partitions). New trip data lands in the most recent partitions, but updated trip data touches existing, older partitions.
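To make the update problem concrete, here is a minimal Spark (Scala) sketch, assuming a hypothetical staging path and a `trip_start_ts` column, of how a 30-minute batch of trip changes maps back onto the day-level partitions that would have to be rewritten if there were no record-level upserts:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, date_format}

object AffectedPartitions {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("affected-partitions").getOrCreate()

    // Hypothetical: the last 30 minutes of trip inserts/updates pulled from the sharded stores.
    val tripUpdates = spark.read.parquet("hdfs:///staging/trip_updates/latest_batch")

    // Each update carries the trip's start timestamp, so an update to a 2015 trip
    // maps back to a 2015/xx/xx day partition, not to today's partition.
    val affectedPartitions = tripUpdates
      .select(date_format(col("trip_start_ts"), "yyyy/MM/dd").as("day_partition"))
      .distinct()

    // Without record-level upserts, every one of these day partitions would need
    // to be rewritten (or the whole table re-snapshotted) to absorb the updates.
    affectedPartitions.show(100, truncate = false)
  }
}
```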

  6. A large dataset in HDFS that still behaves like a normal table (Hive/Presto/Spark), while supporting update/delete/insert of records in HDFS and incremental pull of changes (from Hive/Spark/Presto).
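These are the primitives Hudi adds on top of plain HDFS files. A minimal sketch of the upsert side using the Apache Hudi Spark datasource; the base path and field names (`trip_uuid`, `updated_at`, `datestr`) are hypothetical, and the option keys follow current Apache Hudi naming rather than the original `com.uber.hoodie` packages:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object TripsUpsert {
  def main(args: Array[String]): Unit = {
    // Requires the hudi-spark bundle on the classpath.
    val spark = SparkSession.builder().appName("trips-upsert").getOrCreate()

    // Hypothetical batch of new + changed trip records.
    val changes = spark.read.parquet("hdfs:///staging/trip_updates/latest_batch")

    changes.write
      .format("hudi")
      .option("hoodie.table.name", "trips")
      .option("hoodie.datasource.write.operation", "upsert")            // insert/update by record key
      .option("hoodie.datasource.write.recordkey.field", "trip_uuid")   // record identity
      .option("hoodie.datasource.write.precombine.field", "updated_at") // latest version wins
      .option("hoodie.datasource.write.partitionpath.field", "datestr") // day-level partitions
      .mode(SaveMode.Append)
      .save("hdfs:///data/trips")

    // The same path is still readable as a normal table from Spark (and, with
    // Hive sync enabled, from Hive/Presto as well).
    spark.read.format("hudi").load("hdfs:///data/trips").createOrReplaceTempView("trips")
    spark.sql("SELECT count(*) FROM trips").show()
  }
}
```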

  7. Generation 3 (2017-present): changelogs from the sharded Key-Val DBs and RDBMS DBs flow through Kafka into a changelog ingestion job (batch) that applies inserts, updates, and deletes through Hudi; the data is queried via Hive/Spark/Presto/Notebooks, and downstream ETL uses incremental pull to build flattened/modelled tables. Incremental ingestion: <30 min to get in new data/updates. Data size: ~100 PB. Latency: <30 min for raw data, <1 hr for modelled tables. E2E fresh data ingestion: <30 min for raw data, <1 hour for modelled tables.
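The incremental-pull arrows in this generation correspond to Hudi's incremental queries: a downstream ETL job asks only for records written after the commit it last consumed. A sketch under the same assumptions as above (current Apache Hudi option names, hypothetical paths, checkpointing handled elsewhere):

```scala
import org.apache.spark.sql.SparkSession

object IncrementalPull {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("trips-incremental-pull").getOrCreate()

    // Commit time up to which the downstream modelled table has already consumed
    // changes; in a real job this would come from a checkpoint, not a constant.
    val lastConsumedCommit = "20180101120000"

    val newChanges = spark.read
      .format("hudi")
      .option("hoodie.datasource.query.type", "incremental")
      .option("hoodie.datasource.read.begin.instanttime", lastConsumedCommit)
      .load("hdfs:///data/trips")

    // Only records written after lastConsumedCommit come back, so the
    // modelled-table ETL can run every ~30 minutes instead of recomputing
    // a multi-PB table from scratch.
    newChanges.createOrReplaceTempView("trip_changes")
    spark.sql("SELECT datestr, count(*) FROM trip_changes GROUP BY datestr").show()
  }
}
```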

  8. The processing-latency spectrum: Stream Processing (<1 sec), Incremental mini-batch Processing (<5 min to <1 hour), Database/Batch Processing.

  9. Current architecture: a Schema-Service and a Kafka logging library sit in front of the sources (Kafka, Cassandra, MySQL/Postgresql, ElasticSearch, ...); an Ingestion Service lands the analytical data in Hadoop in the Hudi file format; analytical data users reach it through Hive/Spark/Presto/Notebooks or Key-Value DBs (direct access); a Dispersal Service pushes derived data back out to Kafka, AWS S3, Cassandra, ElasticSearch, ...

  10. Ingestion Job (using Hoodie)

  11. Ingestion Job (using Hoodie)
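The slides name the ingestion job without detail; below is a minimal sketch of what a Hudi-based changelog ingestion job could look like with today's Spark datasource API. The staging path, the `op` flag convention (i/u/d), and the field names are hypothetical, and the production job is certainly more involved (checkpointing, schema handling, compaction scheduling):

```scala
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
import org.apache.spark.sql.functions.col

// Sketch of a Hudi-based changelog ingestion job.
object ChangelogIngestionJob {

  private def writeHudi(df: DataFrame, operation: String, basePath: String): Unit =
    df.write
      .format("hudi")
      .option("hoodie.table.name", "trips")
      .option("hoodie.datasource.write.operation", operation)
      .option("hoodie.datasource.write.recordkey.field", "trip_uuid")
      .option("hoodie.datasource.write.precombine.field", "updated_at")
      .option("hoodie.datasource.write.partitionpath.field", "datestr")
      .mode(SaveMode.Append)
      .save(basePath)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("trips-changelog-ingestion").getOrCreate()
    val basePath = "hdfs:///data/trips"

    // Changelog batch landed from Kafka; each record carries an op flag (i/u/d).
    val changelog = spark.read.json("hdfs:///staging/trips_changelog/latest_batch")

    // Inserts and updates become upserts; deletes are applied as Hudi deletes.
    writeHudi(changelog.filter(col("op").isin("i", "u")), "upsert", basePath)
    writeHudi(changelog.filter(col("op") === "d"), "delete", basePath)
  }
}
```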

  12. Storage types and supported views:
      Storage 1.0 (Copy On Write): Read Optimized, ChangeLog View
      Storage 2.0 (Merge On Read): Read Optimized, RealTime, ChangeLog View
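In current Apache Hudi terms, Storage 1.0/2.0 correspond to the Copy On Write and Merge On Read table types, and the views map onto query types chosen at read time. A hedged sketch (hypothetical paths and field names as before):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object StorageTypesSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("hudi-storage-types").getOrCreate()
    val df = spark.read.parquet("hdfs:///staging/trip_updates/latest_batch")

    // Storage 2.0 in the slide's terms: Merge On Read. Copy On Write would be
    // "COPY_ON_WRITE" and is the default table type.
    df.write
      .format("hudi")
      .option("hoodie.table.name", "trips_mor")
      .option("hoodie.datasource.write.table.type", "MERGE_ON_READ")
      .option("hoodie.datasource.write.operation", "upsert")
      .option("hoodie.datasource.write.recordkey.field", "trip_uuid")
      .option("hoodie.datasource.write.precombine.field", "updated_at")
      .option("hoodie.datasource.write.partitionpath.field", "datestr")
      .mode(SaveMode.Append)
      .save("hdfs:///data/trips_mor")

    // The "views" map onto query types at read time:
    //   read_optimized -> Read Optimized view (compacted columnar files only)
    //   snapshot       -> RealTime view (columnar files merged with recent log files)
    //   incremental    -> ChangeLog view (records changed since a given commit)
    val readOptimized = spark.read.format("hudi")
      .option("hoodie.datasource.query.type", "read_optimized")
      .load("hdfs:///data/trips_mor")
    val realTime = spark.read.format("hudi")
      .option("hoodie.datasource.query.type", "snapshot")
      .load("hdfs:///data/trips_mor")

    println(s"read-optimized rows: ${readOptimized.count()}, realtime rows: ${realTime.count()}")
  }
}
```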

  13. Hadoop Platform @ Uber. Want to be part of Gen. 4 or beyond?
      ● Come talk to me
        ○ Office Hours: 11:30 am - 12:10 pm
      ● Positions in both SF & Palo Alto
        ○ Email me: reza@uber.com

  14. Further references
      1. Open-Source Hudi Project on GitHub.
      2. "Hoodie: Uber Engineering's Incremental Processing Framework on Hadoop", Prasanna Rajaperumal, Vinoth Chandar, Uber Eng blog, 2017.
      3. "Uber, your Hadoop has arrived: Powering Intelligence for Uber's Real-time marketplace", Vinoth Chandar, Strata + Hadoop, 2016.
      4. "Case For Incremental Processing on Hadoop", Vinoth Chandar, O'Reilly article, 2016.
      5. "Hoodie: Incremental processing on Hadoop at Uber", Vinoth Chandar, Prasanna Rajaperumal, Strata + Hadoop World, 2017.
      6. "Hoodie: An Open Source Incremental Processing Framework From Uber", Vinoth Chandar, DataEngConf, 2017.
      7. "Incremental Processing on Large Analytical Datasets", Prasanna Rajaperumal, Spark Summit, 2017.
      8. "Scaling Uber's Hadoop Distributed File System for Growth", Ang Zhang, Wei Yan, Uber Eng blog, 2018.

  15. Further references
      9. "Hadoop Infrastructure @Uber Past, Present and Future", Mayank Bansal, Apache Big Data Europe, 2016.
      10. "Even Faster: When Presto Meets Parquet @ Uber", Zhenxiao Luo, Apache: Big Data North America, 2017.

  16. Data @ Uber: Generation 2 (2015-2016)
      But soon, a new set of pain points showed up:
      Gen. 2 - Pain Point #1: Reliability of the ingestion
      ○ Bulk snapshot-based data ingestion stressed source systems
      ○ Spiky source data (e.g. Kafka) resulted in data being deleted before it could be written out
      ○ Sources were read in streaming fashion, but Parquet was written in semi-batch mode
      Gen. 2 - Pain Point #2: Scalability
      ○ The small-file issue of HDFS started to show up (requiring larger Parquet files)
      ○ Ingestion was not easily scalable due to:
        ■ involving streaming AND/OR batch modes
        ■ running mostly on dedicated hardware (needed to set it up in new DCs without YARN)
        ■ large sharded Key/Val stores provided changelogs that needed to be merged/compacted
      Gen. 2 - Pain Point #3: Queries too slow
      ○ Single choice of query engine

  17. Data @ Uber: Generation 2.5 (2015-2016)
      Applications: ETL, Business Ops, Machine Learning, Experiments
      Ingestion (streaming) from the sources (sharded Key-Val DBs, RDBMS DBs, Kafka 8) into row-based storage (HBase/Sequence files) on Hadoop; ingestion (batch) then loads the data for Hive/Spark/Presto/Notebooks, where ETL builds flattened/modelled tables. Vertica (Data Warehouse) still serves ad hoc analytics for City Ops and Data Scientists.

  18. Data @ Uber: Generation 2.5 (2015-2016)
      Main highlights
      ● Presto added as interactive query engine
      ● Spark notebooks added to encourage data scientists to use Hadoop
      ● Simplified architecture: 2-leg data ingestion
        ○ Get raw data into Hadoop, then do most of the work as batch jobs
      ● Gave us time to stabilize the infrastructure (Kafka, ...) & think long-term
      ● Reliable data ingestion with no data loss
        ○ since data was streamed into Hadoop with minimum work

  19. Data @ Uber: Generation 2.5 (2015-2016)
      2-leg data ingestion:
      ● Snapshot tables (full dump): full snapshot from HBase - Trips snapshot, User snapshot
      ● Incremental tables (incremental pull): DB changelogs (HDFS) - changelog history (append-only); Kafka logs (HDFS) - Kafka events
      ● Leg 1:
        ○ Running as a streaming job on dedicated hardware
        ○ No extra pressure on the source (especially for backfills/catch-up)
        ○ Fast streaming into row-oriented storage (HBase/Sequence file)
        ○ Can run on DCs without YARN etc.
      ● Leg 2 (a minimal sketch of such a job follows below):
        ○ Running as batch jobs in Hadoop
        ○ Efficient, especially for Parquet writing
        ○ Control data quality:
          ■ Schema enforcement
          ■ Cleaning JSON
          ■ Hive partitioning
        ○ File stitching
          ■ Keeps the NN happy & queries performant
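A minimal sketch of what a Leg 2 batch job could look like in Spark (Scala): enforce a declared schema on the raw JSON, partition by date for Hive, and stitch the output into a small number of large Parquet files. The paths, schema, and partition column here are hypothetical:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date}
import org.apache.spark.sql.types.{StringType, StructField, StructType, TimestampType}

// Sketch of a "Leg 2" batch job: take raw row-oriented changelog data landed by
// Leg 1, enforce a schema, partition it for Hive, and write large Parquet files.
object Leg2BatchJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("leg2-parquet-writer").getOrCreate()

    // Schema enforcement: declare the expected schema instead of letting Spark
    // infer it from (possibly messy) JSON; malformed records are dropped.
    val tripSchema = StructType(Seq(
      StructField("trip_uuid", StringType, nullable = false),
      StructField("city_id", StringType, nullable = true),
      StructField("trip_start_ts", TimestampType, nullable = true)
    ))

    val raw = spark.read
      .schema(tripSchema)
      .option("mode", "DROPMALFORMED") // "cleaning JSON"
      .json("hdfs:///raw/trips_changelog/2016/08/15")

    val withPartition = raw.withColumn("datestr", to_date(col("trip_start_ts")))

    // File stitching: repartition by the Hive partition column so each day ends
    // up as a few large Parquet files, which keeps the NameNode happy and
    // queries performant.
    withPartition
      .repartition(col("datestr"))
      .write
      .partitionBy("datestr")
      .mode("overwrite")
      .parquet("hdfs:///warehouse/trips_raw")
  }
}
```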

  20. Data @ Uber: Generation 2.5 (2015-2016)
      Hive:
      ● Powerful, scales reliably
      ● But slow
      Presto:
      ● Interactive queries (fast)
      ● Deployed at scale, with good integration with HDFS/Hive
      ● Doesn't require flattening, unlike Vertica
      ● Supported ANSI SQL
      ● Have to improve by adding:
        ○ Support for geo data
        ○ Better support for nested data types
      Vertica:
      ● Fast
      ● Can't cheaply scale to x PB
      Spark Notebooks:
      ● Great for Data Scientists to prototype/explore data

  21. Data @ Uber: Generation 2.5 (2015-2016)
      Solved issues from Generation 2:
      Gen. 2 - Pain Point #1: Reliability of the ingestion -> solved
      ○ Bulk snapshot-based data ingestion stressed source systems
      ○ Spiky source data (e.g. Kafka) resulted in data being deleted before it could be written out
      ○ Sources were read in streaming fashion, but Parquet was written in semi-batch mode
      Gen. 2 - Pain Point #2: Scalability -> solved
      ○ The small-file issue of HDFS started to show up (requiring larger Parquet files)
      ○ Ingestion was not easily scalable due to:
        ■ involving streaming AND/OR batch modes
        ■ running mostly on dedicated hardware (needed to set it up in new DCs without YARN)
        ■ large sharded Key/Val stores provided changelogs that needed to be merged/compacted
      Gen. 2 - Pain Point #3: Queries too slow -> solved
      ○ Limited choice of query engine

  22. Data @ Uber: Generation 2.5 (2015-2016)
      Pain points of snapshot-based DB ingestion: upsert ingestion (streaming) from the sharded Key-Val DBs into HBase, then snapshot ingestion (batch) into Hive/Spark/Presto, followed by ETL into flattened/modelled tables.
      Snapshot-based ingestion of the >100 TB Trips table: Jan 2016: 6 hrs (500 executors); Aug 2016: 10 hrs (1000 executors)
      Batch recompute: 8-10 hrs
      E2E fresh data ingestion: 18-24 hours
