Towards a Learning Optimizer for Shared Clouds




  1. Towards a Learning Optimizer for Shared Clouds*
     Chenggang Wu, Alekh Jindal, Saeed Amizadeh, Hiren Patel, Wangchao Le, Shi Qiao, Sriram Rao
     February 8, 2019
     * C. Wu, A. Jindal, S. Amizadeh, H. Patel, W. Le, S. Qiao, and S. Rao. Towards a Learning Optimizer for Shared Clouds. In PVLDB, 12(3): 210–222, 2018.

  2. Rise of Big Data Systems
     • Big data systems (Hive, Spark, Flink, Calcite, BigQuery, Big SQL, HDInsight, SCOPE, etc.) offer a declarative query interface backed by a cost-based query optimizer (CBO).
     • Example query:
       SELECT Customer.cname, Item.iname
       FROM Customer
       INNER JOIN Order ON Customer.cid == Order.cid
       INNER JOIN Item ON Item.iid == Order.iid
       WHERE Item.iprice > 100 AND Customer.cage < 18;
     • Good plan => good performance
     • Problem: the CBO can make mistakes, especially in cardinality estimation.
     [Figure: the query plan for the example, with estimated cardinalities annotated at each operator]
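
A minimal sketch of why cardinality estimation is the weak point: a textbook CBO multiplies per-operator selectivities under an independence assumption, so any error compounds through the plan. The table sizes and selectivities below are made-up illustration values, not figures from the talk.

```python
# Textbook cardinality estimation under the independence assumption.
# All sizes and selectivities are hypothetical illustration values.

def filter_card(input_card, selectivity):
    """Estimated rows surviving a filter predicate."""
    return input_card * selectivity

def join_card(left_card, right_card, distinct_keys):
    """Estimated equi-join size: |L| * |R| / (distinct join keys)."""
    return left_card * right_card / distinct_keys

customers = 1000   # |Customer|
orders = 6000      # |Order|
items = 800        # |Item|

young = filter_card(customers, 0.05)   # cage < 18, assumed 5% selectivity
pricey = filter_card(items, 0.75)      # iprice > 100, assumed 75% selectivity

cust_orders = join_card(young, orders, customers)  # join on cid -> 300.0
result = join_card(cust_orders, pricey, items)     # join on iid -> 225.0
print(cust_orders, result)
```

If the assumed selectivities are wrong, the error multiplies at every level of the plan, which is why mistakes grow by orders of magnitude in deep plans.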

  3. Rise of Big Data Systems
     "The root of all evil, the Achilles Heel of query optimization, is the estimation of the size of intermediate results, known as cardinalities." – [Guy Lohman, SIGMOD Blog 2014]

  4. Rise of Big Data Systems
     • TUNING!
       • Collecting statistics
       • Providing query hints
       • Database administration

  5. Rise of the Clouds
     • Managed, serverless services:
       • Collecting statistics => no admin
       • Providing query hints => no expertise
       • Database administration => no control

  6. Rise of the Clouds
     • SELF-TUNING!

  7. Hope: Shared Cloud Infrastructures
     • Shared data processing
     • Massive volumes of query logs
     • Centrally visible query workload

  8. Cosmos: shared cloud infra at Microsoft
     • SCOPE workloads:
       • Batch processing in a job service
       • 100Ks jobs; 1000s users; EBs data; 100Ks nodes
     • Cardinality estimation in SCOPE:
       • Lots of constants for best-effort estimation
       • Big data, unstructured data, custom code
     • Workload patterns:
       • Recurring jobs
       • Shared query subgraphs
     • Can we learn cardinality models?
     [Figures: ideal vs. under-/over-estimation from 1 day's log from Asimov; two query plans Q1 and Q2 sharing the Filter(Price > 100) subgraph over Item]

  9. Learning Cardinality Model
     • Strict: cache previously seen values
       • Low coverage
       • Online feedback
     • General: learning a single model
       • Hard to featurize
       • Hard to train
       • Prediction latency
       • Low accuracy
     • Template: learning a model per subgraph template

     Model    | Subgraph Logical Expression | Parameter Values | Data Inputs
     Strict   | Fixed                       | Fixed            | Fixed
     General  | Variable                    | Variable         | Variable
     Template | Fixed                       | Variable         | Variable

     => No one-size-fits-all
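
The three granularities above differ in what they hold fixed in the model key. A minimal sketch of the difference between a strict key and a template key, using a hypothetical subgraph representation (the field names and signature strings are illustrative, not the paper's encoding):

```python
# Hypothetical subgraph record: logical expression, parameter values, inputs.
from collections import namedtuple

Subgraph = namedtuple("Subgraph", ["logical_expr", "params", "inputs"])

def strict_key(sg):
    # Strict: expression, parameters, AND inputs must all match a seen instance.
    return (sg.logical_expr, sg.params, sg.inputs)

def template_key(sg):
    # Template: one model per logical expression; params/inputs become features.
    return sg.logical_expr

a = Subgraph("Filter(Age<18)>Join(cid)", ("2019-02-01",), ("Customer_v1",))
b = Subgraph("Filter(Age<18)>Join(cid)", ("2019-02-02",), ("Customer_v2",))

print(strict_key(a) == strict_key(b))      # False: params/inputs differ
print(template_key(a) == template_key(b))  # True: same template, shared model
```

This is why the strict approach has low coverage (any new parameter or input misses the cache) while the template approach can generalize across recurring instances of the same subexpression.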

  10. Learned Cardinality Models
     • Subgraph template:
       • Same logical subexpression
       • Different physical implementation
       • Different parameters and inputs
     • Feature selection
     • Model selection:
       • Generalized linear models, due to their interpretability
       • More complex models, such as multi-layer perceptrons, are harder to train

     Table 3: The features used for learning cardinality.
     Name                     | Description
     JobName                  | Name of the job containing the subgraph
     NormJobName              | Normalized job name
     InputCardinality         | Total cardinality of all inputs to the subgraph
     Pow(InputCardinality, 2) | Square of InputCardinality
     Sqrt(InputCardinality)   | Square root of InputCardinality
     Log(InputCardinality)    | Log of InputCardinality
     AvgRowLength             | Average output row length
     InputDataset             | Name of all input datasets to the subgraph
     Parameters               | One or more parameters in the subgraph

     [Figure: two plan instances of one subgraph template, with different parameters (Age < 18 vs. Age < 20) and different inputs (Customer/Order vs. Customer'/Order')]
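
Most features in Table 3 are simple transforms of the input cardinality plus categorical names. A minimal sketch of building that feature vector for one subgraph instance; encoding the categorical features by hashing is an assumption for illustration, not the paper's exact encoding:

```python
import math

def featurize(job_name, input_card, avg_row_len, datasets, params, buckets=1024):
    """Build a Table-3-style feature dict for one subgraph instance."""
    return {
        "InputCardinality": input_card,
        "Pow(InputCardinality,2)": input_card ** 2,
        "Sqrt(InputCardinality)": math.sqrt(input_card),
        "Log(InputCardinality)": math.log(input_card),
        "AvgRowLength": avg_row_len,
        # Categorical features; plain hashing is an illustrative choice only.
        "JobName": hash(job_name) % buckets,
        "InputDataset": hash("|".join(datasets)) % buckets,
        "Parameters": hash("|".join(params)) % buckets,
    }

f = featurize("Asimov_Hourly", 10000.0, 64.0, ["Customer"], ["2019-02-08"])
print(f["Sqrt(InputCardinality)"])  # 100.0
```

The nonlinear transforms (square, square root, log) let a generalized linear model capture non-linear relationships between input and output cardinality while staying interpretable.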

  11. Accuracy: 10-fold cross validation

     Model              | 75th Percentile Error | 90th Percentile Error
     Default SCOPE      | 74602%                | 5931418%
     Poisson Regression | 1.5%                  | 32%

     Note: the neural network overfits due to the small observation and feature space per model.
     [Figure: CDF of estimated/actual cardinality ratio (10^-6 to 10^8) over subgraph instances, for the default SCOPE estimator, neural network, linear regression, and Poisson regression]
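
The table reports percentile errors of the estimate relative to the actual cardinality. A minimal sketch of computing such a metric, using a nearest-rank percentile (one common definition) over synthetic (estimated, actual) pairs, not the paper's data:

```python
# Percentile of relative estimation error, |est - actual| / actual, in percent.
def percentile(values, p):
    """Nearest-rank percentile of a list of numbers."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# Synthetic (estimated, actual) cardinality pairs for illustration only.
pairs = [(90, 100), (150, 100), (100, 100), (40, 100),
         (300, 100), (100, 100), (105, 100), (80, 100)]
errors = [abs(e - a) / a * 100 for e, a in pairs]

print(percentile(errors, 75), percentile(errors, 90))  # 50.0 60.0
```

Looking at high percentiles rather than the mean exposes the tail mistakes, which is exactly where the default estimator's errors blow up.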

  12. Applicability: percentage of subgraphs having models
     • Varying training window: applicability over jobs and subgraphs for training durations of 1 day, 2 days, 4 days, 1 week, 2 weeks, and 1 month.
     • Sliding test window: applicability over jobs and subgraphs for test slide durations of 1 day, 1 week, and 1 month.
     [Figure: bar charts of applicability (%) for jobs and subgraphs under each training and test window]

  13. End-to-end Feedback Loop
     • Easy to featurize with low overhead
     • Accurate and easy to understand
     • Trained offline over new batches of data
     • Large number of smaller, highly accurate models
     Figure 5: The feedback loop architecture. Queries flow through the compiler, optimizer, scheduler, and runtime to produce results; compiled plans, optimized plans with estimated statistics, execution graphs with resources, and actual runtime statistics feed a workload analyzer and a parallel trainer, which produce cardinality models. A model server performs model lookup & prediction and returns annotation hints to the query optimizer.
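
At compile time the optimizer looks up a model for each subgraph and annotates the plan with the predicted cardinality, falling back to the default estimator when no model covers it. A minimal sketch with a hypothetical in-memory model store (class and function names are illustrative):

```python
# Hypothetical model store: template signature -> trained predictor.
class CardLearner:
    def __init__(self, default_estimator):
        self.models = {}
        self.default = default_estimator

    def train(self, signature, predictor):
        # Offline: the workload analyzer/trainer installs per-template models.
        self.models[signature] = predictor

    def annotate(self, signature, features):
        # Compile time: use the learned model if one covers this subgraph,
        # otherwise fall back to the optimizer's default estimate.
        model = self.models.get(signature)
        if model is None:
            return self.default(features)
        return model(features)

learner = CardLearner(default_estimator=lambda f: f["input_card"] * 0.1)
learner.train("Filter(Price>100)", lambda f: f["input_card"] * 0.75)

print(learner.annotate("Filter(Price>100)", {"input_card": 800}))  # 600.0
print(learner.annotate("Filter(Age<18)", {"input_card": 1000}))    # 100.0
```

Keeping prediction a cheap lookup plus a small linear model is what makes the hints usable inside the optimizer's tight compile-time budget.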

  14. Performance
     • Subset of hourly jobs from Asimov
     • These queries process unstructured data, use SPJA operators, and a UDO
     • Re-ran the queries over the same production data, but with redirected output
     [Figure: per-query latency (s), total processing time (s), and number of vertices for eight queries, default optimizer vs. with CardLearner]

  15. Avoiding Learning Bias
     • Learning only what is seen
     • Exploratory join ordering:
       • Actively try different join orders
       • Pruning: discard plans with subexpressions that are more expensive than at least one other plan
       • Maximize new observations when comparing plans
     • Execution strategies:
       • Static workload tuning
       • Using sample data
       • Leveraging recurring/overlapping jobs
     [Figure: three alternative join orders over X, Y, Z (Plans 1–3), each annotated with actual vs. estimated cardinalities]
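
The pruning and pick-for-exploration steps above can be sketched as follows: drop any plan containing a subexpression costlier than some other complete plan, then among the survivors pick the one contributing the most not-yet-observed subexpressions. The plan encoding, signature strings, and costs below are all hypothetical:

```python
# Hypothetical plan encoding: {plan name: {subexpression signature: cost}}.
def prune_and_pick(plans, observed):
    """Discard plans with a subexpression more expensive than at least one
    other complete plan; among survivors, maximize new observations."""
    totals = {name: sum(costs.values()) for name, costs in plans.items()}
    survivors = {
        name: costs for name, costs in plans.items()
        if not any(c > min(t for n, t in totals.items() if n != name)
                   for c in costs.values())
    }
    return max(survivors,
               key=lambda n: sum(s not in observed for s in plans[n]))

plans = {
    "P1": {"X|Y": 75, "XY|Z": 100},
    "P2": {"Y|Z": 100, "YZ|X": 100},
    "P3": {"X|Z": 500, "XZ|Y": 100},  # X|Z alone exceeds P1's total cost
}
picked = prune_and_pick(plans, observed={"X|Y"})
print(picked)  # P2: P3 is pruned, and P2 offers two unobserved subexpressions
```

Pruning bounds the cost of exploration, while favoring unobserved subexpressions steers execution toward plans that yield fresh training data for the cardinality models.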

  16. Takeaways
     • Big data systems increasingly use cost-based optimization
     • Users cannot tune these systems in managed/serverless services
     • Hard to achieve a one-size-fits-all query optimizer
       • Instance-optimized systems are more feasible
     • Very promising results from SCOPE workloads:
       • Very high accuracy
       • Reasonably large applicability, which exploration could extend further
       • Performance gains, most significantly lower resource consumption
     • Learned cardinality models are a step towards self-learning optimizers
