
Scalable Machine Learning with Apache Spark: Introductions

Introductions
▪ Instructor Introduction
▪ Student Introductions: Name, Professional Responsibilities, Fun Personal Interest/Fact, Expectations for the Course
▪ Course Objectives


  1. LINEAR REGRESSION LAB II

  2. MLflow Tracking

  3. MLflow
     ▪ Open-source platform for the machine learning lifecycle
     ▪ Operationalizing machine learning
     ▪ Developed by Databricks
     ▪ Pre-installed on the Databricks Runtime for ML

  4. Core Machine Learning Issues
     ▪ Keeping track of experiments or model development
     ▪ Reproducing code
     ▪ Comparing models
     ▪ Standardization of packaging and deploying models
     MLflow addresses these issues.

  5. MLflow Components
     ▪ MLflow Tracking
     ▪ MLflow Projects
     ▪ MLflow Models
     ▪ MLflow Plugins
     ▪ APIs: CLI, Python, R, Java, REST

  6. MLflow Tracking
     ▪ Logging API
     ▪ Specific to machine learning
     ▪ Library and environment agnostic
     ▪ Runs: executions of data science code (e.g. a model build, an optimization)
     ▪ Experiments: aggregations of runs, typically corresponding to a data science project

  7. What Gets Tracked
     ▪ Parameters: key-value pairs of parameters (e.g. hyperparameters)
     ▪ Metrics: evaluation metrics (e.g. RMSE)
     ▪ Artifacts: arbitrary output files (e.g. images, pickled models, data files)
     ▪ Source: the source code from the run
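A minimal sketch of logging each of these with the MLflow Python API and a Spark ML model; the toy DataFrame, column names, and parameter values below are made up for illustration:

```python
import mlflow
import mlflow.spark
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.getOrCreate()

# Hypothetical toy data: predict price from two features
df = spark.createDataFrame(
    [(1.0, 2.0, 10.0), (2.0, 1.0, 12.0), (3.0, 4.0, 20.0), (4.0, 3.0, 22.0)],
    ["bedrooms", "bathrooms", "price"],
)
train_df = VectorAssembler(
    inputCols=["bedrooms", "bathrooms"], outputCol="features"
).transform(df)

with mlflow.start_run(run_name="lr-example"):
    # Parameters: key-value pairs such as hyperparameters
    mlflow.log_param("regParam", 0.1)

    lr = LinearRegression(featuresCol="features", labelCol="price", regParam=0.1)
    model = lr.fit(train_df)

    # Metrics: evaluation metrics such as RMSE
    preds = model.transform(train_df)
    rmse = RegressionEvaluator(labelCol="price", metricName="rmse").evaluate(preds)
    mlflow.log_metric("rmse", rmse)

    # Artifacts: arbitrary output files, here the fitted model itself
    mlflow.spark.log_model(model, "model")
```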

  8. Examining Past Runs
     ▪ Querying past runs via the API
       ▪ MlflowClient object
       ▪ List experiments
       ▪ Search runs
       ▪ Return run metrics
     ▪ MLflow UI
       ▪ Built into the Databricks platform
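A sketch of querying past runs programmatically; the experiment ID and metric name are placeholders, and older MLflow versions expose list_experiments() where newer ones use search_experiments():

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# List experiments known to the tracking server
for exp in client.search_experiments():
    print(exp.experiment_id, exp.name)

# Search runs within one experiment, ordered by a logged metric
runs = client.search_runs(
    experiment_ids=["0"],              # placeholder experiment ID
    order_by=["metrics.rmse ASC"],
    max_results=5,
)
for run in runs:
    print(run.info.run_id, run.data.params, run.data.metrics)
```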

  9. MLFLOW TRACKING DEMO

  10. MLflow Model Registry

  11. MLflow Model Registry
      ▪ Collaborative, centralized model hub
      ▪ Facilitate experimentation, testing, and production
      ▪ Integrate with approval and governance workflows
      ▪ Monitor ML deployments and their performance
      Databricks MLflow Blog Post
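A minimal sketch of registering a tracked model and moving it between stages; the run ID and model name are placeholders, and the registry requires a registry-enabled tracking server (e.g. Databricks):

```python
import mlflow
from mlflow.tracking import MlflowClient

# Placeholder run ID from an earlier tracked run, and an example model name
run_id = "<run-id-from-a-tracked-run>"
model_name = "example-price-model"

# Register the model logged under that run as a new version in the registry
model_details = mlflow.register_model(model_uri=f"runs:/{run_id}/model", name=model_name)

# Move the new version through lifecycle stages (e.g. Staging, Production)
client = MlflowClient()
client.transition_model_version_stage(
    name=model_details.name,
    version=model_details.version,
    stage="Staging",
)
```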

  12. MLFLOW MODEL REGISTRY DEMO

  13. MLFLOW LAB

  14. Decision Trees

  15. Decision Making
      [Diagram: a decision tree for accepting a job offer. Root node: Salary > $50,000? No → Decline Offer (leaf). Yes → decision node: Commute > 1 hr? Yes → Decline Offer (leaf). No → decision node: Offers Free Coffee? Yes → Accept Offer (leaf). No → Decline Offer (leaf).]

  16. Determining Splits
      [Diagram: two candidate splits, Commute? (< 30 min / 30 min - 1 hr / > 1 hr) vs. Bonus? (Yes / No).]
      Commute is a better choice because it provides information about the classification.

  17. Creating Decision Boundaries
      [Diagram: the same job-offer tree (Salary > $50,000, Commute > 1 hr) drawn as axis-aligned boundaries on a Salary vs. Commute plot, splitting the plane into Accept Offer and Decline Offer regions at $50,000 and 1 hour.]

  18. Lines vs. Boundaries
      ▪ Linear Regression: lines through data; assumes a linear relationship
      ▪ Decision Trees: boundaries instead of lines; learn complex relationships
      [Diagram: a fitted regression line on an X vs. Y plot alongside axis-aligned decision boundaries on a Salary vs. Commute plot.]

  19. Linear Regression or Decision Tree? It depends on the data...

  20. Tree Depth
      Tree depth: the length of the longest path from the root node to a leaf node.
      [Diagram: the job-offer tree annotated with depths 0 through 3, from the root node (Salary > $50,000) down to the leaf nodes.]
      Note: shallow trees tend to underfit, and deep trees tend to overfit.
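As a rough Spark ML sketch of this knob (toy data and column names are made up), maxDepth caps how deep the tree can grow:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier

spark = SparkSession.builder.getOrCreate()

# Hypothetical job-offer data: salary (thousands), commute (hours), label (1 = accept)
df = spark.createDataFrame(
    [(60.0, 0.5, 1), (40.0, 0.5, 0), (80.0, 1.5, 0), (55.0, 0.2, 1),
     (90.0, 0.8, 1), (45.0, 2.0, 0)],
    ["salary", "commute", "label"],
)
train_df = VectorAssembler(
    inputCols=["salary", "commute"], outputCol="features"
).transform(df)

# maxDepth caps tree depth: very shallow trees tend to underfit, very deep ones overfit
for depth in [1, 3, 5]:
    model = DecisionTreeClassifier(maxDepth=depth).fit(train_df)
    print(f"maxDepth={depth}: actual depth={model.depth}, nodes={model.numNodes}")
```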

  21. Underfitting vs. Overfitting
      [Diagram: three fits to the same data, labeled Underfitting, Just Right, and Overfitting.]

  22. Additional Resource: R2D3 has an excellent visualization of how decision trees work.

  23. DECISION TREE DEMO

  24. Random Forests

  25. Decision Trees
      Pros:
      ▪ Interpretable
      ▪ Simple
      ▪ Classification and regression
      ▪ Nonlinear relationships
      Cons:
      ▪ Poor accuracy
      ▪ High variance

  26. Bias vs. Variance

  27. Bias-Variance Tradeoff
      Error = Variance + Bias² + noise
      ▪ Reduce bias: build more complex models
      ▪ Reduce variance: use a lot of data; build simple models
      ▪ What about the noise?
      [Diagram: total error, variance, and bias² plotted against model complexity, with the optimum model complexity at the minimum of the total error curve.]

  28. https://www.explainxkcd.com/wiki/index.php/2021:_Software_Development

  29. Building Five Hundred Decision Trees
      ▪ Using more data reduces variance for one model
      ▪ Averaging more predictions reduces prediction variance
      ▪ But that would require more decision trees
      ▪ And we only have one training set … or do we?

  30. Bootstrap Sampling
      A method for simulating N new datasets:
      1. Take a sample with replacement from the original training set
      2. Repeat N times
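One hedged way to simulate bootstrapped datasets in Spark is sampling with replacement; note that sample(withReplacement=True, fraction=1.0) returns approximately, not exactly, N rows:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical training set with 100 observations
train_df = spark.range(100).withColumnRenamed("id", "observation")

# Simulate 4 bootstrapped datasets: each is a sample with replacement,
# roughly the same size as the original training set
bootstraps = [
    train_df.sample(withReplacement=True, fraction=1.0, seed=i)
    for i in range(4)
]
for i, b in enumerate(bootstraps, start=1):
    print(f"Bootstrap {i}: {b.count()} rows, "
          f"{b.distinct().count()} distinct original observations")
```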

  31. Bootstrap Visualization
      [Diagram: a training set (N = 100) resampled into four bootstrapped samples, Bootstrap 1 through Bootstrap 4 (each N = 100).]
      Why are some points in the bootstrapped samples not selected?

  32. Training Set Coverage
      Assume we are bootstrapping N draws from a training set with N observations ...
      ▪ Probability of an element getting picked in each draw: 1/N
      ▪ Probability of an element not getting picked in each draw: 1 - 1/N
      ▪ Probability of an element not getting drawn in the entire sample: (1 - 1/N)^N
      As N → ∞, the probability for each element of not getting picked in a sample approaches 1/e ≈ 0.368.
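A quick numeric check of that limit in plain Python:

```python
import math

# Probability that a given observation is never drawn in N draws with replacement
for n in [10, 100, 1000, 100000]:
    p_missed = (1 - 1 / n) ** n
    print(f"N = {n:>6}: (1 - 1/N)^N = {p_missed:.4f}")

print(f"Limit 1/e        = {1 / math.e:.4f}")
```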

  33. Bootstrap Aggregating
      ▪ Train a tree on each sample, and average the predictions
      ▪ This is bootstrap aggregating, commonly referred to as bagging
      [Diagram: Bootstraps 1 through 4 each train a decision tree; the trees' predictions are combined into a final prediction.]

  34. Random Forest Algorithm
      [Diagram: the full training data is resampled into Bootstrap 1 through Bootstrap K, each used to train a separate tree.]
      At each split, a subset of features is considered to ensure each tree is different.

  35. Random Forest Aggregation
      [Diagram: a scoring record is passed to every tree; the trees' predictions are aggregated into a final prediction.]
      ▪ Majority voting for classification
      ▪ Mean for regression
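A minimal Spark ML random forest sketch (toy data and values are made up); numTrees sets how many bootstrapped trees are built and featureSubsetStrategy controls the feature subset considered at each split:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor

spark = SparkSession.builder.getOrCreate()

# Hypothetical toy data
df = spark.createDataFrame(
    [(1.0, 0.5, 10.0), (2.0, 1.5, 14.0), (3.0, 0.2, 21.0), (4.0, 2.0, 18.0)],
    ["x1", "x2", "label"],
)
train_df = VectorAssembler(inputCols=["x1", "x2"], outputCol="features").transform(df)

rf = RandomForestRegressor(numTrees=100, maxDepth=5, featureSubsetStrategy="auto")
model = rf.fit(train_df)

# For regression, the prediction is the mean of the individual trees' predictions
model.transform(train_df).select("features", "prediction").show()
```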

  36. RANDOM FOREST DEMO

  37. Hyperparameter Tuning

  38. What is a Hyperparameter?
      A parameter whose value is used to control the training process.
      Examples for random forest:
      ▪ Tree depth
      ▪ Number of trees
      ▪ Number of features to consider
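In Spark ML these show up as estimator params; a small sketch of inspecting and setting them (the values chosen are arbitrary examples):

```python
from pyspark.ml.regression import RandomForestRegressor

rf = RandomForestRegressor()

# Each hyperparameter is exposed as a param with documentation and a default value
print(rf.explainParam("maxDepth"))
print(rf.explainParam("numTrees"))
print(rf.explainParam("featureSubsetStrategy"))

# Setting hyperparameters controls the training process
rf = rf.setMaxDepth(5).setNumTrees(100)
```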

  39. Selecting Hyperparameter Values
      ▪ Build a model for each hyperparameter value
      ▪ Evaluate each model to identify the optimal hyperparameter value
      ▪ What dataset should we use to train and evaluate?
      [Diagram: the data split into Training, Validation, and Test sets.]
      What if there isn’t enough data to split into three separate sets?

  40. K-Fold Cross Validation
      ▪ Pass 1: Training | Training | Validation
      ▪ Pass 2: Training | Validation | Training
      ▪ Pass 3: Validation | Training | Training
      Average the validation errors across passes to identify the optimal hyperparameter values.
      Final pass: train with the optimal hyperparameters and evaluate on the Test set.

  41. Optimizing Hyperparameter Values: Grid Search
      ▪ Train and validate every unique combination of hyperparameters
      Candidate values: Tree Depth ∈ {5, 8}, Number of Trees ∈ {2, 4}
      Combinations: (5, 2), (5, 4), (8, 2), (8, 4)
      Question: With 3-fold cross validation, how many models will this build?
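A hedged Spark ML sketch of this grid combined with the 3-fold cross validation from the previous slide; the toy DataFrame and column names are made up:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

spark = SparkSession.builder.getOrCreate()

# Hypothetical toy data
df = spark.createDataFrame(
    [(1.0, 0.2, 10.0), (2.0, 0.4, 14.0), (3.0, 0.1, 21.0),
     (4.0, 0.9, 18.0), (5.0, 0.5, 30.0), (6.0, 0.3, 33.0)],
    ["x1", "x2", "label"],
)
train_df = VectorAssembler(inputCols=["x1", "x2"], outputCol="features").transform(df)

rf = RandomForestRegressor(featuresCol="features", labelCol="label")

# The 2 x 2 grid from the slide: tree depth {5, 8} x number of trees {2, 4}
param_grid = (ParamGridBuilder()
              .addGrid(rf.maxDepth, [5, 8])
              .addGrid(rf.numTrees, [2, 4])
              .build())

# 3-fold cross validation: every combination is trained and validated on each fold
cv = CrossValidator(estimator=rf,
                    estimatorParamMaps=param_grid,
                    evaluator=RegressionEvaluator(labelCol="label", metricName="rmse"),
                    numFolds=3)
cv_model = cv.fit(train_df)

print(cv_model.avgMetrics)    # average RMSE for each of the 4 combinations
best_rf = cv_model.bestModel  # refit on all training data with the best combination
```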

  42. HYPERPARAMETER TUNING DEMO

  43. HYPERPARAMETER TUNING LAB

  44. Hyperparameter Tuning with Hyperopt

  45. Problems with Grid Search
      ▪ Exhaustive enumeration is expensive
      ▪ Manually determined search space
      ▪ Past information on good hyperparameters isn’t used
      So what do you do if…
      ▪ You have a training budget
      ▪ You have a non-parametric search space
      ▪ You want to pick your hyperparameters based on past results

  46. Hyperopt
      ▪ Open-source Python library
      ▪ Optimization over awkward search spaces
        ▪ Serial
        ▪ Parallel
        ▪ Spark integration
      ▪ Three core algorithms for optimization:
        ▪ Random Search
        ▪ Tree of Parzen Estimators (TPE)
        ▪ Adaptive TPE
      ▪ Paper

  47. Optimizing Hyperparameter Values: Random Search
      ▪ Generally outperforms grid search
      ▪ Can struggle on some datasets (e.g. convex spaces)

  48. Optimizing Hyperparameter Values: Tree of Parzen Estimators
      ▪ Meta-learner, Bayesian process
      ▪ Non-parametric densities
      ▪ Returns candidate hyperparameters based on best expected improvement
      ▪ Provide a range and distribution for continuous and discrete values
      ▪ Adaptive TPE better tunes the search space
        ▪ Freezes hyperparameters
        ▪ Tunes the number of random trials before TPE
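A hedged Hyperopt sketch using TPE; the search space, budget, and objective below are illustrative (in the lab the objective would train and evaluate a real model), and SparkTrials assumes a Spark cluster is available (use trials=None for a serial search):

```python
from hyperopt import fmin, tpe, hp, SparkTrials

# Range and distribution for each hyperparameter (discrete values via quniform)
search_space = {
    "max_depth": hp.quniform("max_depth", 2, 10, 1),
    "num_trees": hp.quniform("num_trees", 10, 200, 10),
}

# Stand-in objective: in practice this would train a model with the sampled
# hyperparameters and return its validation loss; a synthetic loss keeps the
# sketch self-contained
def objective(params):
    max_depth = int(params["max_depth"])
    num_trees = int(params["num_trees"])
    return (max_depth - 6) ** 2 + abs(num_trees - 100) / 100

# TPE proposes new candidates based on past results; SparkTrials evaluates
# trials in parallel on a Spark cluster
best = fmin(
    fn=objective,
    space=search_space,
    algo=tpe.suggest,
    max_evals=32,
    trials=SparkTrials(parallelism=4),
)
print(best)   # best hyperparameter values found within the evaluation budget
```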

  49. HYPEROPT DEMO
