  1. crtrees: An Implementation of Classification and Regression Trees (CART) & Random Forests in Stata
  Ricardo Mora, Universidad Carlos III de Madrid
  Madrid, October 2019

  2. Outline: 1 Introduction, 2 Algorithms, 3 crtrees, 4 Examples, 5 Simulations

  3. Introduction

  4-6. Introduction: Decision trees
  Decision tree-structured models are predictive models that use tree-like diagrams.
  Classification trees: the target variable takes values in a finite set.
  Regression trees: the target variable takes real values.
  Each branch in the tree represents a sample split criterion.
  Several approaches: Chi-square Automatic Interaction Detection, CHAID (Kass 1980; Biggs et al. 1991); Classification and Regression Trees, CART (Breiman et al. 1984); and Random Forests (Breiman 2001; Scornet et al. 2015).

  7. Introduction: A simple tree structure
  y(x1, x2) = y1 if x1 ≤ s1
  y(x1, x2) = y2 if x1 > s1 and x2 ≤ s2
  y(x1, x2) = y3 if x1 > s1 and x2 > s2
  [Tree diagram: the root node splits on x1 ≤ s1; the "yes" branch ends in the terminal node y = y1, while the "no" branch splits on x2 ≤ s2, whose "yes" and "no" branches end in the terminal nodes y = y2 and y = y3.]
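  A minimal sketch of this prediction rule as code (illustrative only; the thresholds s1, s2 and leaf values y1, y2, y3 are placeholder arguments, not anything estimated by crtrees):

```python
def predict(x1, x2, s1, s2, y1, y2, y3):
    """Prediction rule of the simple two-split tree above."""
    if x1 <= s1:        # root split on x1
        return y1
    if x2 <= s2:        # second split, reached only when x1 > s1
        return y2
    return y3

# placeholder thresholds and leaf values, purely for illustration
print(predict(0.3, 0.9, s1=0.5, s2=0.7, y1=1.0, y2=2.0, y3=3.0))  # x1 <= s1, so 1.0
```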

  8-11. Introduction: CART
  The CART objective is to estimate a binary tree structure.
  It performs three algorithms:
  Tree-growing: step-optimal recursive partitioning (a globally optimal search is infeasible: least squares on 50 cells with trees of at most two terminal nodes already involves roughly 6 × 10^14 candidate models).
  Tree-pruning, and obtaining the honest tree.
  The last two algorithms attempt to minimize overfitting (growing trees with no external validity), using a test sample, cross-validation, or the bootstrap.
  In Stata, the module chaid performs CHAID analysis and the module cart performs CART analysis for failure-time data.
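  One plausible reading of the 6 × 10^14 figure, which the slide does not derive: with 50 cells, the number of ways to split them into two non-empty groups (i.e., the number of distinct candidate trees with exactly two terminal nodes) is

  2^(50−1) − 1 = 2^49 − 1 ≈ 5.6 × 10^14,

  which is why the growing algorithm proceeds greedily, one step-optimal split at a time, rather than searching over all partitions.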

  12-14. Introduction: Random Forests
  Random forests is an ensemble learning method that generates predictions using tree structures.
  Ensemble learning method: use of many strategically generated models.
  First step: create a multitude of (presumably over-fitted) trees with the tree-growing algorithm. The multitude of trees is obtained by random sampling of observations (bagging) and by random choice of splitting variables.
  Second step: case predictions are built from the individual trees using modes (in classification) and averages (in regression), as sketched below.
  In Stata, sctree is a wrapper for the R functions tree(), randomForest(), and gbm(): classification trees with optimal pruning, bagging, boosting, and random forests.
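  A minimal sketch of the second step (aggregating per-tree predictions) in Python; tree_predictions is a hypothetical array with one row of predictions per tree, not output of crtrees or sctree:

```python
import numpy as np

def forest_predict(tree_predictions, classification=True):
    """Aggregate per-tree predictions (shape: n_trees x n_cases)."""
    tree_predictions = np.asarray(tree_predictions)
    if classification:
        # mode across trees: the most frequently predicted class per case
        preds = []
        for case in tree_predictions.T:
            values, counts = np.unique(case, return_counts=True)
            preds.append(values[np.argmax(counts)])
        return np.array(preds)
    # regression: average the per-tree predictions per case
    return tree_predictions.mean(axis=0)

# Three trees, two cases
print(forest_predict([[0, 1], [1, 1], [1, 0]]))                    # classes -> [1 1]
print(forest_predict([[1.0, 2.0], [1.5, 2.5], [2.0, 3.0]], False)) # means -> [1.5 2.5]
```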

  15. Algorithms

  16-19. Algorithms: Growing the tree (CART & Random Forests)
  Requires a so-called training or learning sample.
  At iteration i, with tree structure T_i, consider all terminal nodes t*(T_i).
  Classification: let i(T_i) be an overall impurity measure (using the Gini or entropy index).
  Regression: let i(T_i) be the residual sum of squares over all terminal nodes.
  The best split at iteration i identifies the terminal node and split criterion that maximize the impurity reduction i(T_i) − i(T_{i+1}); a sketch of the split search follows below.
  Recursive partitioning ends with the largest possible tree, T_MAX, where there are no nodes left to split or the number of observations reaches a lower limit (the splitting rule).
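  A minimal sketch, in Python, of the split search inside a single node for the classification case, using the Gini index; gini() and best_split() are illustrative helpers scanning one numeric predictor, not part of crtrees:

```python
import numpy as np

def gini(y):
    """Gini impurity of a node: 1 minus the sum of squared class proportions."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Scan all thresholds of one numeric predictor and return the split
    that maximizes the decrease in weighted impurity."""
    n = len(y)
    parent = gini(y)
    best = (None, 0.0)                   # (threshold, impurity decrease)
    for s in np.unique(x)[:-1]:          # candidate split points
        left, right = y[x <= s], y[x > s]
        child = (len(left) * gini(left) + len(right) * gini(right)) / n
        decrease = parent - child
        if decrease > best[1]:
            best = (s, decrease)
    return best

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_split(x, y))  # best threshold 3.0 with impurity decrease 0.5
```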

  20-23. Algorithms: Overfitting and aggregation bias
  In a trivial setting, the result is equivalent to dividing the sample into all possible cells and computing within-cell least squares.
  Overfitting: T_MAX will usually be too complex, in the sense that it has no external validity, and some terminal nodes should be aggregated.
  Moreover, a more simplified structure will normally lead to more accurate estimates, since the number of observations in each terminal node grows as aggregation takes place.
  However, if aggregation goes too far, aggregation bias becomes a serious problem.
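  To make the trade-off concrete (an illustrative decomposition, not from the slides): suppose two sibling terminal nodes have true means μ1 and μ2, with n1 and n2 observations and error variance σ². Kept separate, each node mean has variance σ²/n_j and no bias; merged, the single prediction has the smaller variance σ²/(n1 + n2) but a squared bias of (μ1 − μ̄)² for cases in node 1, where μ̄ = (n1 μ1 + n2 μ2)/(n1 + n2). Aggregation therefore reduces variance, but once it joins nodes with genuinely different means it introduces aggregation bias.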

  24-27. Algorithms: Pruning the tree: error-complexity clustering (CART)
  In order to avoid overfitting, CART identifies a sequence of nested trees that results from recursive aggregation of nodes of T_MAX with a clustering procedure.
  For a given value α, let R(α, T) = R(T) + α|T|, where |T| denotes the number of terminal nodes, or complexity, of tree T, and R(T) is the MSE in regression or the misclassification rate in classification.
  The optimal tree for a given α, T(α), minimizes R(α, T) within the set of subtrees of T_MAX; T(α) belongs to a much broader set than the sequence of trees obtained in the growing algorithm.
  Pruning identifies a sequence of positive real numbers {α_0, α_1, ..., α_M} such that α_j < α_{j+1} and T_MAX ≡ T(α_0) ≻ T(α_1) ≻ T(α_2) ≻ ... ≻ {root}.
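  A small numerical sketch of the error-complexity criterion (the subtree sizes and errors are made-up values, not crtrees output): for each α the preferred subtree minimizes R(T) + α|T|, and larger α favors smaller trees.

```python
# Hypothetical subtrees of T_MAX: (label, number of terminal nodes |T|, resubstitution error R(T))
subtrees = [("T_MAX", 8, 0.10), ("T_a", 4, 0.14), ("T_b", 2, 0.22), ("root", 1, 0.35)]

def best_subtree(alpha):
    """Return the subtree minimizing the error-complexity measure R(T) + alpha * |T|."""
    return min(subtrees, key=lambda t: t[2] + alpha * t[1])

for alpha in (0.0, 0.02, 0.05, 0.2):
    label, size, err = best_subtree(alpha)
    print(f"alpha={alpha:.2f}: best subtree {label} with {size} terminal nodes")
    # as alpha grows, the minimizer moves from T_MAX to T_a, T_b, and finally the root
```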

  28. Algorithms: Honest tree (CART)
  Out of the sequence of optimal trees, {T(α_j)}_j, T_MAX has the lowest R(T) in the learning sample by construction, and R(·) increases with α.
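  The honest tree is then chosen by re-estimating R(·) for each T(α_j) on data not used for growing (a test sample, cross-validation, or the bootstrap, as mentioned earlier); this is standard CART practice rather than something shown on this slide. A sketch with made-up error numbers:

```python
# Hypothetical pruning sequence: (alpha_j, |T(alpha_j)|, learning-sample R, test-sample R)
sequence = [(0.00, 8, 0.10, 0.30),
            (0.02, 4, 0.14, 0.21),
            (0.05, 2, 0.22, 0.24),
            (0.20, 1, 0.35, 0.36)]

# Learning-sample error is lowest for T_MAX by construction, but the honest
# choice minimizes the error re-estimated on an independent test sample.
honest = min(sequence, key=lambda t: t[3])
print(f"honest tree: alpha={honest[0]}, {honest[1]} terminal nodes, test R={honest[3]}")
```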
