CSC 411: Lecture 06: Decision Trees


  1. CSC 411: Lecture 06: Decision Trees. Class based on Raquel Urtasun & Rich Zemel's lectures. Sanja Fidler, University of Toronto. Jan 26, 2016.

  2. Today: Decision Trees ◮ entropy ◮ information gain

  3. Another Classification Idea. We tried linear classification (e.g., logistic regression) and nearest neighbors. Any other ideas? Pick an attribute and do a simple test. Conditioned on that choice, pick another attribute and do another test. In the leaves, assign a class by majority vote. Do the same for the other branches.

  4. Another Classification Idea. This gives axis-aligned decision boundaries.

  5. Decision Tree: Example. (Figure: an example decision tree; each internal test branches into Yes and No.)

  6. Decision Tree: Classification

  7. Example with Discrete Inputs. What if the attributes are discrete? Attributes: (table of attributes shown on the slide).

  8. Decision Tree: Example with Discrete Inputs. The tree to decide whether to wait (T) or not (F).

  9. Decision Trees. Internal nodes test attributes. Branching is determined by attribute value. Leaf nodes are outputs (class assignments).

  10. Decision Tree: Algorithm. Choose an attribute on which to descend at each level. Condition on earlier (higher) choices. Generally, restrict only one dimension at a time. Declare an output value when you reach the bottom. In the orange/lemon example we split each dimension only once, but that is not required.
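
A minimal prediction sketch of this procedure (not from the slides; the feature names and thresholds are made up for illustration): each internal node tests one dimension at a time, conditioned on the tests above it, and a leaf declares the output class.

```python
# Hypothetical axis-aligned tests in the spirit of the orange/lemon example;
# the thresholds and feature names are illustrative assumptions, not the slides'.
def classify(width: float, height: float) -> str:
    if height > 6.5:        # first test: restrict only the height dimension
        return "lemon"
    if width > 6.0:         # second test, conditioned on the first choice
        return "orange"
    return "lemon"

print(classify(width=7.0, height=5.0))   # -> "orange"
```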

  11. Decision Tree: Classification and Regression. Each path from root to a leaf defines a region $R_m$ of input space. Let $\{(x^{(m_1)}, t^{(m_1)}), \ldots, (x^{(m_k)}, t^{(m_k)})\}$ be the training examples that fall into $R_m$. Classification tree: ◮ discrete output ◮ leaf value $y_m$ typically set to the most common value in $\{t^{(m_1)}, \ldots, t^{(m_k)}\}$. Regression tree: ◮ continuous output ◮ leaf value $y_m$ typically set to the mean value in $\{t^{(m_1)}, \ldots, t^{(m_k)}\}$. Note: we will only talk about classification. [Slide credit: S. Russell]
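
A small sketch of the two leaf rules described above, assuming the targets falling into a region $R_m$ have been collected into a list:

```python
# Leaf values: majority vote for a classification tree, mean for a regression tree.
from collections import Counter
from statistics import mean

def classification_leaf(targets):
    # most common target among the training examples in the leaf's region
    return Counter(targets).most_common(1)[0][0]

def regression_leaf(targets):
    # average target among the training examples in the leaf's region
    return mean(targets)

print(classification_leaf(["orange", "lemon", "orange"]))  # -> "orange"
print(regression_leaf([2.0, 3.0, 7.0]))                    # -> 4.0
```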

  12. Expressiveness. Discrete-input, discrete-output case: ◮ decision trees can express any function of the input attributes ◮ e.g., for Boolean functions, each truth table row corresponds to a path to a leaf. Continuous-input, continuous-output case: ◮ can approximate any function arbitrarily closely. Trivially, there is a consistent decision tree for any training set, with one path to a leaf for each example (unless $f$ is nondeterministic in $x$), but it probably won't generalize to new examples. Need some kind of regularization to ensure more compact decision trees. [Slide credit: S. Russell]
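
As a tiny illustration of the "truth table row → path to leaf" point, here is a Boolean function (XOR, chosen for this sketch; it is not in the slides) written as nested attribute tests:

```python
# XOR expressed as a decision tree: one root-to-leaf path per truth-table row.
def xor_tree(a: bool, b: bool) -> bool:
    if a:                 # root test on attribute a
        if b:
            return False  # path (a=T, b=T) -> leaf F
        return True       # path (a=T, b=F) -> leaf T
    if b:
        return True       # path (a=F, b=T) -> leaf T
    return False          # path (a=F, b=F) -> leaf F

assert [xor_tree(a, b) for a in (False, True) for b in (False, True)] == [False, True, True, False]
```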

  13. How do we Learn a Decision Tree? How do we construct a useful decision tree?

  14. Learning Decision Trees. Learning the simplest (smallest) decision tree is an NP-complete problem [if you are interested, check Hyafil & Rivest '76]. Resort to a greedy heuristic: ◮ start from an empty decision tree ◮ split on the next best attribute ◮ recurse. What is the best attribute? We use information theory to guide us. [Slide credit: D. Sonntag]
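
A minimal sketch of this greedy recursion (an assumption of how it might be written, not the lecture's code); `best_attribute` is left as a parameter, and an information-gain-based version of it is sketched after slide 27:

```python
# Greedy tree growing: stop at a pure (or unsplittable) node, otherwise split on
# the best attribute and recurse into each branch. Examples are (features, label)
# pairs, with features stored as a dict keyed by attribute name.
from collections import Counter

def grow_tree(examples, attributes, best_attribute):
    labels = [t for _, t in examples]
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]      # leaf: majority class
    a = best_attribute(examples, attributes)             # greedy choice of split
    children = {}
    for v in {x[a] for x, _ in examples}:                # one branch per value of a
        subset = [(x, t) for x, t in examples if x[a] == v]
        children[v] = grow_tree(subset, [b for b in attributes if b != a], best_attribute)
    return (a, children)                                 # internal node
```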

  15. Choosing a Good Attribute. Which attribute is better to split on, $X_1$ or $X_2$? Idea: use the counts at the leaves to define probability distributions, so we can measure uncertainty. [Slide credit: D. Sonntag]

  16. Choosing a Good Attribute. Which attribute is better to split on, $X_1$ or $X_2$? ◮ Deterministic: good (all are true or all are false; just one class in the leaf) ◮ Uniform distribution: bad (all classes in the leaf equally probable) ◮ What about distributions in between? Note: let's take a slight detour and recall some concepts from information theory. [Slide credit: D. Sonntag]

  17. We Flip Two Different Coins. Sequence 1: 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 ... Sequence 2: 0 1 0 1 0 1 1 1 0 1 0 0 1 1 0 1 0 1 ... (Histograms of the two sequences: sequence 1 has 16 zeros and 2 ones; sequence 2 has 8 zeros and 10 ones.)

  18. Quantifying Uncertainty. Entropy $H$: $H(X) = -\sum_{x \in X} p(x) \log_2 p(x)$. For the two coin-flip sequences, with empirical distributions $(8/9, 1/9)$ and $(4/9, 5/9)$: $-\frac{8}{9}\log_2\frac{8}{9} - \frac{1}{9}\log_2\frac{1}{9} \approx \frac{1}{2}$ versus $-\frac{4}{9}\log_2\frac{4}{9} - \frac{5}{9}\log_2\frac{5}{9} \approx 0.99$. How surprised are we by a new value in the sequence? How much information does it convey?
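
A short check of the two numbers above (a sketch, not from the slides):

```python
# Entropy of a discrete distribution: H(X) = -sum_x p(x) log2 p(x).
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([8/9, 1/9]))   # sequence 1: ~0.50 bits (very predictable)
print(entropy([4/9, 5/9]))   # sequence 2: ~0.99 bits (close to a fair coin)
```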

  19. Quantifying Uncertainty. $H(X) = -\sum_{x \in X} p(x) \log_2 p(x)$. (Plot: entropy, in bits, versus the probability $p$ of heads for a binary variable; it is 0 at $p = 0$ and $p = 1$ and peaks at 1 bit at $p = 0.5$.)

  20. Entropy. "High entropy": ◮ the variable has a close-to-uniform distribution ◮ flat histogram ◮ values sampled from it are less predictable. "Low entropy": ◮ the distribution of the variable has many peaks and valleys ◮ the histogram has many lows and highs ◮ values sampled from it are more predictable. [Slide credit: Vibhav Gogate]

  21. Entropy of a Joint Distribution. Example: X = {Raining, Not raining}, Y = {Cloudy, Not cloudy}, with joint distribution

                    Cloudy    Not cloudy
      Raining       24/100    1/100
      Not raining   25/100    50/100

      $H(X, Y) = -\sum_{x \in X} \sum_{y \in Y} p(x, y) \log_2 p(x, y) = -\frac{24}{100}\log_2\frac{24}{100} - \frac{1}{100}\log_2\frac{1}{100} - \frac{25}{100}\log_2\frac{25}{100} - \frac{50}{100}\log_2\frac{50}{100} \approx 1.56$ bits
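
The joint entropy above can be verified directly from the table (a sketch):

```python
# H(X, Y) = -sum_{x,y} p(x, y) log2 p(x, y) for the rain/cloud table.
import math

joint = {("raining", "cloudy"): 24/100, ("raining", "not cloudy"): 1/100,
         ("not raining", "cloudy"): 25/100, ("not raining", "not cloudy"): 50/100}

H_XY = -sum(p * math.log2(p) for p in joint.values())
print(H_XY)   # ~1.56 bits
```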

  22. Specific Conditional Entropy. Example: X = {Raining, Not raining}, Y = {Cloudy, Not cloudy}, with the same joint table as above. What is the entropy of cloudiness Y, given that it is raining? $H(Y \mid X = x) = -\sum_{y \in Y} p(y \mid x) \log_2 p(y \mid x) = -\frac{24}{25}\log_2\frac{24}{25} - \frac{1}{25}\log_2\frac{1}{25} \approx 0.24$ bits. We used $p(y \mid x) = \frac{p(x, y)}{p(x)}$ and $p(x) = \sum_y p(x, y)$ (the sum along a row).
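
The same number can be reproduced by conditioning the table on the Raining row (a sketch):

```python
# H(Y | X = raining): normalize the Raining row of the joint table into p(y | x),
# then take its entropy.
import math

row = {"cloudy": 24/100, "not cloudy": 1/100}            # p(X = raining, y)
p_raining = sum(row.values())                            # p(X = raining) = 25/100
cond = {y: pxy / p_raining for y, pxy in row.items()}    # p(y | X = raining)

H_Y_given_raining = -sum(p * math.log2(p) for p in cond.values())
print(H_Y_given_raining)   # ~0.24 bits
```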

  23. Conditional Entropy. (Same joint table as above.) The expected conditional entropy: $H(Y \mid X) = \sum_{x \in X} p(x) H(Y \mid X = x) = -\sum_{x \in X} \sum_{y \in Y} p(x, y) \log_2 p(y \mid x)$

  24. Conditional Entropy. Example: X = {Raining, Not raining}, Y = {Cloudy, Not cloudy}, with the same joint table as above. What is the entropy of cloudiness, given the knowledge of whether or not it is raining? $H(Y \mid X) = \sum_{x \in X} p(x) H(Y \mid X = x) = \frac{1}{4} H(\text{cloudy} \mid \text{is raining}) + \frac{3}{4} H(\text{cloudy} \mid \text{not raining}) \approx 0.75$ bits
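
The expected conditional entropy can also be computed directly from the joint table using the second form of the formula (a sketch):

```python
# H(Y | X) = -sum_{x,y} p(x, y) log2 p(y | x) for the rain/cloud table.
import math

joint = {("raining", "cloudy"): 24/100, ("raining", "not cloudy"): 1/100,
         ("not raining", "cloudy"): 25/100, ("not raining", "not cloudy"): 50/100}
p_x = {}
for (x, _), p in joint.items():
    p_x[x] = p_x.get(x, 0.0) + p                  # marginal p(x): sum along each row

H_Y_given_X = -sum(p * math.log2(p / p_x[x]) for (x, _), p in joint.items())
print(H_Y_given_X)   # ~0.75 bits
```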

  25. Conditional Entropy. Some useful properties: ◮ $H$ is always non-negative ◮ Chain rule: $H(X, Y) = H(X \mid Y) + H(Y) = H(Y \mid X) + H(X)$ ◮ If X and Y are independent, then X doesn't tell us anything about Y: $H(Y \mid X) = H(Y)$ ◮ But Y tells us everything about Y: $H(Y \mid Y) = 0$ ◮ By knowing X, we can only decrease our uncertainty about Y: $H(Y \mid X) \le H(Y)$
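
A quick numeric check of the chain rule on the same table (a sketch):

```python
# Verify H(X, Y) = H(Y | X) + H(X) for the rain/cloud table.
import math

joint = {("raining", "cloudy"): 24/100, ("raining", "not cloudy"): 1/100,
         ("not raining", "cloudy"): 25/100, ("not raining", "not cloudy"): 50/100}
p_x = {}
for (x, _), p in joint.items():
    p_x[x] = p_x.get(x, 0.0) + p

H_XY = -sum(p * math.log2(p) for p in joint.values())                          # ~1.56
H_X = -sum(p * math.log2(p) for p in p_x.values())                             # ~0.81
H_Y_given_X = -sum(p * math.log2(p / p_x[x]) for (x, _), p in joint.items())   # ~0.75
print(abs(H_XY - (H_Y_given_X + H_X)) < 1e-9)   # True: the chain rule holds
```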

  26. Information Gain. (Same joint table as above.) How much information about cloudiness do we get by discovering whether or not it is raining? $IG(Y \mid X) = H(Y) - H(Y \mid X) \approx 0.25$ bits. This is also called the information gain in Y due to X. If X is completely uninformative about Y: $IG(Y \mid X) = 0$. If X is completely informative about Y: $IG(Y \mid X) = H(Y)$. How can we use this to construct our decision tree?
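
The 0.25-bit figure follows from the marginal entropy of cloudiness and the conditional entropy above (a sketch):

```python
# IG(Y | X) = H(Y) - H(Y | X) for the rain/cloud table.
import math

joint = {("raining", "cloudy"): 24/100, ("raining", "not cloudy"): 1/100,
         ("not raining", "cloudy"): 25/100, ("not raining", "not cloudy"): 50/100}
p_x, p_y = {}, {}
for (x, y), p in joint.items():
    p_x[x] = p_x.get(x, 0.0) + p
    p_y[y] = p_y.get(y, 0.0) + p

H_Y = -sum(p * math.log2(p) for p in p_y.values())                             # ~1.00
H_Y_given_X = -sum(p * math.log2(p / p_x[x]) for (x, _), p in joint.items())   # ~0.75
print(H_Y - H_Y_given_X)   # ~0.25 bits of information gain
```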

  27. Constructing Decision Trees. I made the fruit data partitioning just by eyeballing it. We can use the information gain to automate the process. At each level, one must choose: 1. which variable to split, and 2. possibly where to split it. Choose them based on how much information we would gain from the decision (choose the attribute that gives the highest gain).
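
A sketch of choosing the split by information gain, on a made-up toy dataset (the data and attribute names are illustrative assumptions); this fills in the `best_attribute` placeholder from the sketch after slide 14:

```python
# Pick the attribute whose split gives the highest information gain.
import math
from collections import Counter

def label_entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr):
    labels = [t for _, t in examples]
    h_after = 0.0
    for v in {x[attr] for x, _ in examples}:
        subset = [t for x, t in examples if x[attr] == v]
        h_after += (len(subset) / len(examples)) * label_entropy(subset)
    return label_entropy(labels) - h_after       # IG = H(Y) - H(Y | split on attr)

def best_attribute(examples, attributes):
    return max(attributes, key=lambda a: information_gain(examples, a))

# Hypothetical toy data: (features, label) pairs.
data = [({"raining": True,  "weekend": False}, "stay in"),
        ({"raining": True,  "weekend": True},  "stay in"),
        ({"raining": False, "weekend": True},  "go out"),
        ({"raining": False, "weekend": False}, "go out")]
print(best_attribute(data, ["raining", "weekend"]))   # -> "raining" (IG = 1 bit)
```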
