SLIDE 1

CISC 4631 Data Mining

Lecture 04:

  • Decision Trees

These slides are based on the slides by

  • Tan, Steinbach and Kumar (textbook authors)
  • Eamonn Keogh (UC Riverside)
  • Raymond Mooney (UT Austin)

SLIDE 2

Classification: Definition

  • Given a collection of records (the training set)
    – Each record contains a set of attributes; one of the attributes is the class.
  • Find a model for the class attribute as a function of the values of the other attributes.
  • Goal: previously unseen records should be assigned a class as accurately as possible.
    – A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets: the training set is used to build the model and the test set is used to validate it. A minimal sketch of this workflow follows below.

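As a concrete illustration of the train/validate workflow above, here is a minimal sketch using scikit-learn; the synthetic dataset and all parameter values are illustrative assumptions, not part of the original slides.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Illustrative synthetic data: records with attributes X and a class label y.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Divide the given data set into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Build the model on the training set...
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# ...and use the test set to estimate accuracy on unseen records.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```
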
SLIDE 3

Illustrating Classification Task

[Figure: the Training Set feeds a Learning algorithm, which learns a Model (Induction); the Model is then applied to the Test Set (Deduction).]

Training Set:
Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set:
Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

SLIDE 4

Classification Techniques

  • Decision Tree-based Methods
  • Rule-based Methods
  • Memory-based reasoning
  • Neural Networks
  • Naïve Bayes and Bayesian Belief Networks
  • Support Vector Machines

SLIDE 5

Example of a Decision Tree

Training Data:
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)

Refund?
├─ Yes → NO
└─ No → MarSt?
   ├─ Married → NO
   └─ Single, Divorced → TaxInc?
      ├─ < 80K → NO
      └─ > 80K → YES

SLIDE 6

Another Example of Decision Tree

(Same training data as Slide 5.)

MarSt?
├─ Married → NO
└─ Single, Divorced → Refund?
   ├─ Yes → NO
   └─ No → TaxInc?
      ├─ < 80K → NO
      └─ > 80K → YES

There could be more than one tree that fits the same data!

SLIDE 7

Decision Tree Classification Task

[Figure: the same pipeline as Slide 3, with the learning algorithm instantiated as a Tree Induction algorithm and the learned model as a Decision Tree; the training and test tables are those of Slide 3.]

SLIDE 8

Apply Model to Test Data

(Using the decision tree of Slide 5.)

Test Data:
Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Start from the root of the tree.

SLIDE 9

Apply Model to Test Data

(Animation step: same tree and test record; the traversal advances one node.)

SLIDE 10

Apply Model to Test Data

(Animation step: same tree and test record; the traversal advances one node.)

SLIDE 11

Apply Model to Test Data

(Animation step: same tree and test record; the traversal advances one node.)

SLIDE 12

Apply Model to Test Data

(Animation step: same tree and test record; the traversal advances one node.)

SLIDE 13

Apply Model to Test Data

Test Data:
Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Refund = No → MarSt = Married → leaf NO: assign Cheat to "No". (A code version of this traversal follows below.)

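The traversal above is easy to express directly in code. Below is a minimal sketch of the Slide 5 tree as nested conditionals; the function name and record encoding are assumptions for illustration.

```python
def classify_cheat(refund: str, marital_status: str, taxable_income: float) -> str:
    """Hard-coded version of the Slide 5 decision tree."""
    if refund == "Yes":              # Refund? -> Yes branch
        return "No"
    if marital_status == "Married":  # MarSt? -> Married branch
        return "No"
    # Single or Divorced: test taxable income against the 80K threshold.
    return "Yes" if taxable_income > 80 else "No"

# The test record from Slides 8-13: Refund=No, Married, 80K.
print(classify_cheat("No", "Married", 80))  # -> "No"
```
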
SLIDE 14

Decision Tree Terminology

[Figure not captured: diagram labeling the parts of a decision tree (root node, internal decision nodes, branches, leaf nodes).]

SLIDE 15

Decision Tree Classification Task

(Repeat of Slide 7: the decision tree classification pipeline.)

SLIDE 16

Decision Tree Induction

  • Many algorithms:
    – Hunt’s Algorithm (one of the earliest)
    – CART
    – ID3, C4.5
    – SLIQ, SPRINT
  • John Ross Quinlan is a computer science researcher in data mining and decision theory. He has contributed extensively to the development of decision tree algorithms, including inventing the canonical ID3 and C4.5 algorithms.

SLIDE 17

Decision Tree Classifier

[Scatter plot: Antenna Length (1-10) vs. Abdomen Length (1-10) for katydids and grasshoppers.]

Abdomen Length > 7.1?
├─ yes → Katydid
└─ no → Antenna Length > 6.0?
   ├─ yes → Katydid
   └─ no → Grasshopper

SLIDE 18

Decision trees predate computers

[Figure: a hand-drawn insect identification key using tests such as "Antennae shorter than body?", "3 tarsi?", and "Foretibia has ears?" to separate Grasshopper, Cricket, Katydids, and Camel Cricket.]

SLIDE 19

Definition

  • A decision tree is a classifier in the form of a tree structure
    – Decision node: specifies a test on a single attribute
    – Leaf node: indicates the value of the target attribute
    – Arc/edge: a split on one attribute
    – Path: a disjunction of tests leading to the final decision
  • Decision trees classify instances or examples by starting at the root of the tree and moving through it until a leaf node is reached.

SLIDE 20

Decision Tree Classification

  • Decision tree generation consists of two phases
    – Tree construction
      • At the start, all the training examples are at the root
      • Partition the examples recursively based on selected attributes
    – Tree pruning
      • Identify and remove branches that reflect noise or outliers
  • Use of a decision tree: classifying an unknown sample
    – Test the attribute values of the sample against the decision tree

SLIDE 21

Decision Tree Representation

  • Each internal node tests an attribute
  • Each branch corresponds to an attribute value
  • Each leaf node assigns a classification

Outlook?
├─ sunny → Humidity?
│  ├─ high → no
│  └─ normal → yes
├─ overcast → yes
└─ rain → Wind?
   ├─ strong → no
   └─ weak → yes

SLIDE 22

How do we construct the decision tree?

  • Basic algorithm (a greedy algorithm)
    – The tree is constructed in a top-down, recursive, divide-and-conquer manner
    – At the start, all the training examples are at the root
    – Attributes are categorical (if continuous-valued, they can be discretized in advance)
    – Examples are partitioned recursively based on selected attributes
    – Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
  • Conditions for stopping partitioning
    – All samples for a given node belong to the same class
    – There are no remaining attributes for further partitioning (majority voting is employed to classify the leaf)
    – There are no samples left

SLIDE 23

Top-Down Decision Tree Induction

  • Main loop (a sketch in code follows below):
    1. A ← the "best" decision attribute for the next node
    2. Assign A as the decision attribute for the node
    3. For each value of A, create a new descendant of the node
    4. Sort the training examples to the leaf nodes
    5. If the training examples are perfectly classified, then STOP; else iterate over the new leaf nodes

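A minimal sketch of the main loop above as a recursive Python function. Entropy-based "best attribute" selection is assumed (information gain is defined on later slides), and the data representation (a list of dicts plus a class key) and the demo values are illustrative assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels (defined on Slide 31)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, attributes, target):
    """Pick the attribute with the highest information gain."""
    def gain(a):
        rem = sum(
            (len(sub) / len(rows)) * entropy([r[target] for r in sub])
            for v in {r[a] for r in rows}
            for sub in [[r for r in rows if r[a] == v]]
        )
        return entropy([r[target] for r in rows]) - rem
    return max(attributes, key=gain)

def build_tree(rows, attributes, target):
    labels = [r[target] for r in rows]
    if len(set(labels)) == 1:          # all samples in one class -> leaf
        return labels[0]
    if not attributes:                 # no attributes left -> majority vote
        return Counter(labels).most_common(1)[0][0]
    a = best_attribute(rows, attributes, target)
    rest = [x for x in attributes if x != a]
    return {a: {v: build_tree([r for r in rows if r[a] == v], rest, target)
                for v in {r[a] for r in rows}}}

# Tiny demo with weather-style attributes as on Slide 21 (values illustrative):
data = [
    {"Outlook": "sunny", "Wind": "weak", "Play": "no"},
    {"Outlook": "sunny", "Wind": "strong", "Play": "no"},
    {"Outlook": "overcast", "Wind": "weak", "Play": "yes"},
    {"Outlook": "rain", "Wind": "weak", "Play": "yes"},
    {"Outlook": "rain", "Wind": "strong", "Play": "no"},
]
print(build_tree(data, ["Outlook", "Wind"], "Play"))
```
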
SLIDE 24

Tree Induction

  • Greedy strategy
    – Split the records based on an attribute test that optimizes a certain criterion.
  • Issues
    – Determine how to split the records
      • How to specify the attribute test condition?
      • How to determine the best split?
    – Determine when to stop splitting

SLIDE 25

How To Split Records

  • Random split
    – The tree can grow huge
    – Such trees are hard to understand
    – Larger trees are typically less accurate than smaller trees
  • Principled criterion
    – Selection of an attribute to test at each node: choosing the most useful attribute for classifying examples. How? Information gain:
      • measures how well a given attribute separates the training examples according to their target classification
      • is used to select among the candidate attributes at each step while growing the tree

SLIDE 26

Tree Induction

  • Greedy strategy
    – Split the records based on an attribute test that optimizes a certain criterion.
    – Hunt’s algorithm: recursively partition training records into successively purer subsets.
  • How to measure purity/impurity?
    – Entropy and information gain (covered in the lecture slides)
    – Gini (covered in the textbook)
    – Classification error

SLIDE 27

How to determine the Best Split

Before splitting: 10 records of class C0 and 10 records of class C1.

Own Car?     Yes: C0 6, C1 4       No: C0 4, C1 6
Car Type?    Family: C0 1, C1 3    Sports: C0 8, C1 0    Luxury: C0 1, C1 7
Student ID?  c1: C0 1, C1 0  ...  c10: C0 1, C1 0  c11: C0 0, C1 1  ...  c20: C0 0, C1 1

Which test condition is the best? Why is Student ID a bad feature to use?

SLIDE 28

How to determine the Best Split

  • Greedy approach:
    – Nodes with a homogeneous class distribution are preferred
  • Need a measure of node impurity:

    C0: 5, C1: 5 -- non-homogeneous, high degree of impurity
    C0: 9, C1: 1 -- homogeneous, low degree of impurity

SLIDE 29

Picking a Good Split Feature

  • The goal is to have the resulting tree be as small as possible, per Occam's razor.
  • Finding a minimal decision tree (in nodes, leaves, or depth) is an NP-hard optimization problem.
  • The top-down divide-and-conquer method does a greedy search for a simple tree but does not guarantee finding the smallest.
    – General lesson in Machine Learning and Data Mining: "Greed is good."
  • We want to pick a feature that creates subsets of examples that are relatively "pure" in a single class, so they are "closer" to being leaf nodes.
  • There are a variety of heuristics for picking a good test; a popular one is based on information gain, which originated with the ID3 system of Quinlan (1979).

(R. Mooney, UT Austin)

SLIDE 30

Information Theory

  • Think of playing "20 questions": I am thinking of an integer between 1 and 1,000 -- what is it? What is the first question you would ask?
  • What question will you ask? Why?
  • Entropy measures how much more information you need before you can identify the integer.
  • Initially, there are 1,000 possible values, which we assume are equally likely.
  • What is the maximum number of questions you need to ask? (Halving the range each time, about log2(1000), i.e. 10 yes/no questions, suffice.)

SLIDE 31

Entropy

  • Entropy (disorder, impurity) of a set of examples S, relative to a binary classification, is:

    Entropy(S) = -p1 log2(p1) - p0 log2(p0)

    where p1 is the fraction of positive examples in S and p0 is the fraction of negatives.
  • If all examples are in one category, entropy is zero (we define 0 log(0) = 0).
  • If examples are equally mixed (p1 = p0 = 0.5), entropy is at a maximum of 1.
  • Entropy can be viewed as the number of bits required on average to encode the class of an example in S, where data compression (e.g., Huffman coding) is used to give shorter codes to more likely cases.
  • For multi-class problems with c categories, entropy generalizes to:

    Entropy(S) = -Σ_{i=1..c} p_i log2(p_i)

(R. Mooney, UT Austin)

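A direct translation of the definitions above into code may help; this is a minimal sketch (passing the class probabilities in as a list is an assumption for illustration).

```python
import math

def entropy(probabilities):
    """Entropy in bits of a class distribution, with 0*log(0) defined as 0."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([1.0, 0.0]))  # one category only  -> 0.0
print(entropy([0.5, 0.5]))  # equally mixed      -> 1.0
```
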
SLIDE 32

Entropy Plot for Binary Classification

  • The entropy is 0 if the outcome is certain.
  • The entropy is at a maximum if we have no knowledge of the system (i.e., any outcome is equally possible).

[Figure: entropy of a 2-class problem plotted against the proportion of one of the two classes.]

SLIDE 33

Information Gain

  • Information gain is the expected reduction in entropy caused by partitioning the examples according to a given attribute.
  • It is the number of bits saved when encoding the target value of an arbitrary member of S, by knowing the value of attribute A.

SLIDE 34

Information Gain in Decision Tree Induction

  • Assume that using attribute A, the current set will be partitioned into some number of child sets
  • The encoding information that would be gained by branching on A:

    Gain(A) = E(current set) - E(all child sets)

Note: entropy is at its minimum if the collection of objects is completely uniform (all in one class).

SLIDE 35

Examples for Computing Entropy

Entropy(t) = -Σ_j p(j|t) log2 p(j|t)

NOTE: p(j|t) is computed as the relative frequency of class j at node t.

C1: 0, C2: 6   P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
               Entropy = -0 log2(0) - 1 log2(1) = -0 - 0 = 0

C1: 1, C2: 5   P(C1) = 1/6, P(C2) = 5/6
               Entropy = -(1/6) log2(1/6) - (5/6) log2(5/6) = 0.65

C1: 2, C2: 4   P(C1) = 2/6, P(C2) = 4/6
               Entropy = -(2/6) log2(2/6) - (4/6) log2(4/6) = 0.92

C1: 3, C2: 3   P(C1) = 3/6 = 1/2, P(C2) = 3/6 = 1/2
               Entropy = -(1/2) log2(1/2) - (1/2) log2(1/2) = 1/2 + 1/2 = 1

SLIDE 36

How to Calculate log2(x)

  • Many calculators only have buttons for log10(x) and loge(x) (note: "log" typically means log10)
  • You can calculate the log for any base b as follows (a code one-liner follows below):
    – logb(x) = logk(x) / logk(b)
    – Thus log2(x) = log10(x) / log10(2)
    – Since log10(2) = 0.301, just calculate the log base 10 and divide by 0.301 to get log base 2
    – You can use this for HW if needed

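The same change-of-base trick in Python, purely illustrative (math.log2 exists as a built-in as well):

```python
import math

x = 8
print(math.log10(x) / math.log10(2))  # change of base: 3.0
print(math.log(x, 2), math.log2(x))   # built-in equivalents: 3.0 3.0
```
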
SLIDE 37

Splitting Based on INFO...

  • Information Gain:

    GAIN_split = Entropy(p) - Σ_{i=1..k} (n_i / n) Entropy(i)

    where parent node p is split into k partitions, and n_i is the number of records in partition i.

    – Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
    – Used in ID3 and C4.5.
    – Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.

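A minimal sketch of GAIN_split computed from class counts; the list-of-lists encoding of the partitions is an assumption for illustration.

```python
import math

def entropy_from_counts(counts):
    """Entropy of a node from its class counts, e.g. [4, 5] for 4F/5M."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

def gain_split(parent_counts, partitions):
    """GAIN_split = Entropy(parent) - sum_i (n_i / n) * Entropy(partition i)."""
    n = sum(parent_counts)
    weighted = sum(sum(p) / n * entropy_from_counts(p) for p in partitions)
    return entropy_from_counts(parent_counts) - weighted

# Slide 41's split: a 4F/5M parent divided by Weight <= 160 into 4F/1M and 0F/4M.
print(round(gain_split([4, 5], [[4, 1], [0, 4]]), 4))  # -> 0.59
```
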
SLIDE 38

Continuous Attribute? (more on this later)

  • Each non-leaf node is a test, its edges partitioning the attribute's values into subsets (easy for a discrete attribute).
  • For a continuous attribute:
    – Partition the continuous values of attribute A into a discrete set of intervals
    – Create a new boolean attribute A_c by looking for a threshold c:

      A_c = true if A < c, false otherwise

    How do we choose c?

SLIDE 39

Person   Hair Length  Weight  Age  Class
Homer    0"           250     36   M
Marge    10"          150     34   F
Bart     2"           90      10   M
Lisa     6"           78      8    F
Maggie   4"           20      1    F
Abe      1"           170     70   M
Selma    8"           160     41   F
Otto     10"          180     38   M
Krusty   6"           200     45   M
Comic    8"           290     38   ?

SLIDE 40

Let us try splitting on Hair Length.

Hair Length <= 5?  (yes / no)

Entropy(S) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))

Entropy(4F, 5M) = -(4/9) log2(4/9) - (5/9) log2(5/9) = 0.9911

Gain(A) = E(current set) - E(all child sets)

Gain(Hair Length <= 5) = 0.9911 - (4/9 * 0.8113 + 5/9 * 0.9710) = 0.0911

SLIDE 41

Let us try splitting on Weight.

Weight <= 160?  (yes / no)

Entropy(4F, 5M) = -(4/9) log2(4/9) - (5/9) log2(5/9) = 0.9911

Gain(Weight <= 160) = 0.9911 - (5/9 * 0.7219 + 4/9 * 0) = 0.5900

SLIDE 42

Let us try splitting on Age.

Age <= 40?  (yes / no)

Entropy(4F, 5M) = -(4/9) log2(4/9) - (5/9) log2(5/9) = 0.9911

Gain(Age <= 40) = 0.9911 - (6/9 * 1 + 3/9 * 0.9183) = 0.0183

A code sketch verifying these three gains follows below.

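The three gains above can be checked mechanically. A minimal, self-contained sketch, with the class counts (F, M) read off the Slide 39 table:

```python
import math

def H(counts):
    """Entropy of a node from its class counts."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def gain(parent, parts):
    """Information gain of a split given parent and child class counts."""
    n = sum(parent)
    return H(parent) - sum(sum(p) / n * H(p) for p in parts)

splits = {
    "Hair Length <= 5": [[1, 3], [3, 2]],  # yes: 1F 3M, no: 3F 2M
    "Weight <= 160":    [[4, 1], [0, 4]],  # yes: 4F 1M, no: 0F 4M
    "Age <= 40":        [[3, 3], [1, 2]],  # yes: 3F 3M, no: 1F 2M
}
for name, parts in splits.items():
    print(f"Gain({name}) = {gain([4, 5], parts):.4f}")
# -> 0.0911, 0.5900, 0.0183: Weight gives the largest gain, as on Slide 43.
```
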
SLIDE 43

Weight <= 160?
├─ yes → Hair Length <= 2?
└─ no → Male

Of the 3 features we had, Weight was best. But while people who weigh over 160 are perfectly classified (as males), the under-160 people are not perfectly classified... So we simply recurse! This time we find that we can split on Hair Length, and we are done!

SLIDE 44

Weight <= 160?
├─ yes → Hair Length <= 2?
│  ├─ yes → Male
│  └─ no → Female
└─ no → Male

We don't need to keep the data around, just the test conditions. How would these people be classified?

SLIDE 45

It is trivial to convert Decision Trees to rules... (a code version follows below)

Weight <= 160?
├─ yes → Hair Length <= 2?
│  ├─ yes → Male
│  └─ no → Female
└─ no → Male

Rules to Classify Males/Females:
  If Weight greater than 160, classify as Male.
  Elseif Hair Length less than or equal to 2, classify as Male.
  Else classify as Female.

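The rule form maps directly onto an if/elif/else chain. A minimal sketch (the function name is assumed for illustration):

```python
def classify_person(weight: float, hair_length: float) -> str:
    """The Slide 45 rules, expressed as an if/elif/else chain."""
    if weight > 160:
        return "Male"
    elif hair_length <= 2:
        return "Male"
    else:
        return "Female"

# The unlabeled record from the Slide 39 table: Comic, 8" hair, weight 290.
print(classify_person(weight=290, hair_length=8))  # -> "Male"
```
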
SLIDE 46

Once we have learned the decision tree, we don't even need a computer!

[Figure: decision tree for a typical shared-care setting, applying the system for the diagnosis of prostatic obstructions.]

This decision tree is attached to a medical machine and is designed to help nurses make decisions about what type of doctor to call.

SLIDE 47

[Figure: a two-person dataset split by "Wears green?" -- Yes → Male, No → Female.]

The worked examples we have seen were performed on small datasets. However, with small datasets there is a great danger of overfitting the data...

When you have few datapoints, there are many possible splitting rules that perfectly classify the data but will not generalize to future datasets.

For example, the rule "Wears green?" perfectly classifies the data, but so does "Mother's name is Jacqueline?", and so does "Has blue shoes?"...

SLIDE 48

How to Find the Best Split: GINI

[Figure: a parent node with class counts C0 = N00, C1 = N01 and impurity M0 can be split either on A? (yes → Node N1, no → Node N2) or on B? (yes → Node N3, no → Node N4). Each node Ni has class counts (Ni0, Ni1) and impurity Mi; M12 and M34 are the weighted impurities of the A-split and B-split.]

Gain = M0 - M12 vs. M0 - M34

SLIDE 49

Measure of Impurity: GINI (at node t)

  • Gini index for a given node t with classes j:

    GINI(t) = 1 - Σ_j [p(j|t)]^2

    NOTE: p(j|t) is computed as the relative frequency of class j at node t.

  • Example: two classes C1 & C2, and node t has 5 C1 and 5 C2 examples. Compute Gini(t):
    – Gini(t) = 1 - [p(C1|t)^2 + p(C2|t)^2] = 1 - [(5/10)^2 + (5/10)^2] = 1 - [1/4 + 1/4] = 1/2
    – Do you think this Gini value indicates a good split or a bad split? Is it an extreme value?

SLIDE 50

More on Gini

  • The worst Gini corresponds to probabilities of 1/nc, where nc is the number of classes.
    – For 2-class problems the worst Gini is 1/2
  • How do we get the best Gini? Come up with an example for node t with 10 examples for classes C1 and C2:
    – 10 C1 and 0 C2 -- now what is the Gini?
      • 1 - [(10/10)^2 + (0/10)^2] = 1 - [1 + 0] = 0
    – So 0 is the best Gini
  • So for 2-class problems, Gini varies from 0 (best) to 1/2 (worst).

SLIDE 51

Some More Examples

  • Below are the Gini values for 4 nodes with different distributions, ordered from best to worst. See the next slide for details.
    – Note that thus far we are only computing GINI for one node. We need to compute it for a split and then compute the change in Gini from the parent node.

    C1: 0, C2: 6   Gini = 0.000
    C1: 1, C2: 5   Gini = 0.278
    C1: 2, C2: 4   Gini = 0.444
    C1: 3, C2: 3   Gini = 0.500

SLIDE 52

Examples for computing GINI

GINI(t) = 1 - Σ_j [p(j|t)]^2

C1: 0, C2: 6   P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
               Gini = 1 - P(C1)^2 - P(C2)^2 = 1 - 0 - 1 = 0

C1: 1, C2: 5   P(C1) = 1/6, P(C2) = 5/6
               Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278

C1: 2, C2: 4   P(C1) = 2/6, P(C2) = 4/6
               Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444

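The Gini computations above are essentially one line of code. A minimal sketch over class counts:

```python
def gini(counts):
    """GINI(t) = 1 - sum_j p(j|t)^2, from class counts at node t."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

for counts in ([0, 6], [1, 5], [2, 4], [3, 3]):
    print(counts, round(gini(counts), 3))  # -> 0.0, 0.278, 0.444, 0.5
```
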
SLIDE 53

Splitting Criteria based on Classification Error

  • Classification error at a node t:

    Error(t) = 1 - max_i P(i|t)

  • Measures the misclassification error made by a node.
  • Maximum (1 - 1/nc) when records are equally distributed among all classes, implying the least interesting information.
  • Minimum (0.0) when all records belong to one class, implying the most interesting information.

SLIDE 54

Examples for Computing Error

Error(t) = 1 - max_i P(i|t)

C1: 0, C2: 6   P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
               Error = 1 - max(0, 1) = 1 - 1 = 0

C1: 1, C2: 5   P(C1) = 1/6, P(C2) = 5/6
               Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6

C1: 2, C2: 4   P(C1) = 2/6, P(C2) = 4/6
               Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3

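For the comparison on the next slide, it may help to compute all three impurity measures side by side; a minimal sketch:

```python
import math

def impurities(counts):
    """Return (entropy, gini, classification error) for class counts at a node."""
    n = sum(counts)
    ps = [c / n for c in counts]
    entropy = -sum(p * math.log2(p) for p in ps if p > 0)
    gini = 1 - sum(p * p for p in ps)
    error = 1 - max(ps)
    return entropy, gini, error

for counts in ([0, 6], [1, 5], [2, 4], [3, 3]):
    e, g, err = impurities(counts)
    print(counts, f"entropy={e:.2f} gini={g:.3f} error={err:.3f}")
# All three are 0 for a pure node and maximal for the 50/50 node.
```
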
SLIDE 55

Comparison among Splitting Criteria

For a 2-class problem:

[Figure: entropy, Gini, and misclassification error plotted against the fraction of class-1 records; all three are 0 at the extremes and maximal at 0.5.]

SLIDE 56

Discussion

  • Error rate is often the metric used to evaluate a classifier (but not always)
    – So it seems reasonable to use error rate to determine the best split
    – That is, why not just use a splitting metric that matches the ultimate evaluation metric?
    – But this is wrong!
      • The reason is related to the fact that decision trees use a greedy strategy, so we need a splitting metric that leads to globally better results
      • The other metrics empirically outperform error rate, although there is no proof of this

SLIDE 57

DTs in practice...

  • Growing to purity is bad (overfitting)

[Scatter plot: x1 = petal length vs. x2 = sepal width.]

SLIDE 58

DTs in practice...

  • Growing to purity is bad (overfitting)

[Scatter plot as on Slide 57: x1 = petal length vs. x2 = sepal width.]

SLIDE 59

DTs in practice...

  • Growing to purity is bad (overfitting)
    – Terminate growth early
    – Grow to purity, then prune back

SLIDE 60

DTs in practice...

  • Growing to purity is bad (overfitting)

[Plot: x1 = petal length vs. x2 = sepal width. A leaf that is not statistically supportable: remove the split & merge the leaves.]

SLIDE 61

Avoid Overfitting in Classification (more on overfitting later)

  • The generated tree may overfit the training data
    – Too many branches, some of which may reflect anomalies due to noise or outliers
    – The result is poor accuracy for unseen samples
  • Two approaches to avoid overfitting (a code sketch follows below)
    – Prepruning: halt tree construction early -- do not split a node if this would result in the goodness measure falling below a threshold
      • Difficult to choose an appropriate threshold
    – Postpruning: remove branches from a "fully grown" tree -- get a sequence of progressively pruned trees
      • Use a set of data different from the training data to decide which is the "best pruned tree"

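One way to see both ideas in practice, sketched with scikit-learn (the parameter values and the iris dataset are illustrative assumptions): max_depth/min_samples_leaf act as prepruning, while cost-complexity pruning via ccp_alpha plays the role of postpruning, with a held-out split choosing the "best pruned tree".

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Prepruning: stop growing early via depth / leaf-size thresholds.
pre = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5).fit(X_train, y_train)

# Postpruning: grow fully, then pick the best tree from a pruning sequence,
# using data different from the training data (here, a validation split).
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)
best = max(
    (DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_train, y_train)
     for a in path.ccp_alphas),
    key=lambda t: t.score(X_val, y_val),
)
print(pre.score(X_val, y_val), best.score(X_val, y_val))
```
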
SLIDE 62

Tree Induction

  • Greedy strategy
    – Split the records based on an attribute test that optimizes a certain criterion.
  • Issues
    – Determine how to split the records
      • How to specify the attribute test condition?
      • How to determine the best split?
    – Determine when to stop splitting

SLIDE 63

How to Specify Test Condition?

  • Depends on attribute type
    – Nominal
    – Ordinal
    – Continuous
  • Depends on the number of ways to split
    – 2-way split
    – Multi-way split

SLIDE 64

Splitting Based on Nominal Attributes

  • Multi-way split: use as many partitions as there are distinct values.

    CarType → {Family}, {Sports}, {Luxury}

  • Binary split: divides values into two subsets; need to find the optimal partitioning.

    CarType → {Sports, Luxury} | {Family}   OR   CarType → {Family, Luxury} | {Sports}

SLIDE 65

Splitting Based on Ordinal Attributes

  • Multi-way split: use as many partitions as there are distinct values.

    Size → {Small}, {Medium}, {Large}

  • Binary split: divides values into two subsets; need to find the optimal partitioning.

    Size → {Small, Medium} | {Large}   OR   Size → {Medium, Large} | {Small}

  • What about this split?   Size → {Small, Large} | {Medium}

SLIDE 66

Splitting Based on Continuous Attributes

  • Different ways of handling
    – Discretization to form an ordinal categorical attribute
      • Static: discretize once at the beginning
      • Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering
    – Binary decision: (A < v) or (A >= v)
      • consider all possible splits and find the best cut (a threshold-search sketch follows below)
      • can be more compute-intensive

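A minimal sketch of the "consider all possible splits" idea: scan midpoints between sorted attribute values and keep the cut with the lowest weighted Gini. Using Gini here (rather than gain) and reusing the Slide 39 data are choices made for illustration.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_cut(values, labels):
    """Scan candidate thresholds (midpoints) and return the best (A < v) cut."""
    pairs = sorted(zip(values, labels))
    best_v, best_score = None, float("inf")
    for i in range(1, len(pairs)):
        v = (pairs[i - 1][0] + pairs[i][0]) / 2      # midpoint candidate
        left = [l for x, l in pairs if x < v]
        right = [l for x, l in pairs if x >= v]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if score < best_score:
            best_v, best_score = v, score
    return best_v, best_score

# Weights and classes from the Slide 39 table.
weights = [250, 150, 90, 78, 20, 170, 160, 180, 200]
classes = ["M", "F", "M", "F", "F", "M", "F", "M", "M"]
print(best_cut(weights, classes))  # the best cut falls between 160 and 170
```
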
SLIDE 67

Splitting Based on Continuous Attributes

(i) Binary split:     Taxable Income > 80K?  (Yes / No)
(ii) Multi-way split: Taxable Income? → < 10K, [10K, 25K), [25K, 50K), [50K, 80K), > 80K

SLIDE 68

Data Fragmentation

  • The number of instances gets smaller as you traverse down the tree
  • The number of instances at the leaf nodes could be too small to make any statistically significant decision

SLIDE 69

Search Strategy

  • Finding an optimal decision tree is NP-hard
  • The algorithm presented so far uses a greedy, top-down, recursive partitioning strategy to induce a reasonable solution

SLIDE 70

Expressiveness

  • Decision trees provide an expressive representation for learning discrete-valued functions
    – But they do not generalize well to certain types of Boolean functions
      • Example: the parity function
        – Class = 1 if there is an even number of Boolean attributes with truth value = True
        – Class = 0 if there is an odd number of Boolean attributes with truth value = True
      • For accurate modeling, we must have a complete tree
  • Not expressive enough for modeling continuous variables
    – Particularly when the test condition involves only a single attribute at a time

SLIDE 71

Decision Boundary

[Figure: the unit square split by x < 0.43; the left region is then split by y < 0.47 and the right by y < 0.33, giving four axis-parallel regions with the class counts shown at each leaf.]

  • The border line between two neighboring regions of different classes is known as the decision boundary
  • The decision boundary is parallel to the axes because each test condition involves a single attribute at a time

SLIDE 72

Oblique Decision Trees

[Figure: the split x + y < 1 separates Class = + from Class = -.]

  • The test condition may involve multiple attributes
  • More expressive representation
  • Finding the optimal test condition is computationally expensive

SLIDE 73

Vertical/Horizontal Boundaries

500 circular and 500 triangular data points.

Circular points: 0.5 <= sqrt(x1^2 + x2^2) <= 1
Triangular points: sqrt(x1^2 + x2^2) < 0.5 or sqrt(x1^2 + x2^2) > 1

SLIDE 74

Tree Replication

[Figure: a tree over tests P, Q, R, S in which the subtree testing Q and S appears in more than one branch.]

  • The same subtree appears in multiple branches

SLIDE 75

Model Evaluation

  • Metrics for Performance Evaluation

– How to evaluate the performance of a model?

  • Methods for Performance Evaluation

– How to obtain reliable estimates?

SLIDE 76

Which of the "Problems" can be solved by a Decision Tree?

[Figure: three 2-D scatter plots posing three classification "problems".]

1) Deep bushy tree   2) Useless   3) Deep bushy tree

The decision tree has a hard time with correlated attributes.

SLIDE 77

Advantages/Disadvantages of Decision Trees

  • Advantages:
    – Easy to understand (doctors love them!)
    – Easy to generate rules
  • Disadvantages:
    – May suffer from overfitting
    – Classifies by rectangular partitioning (so does not handle correlated features very well)
    – Can be quite large; pruning is necessary
    – Does not handle streaming data easily