Decision Trees. Some exercises

1. Exemplifying how to compute information gains and how to work with decision stumps

CMU, 2013 fall, W. Cohen, E. Xing, sample questions, pr. 4
Timmy wants to know how to do well on the ML exam. He collects old statistics and decides to use decision trees to build his model. He has 9 data points and two features: "whether he stayed up late before the exam" (S) and "whether he attended all the classes" (A). The statistics are as follows:

Set(all) = [5+, 4−]
Set(S+) = [3+, 2−], Set(S−) = [2+, 2−]
Set(A+) = [5+, 1−], Set(A−) = [0+, 3−]

Suppose we choose the feature that gains the most information at the first split. Which feature should we choose, and how much is the information gain? You may use the following approximations:

N:       3     5     7
log2 N:  1.58  2.32  2.81
Solution. The two candidate decision stumps split [5+,4−] as follows: S gives [3+,2−] / [2+,2−] (the latter with H = 1); A gives [5+,1−] / [0+,3−] (the latter with H = 0).
H(all) = H[5+, 4−] = H(5/9, 4/9)
 = (5/9)·log2(9/5) + (4/9)·log2(9/4)
 = log2 9 − (5/9)·log2 5 − (4/9)·log2 4
 = 2·log2 3 − (5/9)·log2 5 − 8/9 ≈ 0.991076

H(all | S) = (5/9)·H[3+, 2−] + (4/9)·H[2+, 2−] = (5/9)·0.970951 + (4/9)·1 = 0.983861
H(all | A) = (6/9)·H[5+, 1−] + (3/9)·H[0+, 3−] = (6/9)·0.650022 + (3/9)·0 = 0.433348

IG(all, S) = H(all) − H(all | S) = 0.007215
IG(all, A) = H(all) − H(all | A) = 0.557728

IG(all, S) < IG(all, A) ⇔ H(all | S) > H(all | A), so we should split on A; the information gain is IG(all, A) ≈ 0.558.
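These figures are easy to verify programmatically. Below is a minimal Python sketch (the helper names entropy and info_gain are our own, not part of any particular library) that recomputes the two information gains directly from the class counts:

```python
import math

def entropy(pos, neg):
    """Entropy (in bits) of a node with `pos` positive and `neg` negative examples."""
    total = pos + neg
    if total == 0 or pos == 0 or neg == 0:
        return 0.0
    p = pos / total
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def info_gain(parent, children):
    """Information gain of a split; `parent` and each child are (pos, neg) count pairs."""
    n = sum(parent)
    cond = sum((p + q) / n * entropy(p, q) for p, q in children)
    return entropy(*parent) - cond

# Exercise data: Set(all) = [5+,4-], split on S and on A
print(info_gain((5, 4), [(3, 2), (2, 2)]))  # IG(all, S) ~ 0.0072
print(info_gain((5, 4), [(5, 1), (0, 3)]))  # IG(all, A) ~ 0.5577
```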
CMU, 2002(?) spring, Andrew Moore, midterm example questions, pr. 2
You are stranded on a deserted island. Mushrooms of various types grow widely all over the island; some of them have been determined as poisonous and others as not (determined by your former companions' trial and error). You are the only one remaining on the island. You have the following data to consider:
Example NotHeavy Smelly Spotted Smooth Edible A 1 1 B 1 1 1 C 1 1 1 D 1 E 1 1 1 F 1 1 1 G 1 1 H 1 U 1 1 1 ? V 1 1 1 ? W 1 1 ?
You know whether or not mushrooms A through H are poisonous, but you do not know about U through W.

For questions a-d, consider only mushrooms A through H.

a. What is the entropy of Edible?
b. Which attribute should you choose as the root of a decision tree?
Hint: You can figure this out by looking at the data, without explicitly computing the information gain of all four attributes.
c. What is the information gain of the attribute you chose in the previous question?
d. What is the information gain of the attribute NotHeavy?
e. Build the full ID3 decision tree and use it to classify mushrooms U, V and W as poisonous or not poisonous.
f. If food became scarce, should you consider trying U, V and W? Which one(s) and why? Or, if none of them, then why not?
a. H_Edible = H[3+, 5−] = −(3/8)·log2(3/8) − (5/8)·log2(5/8)
 = (3/8)·log2(8/3) + (5/8)·log2(8/5)
 = (3/8)·3 − (3/8)·log2 3 + (5/8)·3 − (5/8)·log2 5
 = 3 − (3/8)·log2 3 − (5/8)·log2 5 ≈ 0.9544
b. The four candidate decision stumps at the root split [3+,5−] as follows:
NotHeavy: [1+,2−] / [2+,3−]
Smelly:   [2+,3−] / [1+,2−]
Spotted:  [2+,3−] / [1+,2−]
Smooth:   [2+,2−] (Node 1) / [1+,3−] (Node 2)
NotHeavy, Smelly and Spotted produce the same pair of class distributions, so their information gains are equal; only Smooth needs to be compared against them, and (as computed at parts c and d) Smooth wins. We choose Smooth.
c. H_0/Smooth = (4/8)·H[2+, 2−] + (4/8)·H[1+, 3−]
 = (1/2)·1 + (1/2)·((1/4)·log2 4 + (3/4)·log2(4/3))
 = 1/2 + (1/2)·((1/4)·2 + (3/4)·2 − (3/4)·log2 3)
 = 1/2 + 1 − (3/8)·log2 3
 = 3/2 − (3/8)·log2 3 ≈ 0.9056

IG_0/Smooth = H_Edible − H_0/Smooth = 0.9544 − 0.9056 = 0.0488
d. H_0/NotHeavy = (3/8)·H[1+, 2−] + (5/8)·H[2+, 3−]
 = (3/8)·((1/3)·log2 3 + (2/3)·log2(3/2)) + (5/8)·((2/5)·log2(5/2) + (3/5)·log2(5/3))
 = (3/8)·(log2 3 − 2/3) + (5/8)·(log2 5 − 2/5 − (3/5)·log2 3)
 = (3/8)·log2 3 − 2/8 + (5/8)·log2 5 − 2/8 − (3/8)·log2 3
 = (5/8)·log2 5 − 4/8 ≈ 0.9512

⇒ IG_0/NotHeavy = H_Edible − H_0/NotHeavy = 0.9544 − 0.9512 = 0.0032,
and, by the identical class distributions noted at part b,
IG_0/NotHeavy = IG_0/Smelly = IG_0/Spotted = 0.0032 < IG_0/Smooth = 0.0488.
Important remark: Instead of actually computing these information gains, in order to determine the "best" attribute it would have been enough to compare the values of the mean conditional entropies H_0/Smooth and H_0/NotHeavy:
IG_0/Smooth > IG_0/NotHeavy ⇔ H_0/Smooth < H_0/NotHeavy ⇔ 3/2 − (3/8)·log2 3 < (5/8)·log2 5 − 1/2 ⇔ 12 − 3·log2 3 < 5·log2 5 − 4 ⇔ 16 < 5·log2 5 + 3·log2 3 ⇔ 16 < 11.6096 + 4.7548 (true).
Node 1: Smooth = 0
Splitting the [2+,2−] examples at this node: Smelly gives the pure leaves [2+,0−] / [0+,2−]; NotHeavy gives [0+,1−] / [2+,1−]; Spotted gives [1+,1−] / [1+,1−]. So Smelly is chosen.
Node 2: Smooth = 1
Splitting the [1+,3−] examples at this node: Smelly gives the pure leaves [1+,0−] / [0+,3−]; NotHeavy gives [1+,1−] (Node 3) / [0+,2−]; Spotted gives [0+,1−] / [1+,2−]. So Smelly is chosen again.
The resulting ID3 tree
The tree splits on Smooth at the root ([3+,5−] → [2+,2−] and [1+,3−]), then on Smelly in each branch, yielding the pure leaves [2+,0−], [0+,2−], [0+,3−] and [1+,0−].
IF (Smooth = 0 AND Smelly = 0) OR (Smooth = 1 AND Smelly = 1) THEN Edible; ELSE ¬Edible;
Classification of test instances:
U: Smooth = 1, Smelly = 1 ⇒ Edible = 1
V: Smooth = 1, Smelly = 1 ⇒ Edible = 1
W: Smooth = 0, Smelly = 1 ⇒ Edible = 0
CMU, 2003 fall, T. Mitchell, A. Moore, midterm, pr. 9.a
Consider the binary input attributes A, B, C, the output attribute Y, and the following training examples:

A B C Y
0 0 0 0
0 1 1 1
1 0 1 1
1 1 1 0

a. Draw the decision tree computed by the ID3 algorithm. Is this decision tree consistent with the training data?
Answer
Node 0 (the root): the candidate decision stumps split [2+,2−] as follows: A gives [1+,1−] / [1+,1−]; B gives [1+,1−] / [1+,1−]; C gives [0+,1−] / [2+,1−] (Node 1).
One can see immediately that the first two decision stumps have IG = 0, while the third has IG > 0. Therefore, in node 0 (the root) we place the attribute C.
Node 1: We have to classify the instances with C = 1, so the choice is between attributes A and B. Both stumps split [2+,1−] into [1+,1−] (Node 2) and [1+,0−].
The two mean conditional entropies are therefore equal: H_1/A = H_1/B = (2/3)·H[1+,1−] + (1/3)·H[1+,0−]. So we may choose either of the two attributes; to fix ideas, we choose A.
Node 2: At this node only attribute B is still available, so we place it here. The complete ID3 tree is: C at the root ([2+,2−]; C = 0 → leaf [0+,1−]), then A for C = 1 ([2+,1−]; A = 0 → leaf [1+,0−]), then B for A = 1 ([1+,1−]; B = 0 → leaf [1+,0−], B = 1 → leaf [0+,1−]).
By construction, the ID3 algorithm is consistent with the training data whenever the data themselves are consistent (i.e., non-contradictory). In our case, one can immediately check that the training data are consistent.
b. Is there a decision tree of smaller depth (than that of the ID3 tree) consistent with the data above? If so, what (logical) concept does this tree represent?

Answer: From the data one can see that the output attribute Y actually represents the logical function A xor B. Representing this function as a decision tree (A at the root, a B test in each branch, splitting [2+,2−] into the pure leaves [0+,1−], [1+,0−], [1+,0−], [0+,1−]), we obtain a tree with one level less than the tree built by the ID3 algorithm. Therefore, the ID3 tree is not optimal with respect to the number of levels.

This is a consequence of the "greedy" character of the ID3 algorithm, due to the fact that at each iteration we choose the "best" attribute with respect to the information gain criterion. It is well known that greedy algorithms do not guarantee reaching the global optimum.
As of September 2012, 800 extrasolar planets have been identified in our galaxy. Super-secret surveying spaceships sent to all these planets have established whether they are habitable for humans or not, but sending a spaceship to each planet is expensive. In this problem, you will come up with decision trees to predict if a planet is habitable based only on features observable using telescopes.

a. In the nearby table you are given the data from all 800 planets surveyed so far. The features observed by telescope are Size ("Big" or "Small") and Orbit ("Near" or "Far"). Each row indicates the values of the features and habitability, and how many times that set of values was observed. So, for example, there were 20 "Big" planets "Near" their star that were habitable.

Size   Orbit  Habitable  Count
Big    Near   Yes        20
Big    Far    Yes        170
Small  Near   Yes        139
Small  Far    Yes        45
Big    Near   No         130
Big    Far    No         30
Small  Near   No         11
Small  Far    No         255

Derive and draw the decision tree learned by ID3 on this data (use the maximum information gain criterion for splits, don't do any pruning). Make sure to clearly mark at each node what attribute you are splitting on, which value corresponds to which branch, and [the numbers of habitable / non-habitable] planets in the training data that belong to that node.
Root: [374+, 426−], with entropy H(374/800). Splitting on Size: Big → [190+,160−] (H(19/35)), Small → [184+,266−] (H(92/225)). Splitting on Orbit: Near → [159+,141−] (H(47/100)), Far → [215+,285−] (H(43/100)).
H(Habitable | Size) = (35/80)·H(19/35) + (45/80)·H(92/225) = (35/80)·0.9946 + (45/80)·0.9759 = 0.9841
H(Habitable | Orbit) = (3/8)·H(47/100) + (5/8)·H(43/100) = (3/8)·0.9974 + (5/8)·0.9858 = 0.9901
IG(Habitable; Size) = 0.0128
IG(Habitable; Orbit) = 0.0067
Since IG(Habitable; Size) > IG(Habitable; Orbit), Size is chosen at the root.
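A quick way to double-check these values is to aggregate the Count column and compute the conditional entropies directly; a small Python sketch (all names are our own) follows:

```python
import math

def H(p):
    return 0.0 if p in (0, 1) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# (Size, Orbit, Habitable) -> Count, copied from the table above
counts = {("Big","Near","Yes"): 20, ("Big","Far","Yes"): 170, ("Small","Near","Yes"): 139,
          ("Small","Far","Yes"): 45, ("Big","Near","No"): 130, ("Big","Far","No"): 30,
          ("Small","Near","No"): 11, ("Small","Far","No"): 255}
N = sum(counts.values())

def cond_entropy(attr_index):
    """Mean conditional entropy of Habitable given the attribute (0 = Size, 1 = Orbit)."""
    result = 0.0
    for v in {k[attr_index] for k in counts}:
        n_v = sum(c for k, c in counts.items() if k[attr_index] == v)
        pos = sum(c for k, c in counts.items() if k[attr_index] == v and k[2] == "Yes")
        result += n_v / N * H(pos / n_v)
    return result

h_root = H(374 / 800)
print(h_root - cond_entropy(0))   # IG(Habitable; Size)  ~ 0.0128
print(h_root - cond_entropy(1))   # IG(Habitable; Orbit) ~ 0.0067
```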
The final decision tree
Size at the root: Big → Orbit (Near: [20+,130−] ⇒ not habitable; Far: [170+,30−] ⇒ habitable); Small → Orbit (Near: [139+,11−] ⇒ habitable; Far: [45+,255−] ⇒ not habitable).
b. For just 9 of the planets, a third feature, Temperature (in Kelvin degrees), has been measured, as shown in the nearby table. Redo all the steps from part a on this data using all three features. For the Temperature feature, in each iteration you must maximize over all possible binary threshold splits (of the form Temperature ≤ s, for example).

Size   Orbit  Temperature  Habitable
Big    Far    205          No
Big    Near   205          No
Big    Near   260          Yes
Big    Near   380          Yes
Small  Far    205          No
Small  Far    260          Yes
Small  Near   260          Yes
Small  Near   380          No
Small  Near   380          No

According to your decision tree, would a planet with the features (Big, Near, 280) be predicted to be habitable or not habitable?
Hint: You might need to use the following values of the entropy function for a Bernoulli variable of parameter p: H(1/3) = 0.9182, H(2/5) = 0.9709, H(92/225) = 0.9759, H(43/100) = 0.9858, H(16/35) = 0.9946, H(47/100) = 0.9974.
Answer: Binary threshold splits for the continuous attribute Temperature:
the distinct values 205, 260 and 380 yield the candidate thresholds 232.5 and 320.
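Such candidate thresholds can be generated mechanically as midpoints between consecutive distinct values of the attribute; a minimal sketch (keeping only midpoints between oppositely-labelled neighbours, as discussed for ID3, would be an optional refinement):

```python
def candidate_thresholds(values):
    """Midpoints between consecutive distinct sorted values of a continuous attribute."""
    v = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(v, v[1:])]

print(candidate_thresholds([205, 205, 260, 380, 205, 260, 260, 380, 380]))  # [232.5, 320.0]
```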
Answer: Level 1

Candidate splits of [4+,5−]: Size (B → [2+,2−], H = 1; S → [2+,3−], H(2/5)); Orbit (N → [3+,3−], H = 1; F → [1+,2−], H(1/3)); Temp ≤ 232.5 (Yes → [0+,3−], H = 0; No → [4+,2−], H(1/3)); Temp ≤ 320 (Yes → [3+,3−], H = 1; No → [1+,2−], H(1/3)).

H(Habitable | Size) = (4/9)·1 + (5/9)·H(2/5) = 4/9 + (5/9)·0.9709 = 0.9838
H(Habitable | Temp ≤ 232.5) = (3/9)·0 + (6/9)·H(1/3) = (2/3)·0.9182 = 0.6121
IG(Habitable; Size) = H(4/9) − 0.9838 = 0.9911 − 0.9838 = 0.0072
IG(Habitable; Temp ≤ 232.5) = 0.9911 − 0.6121 = 0.3788

So the split Temp ≤ 232.5 is chosen at the root.
Answer: Level 2

At the node Temp > 232.5 ([4+,2−]), the candidate splits are: Size (B → [2+,0−], H = 0; S → [2+,2−], H = 1); Orbit (F → [1+,0−], H = 0; N → [3+,2−], H(2/5)); Temp ≤ 320 (Yes → [3+,0−], H = 0; No → [1+,2−], H(1/3)). The split Temp ≤ 320 gives the smallest mean conditional entropy, so it is chosen.

Note: In the accompanying figure, the plain lines indicate that both the specific conditional entropies and their coefficients (weights) in the mean conditional entropies satisfy the indicated relationship (for example, H(2/5) > H(1/3) and 5/6 > 3/6). The dotted lines indicate that only the specific conditional entropies satisfy the indicated relationship.
The final decision tree: Temp ≤ 232.5 at the root ([4+,5−]; Yes → [0+,3−] ⇒ not habitable); for Temp > 232.5 ([4+,2−]), test Temp ≤ 320 (Yes → [3+,0−] ⇒ habitable); for Temp > 320 ([1+,2−]), test Size (B → [1+,0−] ⇒ habitable, S → [0+,2−] ⇒ not habitable).

According to this decision tree, would a planet with the features (Big, Near, 280) be predicted to be habitable or not habitable? Answer: habitable.
Suppose we are learning a classifier with binary output values Y = 0 and Y = 1. There is one real-valued input X. The training data is given in the nearby table. Assume that we learn a decision tree on this data, and that when the decision tree splits on the real-valued attribute X, it puts the split threshold halfway between the values that surround the split. For example, using information gain as the splitting criterion, the decision tree would initially choose to split at X = 5, which is halfway between the X = 4 and X = 6 datapoints.

X: 1, 2, 3, 4, 8.5 → Y = 0;   X: 6, 7, 8, 9, 10 → Y = 1

Let Algorithm DT2 be the method of learning a decision tree with only two leaf nodes (i.e., only one split). Let Algorithm DT⋆ be the method of learning a decision tree fully, with no pruning.

What would be the LOOCV (leave-one-out cross-validation) error obtained by running DT2 and respectively DT⋆ on our data?
Thresholds for the fully grown (ID3 / DT⋆) tree: 5 (between 4 and 6), 8.25 (between 8 and 8.5) and 8.75 (between 8.5 and 9).

ID3 tree: root X < 5 ([5−,5+]; yes → [4−,0+] ⇒ 0); for X ≥ 5 ([1−,5+]), test X < 8.25 (yes → [0−,3+] ⇒ 1); for X ≥ 8.25 ([1−,2+]), test X < 8.75 (yes → [1−,0+] ⇒ 0, no → [0−,2+] ⇒ 1).
ID3: IG computations

Level 0 (root, [5−,5+]): X < 5 splits into [4−,0+] / [1−,5+]; X < 8.25 into [4−,3+] / [1−,2+]; X < 8.75 into [5−,3+] / [0−,2+]. X < 5 gives the largest information gain and is chosen.

Level 1 (node [1−,5+]): X < 8.25 splits it into [0−,3+] / [1−,2+] with IG = 0.191; X < 8.75 splits it into [1−,3+] / [0−,2+] with IG = 0.109. X < 8.25 is chosen, and X < 8.75 follows at the next level.

The resulting decision "surfaces" are the thresholds 5, 8.25 and 8.75, with predictions 0 / 1 / 0 / 1 on the four intervals they delimit.
ID3 (DT⋆), LOOCV decision surfaces. Retraining the full tree with one point left out gives, in turn, the following thresholds:
X = 1, 2 or 3 left out: 5, 8.25, 8.75 (point classified correctly);
X = 4: 4.5, 8.25, 8.75 (correct);
X = 6: 5.5, 8.25, 8.75 (correct);
X = 7: 5, 8.25, 8.75 (correct);
X = 8: 5, 7.75, 8.75 (misclassified);
X = 8.5: 5 only, since the remaining data are separated by a single split (misclassified);
X = 9: 5, 8.25, 9.25 (misclassified);
X = 10: 5, 8.25, 8.75 (correct).
LOOCV error: 3/10.
DT2 (a single split): the chosen split is X < 5, giving the leaves [4−,0+] ⇒ 0 and [1−,5+] ⇒ 1; the only decision surface is X = 5.
DT2, LOOCV IG computations

Case 1 (X = 1, 2, 3 or 4 left out): among the candidate single splits of the remaining [4−,5+] data, X < 5 (respectively X < 4.5 when X = 4 is removed), giving [3−,0+] / [1−,5+], has the largest IG, beating X < 8.25 and X < 8.75. The left-out point is classified correctly.

Case 2 (X = 6, 7, 8, 9 or 10 left out): the winning single split is again the leftmost one (X < 5, respectively X < 5.5 when X = 6 is removed), giving [4−,0+] / [1−,4+]. The left-out point is classified correctly.

Case 3 (X = 8.5 left out): X < 5 separates the remaining data perfectly ([4−,0+] / [0−,5+]); the left-out point 8.5 is predicted 1 while its true label is 0, so it is misclassified.

LOOCV error: 1/10.
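The two LOOCV figures can be checked approximately with scikit-learn; this is a sketch under the assumption that DecisionTreeClassifier with the entropy criterion mimics ID3 closely enough here (it also places numeric thresholds at midpoints, though tie-breaking may differ), so one expects the 3/10 and 1/10 reported above:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut

X = np.array([1, 2, 3, 4, 6, 7, 8, 8.5, 9, 10]).reshape(-1, 1)
y = np.array([0, 0, 0, 0, 1, 1, 1, 0,   1, 1])

for name, depth in [("DT*", None), ("DT2", 1)]:
    errors = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        tree = DecisionTreeClassifier(criterion="entropy", max_depth=depth)
        tree.fit(X[train_idx], y[train_idx])
        errors += int(tree.predict(X[test_idx])[0] != y[test_idx][0])
    print(name, "LOOCV error:", errors, "/", len(X))
```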
Liviu Ciortuz, 2017
Consider the training dataset in the nearby figure. X1 and X2 are considered continuous attributes. Apply the ID3 algorithm on this dataset. Draw the resulting decision tree. Make a graphical representation of the decision areas and decision boundaries determined by ID3.
(Figure: the nine labelled training points in the (X1, X2) plane.)
Level 1: The candidate splits of the root set [4+,5−] use the thresholds X1 < 5/2, X1 < 9/2, X2 < 3/2, X2 < 5/2 and X2 < 7/2. The mean conditional entropies computed in the figure are (2/3)·H(1/3), (7/9)·H(2/7) and (5/9)·H(2/5) + (4/9)·H(1/4), with corresponding information gains 0.378, 0.319 and 0.091; the remaining candidates are eliminated by direct comparison of entropies. The best split is X2 < 7/2 (IG = 0.378), whose X2 ≥ 7/2 branch, [0+,3−], is pure; it is placed at the root.
Level 2: At the node X2 < 7/2 ([4+,2−]), the candidate splits are X1 < 5/2, X1 < 4, X2 < 3/2 and X2 < 5/2. The splits X1 < 5/2 and X1 < 4 both attain the smallest mean conditional entropy, 2/3 (IG = 0.251), each separating off a pure leaf [2+,0−] from [2+,2−]; the other candidates give 1/3 + (2/3)·H(1/4) (IG = 0.04) and (5/6)·H(2/5) (IG = 0.109). We choose X1 < 5/2.
Notes:
1. Split thresholds for continuous attributes must be recomputed at each new iteration, because they may change. (For instance, here above, 4 replaces 4.5 as a threshold for X1.)
2. At this node we could have chosen either X1 < 5/2 or X1 < 4.
3. What must be compared are the weighted mean conditional entropies, not the un-weighted specific entropies: H[2+, 2−] > H[3+, 2−], but (4/6)·H[2+, 2−] < (5/6)·H[3+, 2−].
The final decision tree: X2 < 7/2 at the root ([4+,5−]; X2 ≥ 7/2 → [0+,3−] ⇒ −); for X2 < 7/2 ([4+,2−]), test X1 < 5/2 (yes → [2+,0−] ⇒ +); for X1 ≥ 5/2 ([2+,2−]), test X1 < 4 (yes → [2+,0−] ⇒ +, no → [0+,2−] ⇒ −).

Decision areas: the region X2 ≥ 7/2 is labelled −; the strip X2 < 7/2 is split by the thresholds X1 = 5/2 and X1 = 4 into the + and − regions corresponding to the leaves above.
CMU, 2006 spring, Carlos Guestrin, midterm, pr. 4 [adapted by Liviu Ciortuz]
Starting from the data in the following table, the ID3 algorithm builds the decision tree shown nearby: X at the root; one value of X leads directly to a leaf, the other leads to a test on V, and each branch of V then tests W.

V W X Y 1 1 1 1 1 1 1 1 1 1
a. One idea for pruning such a decision tree would be to start at the root, and prune splits for which the information gain (or some other criterion) is less than some small ε. This is called top-down pruning. What is the decision tree returned for ε = 0.0001? What is the training set error for this tree?
We will first augment the given decision tree with information regarding the data partitions (i.e., the numbers of positive and negative instances) assigned to each test node during the application of the ID3 algorithm: the root X receives [3+;2−] and splits it into [2+;2−] (the branch tested further with V, then W) and the pure leaf [1+;0−]; V splits [2+;2−] into [1+;1−] and [1+;1−], and each W node splits [1+;1−] into [1+;0−] and [0+;1−].

The information gain yielded by the attribute X in the root node is:
H[3+;2−] − (1/5)·0 − (4/5)·1 = 0.971 − 0.8 = 0.171 > ε.
Therefore, this node will not be eliminated from the tree.

The information gain for the attribute V (in the left-hand child of the root node) is:
H[2+;2−] − (1/2)·1 − (1/2)·1 = 1 − 1 = 0 < ε.

So the whole left subtree will be cut off and replaced by a decision node (a leaf), as shown nearby; the pruned tree tests only X. The training error produced by this tree is 2/5.
b. Another option would be to start at the leaves, and prune subtrees for which the information gain (or some other criterion) of a split is less than some small ε. In this method, no ancestors of children with high information gain will get pruned. This is called bottom-up pruning. What is the tree returned for ε = 0.0001? What is the training set error for this tree?
The information gain of V is IG(Y; V) = 0. A step later, the information gain of W (for either one of the descendant nodes of V) is IG(Y; W) = 1. So bottom-up pruning won't delete any nodes and the tree [given in the problem statement] remains unchanged. The training error is 0.
c. Discuss when you would want to choose bottom-up pruning over top-down pruning, and vice versa.

Top-down pruning is computationally cheaper: when building the tree we can determine when to stop (no need for real pruning). But, as we saw, top-down pruning prunes too much. On the other hand, bottom-up pruning is more expensive, since we have to first build a full tree (which can be exponentially large) and then apply pruning. The second problem with bottom-up pruning is that superfluous attributes may fool it (see CMU, 2009 fall, Carlos Guestrin, HW1, pr. 2.4). The third problem with it is that in the lower levels of the tree the number of examples in the subtree gets smaller, so information gain might be an inappropriate criterion for pruning; one would usually use a statistical test instead.
CMU, 2010 fall, Ziv Bar-Joseph, HW2, pr. 2.1
In class, we learned a decision tree pruning algorithm that iteratively visited subtrees and used a validation dataset to decide whether to remove the subtree. However, sometimes it is desirable to prune the tree after training on all of the available data. One such approach is based on statistical hypothesis testing. After learning the tree, we visit each internal node and test whether the attribute split at that node is actually uncorrelated with the class labels. We hypothesize that the attribute is independent and then use Pearson's chi-square test to generate a test statistic that may provide evidence that we should reject this "null" hypothesis. If we fail to reject the hypothesis, we prune the subtree at that node.
a. At each internal node, consider the contingency table of the training examples that pass through that node on their paths to the leaves. The table will have the c class labels associated with the columns and the r values of the split attribute associated with the rows. Each entry O_{i,j} in the table is the number of times we observe a training sample with that attribute value and label, where i is the row index that corresponds to an attribute value and j is the column index that corresponds to a class label. In order to calculate the chi-square test statistic, we need a similar table of expected counts. The expected count is the number of observations we would expect if the class and the attribute were independent. Derive a formula for each expected count E_{i,j} in the table.
Hint: What is the probability that a training example that passes through the node has a particular label? Using this probability and the independence assumption, what can you say about how many examples with a specific attribute value are expected to also have that class label?
b. Pearson's chi-square test uses the test statistic

χ² = Σ_{i=1}^{r} Σ_{j=1}^{c} (O_{i,j} − E_{i,j})² / E_{i,j},

with (r − 1)·(c − 1) degrees of freedom. You can plug the test statistic and the degrees of freedom into a software package (use 1-chi2cdf(x,df) in MATLAB or CHIDIST(x,df) in Excel) or use a chi-square distribution table (https://en.m.wikipedia.org/wiki/Chi-square_distribution) to obtain the p-value. If the p-value is small enough, we reject the null hypothesis that the attribute and class are independent and say the split is statistically significant.

The decision tree given on the next slide was built from the data in the table nearby. For each of the 3 internal nodes in the decision tree, show the p-value for the split and state whether it is statistically significant. How many internal nodes will the tree have if we prune splits with p ≥ 0.05?
Input: a training set with binary attributes X1, X2, X3, X4 and 12 examples (4 in class 0 and 8 in class 1; the per-node counts used below come from this table), on which ID3 produces the following tree: X4 at the root ([4−,8+]); X4 = 1 → leaf [0−,6+]; X4 = 0 → [4−,2+], test X1; X1 = 0 → leaf [3−,0+]; X1 = 1 → [1−,2+], test X2; X2 = 0 → leaf [0−,2+]; X2 = 1 → leaf [1−,0+].
Answer: While traversing the ID3 tree (usually in a bottom-up manner), remove the nodes for which there is not enough ("significant") statistical evidence of a dependence between the values of the input attribute tested in that node and the values of the class attribute, as supported by the set of instances assigned to that node.
O_X4:               Class = 0  Class = 1
 X4 = 0                 4          2
 X4 = 1                 0          6       (N = 12)
⇒ P(X4 = 0) = 6/12 = 1/2, P(X4 = 1) = 1/2; P(Class = 0) = 4/12 = 1/3, P(Class = 1) = 2/3

O_X1 | X4=0:        Class = 0  Class = 1
 X1 = 0                 3          0
 X1 = 1                 1          2       (N = 6)
⇒ P(X1 = 0 | X4 = 0) = 3/6 = 1/2, P(X1 = 1 | X4 = 0) = 1/2; P(Class = 0 | X4 = 0) = 4/6 = 2/3, P(Class = 1 | X4 = 0) = 1/3

O_X2 | X4=0, X1=1:  Class = 0  Class = 1
 X2 = 0                 0          2
 X2 = 1                 1          0       (N = 3)
⇒ P(X2 = 0 | X4 = 0, X1 = 1) = 2/3, P(X2 = 1 | X4 = 0, X1 = 1) = 1/3; P(Class = 0 | X4 = 0, X1 = 1) = 1/3, P(Class = 1 | X4 = 0, X1 = 1) = 2/3
Answer to part a: Under the independence (null) hypothesis,

E_{i,j} = N · P(attribute value i) · P(class j) = (Σ_{k=1}^{c} O_{i,k}) · (Σ_{k=1}^{r} O_{k,j}) / N,

i.e., the product of the row total and the column total, divided by the total number N of examples at that node.
E_X4:               Class = 0  Class = 1
 X4 = 0                 2          4
 X4 = 1                 2          4

E_X1 | X4=0:        Class = 0  Class = 1
 X1 = 0                 2          1
 X1 = 1                 2          1

E_X2 | X4=0, X1=1:  Class = 0  Class = 1
 X2 = 0                2/3        4/3
 X2 = 1                1/3        2/3

For example, E_X4(X4 = 0, Class = 0): N = 12, P(X4 = 0) = 1/2 and P(Class = 0) = 1/3, so N · P(X4 = 0, Class = 0) = N · P(X4 = 0) · P(Class = 0) = 12 · (1/2) · (1/3) = 2.
χ² = Σ_{i=1}^{r} Σ_{j=1}^{c} (O_{i,j} − E_{i,j})² / E_{i,j}

χ²_X4 = (4−2)²/2 + (0−2)²/2 + (2−4)²/4 + (6−4)²/4 = 2 + 2 + 1 + 1 = 6
χ²_{X1 | X4=0} = (3−2)²/2 + (1−2)²/2 + (0−1)²/1 + (2−1)²/1 = 3
χ²_{X2 | X4=0, X1=1} = (0−2/3)²/(2/3) + (2−4/3)²/(4/3) + (1−1/3)²/(1/3) + (0−2/3)²/(2/3) = (4/9)·(27/4) = 3

p-values (1 degree of freedom each): 0.0143, 0.0833 and 0.0833, respectively. Only the first of these p-values is below the 0.05 significance level, therefore the root node (X4) cannot be pruned, while the splits on X1 and X2 are pruned.
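The same three statistics and p-values can be obtained with scipy (a sketch; scipy.stats.chi2 is the chi-square distribution, whose survival function sf gives the p-value):

```python
from scipy.stats import chi2

def chi_square_p(observed, expected, df=1):
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df)

print(chi_square_p([4, 2, 0, 6], [2, 4, 2, 4]))          # X4:              (6.0, ~0.0143)
print(chi_square_p([3, 0, 1, 2], [2, 1, 2, 1]))          # X1 | X4=0:       (3.0, ~0.0833)
print(chi_square_p([0, 2, 1, 0], [2/3, 4/3, 1/3, 2/3]))  # X2 | X4=0, X1=1: (3.0, ~0.0833)
```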
(Plot: the p-value as a function of Pearson's cumulative test statistic χ², for k = 1, 2, 3, 4, 6 and 9 degrees of freedom.)
After pruning, only the root test on X4 remains: X4 = 0 ⇒ class 0, X4 = 1 ⇒ class 1, so the pruned tree has a single internal node.
CMU, 2015 fall, Ziv Bar-Joseph, Eric Xing, HW4, pr. 2.1-5 CMU, 2009 fall, Carlos Guestrin, HW2, pr. 3.1 CMU, 2009 fall, Eric Xing, HW3, pr. 4.2.2
Consider m training examples S = {(x1, y1), . . . , (xm, ym)}, where x ∈ X and y ∈ {−1, 1}. Suppose we have a weak learning algorithm A which produces a hypothesis h : X → {−1, 1} given any distribution D of examples. AdaBoost is an iterative algorithm which works as follows:

− initialize D1(i) = 1/m, for i = 1, . . . , m;
− for t = 1, . . . , T:
  – run the weak learning algorithm A on the distribution Dt and produce the hypothesis ht;
    Note: Since A is a weak learning algorithm, the produced hypothesis ht at round t is only slightly better than random guessing, say, by a margin γt: εt = err_Dt(ht) = Pr_{x∼Dt}[y ≠ ht(x)] = 1/2 − γt. [If at a certain iteration t < T the weak classifier A cannot produce a hypothesis better than random guessing (i.e., γt = 0), or if it produces a hypothesis for which εt = 0, then the AdaBoost algorithm should be stopped.]
  – update Dt+1(i) = (1/Zt) · Dt(i) · e^(−αt yi ht(xi)) for i = 1, . . . , m, where αt = (1/2)·ln((1 − εt)/εt) and Zt is the normalizer;
− output the final hypothesis H(x) = sign(Σ_{t=1}^{T} αt ht(x)), as a weighted majority vote.
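To make the procedure concrete, here is a minimal NumPy sketch of this pseudo-code, using exhaustively-searched decision stumps as the weak learner A (all function names are our own; this is an illustrative sketch, not an optimized implementation):

```python
import numpy as np

def best_stump(X, y, D):
    """Pick the decision stump (feature, threshold, sign) with the smallest
    weighted error under distribution D.  X: (m, d) reals, y in {-1, +1}."""
    best = None
    for j in range(X.shape[1]):
        for s in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = np.where(X[:, j] >= s, sign, -sign)
                err = np.sum(D[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, s, sign)
    err, j, s, sign = best
    return err, lambda Z: np.where(Z[:, j] >= s, sign, -sign)

def adaboost(X, y, T):
    m = len(y)
    D = np.full(m, 1.0 / m)              # D_1(i) = 1/m
    hyps, alphas = [], []
    for _ in range(T):
        eps, h = best_stump(X, y, D)
        if eps == 0 or eps >= 0.5:       # stopping conditions from the statement
            break
        alpha = 0.5 * np.log((1 - eps) / eps)
        D *= np.exp(-alpha * y * h(X))   # re-weight: misclassified points get heavier
        D /= D.sum()                     # Z_t normalization
        hyps.append(h); alphas.append(alpha)
    return lambda Z: np.sign(sum(a * h(Z) for a, h in zip(alphas, hyps)))
```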
We will prove that the training error err_S(H) of AdaBoost decreases at a very fast rate, and in certain cases it converges to 0.

The above formulation of the AdaBoost algorithm states no restriction on the hypothesis ht delivered by the weak classifier A at iteration t, except that εt < 1/2. However, in another formulation of the AdaBoost algorithm (in a more general setup; see for instance MIT, 2006 fall, Tommi Jaakkola, HW4, problem 3), it is requested / recommended that the hypothesis ht be chosen by (approximately) minimizing the weighted training error over a whole class of hypotheses. In this problem we will not be concerned with such a requirement, but we will comply with it, for instance, in problem CMU, 2015 fall, Ziv Bar-Joseph, Eric Xing, HW4, pr. 2.6, when showing how AdaBoost works in practice.
a. Prove that choosing αt = (1/2)·ln((1 − εt)/εt) implies err_{Dt+1}(ht) = 1/2.

b. Prove that D_{T+1}(i) = (1/m) · (Π_{t=1}^{T} Zt)^{−1} · e^{−yi f(xi)}, where f(x) = Σ_{t=1}^{T} αt ht(x).

c. Prove that err_S(H) ≤ Π_{t=1}^{T} Zt, where err_S(H) = (1/m) Σ_{i=1}^{m} 1{H(xi) ≠ yi} is the training error produced by AdaBoost.

d. We would like to minimize the training error, but it is hard to do so directly. We thus settle for greedily optimizing the upper bound on the training error found at part c. Observe that Z1, . . . , Zt−1 are determined by the first t − 1 iterations, and we cannot change them at iteration t. A greedy step we can take to minimize the training set error bound on round t is to minimize Zt. Prove that the value of αt that minimizes Zt (among all possible values for αt) is indeed αt = (1/2)·ln((1 − εt)/εt) (see the previous slide).

e. Prove that Π_{t=1}^{T} Zt ≤ e^{−2 Σ_{t=1}^{T} γt²}. Note that this quantity decreases with respect to T. Assume that there is a number γ > 0 such that γ ≤ γt for t = 1, . . . , T. (This is called empirical γ-weak learnability.) How many rounds are needed to achieve a training error of at most ε > 0? Please express the answer in big-O notation, T = O(·).
a. Splitting the sums according to whether ht misclassifies xi, we can write:

err_{Dt+1}(ht) = Σ_{i=1}^{m} Dt+1(i) · 1{yi ≠ ht(xi)} = Σ_{i: yi ≠ ht(xi)} (1/Zt)·Dt(i)·e^{αt} = (1/Zt)·e^{αt} Σ_{i: yi ≠ ht(xi)} Dt(i) = (1/Zt) · εt · e^{αt}

Zt = Σ_{i=1}^{m} Dt(i)·e^{−αt yi ht(xi)} = Σ_{i: yi = ht(xi)} Dt(i)·e^{−αt} + Σ_{i: yi ≠ ht(xi)} Dt(i)·e^{αt} = (1 − εt)·e^{−αt} + εt·e^{αt}   (1)

With αt = (1/2)·ln((1 − εt)/εt) we get e^{αt} = √((1 − εt)/εt) and e^{−αt} = √(εt/(1 − εt)), so

Zt = (1 − εt)·√(εt/(1 − εt)) + εt·√((1 − εt)/εt) = 2·√(εt(1 − εt))   (2)

⇒ err_{Dt+1}(ht) = (1/Zt) · εt · e^{αt} = (1/(2√(εt(1 − εt)))) · εt · √((1 − εt)/εt) = 1/2.

Note that (1 − εt)/εt > 0 because εt ∈ (0, 1/2).
b. Unfolding the recursive definition of the distribution:

D_{T+1}(i) = (1/Z_T)·D_T(i)·e^{−α_T yi h_T(xi)} = D_{T−1}(i)·(1/(Z_{T−1} Z_T))·e^{−α_{T−1} yi h_{T−1}(xi)}·e^{−α_T yi h_T(xi)} = . . . = D_1(i)·(1/Π_{t=1}^{T} Zt)·e^{−Σ_{t=1}^{T} αt yi ht(xi)} = (1/m)·(1/Π_{t=1}^{T} Zt)·e^{−yi f(xi)}.
c. Using the fact that the 0-1 loss is upper-bounded by the exponential loss, i.e. 1{x<0} ≤ e^{−x}, together with part b:

err_S(H) = (1/m) Σ_{i=1}^{m} 1{yi f(xi) < 0} ≤ (1/m) Σ_{i=1}^{m} e^{−yi f(xi)} = (1/m) Σ_{i=1}^{m} D_{T+1}(i)·m·Π_{t=1}^{T} Zt = (Σ_{i=1}^{m} D_{T+1}(i)) · Π_{t=1}^{T} Zt = Π_{t=1}^{T} Zt,

since the distribution D_{T+1} sums to 1.
d. Start from Zt = εt·e^{αt} + (1 − εt)·e^{−αt}, which has been proven at part a. We view the right-hand side as a function of αt, with εt (the error produced by ht, the hypothesis delivered by the weak classifier A at the current step) held fixed. Then we proceed as usual, computing the partial derivative w.r.t. αt:

∂Zt/∂αt = 0 ⇔ εt·e^{αt} − (1 − εt)·e^{−αt} = 0 ⇔ εt·(e^{αt})² = 1 − εt ⇔ e^{2αt} = (1 − εt)/εt ⇔ 2αt = ln((1 − εt)/εt) ⇔ αt = (1/2)·ln((1 − εt)/εt).

Note that (1 − εt)/εt > 1 (and therefore αt > 0) because εt ∈ (0, 1/2). One can also check immediately that this stationary point is indeed a minimum of εt·e^{αt} + (1 − εt)·e^{−αt}, since the derivative is positive to its right:

εt·e^{αt} − (1 − εt)·e^{−αt} > 0 ⇔ e^{2αt} − (1 − εt)/εt > 0 ⇔ αt > (1/2)·ln((1 − εt)/εt).
e. Using part a's expression Zt = 2·√(εt(1 − εt)) and the inequality 1 − x ≤ e^{−x} for all x ∈ R, we can write:

Π_{t=1}^{T} Zt = Π_{t=1}^{T} 2·√(εt(1 − εt)) = Π_{t=1}^{T} 2·√((1/2 − γt)(1/2 + γt)) = Π_{t=1}^{T} √(1 − 4γt²) ≤ Π_{t=1}^{T} √(e^{−4γt²}) = Π_{t=1}^{T} e^{−2γt²} = e^{−2 Σ_{t=1}^{T} γt²}.

Hence err_S(H) ≤ e^{−2 Σ_{t=1}^{T} γt²} ≤ e^{−2Tγ²}.

Therefore, err_S(H) < ε if −2Tγ² < ln ε ⇔ 2Tγ² > −ln ε ⇔ 2Tγ² > ln(1/ε) ⇔ T > (1/(2γ²))·ln(1/ε).

Hence we need T = O((1/γ²)·ln(1/ε)) rounds.

Note: It follows that err_S(H) → 0 as T → ∞.
CMU, 2015 fall, Ziv Bar-Joseph, Eric Xing, HW4, pr. 2.6
Consider the training dataset in the nearby figure. Apply the AdaBoost algorithm for T = 3 iterations, using decision stumps (axis-aligned separators) as the base (weak) hypotheses. Draw each selected hypothesis ht in this figure and fill in the table given below.
(For the pseudo-code of the AdaBoost algorithm, see CMU, 2015 fall, Ziv Bar-Joseph, Eric Xing, HW4, pr. 2.1-5. Please read the Important Remark that follows that pseudo-code!)
(Figure: the nine labelled training points x1, . . . , x9 in the (X1, X2) plane.)
t εt αt Dt(1) Dt(2) Dt(3) Dt(4) Dt(5) Dt(6) Dt(7) Dt(8) Dt(9) errS(H) 1 2 3
Note:
The goal of this exercise is to help you understand how AdaBoost works in practice. It is advisable that — after understanding this exercise — you would implement a program / function that calculates the weighted training error produced by a given decision stump, w.r.t. a certain probabilistic distribution (D) defined on the training dataset. Later on you will extend this program to a full-fledged implementation of AdaBoost.
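Following that advice, a minimal version of such a function could look like this (a sketch; the name, signature and parameters are our own choices):

```python
import numpy as np

def stump_weighted_error(X, y, D, feature, threshold, geq_is_positive=True):
    """Weighted training error, w.r.t. the distribution D over the training set,
    of the decision stump 'X[:, feature] >= threshold' (or its mirrored version).
    X: (m, d) array of reals, y: labels in {-1, +1}, D: probabilities summing to 1."""
    pred = np.where(X[:, feature] >= threshold, 1, -1)
    if not geq_is_positive:
        pred = -pred
    return float(np.sum(D[pred != y]))
```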
Unlike the graphical representation that we used until now for decision stumps (as trees of depth 1), here we will work with the following analytical representation: for a continuous attribute X taking values x ∈ R and for any threshold s ∈ R, we can define two decision stumps:

sign(x − s) = 1 if x ≥ s, −1 if x < s;   and   sign(s − x) = −1 if x ≥ s, 1 if x < s.

For convenience, in the sequel we will denote the first decision stump by X ≥ s and the second by X < s.

According to the Important Remark that follows the AdaBoost pseudo-code [see CMU, 2015 fall, Ziv Bar-Joseph, Eric Xing, HW4, pr. 2.1-5], at each iteration (t) the weak algorithm A selects the/a decision stump which, among all decision stumps, has the minimum weighted training error w.r.t. the current distribution (Dt) on the training data.
When applying the ID3 algorithm, for each continuous attribute X we used a threshold for each pair of examples (xi, yi), (xi+1, yi+1) with yi·yi+1 < 0, such that xi < xi+1 but no xj ∈ Val(X) for which xi < xj < xi+1. We will proceed similarly when applying AdaBoost with decision stumps and continuous attributes. [In the case of the ID3 algorithm, there is a theoretical result stating that there is no need to consider other thresholds for a continuous attribute X apart from those situated between pairs of successive values (xi < xi+1) having opposite labels (yi ≠ yi+1), because the information gain (IG) for the other thresholds (xi < xi+1, with yi = yi+1) is provably less than the maximal IG for X. LC: A similar result can be proven, which allows us to simplify the application of AdaBoost with decision stumps.]

Moreover, we will also consider a threshold from outside the interval of values taken by the attribute X in the training dataset. [The decision stumps corresponding to this "outside" threshold can be associated with the decision trees of depth 0 that we met in other problems.]
Iteration t = 1: At this stage (the first iteration of AdaBoost) the thresholds for the two continuous variables (X1 and X2) corresponding to the two coordinates of the training points are: 1/2, 5/2 and 9/2 for X1, and 1/2, 3/2, 5/2 and 7/2 for X2. One can easily see that we can get rid of the "outside" threshold 1/2 for X2, because the decision stumps corresponding to this threshold act in the same way as the decision stumps associated with the "outside" threshold 1/2 for X1.

The decision stumps corresponding to this iteration, together with their associated weighted training errors, are shown on the next slide. When filling in those tables, we have used the equalities err_Dt(X1 ≥ s) = 1 − err_Dt(X1 < s) and, similarly, err_Dt(X2 ≥ s) = 1 − err_Dt(X2 < s), for any threshold s and every iteration t = 1, 2, . . . These equalities are easy to prove.
s:                1/2    5/2    9/2
err_D1(X1 < s):   4/9    2/9    4/9 + 2/9 = 2/3
err_D1(X1 ≥ s):   5/9    7/9    1/3

s:                1/2    3/2                5/2                7/2
err_D1(X2 < s):   4/9    1/9 + 3/9 = 4/9    2/9 + 1/9 = 1/3    2/9
err_D1(X2 ≥ s):   5/9    5/9                2/3                7/9

It can be seen that the minimal weighted training error (ε1 = 2/9) is obtained for the decision stumps X1 < 5/2 and X2 < 7/2. Therefore we can choose h1 = sign(7/2 − X2) as the best hypothesis at iteration t = 1; the corresponding separator is the line X2 = 7/2. The h1 hypothesis wrongly classifies the instances x4 and x5. Then

γ1 = 1/2 − 2/9 = 5/18   and   α1 = (1/2)·ln((1 − ε1)/ε1) = ln √((7/9)/(2/9)) = ln √(7/2) ≈ 0.626.
Now the algorithm must compute a new distribution (D2) by altering the old one (D1), so that the next iteration concentrates more on the misclassified instances:

D2(i) = (1/Z1)·D1(i)·e^{−α1 yi h1(xi)} =
  (1/Z1)·(1/9)·√(2/7)   for i ∈ {1, 2, 3, 6, 7, 8, 9};
  (1/Z1)·(1/9)·√(7/2)   for i ∈ {4, 5}.

Remember that Z1 is a normalization factor for D2. So,

Z1 = (1/9)·(7·√(2/7) + 2·√(7/2)) = (2√14)/9 ≈ 0.8315.

Therefore,

D2(i) = (9/(2√14))·(1/9)·√(2/7) = 1/14   for i ∈ {1, 2, 3, 6, 7, 8, 9};
D2(i) = (9/(2√14))·(1/9)·√(7/2) = 1/4    for i ∈ {4, 5}.

(Figure: the separator h1, the line X2 = 7/2, drawn on the training data; x4 and x5 now carry weight 1/4 each, while the other seven points carry weight 1/14 each.)
If, instead of sign(7/2 − X2), we had chosen as h1 the decision stump sign(5/2 − X1), the new distribution D2 would have been different (although both decision stumps have the same, minimal, weighted training error, 2/9): x8 and x9 would have been allocated the weights 1/4, while x4 and x5 would have been allocated the weights 1/14. (Therefore, the output of AdaBoost may not be uniquely determined!)
Iteration t = 2:

s:                1/2     5/2     9/2
err_D2(X1 < s):   4/14    2/14    2/14 + 2/4 + 2/14 = 11/14
err_D2(X1 ≥ s):   10/14   12/14   3/14

s:                1/2     3/2                  5/2                 7/2
err_D2(X2 < s):   4/14    1/4 + 3/14 = 13/28   2/4 + 1/14 = 8/14   2/4 = 1/2
err_D2(X2 ≥ s):   10/14   15/28                6/14                1/2

Note: According to the theoretical result presented at part a of CMU, 2015 fall, Ziv Bar-Joseph, Eric Xing, HW4, pr. 2.1-5, computing the weighted error rate of the decision stump [corresponding to the test] X2 < 7/2 is now superfluous, because this decision stump was chosen as the optimal hypothesis at the previous iteration. (Nevertheless, we placed it in the table, for the sake of a thorough presentation.)
Now the best hypothesis is h2 = sign(5/2 − X1); the corresponding separator is the line X1 = 5/2.

ε2 = P_D2({x8, x9}) = 2/14 = 1/7 ≈ 0.143 ⇒ γ2 = 1/2 − 1/7 = 5/14
α2 = (1/2)·ln((1 − ε2)/ε2) = ln √((6/7)/(1/7)) = ln √6 ≈ 0.896

D3(i) = (1/Z2)·D2(i)·e^{−α2 yi h2(xi)} =
  (1/Z2)·D2(i)·(1/√6)   if h2(xi) = yi;
  (1/Z2)·D2(i)·√6       if h2(xi) ≠ yi;
which gives
  (1/Z2)·(1/14)·(1/√6)  for i ∈ {1, 2, 3, 6, 7};
  (1/Z2)·(1/4)·(1/√6)   for i ∈ {4, 5};
  (1/Z2)·(1/14)·√6      for i ∈ {8, 9}.
Z2 = 5·(1/14)·(1/√6) + 2·(1/4)·(1/√6) + 2·(1/14)·√6 = 5/(14√6) + 1/(2√6) + √6/7 = 24/(14√6) = (2√6)/7 ≈ 0.7

D3(i) = (7/(2√6))·(1/14)·(1/√6) = 1/24   for i ∈ {1, 2, 3, 6, 7};
D3(i) = (7/(2√6))·(1/4)·(1/√6) = 7/48    for i ∈ {4, 5};
D3(i) = (7/(2√6))·(1/14)·√6 = 1/4        for i ∈ {8, 9}.

(Figure: the separators h1 (X2 = 7/2) and h2 (X1 = 5/2); the updated weights are 1/24 on x1, x2, x3, x6, x7, 7/48 on x4, x5, and 1/4 on x8, x9.)
Iteration t = 3:

s:                1/2                  5/2    9/2
err_D3(X1 < s):   2/24 + 2/4 = 7/12    2/4    2/24 + 2·7/48 + 2·1/4 = 21/24
err_D3(X1 ≥ s):   5/12                 2/4    3/24 = 1/8

s:                1/2     3/2                         5/2                    7/2
err_D3(X2 < s):   7/12    7/48 + 2/24 + 1/4 = 23/48   2·7/48 + 1/24 = 1/3    2·7/48 = 7/24
err_D3(X2 ≥ s):   5/12    25/48                       2/3                    17/24
The new best hypothesis is h3 = sign(X1 − 9/2); the corresponding separator is the line X1 = 9/2.

ε3 = P_D3({x1, x2, x7}) = 2·(1/24) + 1/24 = 3/24 = 1/8
γ3 = 1/2 − 1/8 = 3/8
α3 = (1/2)·ln((1 − ε3)/ε3) = ln √((7/8)/(1/8)) = ln √7 ≈ 0.973

(Figure: the three separators h1 (X2 = 7/2), h2 (X1 = 5/2) and h3 (X1 = 9/2), drawn on the training data.)
Finally, after filling in our results in the given table, we get:

t   εt     αt        Dt(1)  Dt(2)  Dt(3)  Dt(4)  Dt(5)  Dt(6)  Dt(7)  Dt(8)  Dt(9)  errS(H)
1   2/9    ln√(7/2)  1/9    1/9    1/9    1/9    1/9    1/9    1/9    1/9    1/9    2/9
2   2/14   ln√6      1/14   1/14   1/14   1/4    1/4    1/14   1/14   1/14   1/14   2/9
3   1/8    ln√7      1/24   1/24   1/24   7/48   7/48   1/24   1/24   1/4    1/4    0

Note: The following table helps you understand how errS(H) was computed; remember that H(xi) = sign(Σ_{t=1}^{T} αt ht(xi)):

t      αt      x1   x2   x3   x4   x5   x6   x7   x8   x9
1      0.626   +1   +1   −1   +1   +1   −1   −1   +1   +1
2      0.896   +1   +1   −1   −1   −1   −1   −1   −1   −1
3      0.973   −1   −1   −1   −1   −1   −1   +1   +1   +1
H(xi)          +1   +1   −1   −1   −1   −1   −1   +1   +1
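The last row of this table (and the value errS(H) = 0) can be checked with a couple of lines of NumPy, using the αt values and the per-instance votes ht(xi) listed above:

```python
import numpy as np

alphas = np.array([0.626, 0.896, 0.973])   # ln sqrt(7/2), ln sqrt(6), ln sqrt(7)
votes = np.array([                          # h_t(x_i) for t = 1..3 (rows), i = 1..9 (columns)
    [+1, +1, -1, +1, +1, -1, -1, +1, +1],
    [+1, +1, -1, -1, -1, -1, -1, -1, -1],
    [-1, -1, -1, -1, -1, -1, +1, +1, +1],
])
H = np.sign(alphas @ votes)
print(H)   # expected: [+1 +1 -1 -1 -1 -1 -1 +1 +1]
```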
Remark: One can immediately see that the [test] instance (1, 4) will be classified by the hypothesis H learned by AdaBoost as negative (since −α1 + α2 − α3 = −0.626 + 0.896 − 0.973 < 0). After making other similar calculations, we can conclude that the decision zones and the decision boundaries produced by AdaBoost for the given training data are as indicated in the nearby figure.

(Figure: the decision regions delimited by the three separators h1, h2, h3 in the (X1, X2) plane.)

Remark: The execution of AdaBoost could continue (if we had initially taken T > 3 . . .), although we obtained errS(H) = 0 at iteration t = 3. By elaborating the details, we would see that for t = 4 we would obtain as optimal hypothesis X2 < 7/2 (which had already been selected at iteration t = 1). This hypothesis now produces the weighted training error ε4 = 1/6. Therefore α4 = ln√5, and this would be added to α1 = ln√(7/2), so the vote of this weak hypothesis would be strengthened. So, we should keep in mind that AdaBoost can select a certain weak hypothesis several times (but never at consecutive iterations, cf. CMU, 2015 fall, E. Xing, Z. Bar-Joseph, HW4, pr. 2.1).
CMU, 2008 fall, Eric Xing, HW3, pr. 4.1.1 CMU, 2008 fall, Eric Xing, midterm, pr. 5.1
At CMU, 2015 fall, Ziv Bar-Joseph, Eric Xing, HW4, pr. 2.1-5, part d, we have shown that in AdaBoost we try to [indirectly] minimize the training error err_S(H) by sequentially minimizing its upper bound Π_{t=1}^{T} Zt, i.e., at each iteration t (1 ≤ t ≤ T) we choose αt so as to minimize Zt (viewed as a function of αt).

Here you will see that another way to explain AdaBoost is by sequentially minimizing the negative exponential loss:

E = Σ_{i=1}^{m} exp(−yi fT(xi)) = Σ_{i=1}^{m} exp(−yi Σ_{t=1}^{T} αt ht(xi)).   (3)

That is to say, at the t-th iteration (1 ≤ t ≤ T) we want to choose, besides the appropriate classifier ht, the corresponding weight αt, so that the overall loss E (accumulated up to the t-th iteration) is minimized.

Prove that this [new] strategy leads to the same update rule for αt used in AdaBoost, i.e., αt = (1/2)·ln((1 − εt)/εt).

Hint: You can use the fact that Dt(i) ∝ exp(−yi f_{t−1}(xi)), and it [LC: the proportionality factor] can be viewed as constant when we try to optimize E with respect to αt in the t-th iteration.
At the t-th iteration, we have

E = Σ_{i=1}^{m} exp(−yi ft(xi)) = Σ_{i=1}^{m} exp(−yi Σ_{t′=1}^{t−1} αt′ ht′(xi)) · exp(−yi αt ht(xi))
  = Σ_{i=1}^{m} exp(−yi f_{t−1}(xi)) · exp(−yi αt ht(xi))
  ∝ Σ_{i=1}^{m} Dt(i) · exp(−yi αt ht(xi)) =: E′

(see CMU, 2015 fall, Z. Bar-Joseph, E. Xing, HW4, pr. 2.1-5, part b). Further on, we can rewrite E′ as

E′ = Σ_{i=1}^{m} Dt(i) · exp(−yi αt ht(xi)) = Σ_{i∈C} Dt(i)·exp(−αt) + Σ_{i∈M} Dt(i)·exp(αt) = (1 − εt)·e^{−αt} + εt·e^{αt},   (4)

where C is the set of examples which are correctly classified by ht, and M is the set of examples which are misclassified by ht.

The relation (4) is identical with the expression (1) from part a of CMU, 2015 fall, Ziv Bar-Joseph, Eric Xing, HW4, pr. 2.1-5 (see the solution). Therefore, E will reach its minimum for αt = (1/2)·ln((1 − εt)/εt).
CMU, 2016 spring, W. Cohen, N. Balcan, HW4, pr. 3.3
Despite the fact that model complexity increases with each iteration, AdaBoost does not usually overfit. The reason behind this is that the model becomes more "confident" as we increase the number of iterations. The "confidence" can be expressed mathematically as the [voting] margin.

Recall that after the AdaBoost algorithm terminates with T iterations, the [output] classifier is

H_T(x) = sign(Σ_{t=1}^{T} αt ht(x)).

Similarly, we can define the intermediate weighted classifier after k iterations as:

H_k(x) = sign(Σ_{t=1}^{k} αt ht(x)).

As its output is either −1 or 1, it does not tell the confidence of its judgement. Here, without changing the decision rule, let

H_k(x) = sign(Σ_{t=1}^{k} ᾱt ht(x)),   where ᾱt = αt / Σ_{t′=1}^{k} αt′,

so that the weights on the weak classifiers are normalized.
Define the margin after the k-th iteration as [the sum of] the [normalized] weights of the ht voting correctly minus [the sum of] the [normalized] weights of the ht voting incorrectly:

Margin_k(x) = Σ_{t: ht(x) = y} ᾱt − Σ_{t: ht(x) ≠ y} ᾱt,   and let f_k(x) = Σ_{t=1}^{k} ᾱt ht(x).

a. Show that Margin_k(xi) = yi f_k(xi) for all training instances xi, with i = 1, . . . , m.

b. If, after iteration k, the margin of a training instance xi is larger than the margin of another instance xj, which of the two instances gets the higher weight in iteration k + 1?

Hint: Use the relation D_{k+1}(i) = (1/(m · Π_{t=1}^{k} Zt)) · exp(−yi f_k(xi)), which was proven at CMU, 2015 fall, Z. Bar-Joseph, E. Xing, HW4, pr. 2.2.
a. yi f_k(xi) = yi Σ_{t=1}^{k} ᾱt ht(xi) = Σ_{t=1}^{k} ᾱt yi ht(xi) = Σ_{t: ht(xi) = yi} ᾱt − Σ_{t: ht(xi) ≠ yi} ᾱt = Margin_k(xi).

b. Margin_k(xi) > Margin_k(xj) ⇔ yi f_k(xi) > yj f_k(xj) ⇔ −yi f_k(xi) < −yj f_k(xj) ⇔ exp(−yi f_k(xi)) < exp(−yj f_k(xj)). Based on the given Hint, it follows that D_{k+1}(i) < D_{k+1}(j), i.e., the instance with the larger margin gets the smaller weight.
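For instance, using the vote table from the worked AdaBoost exercise above (where errS(H) = 0, so yi = H(xi)), the normalized margins yi·fk(xi) can be computed as follows (a sketch):

```python
import numpy as np

alphas = np.array([0.626, 0.896, 0.973])
votes = np.array([                               # votes[t, i] = h_t(x_i)
    [+1, +1, -1, +1, +1, -1, -1, +1, +1],
    [+1, +1, -1, -1, -1, -1, -1, -1, -1],
    [-1, -1, -1, -1, -1, -1, +1, +1, +1],
])
y = np.array([+1, +1, -1, -1, -1, -1, -1, +1, +1])   # true labels (= H(x_i) here)

alpha_bar = alphas / alphas.sum()                     # normalized votes
margins = y * (alpha_bar @ votes)                     # Margin_k(x_i) = y_i * f_k(x_i)
print(margins)                                        # all positive: every point is classified correctly
```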
It can be shown mathematically that boosting tends to increase the margins of training examples (see relation (3) at CMU, 2008 fall, Eric Xing, HW3, pr. 4.1.1), and that a large margin on the training data is associated with a better generalization guarantee. Thus we can explain why, although the number of "parameters" of the model created by AdaBoost increases by 2 at every iteration (therefore its complexity rises), it usually doesn't overfit.
CMU, 2016 spring, W. Cohen, N. Balcan, HW4, pr. 3.1.4
At CMU, 2015 fall, Z. Bar-Joseph, E. Xing, HW4, pr. 2.5 we encountered the notion of empirical γ-weak learnability. When this condition (γ ≤ γt for all t, where γt = 1/2 − εt, with εt the weighted training error produced by the weak hypothesis ht) is met, it ensures that AdaBoost will drive down the training error quickly. However, this condition does not hold all the time.

In this problem we will prove a sufficient condition for empirical weak learnability [to hold]. This condition refers to the notion of voting margin, which was presented in CMU, 2016 spring, W. Cohen, N. Balcan, HW4, pr. 3.3. Namely, we will prove that if there is a constant θ > 0 such that the [voting] margins of all training instances are lower-bounded by θ at each iteration of the AdaBoost algorithm, then the property of empirical γ-weak learnability is "guaranteed", with γ = θ/2.
Suppose we are given a training set S = {(x1, y1), . . . , (xm, ym)} such that, for some weak hypotheses h1, . . . , hk from the hypothesis space H and some non-negative coefficients α1, . . . , αk with Σ_{j=1}^{k} αj = 1, there exists θ > 0 such that

yi (Σ_{j=1}^{k} αj hj(xi)) ≥ θ,   for all (xi, yi) ∈ S.

Note: according to CMU, 2016 spring, W. Cohen, N. Balcan, HW4, pr. 3.3, yi (Σ_{j=1}^{k} αj hj(xi)) = Margin_k(xi) = yi f_k(xi), where f_k(xi) = Σ_{j=1}^{k} αj hj(xi).

Key idea: We will show that if the condition above is satisfied (for a given k), then for any distribution D over S there exists a hypothesis hl ∈ {h1, . . . , hk} with weighted training error at most 1/2 − θ/2 over the distribution D. It will follow that when the condition above is satisfied for any k, the training set S is empirically γ-weak learnable, with γ = θ/2.
a. Prove that, for any distribution D over S, there exists a hypothesis hl from {h1, . . . , hk} such that E_{i∼D}[yi hl(xi)] ≥ θ.
Hint: Taking expectation under the same distribution does not change the inequality conditions.

b. Show that the hypothesis hl from part a satisfies Pr_{i∼D}[yi ≠ hl(xi)] ≤ 1/2 − θ/2, meaning that the weighted training error of hl is at most 1/2 − θ/2, and therefore γt ≥ θ/2.
a. Since yi (Σ_{j=1}^{k} αj hj(xi)) ≥ θ ⇔ yi f_k(xi) ≥ θ for i = 1, . . . , m, it follows (according to the Hint) that

E_{i∼D}[yi f_k(xi)] ≥ θ,   where f_k(xi) = Σ_{j=1}^{k} αj hj(xi).   (5)

On the other side, E_{i∼D}[yi hl(xi)] ≥ θ ⇔ Σ_{i=1}^{m} yi hl(xi) · D(i) ≥ θ.

Suppose, on the contrary, that E_{i∼D}[yi hl(xi)] < θ, that is Σ_{i=1}^{m} yi hl(xi) · D(i) < θ, for every l = 1, . . . , k. Then Σ_{i=1}^{m} yi hl(xi) · D(i) · αl ≤ θ · αl for l = 1, . . . , k, with strict inequality whenever αl > 0. By summing up these inequalities for l = 1, . . . , k (at least one αl is positive, since they sum to 1) we get

Σ_{l=1}^{k} Σ_{i=1}^{m} yi hl(xi) · D(i) · αl < Σ_{l=1}^{k} θ · αl ⇔ Σ_{i=1}^{m} yi D(i) (Σ_{l=1}^{k} αl hl(xi)) < θ Σ_{l=1}^{k} αl ⇔ Σ_{i=1}^{m} yi f_k(xi) · D(i) < θ,   (6)

because Σ_{j=1}^{k} αj = 1 and f_k(xi) = Σ_{l=1}^{k} αl hl(xi).

The inequality (6) can be written as E_{i∼D}[yi f_k(xi)] < θ. Obviously, it contradicts relation (5). Therefore the supposition is false; in conclusion, there exists l ∈ {1, . . . , k} such that E_{i∼D}[yi hl(xi)] ≥ θ.
b. From part a, Σ_{i=1}^{m} yi hl(xi) · D(i) ≥ θ. Since yi ∈ {−1, +1} and hl(xi) ∈ {−1, +1} for i = 1, . . . , m and l = 1, . . . , k, we have

Σ_{i=1}^{m} yi hl(xi) · D(i) ≥ θ ⇔ Σ_{i: hl(xi) = yi} D(xi) − Σ_{i: hl(xi) ≠ yi} D(xi) ≥ θ ⇔ (1 − εl) − εl ≥ θ ⇔ 1 − 2εl ≥ θ ⇔ 2εl ≤ 1 − θ ⇔ εl ≤ 1/2 − θ/2 ⇔ Pr_{i∼D}[yi ≠ hl(xi)] ≤ 1/2 − θ/2.
Stanford, 2016 fall, Andrew Ng, John Duchi, HW2, pr. 6.abc
At CMU, 2015, Z. Bar-Joseph, E. Xing, HW4, pr. 2.5 we encountered the notion of empirical γ-weak learnability. When this condition (γ ≤ γt for all t, where γt = 1/2 − εt, with εt the weighted training error produced by the weak hypothesis ht) is met, it ensures that AdaBoost will drive down the training error quickly.

In this problem we will assume that our input attribute vectors x ∈ R, that is, they are one-dimensional, and we will show that [LC] when these vectors are consistently labelled, decision stumps based on thresholding provide a weak-learning guarantee (γ).
Decision stumps: analytical definitions / formalization

Thresholding-based decision stumps can be seen as functions indexed by a threshold s and a sign +/−, such that

φ_{s,+}(x) = 1 if x ≥ s, −1 if x < s;   and   φ_{s,−}(x) = −1 if x ≥ s, 1 if x < s.

Therefore, φ_{s,+}(x) = −φ_{s,−}(x).

Key idea for the proof

We will show that, given a consistently labelled training set S = {(x1, y1), . . . , (xm, ym)}, with xi ∈ R and yi ∈ {−1, +1} for i = 1, . . . , m, there is some γ > 0 such that for any distribution p defined on this training set there is a threshold s ∈ R for which

error_p(φ_{s,+}) ≤ 1/2 − γ   or   error_p(φ_{s,−}) ≤ 1/2 − γ,

where error_p(φ_{s,+}) and error_p(φ_{s,−}) denote the weighted training errors of φ_{s,+} and φ_{s,−}, respectively, computed according to the distribution p.
Convention: In our problem we will assume that the training instances x1, . . . , xm ∈ R are distinct. Moreover, we will assume (without loss of generality, but this makes the proof notationally simpler) that x1 > x2 > . . . > xm.

a. Show that, given S, for each threshold s ∈ R there is some m0(s) ∈ {0, 1, . . . , m} such that

error_p(φ_{s,+}) = Σ_{i=1}^{m} pi · 1{yi ≠ φ_{s,+}(xi)} = 1/2 − (1/2)·(Σ_{i=1}^{m0(s)} yi pi − Σ_{i=m0(s)+1}^{m} yi pi)

and

error_p(φ_{s,−}) = Σ_{i=1}^{m} pi · 1{yi ≠ φ_{s,−}(xi)} = 1/2 − (1/2)·(Σ_{i=m0(s)+1}^{m} yi pi − Σ_{i=1}^{m0(s)} yi pi).

Note: Treat sums over empty sets of indices as zero. Therefore Σ_{i=1}^{0} ai = 0 for any ai, and similarly Σ_{i=m+1}^{m} ai = 0.
b. Show that there exists some γ > 0 (which may depend on the training set size m) such that, for any set of probabilities p on the training set (therefore pi ≥ 0 and Σ_{i=1}^{m} pi = 1), we can find m0 ∈ {0, . . . , m} so that |f(m0)| ≥ 2γ, where

f(m0) = Σ_{i=1}^{m0} yi pi − Σ_{i=m0+1}^{m} yi pi.

Note: γ should not depend on p.
Hint: Consider the difference f(m0) − f(m0 − 1). What is your γ?
c. Based on your answer to question b, what edge can thresholded decision stumps guarantee on any training set {xi, yi}_{i=1}^{m}, where the raw attributes xi ∈ R are all distinct? Recall that the edge of a weak classifier φ : R → {−1, 1} is the constant γ ∈ (0, 1/2) such that

error_p(φ) = Σ_{i=1}^{m} pi · 1{φ(xi) ≠ yi} ≤ 1/2 − γ.

d. Can you give an upper bound on the number of thresholded decision stumps required to achieve zero error on a given training set?
a. Let sign(t) = 1 if t ≥ 0, and sign(t) = −1 otherwise. Then 1{φ_{s,+}(x) ≠ y} = 1{sign(x−s) ≠ y} = 1{y·sign(x−s) ≤ 0}, where 1{·} denotes the well-known indicator function. Thus we have

error_p(φ_{s,+}) = Σ_{i=1}^{m} pi · 1{yi ≠ φ_{s,+}(xi)} = Σ_{i=1}^{m} pi · 1{yi·sign(xi−s) ≤ 0} = Σ_{i: xi ≥ s} pi · 1{yi=−1} + Σ_{i: xi < s} pi · 1{yi=1}.

Thus, if we let m0(s) be the index in {0, . . . , m} such that xi ≥ s for i ≤ m0(s) and xi < s for i > m0(s) (which we know must exist because x1 > x2 > . . .), we have

error_p(φ_{s,+}) = Σ_{i=1}^{m0(s)} pi · 1{yi=−1} + Σ_{i=m0(s)+1}^{m} pi · 1{yi=1}.
Now we make a key observation: 1{y=−1} = (1 − y)/2 and 1{y=1} = (1 + y)/2, because y ∈ {−1, 1}. Consequently,

error_p(φ_{s,+}) = Σ_{i=1}^{m0(s)} pi · (1 − yi)/2 + Σ_{i=m0(s)+1}^{m} pi · (1 + yi)/2
 = (1/2) Σ_{i=1}^{m} pi − (1/2) Σ_{i=1}^{m0(s)} pi yi + (1/2) Σ_{i=m0(s)+1}^{m} pi yi
 = 1/2 − (1/2)·(Σ_{i=1}^{m0(s)} pi yi − Σ_{i=m0(s)+1}^{m} pi yi).

The last equality follows because Σ_{i=1}^{m} pi = 1. The case of φ_{s,−} is symmetric to this one, so we omit the argument.
b. f(m0) − f(m0 − 1) = (Σ_{i=1}^{m0} yi pi − Σ_{i=m0+1}^{m} yi pi) − (Σ_{i=1}^{m0−1} yi pi − Σ_{i=m0}^{m} yi pi) = 2 y_{m0} p_{m0}.

Therefore |f(m0) − f(m0 − 1)| = 2|y_{m0}| p_{m0} = 2 p_{m0} for all m0 ∈ {1, . . . , m}. Because Σ_{i=1}^{m} pi = 1, there must be at least one index m′0 with p_{m′0} ≥ 1/m. Thus we have |f(m′0) − f(m′0 − 1)| ≥ 2/m, and so at least one of

|f(m′0)| ≥ 1/m   or   |f(m′0 − 1)| ≥ 1/m

must hold. Depending on which of these two inequalities is true, we "return" m′0 or m′0 − 1.

(Note: If |f(m′0 − 1)| ≥ 1/m and m′0 = 1, then we have to consider an "outside" threshold, s > x1.)

Finally, we have γ = 1/(2m).
c. For the returned m0, either f(m0) ≥ 2γ, in which case

error_p(φ_{s,+}) = 1/2 − (1/2)·f(m0) ≤ 1/2 − (1/2)·2γ = 1/2 − γ,

or f(m0) ≤ −2γ, in which case

error_p(φ_{s,−}) = 1/2 + (1/2)·f(m0) ≤ 1/2 − (1/2)·2γ = 1/2 − γ.

Therefore thresholded decision stumps are guaranteed to have an edge of at least γ = 1/(2m) over random guessing.
To summarize: at each iteration t executed by AdaBoost, when the current distribution p = Dt (as defined in CMU, 2015 fall, Z. Bar-Joseph, E. Xing, HW4, pr. 2.1-5) is in use,

− there is at least one m0 (better denoted m0(p)) in {0, . . . , m} such that |f(m0)| ≥ 1/m = 2γ, where f(m0) = Σ_{i=1}^{m0} yi pi − Σ_{i=m0+1}^{m} yi pi;
− for any s ∈ (x_{m0+1}, x_{m0}], error_p(φ_{s,+}) ≤ 1/2 − γ or error_p(φ_{s,−}) ≤ 1/2 − γ, where γ = 1/(2m).

As a consequence, AdaBoost can choose at each iteration a weak hypothesis (ht) for which γt ≥ γ = 1/(2m).
d. As shown at CMU, 2015, Z. Bar-Joseph, E. Xing, HW4, pr. 2.5, AdaBoost needs O((1/γ²)·ln(1/ε)) iterations to achieve training error below ε; taking ε = 1/m (which forces zero training error) and γ = 1/(2m), with decision stumps we will achieve zero [training] error in at most 2m² ln m iterations of boosting. Each iteration of boosting introduces a single new weak hypothesis, so at most 2m² ln m thresholded decision stumps are necessary.
Here we derive a boosting algorithm from a slightly more general perspective than the AdaBoost algorithm in CMU, 2015 fall, Z. Bar-Joseph, E. Xing, HW4, pr. 2.1-5, one that will be applicable to a class of loss functions including the exponential one. The goal is to generate discriminant functions of the form fK(x) = α1 h(x; θ1) + . . . + αK h(x; θK), where x ∈ R^d; you can assume that the weak classifiers h(x; θ) are decision stumps whose predictions are ±1, but any other set of weak learners would work without modification. We successively add components to the overall discriminant function, in a manner that separates the estimation of [the parameters of] the weak classifiers from the setting of the votes α, to the extent possible.
Let's start by defining a set of useful loss functions. The only restriction we place on the loss is that it should be a monotonically decreasing and differentiable function of its argument. The argument in our context is yi fK(xi), so that the more the discriminant function agrees with the ±1 label yi, the smaller the loss. The simple exponential loss we have already considered [at CMU, 2015 fall, Z. Bar-Joseph, E. Xing, HW4, pr. 2.1-5],

Loss(yi fK(xi)) = exp(−yi fK(xi)),

certainly conforms to this notion. And so does the logistic loss,

Loss(yi fK(xi)) = ln(1 + exp(−yi fK(xi))).
Note that the logistic loss has a nice interpretation as a negative log-likelihood:

− ln P(y = 1 | x, w) = − ln (1 / (1 + exp(−z))) = ln(1 + exp(−z)),

where z = w1 φ1(x) + . . . + wK φK(x) and we omit the bias term (w0) for simplicity. By replacing the additive combination of basis functions (φi(x)) with the combination of weak classifiers (h(x; θi)), we have an additive logistic regression model where the weak classifiers serve as the basis functions. The difference is that both the basis functions (weak classifiers) and the coefficients multiplying them will be estimated; in the logistic regression model we typically envision a fixed set of basis functions.
Let us now try to derive the boosting algorithm in a manner that can accommodate any loss function of the type discussed above. To this end, suppose we have already included k − 1 component classifiers,

f_{k−1}(x) = α̂1 h(x; θ̂1) + . . . + α̂_{k−1} h(x; θ̂_{k−1}),   (7)

and we wish to add another, h(x; θ). The estimation criterion for the overall discriminant function, including the new component with vote α, is given by

J(α, θ) = (1/m) Σ_{i=1}^{m} Loss(yi f_{k−1}(xi) + yi α h(xi; θ)).

Note that we explicate only how the objective depends on the choice of the last component and the corresponding vote, since the parameters of the k − 1 previous components, along with their votes, have already been set and won't be modified further.
We will first try to find the new component, i.e. the parameters θ, so as to maximize its potential for reducing the empirical loss, potential in the sense that we can subsequently adjust the vote to actually reduce the empirical loss. More precisely, we set θ so as to minimize the derivative

∂/∂α J(α, θ)|_{α=0} = (1/m) Σ_{i=1}^{m} ∂/∂α Loss(yi f_{k−1}(xi) + yi α h(xi; θ))|_{α=0} = (1/m) Σ_{i=1}^{m} dL(yi f_{k−1}(xi)) yi h(xi; θ),   (8)

where dL(z) = ∂Loss(z)/∂z. Note that this derivative ∂/∂α J(α, θ)|_{α=0} precisely captures the amount by which we would start to reduce the empirical loss if we gradually increased the vote (α) of the new component with parameters θ. Minimizing this quantity seems like a sensible estimation criterion for the new component, i.e. for θ. This plan permits us to first set θ and then subsequently optimize α so as to actually minimize the empirical loss.
Let's rewrite the algorithm slightly to make it look more like a boosting algorithm, by defining (un-normalized and normalized) weights on the training examples:

W_i^{(k−1)} = −dL(yi f_{k−1}(xi))   and   W̃_i^{(k−1)} = W_i^{(k−1)} / Σ_{j=1}^{m} W_j^{(k−1)},   for i = 1, . . . , m.

These weights are guaranteed to be non-negative, since the loss function is a decreasing function of its argument (its derivative has to be negative or zero).
Now we can rewrite expression (8) as

∂/∂α J(α, θ)|_{α=0} = −(1/m) Σ_{i=1}^{m} W_i^{(k−1)} yi h(xi; θ) = −(1/m)·(Σ_j W_j^{(k−1)}) · Σ_{i=1}^{m} W̃_i^{(k−1)} yi h(xi; θ).

By ignoring the multiplicative constant (i.e., (1/m)·Σ_j W_j^{(k−1)}, which is constant at iteration k), we will estimate θ by minimizing

− Σ_{i=1}^{m} W̃_i^{(k−1)} yi h(xi; θ),

where the normalized weights W̃_i^{(k−1)} sum to 1. This is the same as maximizing the weighted agreement with the labels, i.e., Σ_{i=1}^{m} W̃_i^{(k−1)} yi h(xi; θ).
We are now ready to cast the steps of the boosting algorithm in a form similar to the AdaBoost algorithm given at CMU, 2015 fall, Z. Bar-Joseph, E. Xing, HW4, pr. 2.1-5:

Step 1: Find any classifier h(x; θ̂k) that performs better than chance with respect to the weighted training error:

εk = (1/2)·(1 − Σ_{i=1}^{m} W̃_i^{(k−1)} yi h(xi; θ̂k)).

Step 2: Set the vote αk for the new component by minimizing the overall empirical loss:

J(α, θ̂k) = (1/m) Σ_{i=1}^{m} Loss(yi f_{k−1}(xi) + yi α h(xi; θ̂k)),   αk = arg min_{α≥0} J(α, θ̂k).

Step 3: Recompute the normalized weights for the next iteration according to

W̃_i^{(k)} = −c · dL(yi f_{k−1}(xi) + yi αk h(xi; θ̂k))   for i = 1, . . . , m,

where c is chosen so that Σ_{i=1}^{m} W̃_i^{(k)} = 1.
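A compact Python sketch of Steps 1-3, written so that the loss function can be swapped (exponential shown, logistic indicated in the comment); all function names are our own, and the search interval for α is an arbitrary choice:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Loss and its derivative dL(z) = dLoss/dz; for the logistic loss one would use
# loss = lambda z: np.log1p(np.exp(-z)) and dL = lambda z: -1.0 / (1.0 + np.exp(z)).
loss = lambda z: np.exp(-z)
dL   = lambda z: -np.exp(-z)

def best_stump(X, y, W):
    """Step 1: maximize the weighted agreement sum_i W_i y_i h(x_i) over decision stumps."""
    best, best_h = -np.inf, None
    for j in range(X.shape[1]):
        for s in np.unique(X[:, j]):
            for sign in (+1, -1):
                h = lambda Z, j=j, s=s, sign=sign: sign * np.where(Z[:, j] >= s, 1, -1)
                agree = np.sum(W * y * h(X))
                if agree > best:
                    best, best_h = agree, h
    return best_h

def boost(X, y, K):
    m = len(y)
    f = np.zeros(m)                          # current discriminant f_{k-1}(x_i)
    components = []
    for _ in range(K):
        W = -dL(y * f); W /= W.sum()         # normalized weights W~_i^{(k-1)}
        h = best_stump(X, y, W)              # Step 1
        obj = lambda a: np.mean(loss(y * f + y * a * h(X)))
        alpha = minimize_scalar(obj, bounds=(0, 10), method="bounded").x   # Step 2
        f += alpha * h(X)                    # Step 3 is realized by the next W update
        components.append((alpha, h))
    return lambda Z: np.sign(sum(a * h(Z) for a, h in components))
```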
Show that the three steps in the algorithm correspond exactly to AdaBoost when the loss function is the exponential loss, Loss(z) = exp(−z). More precisely, show that in this case the setting of αk based on the new weak classifier, and the weight update used to get W̃_i^{(k)}, are identical to AdaBoost's. (In CMU, 2015 fall, Z. Bar-Joseph, E. Xing, HW4, pr. 2.1-5, W̃_i^{(k)} corresponds to Dk(i).)

Solution

For the first part, see CMU, 2008 fall, Eric Xing, HW3, pr. 4.1.1. For the second part, note that the weight assignment in Step 3 of the general algorithm (for stage k) is

W̃_i^{(k)} = −c · dL(yi fk(xi)) = c · exp(−yi fk(xi)),

which is the same as in AdaBoost (see CMU, 2015 fall, Z. Bar-Joseph, E. Xing, HW4, pr. 2.2).