

SLIDE 1

Boosting
Can we make dumb learners smart?

Aarti Singh
Machine Learning 10-701/15-781
Oct 11, 2010

Slides Courtesy: Carlos Guestrin, Freund & Schapire

SLIDE 2

Project Proposal Due Today!


SLIDE 3

Why boost weak learners?

Goal: Automatically categorize type of call requested (Collect, Calling card, Person-to-person, etc.)

  • Easy to find “rules of thumb” that are “often” correct.

E.g., if ‘card’ occurs in the utterance, then predict ‘calling card’.

  • Hard to find single highly accurate prediction rule.


SLIDE 4
Fighting the bias-variance tradeoff

  • Simple (a.k.a. weak) learners, e.g., naïve Bayes, logistic regression, decision stumps (or shallow decision trees)
    – Are good: low variance, don’t usually overfit
    – Are bad: high bias, can’t solve hard learning problems
  • Can we make weak learners always good???
    – No!!! But often yes…

SLIDE 5

Voting (Ensemble Methods)

  • Instead of learning a single (weak) classifier, learn many weak classifiers that are good at different parts of the input space
  • Output class: (weighted) vote of each classifier
    – Classifiers that are most “sure” will vote with more conviction
    – Classifiers will be most “sure” about a particular part of the space
    – On average, do better than a single classifier!

H: X → Y, where Y ∈ {-1, +1}

Weak classifiers h1(X), h2(X), … each vote on regions of the input space.

Final classifier: H(X) = sign(∑_t α_t h_t(X)), where the α_t are the classifier weights.

With equal weights (α1 = α2 = 1): H(X) = sign(h1(X) + h2(X))

[Figure: input space split by h1 and h2, with question marks where the two classifiers disagree]
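As a minimal sketch of this combination rule (the names weak_clfs and alphas are hypothetical stand-ins for whatever a training procedure produced):

    import numpy as np

    def weighted_vote(weak_clfs, alphas, X):
        """H(X) = sign(sum_t alpha_t * h_t(X)); each h_t returns labels in {-1, +1}."""
        scores = sum(a * h(X) for a, h in zip(alphas, weak_clfs))
        return np.sign(scores)

    # The unweighted special case H(X) = sign(h1(X) + h2(X)) is alphas = [1, 1].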

SLIDE 6

Voting (Ensemble Methods)


  • But how do you ???
    – force classifiers h_t to learn about different parts of the input space?
    – weigh the votes of the different classifiers (the α_t)?

SLIDE 7

Boosting [Schapire’89]

  • Idea: given a weak learner, run it multiple times on (reweighted) training data, then let the learned classifiers vote
  • On each iteration t:
    – weight each training example by how incorrectly it was classified
    – learn a weak hypothesis, h_t
    – assign a strength to this hypothesis, α_t
  • Final classifier: H(X) = sign(∑_t α_t h_t(X))
  • Practically useful
  • Theoretically interesting

SLIDE 8

Learning from weighted data

  • Consider a weighted dataset
    – D(i) = weight of the i-th training example (x_i, y_i)
    – Interpretations:
      • the i-th training example counts as D(i) examples
      • if I were to “resample” the data, I would get more samples of “heavier” data points
  • Now, in all calculations, wherever it is used, the i-th training example counts as D(i) “examples”
    – e.g., in MLE, redefine Count(Y = y) to be the weighted count:

      Unweighted data:    Count(Y = y) = ∑_{i=1}^{m} 1(Y_i = y)
      With weights D(i):  Count(Y = y) = ∑_{i=1}^{m} D(i) 1(Y_i = y)
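A minimal numeric sketch of the weighted count (the values here are illustrative only):

    import numpy as np

    y = np.array([1, 1, -1, 1, -1])           # labels Y_i
    D = np.array([0.1, 0.1, 0.4, 0.2, 0.2])   # weights D(i), summing to 1

    count_unweighted = np.sum(y == 1)         # each example counts once -> 3
    count_weighted = np.sum(D * (y == 1))     # i-th example counts as D(i) -> 0.4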

SLIDE 9

AdaBoost [Freund & Schapire’95]

  • Initialize equal weights: D_1(i) = 1/m
  • For t = 1 … T:
    – Train a weak learner (e.g., naïve Bayes, decision stump) on the data weighted by D_t; obtain h_t
    – Compute a strength α_t for h_t (a “magic” positive number, derived on the following slides)
    – Update the weights, increasing D(i) if h_t is wrong on point i, i.e., if y_i h_t(x_i) = -1 < 0:

        D_{t+1}(i) ∝ D_t(i) exp( -α_t y_i h_t(x_i) )

  • Final classifier: H(X) = sign(∑_t α_t h_t(X))

SLIDE 10

AdaBoost [Freund & Schapire’95] (continued)

The weights of all points must sum to 1:

    ∑_i D_{t+1}(i) = 1

so the update is normalized by Z_t:

    D_{t+1}(i) = D_t(i) exp( -α_t y_i h_t(x_i) ) / Z_t


SLIDE 12

What α_t to choose for hypothesis h_t? [Freund & Schapire’95]

Weighted training error (does h_t get the i-th point wrong?):

    ε_t = ∑_i D_t(i) 1( h_t(x_i) ≠ y_i )

Strength of the hypothesis:

    α_t = ½ ln( (1 - ε_t) / ε_t )

  • ε_t = 0 if h_t perfectly classifies all weighted data points → α_t = ∞
  • ε_t = 1 if h_t is perfectly wrong (so -h_t is perfectly right) → α_t = -∞
  • ε_t = 0.5 → α_t = 0

Weight Update Rule:

    D_{t+1}(i) = D_t(i) exp( -α_t y_i h_t(x_i) ) / Z_t
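Putting the pieces together, here is a compact AdaBoost sketch following the update rules above (using scikit-learn’s DecisionTreeClassifier as the decision-stump weak learner is an assumption; any learner that accepts sample weights would do):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost(X, y, T):
        """AdaBoost with decision-stump weak learners; labels y must be in {-1, +1}."""
        m = len(y)
        D = np.full(m, 1.0 / m)                     # D_1(i) = 1/m: initially equal weights
        hyps, alphas = [], []
        for t in range(T):
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X, y, sample_weight=D)        # weak learner on weighted data
            pred = stump.predict(X)
            eps = np.sum(D * (pred != y))           # weighted training error eps_t
            if eps >= 0.5:                          # not better than random: stop
                break
            eps = max(eps, 1e-10)                   # avoid division by zero in alpha
            alpha = 0.5 * np.log((1 - eps) / eps)   # alpha_t = 1/2 ln((1-eps_t)/eps_t)
            D *= np.exp(-alpha * y * pred)          # up-weight mistakes (y_i h_t(x_i) = -1)
            D /= D.sum()                            # normalize: sum_i D_{t+1}(i) = 1
            hyps.append(stump)
            alphas.append(alpha)
        return hyps, alphas

    def adaboost_predict(hyps, alphas, X):
        """Final classifier H(x) = sign(sum_t alpha_t h_t(x))."""
        return np.sign(sum(a * h.predict(X) for a, h in zip(alphas, hyps)))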

SLIDE 13

Boosting Example (Decision Stumps)


SLIDE 14


Boosting Example (Decision Stumps)
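For concreteness, a from-scratch decision stump on weighted data picks the single feature, threshold, and sign that minimize the weighted error (a sketch under those assumptions, not the slide’s own code):

    import numpy as np

    def fit_stump(X, y, D):
        """Return (feature j, threshold, sign s) minimizing the weighted error of
        the rule: predict s if x[j] > threshold, else -s."""
        best_err, best = np.inf, None
        for j in range(X.shape[1]):
            for thresh in np.unique(X[:, j]):
                for s in (+1, -1):
                    pred = np.where(X[:, j] > thresh, s, -s)
                    err = np.sum(D * (pred != y))        # weighted training error
                    if err < best_err:
                        best_err, best = err, (j, thresh, s)
        return best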

SLIDE 15

Analyzing training error

Analysis reveals:

  • What α_t to choose for hypothesis h_t, in terms of ε_t, the weighted training error
  • If each weak learner h_t is slightly better than random guessing (ε_t < 0.5), then the training error of AdaBoost decays exponentially fast in the number of rounds T

[Figure: training error vs. number of rounds]

SLIDE 16

Analyzing training error

Training error of the final classifier is bounded by:

    (1/m) ∑_{i=1}^{m} 1(H(x_i) ≠ y_i) ≤ (1/m) ∑_{i=1}^{m} exp(-y_i f(x_i))

where f(x) = ∑_t α_t h_t(x) and H(x) = sign(f(x)).

The exp loss is a convex upper bound on the 0/1 loss: if boosting can make the upper bound → 0, then the training error → 0.

[Figure: 0/1 loss and exp loss as functions of the margin y f(x)]

SLIDE 17

Analyzing training error

Training error of the final classifier is bounded by:

    (1/m) ∑_i 1(H(x_i) ≠ y_i) ≤ (1/m) ∑_i exp(-y_i f(x_i)) = ∏_t Z_t

where f(x) = ∑_t α_t h_t(x).

Proof: unrolling the Weight Update Rule gives D_{T+1}(i) = exp(-y_i f(x_i)) / (m ∏_t Z_t). Since the weights of all points add to 1, ∑_i D_{T+1}(i) = 1, and therefore (1/m) ∑_i exp(-y_i f(x_i)) = ∏_t Z_t.

SLIDE 18

Analyzing training error

Training error of the final classifier is bounded by:

    (1/m) ∑_i 1(H(x_i) ≠ y_i) ≤ ∏_t Z_t

If Z_t < 1 on every round, the training error decreases exponentially, even though the weak learners may not be good (ε_t ≈ 0.5).

[Figure: training error and its upper bound vs. round t]

SLIDE 19

What α_t to choose for hypothesis h_t?

Training error of the final classifier is bounded by:

    (1/m) ∑_i 1(H(x_i) ≠ y_i) ≤ ∏_t Z_t

If we minimize ∏_t Z_t, we minimize our training error. We can tighten this bound greedily, by choosing α_t and h_t on each iteration to minimize Z_t.

SLIDE 20

What α_t to choose for hypothesis h_t?

We can minimize this bound by choosing α_t on each iteration to minimize Z_t:

    Z_t = ∑_i D_t(i) exp( -α_t y_i h_t(x_i) ) = (1 - ε_t) e^{-α_t} + ε_t e^{α_t}

For a boolean target function, this is accomplished by [Freund & Schapire ’97]:

    α_t = ½ ln( (1 - ε_t) / ε_t )

Proof: setting dZ_t/dα_t = -(1 - ε_t) e^{-α_t} + ε_t e^{α_t} = 0 and solving for α_t gives the expression above.

SLIDE 21

What α_t to choose for hypothesis h_t? (continued)

Substituting α_t = ½ ln((1 - ε_t)/ε_t) back into Z_t:

    Z_t = (1 - ε_t) e^{-α_t} + ε_t e^{α_t} = 2 √( ε_t (1 - ε_t) )

which is strictly less than 1 whenever ε_t ≠ 0.5.

SLIDE 22

Dumb classifiers made Smart

Training error of the final classifier is bounded by:

    (1/m) ∑_i 1(H(x_i) ≠ y_i) ≤ ∏_t Z_t = ∏_t 2 √( ε_t (1 - ε_t) ) = ∏_t √( 1 - 4 γ_t² ) ≤ exp( -2 ∑_t γ_t² )

where γ_t = 1/2 - ε_t grows as ε_t moves away from 1/2.

If each classifier is (at least slightly) better than random (ε_t < 0.5), AdaBoost will achieve zero training error exponentially fast (in the number of rounds T)!!
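A quick numeric illustration (the numbers are hypothetical): if every round achieved ε_t = 0.4, so γ_t = 0.1, then after T = 300 rounds

    import numpy as np

    eps, T = 0.4, 300                     # hypothetical per-round error and round count
    gamma = 0.5 - eps
    bound = np.exp(-2 * T * gamma**2)     # training error <= exp(-2 sum_t gamma_t^2)
    print(bound)                          # ~0.0025; once below 1/m, training error is exactly 0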

grows as t moves away from 1/2 What about test error?

SLIDE 23

Boosting results – Digit recognition [Schapire, 1989]

  • Boosting is often (but not always):
    – Robust to overfitting
    – Test set error decreases even after training error is zero

[Figure: test error and training error vs. number of boosting rounds]

SLIDE 24
Generalization Error Bounds [Freund & Schapire’95]

With high probability:

    generalization error ≤ training error + Õ( √(T d / m) )

  • T – number of boosting rounds
  • d – VC dimension of the weak learner, measures complexity of the classifier
  • m – number of training examples

Tradeoff:
  • T small → training error can be large (bias large), complexity penalty small (variance small)
  • T large → training error small (bias small), complexity penalty large (variance large)

SLIDE 25

Generalization Error Bounds [Freund & Schapire’95]

With high probability:

    generalization error ≤ training error + Õ( √(T d / m) )

  • The bound says boosting can overfit if T is large
  • But this contradicts experimental results: boosting is often
    – Robust to overfitting
    – Test set error decreases even after training error is zero
  • Need better analysis tools – margin based bounds

SLIDE 26

Margin Based Bounds [Schapire, Freund, Bartlett, Lee’98]

With high probability, for any margin θ > 0:

    generalization error ≤ (fraction of training examples with margin ≤ θ) + Õ( √( d / (m θ²) ) )

  • The bound is independent of the number of rounds T!
  • Boosting increases the margin very aggressively, since it concentrates on the hardest examples.
  • If the margin is large, more weak learners agree, so more rounds do not necessarily mean the final classifier is getting more complex.
  • Boosting can still overfit if the margin is too small, if the weak learners are too complex, or if they perform arbitrarily close to random guessing.
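To make the margin concrete: the normalized margin of a training example is y_i f(x_i) / ∑_t α_t, a number in [-1, 1]. A small sketch with hypothetical predictions and weights:

    import numpy as np

    # preds[t, i] = h_t(x_i) in {-1, +1}; alphas[t] = weight of round t (illustrative values)
    preds = np.array([[1, 1, -1], [1, -1, -1], [1, 1, 1]])
    alphas = np.array([0.8, 0.4, 0.3])
    y = np.array([1, 1, -1])

    # Normalized margin: confidence-weighted agreement with the true label.
    margins = y * (alphas @ preds) / alphas.sum()
    print(margins)   # larger margin = more weak learners (by weight) agree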

SLIDE 27

Boosting: Experimental Results [Freund & Schapire, 1996]

Comparison of C4.5 (decision trees) vs. boosting decision stumps (depth-1 trees), and of C4.5 vs. boosting C4.5, on 27 benchmark datasets.

[Figure: scatter plots comparing the test error of each pair of methods]

SLIDE 28

[Figure: train/test error curves on datasets where boosting overfits]

SLIDE 29

Boosting and Logistic Regression

Logistic regression assumes:

    P(Y = 1 | X) = 1 / (1 + exp(-f(X))),  with f(X) = w_0 + ∑_j w_j x_j

and tries to maximize the data likelihood, assuming the examples are iid:

    ∏_{i=1}^{m} P(y_i | x_i)

Equivalent to minimizing the log loss:

    ∑_{i=1}^{m} ln( 1 + exp(-y_i f(x_i)) )

SLIDE 30

Boosting and Logistic Regression

Logistic regression is equivalent to minimizing the log loss:

    ∑_i ln( 1 + exp(-y_i f(x_i)) )

Boosting minimizes a similar loss function, the exp loss:

    ∑_i exp( -y_i f(x_i) ),  with f(x) = ∑_t α_t h_t(x) a weighted average of weak learners

Both are smooth approximations of the 0/1 loss!

[Figure: 0/1 loss, exp loss, and log loss plotted against the margin y f(x)]
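A small sketch comparing the three losses as functions of the margin y·f(x) (the base-2 scaling of the log loss is an assumption chosen so it passes through 1 at margin 0, matching the usual picture):

    import numpy as np

    m = np.linspace(-2, 2, 401)             # margin y * f(x)
    zero_one = (m <= 0).astype(float)       # 0/1 loss
    exp_loss = np.exp(-m)                   # boosting's exp loss
    log_loss = np.log2(1 + np.exp(-m))      # logistic regression's log loss (base 2)

    # Both smooth losses upper-bound the 0/1 loss everywhere.
    assert np.all(exp_loss >= zero_one) and np.all(log_loss >= zero_one)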

SLIDE 31

Boosting and Logistic Regression

Logistic regression:

  • Minimize log loss:  ∑_i ln(1 + exp(-y_i f(x_i)))
  • Define f(x) = w_0 + ∑_j w_j x_j, where the x_j are predefined features (linear classifier)
  • Jointly optimize over all weights w_0, w_1, w_2, …

Boosting:

  • Minimize exp loss:  ∑_i exp(-y_i f(x_i))
  • Define f(x) = ∑_t α_t h_t(x), where h_t(x) is defined dynamically to fit the data (not a linear classifier)
  • Weights α_t learned incrementally, one per iteration

SLIDE 32

Hard & Soft Decision

Weighted average of weak learners:  f(X) = ∑_t α_t h_t(X)

  • Hard decision / predicted label:  H(X) = sign(f(X))
  • Soft decision (based on the analogy with logistic regression):  P(Y = 1 | X) = 1 / (1 + exp(-2 f(X)))
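A small sketch of both decisions from the combined score f(x) (the factor 2 in the soft decision follows the standard logistic-regression analogy for the exp-loss minimizer; treat it as an assumption here):

    import numpy as np

    def hard_decision(f_x):
        """Predicted label H(x) = sign(f(x))."""
        return np.sign(f_x)

    def soft_decision(f_x):
        """Confidence via the logistic analogy: P(Y=1|x) = 1 / (1 + exp(-2 f(x)))."""
        return 1.0 / (1.0 + np.exp(-2.0 * f_x))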

SLIDE 33

Effect of Outliers

  • Good: boosting can identify outliers, since it focuses on examples that are hard to categorize
  • Bad: too many outliers can degrade classification performance dramatically and increase the time to convergence

SLIDE 34

Bagging [Breiman, 1996]

Related approach to combining classifiers:

  1. Run independent weak learners on bootstrap replicates (samples with replacement) of the training set
  2. Average/vote over the weak hypotheses

Bagging vs. Boosting (a code sketch of bagging follows below):

    Bagging                                   Boosting
    Resamples data points                     Reweights data points (modifies their distribution)
    Weight of each classifier is the same     Weight depends on the classifier’s accuracy
    Only variance reduction                   Both bias and variance reduced – the learning rule becomes more complex with iterations
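A minimal bagging sketch for contrast (scikit-learn’s DecisionTreeClassifier is assumed as the base learner, as in the AdaBoost sketch above):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def bagging(X, y, n_learners=25, seed=0):
        """Train independent learners on bootstrap replicates of the training set."""
        rng = np.random.default_rng(seed)
        m = len(y)
        learners = []
        for _ in range(n_learners):
            idx = rng.integers(0, m, size=m)             # sample with replacement
            learners.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
        return learners

    def bagged_predict(learners, X):
        # Unlike boosting, every classifier's vote gets the same weight.
        return np.sign(sum(t.predict(X) for t in learners))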

SLIDE 35

Boosting Summary

  • Combine weak classifiers to obtain a very strong classifier
    – Weak classifier: slightly better than random on training data
    – Resulting very strong classifier: can eventually provide zero training error
  • AdaBoost algorithm
  • Boosting vs. Logistic Regression
    – Similar loss functions
    – Single optimization (LR) vs. incrementally improving classification (B)
  • Most popular application of Boosting:
    – Boosted decision stumps!
    – Very simple to implement, very effective classifier