Introduction to Machine Learning CMU-10701: 11. Learning Theory


SLIDE 1

Introduction to Machine Learning CMU-10701

  • 11. Learning Theory

Barnabás Póczos

SLIDE 2

Learning Theory

We have explored many ways of learning from data. But…

– How good is our classifier, really?
– How much data do we need to make it “good enough”?

SLIDE 3

Please ask questions and give us feedback!

SLIDE 4

Review of what we have learned so far

SLIDE 5

Notation

The notation used throughout (reconstructed here from how the later slides use it):

  • Data: $(X_1, Y_1), \dots, (X_n, Y_n)$, drawn i.i.d. from an unknown distribution over $\mathcal{X} \times \{0, 1\}$.
  • Function class: $\mathcal{F}$, the set of classifiers the learning algorithm may choose from.
  • True risk: $R(f) = P(f(X) \neq Y)$.
  • Empirical risk: $\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}\{f(X_i) \neq Y_i\}$.
  • Empirical risk minimizer: $\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f)$. This is what the learning algorithm produces.
  • Best in class: $f^*_{\mathcal{F}} = \arg\min_{f \in \mathcal{F}} R(f)$.
  • Bayes risk: $R^* = \inf_f R(f)$, the infimum over all measurable classifiers.

We will need these definitions, so please copy them!

SLIDE 6

Big Picture

The excess risk of the learned classifier splits into two parts:

$$R(\hat{f}_n) - R^* = \underbrace{R(\hat{f}_n) - R(f^*_{\mathcal{F}})}_{\text{estimation error}} + \underbrace{R(f^*_{\mathcal{F}}) - R^*}_{\text{approximation error}}$$

where $R^*$ is the Bayes risk.

Ultimate goal: drive $R(\hat{f}_n)$ to the Bayes risk $R^*$, i.e. make both the estimation error and the approximation error small.


SLIDE 9

Big Picture: Illustration of Risks

[Figure: illustration of the risks $R(\hat{f}_n)$, $R(f^*_{\mathcal{F}})$ and $R^*$, together with an upper bound on the estimation error. Goal of learning: bring $R(\hat{f}_n)$ down toward $R^*$.]

SLIDE 10

  • 11. Learning Theory

SLIDE 11

Outline

From Hoeffding’s inequality (plus a union bound), we have seen that for a finite class $\mathcal{F} = \{f_1, \dots, f_N\}$:

Theorem: $P\left(\max_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)| > \epsilon\right) \leq 2N e^{-2n\epsilon^2}$

These results are useless if N is big, or infinite (e.g. the class of all possible hyperplanes). Today we will see how to fix this with the shattering coefficient and VC dimension.

SLIDE 12

Outline

From Hoeffding’s inequality, we have seen that the risk of each fixed classifier concentrates around its empirical risk (the theorem above). After the fix, we can say something meaningful about $R(\hat{f}_n)$ too: the true risk of the classifier that the learning algorithm actually produces.

SLIDE 13

Hoeffding inequality

Theorem: Let $Z_1, \dots, Z_n$ be i.i.d. random variables with $a \leq Z_i \leq b$. Then for every $\epsilon > 0$,

$$P\left(\left|\frac{1}{n}\sum_{i=1}^{n} Z_i - \mathbb{E}[Z_1]\right| > \epsilon\right) \leq 2\exp\left(-\frac{2n\epsilon^2}{(b-a)^2}\right)$$

Observation: for a fixed classifier $f$, the empirical risk $\hat{R}_n(f)$ is an average of i.i.d. $\{0,1\}$-valued losses with mean $R(f)$, so Hoeffding applies with $a = 0$, $b = 1$:

$$P\left(|\hat{R}_n(f) - R(f)| > \epsilon\right) \leq 2e^{-2n\epsilon^2}$$
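A quick numerical sanity check of this bound for one fixed classifier (a sketch: the true risk p, the sample size and the number of trials are made-up values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3          # true risk R(f) of a fixed classifier (assumed value)
n = 500          # sample size
eps = 0.05
trials = 100_000

# Each trial: the empirical risk is the mean of n i.i.d. Bernoulli(p) losses,
# so we can draw the count of errors directly from a Binomial(n, p).
emp_risks = rng.binomial(n, p, size=trials) / n

deviation_freq = np.mean(np.abs(emp_risks - p) > eps)
hoeffding_bound = 2 * np.exp(-2 * n * eps**2)

print(f"empirical P(|R_hat - R| > {eps}): {deviation_freq:.4f}")
print(f"Hoeffding bound:                 {hoeffding_bound:.4f}")
```

The empirical frequency should come out well below the bound, which is loose but distribution-free.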

SLIDE 14

McDiarmid’s Bounded Difference Inequality

Theorem: Let $X_1, \dots, X_n$ be independent random variables, and let $g$ satisfy the bounded difference condition: for every $i$ and all $x_1, \dots, x_n, x_i'$,

$$|g(x_1, \dots, x_i, \dots, x_n) - g(x_1, \dots, x_i', \dots, x_n)| \leq c_i$$

Then for every $\epsilon > 0$,

$$P\left(|g(X_1, \dots, X_n) - \mathbb{E}[g(X_1, \dots, X_n)]| > \epsilon\right) \leq 2\exp\left(-\frac{2\epsilon^2}{\sum_{i=1}^{n} c_i^2}\right)$$

It follows that Hoeffding’s inequality is the special case $g(x_1, \dots, x_n) = \frac{1}{n}\sum_i x_i$.
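To spell out that special case (a short derivation, not on the original slide):

$$g(x) = \frac{1}{n}\sum_{i=1}^{n} x_i \;\Rightarrow\; |g(\dots, x_i, \dots) - g(\dots, x_i', \dots)| = \frac{|x_i - x_i'|}{n} \leq \frac{b-a}{n} = c_i,$$

$$\sum_{i=1}^{n} c_i^2 = \frac{(b-a)^2}{n} \;\Rightarrow\; 2\exp\left(-\frac{2\epsilon^2}{\sum_i c_i^2}\right) = 2\exp\left(-\frac{2n\epsilon^2}{(b-a)^2}\right).$$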

SLIDE 15

Bounded Difference Condition

Our main goal is to bound $\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)|$.

Lemma: Let g denote the following function of the sample $Z_i = (X_i, Y_i)$:

$$g(Z_1, \dots, Z_n) = \sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)|$$

Observation: replacing one sample point changes $\hat{R}_n(f)$ by at most $1/n$ for every $f$, hence changes the supremum by at most $1/n$.

Proof: see the sketch below $\Rightarrow$ McDiarmid can be applied to g!
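Filling in the one-line check that the slide leaves to the figure: for two samples differing only in the i-th point, and for every $f$,

$$|\hat{R}_n(f) - \hat{R}_n'(f)| = \frac{1}{n}\left|\mathbb{1}\{f(X_i) \neq Y_i\} - \mathbb{1}\{f(X_i') \neq Y_i'\}\right| \leq \frac{1}{n},$$

and since $|\sup_f a_f - \sup_f b_f| \leq \sup_f |a_f - b_f|$, the supremum g also moves by at most $1/n$. So McDiarmid applies with $c_i = 1/n$, giving $\sum_i c_i^2 = 1/n$ and $P(|g - \mathbb{E}[g]| > \epsilon) \leq 2e^{-2n\epsilon^2}$.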

SLIDE 16

Bounded Difference Condition

Corollary:

$$P\left(\left|\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)| - \mathbb{E}\left[\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)|\right]\right| > \epsilon\right) \leq 2e^{-2n\epsilon^2}$$

So the supremum concentrates around its expectation; what remains is to bound that expectation. The Vapnik-Chervonenkis inequality does that with the shatter coefficient (and VC dimension)!

SLIDE 17

Concentration and Expected Value

[Figure: the random quantity $\sup_f |\hat{R}_n(f) - R(f)|$ concentrates around its expected value, so it is enough to bound that expectation.]

SLIDE 18

Vapnik-Chervonenkis inequality

Our main goal is to bound $\mathbb{E}\left[\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)|\right]$. We already know: the supremum concentrates around this expectation (McDiarmid).

Vapnik-Chervonenkis inequality (one standard form; the constants vary across textbooks):

$$\mathbb{E}\left[\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)|\right] \leq 2\sqrt{\frac{2\log(2 S_{\mathcal{F}}(n))}{n}}$$

Vapnik-Chervonenkis theorem:

$$P\left(\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)| > \epsilon\right) \leq 8 S_{\mathcal{F}}(n) e^{-n\epsilon^2/32}$$

Corollary: both bounds are useful exactly when the shatter coefficient $S_{\mathcal{F}}(n)$ grows slowly (e.g. polynomially) in n.

SLIDE 19

Shattering

Definition: a class $\mathcal{F}$ shatters the points $x_1, \dots, x_n$ if for every one of the $2^n$ labelings of these points there exists an $f \in \mathcal{F}$ that classifies them exactly that way.

SLIDE 20

How many points can a linear boundary classify exactly in 1D?

2 pts: yes. There exists a placement (any two distinct points) such that all 4 labelings (++, +−, −+, −−) can be classified by a threshold.

3 pts: no. Whatever the placement, the alternating labeling (+, −, +) cannot be realized by a single linear boundary.

The answer is 2. (A brute-force check follows below.)
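This claim is small enough to verify by brute force (a sketch: 1D linear boundaries are taken to be thresholds with either orientation):

```python
from itertools import product

def threshold_classify(points, t, sign):
    # sign=+1 labels points to the right of t as 1; sign=-1 labels the left.
    return tuple(1 if sign * (x - t) > 0 else 0 for x in points)

def can_shatter_1d(points):
    # Candidate thresholds: outside the range, plus midpoints between neighbors.
    xs = sorted(points)
    cands = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
    reachable = {threshold_classify(points, t, s) for t in cands for s in (+1, -1)}
    return all(lab in reachable for lab in product((0, 1), repeat=len(points)))

print(can_shatter_1d([0.0, 1.0]))       # True: 2 points can be shattered
print(can_shatter_1d([0.0, 1.0, 2.0]))  # False: (1, 0, 1) is unreachable
```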

SLIDE 21

How many points can a linear boundary classify exactly in 2D?

3 pts: yes. There exists a placement (three non-collinear points) such that all 8 labelings can be classified by a line.

4 pts: no. For any placement of 4 points some labeling cannot be realized, e.g. the XOR labeling of four points in convex position.

The answer is 3. (A linear-programming check follows below.)
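The same brute-force idea in 2D, using a small linear program to decide separability exactly (a sketch; assumes scipy is available):

```python
from itertools import product
import numpy as np
from scipy.optimize import linprog

def separable(points, labels):
    # Feasibility LP: find (w1, w2, b) with y_i * (w . x_i + b) >= 1 for all i,
    # where y_i is +1/-1. Variables must be allowed to take negative values.
    X = np.asarray(points, dtype=float)
    y = np.array([1 if l else -1 for l in labels], dtype=float)
    A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])  # -y_i*(x_i,1).v <= -1
    res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=-np.ones(len(X)),
                  bounds=[(None, None)] * 3)
    return res.success

def can_shatter_2d(points):
    return all(separable(points, lab) for lab in product((0, 1), repeat=len(points)))

print(can_shatter_2d([(0, 0), (1, 0), (0, 1)]))          # True: 3 points
print(can_shatter_2d([(0, 0), (1, 0), (0, 1), (1, 1)]))  # False: XOR fails
```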

SLIDE 22

How many points can a linear boundary classify exactly in 3D?

The answer is 4: the four vertices of a tetrahedron can be shattered by planes.

How many points can a linear boundary classify exactly in d dimensions?

The answer is d+1, e.g. the vertices of a d-dimensional simplex. (That no d+2 points can be shattered follows from Radon’s theorem: any d+2 points in $\mathbb{R}^d$ split into two groups with intersecting convex hulls, and that labeling is not linearly separable.)

SLIDE 23

Growth function, Shatter coefficient

Definition (growth function, shatter coefficient): the maximum number of behaviors of $\mathcal{F}$ on n points,

$$S_{\mathcal{F}}(n) = \max_{x_1, \dots, x_n} \left|\left\{(f(x_1), \dots, f(x_n)) : f \in \mathcal{F}\right\}\right|$$

Example: the distinct behaviors on 3 points, listed as the rows of a binary matrix:

0 0 0
0 1 0
1 1 1
1 0 0
0 1 1

(= 5 behaviors in this example)

SLIDE 24

Growth function, Shatter coefficient

Definition (growth function, shatter coefficient): the maximum number of behaviors on n points.

Example: half-spaces in 2D. On three non-collinear points, every one of the $2^3 = 8$ labelings is realized by some half-space, so $S_{\mathcal{F}}(3) = 8$.
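For larger n the exact count is classical (Cover’s counting argument; added here for context, not on the original slide): for n points in general position in the plane, half-planes realize

$$S_{\mathcal{F}}(n) = 2\sum_{k=0}^{2}\binom{n-1}{k} = n^2 - n + 2$$

labelings, so $S_{\mathcal{F}}(3) = 8 = 2^3$ but already $S_{\mathcal{F}}(4) = 14 < 16 = 2^4$.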
slide-25
SLIDE 25

VC-dimension

Definition Growth function, Shatter coefficient

maximum number of behaviors on n points

Definition: VC-dimension # behaviors Definition: Shattering Note:

25

SLIDE 26

VC-dimension

Definition, restated: the VC-dimension is the largest n such that the number of behaviors ($\#$ behaviors) on some placement of n points equals $2^n$.

SLIDE 27

VC-dimension

[Figure: example point sets with +/− labelings illustrating which placements are shattered.]

SLIDE 28

Examples

SLIDE 29

VC dim of decision stumps (axis-aligned linear separators) in 2D

What’s the VC dim. of decision stumps in 2D?

There is a placement of 3 pts that can be shattered $\Rightarrow$ VC dim ≥ 3

[Figure: three points, with all 8 labelings realized by horizontal and vertical splits.]

SLIDE 30

VC dim of decision stumps (axis-aligned linear separators) in 2D

What’s the VC dim. of decision stumps in 2D?

If VC dim = 3, then for all placements of 4 pts there exists a labeling that can’t be realized. Case analysis over the placements:

  • 3 collinear
  • 1 in the convex hull of the other 3
  • quadrilateral (4 points in convex position)

In each case some +/− labeling defeats every single axis-aligned split $\Rightarrow$ VC dim of decision stumps in 2D = 3.

SLIDE 31

VC dim. of axis-parallel rectangles in 2D

What’s the VC dim. of axis-parallel rectangles in 2D?

There is a placement of 3 pts that can be shattered $\Rightarrow$ VC dim ≥ 3

SLIDE 32

VC dim. of axis-parallel rectangles in 2D

There is a placement of 4 pts that can be shattered $\Rightarrow$ VC dim ≥ 4 (e.g. a diamond: one leftmost, one rightmost, one topmost and one bottommost point).

SLIDE 33

VC dim. of axis-parallel rectangles in 2D

What’s the VC dim. of axis-parallel rectangles in 2D?

If VC dim = 4, then for all placements of 5 pts there exists a labeling that can’t be realized. Case analysis: 4 collinear; 2 in the convex hull; 1 in the convex hull; pentagon (5 points in convex position).

In every case, label the leftmost, rightmost, topmost and bottommost points + and the remaining point −: any axis-parallel rectangle containing the four + points must also contain the fifth $\Rightarrow$ VC dim = 4.

SLIDE 34

Sauer’s Lemma

The VC dimension can be used to upper bound the shattering coefficient.

We already know that $S_{\mathcal{F}}(n) \leq 2^n$ [exponential in n].

Sauer’s lemma: if the VC dimension of $\mathcal{F}$ is d, then

$$S_{\mathcal{F}}(n) \leq \sum_{i=0}^{d} \binom{n}{i}$$

Corollary: for $n \geq d$, $S_{\mathcal{F}}(n) \leq \left(\frac{en}{d}\right)^d$ [polynomial in n].
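A numeric instance to show the scale (the values d = 3, n = 1000 are chosen here for illustration): with d = 3,

$$S_{\mathcal{F}}(1000) \leq \binom{1000}{0} + \binom{1000}{1} + \binom{1000}{2} + \binom{1000}{3} = 166{,}667{,}501 \approx 1.7 \times 10^8,$$

while $2^{1000} \approx 10^{301}$: a finite VC dimension makes the shatter coefficient astronomically smaller than the worst case.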

SLIDE 35

Proof of Sauer’s Lemma

Write all different behaviors on a sample (x1, x2, …, xn) as the rows of a binary matrix, one row per distinct behavior (here n = 3, with 5 behaviors):

0 0 0
0 1 0
1 1 1
1 0 0
0 1 1

SLIDE 36

Proof of Sauer’s Lemma

We will prove that

(number of rows) ≤ (number of shattered subsets of columns) ≤ $\sum_{i=0}^{d} \binom{n}{i}$.

Shattered subsets of columns in the example: $\emptyset$, {1}, {2}, {3}, {1,2}, {1,3}, i.e. 6 subsets. ({2,3} is not shattered because the pattern (0,1) never occurs in columns 2 and 3, and {1,2,3} is not shattered because there are only 5 < 8 rows.)

Therefore, since the rows are exactly the behaviors, Sauer’s lemma follows from the two lemmas on the next slides.

SLIDE 37

Proof of Sauer’s Lemma

Lemma 1: the number of shattered subsets of columns is at most $\sum_{i=0}^{d} \binom{n}{i}$. In this example: 6 ≤ 1+3+3 = 7.

Lemma 2: (number of rows) ≤ (number of shattered subsets of columns), for any binary matrix with no repeated rows. In this example: 5 ≤ 6.
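A small script to double-check the counts on the example matrix (a sketch; the empty set counts as shattered by convention):

```python
from itertools import combinations

rows = [(0, 0, 0), (0, 1, 0), (1, 1, 1), (1, 0, 0), (0, 1, 1)]
n = len(rows[0])

def shattered(cols):
    # cols is shattered if every 0/1 pattern on those columns occurs in some row.
    patterns = {tuple(r[c] for c in cols) for r in rows}
    return len(patterns) == 2 ** len(cols)

shattered_subsets = [cols for k in range(n + 1)
                     for cols in combinations(range(n), k) if shattered(cols)]

print(len(rows), "rows <=", len(shattered_subsets), "shattered subsets")
# Prints "5 rows <= 6 shattered subsets", matching the slide's 5 <= 6 <= 7.
```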

SLIDE 38

Proof of Lemma 1

Lemma 1: the number of shattered subsets of columns is at most $\sum_{i=0}^{d} \binom{n}{i}$.

Proof: if a set of k columns is shattered, then the corresponding k sample points are shattered by $\mathcal{F}$, so k ≤ d by the definition of VC dimension. Hence every shattered subset has size at most d, and there are only $\sum_{i=0}^{d} \binom{n}{i}$ subsets of size at most d.

In this example (d = 2, n = 3): 6 ≤ 1+3+3 = 7.

SLIDE 39

Proof of Lemma 2

Lemma 2: (number of rows) ≤ (number of shattered subsets of columns), for any binary matrix with no repeated rows.

Proof: induction on the number of columns.

Base case: A has one column. There are three cases:

  • A = [0] $\Rightarrow$ 1 ≤ 1 (only $\emptyset$ is shattered)
  • A = [1] $\Rightarrow$ 1 ≤ 1
  • A = [0; 1] $\Rightarrow$ 2 ≤ 2 ($\emptyset$ and the column itself are shattered)

SLIDE 40

Proof of Lemma 2

Inductive case: A has at least two columns. Drop the first column: let A′ be the resulting matrix with duplicate rows merged, and let A″ contain those rows of A′ that arose from two rows of A (rows differing only in the first column). We have

(rows of A) = (rows of A′) + (rows of A″).

By induction (fewer columns), each term on the right is at most the number of shattered column subsets of A′ resp. A″.

SLIDE 41

Proof of Lemma 2

…because every subset of columns shattered in A′ is also shattered in A, and for every subset S shattered in A″ the subset S ∪ {first column} is shattered in A; the two collections are disjoint. Hence

(shattered subsets of A) ≥ (shattered subsets of A′) + (shattered subsets of A″) ≥ (rows of A′) + (rows of A″) = (rows of A). ∎

SLIDE 42

Vapnik-Chervonenkis inequality

Vapnik-Chervonenkis inequality [we don’t prove this]:

$$\mathbb{E}\left[\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)|\right] \leq 2\sqrt{\frac{2\log(2 S_{\mathcal{F}}(n))}{n}}$$

From Sauer’s lemma: $S_{\mathcal{F}}(n) \leq (en/d)^d$ for $n \geq d$. Since $\log S_{\mathcal{F}}(n) = O(d \log n)$, therefore the bound on the estimation error goes to 0 as $n \to \infty$ whenever the VC dimension d is finite.
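Spelling out the chain (with the same hedged constants as in the inequality above):

$$\mathbb{E}\left[\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)|\right] \leq 2\sqrt{\frac{2\log(2 S_{\mathcal{F}}(n))}{n}} \leq 2\sqrt{\frac{2\left(\log 2 + d\log(en/d)\right)}{n}} = O\left(\sqrt{\frac{d \log n}{n}}\right) \to 0.$$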

SLIDE 43

Linear (hyperplane) classifiers

We already know that hyperplanes in $\mathbb{R}^d$ shatter at most d+1 points, so their VC dimension is d+1. Using the standard step $R(\hat{f}_n) - R(f^*_{\mathcal{F}}) \leq 2\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)|$ (derived below), the estimation error of empirical risk minimization over hyperplanes is

$$R(\hat{f}_n) - R(f^*_{\mathcal{F}}) = O\left(\sqrt{\frac{(d+1)\log n}{n}}\right)$$
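The step from the uniform deviation to the estimation error, written out (implicit on the slides):

$$R(\hat{f}_n) - R(f^*) = \underbrace{[R(\hat{f}_n) - \hat{R}_n(\hat{f}_n)]}_{\leq\, \sup_f |\hat{R}_n - R|} + \underbrace{[\hat{R}_n(\hat{f}_n) - \hat{R}_n(f^*)]}_{\leq\, 0 \text{ (ERM)}} + \underbrace{[\hat{R}_n(f^*) - R(f^*)]}_{\leq\, \sup_f |\hat{R}_n - R|} \leq 2\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)|.$$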

SLIDE 44

Vapnik-Chervonenkis Theorem

Hoeffding + union bound for a finite function class: $P\left(\max_{f} |\hat{R}_n(f) - R(f)| > \epsilon\right) \leq 2Ne^{-2n\epsilon^2}$

We already know from McDiarmid: the supremum concentrates around its expectation.

Vapnik-Chervonenkis inequality and Vapnik-Chervonenkis theorem [we don’t prove them]:

$$\mathbb{E}\left[\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)|\right] \leq 2\sqrt{\frac{2\log(2 S_{\mathcal{F}}(n))}{n}}, \qquad P\left(\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)| > \epsilon\right) \leq 8 S_{\mathcal{F}}(n) e^{-n\epsilon^2/32}$$

Corollary: for classes with finite VC dimension both bounds vanish as $n \to \infty$.

SLIDE 45

PAC Bound for the Estimation Error

VC theorem: $P\left(\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)| > \epsilon\right) \leq 8 S_{\mathcal{F}}(n) e^{-n\epsilon^2/32} =: \delta$

Inversion: solving for $\epsilon$, with probability at least $1 - \delta$,

$$\sup_{f \in \mathcal{F}} |\hat{R}_n(f) - R(f)| \leq \sqrt{\frac{32}{n}\log\frac{8 S_{\mathcal{F}}(n)}{\delta}}$$

and hence the estimation error satisfies $R(\hat{f}_n) - R(f^*_{\mathcal{F}}) \leq 2\sqrt{\frac{32}{n}\log\frac{8 S_{\mathcal{F}}(n)}{\delta}}$.
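This inversion gives a crude sample-size calculator (a sketch: it uses the Sauer bound $S_{\mathcal{F}}(n) \leq (en/d)^d$ and simply scans over n, since n appears on both sides of the bound):

```python
import math

def vc_eps(n, d, delta):
    # PAC bound on sup_f |R_hat(f) - R(f)| from the VC theorem + Sauer's lemma.
    log_shatter = d * math.log(math.e * n / d)  # log S(n) <= d*log(en/d), n >= d
    return math.sqrt(32.0 / n * (math.log(8.0 / delta) + log_shatter))

def samples_needed(d, eps, delta):
    # Smallest n with vc_eps(n, d, delta) <= eps: double up, then back off.
    n = max(d, 1)
    while vc_eps(n, d, delta) > eps:
        n *= 2
    while n > d and vc_eps(n - 1, d, delta) <= eps:
        n -= 1
    return n

# Hyperplanes in R^2 have VC dimension 3; samples for eps = 0.1, delta = 0.05:
print(samples_needed(d=3, eps=0.1, delta=0.05))
```

The constants (32 and 8) come from the VC theorem as stated above; sharper constants exist, so treat the output as an order of magnitude.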

SLIDE 46

Structural Risk Minimization

Recall the decomposition into estimation error and approximation error relative to the Bayes risk.

Ultimate goal: $R(\hat{f}_n) \to R^*$. So far we studied when the estimation error → 0, but we also want the approximation error → 0, which requires letting the function class grow.

Structural risk minimization: choose among classes of increasing complexity by minimizing the empirical risk plus a complexity penalty. Many different variants exist; all of them penalize too-complex models to avoid overfitting. (One common form is sketched below.)
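One common form of the SRM objective, using nested classes $\mathcal{F}_1 \subset \mathcal{F}_2 \subset \dots$ and the VC-type penalty from the previous slides (a sketch; the exact penalty differs between variants):

$$\hat{f}_n^{\,\mathrm{SRM}} = \arg\min_{k} \; \min_{f \in \mathcal{F}_k} \left[\, \hat{R}_n(f) + 2\sqrt{\frac{32}{n}\log\frac{8 S_{\mathcal{F}_k}(n)}{\delta}} \,\right]$$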

SLIDE 47

What you need to know

The complexity of a classifier class depends on the number of points it can classify exactly:

– Finite case: number of hypotheses
– Infinite case: shattering coefficient, VC dimension

PAC bounds on the true error in terms of the empirical/training error and the complexity of the hypothesis space. Empirical and Structural Risk Minimization.

SLIDE 48

Thanks for your attention!