
Introduction to Machine Learning 13. Learning Theory (Geoff Gordon and Alex Smola, Carnegie Mellon University)


  1. Introduction to Machine Learning 13. Learning Theory. Geoff Gordon and Alex Smola, Carnegie Mellon University. http://alex.smola.org/teaching/cmu2013-10-701x (10-701)

  2. The Problem
  • Training
    • Data drawn iid from $p(x, y)$: $\{(x_1, y_1), \ldots, (x_m, y_m)\}$
    • Loss function $l(x, y, f(x))$
    • Function class $\mathcal{F} = \{f : \Omega[f] \leq c\}$
    • Empirical risk minimization problem: $\min_{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^m l(x_i, y_i, f(x_i))$
  • Testing: $\mathbb{E}_{(x,y) \sim p(x,y)}[l(x, y, f(x))]$
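A minimal numpy sketch of this setup may help fix the notation: data drawn iid, a loss, a (here small and hand-picked) function class, the empirical risk minimizer, and a fresh-sample estimate of the expected risk. The distribution, loss, and function names below are illustrative assumptions, not part of the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200
x = rng.uniform(-1, 1, size=m)
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=m)     # assumed p(x, y)

def sq_loss(y_true, y_pred):
    return (y_true - y_pred) ** 2                     # l(x, y, f(x))

# A small function class F: polynomials of degree 0..3 fit on the training sample.
F = [np.poly1d(np.polyfit(x, y, deg)) for deg in range(4)]

def empirical_risk(f):
    return np.mean(sq_loss(y, f(x)))                  # (1/m) sum_i l(x_i, y_i, f(x_i))

f_star = min(F, key=empirical_risk)                   # empirical risk minimizer f*

# "Testing": estimate E_{(x,y)~p}[l(x, y, f(x))] on a fresh sample.
x_test = rng.uniform(-1, 1, size=10_000)
y_test = np.sin(np.pi * x_test) + 0.1 * rng.normal(size=10_000)
print(empirical_risk(f_star), np.mean(sq_loss(y_test, f_star(x_test))))
```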

  3. data

  4. classifier (polynomial regression)

  5. linear classifier (underfitting)

  6. quadratic classifier

  7. Typical behavior [figure: error vs. model complexity]

  8. Typical behavior [figure: training error vs. model complexity]

  9. Typical behavior [figure: training and test error vs. model complexity]

  10. Typical behavior [figure: training and test error vs. model complexity, annotated "How do we find this?"]

  11. Typical behavior [figure: same as slide 10]
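The "typical behavior" curves can be reproduced numerically in a few lines; the sketch below uses polynomial degree as a stand-in for model complexity and a made-up data distribution, and shows training error falling monotonically while test error turns back up.

```python
import numpy as np

# Training error keeps falling with model complexity (polynomial degree),
# while test error first falls and then rises again.  Data are made up.
rng = np.random.default_rng(1)

def sample(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(np.pi * x) + 0.3 * rng.normal(size=n)

x_tr, y_tr = sample(20)       # small training set, so high degrees overfit
x_te, y_te = sample(5000)     # large test set approximates the expected risk

for degree in range(10):
    f = np.poly1d(np.polyfit(x_tr, y_tr, degree))
    train_err = np.mean((y_tr - f(x_tr)) ** 2)
    test_err = np.mean((y_te - f(x_te)) ** 2)
    print(f"degree {degree}  train {train_err:.3f}  test {test_err:.3f}")
```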

  12. Broken reasoning
  • Hoeffding bound for a bounded random variable: $\Pr(|\hat{\mu}_m - \mu| > \epsilon) \leq 2 \exp\left(\frac{-2 m \epsilon^2}{c^2}\right)$
  • Function that minimizes the empirical risk: $f^*$
  • Risk bounded by $L$
  • Apply the bound to get, with high probability, $\epsilon \leq L \sqrt{(\log 2/\delta) / 2m}$
  • So why do empirical and expected risk still diverge in reality?

  13. Broken reasoning (repeat of slide 12)
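A small simulation of why the argument on slides 12 and 13 breaks, under assumed specifics (0/1 losses, a "class" of functions that all have true risk 0.5): for a single fixed function the Hoeffding deviation $\epsilon$ holds with the promised probability, but for the data-dependent empirical risk minimizer it is violated most of the time.

```python
import numpy as np

# The Hoeffding bound is valid for a *fixed* f, but f* is chosen on the same sample.
rng = np.random.default_rng(2)
m, n_funcs, delta, n_trials = 100, 1000, 0.05, 200
eps = np.sqrt(np.log(2 / delta) / (2 * m))      # Hoeffding deviation with L = 1

viol_fixed, viol_erm = 0, 0
for _ in range(n_trials):
    # losses[j, i] = l(x_i, y_i, f_j(x_i)), Bernoulli(0.5) for every function
    losses = rng.integers(0, 2, size=(n_funcs, m))
    emp = losses.mean(axis=1)                   # empirical risks; true risk is 0.5
    viol_fixed += abs(emp[0] - 0.5) > eps       # one fixed function: rare
    viol_erm += abs(emp.min() - 0.5) > eps      # empirical risk minimizer: frequent

print(f"fixed f exceeds eps in {viol_fixed / n_trials:.2f} of trials (should be < {delta})")
print(f"ERM f* exceeds eps in {viol_erm / n_trials:.2f} of trials")
```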

  14. Multiple testing
  • Tossing an unbiased coin 10 times [bar chart over 1–20, y-axis 0 to 7; annotation: best 'strategy']

  15. Multiple testing
  • Tossing an unbiased coin 100 times [bar chart over 1–20, y-axis 0 to 70; annotation: best 'strategy']
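The same multiple-testing effect in a few lines of numpy (the 20 "strategies" and the 10 and 100 tosses are taken from the slides; everything else is an assumption of the sketch): every strategy is just an unbiased coin, yet the best of them looks much better than chance.

```python
import numpy as np

rng = np.random.default_rng(3)
for tosses in (10, 100):
    heads = rng.binomial(n=tosses, p=0.5, size=20)   # 20 independent 'strategies'
    print(f"{tosses} tosses: mean heads {heads.mean():.1f}, "
          f"best 'strategy' gets {heads.max()} heads")
```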

  16. Multiple testing
  • We invoke the bound each time we test
  • Picking the best out of N gives us N opportunities to get it wrong!
  • Union bound: $\Pr\left\{\sup_{f \in \mathcal{F}} |R_{\mathrm{emp}}[f] - R[f]| > \epsilon\right\} \leq \sum_{f' \in \mathcal{F}} \Pr\{|R_{\mathrm{emp}}[f'] - R[f']| > \epsilon\}$
  • Testing over all functions in the function class
  • Split the error probability up among all functions
  • Take the supremum over all terms

  17. Multiple testing
  • Our first generalization bound: $\epsilon \leq L \sqrt{\frac{\log |\mathcal{F}| + \log 2/\delta}{2m}}$
  • Putting it all together: $R[f^*] \leq \inf_{f \in \mathcal{F}} R_{\mathrm{emp}}[f] + L \sqrt{\frac{\log |\mathcal{F}| + \log 2/\delta}{2m}}$
  • What if the function class is not discrete?
  • What if we have a binary loss?

  18. Multiple testing (repeat of slide 17)
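Plugging numbers into the finite-class bound on slides 17 and 18 makes the $\log|\mathcal{F}|$ and $1/\sqrt{m}$ dependence concrete; the class sizes, $\delta$, and $L$ below are placeholders chosen for illustration.

```python
import numpy as np

# eps <= L * sqrt((log|F| + log(2/delta)) / (2m))
def finite_class_eps(m, class_size, delta=0.05, L=1.0):
    return L * np.sqrt((np.log(class_size) + np.log(2 / delta)) / (2 * m))

for m in (100, 1_000, 10_000):
    print(m, finite_class_eps(m, class_size=10**6))
```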

  19. Covering Numbers • What if we have an uncountable function class? • Approximate by finite cover

  20. Covering Numbers (repeat of slide 19)

  21. Covering Numbers • What if we have an uncountable function class? • Approximate by finite cover • Now bound depends on discretization, too

  22. Covering Numbers
  • Approximation error $\epsilon$
  • Covering number $N(\mathcal{F}, \epsilon)$ (we actually need a metric)
  • $R[f^*] \leq \inf_{f \in \mathcal{F}} R_{\mathrm{emp}}[f] + L \sqrt{\frac{\log N(\mathcal{F}, \epsilon) + \log 2/\delta}{2m}} + L' \epsilon$
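A sketch of how the two $\epsilon$-dependent terms trade off, under an assumed toy class that is not from the slides: a single bounded parameter $\theta \in [0, B]$ with the loss 1-Lipschitz in $\theta$, so a grid of spacing $2\epsilon$ is an $\epsilon$-cover with roughly $\lceil B / 2\epsilon \rceil$ elements. The constants $B$, $L$, $L'$ are invented for illustration.

```python
import numpy as np

def covering_bound(m, eps, B=10.0, delta=0.05, L=1.0, L_prime=1.0):
    N = np.ceil(B / (2 * eps))                       # covering number of the grid
    stat_term = L * np.sqrt((np.log(N) + np.log(2 / delta)) / (2 * m))
    return stat_term + L_prime * eps                 # statistical + approximation error

m = 1_000
for eps in (0.2, 0.05, 0.01):
    print(f"eps={eps:.2f}  bound={covering_bound(m, eps):.3f}")
```

Making the cover finer shrinks the approximation term but inflates $\log N(\mathcal{F}, \epsilon)$, so some intermediate $\epsilon$ gives the best bound.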

  23. VC Dimension • Binary classification problem • Given locations, enumerate all possible ways these points can be separated • Example - linear separation

  24. VC Dimension
  • Binary classification problem
  • Given locations, enumerate all possible ways these points can be separated
  • Exponential growth up to the VC dimension, then polynomial
  • $R[f^*] \leq \inf_{f \in \mathcal{F}} R_{\mathrm{emp}}[f] + \sqrt{\frac{h(\log(2m/h) + 1) + \log 4/\delta}{m}}$
  • Examples
    • $d$-dimensional linear functions have $h = d$
    • $\sin(x/w)$ has infinite $h$

  25. VC Dimension (repeat of slide 24, with annotation: polynomial growth)
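The "exponential up to the VC dimension, then polynomial" growth can be checked by brute force on a toy class. The sketch below is my own illustration, not one of the slide's examples: 1-D threshold classifiers $\pm\,\mathrm{sign}(x - \theta)$ have VC dimension 2, so two points can still be labeled in all $2^2$ ways, but beyond that the number of achievable labelings grows only linearly in $m$.

```python
import numpy as np

def achievable_labelings(points):
    """Count the distinct labelings of `points` realizable by +/- sign(x - theta)."""
    pts = np.sort(points)
    # One threshold below all points, one between each pair, one above all points.
    thresholds = np.concatenate(([pts[0] - 1], (pts[:-1] + pts[1:]) / 2, [pts[-1] + 1]))
    patterns = set()
    for theta in thresholds:
        for orientation in (+1, -1):
            patterns.add(tuple(int(v) for v in orientation * np.sign(pts - theta)))
    return len(patterns)

rng = np.random.default_rng(4)
for m in (2, 3, 5, 10, 15):
    pts = rng.uniform(0, 1, m)
    print(f"m={m:2d}  achievable labelings={achievable_labelings(pts)}  2^m={2**m}")
```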

  26. Rademacher Averages
  • Nontrivial bound (state of the art)
  • Reasonably easy to compute
  • Recall McDiarmid's inequality: if $|f(x_1, \ldots, x_i, \ldots, x_m) - f(x_1, \ldots, x_i', \ldots, x_m)| \leq c_i$ and $C^2 = \sum_{i=1}^m c_i^2$, then
    $\Pr\left(|f(x_1, \ldots, x_m) - \mathbb{E}_{X_1, \ldots, X_m}[f(x_1, \ldots, x_m)]| > \epsilon\right) \leq 2 \exp\left(-2 \epsilon^2 C^{-2}\right)$
  • Bound the worst-case deviation: $\Pr\left\{\sup_{f \in \mathcal{F}} \left|\frac{1}{m} \sum_{i=1}^m l(x_i, y_i, f(x_i)) - \mathbb{E}_{(x,y)}[l(x, y, f(x))]\right| > \epsilon\right\}$
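McDiarmid's inequality is easy to sanity-check numerically on the simplest bounded-difference function, the empirical mean of iid variables in $[0, 1]$; this is my own toy example, in which $c_i = 1/m$ and hence $C^2 = 1/m$.

```python
import numpy as np

rng = np.random.default_rng(5)
m, eps, n_trials = 200, 0.05, 100_000
x = rng.uniform(0, 1, size=(n_trials, m))
dev = np.abs(x.mean(axis=1) - 0.5)                 # |f(x) - E f(x)| for f = empirical mean
empirical = np.mean(dev > eps)
mcdiarmid = 2 * np.exp(-2 * eps**2 * m)            # 2 exp(-2 eps^2 / C^2) with C^2 = 1/m
print(f"empirical P(|dev| > eps) = {empirical:.4f}  <=  bound {mcdiarmid:.4f}")
```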

  27. Rademacher Averages
  • Worst-case deviation: $\Xi(X, Y) := \sup_{f \in \mathcal{F}} \left|\frac{1}{m} \sum_{i=1}^m l(x_i, y_i, f(x_i)) - \mathbb{E}_{(x,y)}[l(x, y, f(x))]\right|$
  • If we change a single observation pair: $\left|\Xi(X, Y) - \Xi(X^{\setminus i} \cup \{x_i'\}, Y^{\setminus i} \cup \{y_i'\})\right| \leq L/m$
  • Apply McDiarmid's bound to get $\Pr\left\{|\Xi(X, Y) - \mathbb{E}_{X,Y}[\Xi(X, Y)]| > \epsilon\right\} \leq 2 \exp\left(-2 m \epsilon^2 L^{-2}\right)$
  • The worst-case deviation is not far from the typical case

  28. Rademacher Averages (repeat of slide 27)

  29. Rademacher Averages
  $\mathbb{E}_{X,Y}\left[\sup_{f \in \mathcal{F}} \left|\frac{1}{m} \sum_{i=1}^m l(x_i, y_i, f(x_i)) - \mathbb{E}_{(x,y)}[l(x, y, f(x))]\right|\right]$
  $= \mathbb{E}_{X,Y}\left[\sup_{f \in \mathcal{F}} \left|\frac{1}{m} \sum_{i=1}^m l(x_i, y_i, f(x_i)) - \mathbb{E}_{X',Y'}\left[\frac{1}{m} \sum_{i=1}^m l(x_i', y_i', f(x_i'))\right]\right|\right]$
  $\leq \mathbb{E}_{X,Y,X',Y'}\left[\sup_{f \in \mathcal{F}} \left|\frac{1}{m} \sum_{i=1}^m \left[l(x_i, y_i, f(x_i)) - l(x_i', y_i', f(x_i'))\right]\right|\right]$
  $= \mathbb{E}_{X,Y,X',Y'} \mathbb{E}_\sigma \left[\sup_{f \in \mathcal{F}} \left|\frac{1}{m} \sum_{i=1}^m \sigma_i \left[l(x_i, y_i, f(x_i)) - l(x_i', y_i', f(x_i'))\right]\right|\right]$
  $\leq \frac{2}{m} \mathbb{E}_{X,Y} \mathbb{E}_\sigma \left[\sup_{f \in \mathcal{F}} \left|\sum_{i=1}^m \sigma_i\, l(x_i, y_i, f(x_i))\right|\right]$

  30. Rademacher Averages
  • Putting it all together: $R[f] \leq R_{\mathrm{emp}}[f] + 2 \mathcal{R}[\mathcal{F}, m] + L \sqrt{\frac{\log 2/\delta}{2m}}$
    (annotations on $\mathcal{R}[\mathcal{F}, m]$: averaging, behavior for random labels)
  • The Rademacher average can be bounded easily for linear function classes by solving a convex optimization problem.
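One simple instance of the "easy for linear function classes" remark: for $\{x \mapsto \langle w, x \rangle : \|w\|_2 \leq B\}$ the supremum inside the empirical Rademacher average has the closed form $B \,\|\sum_i \sigma_i x_i\|_2 / m$, so the average can be estimated by just sampling sign vectors. The bound $B$, the data, and the Monte Carlo setup below are assumptions of this sketch, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(6)
m, d, B, n_sigma = 500, 20, 1.0, 2000
X = rng.normal(size=(m, d))                           # fixed sample of m points in R^d

sigma = rng.choice([-1.0, 1.0], size=(n_sigma, m))    # Rademacher sign vectors
sup_vals = B * np.linalg.norm(sigma @ X, axis=1) / m  # closed-form supremum per draw
print("estimated empirical Rademacher average:", sup_vals.mean())
```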

  31. Some Alternatives
  • Validation set
    • Train on the training set (e.g. 90% of the data)
    • Check performance on the remaining 10%
    • Use only if the dataset is huge and few tests are run
  • Cross-validation (see the sketch after this slide)
    • Average over validation sets (e.g. 10-fold)
    • Nested cross-validation for model selection (e.g. 10-fold within each fold to find parameters)
  • Bayesian statistics
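A plain-numpy sketch of the cross-validation alternative, using 10 folds to select a polynomial degree; the data distribution and candidate degrees are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(-1, 1, 200)
y = np.sin(np.pi * x) + 0.3 * rng.normal(size=200)

def cv_error(degree, k=10):
    """Average validation error of a degree-`degree` polynomial over k folds."""
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        val = folds[i]
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        f = np.poly1d(np.polyfit(x[trn], y[trn], degree))
        errs.append(np.mean((y[val] - f(x[val])) ** 2))
    return np.mean(errs)

scores = {d: cv_error(d) for d in range(1, 10)}
print("cross-validation errors:", {d: round(s, 3) for d, s in scores.items()})
print("selected degree:", min(scores, key=scores.get))
```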
