Reproducing Kernel Hilbert Spaces
Lorenzo Rosasco, 9.520 Class 03


  1. Reproducing Kernel Hilbert Spaces. Lorenzo Rosasco, 9.520 Class 03.

  2. About this class. Goal: to introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert Spaces (RKHS). We will discuss several perspectives on RKHS. In particular, in this class we investigate the fundamental definition of an RKHS as a Hilbert space with bounded, continuous evaluation functionals, and its intimate connection with symmetric positive definite kernels.

  3. Plan. Part I: RKHS are Hilbert spaces with bounded, continuous evaluation functionals. Part II: Reproducing Kernels. Part III: Mercer Theorem. Part IV: Feature Maps. Part V: Representer Theorem.

  4. Regularization. The basic idea of regularization (originally introduced independently of the learning problem) is to restore well-posedness of ERM by constraining the hypothesis space H. A possible way to do this is to consider regularized empirical risk minimization, that is, we look for solutions minimizing the two-term functional ERR(f) + λ R(f), where ERR(f) is the empirical error and R(f) the regularizer; the regularization parameter λ trades off the two terms.

  5. Tikhonov Regularization. Tikhonov regularization amounts to minimizing (1/n) Σ_{i=1}^n V(f(x_i), y_i) + λ R(f), with λ > 0. (1) Here V(f(x), y) is the loss function, that is, the price we pay when we predict f(x) in place of y; R(f) is a regularizer, often R(f) = ‖f‖_H, the norm in the function space H. The regularizer should encode some notion of smoothness of f.
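
The following is a minimal sketch (plain NumPy; the linear model, the toy data, and names such as tikhonov_objective are illustrative choices, not taken from the slides) of how the functional in (1) can be evaluated, using the square loss and R(f) = ‖w‖² for a linear f(x) = ⟨w, x⟩:

    import numpy as np

    def tikhonov_objective(w, X, y, lam, loss):
        # Regularized empirical risk for a linear model f(x) = <w, x>:
        # (1/n) * sum_i V(f(x_i), y_i) + lam * ||w||^2   (here R(f) = ||w||^2).
        preds = X @ w
        return np.mean(loss(preds, y)) + lam * np.dot(w, w)

    square_loss = lambda p, y: (p - y) ** 2   # V(f(x), y) = (f(x) - y)^2

    rng = np.random.default_rng(0)            # hypothetical toy data in R^3
    X, y = rng.normal(size=(20, 3)), rng.normal(size=20)
    w = rng.normal(size=3)                    # a candidate solution (not the minimizer)
    print(tikhonov_objective(w, X, y, lam=0.1, loss=square_loss))

A learning algorithm would minimize this quantity over w; the snippet only evaluates it at a random candidate.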

  6. The "Ingredients" of Tikhonov Regularization. The scheme we just described is very general, and by choosing different loss functions V(f(x), y) we can recover different algorithms. The main point we want to discuss is how to choose a norm encoding some notion of smoothness/complexity of the solution. Reproducing Kernel Hilbert Spaces allow us to do this in a very powerful way.
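
As a sketch of this point (standard associations, not stated on this slide): plugging the square, hinge, or logistic loss into the same Tikhonov scheme yields regularized least squares, the soft-margin SVM, and regularized logistic regression, respectively. The toy snippet below only evaluates the empirical-error term for each choice; the predictions and labels are hypothetical.

    import numpy as np

    # Three common choices of V (square -> regularized least squares,
    # hinge -> soft-margin SVM, logistic -> regularized logistic regression).
    square_loss   = lambda p, y: (p - y) ** 2
    hinge_loss    = lambda p, y: np.maximum(0.0, 1.0 - y * p)
    logistic_loss = lambda p, y: np.log1p(np.exp(-y * p))

    p = np.array([0.7, -1.2, 0.1])   # predictions f(x_i) (hypothetical)
    y = np.array([1.0, -1.0, -1.0])  # labels y_i in {-1, +1}
    for loss in (square_loss, hinge_loss, logistic_loss):
        print(np.mean(loss(p, y)))   # empirical-error term of the Tikhonov functional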

  7. Different Views on RKHS.

  8. Part I: Evaluation Functionals.

  9. Some Functional Analysis. A function space F is a space whose elements are functions f, for example f : R^d → R. A norm is a nonnegative function ‖·‖ such that for all f, g ∈ F and α ∈ R: (1) ‖f‖ ≥ 0, and ‖f‖ = 0 iff f = 0; (2) ‖f + g‖ ≤ ‖f‖ + ‖g‖; (3) ‖α f‖ = |α| ‖f‖. A norm can be defined via an inner product, ‖f‖ = √⟨f, f⟩. A Hilbert space is a complete inner product space.
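
A quick numerical sanity check of the three norm properties, using finite-dimensional vectors as stand-ins for functions and the norm induced by the Euclidean inner product (a sketch assuming NumPy; the random vectors are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    f, g = rng.normal(size=5), rng.normal(size=5)
    alpha = -2.5

    norm = lambda v: np.sqrt(np.dot(v, v))   # norm induced by the inner product <.,.>

    assert norm(f) >= 0 and np.isclose(norm(np.zeros(5)), 0.0)   # (1) nonnegativity
    assert norm(f + g) <= norm(f) + norm(g) + 1e-12              # (2) triangle inequality
    assert np.isclose(norm(alpha * f), abs(alpha) * norm(f))     # (3) homogeneity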

  13. Examples. Continuous functions C[a, b]: a norm can be established by defining ‖f‖ = max_{a ≤ x ≤ b} |f(x)| (not a Hilbert space!). Square integrable functions L²[a, b]: a Hilbert space where the norm is induced by the dot product ⟨f, g⟩ = ∫_a^b f(x) g(x) dx.
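
A small sketch (assuming NumPy and SciPy; the functions f(x) = sin(2πx) and g(x) = x on [0, 1] are arbitrary examples) that approximates the sup norm on a grid and computes the L² inner product and norm by numerical integration:

    import numpy as np
    from scipy.integrate import quad

    a, b = 0.0, 1.0
    f = lambda x: np.sin(2 * np.pi * x)
    g = lambda x: x

    # Sup norm on C[a, b], approximated on a fine grid.
    xs = np.linspace(a, b, 10_001)
    sup_norm_f = np.max(np.abs(f(xs)))                        # ~ 1.0

    # L^2[a, b] inner product <f, g> = int_a^b f(x) g(x) dx, and the induced norm.
    inner_fg, _ = quad(lambda x: f(x) * g(x), a, b)           # = -1/(2*pi) here
    l2_norm_f = np.sqrt(quad(lambda x: f(x) ** 2, a, b)[0])   # = 1/sqrt(2) here

    print(sup_norm_f, inner_fg, l2_norm_f)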

  15. Hypothesis Space: Desiderata. Hilbert space; point-wise defined functions.

  17. RKHS. An evaluation functional over the Hilbert space of functions H is a linear functional F_t : H → R that evaluates each function in the space at the point t, i.e. F_t[f] = f(t). Definition: a Hilbert space H is a reproducing kernel Hilbert space (RKHS) if the evaluation functionals are bounded and continuous, i.e. if there exists an M such that |F_t[f]| = |f(t)| ≤ M ‖f‖_H for all f ∈ H.
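
The bound can be made concrete by anticipating the kernel view of Part II. In the sketch below (NumPy; the Gaussian kernel, centers, and coefficients are illustrative choices), f = Σᵢ cᵢ K(xᵢ, ·) lies in the RKHS of the Gaussian kernel, and Cauchy-Schwarz gives |f(t)| = |⟨f, K(t, ·)⟩_H| ≤ √K(t, t) ‖f‖_H, i.e. evaluation at t is bounded with M = √K(t, t) = 1:

    import numpy as np

    # Gaussian kernel (a sketch anticipating Part II, not this slide's notation).
    K = lambda s, t, sigma=1.0: np.exp(-((s - t) ** 2) / (2 * sigma ** 2))

    rng = np.random.default_rng(2)
    x = rng.normal(size=8)             # centers x_i
    c = rng.normal(size=8)             # coefficients c_i
    G = K(x[:, None], x[None, :])      # Gram matrix K(x_i, x_j)
    norm_f = np.sqrt(c @ G @ c)        # ||f||_H^2 = sum_ij c_i c_j K(x_i, x_j)

    for t in np.linspace(-3, 3, 7):
        f_t = np.sum(c * K(x, t))      # f(t) = sum_i c_i K(x_i, t)
        assert abs(f_t) <= np.sqrt(K(t, t)) * norm_f + 1e-12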

  19. Evaluation Functionals. Evaluation functionals are not always bounded. Consider L²[a, b]: each element of the space is an equivalence class of functions with the same integral ∫ |f(x)|² dx, and an integral remains the same if we change the function on a countable set of points.
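
A concrete illustration of the failure (a sketch; the spike functions are a standard textbook-style example, not from the slides): rectangular spikes of height √(n/2) on [−1/n, 1/n] all have unit L² norm, while their value at 0 grows without bound, so no constant M can satisfy |f(0)| ≤ M ‖f‖_{L²}.

    import numpy as np

    # Spike functions on [-1, 1]: f_n = sqrt(n/2) on [-1/n, 1/n], 0 elsewhere.
    # Each has unit L^2 norm, yet f_n(0) = sqrt(n/2) grows without bound,
    # so evaluation at 0 is not a bounded functional on L^2.
    for n in [1, 10, 100, 10_000]:
        height, width = np.sqrt(n / 2), 2.0 / n
        l2_norm = np.sqrt(height ** 2 * width)   # exactly 1 for every n
        value_at_zero = height
        print(n, l2_norm, value_at_zero)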

  20. Norms in RKHS and Smoothness. Choosing different kernels, one can show that the norm in the corresponding RKHS encodes different notions of smoothness. Band-limited functions: consider the set of functions H := { f ∈ L²(R) | F(ω) is supported in [−a, a] }, a < ∞, with the usual L² inner product. The value of the function at every point is given by convolution with a sinc function sin(ax)/(ax). The norm is ‖f‖²_H = ∫ f(x)² dx = ∫_{−a}^{a} |F(ω)|² dω, where F(ω) = ∫_{−∞}^{+∞} f(t) e^{−iωt} dt is the Fourier transform of f.
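
The convolution claim can be checked numerically. In the sketch below (NumPy; the band limit a = 2, the test function, and the truncated integration grid are arbitrary choices), f is a combination of shifted sinc functions, hence band-limited to [−a, a], and ∫ f(x) sin(a(t − x))/(π(t − x)) dx reproduces f(t) up to truncation error:

    import numpy as np

    a = 2.0                                             # band limit (arbitrary choice)
    # sinc_a(u) = sin(a*u) / (pi*u); np.sinc(z) = sin(pi*z)/(pi*z) handles u = 0.
    sinc_a = lambda u: (a / np.pi) * np.sinc(a * u / np.pi)

    f = lambda x: sinc_a(x) + 0.5 * sinc_a(x - 1.0)     # band-limited test function

    xs, dx = np.linspace(-400.0, 400.0, 800_001, retstep=True)
    for t in [0.0, 0.3, 1.7]:
        conv = np.sum(f(xs) * sinc_a(t - xs)) * dx      # int f(x) sinc_a(t - x) dx
        print(t, conv, f(t))                            # the two values agree closely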

  22. Norms in RKHS and Smoothness. Sobolev space: consider f : [0, 1] → R with f(0) = f(1) = 0. The norm is ‖f‖²_H = ∫ (f′(x))² dx = ∫ ω² |F(ω)|² dω. Gaussian space: the norm can be written as ‖f‖²_H = (1/(2π)^d) ∫ |F(ω)|² exp(σ²ω²/2) dω.
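
A small sketch of how this norm penalizes wiggliness (NumPy; the test functions f_k(x) = sin(kπx), which satisfy f(0) = f(1) = 0, are arbitrary choices): the Sobolev-type quantity ∫₀¹ (f′(x))² dx equals (kπ)²/2, so higher-frequency, less smooth functions have larger norm.

    import numpy as np

    # ||f||_H^2 = int_0^1 (f'(x))^2 dx for f_k(x) = sin(k*pi*x).
    # Analytically this is (k*pi)^2 / 2, growing with the frequency k.
    xs, dx = np.linspace(0.0, 1.0, 100_001, retstep=True)
    for k in [1, 2, 5, 10]:
        df = k * np.pi * np.cos(k * np.pi * xs)     # f'(x)
        sobolev_sq = np.sum(df ** 2) * dx           # numerical integral
        print(k, sobolev_sq, (k * np.pi) ** 2 / 2)  # numerical vs analytic value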

  25. Linear RKHS. Our function space is the space of 1-dimensional lines f(x) = wx, where the RKHS norm is simply ‖f‖²_H = ⟨f, f⟩_H = w², so that our measure of complexity is the slope of the line. We want to separate two classes using lines and see how the magnitude of the slope corresponds to a measure of complexity. We will look at three examples and see that each example requires more "complicated" functions, i.e. functions with greater slopes, to separate the positive examples from the negative examples.

  26. Linear case (cont.). Here are three datasets; a linear function should be used to separate the classes. Notice that as the class distinction becomes finer, a larger slope is required to separate the classes. [Figure: three panels plotting f(x) versus x, each on the range [−2, 2], with separating lines of increasing slope.]
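
A toy numerical version of this picture (a sketch; the symmetric placement of the classes at ±m and the margin-1 condition are simplifying assumptions, not the slides' datasets): if the positive class sits at x = +m and the negative class at x = −m, requiring y·f(x) ≥ 1 with f(x) = wx forces w ≥ 1/m, so the minimal-norm separating line has ‖f‖²_H = 1/m², which grows as the classes get closer.

    # Smallest slope that separates classes at x = +m (label +1) and x = -m
    # (label -1) with margin 1: w = 1/m, hence ||f||_H^2 = w^2 = 1/m^2.
    for m in [1.0, 0.5, 0.1]:
        w = 1.0 / m
        assert (+1) * (w * +m) >= 1 and (-1) * (w * -m) >= 1
        print(m, w, w ** 2)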

  27. Part II: Kernels.

  28. Different Views on RKHS.
