  1. Robustness and Generalization. Huan Xu, The University of Texas at Austin, Department of Electrical and Computer Engineering. COLT, June 29, 2010. Joint work with Shie Mannor.

  2–4. What is Robustness?
  • Robustness is the property that the performance on a training sample and on a similar testing sample is close.

  5–7. What is Robustness?
  • Robust decision making/optimization:
    • Consider a general decision problem: find $v$ such that $\ell(v, \xi)$ is small.
    • If $\ell(v, \xi')$ is also small for $\xi' \approx \xi$, then $v$ is robust to perturbation of the parameter.
    • Robust optimization: $\min_v \max_{\xi' \approx \xi} \ell(v, \xi')$ (a toy sketch follows this slide).
  • Robustness in machine learning:
    • Robust optimization was introduced to machine learning to handle observation noise (e.g., [Lanckriet et al. 2003]; [Lebret and El Ghaoui 1997]; [Shivaswamy et al. 2006]).
    • It was then discovered that SVM and Lasso can both be rewritten as robust optimization (of the empirical loss), and the RO formulation implies consistency [HX, Caramanis and SM 2009; 2010].
  • This paper formalizes this observation for general learning algorithms.
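
To make the min-max formulation concrete, here is a minimal sketch on a toy one-dimensional decision problem; the asymmetric loss, the perturbation radius, and the brute-force grid search are illustrative assumptions, not anything prescribed in the talk.

```python
import numpy as np

def loss(v, xi):
    # Toy asymmetric loss: underestimating xi costs twice as much as overestimating it.
    return np.maximum(2.0 * (xi - v), v - xi)

def worst_case_loss(v, xi, r, n_grid=201):
    # Inner maximization over the uncertainty set {xi' : |xi' - xi| <= r}.
    xis = np.linspace(xi - r, xi + r, n_grid)
    return loss(v, xis).max()

def minimize_over_grid(objective, v_grid):
    # Outer minimization, done by brute force for transparency.
    values = np.array([objective(v) for v in v_grid])
    return v_grid[int(values.argmin())]

xi, r = 1.0, 0.3
v_grid = np.linspace(0.0, 2.0, 2001)

v_nominal = minimize_over_grid(lambda v: loss(v, xi), v_grid)               # minimizes l(v, xi)
v_robust = minimize_over_grid(lambda v: worst_case_loss(v, xi, r), v_grid)  # min_v max_{xi'~xi} l(v, xi')
print(f"nominal decision: {v_nominal:.2f}, robust decision: {v_robust:.2f}")
```

The nominal decision sits at the observed parameter, while the robust one shifts toward the costlier side of the uncertainty set and therefore degrades less under the worst perturbation.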

  8. Difference with Stability: non-stable algorithm (figure).

  9. Difference with Stability: stable algorithm (figure).

  10. Difference with Stability: non-robust algorithm (figure).

  11. Difference with Stability: robust algorithm (figure).

  12–13. Outline
  1. Algorithmic Robustness and Generalization Bound
  2. Robust Algorithms
  3. (Weak) Robustness is Necessary and Sufficient for (Asymptotic) Generalizability

  14–16. Notations
  • Training sample set $s$ of $n$ training samples $(s_1, \dots, s_n)$.
  • $\mathcal{Z}$ and $\mathcal{H}$ are the set from which each sample is drawn and the hypothesis set, respectively.
  • $\mathcal{A}_s$ is the hypothesis learned given training set $s$.
  • For each hypothesis $h \in \mathcal{H}$ and point $z \in \mathcal{Z}$, there is an associated loss $\ell(h, z) \in [0, M]$.
  • In supervised learning, we decompose $\mathcal{Z} = \mathcal{Y} \times \mathcal{X}$, and use $|_x$ and $|_y$ to denote the $x$-component and $y$-component of a point.
  • $\mathcal{N}(\epsilon, \mathcal{T}, \rho)$ is the covering number of a metric space $\mathcal{T}$, i.e., the smallest number of radius-$\epsilon$ balls (in the metric $\rho$) needed to cover $\mathcal{T}$. (A greedy-cover sketch follows this slide.)
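
Both motivating examples hinge on the covering number $\mathcal{N}(\epsilon, \mathcal{T}, \rho)$, so here is a minimal greedy-cover sketch; the point set, the radius, and the greedy construction are illustrative assumptions, and the greedy count only upper-bounds the true covering number.

```python
import numpy as np

def greedy_cover(points, eps):
    """Greedily pick centers until every point is within eps (in the l2 metric) of one.

    The number of centers returned upper-bounds the covering number N(eps, T, ||.||_2).
    """
    remaining = points
    centers = []
    while len(remaining) > 0:
        center = remaining[0]                      # any uncovered point becomes a center
        centers.append(center)
        dists = np.linalg.norm(remaining - center, axis=1)
        remaining = remaining[dists > eps]         # discard everything this ball covers
    return np.array(centers)

rng = np.random.default_rng(0)
T = rng.uniform(-1.0, 1.0, size=(500, 2))          # toy sample space: 500 points in [-1, 1]^2
print("greedy cover size at eps = 0.25:", len(greedy_cover(T, eps=0.25)))
```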

  17. Motivating example 1: Large Margin Classifier
  An algorithm $\mathcal{A}_s$ has a margin $\gamma$ if, for $j = 1, \dots, n$,
  $$\mathcal{A}_s(x) = \mathcal{A}_s(s_j|_x), \quad \forall x : \|x - s_j|_x\|_2 < \gamma.$$
  Example. Fix $\gamma > 0$ and put $K = 2\,\mathcal{N}(\gamma/2, \mathcal{X}, \|\cdot\|_2)$. If $\mathcal{A}_s$ has a margin $\gamma$, then $\mathcal{Z}$ can be partitioned into $K$ disjoint sets, denoted by $\{C_i\}_{i=1}^K$, such that if $s_j$ and $z \in \mathcal{Z}$ belong to the same $C_i$, then $|\ell(\mathcal{A}_s, s_j) - \ell(\mathcal{A}_s, z)| = 0$. (A numerical check of the margin condition follows this slide.)
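
A quick numerical illustration of the margin condition for a fixed linear classifier; the weight vector, the training points, and the random perturbations are illustrative assumptions. Points within distance $\gamma$ of a training point receive the same label, so their 0-1 losses coincide, which is the zero loss gap claimed in the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hand-picked linear classifier sign(w.x + b) and training points whose geometric
# distance to the decision boundary is at least gamma (illustrative values).
w, b, gamma = np.array([1.0, 1.0]), 0.0, 0.5
X_train = np.array([[1.0, 1.0], [2.0, 0.0], [-1.0, -1.0], [0.0, -2.0]])
assert np.all(np.abs(X_train @ w + b) / np.linalg.norm(w) >= gamma)

def predict(X):
    return np.sign(X @ w + b)

# Sample points x with ||x - s_j|_x||_2 < gamma and verify the prediction is unchanged.
for x_j in X_train:
    directions = rng.normal(size=(200, 2))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = rng.uniform(0.0, gamma, size=(200, 1)) * 0.999
    assert np.all(predict(x_j + radii * directions) == predict(x_j[None, :]))
print("all perturbations within the margin keep the same prediction (zero 0-1 loss gap)")
```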

  18–20. Motivating example 1: Large Margin Classifier (illustrative figures).

  21. Motivating example 2: Linear Regression
  The norm-constrained linear regression algorithm is
  $$\mathcal{A}_s = \arg\min_{w \in \mathbb{R}^m : \|w\|_2 \le c} \; \sum_{i=1}^n \big| s_i|_y - w^\top s_i|_x \big|. \tag{0.1}$$
  Example. Fix $\epsilon > 0$ and let $K = \mathcal{N}(\epsilon/2, \mathcal{X}, \|\cdot\|_2) \times \mathcal{N}(\epsilon/2, \mathcal{Y}, |\cdot|)$. Consider the norm-constrained linear regression algorithm as in (0.1). The set $\mathcal{Z}$ can be partitioned into $K$ disjoint sets such that if $s_j$ and $z \in \mathcal{Z}$ belong to the same $C_i$, then $|\ell(\mathcal{A}_s, s_j) - \ell(\mathcal{A}_s, z)| \le (c + 1)\epsilon$. (A solver sketch for (0.1) follows this slide.)
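
A minimal sketch of one way to solve (0.1) numerically, via projected subgradient descent onto the $\ell_2$ ball of radius $c$; the synthetic data, step-size schedule, and iteration count are illustrative assumptions, and the talk does not prescribe any particular solver.

```python
import numpy as np

def project_l2_ball(w, c):
    # Euclidean projection onto {w : ||w||_2 <= c}.
    norm = np.linalg.norm(w)
    return w if norm <= c else w * (c / norm)

def norm_constrained_l1_regression(X, y, c, n_iters=2000):
    """Approximately solve min_{||w||_2 <= c} sum_i |y_i - w^T x_i| by projected subgradient."""
    n, m = X.shape
    w = np.zeros(m)
    for t in range(1, n_iters + 1):
        residual = y - X @ w
        subgrad = -X.T @ np.sign(residual)            # a subgradient of the summed absolute loss
        w = project_l2_ball(w - subgrad / (n * np.sqrt(t)), c)
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
w_true = np.array([0.5, -0.3, 0.2])
y = X @ w_true + 0.1 * rng.standard_normal(200)

w_hat = norm_constrained_l1_regression(X, y, c=1.0)
print("learned w:", np.round(w_hat, 3), " ||w||_2 =", round(float(np.linalg.norm(w_hat)), 3))
```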

  22–23. Algorithmic Robustness
  Definition (Algorithmic Robustness). Algorithm $\mathcal{A}$ is $(K, \epsilon(s))$-robust if
  • $\mathcal{Z}$ can be partitioned into $K$ disjoint sets, denoted by $\{C_i\}_{i=1}^K$,
  • such that for every training sample $s \in s$ and every $z \in \mathcal{Z}$,
  $$s, z \in C_i \;\Longrightarrow\; |\ell(\mathcal{A}_s, s) - \ell(\mathcal{A}_s, z)| \le \epsilon(s). \tag{0.2}$$
  Remark:
  • The definition requires that the loss on any testing sample "similar to" a training sample stays close to the loss on that training sample (an empirical-probing sketch follows this slide).
  • The property depends jointly on the solution returned by the algorithm and on the training set.
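
A minimal sketch of how $(K, \epsilon(s))$-robustness could be probed empirically: fix a partition of $\mathcal{Z}$, then take the largest loss gap between a training sample and any held-out point sharing its cell. The grid partition, the fixed hypothesis `w_hat`, the helpers `loss` and `cell_id`, and the synthetic data are all hypothetical choices, and a finite held-out set only lower-bounds the per-cell supremum.

```python
import numpy as np
from collections import defaultdict

w_hat = np.array([0.5, -0.3])                        # a fixed, already-learned hypothesis (hypothetical)

def loss(z):
    x, y = z[:-1], z[-1]
    return abs(y - w_hat @ x)                        # absolute loss on z = (x, y)

def cell_id(z, width=0.25):
    return tuple(np.floor(z / width).astype(int))    # axis-aligned grid cells play the role of the C_i

def empirical_epsilon(train, heldout):
    by_cell = defaultdict(list)
    for z in heldout:
        by_cell[cell_id(z)].append(loss(z))
    gaps = [abs(loss(z) - other) for z in train for other in by_cell.get(cell_id(z), [])]
    return max(gaps) if gaps else 0.0

rng = np.random.default_rng(3)
def make_samples(n):
    X = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = X @ np.array([0.5, -0.3]) + 0.05 * rng.standard_normal(n)
    return np.hstack([X, y[:, None]])

print("empirical epsilon(s):", round(empirical_epsilon(make_samples(100), make_samples(1000)), 4))
```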

  24–25. Generalization property of robust algorithms – the main theorem
  Theorem. Let $\hat{\ell}(\cdot)$ and $\ell_{\mathrm{emp}}(\cdot)$ denote the expected loss and the training loss. If $s$ consists of $n$ i.i.d. samples, and $\mathcal{A}$ is $(K, \epsilon(s))$-robust, then for any $\delta > 0$, with probability at least $1 - \delta$,
  $$\big| \hat{\ell}(\mathcal{A}_s) - \ell_{\mathrm{emp}}(\mathcal{A}_s) \big| \le \epsilon(s) + M \sqrt{\frac{2K \ln 2 + 2 \ln(1/\delta)}{n}}.$$
  Remark: the bound depends on the partitioning of the sample space (a numerical evaluation follows this slide).
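
The theorem's right-hand side is easy to evaluate, which makes the remark concrete: refining the partition lowers $\epsilon(s)$ but inflates the $K \ln 2$ term, so the best bound balances the two. The numbers plugged in below are illustrative assumptions.

```python
import math

def robustness_generalization_bound(eps_s, M, K, n, delta):
    # eps(s) + M * sqrt((2 K ln 2 + 2 ln(1/delta)) / n), as in the theorem.
    return eps_s + M * math.sqrt((2 * K * math.log(2) + 2 * math.log(1 / delta)) / n)

# Coarser partitions (small K) typically force a larger eps(s); finer ones pay in K.
for K, eps_s in [(50, 0.2), (500, 0.05), (5000, 0.01)]:
    bound = robustness_generalization_bound(eps_s, M=1.0, K=K, n=10_000, delta=0.01)
    print(f"K = {K:5d}, eps(s) = {eps_s:.2f}  ->  gap bound {bound:.3f}")
```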

  26–28. Proof of the Main Theorem
  • Let $N_i$ be the set of indices of points of $s$ that fall into $C_i$. Since the samples are i.i.d., $(|N_1|, \dots, |N_K|)$ is a multinomial random variable with parameters $n$ and $(\mu(C_1), \dots, \mu(C_K))$.
  • The Bretagnolle-Huber-Carol inequality gives
  $$\Pr\!\left( \sum_{i=1}^K \left| \frac{|N_i|}{n} - \mu(C_i) \right| \ge \lambda \right) \le 2^K \exp\!\left( -\frac{n\lambda^2}{2} \right).$$
  • Hence, with probability at least $1 - \delta$,
  $$\sum_{i=1}^K \left| \frac{|N_i|}{n} - \mu(C_i) \right| \le \sqrt{\frac{2K \ln 2 + 2 \ln(1/\delta)}{n}}. \tag{0.3}$$
  (A Monte Carlo check of this inequality follows this slide.)
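
A minimal Monte Carlo check of the Bretagnolle-Huber-Carol tail bound; the cell probabilities, the sample size, $\lambda$, and the number of trials are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
K, n, lam, trials = 8, 500, 0.15, 20_000
mu = rng.dirichlet(np.ones(K))                       # arbitrary cell probabilities mu(C_1), ..., mu(C_K)

counts = rng.multinomial(n, mu, size=trials)         # |N_1|, ..., |N_K| for each trial
deviation = np.abs(counts / n - mu).sum(axis=1)      # sum_i | |N_i|/n - mu(C_i) |
empirical_tail = (deviation >= lam).mean()

bhc_bound = 2.0 ** K * np.exp(-n * lam ** 2 / 2.0)
print(f"Pr[deviation >= {lam}] ~ {empirical_tail:.4f}  vs  BHC bound {bhc_bound:.4f}")
```

The empirical tail sits well below the bound, which is all the proof needs; solving the bound for $\lambda$ at level $\delta$ gives exactly (0.3).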

  29–30. Proof of the Main Theorem (Cont.)
  Furthermore,
  $$\begin{aligned}
  \big| \hat{\ell}(\mathcal{A}_s) - \ell_{\mathrm{emp}}(\mathcal{A}_s) \big|
  &= \Big| \sum_{i=1}^K \mathbb{E}\big[\ell(\mathcal{A}_s, z) \mid z \in C_i\big] \, \mu(C_i) - \frac{1}{n} \sum_{i=1}^n \ell(\mathcal{A}_s, s_i) \Big| \\
  &\le \Big| \sum_{i=1}^K \mathbb{E}\big[\ell(\mathcal{A}_s, z) \mid z \in C_i\big] \, \frac{|N_i|}{n} - \frac{1}{n} \sum_{i=1}^n \ell(\mathcal{A}_s, s_i) \Big| \\
  &\quad + \Big| \sum_{i=1}^K \mathbb{E}\big[\ell(\mathcal{A}_s, z) \mid z \in C_i\big] \, \mu(C_i) - \sum_{i=1}^K \mathbb{E}\big[\ell(\mathcal{A}_s, z) \mid z \in C_i\big] \, \frac{|N_i|}{n} \Big|.
  \end{aligned}$$
  • The first term is bounded by
  $$\frac{1}{n} \sum_{i=1}^K \sum_{j \in N_i} \max_{z_2 \in C_i} \big| \ell(\mathcal{A}_s, s_j) - \ell(\mathcal{A}_s, z_2) \big| \le \epsilon(s).$$
  • The second term is at most $M \sum_{i=1}^K \big| |N_i|/n - \mu(C_i) \big|$, which by (0.3) is at most $M \sqrt{(2K \ln 2 + 2 \ln(1/\delta))/n}$ with probability at least $1 - \delta$, completing the bound.
