Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin
Ilias Diakonikolas (UW Madison), Daniel M. Kane (UC San Diego), Pasin Manurangsi (Google)


  1. Title slide: Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin. Ilias Diakonikolas (UW Madison), Daniel M. Kane (UC San Diego), Pasin Manurangsi (Google).

  2. Agnostic Proper Learning of Halfspaces. Input: labeled samples (x₁, y₁), (x₂, y₂), … ∈ ℝ^d × {±1} drawn from a distribution 𝒟. [Figure: positively and negatively labeled points in the plane.]

  3. Agnostic Proper Learning of Halfspaces. Input: labeled samples (x₁, y₁), (x₂, y₂), … ∈ ℝ^d × {±1} drawn from a distribution 𝒟, and a positive real number ε.

  4. Agnostic Proper Learning of Halfspaces. Input: labeled samples (x₁, y₁), (x₂, y₂), … ∈ ℝ^d × {±1} drawn from a distribution 𝒟, and a positive real number ε. Output: a halfspace w with "small" classification error.

  5. Agnostic Proper Learning of Halfspaces. Input and output as above. OPT = minimum classification error among all halfspaces = min_w Pr_{(x, y) ~ 𝒟} [y · ⟨w, x⟩ < 0].
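The empirical version of the classification error inside the OPT definition is easy to compute; the following NumPy sketch (toy data and names are my own illustration, not from the slides) counts the fraction of samples with y · ⟨w, x⟩ < 0:

```python
import numpy as np

def classification_error(w, X, y):
    """Empirical classification error of the halfspace x -> sign(<w, x>):
    the fraction of samples with y * <w, x> < 0."""
    margins = y * (X @ w)          # signed agreement per sample
    return float(np.mean(margins < 0))

# Toy data (illustrative): labels agree with sign(x[0]) except the last point.
X = np.array([[1.0, 0.2], [2.0, -1.0], [-1.5, 0.3], [-0.5, -2.0]])
y = np.array([1, 1, -1, 1])
w = np.array([1.0, 0.0])
print(classification_error(w, X, y))  # 0.25 (one of four points misclassified)
```

Under the distributional definition on the slide, OPT is this quantity minimized over all w, with the empirical average replaced by a probability under 𝒟.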

  6–7. Agnostic Proper Learning of Halfspaces. Same content as slide 5; the figure now also shows the optimal halfspace w*.

  8. Agnostic Proper Learning of Halfspaces. As above, with the guarantee: an algorithm is a β-learner if it outputs w with classification error at most β · OPT + ε.
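In two dimensions the β-learner guarantee can be sanity-checked against a brute-force estimate of OPT; a sketch under my own assumptions (the grid search is an illustration, not the paper's algorithm):

```python
import numpy as np

def empirical_error(w, X, y):
    # Fraction of samples with y * <w, x> < 0.
    return float(np.mean(y * (X @ w) < 0))

def empirical_opt_2d(X, y, n_angles=360):
    """Approximate OPT on a 2D sample by scanning unit-vector directions.
    (Illustration only: the true OPT minimizes over all halfspaces.)"""
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    return min(empirical_error(np.array([np.cos(t), np.sin(t)]), X, y)
               for t in angles)

def satisfies_beta_guarantee(w, X, y, beta, eps):
    """Check the beta-learner guarantee: err(w) <= beta * OPT + eps."""
    return empirical_error(w, X, y) <= beta * empirical_opt_2d(X, y) + eps

# Toy separable sample (illustrative)
X = np.array([[1.0, 0.0], [2.0, 1.0], [-1.0, 0.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
print(empirical_opt_2d(X, y))  # 0.0: the sample is linearly separable
print(satisfies_beta_guarantee(np.array([1.0, 0.0]), X, y, beta=2.0, eps=0.1))  # True
```

For a linearly separable sample OPT is 0, so the guarantee reduces to err(w) ≤ ε, matching the realizable case.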

  9–10. Agnostic Proper Learning of Halfspaces. Same content as slide 8; the figure now also shows the learner's output halfspace w alongside w*.

  11. Agnostic Proper Learning of Halfspaces. Bad news: [Arora et al.'97] unless NP = RP, there is no poly-time β-learner for any constant β. [Guruswami-Raghavendra'06, Feldman et al.'06] even weak agnostic learning of halfspaces is NP-hard.

  12. Agnostic Proper Learning of Halfspaces with a Margin. Input: labeled samples (x₁, y₁), (x₂, y₂), … ∈ ℝ^d × {±1} drawn from a distribution 𝒟, and a positive real number ε. Output: a halfspace w with "small" classification error.

  13–14. Agnostic Proper Learning of Halfspaces with a Margin. OPT_δ = minimum δ-margin error among all halfspaces = min_w Pr_{(x, y) ~ 𝒟} [y · ⟨w, x⟩ < δ]. The figure shows the optimal halfspace w*.
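The δ-margin error differs from the plain classification error in that correctly classified points lying inside the margin still count against w. A minimal sketch (toy data and normalization convention are my own assumptions):

```python
import numpy as np

def margin_error(w, X, y, delta):
    """Empirical delta-margin error: fraction of samples with y * <w, x> < delta,
    for unit-norm w (so delta is an l2 distance from the decision boundary)."""
    w = w / np.linalg.norm(w)
    return float(np.mean(y * (X @ w) < delta))

# Toy data (illustrative): the second point is correct but inside the margin.
X = np.array([[1.0, 0.0], [0.05, 0.0], [-1.0, 0.0]])
y = np.array([1, 1, -1])
w = np.array([1.0, 0.0])
print(margin_error(w, X, y, delta=0.0))  # 0.0: plain classification error
print(margin_error(w, X, y, delta=0.1))  # the margin now catches one of three points
```

With δ = 0 this recovers the classification error from the earlier slides; for δ > 0 it is at least as large, which is why OPT_δ ≥ OPT.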

  15. Agnostic Proper Learning of Halfspaces with a Margin. As above; the figure now marks the margin of width δ on each side of w*.

  16. Agnostic Proper Learning of Halfspaces with a Margin. An algorithm is a β-learner if it outputs w with classification error at most β · OPT_δ + ε.

  17. Agnostic Proper Learning of Halfspaces with a Margin. Margin Assumption: OPT_δ measures the "robustness" of the optimal halfspace to ℓ₂ noise in the examples.

  18. Agnostic Proper Learning of Halfspaces with a Margin. Margin Assumption (continued): variants of the margin assumption are used in the Perceptron and in SVMs.
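As the slide notes, margin assumptions underlie the Perceptron and SVMs: the classical Perceptron mistake bound, at most (R/δ)² mistakes on data of norm at most R separable with margin δ, is exactly where the margin enters. A minimal sketch (my own illustration, not from the talk):

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Classic Perceptron: on each mistake (y * <w, x> <= 0) update w += y * x.
    If the sample is separable with a delta-margin and ||x|| <= R, it makes
    at most (R / delta)^2 mistakes before finding a separating w."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:
                w = w + yi * xi
                mistakes += 1
        if mistakes == 0:          # a full pass with no mistakes: done
            break
    return w

# Toy separable data (illustrative)
X = np.array([[1.0, 1.0], [2.0, 0.0], [-1.0, -1.0], [0.0, -2.0]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
print(bool(np.all(y * (X @ w) > 0)))  # True: w separates the sample
```

On agnostic (non-separable) data the Perceptron need not converge, which is the setting the margin-robust formulation on these slides is designed for.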
