SLIDE 32 Agnostic Proper Learning of Halfspaces with a Margin
Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin (Diakonikolas, Kane, Manurangsi)
Previous Works (approximation ratio 𝛽 = 1)
- [Shalev-Shwartz, Shamir & Sridharan'09]: improper 1-learner that runs in poly(d/ε)·exp(Õ(1/𝛿)) time and takes poly(d/ε)·exp(Õ(1/𝛿)) samples. Output hypothesis is not a halfspace.
- [Ben-David & Simon'00]: proper 1-learner that runs in poly(d)·exp(Õ(log(1/ε)/𝛿²)) time and takes O(1/(ε²𝛿²)) samples.

Our Results
- Theorem 1 (approximation ratio: any 𝛽 > 1): proper 1.01-learner that runs in poly(d/ε)·exp(Õ(1/𝛿²)) time and takes O(1/(ε²𝛿²)) samples.
- Theorem 2: assuming the Exponential Time Hypothesis, for any constant 𝛽 > 1, no proper 𝛽-learner runs in poly(d/ε)·exp(O(1/𝛿^(2-o(1)))) time.
- Theorem 3 (approximation ratio: 𝛽 = 1): assuming W[1] ≠ FPT, no proper 1-learner runs in poly(d/ε)·f(1/𝛿) time, for any function f.
- Also results for large approximation ratio 𝛽.
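The slide uses "proper 𝛽-learner" and the margin parameter 𝛿 without spelling out the underlying quantities. As a reading aid only, here is a minimal sketch in assumed standard notation for agnostic learning of margin halfspaces (not taken from the slide; the function name margin_error and the synthetic data are illustrative): it computes the empirical 𝛿-margin error of a halfspace, the quantity whose optimum OPT_𝛿 a 𝛽-learner must compete with by outputting a hypothesis with misclassification error roughly at most 𝛽·OPT_𝛿 + ε.

import numpy as np

def margin_error(w, X, y, delta=0.0):
    """Fraction of labeled samples with y * <w, x> < delta.

    With delta = 0 this is the usual 0/1 error of the halfspace x -> sign(<w, x>);
    with delta > 0 it also charges points that are classified correctly but lie
    within margin delta of the decision boundary."""
    w = np.asarray(w, dtype=float)
    w = w / np.linalg.norm(w)          # halfspaces are taken with a unit normal vector
    margins = y * (X @ w)              # signed margins y * <w, x>
    return float(np.mean(margins < delta))

# Illustrative usage on synthetic unit-ball data (hypothetical, not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the unit sphere
w_star = np.ones(5) / np.sqrt(5)
y = np.sign(X @ w_star)                          # labels from a target halfspace

print(margin_error(w_star, X, y, delta=0.0))    # 0/1 error of w_star (here 0)
print(margin_error(w_star, X, y, delta=0.2))    # its 0.2-margin error (typically larger)

In this reading, "proper" means the output hypothesis is itself a halfspace (a unit vector w), and delta = 0 recovers the ordinary misclassification error that Theorems 1–3 compare against 𝛽·OPT_𝛿 + ε.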