Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin

SLIDE 1

Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin

Ilias Diakonikolas

UW Madison

Daniel M. Kane

UC San Diego

Pasin Manurangsi

Google


SLIDES 2–11

Agnostic Proper Learning of Halfspaces

Input

  • Labeled samples (x₁, y₁), (x₂, y₂), … ∈ 𝔹d × {±1} drawn from a distribution 𝒟 (here 𝔹d denotes the unit ball in ℝ^d)
  • Positive real number ε

Output

A halfspace w with “small” classification error

  • OPT = min classification error among all halfspaces = min_w Pr_(x, y)~𝒟 [⟨w, x⟩・y < 0]
  • An algorithm is a 𝛽-learner if it outputs w with classification error at most 𝛽・OPT + ε (see the sketch after this slide)

[Figure: positively and negatively labeled points in the plane, with the optimal halfspace w* and a learner’s output halfspace w]

  • Bad news:
    [Arora et al.’97] Unless NP = RP, there is no poly-time 𝛽-learner for any constant 𝛽.
    [Guruswami-Raghavendra’06, Feldman et al.’06] Even weak learning is NP-hard.
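To make the definitions above concrete, here is a minimal sketch (ours, not from the talk), assuming NumPy: it computes the empirical classification error Pr[⟨w, x⟩・y < 0] of a halfspace w on a labeled sample, the quantity that OPT minimizes and that a 𝛽-learner must drive below 𝛽・OPT + ε. The helper name `classification_error` and the toy data are our own illustrations.

```python
# A minimal sketch (not from the talk) of the slide's quantities, assuming NumPy.
# A halfspace is identified with its normal vector w; the point (x, y) is
# misclassified exactly when <w, x> * y < 0.
import numpy as np

def classification_error(w, X, y):
    """Empirical classification error: fraction of samples with <w, x> * y < 0."""
    return float(np.mean((X @ w) * y < 0))

# Toy data in the unit ball of R^2, labeled by a halfspace w* plus 5% label noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # clip into unit ball
w_star = np.array([1.0, 0.0])
y = np.where(X @ w_star >= 0, 1, -1)
y[rng.random(1000) < 0.05] *= -1  # flip ~5% of the labels

# A beta-learner must output some w with error <= beta * OPT + eps; here the
# empirical error of w* itself (about 0.05) is an upper bound on OPT.
print(classification_error(w_star, X, y))
```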
SLIDES 12–18

Agnostic Proper Learning of Halfspaces with a Margin

Input

  • Labeled samples (x₁, y₁), (x₂, y₂), … ∈ 𝔹d × {±1} drawn from a distribution 𝒟
  • Positive real number ε

Output

A halfspace w with “small” classification error

  • OPT𝛿 = min 𝛿-margin error among all halfspaces = min_w Pr_(x, y)~𝒟 [⟨w, x⟩・y < 𝛿]
  • An algorithm is a 𝛽-learner if it outputs w with classification error at most 𝛽・OPT𝛿 + ε

Margin Assumption

  • “Robustness” of the optimal halfspace to ℓ₂ noise
  • Variants used in the Perceptron and SVMs (a margin-error sketch follows this slide)

[Figure: the optimal halfspace w* with a margin band of width 𝛿 on each side of the separating hyperplane]
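As a companion to the 𝛿-margin definition above, another minimal sketch (ours, assuming NumPy): it contrasts the zero-margin classification error with the 𝛿-margin error Pr[⟨w, x⟩・y < 𝛿], which counts strictly more points as 𝛿 grows. `margin_error` and the toy data are illustrative, not from the talk.

```python
# A minimal sketch (not from the talk), assuming NumPy: the delta-margin error
# of a unit-norm w counts every (x, y) with <w, x> * y < delta, i.e. both
# misclassified points and points within distance delta of the hyperplane.
import numpy as np

def margin_error(w, X, y, delta=0.0):
    """Empirical delta-margin error: fraction of samples with <w, x> * y < delta."""
    return float(np.mean((X @ w) * y < delta))

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # unit ball
w_star = np.array([1.0, 0.0])
y = np.where(X @ w_star >= 0, 1, -1)

# On this noiseless data w* has zero classification error, yet its 0.1-margin
# error is positive: OPT_delta is monotone nondecreasing in delta.
print(margin_error(w_star, X, y, 0.0), margin_error(w_star, X, y, 0.1))
```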
SLIDES 19–32

Agnostic Proper Learning of Halfspaces with a Margin: Previous Works vs. Our Results

Previous Works

  • [Shalev-Shwartz, Shamir & Sridharan’09] An improper 1-learner that runs in poly(d/ε)・exp(Õ(1/𝛿)) time and takes poly(d/ε)・exp(Õ(1/𝛿)) samples.
    Approximation ratio: 𝛽 = 1, but the output hypothesis is not a halfspace.
  • [Ben-David & Simon’00] A proper 1-learner that runs in poly(d)・exp(Õ(log(1/ε)/𝛿²)) time and takes O(1/(ε²𝛿²)) samples.
    Approximation ratio: 𝛽 = 1.

Our Results

  • Theorem 1: A proper 1.01-learner that runs in poly(d/ε)・exp(Õ(1/𝛿²)) time and takes O(1/(ε²𝛿²)) samples.
    Approximation ratio: any 𝛽 > 1.
  • Theorem 2: Assuming the Exponential Time Hypothesis, for any constant 𝛽 > 1, no proper 𝛽-learner runs in poly(d/ε)・exp(O(1/𝛿^(2−o(1)))) time.
  • Theorem 3: Assuming W[1] ≠ FPT, for any function f, no proper 1-learner (𝛽 = 1) runs in poly(d/ε)・f(1/𝛿) time.
  • Also results for large approximation ratio 𝛽. (A toy sketch of where exp-type dependence on 1/𝛿² can come from follows below.)
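For intuition about the exp(Õ(1/𝛿))- and exp(Õ(1/𝛿²))-type running times in the table, here is a toy, heavily hedged NumPy sketch of one classical route: random projection to roughly log(1/ε)/𝛿² dimensions (which approximately preserves 𝛿-margins), followed by a search over candidate unit directions. It is emphatically not the algorithm behind Theorem 1 or the cited works; `toy_net_learner`, the projection dimension k, and the candidate count are all illustrative choices of ours.

```python
# A toy sketch (ours, NOT the paper's algorithm) of one classical way that
# exp(O~(1/delta^2))-type runtimes arise for margin problems: project to
# k ~ log(1/eps)/delta^2 dimensions, then search candidate unit directions
# in R^k; a genuine delta/2-net would have (O(1/delta))^k points, which is
# where the exponential cost comes from. Assumes NumPy.
import numpy as np

def margin_error(w, X, y, delta=0.0):
    return float(np.mean((X @ w) * y < delta))

def toy_net_learner(X, y, delta, eps, rng, n_candidates=2000):
    d = X.shape[1]
    k = max(1, int(np.ceil(np.log(1.0 / eps) / delta**2)))  # illustrative choice
    A = rng.normal(size=(k, d)) / np.sqrt(k)                # JL-style projection
    Xp = X @ A.T
    best_u, best_err = None, np.inf
    for _ in range(n_candidates):                           # tiny stand-in for a net in R^k
        u = rng.normal(size=k)
        u /= np.linalg.norm(u)
        err = margin_error(u, Xp, y, delta / 2)             # optimize margin error
        if err < best_err:
            best_u, best_err = u, err
    w = A.T @ best_u                                        # lift back: output is a halfspace
    return w / np.linalg.norm(w)

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 50))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # unit ball in R^50
w_star = np.zeros(50); w_star[0] = 1.0
y = np.where(X @ w_star >= 0, 1, -1)

w = toy_net_learner(X, y, delta=0.3, eps=0.2, rng=rng)
print(margin_error(w, X, y, 0.0))  # zero-margin error of the learned halfspace
```

The gap between this toy's (O(1/𝛿))^k net size and the bounds above is what Theorems 1 and 2 pin down: exp(Õ(1/𝛿²)) time suffices for any approximation ratio 𝛽 > 1, and under ETH no proper learner can do substantially better.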