Nearly Optimal Sparse Fourier Transform (Haitham Hassanieh, Piotr Indyk, Dina Katabi, Eric Price) - PowerPoint PPT Presentation



SLIDE 1

Nearly Optimal Sparse Fourier Transform

Haitham Hassanieh Piotr Indyk Dina Katabi Eric Price

MIT

2012-04-27

Hassanieh, Indyk, Katabi, and Price (MIT) Nearly Optimal Sparse Fourier Transform 2012-04-27 1 / 33

SLIDES 2–6

Outline

1 Introduction
2 Special case: exactly sparse signals
3 General case: approximately sparse signals
4 Experiments

SLIDES 7–9

The Discrete Fourier Transform

Discrete Fourier transform: given x ∈ Cⁿ, find

  x̂_i = Σ_j x_j ω^{ij},   i.e. x̂ = F x for F_{ij} = ω^{ij},

where ω is a primitive n-th root of unity. Fundamental tool:

◮ Compression (audio, image, video)
◮ Signal processing
◮ Data analysis
◮ ...

FFT: O(n log n) time.
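The definition above can be checked numerically against a library FFT. A minimal sketch, assuming NumPy's sign convention ω = e^{−2πi/n}:

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# DFT matrix: F[i, j] = omega^(i*j) with omega = exp(-2j*pi/n),
# matching NumPy's forward-transform sign convention.
omega = np.exp(-2j * np.pi / n)
F = omega ** np.outer(np.arange(n), np.arange(n))

x_hat = F @ x                              # the O(n^2) definition
assert np.allclose(x_hat, np.fft.fft(x))   # FFT computes the same in O(n log n)
```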

SLIDE 10

Sparse Fourier Transform

Often the Fourier transform is dominated by a small number of “peaks”

◮ Precisely the reason it is useful for compression.

If most of the mass is in k locations, can we compute the FFT faster?

SLIDE 11

Sparse Fourier Transform

[Figure: the signal in time and in frequency]

If there are at most k non-zero coefficients, the signal is “exactly k-sparse.” More often it is well approximated by its k largest coefficients: “approximately k-sparse.”

SLIDES 12–15

Previous work

Boolean cube: [KM92], [GL89]. Cⁿ: [Mansour92], k^c log^c n. Long line of additional work [GGIMS02, AGS03, Iwen10, Aka10]. Fastest is [Gilbert-Muthukrishnan-Strauss-05]: k log⁴ n.

◮ All have poor constants, many logs.
◮ Need n/k > 40,000 or ω(log³ n) to beat FFTW.
◮ Our goal: faster, beat FFTW for smaller n/k in theory and practice.
SLIDE 16

Our results

O(k log(n/k) log n) time. O(k log n) for the special case of exactly k-sparse signals. Faster than FFT when n/k = ω(1). Lower bounds:

◮ Ω(k log k) for the special case, assuming the FFT is optimal.
◮ For the general case, Ω(k log(n/k)/ log log(n/k)) samples, even with adaptive sampling.

SLIDES 17–21

Our results

Compute the k-sparse Fourier transform in O(k log(n/k) log n) time. Get x̂′ with approximation error

  ‖x̂′ − x̂‖₂² ≤ 2 · min_{k-sparse x̂_k} ‖x̂ − x̂_k‖₂²

with 3/4 probability. If x̂ is sparse, recover it exactly.

◮ In O(k log n) time.

Caveats:

◮ Additional ‖x̂‖₂²/n^{Θ(1)} error. Alternatively, assume x̂ has poly(n) precision.
◮ n must be a power of 2.

SLIDE 22

Outline

1 Introduction
2 Special case: exactly sparse signals
3 General case: approximately sparse signals
4 Experiments

SLIDES 23–25

Algorithm

Suppose x̂ is k-sparse, with integer coefficients in {−n^{Θ(1)}, . . . , n^{Θ(1)}}.

Theorem
We can recover x̂ in O(k log n) time with 3/4 probability.

Lemma (Weak sparse recovery)
We can recover x̂′ in O(k log n) time with 3/4 probability such that x̂ − x̂′ is k/2-sparse.

Then: repeat on x̂ − x̂′, with k → k/2 and decreasing the error probability. [Eppstein-Goodrich ’07]

SLIDES 26–30

Inspiration: arbitrary linear measurements

[Eppstein-Goodrich ’07]

Get linear measurements x_i = F⁻¹_i · x̂ of x̂. What if we could choose arbitrary linear measurements?
Pairwise independent hash: h : [n] → [B] for B = Θ(k).

  n coordinates → B bins

For j ∈ [B], observe

  u_j = Σ_{h(i)=j} x̂_i   and   u′_j = Σ_{h(i)=j} i · x̂_i.

For each j, set i* = u′_j/u_j and x̂′_{i*} = u_j.

SLIDES 31–39

Inspiration: arbitrary linear measurements

For j ∈ [B], observe u_j = Σ_{h(i)=j} x̂_i and u′_j = Σ_{h(i)=j} i · x̂_i. For each j, set i* = u′_j/u_j and x̂′_{i*} = u_j.

Gives weak sparse recovery:

◮ If i is alone in bucket h(i), it is recovered correctly.
◮ Hence i is recovered correctly with probability 1 − k/B ≥ 15/16.
◮ If i is recovered incorrectly, this can add one spurious coordinate.
◮ With 3/4 probability, fewer than k/4 such mistakes.
◮ Hence x̂ − x̂′ is k/2-sparse.

Goal: construct u, u′ from Fourier samples.

◮ Will be able to do this in O(B log n) time.
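The bucket measurements u and u′ above can be simulated with explicit access to x̂ to see why one round halves the sparsity. A toy sketch: the modular random hash below is only a stand-in for a true pairwise-independent family:

```python
import numpy as np

def weak_recovery(x_hat, B, rng):
    """One round of weak sparse recovery with arbitrary linear
    measurements: hash coordinates into B bins, observe the sum u_j and
    the index-weighted sum u'_j per bin, and read off i* = u'_j / u_j,
    which is exact for bins containing a single nonzero."""
    n = len(x_hat)
    h = rng.integers(0, B, size=n)   # stand-in for a pairwise-independent hash
    u = np.zeros(B, dtype=complex)
    u_prime = np.zeros(B, dtype=complex)
    for i in range(n):
        u[h[i]] += x_hat[i]
        u_prime[h[i]] += i * x_hat[i]
    x_rec = np.zeros(n, dtype=complex)
    for j in range(B):
        if abs(u[j]) > 1e-9:
            i_star = int(round((u_prime[j] / u[j]).real))
            if 0 <= i_star < n:
                x_rec[i_star] = u[j]
    return x_rec

rng = np.random.default_rng(1)
n, k = 256, 4
x_hat = np.zeros(n, dtype=complex)
x_hat[rng.choice(n, k, replace=False)] = rng.integers(1, 10, k)
x_rec = weak_recovery(x_hat, B=16 * k, rng=rng)
# With B = Theta(k) bins, most nonzeros land alone in their bin and are
# recovered exactly, so x_hat - x_rec is sparser than x_hat w.h.p.
```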

SLIDES 40–44

What can you do with Fourier measurements?

[Figure: time vs. frequency]

n-dimensional DFT: O(n log n).
n-dimensional DFT of the first B terms: O(n log n).
B-dimensional DFT of the first B terms: O(B log B).
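The O(B log B) step works because subsampling in frequency corresponds to aliasing in time: the B-point DFT of the first B samples equals every (n/B)-th entry of the full spectrum of the truncated signal. A quick numerical check (NumPy conventions assumed):

```python
import numpy as np

n, B = 1024, 32
rng = np.random.default_rng(0)
x = rng.standard_normal(n)

# Truncate to the first B terms (multiply by a boxcar), then compare:
# the B-point DFT of those samples equals the n-point DFT of the
# truncated signal, subsampled at every (n/B)-th frequency.
truncated = np.zeros(n)
truncated[:B] = x[:B]
small = np.fft.fft(x[:B])                  # O(B log B)
subsampled = np.fft.fft(truncated)[:: n // B]
assert np.allclose(small, subsampled)
```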

SLIDES 45–49

Framework

“Hashes” into B buckets in B log B time. Analogous to u_j = Σ_{h(i)=j} x̂_i. Issues:

◮ “Hashing” needs a random hash function
  ⋆ Access x′_t = ω^{−bt} x_{at}, so x̂′_{at+b} = x̂_t [GMS05]
◮ Leakage
◮ Want an analog of u′_j = Σ_{h(i)=j} i · x̂_i.
  ⋆ Time shift x′_t = x_{t−1}: get phase shift x̂′_i = ω^i x̂_i.
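The time-shift/phase-shift identity in the last bullet is easy to verify numerically, assuming NumPy's sign convention ω = e^{−2πi/n}:

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
omega = np.exp(-2j * np.pi / n)

# Shifting in time multiplies the spectrum by a per-frequency phase:
# if x'_t = x_{t-1}, then x-hat'_i = omega^i * x-hat_i.
x_shift = np.roll(x, 1)
lhs = np.fft.fft(x_shift)
rhs = omega ** np.arange(n) * np.fft.fft(x)
assert np.allclose(lhs, rhs)
```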

SLIDES 50–54

Leakage

Let F_i = { 1 if i < B, 0 otherwise } be the “boxcar” filter. (Used in [GGIMS02, GMS05].) Observe

  DFT(F·x, B) = subsample(DFT(F·x, n), B) = subsample(F̂ ∗ x̂, B).

The DFT F̂ of the boxcar filter is the sinc, which decays as 1/i. Need a better filter F!
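The slow 1/i decay is visible directly in the boxcar's spectrum (the Dirichlet kernel, the discrete analogue of sinc). A small check:

```python
import numpy as np

n, B = 1024, 32
boxcar = np.zeros(n)
boxcar[:B] = 1.0

# The boxcar's spectrum is the Dirichlet kernel
# |sin(pi*k*B/n) / sin(pi*k/n)|: a main lobe of height B, then
# sidelobes decaying only like 1/k, i.e. leakage everywhere.
mags = np.abs(np.fft.fft(boxcar))
assert np.isclose(mags[0], B)
assert mags[n // 4] < mags[1] / 10   # sidelobes sit well below the main lobe
```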

SLIDES 55–59

Filters

[Figure: filter in time and in frequency]

Given |supp(F)| = B log n, concentrate F̂.
Boxcar filter: decays perfectly in time, as 1/t in frequency.

◮ Non-trivial leakage everywhere.

Gaussians: decay as e^{−t²} in time and frequency.

◮ Non-trivial leakage to O(√log n · √log n) = O(log n) buckets.

Still O(B log n) time when |supp(F)| = B log n.

◮ Non-trivial leakage to 0 buckets.
◮ Trivial contribution to the correct bucket.

SLIDES 60–63

Filters

[Figure: filter in time and in frequency]

Let G be a Gaussian with σ = B√log n, and let H be a boxcar filter of length n/B. Use F = G ∗ H.
Hashes correctly to one bucket, leaks to at most 1 bucket.

SLIDES 64–68

Properties of filter

Filter (frequency): Gaussian ∗ boxcar. Filter (time): Gaussian · sinc.

“Pass region” of size n/B, outside which the filter is a negligible δ.
“Super-pass region” (about 9/10 of the n/B pass region), where the filter ≈ 1.
A small fraction (say 10%) is a “bad region” with intermediate value.
The time domain has support size O(B log n).
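This flat-window shape can be sketched numerically: multiply a Gaussian by a sinc in time (equivalently, convolve a Gaussian with a width-n/B boxcar in frequency) and inspect the bucket response. The width `sigma` below is an assumed choice for illustration, not the paper's tuned constant:

```python
import numpy as np

n, B = 1024, 16
t = np.arange(n) - n // 2
sigma = B * np.sqrt(np.log2(n))      # assumed width, for illustration only

# Time domain: Gaussian times sinc (zeros spaced B apart)
# <=> frequency domain: Gaussian convolved with a boxcar of width n/B.
filt = np.exp(-((t / sigma) ** 2) / 2) * np.sinc(t / B)
H = np.abs(np.fft.fft(np.fft.ifftshift(filt)))
H /= H[0]

# Flat "super-pass" region near the center, negligible outside the pass:
assert H[8] > 0.5      # well inside the pass region of width n/B = 64
assert H[100] < 0.05   # far outside: essentially no leakage
```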

SLIDES 69–76

Algorithm for exactly sparse signals

[Figures: original signal x and its spectrum x̂; filtered signal F·x and F̂ ∗ x̂; F·x aliased to B terms; computed samples of F̂ ∗ x̂; resulting knowledge about x̂]

Lemma
If i is alone in its bucket and in the “super-pass” region, u_{h(i)} = x̂_i. Computing u takes O(B log n) time.

SLIDE 77

Algorithm for perfectly sparse signals

Lemma
If i is alone in its bucket and in the “super-pass” region, u_{h(i)} = x̂_i.

Time-shift x by one and repeat: u′_{h(i)} = x̂_i ω^i. Divide to find i.
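The “divide to find i” step is the key trick: the same bucket measured on the original and on the time-shifted signal differ only by the phase ω^i. A one-bucket (B = 1) toy version for a 1-sparse spectrum:

```python
import numpy as np

n = 1024
i_true, coeff = 137, 3.0
x_hat = np.zeros(n, dtype=complex)
x_hat[i_true] = coeff
x = np.fft.ifft(x_hat)

# With B = 1 bucket: u = n*x_0 sums all of x-hat, and the time-shifted
# measurement u' = n*x_1 picks up the phase, so u'/u = e^{2*pi*1j*i/n}
# reveals the frequency i.
u, u_shift = x[0] * n, x[1] * n
phase = np.angle(u_shift / u)                    # = 2*pi*i/n (mod 2*pi)
i_rec = int(round(phase / (2 * np.pi) * n)) % n
assert i_rec == i_true and np.isclose(u, coeff)
```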

SLIDES 78–80

Permutation in time and frequency

Can recover coordinates that are alone in their bucket and in the super-pass region. What if coordinates are near each other?
Define the “permutation” (P_{a,b} x)_i = x_{ai} ω^{−ib}. Then (P̂_{a,b} x)_{ai+b} = x̂_i. For random a and b, each i is probably “well-hashed.”
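The permutation property can be verified numerically; the exact sign of the modulation depends on the FFT convention, and the version below is adapted to NumPy's:

```python
import numpy as np

def permute(x, a, b):
    """Pseudorandom spectrum permutation: dilate time by a and modulate
    by b, which sends frequency i to a*i + b (mod n). a must be odd so
    it is invertible mod a power-of-two n."""
    n = len(x)
    t = np.arange(n)
    return x[(a * t) % n] * np.exp(2j * np.pi * t * b / n)

n = 256
rng = np.random.default_rng(2)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
a, b = 77, 13                      # a odd => invertible mod 256
y_hat = np.fft.fft(permute(x, a, b))
x_hat = np.fft.fft(x)
i = np.arange(n)
assert np.allclose(y_hat[(a * i + b) % n], x_hat)
```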

SLIDES 81–84

Overall algorithm

Weak sparse recovery:

◮ Permute with random a, b.
◮ Hash to u.
◮ Time shift by one, hash to u′.
◮ For j ∈ [B]:
  ⋆ Choose i* by u′_j / u_j = ω^{i*}.
  ⋆ Set x̂′_{i*} = u_j.

Full sparse recovery:

x̂′ ← WeakRecovery(x, k)

◮ k → k/2, x → (x − x′), repeat.

Time dominated by the hash to B_r = k/2^r buckets in round r:

◮ B_r log n to hash x.
◮ Hashing x̂′ takes O(|supp(x̂′)|) = O(k).

Time: Σ_r (k/2^r · log n + k) = O(k log n).

SLIDE 85

Outline

1 Introduction
2 Special case: exactly sparse signals
3 General case: approximately sparse signals
4 Experiments

SLIDES 86–87

Nearly sparse signals

What happens if only 90% of the mass lies in the top k coordinates, not 100%? Want to find most “heavy” coordinates i with |x̂_i|² > ‖x̂_tail‖₂²/k.

Lemma
Each i is “well-hashed” with large constant probability over the permutation (a, b). If i is well-hashed, then with time shift c we have u_{h(i)} = x̂_i ω^{ci} + η, so that for random c the noise is bounded by E[|η|²] ≲ ‖x̂_tail‖₂²/B.

SLIDES 88–91

Recovering well-hashed i

[Figure: x̂_i ω^{ci} plus noise η, with phase error θ]

With good probability over c, get u_{h(i)} = x̂_i ω^{cπ(i)} + η with |η| < |x̂_i|/10.
Phase error |θ| ≤ sin⁻¹(|η|/|x̂_i|) < 0.11.
True for random c. For a fixed γ, run on c and c + γ to observe ω^{γπ(i)} to within 0.22.

SLIDE 92

Recovering well-hashed i

[Figure: observation of ω^{γi}]

Find i from the n/k possibilities in its bucket. Choose any γ, then observe ω^{γi} to within ±0.1 radians. That is a constant number of bits, so hope for Θ(log(n/k)) observations.


slide-98
SLIDE 98

Recovering well-hashed i

[Diagram: observation of ω^{γi} on the unit circle]

We know i to within a region of size R. Set γ = ⌊n/R⌋, restrict to the smaller region, and repeat, log(n/k) times in total.
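The restrict-and-repeat loop can be illustrated as follows. This is my own sketch, not the paper's algorithm: it assumes power-of-two region sizes and a stride of γ = n/(2R), so each round fixes exactly one bit of i from a single noisy phase observation:

```python
import math
import random

# Toy version of slides 93-98: knowing i within a region of size R, one
# noisy observation of the phase of omega^{gamma * i} at stride gamma ~ n/R
# halves the region.

n = 2 ** 16
i_true = 12345

def observe_phase(gamma):
    """Phase of omega^{gamma * i_true}, corrupted by < 0.2 rad of noise."""
    return 2 * math.pi * ((gamma * i_true) % n) / n + random.uniform(-0.2, 0.2)

def circ_dist(a, b):
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

known = 0                              # low-order bits of i recovered so far
for t in range(int(math.log2(n))):     # log2(n) restrict-and-repeat rounds
    gamma = n >> (t + 1)               # stride for the current region size
    theta = observe_phase(gamma)
    # The two hypotheses for bit t sit exactly pi apart in phase, so noise
    # below 0.2 rad can never flip the decision.
    candidates = (known, known + (1 << t))
    known = min(candidates,
                key=lambda c: circ_dist(2 * math.pi * ((gamma * c) % n) / n, theta))

assert known == i_true
print("recovered i =", known)
```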

slide-102
SLIDE 102

Problem: constant failure probability per measurement

[Diagram: observation of ω^{γi} on the unit circle]

We only estimate ω^{γi} well with 90% probability, so some of the log(n/k) restrictions will go awry. Two options:

◮ Take the median of O(log log(n/k)) estimates.
◮ Avoid the loss entirely: learn log log(n/k) bits at a time.
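The first option is the standard median trick; here is a small sketch of mine (the 0.9 success probability and noise widths are illustrative): the median of m estimates fails only when a majority of them fail, which happens with probability exp(−Ω(m)).

```python
import math
import random
import statistics

TRUE_PHASE = math.pi          # away from the 0/2*pi wraparound, so a plain median is safe

def one_estimate():
    """A phase estimate that is good (within 0.1 rad) with probability 0.9."""
    if random.random() < 0.9:
        return TRUE_PHASE + random.uniform(-0.1, 0.1)
    return random.uniform(0, 2 * math.pi)             # otherwise garbage

def median_estimate(m=9):
    # If >= 5 of the 9 estimates land within 0.1 of the truth, the median does too.
    return statistics.median(one_estimate() for _ in range(m))

failures = sum(abs(median_estimate() - TRUE_PHASE) > 0.1 for _ in range(300))
print("failures out of 300:", failures)
assert failures < 30   # per-trial failure probability is ~1e-3
```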


slide-110
SLIDE 110

General k-sparse algorithm

Shown how to find most heavy hitters; estimating their values is then straightforward. This gives "weak sparse recovery": a k-sparse x′ such that x − x′ is k/2-sparse.

◮ With a little additional noise [Gilbert-Li-Porat-Strauss '10].

Repeat on x − x′, with k → k/2. Round i takes O((B_i log n + k) log(n/B_i)) time, with B_i buckets.

◮ Previous recursion: B_i ≍ k_i ≍ k/2^i gives k log n log k, which can be ≫ n log n.
◮ Instead: B_i ≍ k/i^Θ(1), k_i ≍ k/i! gives k log n log(n/k).
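The repeat-on-the-residual idea can be shown with a toy model of mine (the "weak recovery" step here is a mock that simply misses half the remaining support at random, which is all the guarantee promises): the support empties within about log2(k) + 1 rounds.

```python
import random

# Toy illustration of the recursion: a weak step returns x' such that the
# residual x - x' is (k/2)-sparse; repeating with k -> k/2 finishes quickly.

n, k = 1024, 32
support = random.sample(range(n), k)
x = {i: random.uniform(1.0, 10.0) for i in support}   # exactly k-sparse signal

recovered, residual, rounds = {}, dict(x), 0
while residual:
    # Adversarial weak recovery: correctly estimates all but half of the
    # remaining heavy hitters (misses chosen at random).
    missed = set(random.sample(sorted(residual), len(residual) // 2))
    for i in list(residual):
        if i not in missed:
            recovered[i] = residual.pop(i)
    rounds += 1

assert recovered == x
assert rounds <= 6      # ceil(log2(32)) + 1
print("rounds:", rounds)
```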

slide-111
SLIDE 111

Outline

1. Introduction
2. Special case: exactly sparse signals
3. General case: approximately sparse signals
4. Experiments

slide-112
SLIDE 112

Empirical performance of exact sparse algorithm

[Plot: Run Time (sec) vs Signal Size n, k = 50; sFFT 3.0 (Exact), FFTW, AAFFT 0.9]
[Plot: Run Time (sec) vs Signal Sparsity k, n = 2^22; sFFT 3.0 (Exact), FFTW, AAFFT 0.9]

Compare to FFTW and the previous best sublinear algorithm (AAFFT). Faster than FFTW for k/n < 3%.

slide-113
SLIDE 113

Conclusions and Future Work

O(k log n) time for exactly sparse x; O(k log(n/k) log n) for approximation. Beats FFTW for k/n < 3% (in the exact case).

Open problems:

◮ Can we get k log n for approximate recovery?
◮ Hadamard matrix / FFT over finite fields?
◮ n not a power of 2?
◮ Higher probability of success without a log(1/δ) slowdown?
◮ Stronger approximation guarantee, like ℓ∞/ℓ2?
◮ Better recovery of off-grid frequencies?

slide-114
SLIDE 114


slide-115
SLIDE 115

SODA empirical performance: runtime

[Plot: Run Time (sec) vs Signal Size n, k = 50; sFFT 1.0, sFFT 2.0, FFTW, FFTW OPT, AAFFT 0.9]
[Plot: Run Time (sec) vs Signal Sparsity k, n = 2^22; sFFT 1.0, sFFT 2.0, FFTW, FFTW OPT, AAFFT 0.9]

Compare to FFTW and the previous best sublinear algorithm (AAFFT). We offer a heuristic that improves the time to O(n^{1/3} k^{2/3}).

◮ Filter from [Mansour '92].
◮ Can't rerandomize, so it might miss elements.

Faster than FFTW for n/k > 2,000. Faster than AAFFT for n/k < 1,000,000.

slide-116
SLIDE 116

SODA empirical performance: noise

[Plot: Robustness vs SNR, n = 2^22, k = 50; average L1 error per entry vs SNR (dB); sFFT 1.0, sFFT 2.0, AAFFT 0.9]

Just like Count-Sketch, the algorithm is noise tolerant.

slide-117
SLIDE 117

Saving a log log(n/k) factor

Could use only log(n/k) samples by taking random γ:

◮ For τ′ ≠ τ, ω^{γ(τ′−τ)} is uniform over the circle.
◮ Hence ω^{γτ′} is probably far from the observations.
◮ Distinguish among the n/k possibilities with log(n/k) samples.

Takes (n/k) log(n/k) time to test all possibilities. Idea: mix the two approaches.

◮ Split the region into log(n/k) subregions of size w.
◮ Choose random γ ∈ [n/(8w), n/(4w)].
◮ Small enough that subregions remain local.
◮ Large enough that far subregions look roughly uniform.
◮ Identify the subregion exhaustively: log log(n/k) measurements and log(n/k) log log(n/k) time.
◮ Repeat log_{log(n/k)}(n/k) times to identify τ.
◮ Total: log(n/k) measurements, log²(n/k) time.
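The pure random-γ test in the first bullet group can be sketched as follows. This is my own toy version: it scans all n candidates rather than the n/k candidates in one bucket, and restricts to odd strides (so wrong candidates' phases spread around the circle when n is a power of two):

```python
import math
import random

# Toy sketch of exhaustive testing with random strides: the true tau matches
# every noisy phase observation; a wrong tau' sees omega^{gamma (tau' - tau)}
# spread over the circle and is rejected with constant probability per test.

n = 2 ** 12
tau = 777
m = 40                                                   # number of random strides
gammas = [random.randrange(1, n, 2) for _ in range(m)]   # odd => invertible mod n

def observe(gamma):
    """Phase of omega^{gamma * tau}, corrupted by < 0.2 rad of noise."""
    return 2 * math.pi * ((gamma * tau) % n) / n + random.uniform(-0.2, 0.2)

thetas = [observe(g) for g in gammas]

def circ_dist(a, b):
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def consistent(t):
    """t survives only if it matches every observation to within 0.25 rad."""
    return all(circ_dist(2 * math.pi * ((g * t) % n) / n, th) < 0.25
               for g, th in zip(gammas, thetas))

survivors = [t for t in range(n) if consistent(t)]
assert survivors == [tau]
print("located tau =", survivors[0])
```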