SLIDE 1

Empirical Testing of Sparse Approximation and Matrix Completion Algorithms

Jared Tanner, University of Oxford
Workshop on Sparsity, Compressed Sensing and Applications
Joint work with Blanchard, Donoho, and Wei


SLIDE 4

Three sparse approximation questions to test

Sparse approximation: min_x ||x||_0 s.t. ||Ax − b||_2 ≤ τ, with A ∈ R^{m×n}

  • 1. Are there algorithms that behave the same way for different matrices A?
  • 2. Which algorithm is fastest while keeping a high recovery probability?

Matrix completion: min_X rank(X) s.t. ||A(X) − b||_2 ≤ τ, where A maps R^{m×n} to R^p

  • 3. What is the largest rank that can be recovered with an efficient algorithm?

Information about each question can be gleaned from large-scale empirical testing. Let's use some HPC resources.


SLIDE 5

Sparse approximation phase transition

◮ Problem characterized by three numbers: k ≤ m ≤ n

  • n, signal length, the "Nyquist" sampling rate
  • m, number of inner-product measurements
  • k, signal complexity (sparsity), k := min_x ||x||_0

◮ Mixed under/over-sampling rates compared to naive/optimal:
  undersampling δ_m := m/n, oversampling ρ_m := k/m

◮ Testing model: for a matrix ensemble and algorithm, draw A and a k-sparse x_0, and let Π(k, m, n) be the probability of recovery

◮ For fixed (δ_m, ρ_m), Π(k, m, n) converges to 1 or 0 with increasing m; the two regimes are separated by a phase transition curve ρ(δ)

◮ For an algorithm with ρ(δ) large, is Π(k, m, n) insensitive to the matrix?
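The testing model above is straightforward to prototype. Below is a minimal Monte Carlo sketch of estimating Π(k, m, n), assuming NIHT (defined later in the talk) as the recovery algorithm and the Gaussian ensemble; the function names, iteration counts, and tolerance are illustrative, not the talk's GPU code.

```python
import numpy as np

def hard_threshold(x, k):
    """H_k: keep the k largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def niht(A, b, k, iters=300):
    """Normalized IHT: x <- H_k(x + kappa * A^T (b - A x))."""
    x = hard_threshold(A.T @ b, k)
    for _ in range(iters):
        r = A.T @ (b - A @ x)
        g = r * (x != 0)                      # gradient restricted to the support
        denom = np.linalg.norm(A @ g) ** 2
        kappa = np.linalg.norm(g) ** 2 / denom if denom > 0 else 1.0
        x = hard_threshold(x + kappa * r, k)
    return x

def recovery_probability(k, m, n, trials=100, tol=1e-3, seed=0):
    """Monte Carlo estimate of Pi(k, m, n) for the Gaussian ensemble."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x0 = np.zeros(n)
        x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        xhat = niht(A, A @ x0, k)
        successes += np.linalg.norm(xhat - x0) <= tol * np.linalg.norm(x0)
    return successes / trials
```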


SLIDE 6

Phase Transition: ℓ1 ball, C^n

◮ With overwhelming probability on the measurements A_{m,n}: for any ε > 0, as (k, m, n) → ∞

  • All k-sparse signals if k/m ≤ ρ_S(m/n; C)(1 − ε)
  • Most k-sparse signals if k/m ≤ ρ_W(m/n; C)(1 − ε)
  • Failure typical if k/m ≥ ρ_W(m/n; C)(1 + ε)

[Figure: the strong (ρ_S) and weak (ρ_W) phase transition curves against δ = m/n, with k/m on the vertical axis; ρ_S bounds recovery of all signals, ρ_W recovery of most signals.]

◮ Asymptotic behaviour as δ → 0: ρ(m/n) ∼ [2(e) log(n/m)]^{−1} (constant 2 for the weak transition, 2e for the strong)
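To make the asymptotic rate concrete, a quick evaluation of the weak-transition bound at δ = m/n = 0.01, taking the constant 2 per the reading above:

```latex
\rho\left(\tfrac{m}{n}\right) \sim \frac{1}{2\log(n/m)}
\quad\Longrightarrow\quad
\rho(0.01) \approx \frac{1}{2\log(100)} \approx 0.109,
\qquad\text{i.e. } k \approx 0.109\,m .
```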


SLIDE 7

Phase Transition: Simplex, T^{n−1}, x ≥ 0

◮ With overwhelming probability on the measurements A_{m,n}: for any ε > 0 and x ≥ 0, as (k, m, n) → ∞

  • All k-sparse signals if k/m ≤ ρ_S(m/n; T)(1 − ε)
  • Most k-sparse signals if k/m ≤ ρ_W(m/n; T)(1 − ε)
  • Failure typical if k/m ≥ ρ_W(m/n; T)(1 + ε)

[Figure: the strong (ρ_S) and weak (ρ_W) phase transition curves for the simplex against δ = m/n, with k/m on the vertical axis.]

◮ Asymptotic behaviour as δ → 0: ρ(m/n) ∼ [2(e) log(n/m)]^{−1}


SLIDE 9

ℓ1-Weak Phase Transitions: Visual agreement

◮ Testing beyond the proven theory, 6.4 CPU years later...
◮ Black: weak phase transition, x ≥ 0 (top), x signed (bottom)
◮ Overlaid empirical evidence of the 50% success rate:

[Figure: empirical 50% success-rate curves overlaid on the weak transition ρ(δ; Q), plotted against δ = n/N with ρ = k/n; ensembles: Gaussian, Bernoulli, Fourier, Ternary p = 2/3, Ternary p = 2/5, Ternary p = 1/10, Hadamard, Expander p = 1/5, Rademacher.]

◮ Gaussian, Bernoulli, Fourier, Hadamard, Rademacher
◮ Ternary (p): P(0) = 1 − p and P(±1) = p/2
◮ Expander (p): ⌈p · n⌉ ones per column, otherwise zeros
◮ Rigorous statistical comparison shows n^{−1/2} convergence
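A sketch of how a few of these ensembles might be drawn. Here `rows` plays the role of n in this figure's δ = n/N convention (the number of measurements); the 1/√rows scalings are illustrative normalizations, not prescribed by the talk.

```python
import numpy as np

rng = np.random.default_rng()

def gaussian(rows, cols):
    """Gaussian ensemble: i.i.d. N(0, 1/rows) entries."""
    return rng.standard_normal((rows, cols)) / np.sqrt(rows)

def rademacher(rows, cols):
    """Rademacher ensemble: i.i.d. +/-1 entries, scaled."""
    return rng.choice([-1.0, 1.0], size=(rows, cols)) / np.sqrt(rows)

def ternary(rows, cols, p):
    """Ternary(p): P(0) = 1 - p, P(+1) = P(-1) = p/2."""
    return rng.choice([0.0, 1.0, -1.0], size=(rows, cols),
                      p=[1 - p, p / 2, p / 2])

def expander(rows, cols, p):
    """Expander(p): ceil(p * rows) ones per column, zeros elsewhere."""
    d = int(np.ceil(p * rows))
    A = np.zeros((rows, cols))
    for j in range(cols):
        A[rng.choice(rows, d, replace=False), j] = 1.0
    return A
```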


SLIDE 10

Bulk Z-scores

[Figure: bulk Z-scores against δ = n/N for four ensembles: (a) Bernoulli, (b) Fourier, (c) Ternary (1/3), (d) Rademacher.]

◮ n = 200, n = 400 and n = 1600
◮ Linear trend with δ = m/n, decaying at rate n^{−1/2}
◮ Proven for matrices with subgaussian tails, Montanari 2012
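The slide does not spell out the Z-score construction. One plausible reading, comparing an ensemble's success count at a given (k, m, n) cell against a reference ensemble's, is the standard two-proportion z-statistic sketched below; this is an assumption, not the authors' documented procedure.

```python
import numpy as np

def two_proportion_z(s1, n1, s2, n2):
    """Z-score comparing binomial success counts s1/n1 vs s2/n2
    using the pooled-proportion normal approximation."""
    p_pool = (s1 + s2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (s1 / n1 - s2 / n2) / se

# e.g. 480/500 successes for Bernoulli vs 465/500 for a reference at one cell
print(two_proportion_z(480, 500, 465, 500))
```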


SLIDE 13

Which algorithm is fastest while having a high phase transition?

State-of-the-art algorithms for sparse approximation

◮ Hard Thresholding: H_k(A^T b), followed by a subspace-restricted linear solve (Conjugate Gradient)

◮ Normalized IHT: H_k(x^t + κA^T(b − Ax^t)) (steepest descent)

◮ Hard Thresholding Pursuit: NIHT with a pseudo-inverse solve on the support

◮ CSMPSP (hybrid of CoSaMP and Subspace Pursuit); one iteration, sketched in code at the end of this slide:

  v^{t+1} = H_{αk}(x^t + κA^T(b − Ax^t))        threshold
  I^t = supp(v^{t+1}) ∪ supp(x^t)               join support sets
  w^t = (A_{I^t}^T A_{I^t})^{−1} A_{I^t}^T b    least-squares fit
  x^{t+1} = H_{βk}(w^t)                         second threshold

◮ SpaRSA [Lee and Wright '08]

◮ Testing environment with random problem generation, or with a user-supplied matrix and measurements

◮ Matrix ensembles: Discrete Cosine Transform, sparse matrices, Gaussian
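A minimal NumPy sketch of one CSMPSP iteration as written above; the stepsize κ and the threshold multipliers α, β are plain parameters here, since their tuning is not specified on this slide.

```python
import numpy as np

def hard_threshold(x, k):
    """H_k: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def csmpsp_step(A, b, x, k, kappa=1.0, alpha=1.0, beta=1.0):
    """One CSMPSP iteration: threshold, join supports, least squares, threshold."""
    v = hard_threshold(x + kappa * (A.T @ (b - A @ x)), int(alpha * k))
    I = np.union1d(np.nonzero(v)[0], np.nonzero(x)[0]).astype(int)
    w = np.zeros_like(x)
    w[I] = np.linalg.lstsq(A[:, I], b, rcond=None)[0]   # fit on joined support
    return hard_threshold(w, int(beta * k))
```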

SLIDE 14

Ingredients of Greedy CS Algorithms:

◮ Descent: ν^t := x^t + κA^T(b − Ax^t) with

  κ = ||A_{Λ_t}^T (b − Ax^t)||_2^2 / ||A_{Λ_t} A_{Λ_t}^T (b − Ax^t)||_2^2

  requires two matvecs and one transposed matvec, plus vector additions.

◮ Support: identification of the support set for x^{t+1} = H_k(ν^t), hard thresholding, and calculating κ. Linear binning gives a fast parallel order-statistic calculation (sketched below), done only when the support set could change. This reduced the support-set time to a small fraction of one DCT matvec.

◮ Generation: when testing millions of problems, problem generation can become slow, especially using MATLAB's randn. Total time (for large problems) was reduced to essentially the matvecs.
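A serial sketch of the linear-binning idea for the order statistic (finding the magnitude threshold for H_k): histogram the magnitudes into equal-width bins and scan from the top bin down. The GPU version parallelizes the binning; this CPU version only illustrates the principle.

```python
import numpy as np

def threshold_for_top_k(v, k, nbins=1024):
    """Approximate the k-th largest magnitude of v via linear binning."""
    mags = np.abs(v)
    hi = mags.max()
    if hi == 0.0:
        return 0.0
    counts, edges = np.histogram(mags, bins=nbins, range=(0.0, hi))
    total = 0
    for i in range(nbins - 1, -1, -1):   # scan from largest magnitudes down
        total += counts[i]
        if total >= k:
            return edges[i]              # entries above this edge ~ the top k
    return 0.0
```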


SLIDE 15

Computing environment

CPU:

◮ Intel Xeon 5650 (released March 2010)
◮ 6 cores, 2.66 GHz
◮ 12 GB of DDR3 (PC3-1066), 6.4 GT/s
◮ MATLAB 2010a, 64-bit (inherent multi-core threading)

GPU:

◮ NVIDIA Tesla C2050 (released April 2010)
◮ 448 cores, peak performance 1.03 Tflop/s
◮ 3 GB GDDR5 (on-device memory)
◮ Error correction (ECC)

Is it faster?


SLIDE 16

Multiplicative acceleration factor for NIHT: CPU/GPU

Ensemble   n      Nonzeros/col   Descent   Support   Generation
dct        2^14        —           63.21     42.16       1.04
dct        2^16        —           64.46     41.59       1.77
dct        2^18        —           54.11     38.45       3.20
dct        2^20        —           57.94     38.82       5.80
smv        2^12        4            0.52      4.10      32.32
smv        2^14        4            1.41     14.64     135.08
smv        2^16        4            4.29     43.04     521.60
smv        2^18        4           10.43     71.50    1630.08
smv        2^12        7            0.63      3.48      33.92
smv        2^14        7            1.86     12.86     142.53
smv        2^16        7            5.42     37.11     526.82
smv        2^18        7           10.80     55.60    1556.44
gen        2^10        —            1.06      2.07       0.34
gen        2^12        —           10.36      4.09       2.53
gen        2^14        —           16.75      6.17       5.85

(dct = Discrete Cosine Transform, smv = sparse matrix, gen = dense Gaussian; values are CPU time / GPU time.)


SLIDE 17

Algorithm selection map for DCT, n = 2^16

[Figure: fastest algorithm at each (m/n, k/m) for n = 65536; symbols: NIHT circle, HTP plus, CSMPSP square, ThresholdCG times.]

NIHT dominant near phase transition.


SLIDE 18

Algorithm selection map for DCT, n = 2^18

[Figure: fastest algorithm at each (m/n, k/m) for n = 262144; symbols: NIHT circle, HTP plus, CSMPSP square, ThresholdCG times.]

NIHT dominant near phase transition.


SLIDE 19

Algorithm selection map for DCT, n = 2^20

[Figure: fastest algorithm at each (m/n, k/m) for n = 1048576; symbols: NIHT circle, HTP plus, CSMPSP square, ThresholdCG times.]

NIHT dominant near phase transition, though HTP nearly as fast.


SLIDE 20

HTP time / best time for DCT, n = 2^20

[Figure: ratio of HTP's time to the fastest algorithm's time at each (m/n, k/m) for n = 1048576; colour scale from 1 to 3.]

NIHT and HTP have essentially identical average case behaviour.


SLIDE 21

Best time for DCT, n = 2^14

[Figure: time (ms) of the fastest algorithm at each (m/n, k/m) for n = 16384; colour scale 25 to 45 ms.]


SLIDE 22

Best time for DCT, n = 2^16

[Figure: time (ms) of the fastest algorithm at each (m/n, k/m) for n = 65536; colour scale 25 to 60 ms.]


SLIDE 23

Best time for DCT, n = 2^18

[Figure: time (ms) of the fastest algorithm at each (m/n, k/m) for n = 262144; colour scale 40 to 110 ms.]


SLIDE 24

Best time for DCT, n = 2^20

[Figure: time (ms) of the fastest algorithm at each (m/n, k/m) for n = 1048576; colour scale 100 to 300 ms.]


SLIDE 25

Concentration phenomenon: NIHT for DCT, δ = 0.25

◮ Logit fit, exp(β_0 + β_1 k) / (1 + exp(β_0 + β_1 k)), to data from about 10^5 tests
◮ ρ_W^{niht}(1/4) ≈ 0.25967 (note ρ_W(1/4; C) = 0.2674)
◮ Transition width proportional to n^{−1/2}
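A sketch of fitting that logit model to per-k success fractions with SciPy; the data values here are illustrative placeholders, not the talk's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logit_model(k, b0, b1):
    """Success probability exp(b0 + b1*k) / (1 + exp(b0 + b1*k))."""
    z = b0 + b1 * k
    return np.exp(z) / (1.0 + np.exp(z))

# Illustrative data: success fraction observed at each tested sparsity k
k_vals = np.array([40, 45, 50, 55, 60, 65, 70], dtype=float)
succ = np.array([1.00, 0.98, 0.90, 0.55, 0.15, 0.02, 0.00])

(b0, b1), _ = curve_fit(logit_model, k_vals, succ, p0=[10.0, -0.2])
k_half = -b0 / b1   # sparsity where the fitted success probability crosses 50%
print(f"50% success at k ~ {k_half:.1f}")
```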


SLIDE 28

Optimal order recovery - matrix completion

◮ Four defining numbers: r ≤ m ≤ n and p

  • m × n, matrix size; mn is the "Nyquist" sampling rate
  • p, number of inner-product measurements
  • r, matrix complexity, the rank

◮ For what (r, m, n, p) does an encoder/decoder pair recover a suitable approximation of X from (b, A)?

  • p = r(m + n − r) is the optimal oracle rate
  • p ∼ r(m + n − r) is possible using efficient algorithms

◮ Mixed under/over-sampling rates compared to naive/optimal:
  undersampling δ := p/(mn), oversampling ρ := r(m + n − r)/p


SLIDE 29

Largest rank recoverable with efficient algorithm

◮ Compressed sensing algorithms all behave about the same
◮ How about matrix completion: do simple methods work well?
◮ NIHT: alternating projection with a column-subspace stepsize

  X^{j+1} = H_r(X^j + μ_j A^*(b − A(X^j)))  with
  μ_j := ||P_U^j A^*(b − A(X^j))||_F^2 / ||A(P_U^j A^*(b − A(X^j)))||_2^2

  where P_U^j := U_j U_j^* (column & row projection doesn't work)

◮ Contrast NIHT with nuclear norm minimization via semidefinite programming, and with simple Power Factorization
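A compact NumPy sketch of this iteration for entry sensing, where A(X) samples the entries indexed by `mask = (rows, cols)` and H_r is a truncated SVD; the iteration count is illustrative and no stopping rule is shown.

```python
import numpy as np

def hard_threshold_rank(X, r):
    """H_r: best rank-r approximation of X (truncated SVD); also return U."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :], U[:, :r]

def niht_completion(b, mask, shape, r, iters=500):
    """NIHT for matrix completion under entry sensing."""
    X = np.zeros(shape)
    X[mask] = b                        # A*(b) for entry sensing
    X, U = hard_threshold_rank(X, r)
    for _ in range(iters):
        R = np.zeros(shape)
        R[mask] = b - X[mask]          # lifted residual A*(b - A(X))
        G = U @ (U.T @ R)              # P_U applied to the lifted residual
        denom = np.linalg.norm(G[mask]) ** 2          # ||A(P_U ...)||_2^2
        mu = np.linalg.norm(G, 'fro') ** 2 / denom if denom > 0 else 1.0
        X, U = hard_threshold_rank(X + mu * R, r)
    return X
```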


SLIDE 30

Three matrix completion algorithms to compare

◮ Nuclear norm minimization (extension of ℓ1 in CS):

  min_X ||X||_* := Σ_i σ_i(X)  subject to  A(X) = b

◮ NIHT for matrix completion (how to select μ_j?):

  X^{j+1} = H_r(X^j + μ_j A^*(b − A(X^j)))

◮ Power Factorization: write X = RV and solve

  min_{R,V} ||A(RV) − b||_2

Benchmark the algorithms' ability to recover low-rank matrices, and contrast their speed and memory requirements. 4.3 CPU years later...
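For reference, the nuclear-norm baseline is a few lines in a generic convex-modelling tool such as CVXPY; the sketch below is for entry sensing and is not the semidefinite-programming code used in the talk.

```python
import numpy as np
import cvxpy as cp

def nuclear_norm_complete(b, rows, cols, shape):
    """min ||X||_* subject to X agreeing with the observed entries b."""
    X = cp.Variable(shape)
    constraints = [X[i, j] == v for i, j, v in zip(rows, cols, b)]
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
    prob.solve()
    return X.value
```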


SLIDE 31

NIHT vs “state of the art”, Gaussian sensing (m = n = 80)

[Figure: recovery phase transition (γ = 1), ρ against p/mn, for NIHT with column projection (0.999), Power Factorization, and Nuclear Norm Minimization, all with Gaussian measurements.]

◮ Simple NIHT has nearly optimal recovery ability
◮ Convex relaxation consistent with the theory of Hassibi et al.


SLIDE 32

NIHT vs “state of the art”, entry sensing (m = n = 800)

[Figure: recovery phase transition (γ = 1), ρ against p/mn, for NIHT with column projection (0.999), Power Factorization, and Nuclear Norm Minimization, all with entry measurements.]

◮ Simple NIHT has nearly optimal recovery ability
◮ Convex relaxation is slow and has a small recovery region


SLIDE 33

Conclusions

◮ There are many algorithms for sparse approximation and matrix completion, all proven to have optimal-order recovery: m ≥ Const · k log(n/m) and p ≥ Const · r(m + n − r).

◮ Empirical testing can suggest conjectures and point us to the "best" methods.

◮ High-performance computing tools allow testing large numbers of problems, and individual problems quickly: the GPU software solves problems of size n = 10^6 in under one second.

Two new findings:

◮ Near universality of the phase transitions of ℓ1 and the CS algorithms
◮ Convexification is less effective for matrix completion; simple methods for rank minimization have a higher phase transition


SLIDE 34

References

◮ Donoho and Tanner, "Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing," Phil. Trans. Roy. Soc. A (2009).

◮ Blanchard and Tanner, "GPU accelerated greedy algorithms for compressed sensing" (2012).

◮ Tanner and Wei, "Normalized iterative hard thresholding for matrix completion" (2012).

Thanks for your time
