
Signal Recovery from Random Measurements — Joel A. Tropp, Anna C. Gilbert (PowerPoint presentation)



  1. Signal Recovery from Random Measurements ❦
  Joel A. Tropp and Anna C. Gilbert
  {jtropp|annacg}@umich.edu
  Department of Mathematics, The University of Michigan
  Signal Recovery from Partial Information (Madison, 29 August 2006)

  2. The Signal Recovery Problem ❦
  Let s be an m-sparse signal in R^d, for example
      s = ( -7.3  0  0  0  2.7  0  ...  1.5  0 )^T
  Use measurement vectors x_1, ..., x_N to collect N nonadaptive linear measurements of the signal:
      ⟨s, x_1⟩, ⟨s, x_2⟩, ..., ⟨s, x_N⟩
  Q1. How many measurements are necessary to determine the signal?
  Q2. How should the measurement vectors be chosen?
  Q3. What algorithms can perform the reconstruction task?
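A minimal NumPy sketch of this measurement model (not from the slides; the sizes d, m, N and the random seed are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    d, m, N = 256, 4, 50             # illustrative sizes

    # Build an m-sparse signal s in R^d.
    s = np.zeros(d)
    support = rng.choice(d, size=m, replace=False)
    s[support] = rng.standard_normal(m)

    # Collect N nonadaptive linear measurements <s, x_n> with random vectors x_n.
    X = rng.standard_normal((N, d))  # row n is the measurement vector x_n
    measurements = X @ s             # entry n equals <s, x_n>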

  3. Motivations I ❦
  Medical Imaging
  ❧ Tomography provides incomplete, nonadaptive frequency information
  ❧ The images typically have a sparse gradient
  ❧ Reference: [Candès–Romberg–Tao 2004]
  Sensor Networks
  ❧ Limited communication favors nonadaptive measurements
  ❧ Some types of natural data are approximately sparse
  ❧ References: [Haupt–Nowak 2005, Baraniuk et al. 2005]

  4. Motivations II ❦
  Sparse, High-Bandwidth A/D Conversion
  ❧ Signals of interest have few important frequencies
  ❧ Locations of the frequencies are unknown a priori
  ❧ Frequencies are spread across gigahertz of bandwidth
  ❧ Current analog-to-digital converters cannot provide resolution and bandwidth simultaneously
  ❧ Must develop new sampling techniques
  ❧ Reference: [Healy 2005]

  5. Q1: How many measurements? ❦
  Adaptive measurements
  ❧ Consider the class of m-sparse signals in R^d that have 0–1 entries
  ❧ It is clear that log_2 (d choose m) bits suffice to distinguish members of this class
  ❧ By Stirling's approximation, storage per signal: O(m log(d/m)) bits
  ❧ A simple adaptive coding scheme can achieve this rate
  Nonadaptive measurements
  ❧ The naïve approach uses d orthogonal measurement vectors
  ❧ Storage per signal: O(d) bits
  ❧ But we can do exponentially better...
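As a quick numerical check of the counting argument (my own arithmetic, not from the slides), the exact bit count log_2 (d choose m) can be compared with the Stirling-style estimate m log_2(d/m):

    import math

    # Compare exact and approximate bit counts for one illustrative (d, m).
    d, m = 256, 4
    exact = math.log2(math.comb(d, m))   # bits to index one m-sparse 0-1 signal
    estimate = m * math.log2(d / m)      # the O(m log(d/m)) rule
    print(exact, estimate)               # roughly 27.4 vs 24.0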

  6. Q2: What type of measurements? ❦
  Idea: Use randomness
  Random measurement vectors yield summary statistics that are nonadaptive yet highly informative. Examples:
  ❧ Bernoulli measurement vectors: independently draw each x_n uniformly from {−1, +1}^d
  ❧ Gaussian measurement vectors: independently draw each x_n from the distribution (2π)^{−d/2} exp(−‖x‖_2^2 / 2)
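Both ensembles are straightforward to draw; a small sketch, with illustrative sizes:

    import numpy as np

    rng = np.random.default_rng(1)
    d, N = 256, 50   # illustrative sizes

    # Bernoulli measurement vectors: each entry is -1 or +1 with equal probability.
    X_bernoulli = rng.choice([-1.0, 1.0], size=(N, d))

    # Gaussian measurement vectors: each x_n ~ N(0, I_d),
    # i.e. density (2*pi)^(-d/2) * exp(-||x||^2 / 2).
    X_gaussian = rng.standard_normal((N, d))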

  7. Connection with Sparse Approximation ❦
  Define the fat N × d measurement matrix Φ whose rows are the measurement vectors:
      Φ = [ x_1  x_2  ...  x_N ]^T
  The columns of Φ are denoted φ_1, ..., φ_d
  Given an m-sparse signal s, form the data vector v = Φ s
  Note that v is a linear combination of m columns from Φ
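A short sketch (illustrative, not from the slides) verifying that the data vector v = Φs uses only the m columns of Φ indexed by the support of s:

    import numpy as np

    rng = np.random.default_rng(2)
    d, m, N = 256, 4, 50   # illustrative sizes

    s = np.zeros(d)
    support = rng.choice(d, size=m, replace=False)
    s[support] = rng.standard_normal(m)

    Phi = rng.standard_normal((N, d))   # fat N x d matrix; row n is x_n^T
    v = Phi @ s                         # data vector

    # v is a combination of only the m columns of Phi on the support of s.
    assert np.allclose(v, Phi[:, support] @ s[support])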

  8. Orthogonal Matching Pursuit (OMP) ❦
  Input: a measurement matrix Φ, data vector v, and sparsity level m
  Initialize the residual r_0 = v
  For t = 1, ..., m do
    A. Find the column index ω_t that solves ω_t = arg max_{j = 1, ..., d} |⟨r_{t−1}, φ_j⟩|
    B. Calculate the next residual r_t = v − P_t v, where P_t is the orthogonal projector onto span{φ_{ω_1}, ..., φ_{ω_t}}
  Output: an m-sparse estimate ŝ with nonzero entries in components ω_1, ..., ω_m. These entries appear in the expansion
      P_m v = Σ_{t=1}^{m} ŝ_{ω_t} φ_{ω_t}
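A minimal NumPy sketch of the OMP loop above (this is illustrative code, not the authors' implementation; the projection step is realized with a dense least-squares solve):

    import numpy as np

    def omp(Phi, v, m):
        """Sketch of Orthogonal Matching Pursuit.

        Phi : (N, d) measurement matrix, v : (N,) data vector, m : sparsity level.
        Returns an m-sparse estimate s_hat of length d.
        """
        N, d = Phi.shape
        residual = v.copy()
        chosen = []                                   # indices omega_1, ..., omega_m
        for _ in range(m):
            # Step A: column most correlated with the current residual.
            omega = int(np.argmax(np.abs(Phi.T @ residual)))
            chosen.append(omega)
            # Step B: project v onto the span of the chosen columns (least squares),
            # then update the residual r_t = v - P_t v.
            coeffs, *_ = np.linalg.lstsq(Phi[:, chosen], v, rcond=None)
            residual = v - Phi[:, chosen] @ coeffs
        s_hat = np.zeros(d)
        s_hat[chosen] = coeffs
        return s_hat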

  9. Advantages of OMP ❦
  We propose OMP as an effective method for signal recovery because
  ❧ OMP is fast
  ❧ OMP is easy to implement
  ❧ OMP is surprisingly powerful
  ❧ OMP is provably correct
  The goal of this lecture is to justify these assertions

  10. Theoretical Performance of OMP ❦
  Theorem 1. [T–G 2005] Choose an error exponent p.
  ❧ Let s be an arbitrary m-sparse signal in R^d
  ❧ Draw N = O(p m log d) Gaussian or Bernoulli(?) measurements of s
  ❧ Execute OMP with the data vector to obtain an estimate ŝ
  The estimate ŝ equals the signal s with probability exceeding 1 − 2d^{−p}.
  To achieve 99% success probability in practice, take N ≈ 2 m ln d
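For orientation, the rule of thumb is easy to evaluate (my own arithmetic, not from the slides): for d = 256 and m = 4 it suggests N ≈ 2 · 4 · ln 256 ≈ 44 measurements.

    import math

    # Rule-of-thumb measurement counts N ~ 2*m*ln(d) for a few illustrative cases.
    for d, m in [(256, 4), (256, 12), (1024, 5), (1024, 15)]:
        print(d, m, round(2 * m * math.log(d)))   # e.g. d=256, m=4 -> ~44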

  11. Flowchart for Algorithm ❦
  ❧ Specify a coin-tossing algorithm, including the distribution of coin flips
  ❧ Adversary chooses an arbitrary m-sparse signal, knowing the algorithm and the distribution of coin flips but with no knowledge of the measurement vectors
  ❧ Flip coins and determine the measurement vectors, with no knowledge of the signal choice
  ❧ Measure the signal, run the greedy pursuit algorithm, and output the signal

  12. Empirical Results on OMP ❦
  For each trial...
  ❧ Generate an m-sparse signal s in R^d by choosing m components and setting each to one
  ❧ Draw N Gaussian measurements of s
  ❧ Execute OMP to obtain an estimate ŝ
  ❧ Check whether ŝ = s
  Perform 1000 independent trials for each triple (m, N, d)
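A sketch of one such experiment, mirroring the setup just described (the parameters, seed, and the omp() helper from the earlier sketch are my own illustrative choices):

    import numpy as np

    def trial(d, m, N, rng):
        """One recovery trial: sparse 0-1 signal, Gaussian measurements, OMP, exact-recovery check."""
        s = np.zeros(d)
        s[rng.choice(d, size=m, replace=False)] = 1.0   # m components set to one
        Phi = rng.standard_normal((N, d))               # N Gaussian measurements
        s_hat = omp(Phi, Phi @ s, m)                    # omp() from the earlier sketch
        return np.allclose(s_hat, s)

    rng = np.random.default_rng(3)
    d, m, N = 256, 4, 56
    successes = sum(trial(d, m, N, rng) for _ in range(1000))
    print(successes / 1000)   # empirical recovery probability for this (m, N, d)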

  13. Percentage Recovered vs. Number of Gaussian Measurements ❦
  [Figure: percentage of input signals recovered correctly (d = 256, Gaussian measurements) versus the number of measurements N, for sparsity levels m = 4, 12, 20, 28, 36]

  14. Percentage Recovered vs. Number of Bernoulli Measurements ❦
  [Figure: percentage of input signals recovered correctly (d = 256, Bernoulli measurements) versus the number of measurements N, for sparsity levels m = 4, 12, 20, 28, 36]

  15. Percentage Recovered vs. Level of Sparsity ❦
  [Figure: percentage of input signals recovered correctly (d = 256, Gaussian measurements) versus the sparsity level m, for N = 52, 100, 148, 196, 244]

  16. Number of Measurements for 95% Recovery ❦
  Regression line: N = 1.5 m ln d + 15.4
  [Figure: number of measurements N needed to achieve 95% reconstruction probability (Gaussian, d = 256) versus sparsity level m, showing the empirical values and the linear regression fit]
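Evaluating the regression line at d = 256 for a few sparsity levels gives a feel for the scaling (my own arithmetic, for orientation only):

    import math

    # The slide's 95%-recovery regression line N = 1.5*m*ln(d) + 15.4 at d = 256.
    d = 256
    for m in (5, 10, 20, 30):
        print(m, round(1.5 * m * math.log(d) + 15.4))   # m=10 -> ~99, m=20 -> ~182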

  17. Number of Measurements for 99% Recovery ❦
            d = 256                      d = 1024
      m     N     N/(m ln d)       m     N     N/(m ln d)
      4     56    2.52             5     80    2.31
      8     96    2.16             10    140   2.02
      12    136   2.04             15    210   2.02
      16    184   2.07             20    228   2.05
  These data justify the rule of thumb N ≈ 2 m ln d

  18. Percentage Recovered: Empirical vs. Theoretical ❦
  [Figure: percentage of input signals recovered correctly (d = 1024, Gaussian measurements) versus the number of measurements N, comparing empirical and theoretical curves for m = 5, 10, 15]

  19. Execution Time for 1000 Complete Trials ❦
  [Figure: execution time in seconds for 1000 instances (Bernoulli measurements) versus sparsity level m, with quadratic fits, for (d = 1024, N = 400) and (d = 256, N = 250)]

  20. Elements of the Proof I ❦
  A Thought Experiment
  ❧ Fix an m-sparse signal s and draw a measurement matrix Φ
  ❧ Let Φ_opt consist of the m correct columns of Φ
  ❧ Imagine we could run OMP with the data vector and the matrix Φ_opt
  ❧ It would choose all m columns of Φ_opt in some order
  ❧ If we run OMP with the full matrix Φ and it succeeds, then it must select columns in exactly the same order

  21. Elements of the Proof II ❦
  The Sequence of Residuals
  ❧ If OMP succeeds, we know the sequence of residuals r_1, ..., r_m
  ❧ Each residual lies in the span of the correct columns of Φ
  ❧ Each residual is stochastically independent of the incorrect columns

  22. Elements of the Proof III ❦
  The Greedy Selection Ratio
  ❧ Suppose that r is the residual in Step A of OMP
  ❧ The algorithm picks a correct column of Φ whenever
        ρ(r) = max_{j : s_j = 0} |⟨r, φ_j⟩| / max_{j : s_j ≠ 0} |⟨r, φ_j⟩| < 1
  ❧ The proof shows that ρ(r_t) < 1 for all t with high probability
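For intuition, the greedy selection ratio is simple to compute once the support of s is known; a small sketch (illustrative only, not part of the proof):

    import numpy as np

    def greedy_selection_ratio(Phi, r, s):
        """rho(r): max |<r, phi_j>| over incorrect columns (s_j = 0)
        divided by max |<r, phi_j>| over correct columns (s_j != 0)."""
        correct = s != 0
        correlations = np.abs(Phi.T @ r)
        return correlations[~correct].max() / correlations[correct].max()

    # OMP picks a correct column at this step exactly when the ratio is below 1.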
