  1. Super-resolution and sensor calibration in imaging
     Wenjing Liao, School of Mathematics, Georgia Institute of Technology
     ICERM, September 25, 2017

  2. My collaborators
     Albert Fannjiang (UC Davis), Weilin Li (UMD), Yonina C. Eldar (Tel-Aviv University, Israel), Sui Tang (JHU)

  3. Outline
     1. Super-resolution
        ◮ Resolution in imaging
        ◮ Super-resolution limit and min-max error
        ◮ Super-resolution algorithms
     2. Sensor calibration
        ◮ Problem formulation
        ◮ Uniqueness
        ◮ An optimization approach
        ◮ Numerical simulations

  4. Source localization with sensor array
     An aperture of M sensors measures S far-field point sources located at ω_j ∈ [0, 1) with amplitudes x_j.
     Point sources: x(t) = ∑_{j=1}^{S} x_j δ(t − ω_j), ω_j ∈ [0, 1)
     Measurement at the m-th sensor, m = 0, …, M − 1:
        y_m = ∑_{j=1}^{S} x_j e^{−2πi m ω_j} + e_m
     Measurements: {y_m : m = 0, …, M − 1}
     To recover: source locations {ω_j}_{j=1}^{S} and source amplitudes {x_j}_{j=1}^{S}.
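
A minimal numerical sketch of this measurement model; all sizes, source locations, amplitudes, and the noise level below are illustrative and not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
M, S = 64, 3                                  # number of sensors, number of sources
omega = np.array([0.200, 0.210, 0.600])       # source locations in [0, 1)
x = rng.standard_normal(S) + 1j * rng.standard_normal(S)    # source amplitudes

m = np.arange(M)
# y_m = sum_j x_j * exp(-2*pi*i*m*omega_j) + e_m
y = np.exp(-2j * np.pi * np.outer(m, omega)) @ x
y += 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))   # noise e
```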

  5. Rayleigh criterion
        x̂(ω) = (1/M) ∑_{m=0}^{M−1} y_m e^{2πi m ω}
     Rayleigh length = 1/M
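
A sketch of this imaging function evaluated on a fine grid; the 1/M normalization and all parameter values are assumptions made for illustration:

```python
import numpy as np

M = 64                                    # Rayleigh length = 1/M
omega = np.array([0.200, 0.210, 0.600])   # the first two sources are less than 1 RL apart
m = np.arange(M)
y = np.exp(-2j * np.pi * np.outer(m, omega)) @ np.ones(3)

grid = np.linspace(0.0, 1.0, 4000, endpoint=False)
# x_hat(w) = (1/M) * sum_m y_m * exp(2*pi*i*m*w)
image = np.abs(np.exp(2j * np.pi * np.outer(grid, m)) @ y) / M
# Sources separated by less than about one Rayleigh length merge into a single
# blurred peak of |image|: this is the resolution limit the talk starts from.
```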

  6. Inverse Fourier transform and the MUSIC algorithm
     Multiple Signal Classification (MUSIC): [Schmidt 1981]
     (Figure: reconstructions from noise-free and noisy measurements.)

  7. Interesting questions
     1. What is the super-resolution limit of the “best” algorithm?
     2. What is the super-resolution limit of a specific algorithm?
        ◮ MUSIC [Schmidt 1981]
        ◮ ESPRIT [Roy and Kailath 1989]
        ◮ the matrix pencil method [Hua and Sarkar 1990]

  8. Existing works
     1. Super-resolution limit with continuous measurements
        ◮ Donoho 1992, Demanet and Nguyen 2015
     2. Performance guarantees for well separated point sources
        ◮ Total variation minimization [Candès and Fernandez-Granda 2013, 2014, Tang, Bhaskar, Shah and Recht 2013, Duval and Peyré 2015, Li 2017]
        ◮ Greedy algorithms [Duarte and Baraniuk 2013, Fannjiang and L. 2012]
        ◮ MUSIC [L. and Fannjiang 2016]
        ◮ The matrix pencil method [Moitra 2015]
     3. Performance guarantees for super-resolution
        ◮ Total variation minimization for positive sources [Morgenshtern and Candès 2016] or sources with certain sign patterns [Benedetto and Li 2016]
        ◮ Lasso for positive sources [Denoyelle, Duval and Peyré 2016]

  9. Discretization on a fine grid
     Grid spacing 1/N ≪ 1/M on [0, 1).
     Point sources: μ = ∑_{n=0}^{N−1} x_n δ_{n/N} with x ∈ ℂ^N S-sparse
     Measurement vector y = Φ x + e, where Φ ∈ ℂ^{M×N} is the first M rows of the N × N DFT matrix:
        Φ_{m,n} = e^{−2πi m n / N}
     and ‖e‖_2 ≤ δ.
     Super-resolution factor (SRF) := N / M
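
A sketch of the discretized model; grid sizes, support, amplitudes, and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, S = 64, 512, 3                      # SRF = N / M = 8
Phi = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(N)) / N)  # first M rows of the N x N DFT

x = np.zeros(N, dtype=complex)
x[[100, 101, 300]] = [1.0, -1.0, 0.5]     # S-sparse amplitudes on the grid {n/N}
e = 1e-3 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = Phi @ x + e                           # y = Phi x + e with ||e||_2 <= delta
SRF = N / M
```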

  10. Connection to compressive sensing
     Sensing matrices contain certain rows of the DFT matrix.
     (Figure: (a) compressive sensing; (b) super-resolution.)

  11. Min-max error
     Definition (S-min-max error). Fix positive integers M, N, S such that S ≤ M ≤ N and let δ > 0. The S-min-max error is
        E(M, N, S, δ) = inf_{x̃ = x̃(y, M, N, S, δ) ∈ ℂ^N}  sup_{x ∈ ℂ^N, ‖x‖_0 ≤ S}  sup_{e ∈ ℂ^M : ‖e‖_2 ≤ δ}  ‖x̃ − x‖_2,
     where y = Φ x + e and the infimum runs over all estimators x̃ built from the data.

  12. Sharp bound on the min-max error
     Theorem (Li and L. 2017). There exist constants A(S), B(S), C(S) such that:
     1. Lower bound. If M ≥ 2S and N ≥ C(2S) M^{3/2}, then
           E(M, N, S, δ) ≥ δ SRF^{2S−1} / (2 B(2S) √M).
     2. Upper bound. If M ≥ 4S(2S + 1) and N ≥ M^2 / (2S^2), then
           E(M, N, S, δ) ≤ 2 δ SRF^{2S−1} / (A(2S) √M).
     The best algorithm in the upper bound:
        min ‖z‖_0 subject to ‖Φ z − y‖_2 ≤ δ
     W. Li and W. Liao, “Stable super-resolution limit and smallest singular value of restricted Fourier matrices,” preprint, arXiv:1709.03146.
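
The estimator in the upper bound is combinatorial. Purely to make the program min ‖z‖_0 s.t. ‖Φz − y‖_2 ≤ δ concrete, here is a brute-force sketch that searches supports of increasing size and returns the sparsest feasible least-squares fit. It is exponential in N, only usable for tiny examples, and not claimed to be the method analyzed in the paper:

```python
import numpy as np
from itertools import combinations

def l0_recover(Phi, y, delta, max_sparsity):
    """Smallest-support z with ||Phi z - y||_2 <= delta, found by exhaustive search."""
    _, N = Phi.shape
    for k in range(max_sparsity + 1):           # try sparsity levels 0, 1, 2, ...
        for supp in combinations(range(N), k):
            z = np.zeros(N, dtype=complex)
            if k > 0:
                cols = list(supp)
                coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
                z[cols] = coef                   # best fit supported on supp
            if np.linalg.norm(Phi @ z - y) <= delta:   # feasible, hence optimal in ||.||_0
                return z
    return None   # nothing with at most max_sparsity nonzeros fits within delta
```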

  13. Multiple Signal Classification (MUSIC)
     Pioneering work: Prony 1795
     MUSIC in signal processing: Schmidt 1981
     MUSIC in imaging: Devaney 2000, Devaney, Marengo and Gruber 2005, Cheney 2001, Kirsch 2002
     Related: the linear sampling method [Cakoni, Colton and Monk 2011], the factorization method [Kirsch and Grinberg 2008]

  14. MUSIC
     Assumption: S is known.
        y_m = ∑_{j=1}^{S} x_j e^{−2πi m ω_j},  m = 0, …, M − 1.
     Form the L × (M − L + 1) Hankel matrix of the measurements:
        H = Hankel(y) = [ y_0      y_1    …  y_{M−L}
                          y_1      y_2    …  y_{M−L+1}
                          ⋮        ⋮          ⋮
                          y_{L−1}  y_L    …  y_{M−1} ]  =  Φ_L X (Φ_{M−L+1})^T,
     where Φ_L is L × S, X = diag(x_1, …, x_S) is S × S, (Φ_{M−L+1})^T is S × (M − L + 1),
        φ_L(ω) = [1  e^{−2πi ω}  …  e^{−2πi (L−1) ω}]^T ∈ ℂ^L,  and  Φ_L = [φ_L(ω_1) … φ_L(ω_S)] ∈ ℂ^{L×S}.
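
A small sketch of the Hankel construction; the helper name hankel_matrix and the common choice L ≈ M/2 are ours, not prescribed by the slide:

```python
import numpy as np

def hankel_matrix(y, L):
    """L x (M - L + 1) Hankel matrix with entries H[k, l] = y[k + l]."""
    M = len(y)
    return np.array([[y[k + l] for l in range(M - L + 1)] for k in range(L)])

# e.g. H = hankel_matrix(y, L=len(y) // 2); for noiseless data from S distinct sources
# with nonzero amplitudes, rank(H) = S once L >= S and M - L + 1 >= S, matching the
# factorization H = Phi_L X (Phi_{M-L+1})^T above.
```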

  15. MUSIC with noiseless measurements
        H = Φ_L X (Φ_{M−L+1})^T
     Suppose {ω_j}_{j=1}^{S} are distinct.
     1. If L ≥ S, then rank(Φ_L) = S.
     2. If M − L + 1 ≥ S, then Range(H) = Range(Φ_L).
     3. If L ≥ S + 1, then rank([Φ_L  φ_L(ω)]) = S + 1 if and only if ω ∉ {ω_j}_{j=1}^{S}.
     Theorem. If L ≥ S + 1 and M − L + 1 ≥ S, then ω ∈ {ω_j}_{j=1}^{S} iff φ_L(ω) ∈ Range(H).
     Exact recovery with M ≥ 2S regardless of the support.

  16. Range(H) = Range(Φ_L)
     (Figure: the signal space spanned by φ_L(ω_1), …, φ_L(ω_S) and its orthogonal complement, the noise space; φ_L(ω) leaves the signal space when ω ∉ {ω_j}_{j=1}^{S}.)
     Noise-space correlation function:
        N(ω) = ‖P_noise φ_L(ω)‖_2 / ‖φ_L(ω)‖_2
     Imaging function:
        J(ω) = 1 / N(ω)
     N(ω_j) = 0 and J(ω_j) = ∞, j = 1, …, S.
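
A compact sketch of the MUSIC imaging function along these lines, with the noise space taken from the SVD of the Hankel matrix; the function name music_imaging, all parameter values, and the tiny regularizing noise are illustrative:

```python
import numpy as np

def music_imaging(y, S, L, grid):
    """J(w) = 1 / N(w), where N(w) = ||P_noise phi_L(w)||_2 / ||phi_L(w)||_2."""
    M = len(y)
    H = np.array([[y[k + l] for l in range(M - L + 1)] for k in range(L)])
    U = np.linalg.svd(H)[0]
    U_noise = U[:, S:]                                    # orthonormal basis of the noise space
    Phi_grid = np.exp(-2j * np.pi * np.outer(np.arange(L), grid))   # columns phi_L(w)
    N_w = np.linalg.norm(U_noise.conj().T @ Phi_grid, axis=0) / np.sqrt(L)
    return 1.0 / N_w                                      # peaks at the source locations

# usage sketch
rng = np.random.default_rng(0)
M, S, L = 64, 3, 32                       # L >= S + 1 and M - L + 1 >= S
omega = np.array([0.200, 0.210, 0.600])
y = np.exp(-2j * np.pi * np.outer(np.arange(M), omega)) @ np.ones(S)
y += 1e-6 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))   # keeps J finite (J = inf at the sources when noiseless)
grid = np.linspace(0.0, 1.0, 4000, endpoint=False)
J = music_imaging(y, S, L, grid)
```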

  17. MUSIC with noisy measurements
     Three sources separated by 0.5 RL, e ∼ N(0, σ²I_M).
     (Figure: MUSIC imaging function versus the exact frequencies for σ = 0, 0.001, 0.01; the support errors are about 2.8e-15 RL, 0.06 RL, and 3.96 RL respectively.)
     Recall the upper bound on the min-max error:
        E(M, N, S, δ) ≲ δ SRF^{2S−1} / √M.
     The noise that the “best” algorithm can handle is δ ∼ (1/SRF)^{2S−1}.

  18. Phase transition
     S consecutive point sources on the grid with spacing 1/N.
     Support error: d({ω_j}_{j=1}^{S}, {ω̂_j}_{j=1}^{S})
     Noise e ∼ N(0, σ²I_M) + i·N(0, σ²I_M), so E‖e‖_2 = √(2M) σ.
     (Figure: the average log_2[support error / (1/N)] over 100 trials, plotted against −log_{10} SRF (x-axis) and log_{10} σ (y-axis), for S = 2, 3, 4.)

  19. Super-resolution limit of MUSIC
     The phase transition curve is
        σ ∼ (1/SRF)^{p(S)},  where  2S − 1 ≤ p(S) ≤ 2S.
     (Figure: fitted phase transition curves with slopes ≈ 3.31 for S = 2, 5.21 for S = 3, 7.68 for S = 4, and 9.93 for S = 5.)
     Future work: support error by MUSIC ≲ SRF^{p(S)} · σ.

  20. Outline
     1. Super-resolution
        ◮ Resolution in imaging
        ◮ Super-resolution limit and min-max error
        ◮ Super-resolution algorithms
     2. Sensor calibration
        ◮ Problem formulation
        ◮ Uniqueness
        ◮ An optimization approach
        ◮ Numerical simulations

  21. Sensor calibration
     Measurement at the m-th sensor, m = 0, …, M − 1:
        y_m(t) = g_m ∑_{j=1}^{S} x_j(t) e^{−2πi m ω_j} + e_m(t)
     Multiple snapshots of measurements: {y_m(t), m = 0, …, M − 1, t ∈ Γ}
     To recover:
        Calibration parameters g = {g_m}_{m=0}^{M−1} ∈ ℂ^M
        Source locations {ω_j}_{j=1}^{S} and source amplitudes x_j(t)
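
A hedged sketch of this multi-snapshot model with unknown sensor gains; the gains, locations, powers, and noise level below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
M, S, T = 32, 2, 5000                         # sensors, sources, number of snapshots
omega = np.array([0.15, 0.40])                # unknown source locations
gamma = np.array([1.0, 0.7])                  # source powers: E|x_j(t)|^2 = gamma_j^2
sigma = 0.05                                  # noise level

g = (1 + 0.3 * rng.random(M)) * np.exp(2j * np.pi * rng.random(M))   # unknown gains, |g_m| != 0
A = np.exp(-2j * np.pi * np.outer(np.arange(M), omega))              # A[m, j] = e^{-2 pi i m omega_j}
X = (gamma[:, None] / np.sqrt(2)) * (rng.standard_normal((S, T)) + 1j * rng.standard_normal((S, T)))
E = (sigma / np.sqrt(2)) * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = np.diag(g) @ A @ X + E                    # column t is the snapshot y(t) = diag(g) A x(t) + e(t)
```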

  22. Assumptions
     Matrix form:
        y(t) = diag(g) A x(t) + e(t),
     with y(t) ∈ ℂ^M, diag(g) ∈ ℂ^{M×M}, A ∈ ℂ^{M×S}, x(t) ∈ ℂ^S, e(t) ∈ ℂ^M, and
        A_{m,j} = e^{−2πi m ω_j},
        x(t) = [x_1(t) … x_S(t)]^T,  y(t) = [y_0(t) … y_{M−1}(t)]^T,  e(t) = [e_0(t) … e_{M−1}(t)]^T.
     Assumptions:
     1. |g_m| ≠ 0, m = 0, …, M − 1;
     2. E x(t) = 0 and E e(t) = 0;
     3. R_x := E x(t) x*(t) = diag({γ_j²}_{j=1}^{S});
     4. E x(t) e*(t) = 0;
     5. E e(t) e*(t) = σ²I_M, where σ represents the noise level.
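
These assumptions imply that the snapshot covariance has the structure R_y = diag(g) A R_x A* diag(g)* + σ²I_M. A short numerical check of that identity, assuming the synthetic variables Y, g, A, gamma, sigma, T from the previous sketch are in scope:

```python
# Continuing the synthetic snapshots above: assumptions 2-5 give
#   R_y := E[y(t) y(t)^*] = diag(g) A R_x A^* diag(g)^* + sigma^2 I_M,
# with R_x = diag(gamma_1^2, ..., gamma_S^2).
R_y_hat = Y @ Y.conj().T / T                            # empirical covariance over T snapshots
R_y = (np.diag(g) @ A @ np.diag(gamma**2) @ A.conj().T @ np.diag(g).conj().T
       + sigma**2 * np.eye(M))
print(np.linalg.norm(R_y_hat - R_y) / np.linalg.norm(R_y))   # small, decaying like 1/sqrt(T)
```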
