
Sparse Approximation of Signals and Images - Gerlind Plonka - PowerPoint PPT Presentation



1. Sparse Approximation of Signals and Images
Gerlind Plonka
Institute for Numerical and Applied Mathematics, University of Göttingen
in collaboration with Dennis Heinen, Armin Iske (Hamburg), Thomas Peter, Daniela Roşca (Cluj-Napoca), Manfred Tasche (Rostock), Stefanie Tenorth, Katrin Wannenwetsch, Marius Wischerhoff
Chemnitz, September 2013

2. Sparse Approximation of Signals and Images
Outline
• Talk 1: Prony-like methods and applications I
• Talk 2: Prony-like methods and applications II
• Talk 3: Adaptive transforms for sparse image approximation

3. Prony-like methods and applications I
Outline
• Introduction
• Classical Prony method
• Relations to other methods
• Numerical approach
• Recovery of spline functions from a few Fourier samples
• Reconstruction of sparse linear combinations of translates of a function

4. Introduction (a)
Consider the following problem:
Function: $f(x) = \sum_{j=1}^{M} c_j \, e^{T_j x}$
We have: function values $f(\ell)$, $\ell = 0, \dots, 2N-1$, with $N \ge M$
We want: $c_j \in \mathbb{C}$, $T_j \in [-\alpha, 0] + i[-\pi, \pi)$, $j = 1, \dots, M$
Problem: Find the best M-term approximation, i.e. an optimal linear combination of M terms from the set $\{ e^{T_j x} : T_j \in [-\alpha, 0] + i[-\pi, \pi) \}$.

5. Introduction (b)
Determine the breakpoints and the associated jump magnitudes of a compactly supported piecewise polynomial if finitely many values of its Fourier transform are given.
Consider
$$ f(x) = \sum_{j=1}^{N} c_j \, B_j^m(x), $$
where $B_j^m$ is a B-spline of order $m$ with knots $T_j, \dots, T_{j+m}$.
How many Fourier samples of $f$ are needed in order to recover $f$ completely?
Example: $f(x) = c_1 \, \mathbf{1}_{[T_1, T_2)}(x)$.
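To see why finitely many Fourier samples can suffice, here is a worked equation for the example above (my own addition, using the convention $\hat{f}(\omega) = \int_{\mathbb{R}} f(x)\, e^{-i\omega x}\, dx$):

$$ \hat{f}(\omega) = c_1 \int_{T_1}^{T_2} e^{-i\omega x}\, dx = \frac{c_1}{i\omega}\left( e^{-i\omega T_1} - e^{-i\omega T_2} \right), \qquad \omega \neq 0, $$

so $i\omega \, \hat{f}(\omega)$ is an exponential sum in $\omega$ whose "frequencies" are the breakpoints $T_1, T_2$ and whose coefficients are the jumps of $f$; Prony-type methods can then recover breakpoints and jumps from samples $\hat{f}(\omega_\ell)$.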

6. Introduction (c)
A vector $x \in \mathbb{C}^N$ is called M-sparse if $M \ll N$ and only M components of $x$ are different from zero.
Problem: Recover an M-sparse vector $x \in \mathbb{C}^N$ if only a few scalar products $y_k = a_k^* x$, $k = 1, \dots, 2L$ with $L \ge M$, with suitably chosen vectors $a_k \in \mathbb{C}^N$, are given.
With $A^T = (a_1, \dots, a_{2L}) \in \mathbb{C}^{N \times 2L}$, find an M-sparse solution of the underdetermined system $Ax = y$.

7. Classical Prony method
Function: $f(x) = \sum_{j=1}^{M} c_j \, e^{T_j x}$
We have: M and the values $f(\ell)$, $\ell = 0, \dots, 2M-1$
We want: $c_j \in \mathbb{C}$, $T_j \in [-\alpha, 0] + i[-\pi, \pi)$, $j = 1, \dots, M$

8. Classical Prony method
Function: $f(x) = \sum_{j=1}^{M} c_j \, e^{T_j x}$
We have: M and the values $f(\ell)$, $\ell = 0, \dots, 2M-1$
We want: $c_j \in \mathbb{C}$, $T_j \in [-\alpha, 0] + i[-\pi, \pi)$, $j = 1, \dots, M$
Introduce the Prony polynomial
$$ P(z) := \prod_{j=1}^{M} \left( z - e^{T_j} \right) = \sum_{\ell=0}^{M} p_\ell \, z^\ell $$
with unknown parameters $T_j$ and $p_M = 1$. Then
$$ \sum_{\ell=0}^{M} p_\ell \, f(\ell+m) = \sum_{\ell=0}^{M} p_\ell \sum_{j=1}^{M} c_j e^{T_j(\ell+m)} = \sum_{j=1}^{M} c_j e^{T_j m} \sum_{\ell=0}^{M} p_\ell e^{T_j \ell} = \sum_{j=1}^{M} c_j e^{T_j m} P(e^{T_j}) = 0, \qquad m = 0, \dots, M-1. $$

9. Reconstruction algorithm
Input: $f(\ell)$, $\ell = 0, \dots, 2M-1$
• Solve the Hankel system
$$ \begin{pmatrix} f(0) & f(1) & \dots & f(M-1) \\ f(1) & f(2) & \dots & f(M) \\ \vdots & \vdots & & \vdots \\ f(M-1) & f(M) & \dots & f(2M-2) \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ \vdots \\ p_{M-1} \end{pmatrix} = - \begin{pmatrix} f(M) \\ f(M+1) \\ \vdots \\ f(2M-1) \end{pmatrix} $$
• Compute the zeros of the Prony polynomial $P(z) = \sum_{\ell=0}^{M} p_\ell z^\ell$ and extract the parameters $T_j$ from its zeros $z_j = e^{T_j}$, $j = 1, \dots, M$.
• Compute $c_j$ by solving the linear system
$$ f(\ell) = \sum_{j=1}^{M} c_j e^{T_j \ell}, \qquad \ell = 0, \dots, 2M-1. $$
Output: Parameters $T_j$ and $c_j$, $j = 1, \dots, M$.
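A minimal numerical sketch of these three steps (my own illustration, not from the slides; the exponents T and coefficients c below are made up for testing):

# Sketch of the classical Prony reconstruction from 2M exact samples.
import numpy as np
from scipy.linalg import hankel

M = 3
T = np.array([-0.10 + 1.0j, -0.05 - 2.0j, 0.0 + 0.5j])            # assumed exponents
c = np.array([1.0, 2.0, -0.5 + 0.3j])                             # assumed coefficients
f = np.array([np.sum(c * np.exp(T * l)) for l in range(2 * M)])   # samples f(0),...,f(2M-1)

# Step 1: solve the Hankel system H p = -(f(M),...,f(2M-1))^T
H = hankel(f[:M], f[M - 1:2 * M - 1])
p = np.linalg.solve(H, -f[M:2 * M])

# Step 2: zeros of P(z) = z^M + p_{M-1} z^{M-1} + ... + p_0, then T_j = log(z_j)
z = np.roots(np.concatenate(([1.0], p[::-1])))    # np.roots expects highest degree first
T_rec = np.log(z)

# Step 3: coefficients c_j from the Vandermonde-type system f(l) = sum_j c_j exp(T_j l)
V = np.exp(np.outer(np.arange(2 * M), T_rec))
c_rec, *_ = np.linalg.lstsq(V, f, rcond=None)

print(np.sort_complex(T_rec), np.sort_complex(T))  # recovered vs. true exponents (up to ordering)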

10. Literature
[Prony] (1795): Reconstruction of a difference equation
[Schmidt] (1979): MUSIC (Multiple Signal Classification)
[Roy, Kailath] (1989): ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques)
[Hua, Sarkar] (1990): Matrix-pencil method
[Stoica, Moses] (2000): Annihilating filters
[Potts, Tasche] (2010, 2011): Approximate Prony method
Golub, Milanfar, Varah ('99); Vetterli, Marziliano, Blu ('02); Maravić, Vetterli ('04); Elad, Milanfar, Golub ('04); Beylkin, Monzón ('05, '10); Batenkov, Sarig, Yomdin ('12, '13); Filbir, Mhaskar, Prestin ('12); Peter, Potts, Tasche ('11, '12, '13); Plonka, Wischerhoff ('13); ...

11. Relation to differential and difference equations
Consider a homogeneous linear differential equation with constant coefficients of the form
$$ \sum_{k=0}^{M} \xi_k \, f^{(k)}(x) = 0 . $$
Then a general solution is of the form
$$ f(x) = \sum_{k=1}^{M} c_k \, e^{\lambda_k x}, $$
where the $\lambda_k$ are the pairwise distinct zeros of the characteristic polynomial $\sum_{k=0}^{M} \xi_k z^k$.
Hence, the Prony method recovers solutions of differential equations from their function values $f(\ell)$, $\ell = 0, 1, \dots, 2M-1$.
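A small worked example (my own illustration, not on the slide; the equation is chosen for convenience):

$$ f''(x) - 3f'(x) + 2f(x) = 0, \qquad \sum_{k=0}^{2} \xi_k z^k = z^2 - 3z + 2 = (z-1)(z-2), $$

so $\lambda_1 = 1$, $\lambda_2 = 2$, every solution has the form $f(x) = c_1 e^{x} + c_2 e^{2x}$, and Prony's method with $M = 2$ recovers $\lambda_1, \lambda_2$ and $c_1, c_2$ from the four samples $f(0), f(1), f(2), f(3)$.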

12. Analogously, consider a homogeneous difference equation with constant coefficients of the form
$$ \sum_{k=0}^{M} p_k \, f(k+m) = 0, \qquad m \in \mathbb{Z}. $$
Then a general solution is of the form
$$ f(x) = \sum_{k=1}^{M} c_k \, e^{\lambda_k x}, $$
where the $e^{\lambda_k}$ are the pairwise distinct zeros of the characteristic polynomial $\sum_{k=0}^{M} p_k z^k$.
Hence, the Prony method recovers solutions of difference equations from their values $f(\ell)$, $\ell = 0, 1, \dots, 2M-1$.

13. Relation to linear prediction methods
Let $h = (h_n)_{n \in \mathbb{N}_0}$ be a discrete signal.
Linear prediction method: Find suitable predictor parameters $p_j \in \mathbb{C}$ such that the signal value $h_{\ell+M}$ can be expressed as a linear combination of the previous signal values $h_j$, $j = \ell, \dots, \ell+M-1$, i.e.
$$ h_{\ell+M} = \sum_{j=0}^{M-1} (-p_j) \, h_{\ell+j}, \qquad \ell \in \mathbb{N}_0. $$
With $p_M := 1$, this representation is equivalent to a homogeneous linear difference equation.
Assuming that
$$ h_k = \sum_{j=1}^{M} c_j \, z_j^k, \qquad k \in \mathbb{N}_0, $$
we obtain the classical Prony problem, where the Prony polynomial coincides with the negative value of the forward predictor polynomial.

14. Relation to annihilating filters
Consider the discrete signal $h = (h_n)_{n \in \mathbb{Z}}$ with
$$ h_n := \sum_{j=1}^{M} c_j \, z_j^n, \qquad n \in \mathbb{Z}, \quad z_j \in \mathbb{C}, \ |z_j| = 1. $$
The signal $a = (a_n)_{n \in \mathbb{Z}}$ is called an annihilating filter of $h$ if
$$ (a * h)_n := \sum_{\ell=-\infty}^{\infty} a_\ell \, h_{n-\ell} = 0, \qquad n \in \mathbb{Z}. $$
Consider
$$ 0 = \sum_{\ell=0}^{M} a_\ell \sum_{j=1}^{M} c_j z_j^{n-\ell} = \sum_{j=1}^{M} c_j z_j^n \sum_{\ell=0}^{M} a_\ell z_j^{-\ell} . $$
Hence, the z-transform $a(z)$ of the annihilating filter $a$ and the Prony polynomial $p(z)$ have the same zeros $z_j$, $j = 1, \dots, M$, since $z^M a(z) = p(z)$ for all $z \in \mathbb{C} \setminus \{0\}$.
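A quick numerical check of this relation (my own sketch; the nodes z_j and coefficients c_j are made up): by the identity above, the filter taps $a_\ell = p_{M-\ell}$, i.e. the Prony coefficients in reversed order, annihilate the signal.

# Sketch: verify that the reversed Prony coefficients annihilate h_n = sum_j c_j z_j^n.
import numpy as np

M = 3
z = np.exp(1j * np.array([0.4, -1.1, 2.3]))    # assumed nodes on the unit circle
c = np.array([1.0, -0.5, 0.25 + 0.1j])         # assumed coefficients
n = np.arange(-10, 11)
h = (c[None, :] * z[None, :] ** n[:, None]).sum(axis=1)

# np.poly returns the coefficients of prod_j (z - z_j) in descending order,
# i.e. (p_M, p_{M-1}, ..., p_0) = (a_0, a_1, ..., a_M): exactly the filter taps.
a = np.poly(z)

# The fully overlapping part of the convolution (a * h)_n must vanish.
print(np.max(np.abs(np.convolve(a, h, mode='valid'))))   # numerically zero (round-off level)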

15. Relation to Padé approximation
Consider $h(k) = \sum_{j=1}^{M} c_j e^{i T_j k} = \sum_{j=1}^{M} c_j z_j^k$ with $z_j = e^{i T_j}$.
By
$$ \sum_{k=0}^{\infty} z_j^k z^{-k} = \frac{1}{1 - z_j/z} = \frac{z}{z - z_j} $$
we find the z-transform of $(h(k))_{k \in \mathbb{N}_0}$,
$$ h(z) = \sum_{k=0}^{\infty} h(k) z^{-k} = \sum_{j=1}^{M} c_j \frac{z}{z - z_j} = \frac{a(z)}{p(z)}, $$
where $p(z)$ is the Prony polynomial and $a(z)$ is a polynomial of degree M.
Hence the Prony method can be regarded as Padé approximation.
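For M = 1 the identification is immediate (my own check of the formula above, not part of the slide):

$$ h(k) = c_1 z_1^k \;\Rightarrow\; h(z) = \sum_{k=0}^{\infty} c_1 z_1^k z^{-k} = \frac{c_1 z}{z - z_1} = \frac{a(z)}{p(z)}, \qquad a(z) = c_1 z, \quad p(z) = z - z_1, $$

i.e. the denominator of the rational function $h(z)$ is exactly the Prony polynomial, which is what Padé approximation of the z-transform exploits.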

16. Numerical approach to Prony's method
Let $f(x) = \sum_{j=1}^{M} c_j e^{T_j x}$.
Given: noisy samples $f_k = f(k) + e_k$, $k = 0, \dots, 2N-1$
Given: $L \ge M$ (upper bound for M) and $N \ge L$
Wanted: M, $c_j \in \mathbb{C}$, $T_j \in [-\alpha, 0] + i[-\pi, \pi)$, $j = 1, \dots, M$.
We consider the Hankel matrix
$$ H_{2N-L,\,L+1} = (f_{\ell+m})_{\ell,m=0}^{2N-L-1,\,L} = \begin{pmatrix} f(0) & f(1) & \dots & f(L) \\ f(1) & f(2) & \dots & f(L+1) \\ \vdots & \vdots & & \vdots \\ f(2N-L-1) & f(2N-L) & \dots & f(2N-1) \end{pmatrix} = (\mathbf{f}_0, \mathbf{f}_1, \dots, \mathbf{f}_L). $$
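In code, this rectangular Hankel matrix can be set up directly from the sample vector (a small sketch; the helper name is mine and f is assumed to hold the 2N noisy samples):

# Sketch: rectangular Hankel matrix H_{2N-L, L+1} from a sample vector f of length 2N.
import numpy as np
from scipy.linalg import hankel

def prony_hankel(f: np.ndarray, L: int) -> np.ndarray:
    """Return H with H[l, m] = f[l + m], shape (2N - L, L + 1)."""
    twoN = len(f)
    return hankel(f[:twoN - L], f[twoN - L - 1:])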

17. For the submatrix $H_{M,M} = (\mathbf{f}_0, \dots, \mathbf{f}_{M-1})$ and $\mathbf{f}_M = (f(M+\ell))_{\ell=0}^{M-1}$ we had
$$ H_M \, p = -\mathbf{f}_M, $$
where $p = (p_0, \dots, p_{M-1})^T$ contains the coefficients of the Prony polynomial.
Consider the companion matrix
$$ C_M(p) = \begin{pmatrix} 0 & 0 & \dots & 0 & -p_0 \\ 1 & 0 & \dots & 0 & -p_1 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \dots & 1 & -p_{M-1} \end{pmatrix}. $$
Then $H_M \, C_M(p) = (\mathbf{f}_1, \dots, \mathbf{f}_M) =: H_M(1)$, and the eigenvalues of $C_M(p) = H_M^{-1} H_M(1)$ are the wanted zeros $z_j = e^{T_j}$ of the Prony polynomial.
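This eigenvalue formulation translates directly into code (a sketch assuming f holds at least 2M exact samples; the function name is mine):

# Sketch: nodes z_j = exp(T_j) as eigenvalues of H_M^{-1} H_M(1), without computing roots of P.
import numpy as np
from scipy.linalg import hankel

def prony_nodes(f: np.ndarray, M: int) -> np.ndarray:
    """Eigenvalues of H_M^{-1} H_M(1), built from the samples f(0), ..., f(2M-1)."""
    H0 = hankel(f[:M], f[M - 1:2 * M - 1])   # columns f_0, ..., f_{M-1}
    H1 = hankel(f[1:M + 1], f[M:2 * M])      # columns f_1, ..., f_M
    return np.linalg.eigvals(np.linalg.solve(H0, H1))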

18. ESPRIT for equispaced sampling (Potts, Tasche '13)
Input: L, $f(\ell)$, $\ell = 0, \dots, 2N-1$, where $L \le N$
• Compute the SVD $H_{2N-L,\,L+1} = U_{2N-L} \, D_{2N-L,\,L+1} \, W_{L+1}$ of the rectangular Hankel matrix $H_{2N-L,\,L+1}$. Determine the approximate rank M of $H_{2N-L,\,L+1}$.
• Put $W_{M,L}(s) := W_{L+1}(1{:}M,\ 1{+}s{:}L{+}s)$, $s = 0, 1$, and
$$ F_M := \left( W_{M,L}(0)^T \right)^{\dagger} W_{M,L}(1)^T, $$
and compute the eigenvalues $z_j = e^{T_j}$, $j = 1, \dots, M$, of the matrix $F_M$.
• Compute $c_j$ by solving the overdetermined linear system
$$ f(\ell) = \sum_{j=1}^{M} c_j e^{T_j \ell}, \qquad \ell = 0, \dots, 2N-1. $$
Output: Parameters $T_j$ and $c_j$, $j = 1, \dots, M$.
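A compact numerical sketch of this ESPRIT variant (my own transcription of the steps above, not the authors' code; the rank threshold eps and all test parameters are assumptions):

# Sketch of the ESPRIT steps above for equispaced samples f(0), ..., f(2N-1).
import numpy as np
from scipy.linalg import hankel

def esprit(f: np.ndarray, L: int, eps: float = 1e-8):
    twoN = len(f)
    H = hankel(f[:twoN - L], f[twoN - L - 1:])          # H_{2N-L, L+1}
    U, s, W = np.linalg.svd(H, full_matrices=False)     # H = U diag(s) W
    M = int(np.sum(s > eps * s[0]))                     # approximate rank from singular values
    W0, W1 = W[:M, 0:L], W[:M, 1:L + 1]                 # W_{M,L}(0), W_{M,L}(1)
    F = np.linalg.pinv(W0.T) @ W1.T
    z = np.linalg.eigvals(F)                            # z_j = exp(T_j)
    T = np.log(z)
    V = np.exp(np.outer(np.arange(twoN), T))            # overdetermined Vandermonde-type system
    c, *_ = np.linalg.lstsq(V, f, rcond=None)
    return T, c

# Usage with synthetic, noise-free test data (parameters chosen for illustration):
T_true = np.array([-0.02 + 1.3j, -0.01 - 0.7j])
c_true = np.array([2.0, 1.0 - 0.5j])
N, L = 20, 6
f = np.exp(np.outer(np.arange(2 * N), T_true)) @ c_true
T_rec, c_rec = esprit(f, L)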

19. Prony-like methods and applications II
Outline
• Recovery of spline functions from a few Fourier samples
• Reconstruction of sparse linear combinations of translates of a function
• Recapitulation: Classical Prony method
• Reconstruction of M-sparse polynomials
• The generalized Prony method
• Application to the translation and the dilation operator
• Recovery of sparse sums of orthogonal polynomials
• Recovery of sparse vectors

20. The Prony method
Function: $P(\omega) = \sum_{j=1}^{N} c_j \, e^{-i \omega T_j}$
Wanted: $c_j \in \mathbb{R} \setminus \{0\}$ and $T_1 < T_2 < \dots < T_N$
Given: $P(h\ell)$, $\ell = 0, \dots, N$, with $h > 0$ and $|h T_j| < \pi$ for all $j = 1, \dots, N$
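To connect this with the setting of Talk 1 (my own remark, not on the slide): sampling at $\omega = h\ell$ turns $P$ into an exponential sum in $\ell$,

$$ P(h\ell) = \sum_{j=1}^{N} c_j \left( e^{-i h T_j} \right)^{\ell} = \sum_{j=1}^{N} c_j z_j^{\ell}, \qquad z_j = e^{-i h T_j}, $$

and the condition $|h T_j| < \pi$ ensures that each $T_j = -\arg(z_j)/h$ is uniquely determined from the node $z_j$ on the unit circle.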
