Signal analysis using sparse representation and proximal optimization methods




  1. Signal analysis using sparse representation and proximal optimization methods. Mai Quyen PHAM, GIPSA-Lab, 11 rue des mathématiques, 38402 Saint Martin d'Hères. 04 November 2016.

  2. Contents. (Multiple + noise) removal: Formulation, Algorithm, Results. Blind deconvolution: Formulation, Algorithm, Results. Conclusions.

  3. (Multiple + noise) removal

  4. Seismic multiple reflection. [Figure: marine acquisition with a hydrophone towed streamer; solid blue rays: primaries; dashed red rays: multiple reflection disturbances.]

  5. (Multiple + noise) removal strategies. Measurement model: for every n ∈ {0, ..., N−1}, the system records z(n) = y(n) + s(n) + b(n), where y(n) is the primary signal emitted by the source, s(n) the multiples and b(n) the noise. Which strategy for restoring the primary signal y(n) corrupted by the unknown multiples s(n), plus noise b(n)?
  • Variational approach
  • Methodology for primary/multiple adaptive separation based on approximate templates
  • Proximal methods to solve the resulting optimization problem
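A minimal numerical sketch of this additive model may help fix notation. Everything below (spike-train primary, delayed attenuated copy as the multiple, noise level, signal length) is an assumption made purely for illustration, not the seismic data of the talk:

    import numpy as np

    # Synthetic sketch of the additive model z(n) = y(n) + s(n) + b(n).
    rng = np.random.default_rng(0)
    N = 1024

    y = np.zeros(N)                       # primary signal: a sparse spike train
    y[[100, 300, 650]] = [1.0, -0.7, 0.5]

    s = 0.6 * np.roll(y, 180)             # multiples: delayed, attenuated copy of y
    b = 0.05 * rng.standard_normal(N)     # additive noise

    z = y + s + b                         # observed measurements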

  6. Multi-model
  • J models (templates) r_j(n) are known (available), but imperfect in time, amplitude and frequency.
  • Assumption: the multiples s̄(n) are linked to the models through time-varying FIR filters:
      s̄(n) = Σ_{j=0}^{J−1} Σ_{p=p'}^{p'+P_j−1} h̄_j^(n)(p) r_j(n − p)
    where h̄_j^(n) is the unknown impulse response of the filter associated with model j at time n (P_j tap coefficients), and p' ∈ {−P_j+1, ..., 0}.
  • New definition: P = Σ_{j=0}^{J−1} P_j.
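The time-varying filtering formula above can be written out directly. The loop-based sketch below is only illustrative; the function name, the choice of a single offset p' shared by all models, and the skipping of samples outside {0, ..., N−1} are assumptions of this sketch:

    import numpy as np

    def multiples_from_templates(templates, filters, p_prime):
        # templates: list of J arrays of length N (the known models r_j)
        # filters:   list of J arrays of shape (N, P_j); row n is the filter at time n
        # p_prime:   the offset p' (assumed common to all models here)
        N = len(templates[0])
        s = np.zeros(N)
        for r_j, h_j in zip(templates, filters):
            P_j = h_j.shape[1]
            for n in range(N):
                for i, p in enumerate(range(p_prime, p_prime + P_j)):
                    if 0 <= n - p < N:          # samples outside the record are ignored
                        s[n] += h_j[n, i] * r_j[n - p]
        return s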

  7. [Figure: the two templates r_0 (first model) and r_1 (second model) and the multiple s̄, plotted as magnitude versus time over roughly 0 to 1000 samples.]

  8. [Figure: observed image, template and primary signal, displayed as space versus time sections.]

  9. Problem reformulation:
      z = R h̄ + ȳ + b
    (z: observed signal; R h̄: the multiples written as filtered templates; ȳ: primary; b: noise)
  where
  • s̄ = [s̄(0), ..., s̄(N−1)]^⊤ = Σ_{j=0}^{J−1} R_j h̄_j = R h̄
  • R = [R_0 · · · R_{J−1}], each R_j being a block-diagonal matrix built from the template r_j
  • h̄ = [h̄_0^⊤ · · · h̄_{J−1}^⊤]^⊤
  • h̄_j = [h̄_j^(0)(p'), ..., h̄_j^(0)(p'+P_j−1), ..., h̄_j^(N−1)(p'), ..., h̄_j^(N−1)(p'+P_j−1)]^⊤
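For small N, the block-diagonal matrices R_j of this reformulation can be assembled explicitly and checked against the time-varying filtering of slide 6. The dense construction below is only for illustration (in practice R would be applied as an operator); the function name and the stacking convention for h̄_j are assumptions of this sketch:

    import numpy as np

    def build_Rj(r_j, P_j, p_prime, N):
        # In row n of R_j, the entry multiplying h_j^(n)(p) is r_j(n - p), so that
        # R_j @ hbar_j gives the contribution of model j to the multiples.
        R_j = np.zeros((N, N * P_j))
        for n in range(N):
            for i, p in enumerate(range(p_prime, p_prime + P_j)):
                if 0 <= n - p < N:
                    R_j[n, n * P_j + i] = r_j[n - p]
        return R_j

    # R = [R_0 ... R_{J-1}] is then the horizontal concatenation of the blocks, e.g.
    # R = np.hstack([build_Rj(r, P, p_prime, N) for r, P in zip(templates, sizes)])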

  10. Estimation of y. Assumption: ȳ is a realization of a random vector Y whose probability density is given by
      (∀ y ∈ R^N)  f_Y(y) ∝ exp(−ϕ(Fy)),
  where F ∈ R^{K×N} is a linear operator and ϕ is chosen separable:
      (∀ x = (x_k)_{1≤k≤K} ∈ R^K)  ϕ(x) = Σ_{k=1}^{K} ϕ_k(x_k),
  with ϕ_k : R → ]−∞, +∞] for every k ∈ {1, ..., K}.
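The slides keep F and the potentials ϕ_k generic. As one hedged instantiation, take F to be an orthonormal DCT and ϕ_k(x) = λ|x| (a Laplacian-type, sparsity-promoting potential); the negative log-prior is then, up to an additive constant:

    import numpy as np
    from scipy.fft import dct

    def neg_log_prior(y, lam=0.1):
        # -log f_Y(y) = ϕ(F y) + const, with F an orthonormal DCT (assumed here)
        # and ϕ_k(x) = lam * |x| for every coefficient k (also an assumption).
        x = dct(y, norm='ortho')        # x = F y
        return lam * np.sum(np.abs(x))  # ϕ(x) = Σ_k ϕ_k(x_k)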

  11. Estimation: filter h and noise b
  • Assumption: h̄ is a realization of a random vector H whose probability density can be expressed as (∀ h ∈ R^{NP}) f_H(h) ∝ exp(−ρ(h)); H is independent of Y.
  • Assumption: b is a realization of a random vector B with probability density (∀ b ∈ R^N) f_B(b) ∝ exp(−ψ(b)); B is assumed independent of Y and H.

  12. Estimation: filter h and noise b (continued). Under the same assumptions on H and B, the MAP estimation of (y, h) reads
      minimize over y ∈ R^N, h ∈ R^{NP}:  ψ(z − Rh − y) + ϕ(Fy) + ρ(h)    (♣)
  where ψ(z − Rh − y) is the data fidelity term (linked to the noise), ϕ(Fy) the a priori on the signal and ρ(h) the a priori on the filters.
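To make the three terms of (♣) concrete, here is one hedged instantiation: i.i.d. Gaussian noise, so ψ(e) = ‖e‖²/(2σ²); an ℓ1 penalty on the frame coefficients for ϕ; and a quadratic penalty for ρ. None of these choices is prescribed by the slide; they are assumptions for illustration, with F and R passed as explicit matrices:

    import numpy as np

    def map_objective(y, h, z, R, F, sigma=0.05, lam=0.1, mu=1.0):
        e = z - R @ h - y                        # residual entering the fidelity term
        fidelity = (e @ e) / (2.0 * sigma**2)    # ψ(z - Rh - y) for Gaussian noise
        prior_y = lam * np.sum(np.abs(F @ y))    # ϕ(Fy): sparsity of frame coefficients
        prior_h = mu * (h @ h)                   # ρ(h): quadratic prior on the filters
        return fidelity + prior_y + prior_h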

  13. Problem to be solved. The MAP estimation of (y, h) is
      minimize over y ∈ R^N, h ∈ R^{NP}:  ψ(z − Rh − y) + ϕ(Fy) + ρ(h)    (♣)
  • Difficulty: choosing good regularization parameters.
  • Proposed: replace the penalized criterion by a constrained minimization problem:
      minimize over y ∈ R^N, h ∈ R^{NP}:  ψ(z − Rh − y) + ι_D(Fy) + ι_C(h)

  14. About the convex set D. Problem to be solved:
      minimize over y ∈ R^N, h ∈ R^{NP}:  ψ(z − Rh − y) + ι_D(Fy) + ι_C(h),
  where ι_D(x) = 0 if x ∈ D and +∞ otherwise.
  • F ∈ R^{K×N}: analysis frame operator
  • index sets K_l ⊂ {1, ..., K}, for l ∈ {1, ..., L}
  • D = D_1 × · · · × D_L with D_l = {(x_k)_{k ∈ K_l} | Σ_{k ∈ K_l} ϕ_l(x_k) ≤ β_l}, where, for every l ∈ {1, ..., L}, β_l ∈ ]0, +∞[ and ϕ_l : R → [0, +∞[ is a lower-semicontinuous convex function.
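Each block constraint D_l is a sub-level set of Σ_k ϕ_l(x_k). For the particular (assumed) choice ϕ_l = |·|, D_l is an ℓ1 ball of radius β_l, whose Euclidean projection has a well-known sorting-based algorithm (Duchi et al., ICML 2008). A sketch:

    import numpy as np

    def project_l1_ball(x, beta):
        # Euclidean projection of x onto {v : ||v||_1 <= beta}; this corresponds to
        # one block D_l of slide 14 when ϕ_l = |.| (an assumption of this sketch).
        if np.sum(np.abs(x)) <= beta:
            return x.copy()
        u = np.sort(np.abs(x))[::-1]                   # magnitudes, descending
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(x) + 1) > (css - beta))[0][-1]
        theta = (css[rho] - beta) / (rho + 1.0)        # soft-threshold level
        return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)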

  15. About the convex set C. Problem to be solved:
      minimize over y ∈ R^N, h ∈ R^{NP}:  ψ(z − Rh − y) + ι_D(Fy) + ι_C(h),
  with C = C_1 ∩ C_2 ∩ C_3.
  • C_1 = {h ∈ R^{NP} : ρ(h) = Σ_{j=0}^{J−1} ρ_j(h_j) ≤ τ}
  • First choice: ρ_j(h_j) = ‖h_j‖²_{ℓ2} = Σ_{n=0}^{N−1} Σ_{p=p'}^{p'+P_j−1} |h_j^(n)(p)|²

  16. About the convex set C (continued). A second choice for ρ_j in C_1:
  • ρ_j(h_j) = ‖h_j‖_{ℓ1} = Σ_{n=0}^{N−1} Σ_{p=p'}^{p'+P_j−1} |h_j^(n)(p)|

  17. About the convex set C (continued). A third choice for ρ_j in C_1 is the mixed norm
  • ρ_j(h_j) = ‖h_j‖_{ℓ1,2} = Σ_{n=0}^{N−1} ( Σ_{p=p'}^{p'+P_j−1} |h_j^(n)(p)|² )^{1/2}
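Storing one filter block h_j as an (N, P_j) array H, with row n holding the time-n filter (an assumed layout), the three candidate measures of slides 15, 16 and 17 read, in a short sketch:

    import numpy as np

    def rho_l2_sq(H):                                   # ||h_j||_2^2
        return np.sum(H**2)

    def rho_l1(H):                                      # ||h_j||_1
        return np.sum(np.abs(H))

    def rho_l12(H):                                     # ||h_j||_{1,2}
        return np.sum(np.sqrt(np.sum(H**2, axis=1)))    # l2 norm per time, summed over time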

  18. Hard constraints on the filters: C_2, C_3. Problem to be solved:
      minimize over y ∈ R^N, h ∈ R^{NP}:  ψ(z − Rh − y) + ι_D(Fy) + ι_C(h),  with C = C_1 ∩ C_2 ∩ C_3.
  Assumption: slow variations of the filters along time, i.e. (∀ (j, n, p))  |h_j^(n+1)(p) − h_j^(n)(p)| ≤ ε_{j,p}.
  For computational reasons, this is enforced as h ∈ C_2 ∩ C_3, where
  • C_2 = {h | for every p and every n with 2n + 1 ≤ N − 1:  |h^(2n+1)(p) − h^(2n)(p)| ≤ ε_p}
  • C_3 = {h | for every p and every n ≥ 1 with 2n ≤ N − 1:  |h^(2n)(p) − h^(2n−1)(p)| ≤ ε_p}
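Because C_2 only couples the pairs of time indices (2n, 2n+1) and C_3 only the pairs (2n−1, 2n), projecting onto either set reduces to independent projections of scalar pairs onto {(a, b) : |a − b| ≤ ε}: if the gap exceeds ε, each element is pulled toward the midpoint by half of the excess. A sketch of that elementary projection (its vectorized use over whole arrays of taps is an assumption):

    import numpy as np

    def project_pairwise_variation(a, b, eps):
        # Exact Euclidean projection of (a, b) onto the set {|a - b| <= eps}.
        gap = np.abs(a - b) - eps
        shift = np.maximum(gap, 0.0) / 2.0   # zero when the constraint already holds
        sgn = np.sign(a - b)
        return a - sgn * shift, b + sgn * shift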

  19. Proximity operator. Definition: let ϕ be a lower semi-continuous convex function. For all x ∈ R^N, prox_ϕ(x) is the unique minimizer of
      y ↦ ϕ(y) + (1/2) ‖x − y‖².    (♣)
  Example: for C a non-empty closed convex subset of R^N,
      prox_{ι_C}(x) = argmin_{y ∈ R^N} ι_C(y) + (1/2) ‖x − y‖² = argmin_{y ∈ C} ‖x − y‖² = Π_C(x),
  the projection operator onto C.

  21. Proximity operator: examples. For prox_{λ|·|^p} with x ∈ R:
  a) prox_{λ|·|²}(x) = x / (1 + 2λ)   ("Wiener" filter)
  b) prox_{λ|·|}(x) = sign(x) max(|x| − λ, 0)   (shrinkage / soft-thresholding operator)
  [Figure: the two scalar prox maps for p = 2 and p = 1 plotted against x; the p = 1 curve vanishes on [−λ, λ].]
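Both closed forms translate directly into code; nothing is assumed beyond the formulas on the slide:

    import numpy as np

    def prox_abs_sq(x, lam):
        # prox of λ|.|^2: a "Wiener"-type attenuation of x
        return x / (1.0 + 2.0 * lam)

    def prox_abs(x, lam):
        # prox of λ|.|: soft-thresholding / shrinkage
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)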
