


  1. CSSIP
     Direction Finding Using Sparse Linear Arrays with Missing Data
     Mianzhi Wang, Zhen Zhang, and Arye Nehorai
     Preston M. Green Department of Electrical & Systems Engineering
     Washington University in St. Louis
     March 8, 2017

  2. Outline
     • Problem formulation
     • Estimation algorithms
     • Cramér-Rao bound
     • Numerical examples
     • Summary and future work

  3. Notations
     A^H     Hermitian transpose of A
     A^*     conjugate of A
     ⊗       Kronecker product
     ⊙       Khatri-Rao product
     vec(A)  vectorization of A
     R(A)    real part of A
     I(A)    imaginary part of A

  4. Preliminaries
     We consider an M-sensor sparse linear array whose sensors are located on
     a uniform grid, and denote the sensor locations by the integer set
     D̄ = {d̄_1, d̄_2, ..., d̄_M}.
     Figure 1: Examples of sparse linear arrays (ULA, co-prime array, nested
     array, MRA).
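The geometries in Figure 1 can be generated on the integer grid. A minimal sketch (the function names and parameter choices are ours, not from the slides; the nested and co-prime constructions follow the standard definitions from the cited literature):

```python
import numpy as np

def ula(m):
    # Uniform linear array: m sensors at consecutive grid points.
    return np.arange(m)

def nested(n1, n2):
    # Nested array: a dense ULA of n1 sensors followed by a sparse ULA
    # with spacing (n1 + 1); shifted so the first sensor sits at 0.
    inner = np.arange(1, n1 + 1)
    outer = (n1 + 1) * np.arange(1, n2 + 1)
    return np.union1d(inner, outer) - 1

def coprime(m, n):
    # Co-prime array: interleaved ULAs with co-prime inter-element
    # spacings n and m (the extended variant with 2n elements).
    return np.union1d(n * np.arange(m), m * np.arange(2 * n))

print(ula(6))        # [0 1 2 3 4 5]
print(nested(3, 3))  # [0 1 2 3 7 11]
print(coprime(3, 5))
```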

  5. Preliminaries (cont.)
     • We consider the classical far-field narrow-band measurement model:
           y(t) = S A_U(θ) x(t) + n(t),   t = 1, 2, ..., N,              (1)
       where A_U(θ) = [a_U(θ_1), a_U(θ_2), ..., a_U(θ_K)] is the steering
       matrix of an M_0-sensor ULA with M_0 = d̄_M − d̄_1 + 1, S is an
       M × M_0 selection matrix that converts the ULA manifold into the
       sparse linear array manifold, x(t) is the source signal, and n(t) is
       the additive noise.
     • Assumptions:
       1. The source signals are temporally and spatially uncorrelated.
       2. The noise is temporally and spatially uncorrelated Gaussian, and is
          also uncorrelated from the source signals.
       3. The K DOAs are distinct.

  6. Preliminaries (cont.)
     • The covariance matrix is given by
           R = E[y(t) y^H(t)] = S R_U S^T,                               (2)
       where R_U = A_U P A_U^H + σ_n^2 I, P = diag(p_1, p_2, ..., p_K), and
       p_k is the power of the k-th source.
     • Vectorizing R gives
           r = (S ⊗ S)(A_U^* ⊙ A_U) p + σ_n^2 i,                        (3)
       where r = vec(R), p = [p_1, p_2, ..., p_K]^T, and i = vec(I).
     • Model (3) resembles a difference coarray model with deterministic
       sources and noise, and (S ⊗ S)(A_U^* ⊙ A_U) embeds the steering
       matrix of a virtual array with enhanced degrees of freedom, whose
       sensor locations are given by
       D̄_co = { d̄_m − d̄_n | d̄_m, d̄_n ∈ D̄ } [1], [2].
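The identity behind (2)-(3) can be checked numerically. A minimal sketch (the array geometry, DOAs, and source powers below are illustrative choices of ours), using the half-wavelength ULA steering vector a_U(θ)_m = exp(jπ m sin θ):

```python
import numpy as np

d = np.array([0, 1, 2, 5])              # sensor positions on the grid
M, M0 = len(d), d[-1] - d[0] + 1        # M = 4 sensors, M0 = 6 grid points
S = np.eye(M0)[d]                       # M x M0 selection matrix

doas = np.deg2rad([-20.0, 10.0, 35.0])
p = np.array([1.0, 2.0, 0.5])           # source powers
sigma2 = 0.1                            # noise power

A_U = np.exp(1j * np.pi * np.arange(M0)[:, None] * np.sin(doas)[None, :])

R_U = A_U @ np.diag(p) @ A_U.conj().T + sigma2 * np.eye(M0)
R = S @ R_U @ S.T                       # covariance of the sparse array, eq. (2)

# Vectorized coarray model, eq. (3): r = (S ⊗ S)(A_U^* ⊙ A_U) p + σ² vec(I)
kr = np.vstack([np.kron(A_U[:, k].conj(), A_U[:, k])
                for k in range(len(doas))]).T    # Khatri-Rao product
r = np.kron(S, S) @ (kr @ p) + sigma2 * np.eye(M).ravel()

assert np.allclose(r, R.ravel(order='F'))        # vec() stacks columns
```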

  7. Preliminaries (cont.)
     Definition 1. A sparse linear array is called complete if its difference
     coarray D̄_co consists of the consecutive integers from −M_0 + 1 to
     M_0 − 1. Otherwise, the sparse linear array is called incomplete.
     Example 1. Nested arrays [1] and minimum-redundancy linear arrays [3]
     are complete sparse linear arrays. Co-prime arrays [4] are generally
     incomplete.
     Implications:
     • For complete arrays, we can reconstruct R_U from the estimate of R,
       and can identify more sources than the number of sensors.
     • For incomplete arrays, we can only reconstruct a submatrix of R_U from
       the estimate of R. We can still resolve more sources than the number
       of sensors if the dimension of this submatrix is large enough.
     For brevity, we restrict the following discussion to complete arrays;
     the extension to incomplete arrays is straightforward.

  8. Missing-Data Problem Formulation
     • We consider L sampling periods. Without loss of generality, we assume
       that sensor failures occur only after the first sampling period, and
       that a failed sensor does not recover in later periods.
     • We denote the valid snapshots taken during the l-th period by
           y_l(t) = T_l [S A_U(θ) x(t) + n(t)],                          (4)
       for t = N_1 + ··· + N_{l−1} + 1, ..., N_1 + ··· + N_{l−1} + N_l,
       where N_l is the number of snapshots collected during the l-th period
       and T_l is a selection matrix that selects the valid sensors.
     • Goal: estimate the DOAs from the measurements y_l(t).
       Problem: the coarray structure is destroyed by the sensor failures.
     Figure 2: An example of the sensor failure pattern.
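The selection matrices T_l in (4) simply drop the rows of failed sensors. A minimal simulation sketch (the failure pattern, snapshot counts, and the stand-in random data are illustrative; real snapshots would follow the model S A_U(θ) x(t) + n(t)):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monotone failure pattern over L = 3 periods, M = 5 sensors:
# sensor 2 fails after period 1, sensor 4 after period 2 (0-based indices).
M = 5
alive = [np.array([0, 1, 2, 3, 4]),
         np.array([0, 1, 3, 4]),
         np.array([0, 1, 3])]
N_l = [100, 80, 120]                     # snapshots per period

snapshots = []
for live, n in zip(alive, N_l):
    T_l = np.eye(M)[live]                # selection matrix of valid sensors
    y_full = (rng.standard_normal((M, n)) +
              1j * rng.standard_normal((M, n)))  # stand-in for S A x + n
    snapshots.append(T_l @ y_full)       # y_l(t) = T_l [S A_U(θ) x(t) + n(t)]

for l, y_l in enumerate(snapshots, 1):
    print(f"period {l}: {y_l.shape[0]} live sensors, {y_l.shape[1]} snapshots")
```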

  9. Outline
     • Problem formulation
     • Estimation algorithms
     • Cramér-Rao bound
     • Numerical examples
     • Summary and future work

 10. General Idea
     • The covariance matrix of the l-th period is given by
           R_l = E[y_l(t) y_l^H(t)]
               = T_l S A_U P A_U^H S^T T_l^T + σ_n^2 I
               = T_l S R_U S^T T_l^T,                                    (5)
       which is formed by deleting rows and columns of R_U, as illustrated
       in Fig. 3.
     Figure 3: An illustration of the relationships between R_U and
     R_1, ..., R_L.
     • We wish to recover R_U from the estimates R̂_1, ..., R̂_L, and
       estimate the DOAs from the reconstructed R_U.

 11. Ad-hoc Estimator
     Idea: recover R_U by averaging the elements of the estimates R̂_l.
     Figure 4: The idea of the ad-hoc estimator.

 12. Ad-hoc Estimator (cont.)
     • Extending the results in [5], let
       V_k = { (m, n) | d̄_m − d̄_n = k, d̄_m, d̄_n ∈ D̄ }, and let A_{m,n}
       denote the set of snapshot indices during which both the m-th and the
       n-th sensor are working.
     • Define
           u_k = [ Σ_{(m,n)∈V_k} Σ_{t∈A_{m,n}} y_m(t) y_n^*(t) ]
                 / [ Σ_{(m,n)∈V_k} |A_{m,n}| ],                          (6)
       where y(t) = [y_1(t), ..., y_M(t)]^T is the full measurement vector
       before discarding invalid data, and |A| denotes the cardinality of A.
     • We can obtain u_k for k = −M_0 + 1, −M_0 + 2, ..., M_0 − 1, and the
       ad-hoc estimate of R_U is the Toeplitz matrix
           R̂_U^(ad-hoc) = [ u_0         u_{−1}      ···  u_{−M_0+1}
                             u_1         u_0         ···  u_{−M_0+2}
                             ⋮           ⋮           ⋱    ⋮
                             u_{M_0−1}   u_{M_0−2}   ···  u_0 ].         (7)
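A direct implementation of (6)-(7) for complete arrays can be sketched as follows (the function name, data layout, and the monotone-failure bookkeeping through `alive` are our assumptions; negative lags are filled via conjugate symmetry, u_{−k} = u_k^*):

```python
import numpy as np

def adhoc_covariance(snapshots, alive, d):
    """Ad-hoc reconstruction of R_U by lag averaging, eq. (6)-(7).

    snapshots[l] : (len(alive[l]), N_l) complex array of valid data, period l
    alive[l]     : indices (into d) of the sensors working in period l
    d            : integer sensor positions of the (complete) sparse array
    """
    d = np.asarray(d)
    m0 = d.max() - d.min() + 1
    num = np.zeros(m0, complex)      # numerators of u_k for k = 0..m0-1
    cnt = np.zeros(m0)               # denominators: total pair-snapshot counts
    for y_l, live in zip(snapshots, alive):
        for a, m in enumerate(live):
            for b, n in enumerate(live):
                k = d[m] - d[n]
                if k >= 0:
                    num[k] += (y_l[a] * y_l[b].conj()).sum()
                    cnt[k] += y_l.shape[1]
    u = num / np.maximum(cnt, 1)     # u_k; every lag is hit if d is complete
    # Hermitian Toeplitz estimate of R_U, eq. (7)
    idx = np.arange(m0)
    lag = idx[:, None] - idx[None, :]
    return np.where(lag >= 0, u[np.abs(lag)], u[np.abs(lag)].conj())

# Sanity check with constant snapshots: every u_k = 1, so R_U is all ones.
d = [0, 1, 2, 3, 7, 11]
R = adhoc_covariance([np.ones((6, 10), complex)], [np.arange(6)], d)
assert np.allclose(R, np.ones((12, 12)))
```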

 13. Maximum-Likelihood Based Estimator
     • Neglecting constant terms, the negative log-likelihood function is
           L(R_1, ..., R_L) = Σ_{l=1}^{L} N_l [ log|R_l| + tr(R_l^{−1} R̂_l) ].   (8)
     • We adopt the Toeplitz parameterization of R_U [6]:
           R_U = Σ_{i=1}^{2M_0−1} c_i Q_{M_0}^{(i)},                     (9)
       where the basis matrices Q_{M_0}^{(i)} are given by
           Q_{M_0}^{(i)} = I_{M_0},                                        i = 1,
           Q_{M_0}^{(i)} = I_{M_0}^{(i−1)} + (I_{M_0}^{(i−1)})^T,          2 ≤ i ≤ M_0,
           Q_{M_0}^{(i)} = −j I_{M_0}^{(i−M_0)} + j (I_{M_0}^{(i−M_0)})^T, M_0 + 1 ≤ i ≤ 2M_0 − 1.   (10)
     Remark 1. Positive semidefinite Toeplitz matrices can be related to DOAs
     via the Vandermonde decomposition. However, this relationship is not
     one-to-one; hence we are relaxing the parameter space.
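The basis (10) can be built and checked in a few lines. A sketch, assuming I^(k) denotes the matrix with ones on the k-th superdiagonal (a common convention; the slides do not spell it out):

```python
import numpy as np

def toeplitz_basis(m0):
    """Basis Q^(i), i = 1..2*m0-1, for Hermitian Toeplitz matrices, eq. (10).
    I^(k) is taken to be the k-th superdiagonal shift matrix."""
    Q = [np.eye(m0, dtype=complex)]                  # i = 1
    for k in range(1, m0):                           # i = 2 .. m0 (real part)
        D = np.eye(m0, k=k, dtype=complex)
        Q.append(D + D.T)
    for k in range(1, m0):                           # i = m0+1 .. 2m0-1 (imag)
        D = np.eye(m0, k=k, dtype=complex)
        Q.append(-1j * D + 1j * D.T)
    return Q

m0 = 4
Q = toeplitz_basis(m0)
c = np.random.default_rng(2).standard_normal(2 * m0 - 1)
R_U = sum(ci * Qi for ci, Qi in zip(c, Q))

# Any real combination is Hermitian with constant diagonals (Toeplitz):
assert np.allclose(R_U, R_U.conj().T)
assert all(np.allclose(np.diag(R_U, k), np.diag(R_U, k)[0])
           for k in range(-m0 + 1, m0))
```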

 14. Maximum-Likelihood Based Estimator (cont.)
     • The partial derivatives w.r.t. the Toeplitz parameterization are
       given by
           ∂L(c)/∂c_i = Σ_{l=1}^{L} N_l tr[ R_l^{−1}(c) T_l S Q_{M_0}^{(i)} S^T T_l^T
                        R_l^{−1}(c) (R_l(c) − R̂_l) ].                   (11)
     • Let Q_{M_0} = [q_{M_0}^{(1)}, q_{M_0}^{(2)}, ..., q_{M_0}^{(2M_0−1)}],
       where q_{M_0}^{(i)} = vec(Q_{M_0}^{(i)}). From the first-order
       optimality condition of (8), we can obtain the following approximate
       solution by replacing the weighting matrices W_l = R_l^T ⊗ R_l with
       their estimates:
           ĉ_WLS = [ Σ_{l=1}^{L} N_l Ĝ_l ]^{−1} Σ_{l=1}^{L} N_l ĥ_l,     (12)
       where Ĝ_l = Q_{M_0}^T Φ_l^T Ŵ_l^{−1} Φ_l Q_{M_0},
       ĥ_l = Q_{M_0}^T Φ_l^T Ŵ_l^{−1} r̂_l, and Ŵ_l = R̂_l^T ⊗ R̂_l.
       This solution is also the solution to the weighted least-squares
       problem
           min_c Σ_{l=1}^{L} N_l ‖ Φ_l Q_{M_0} c − r̂_l ‖²_{Ŵ_l^{−1}}.    (13)
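A numerical sketch of the WLS step (12)-(13). Several points are our assumptions rather than stated on the slides: Φ_l = (T_l S) ⊗ (T_l S) (so that vec(R_l) = Φ_l vec(R_U)), the noise power is absorbed into the lag-0 coefficient c_1, I^(k) is the superdiagonal shift matrix, and the normal equations are formed with conjugate transposes (standard WLS form for real c). With population covariances in place of the estimates, the true coefficients are recovered exactly:

```python
import numpy as np

def toeplitz_basis_mat(m0):
    # Q = [vec Q^(1), ..., vec Q^(2m0-1)], an m0^2 x (2m0-1) matrix.
    cols = [np.eye(m0, dtype=complex).ravel(order='F')]
    for k in range(1, m0):
        D = np.eye(m0, k=k)
        cols.append((D + D.T).ravel(order='F').astype(complex))
    for k in range(1, m0):
        D = np.eye(m0, k=k)
        cols.append((-1j * D + 1j * D.T).ravel(order='F'))
    return np.column_stack(cols)

d = np.array([0, 1, 2, 3, 7, 11])                  # complete sparse array
m0 = d[-1] + 1
alive = [np.arange(6), np.array([0, 1, 2, 4, 5])]  # one sensor fails, period 2
N = [200, 300]

# Ground-truth Hermitian Toeplitz R_U: two sources plus noise on the diagonal
doas = np.deg2rad([-15.0, 20.0])
A_U = np.exp(1j * np.pi * np.arange(m0)[:, None] * np.sin(doas)[None, :])
R_U = A_U @ np.diag([1.0, 1.5]) @ A_U.conj().T + 0.5 * np.eye(m0)
Q = toeplitz_basis_mat(m0)
c_true = np.linalg.lstsq(Q, R_U.ravel(order='F'), rcond=None)[0].real

G = np.zeros((2 * m0 - 1, 2 * m0 - 1), complex)
h = np.zeros(2 * m0 - 1, complex)
for live, n_l in zip(alive, N):
    F = np.eye(m0)[d[live]]                        # T_l S
    Phi = np.kron(F, F)                            # vec(F X F^T) = Phi vec(X)
    R_l = F @ R_U @ F.T
    r_hat = R_l.ravel(order='F')                   # population cov. as "estimate"
    W_inv = np.linalg.inv(np.kron(R_l.T, R_l))     # W_l^{-1}
    A = Phi @ Q
    G += n_l * A.conj().T @ W_inv @ A
    h += n_l * A.conj().T @ W_inv @ r_hat

c_wls = np.linalg.solve(G, h).real                 # real up to round-off
assert np.allclose(c_wls, c_true)
```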

 15. Without replacing any W_l with Ŵ_l, the first-order optimality
     condition of (8) also leads to the following fixed-point iteration:
           ĉ_FP^(k) = [ Σ_{l=1}^{L} N_l G_l(ĉ_FP^(k−1)) ]^{−1}
                      Σ_{l=1}^{L} N_l h_l(ĉ_FP^(k−1)),                   (14)
     where
           G_l(ĉ_FP^(k−1)) = Q_{M_0}^T Φ_l^T W_l^{−1}(ĉ_FP^(k−1)) Φ_l Q_{M_0},   (15a)
           h_l(ĉ_FP^(k−1)) = Q_{M_0}^T Φ_l^T W_l^{−1}(ĉ_FP^(k−1)) r̂_l,          (15b)
           W_l(ĉ_FP^(k−1)) = R_l^T(ĉ_FP^(k−1)) ⊗ R_l(ĉ_FP^(k−1)).                (15c)
     Remark 2. The fixed-point iteration (14) can be initialized with ĉ_WLS
     and produces good estimates within several iterations in our
     simulations.
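The distinguishing feature of (14)-(15) is that the weight W_l is rebuilt from the current iterate each pass. A tiny self-contained sketch on the minimum-redundancy array {0, 1, 3} (single period, all sensors working; same assumptions as before: Φ = (T S) ⊗ (T S), noise absorbed into c_1, I^(k) the superdiagonal shift). With population covariances any positive definite weight already yields the exact answer, so here the iteration converges in one step from an identity initialization:

```python
import numpy as np

m0 = 4
d = np.array([0, 1, 3])                   # minimum-redundancy array, complete
S = np.eye(m0)[d]
Phi = np.kron(S, S)                       # single period, all sensors alive

# Toeplitz basis as an m0^2 x (2m0-1) matrix
cols = [np.eye(m0, dtype=complex).ravel(order='F')]
for k in range(1, m0):
    D = np.eye(m0, k=k)
    cols.append((D + D.T).ravel(order='F').astype(complex))
for k in range(1, m0):
    D = np.eye(m0, k=k)
    cols.append((-1j * D + 1j * D.T).ravel(order='F'))
Q = np.column_stack(cols)

# Ground truth: one source at 25 degrees, noise absorbed into c_1
a = np.exp(1j * np.pi * np.arange(m0) * np.sin(np.deg2rad(25.0)))
R_U = np.outer(a, a.conj()) + 0.5 * np.eye(m0)
c_true = np.linalg.lstsq(Q, R_U.ravel(order='F'), rcond=None)[0].real
r_hat = (S @ R_U @ S.T).ravel(order='F')  # population covariance as "estimate"

A = Phi @ Q
c = np.zeros(2 * m0 - 1)
c[0] = 1.0                                # crude initialization: R_U = I
for _ in range(5):                        # eq. (14)-(15)
    R_l = S @ (Q @ c).reshape(m0, m0, order='F') @ S.T
    W_inv = np.linalg.inv(np.kron(R_l.T, R_l))   # W_l^{-1}(c^{(k-1)})
    G = A.conj().T @ W_inv @ A
    h = A.conj().T @ W_inv @ r_hat
    c = np.linalg.solve(G, h).real

assert np.allclose(c, c_true)             # exact with population covariances
```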

 16. Outline
     • Problem formulation
     • Estimation algorithms
     • Cramér-Rao bound
     • Numerical examples
     • Summary and future work

 17. Cramér-Rao Bound
     • We derive the CRB for both the Toeplitz parameterization and the DOA
       parameterization, based on classical results in [7], [8].
     • For complete arrays, the FIM for the Toeplitz parameterization is
       given by
           FIM_c = Σ_{l=1}^{L} N_l Q_{M_0}^H Φ_l^H (R_l^T ⊗ R_l)^{−1} Φ_l Q_{M_0}.   (16)
     • For complete arrays, the FIM for the parameters η = [θ^T, p^T, σ_n^2]^T
       is given by
           FIM_η = Σ_{l=1}^{L} N_l D^H Φ_l^H (R_l^T ⊗ R_l)^{−1} Φ_l D,   (17)
       where D = [Ȧ_d P  A_d  i], Ȧ_d = Ȧ_U^* ⊙ A_U + A_U^* ⊙ Ȧ_U,
       Ȧ_U = [∂a_U(θ_1)/∂θ_1, ..., ∂a_U(θ_K)/∂θ_K], A_d = A_U^* ⊙ A_U,
       and i = vec(I_{M_0}).
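Eq. (16) is straightforward to evaluate numerically. A sketch under the same reading of the model as above (Φ_l = (T_l S) ⊗ (T_l S), I^(k) the superdiagonal shift; the array, failure pattern, and snapshot counts are illustrative choices of ours); for a complete array the FIM comes out Hermitian and positive definite, so the CRB is its inverse:

```python
import numpy as np

m0 = 4
d = np.array([0, 1, 3])                    # minimum-redundancy array, complete
alive = [np.arange(3), np.array([0, 2])]   # middle sensor fails in period 2
N = [500, 500]

# Toeplitz basis matrix Q
cols = [np.eye(m0, dtype=complex).ravel(order='F')]
for k in range(1, m0):
    D = np.eye(m0, k=k)
    cols.append((D + D.T).ravel(order='F').astype(complex))
for k in range(1, m0):
    D = np.eye(m0, k=k)
    cols.append((-1j * D + 1j * D.T).ravel(order='F'))
Q = np.column_stack(cols)

a = np.exp(1j * np.pi * np.arange(m0) * np.sin(np.deg2rad(10.0)))
R_U = np.outer(a, a.conj()) + np.eye(m0)   # one source plus unit noise power

fim = np.zeros((2 * m0 - 1, 2 * m0 - 1), complex)
for live, n_l in zip(alive, N):
    F = np.eye(m0)[d[live]]                # T_l S
    Phi = np.kron(F, F)
    R_l = F @ R_U @ F.T
    A = Phi @ Q
    fim += n_l * A.conj().T @ np.linalg.inv(np.kron(R_l.T, R_l)) @ A   # eq. (16)

assert np.allclose(fim, fim.conj().T)          # FIM is Hermitian
assert np.all(np.linalg.eigvalsh(fim) > 0)     # and positive definite
crb = np.linalg.inv(fim)                       # CRB for the coefficients c
```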

 18. Outline
     • Problem formulation
     • Estimation algorithms
     • Cramér-Rao bound
     • Numerical examples
     • Summary and future work
