

SLIDE 1

CSSIP

Direction Finding Using Sparse Linear Arrays with Missing Data

Mianzhi Wang, Zhen Zhang, and Arye Nehorai Preston M. Green Department of Electrical & Systems Engineering Washington University in St. Louis March 8, 2017


SLIDE 2

Outline

  • Problem formulation
  • Estimation algorithms
  • Cramér-Rao bound

  • Numerical examples
  • Summary and future work


SLIDE 3

Notations

A^H = Hermitian transpose of A
A^* = conjugate of A
⊗ = Kronecker product
⊙ = Khatri-Rao product
vec(A) = vectorization of A
R(A) = real part of A
I(A) = imaginary part of A


SLIDE 4

Preliminaries

We consider an M-sensor sparse linear array whose sensors are located on a uniform grid, and denote the sensor locations by the integer set D̄ = {d̄1, d̄2, . . . , d̄M}.

Figure 1: Examples of sparse linear arrays (ULA, co-prime array, nested array, MRA).


SLIDE 5

Preliminaries (cont.)

  • We consider the classical far-field narrow-band measurement model:

y(t) = S AU(θ) x(t) + n(t),  t = 1, 2, . . . , N,  (1)

where AU(θ) = [aU(θ1), aU(θ2), . . . , aU(θK)] is the steering matrix of an M0-sensor ULA, M0 = d̄M − d̄1 + 1, S is an M × M0 selection matrix that converts the ULA manifold to the sparse linear array manifold, x(t) is the source signal, and n(t) is the additive noise.

  • Assumptions:
    1. The source signals are temporally and spatially uncorrelated.
    2. The noise is temporally and spatially uncorrelated Gaussian that is also uncorrelated from the source signals.
    3. The K DOAs are distinct.


SLIDE 6

Preliminaries (cont.)

  • The covariance matrix of y(t) is given by:

R = E[y(t) y^H(t)] = S RU S^T,  (2)

where RU = AU P AU^H + σn² I, P = diag(p1, p2, . . . , pK), and pk is the power of the k-th source.

  • We can vectorize R and obtain

r = (S ⊗ S)(AU^* ⊙ AU) p + σn² i,  (3)

where r = vec(R), p = [p1, p2, · · · , pK]^T, and i = vec(I).

  • Model (3) resembles a difference coarray model with deterministic sources and noise, and (S ⊗ S)(AU^* ⊙ AU) embeds a steering matrix of a virtual array with enhanced degrees of freedom, whose sensor locations are given by D̄co = {d̄m − d̄n | d̄m, d̄n ∈ D̄} [1], [2].
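The difference coarray D̄co can be computed directly from the integer sensor positions. A minimal sketch in Python (the nested-array geometry below is the one used in the numerical examples later; the function name is ours):

```python
# Difference coarray of a sparse linear array: the set of all
# pairwise differences d_m - d_n of the sensor positions.
def difference_coarray(positions):
    return sorted({dm - dn for dm in positions for dn in positions})

nested = [0, 1, 2, 3, 7, 11, 15, 19]   # 8 physical sensors
coarray = difference_coarray(nested)
print(len(nested), len(coarray))       # 8 physical sensors, 39 virtual sensors
```

The 8-sensor nested array yields 39 distinct virtual sensor positions, which is the "enhanced degrees of freedom" the slide refers to.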


SLIDE 7

Preliminaries (cont.)

Definition 1. A sparse linear array is called complete if its difference coarray D̄co consists of consecutive integers from −M0 + 1 to M0 − 1. Otherwise, we call the sparse linear array incomplete.

Example 1. Nested arrays [1] and minimum redundancy linear arrays [3] are complete sparse linear arrays. Co-prime arrays [4] are generally incomplete sparse linear arrays.

Implications:

  • For complete arrays, we can reconstruct RU from the estimate of R. We can identify more sources than the number of sensors.
  • For incomplete arrays, we can only reconstruct a submatrix of RU from the estimate of R. We can still resolve more sources than the number of sensors if the dimension of the submatrix is large enough.

For brevity, we restrict the following discussion to complete arrays; it can be easily extended to handle incomplete arrays.
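The completeness condition of Definition 1 is straightforward to check numerically. A small sketch, using the two array geometries from the numerical examples later in the deck (`is_complete` is our name, not from the slides):

```python
# A sparse linear array is complete iff its difference coarray contains
# every integer from -(M0 - 1) to M0 - 1, where
# M0 = max(positions) - min(positions) + 1 (Definition 1).
def is_complete(positions):
    m0 = max(positions) - min(positions) + 1
    coarray = {dm - dn for dm in positions for dn in positions}
    return all(k in coarray for k in range(-m0 + 1, m0))

print(is_complete([0, 1, 2, 3, 7, 11, 15, 19]))          # nested array: True
print(is_complete([0, 3, 5, 6, 9, 10, 12, 15, 20, 25]))  # co-prime array: False
```

Consistent with Example 1, the nested array is complete while this co-prime array is incomplete (its coarray misses, e.g., lags 18 and 21).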


SLIDE 8

Missing Data Problem Formulation

  • We consider L sampling periods. Without loss of generality, we assume that sensor failures occur only after the first sampling period. If a sensor fails, it does not recover in the following periods.
  • We denote the valid snapshots taken during the l-th period by

yl(t) = Tl [S AU(θ) x(t) + n(t)],  (4)

for t = N1 + · · · + N(l−1) + 1, . . . , N1 + · · · + N(l−1) + Nl, where Nl is the number of snapshots collected during the l-th period, and Tl is a selection matrix that selects the valid sensors.

  • Goal: estimate the DOAs from the measurements yl(t).

Problem: the coarray structure is destroyed due to sensor failures.

Figure 2: An example of the sensor failure pattern.


SLIDE 9

Outline

  • Problem formulation
  • Estimation algorithms
  • Cramér-Rao bound

  • Numerical examples
  • Summary and future work


SLIDE 10

General Idea

  • The covariance matrix of the l-th period is given by

Rl = E[yl(t) yl^H(t)] = Tl S RU S^T Tl^T,  (5)

which is actually formed by deleting rows and columns in RU, as illustrated in Fig. 3.

Figure 3: An illustration of the relationships between RU and R1, . . . , RL.

  • We hope to recover RU from the estimates R̂1, . . . , R̂L, and estimate the DOAs based on the reconstructed RU.


SLIDE 11

Ad-hoc Estimator

Idea: recover RU by averaging the elements in R̂l.

Figure 4: The idea of the ad-hoc estimator.


SLIDE 12

Ad-hoc Estimator (cont.)

  • Extending the results in [5], let Vk = {(m, n) | d̄m − d̄n = k, d̄m, d̄n ∈ D̄}. Let Am,n denote the set of snapshot indices during which both the m-th and the n-th sensors are working.
  • Define

u_k = ( Σ_{(m,n)∈Vk} Σ_{t∈Am,n} ym(t) yn^*(t) ) / ( Σ_{(m,n)∈Vk} |Am,n| ),  (6)

where y(t) = [y1(t), · · · , yM(t)]^T is the full measurement vector before discarding invalid data, and |A| denotes the cardinality of A.

  • We can obtain u_k for k = −M0 + 1, −M0 + 2, . . . , M0 − 1, and the ad-hoc estimate of RU is the Hermitian Toeplitz matrix

R̂U(ad-hoc) = [ u0        u−1       · · ·  u−M0+1
               u1        u0        · · ·  u−M0+2
               ⋮         ⋮         ⋱      ⋮
               uM0−1     uM0−2     · · ·  u0     ].  (7)
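The averaging in (6)-(7) can be sketched as follows. This is a simplified illustration assuming a known boolean validity mask per sensor and snapshot; all variable names are ours, and no claim is made that this matches the authors' implementation:

```python
import numpy as np

def adhoc_estimate(y, valid, positions):
    """Ad-hoc estimate of R_U by coarray averaging, eq. (6)-(7).

    y:         M x N array of snapshots (invalid entries may be garbage)
    valid:     M x N boolean array; valid[m, t] = sensor m works at time t
    positions: integer sensor locations d_1, ..., d_M
    """
    m0 = max(positions) - min(positions) + 1
    u = np.zeros(2 * m0 - 1, dtype=complex)    # u[k + m0 - 1] stores u_k
    for k in range(-m0 + 1, m0):
        num, den = 0.0 + 0.0j, 0
        for m, dm in enumerate(positions):
            for n, dn in enumerate(positions):
                if dm - dn != k:
                    continue                   # (m, n) not in V_k
                both = valid[m] & valid[n]     # snapshot set A_{m,n}
                num += np.sum(y[m, both] * np.conj(y[n, both]))
                den += int(both.sum())
        # For a complete array every lag k has den > 0; a zero fill would
        # only occur for lags missing from the coarray.
        u[k + m0 - 1] = num / den if den > 0 else 0.0
    # Hermitian Toeplitz matrix (7): entry (i, j) equals u_{i-j}.
    return np.array([[u[(i - j) + m0 - 1] for j in range(m0)]
                     for i in range(m0)])
```

For a toy two-sensor ULA with a single snapshot y = [1, 2]^T and no failures, this returns [[2.5, 2], [2, 2.5]]: the diagonal averages |y1|² and |y2|², and the off-diagonals are the single lag-1 product.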


SLIDE 13

Maximum-Likelihood Based Estimator

  • Neglecting constant terms, the negative log-likelihood function is given by

L(R1, . . . , RL) = Σ_{l=1}^{L} Nl [ log|Rl| + tr(Rl^{−1} R̂l) ].  (8)

  • We adopt the Toeplitz parameterization of RU [6]:

RU = Σ_{i=1}^{2M0−1} ci Q^(i)_{M0},  (9)

where the basis matrices Q^(i)_{M0} are given by

Q^(i)_{M0} = { I_{M0},                                   i = 1,
             { I^(i−1)_{M0} + (I^(i−1)_{M0})^T,          2 ≤ i ≤ M0,
             { −j I^(i−M0)_{M0} + j (I^(i−M0)_{M0})^T,   M0 + 1 ≤ i ≤ 2M0 − 1,  (10)

and I^(k)_{M0} denotes the M0 × M0 matrix with ones on the k-th superdiagonal and zeros elsewhere.

Remark 1. Positive semidefinite Toeplitz matrices can be related to DOAs via the Vandermonde decomposition. However, this relationship is not one-to-one. Hence we are relaxing the parameter space.
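A small numpy sketch of the parameterization in (9)-(10). The slides do not spell out I^(i)_{M0}; following the structured-covariance literature [6], we read it as the matrix with ones on the i-th superdiagonal, so treat this construction as our assumption:

```python
import numpy as np

def toeplitz_basis(m0):
    """Basis matrices Q^(1), ..., Q^(2*m0-1) of eq. (10)."""
    shift = lambda k: np.eye(m0, k=k)      # ones on the k-th superdiagonal
    basis = [np.eye(m0) + 0j]                                # i = 1
    basis += [shift(i) + shift(i).T + 0j                     # 2 <= i <= M0
              for i in range(1, m0)]
    basis += [-1j * shift(i) + 1j * shift(i).T               # M0+1 <= i <= 2M0-1
              for i in range(1, m0)]
    return basis

# Any real coefficient vector c yields a Hermitian Toeplitz matrix via (9):
# the first m0 coefficients set the real parts of the diagonals, the
# remaining m0 - 1 coefficients their imaginary parts.
m0 = 4
c = np.random.default_rng(0).standard_normal(2 * m0 - 1)
R_U = sum(ci * Qi for ci, Qi in zip(c, toeplitz_basis(m0)))
```

This makes Remark 1 concrete: the 2M0 − 1 real parameters c sweep out all Hermitian Toeplitz matrices, a strictly larger set than those admitting a DOA (Vandermonde) decomposition.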


SLIDE 14

Maximum-Likelihood Based Estimator (cont.)

  • The partial derivatives w.r.t. the Toeplitz parameterization are given by

∂L(c)/∂ci = Σ_{l=1}^{L} Nl tr[ Tl S Q^(i)_{M0} S^T Tl^T Rl^{−1}(c) (Rl(c) − R̂l) Rl^{−1}(c) ].  (11)

  • Let Q_{M0} = [q^(1)_{M0}, q^(2)_{M0}, · · · , q^(2M0−1)_{M0}], where q^(i)_{M0} = vec(Q^(i)_{M0}). From the first-order optimality condition of (8), we can obtain the following approximate solution by replacing each Wl = Rl^T ⊗ Rl with its estimate:

ĉWLS = ( Σ_{l=1}^{L} Nl Ĝl )^{−1} ( Σ_{l=1}^{L} Nl ĥl ),  (12)

where Ĝl = Q_{M0}^T Φl^T Ŵl^{−1} Φl Q_{M0}, ĥl = Q_{M0}^T Φl^T Ŵl^{−1} r̂l, and Ŵl = R̂l^T ⊗ R̂l. This solution is also the solution to the weighted least-squares problem:

min_c Σ_{l=1}^{L} Nl ‖Φl Q_{M0} c − r̂l‖²_{Ŵl^{−1}},  (13)

where r̂l = vec(R̂l) and Φl = (Tl S) ⊗ (Tl S).
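A sketch of the weighted least-squares solve (12), under our assumption Φl = (Tl S) ⊗ (Tl S), which follows from vectorizing Rl = Tl S RU S^T Tl^T. We use the conjugate transpose in the normal equations and keep the real part of the solution; all helper names are ours:

```python
import numpy as np

def toeplitz_basis(m0):
    """Basis matrices Q^(1), ..., Q^(2*m0-1) of eq. (10)."""
    shift = lambda k: np.eye(m0, k=k)
    basis = [np.eye(m0) + 0j]
    basis += [shift(i) + shift(i).T + 0j for i in range(1, m0)]
    basis += [-1j * shift(i) + 1j * shift(i).T for i in range(1, m0)]
    return basis

def wls_estimate(R_hats, N, Phis, m0):
    """Weighted least-squares solution (12): c = (sum N_l G_l)^(-1) sum N_l h_l."""
    Q = np.column_stack([Qi.reshape(-1, order="F")       # q^(i) = vec(Q^(i))
                         for Qi in toeplitz_basis(m0)])
    G = np.zeros((2 * m0 - 1, 2 * m0 - 1), dtype=complex)
    h = np.zeros(2 * m0 - 1, dtype=complex)
    for R_hat, N_l, Phi in zip(R_hats, N, Phis):
        W_inv = np.linalg.inv(np.kron(R_hat.T, R_hat))   # W_l = R_l^T kron R_l
        A = Phi @ Q                                      # Phi_l Q_{M0}
        G += N_l * (A.conj().T @ W_inv @ A)              # G_l terms
        h += N_l * (A.conj().T @ W_inv @ R_hat.reshape(-1, order="F"))
    return np.real(np.linalg.solve(G, h))
```

As a sanity check (not a full DOA pipeline): with the exact covariances R̂l = Rl plugged in, the estimate recovers the true Toeplitz coefficients exactly whenever the coarray is complete.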


SLIDE 15

Without replacing any Wl with Ŵl, the first-order optimality condition of (8) also leads to the following fixed-point iteration procedure:

ĉ^(k)_FP = ( Σ_{l=1}^{L} Nl Gl(ĉ^(k−1)_FP) )^{−1} ( Σ_{l=1}^{L} Nl hl(ĉ^(k−1)_FP) ),  (14)

where

Gl(ĉ^(k−1)_FP) = Q_{M0}^T Φl^T Wl^{−1}(ĉ^(k−1)_FP) Φl Q_{M0},  (15a)
hl(ĉ^(k−1)_FP) = Q_{M0}^T Φl^T Wl^{−1}(ĉ^(k−1)_FP) r̂l,  (15b)
Wl(ĉ^(k−1)_FP) = Rl^T(ĉ^(k−1)_FP) ⊗ Rl(ĉ^(k−1)_FP).  (15c)

Remark 2. The fixed-point iteration (14) can be initialized with ĉWLS and produces good estimates within several iterations in our simulations.
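The iteration (14)-(15c) reuses the same machinery as the WLS solve, except that the weighting matrix Wl is rebuilt from the model covariance Rl(c) at the previous iterate rather than from R̂l. A sketch under the same assumptions as before (Φl = (Tl S) ⊗ (Tl S); helper names ours):

```python
import numpy as np

def toeplitz_basis(m0):
    """Basis matrices Q^(1), ..., Q^(2*m0-1) of eq. (10)."""
    shift = lambda k: np.eye(m0, k=k)
    basis = [np.eye(m0) + 0j]
    basis += [shift(i) + shift(i).T + 0j for i in range(1, m0)]
    basis += [-1j * shift(i) + 1j * shift(i).T for i in range(1, m0)]
    return basis

def fixed_point(R_hats, N, Phis, m0, c0, n_iter=5):
    """Fixed-point iteration (14), with W_l rebuilt from R_l(c), eq. (15c)."""
    basis = toeplitz_basis(m0)
    Q = np.column_stack([Qi.reshape(-1, order="F") for Qi in basis])
    c = np.asarray(c0, dtype=float).copy()
    for _ in range(n_iter):
        R_U = sum(ci * Qi for ci, Qi in zip(c, basis))   # R_U(c), eq. (9)
        G = np.zeros((2 * m0 - 1, 2 * m0 - 1), dtype=complex)
        h = np.zeros(2 * m0 - 1, dtype=complex)
        for R_hat, N_l, Phi in zip(R_hats, N, Phis):
            # Model covariance of the l-th period at the current iterate.
            R_l = (Phi @ R_U.reshape(-1, order="F")).reshape(R_hat.shape,
                                                             order="F")
            W_inv = np.linalg.inv(np.kron(R_l.T, R_l))   # W_l(c)^(-1), (15c)
            A = Phi @ Q
            G += N_l * (A.conj().T @ W_inv @ A)          # (15a)
            h += N_l * (A.conj().T @ W_inv @ R_hat.reshape(-1, order="F"))
        c = np.real(np.linalg.solve(G, h))               # (14)
    return c
```

As a sanity check: with exact covariances R̂l = Rl and any reasonable starting point, the iteration settles on the true coefficients, since the exact data lie in the range of Φl Q_{M0} regardless of the weighting.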


SLIDE 16

Outline

  • Problem formulation
  • Estimation algorithms
  • Cramér-Rao bound

  • Numerical examples
  • Summary and future work


SLIDE 17

Cramér-Rao Bound

  • We derive the CRB for both the Toeplitz parameterization and the DOA parameterization based on classical results in [7], [8].
  • For complete arrays, the FIM for the Toeplitz parameterization is given by

FIM_c = Σ_{l=1}^{L} Nl Q_{M0}^H Φl^H (Rl^T ⊗ Rl)^{−1} Φl Q_{M0}.  (16)

  • For complete arrays, the FIM of the parameters η = [θ, p, σn²]^T is given by

FIM_η = Σ_{l=1}^{L} Nl D^H Φl^H (Rl^T ⊗ Rl)^{−1} Φl D,  (17)

where D = [Ȧd P  Ad  i], Ȧd = ȦU^* ⊙ AU + AU^* ⊙ ȦU, ȦU = [∂aU(θ1)/∂θ1, · · · , ∂aU(θK)/∂θK], Ad = AU^* ⊙ AU, and i = vec(I_{M0}).


SLIDE 18

Outline

  • Problem formulation
  • Estimation algorithms
  • Cramér-Rao bound

  • Numerical examples
  • Summary and future work


SLIDE 19

Experiment Setup

  • We consider the following two arrays:
    ◮ Nested array: [0, 1, 2, 3, 7, 11, 15, 19]d0;
    ◮ Co-prime array: [0, 3, 5, 6, 9, 10, 12, 15, 20, 25]d0.
  • We consider 12 sources uniformly distributed between −π/3 and π/3, which is more than the number of sensors of either array.
  • We set L = 3. During the second period (l = 2) the last sensor of each array fails, and during the third period (l = 3) the last two sensors of each array fail.
  • We set N1 = 50µ, N2 = 100µ, and N3 = 150µ, where µ is a tunable parameter.
  • When making comparisons under different numbers of snapshots, we fixed SNR = 0 dB and varied µ from 1 to 20. When making comparisons under different SNRs, we fixed µ = 1 and varied SNR from −20 dB to 20 dB.
  • We compare four methods of estimating RU for DOA estimation: using the complete data only, using the ad-hoc estimator (7), using (12), and using (14).


SLIDE 20

Numerical Examples for the Nested Array

(Four panels: RMSE (deg) and rate of success versus µ, and RMSE (deg) and rate of success versus SNR (dB), comparing First, Ad-hoc, TML-WLS, TML-FP, and the CRB.)

Figure 5: Performance of different algorithms for the nested array configuration.


SLIDE 21

Numerical Examples for the Co-Prime Array

(Four panels: RMSE (deg) and rate of success versus µ, and RMSE (deg) and rate of success versus SNR (dB), comparing First, Ad-hoc, TML-WLS, TML-FP, and the CRB.)

Figure 6: Performance of different algorithms for the co-prime array configuration.


SLIDE 22

Observations

  • The performance gain from utilizing the incomplete measurements in addition to the complete measurements is significant.
  • The weighted least-squares estimator (12) (TML-WLS) and the fixed-point iteration based estimator (14) (TML-FP) achieve lower estimation errors than the ad-hoc estimator.
  • In the nested array case, the RMSEs of TML-WLS and TML-FP are very close to the CRB. In the co-prime array case, there is always a gap between the RMSEs and the CRB.


SLIDE 23

Summary and Future Work


SLIDE 24

Summary and Future Work

Summary:

  • We proposed ML-based methods to reconstruct a sample covariance matrix with enhanced degrees of freedom using the Toeplitz parameterization in the missing data case, which enables us to resolve more sources than the number of sensors using sparse linear arrays.
  • We derived the corresponding CRBs for sparse linear arrays.

Future Work:

  • Detection of malfunctioning sensors
  • Performance analysis in the presence of sensor failures


SLIDE 25

References I

[1] P. Pal and P. Vaidyanathan, “Nested arrays: A novel approach to array processing with enhanced degrees of freedom,” IEEE Transactions on Signal Processing, vol. 58, no. 8, pp. 4167–4181, Aug. 2010. doi: 10.1109/TSP.2010.2049264.

[2] M. Wang and A. Nehorai, “Coarrays, MUSIC, and the Cramér-Rao bound,” IEEE Transactions on Signal Processing, vol. 65, no. 4, pp. 933–946, Feb. 2017. doi: 10.1109/TSP.2016.2626255.

[3] A. Moffet, “Minimum-redundancy linear arrays,” IEEE Transactions on Antennas and Propagation, vol. 16, no. 2, pp. 172–175, Mar. 1968. doi: 10.1109/TAP.1968.1139138.

[4] P. Pal and P. P. Vaidyanathan, “Coprime sampling and the MUSIC algorithm,” in 2011 IEEE Digital Signal Processing Workshop and IEEE Signal Processing Education Workshop (DSP/SPE), Jan. 2011, pp. 289–294. doi: 10.1109/DSP-SPE.2011.5739227.


SLIDE 26

References II

[5] E. G. Larsson and P. Stoica, “High-resolution direction finding: The missing data case,” IEEE Transactions on Signal Processing, vol. 49, no. 5, pp. 950–958, May 2001. doi: 10.1109/78.917799.

[6] J. P. Burg, D. G. Luenberger, and D. L. Wenger, “Estimation of structured covariance matrices,” Proceedings of the IEEE, vol. 70, no. 9, pp. 963–974, Sep. 1982. doi: 10.1109/PROC.1982.12427.

[7] S. M. Kay, Fundamentals of Statistical Signal Processing, ser. Prentice Hall Signal Processing Series. Englewood Cliffs, NJ: Prentice-Hall PTR, 1993.

[8] H. L. Van Trees, Optimum Array Processing, ser. Detection, Estimation, and Modulation Theory, Part IV. New York: Wiley, 2002.


SLIDE 27

Questions?



SLIDE 29

Other Formulations

  • Measurement interpolation.
    ◮ Computationally expensive when the number of snapshots is large.
  • Low-rank Toeplitz matrix completion via nuclear norm minimization.
    ◮ Requires solving an SDP problem.