slide-1
SLIDE 1

Eigenspace estimation for source localization using large random matrices

Pascal Vallet(1)

Joint work with Philippe Loubaton(1) and Xavier Mestre(2)

(1) LabInfo IGM (CNRS-UMR 8049) / Université Paris-Est (2) Centre Tecnologic de Telecomunicacions de Catalunya (CTTC) / Barcelona

slide-2
SLIDE 2

Introduction Random matrix theory results Consistent estimation of eigenspace Numerical evaluations

Table of Contents

1. Introduction
2. Random matrix theory results
3. Consistent estimation of eigenspace
4. Numerical evaluations

2 / 68

slide-3
SLIDE 3

We assume that K source signals are received by an antenna array of M elements, with K < M. At time n, we receive y_n = A s_n + v_n, where

- A = [a(θ_1), ..., a(θ_K)] is the M × K matrix of "steering vectors", with a(θ_1), ..., a(θ_K) linearly independent,
- s_n = [s_{1,n}, ..., s_{K,n}]^T is the vector of non-observable transmitted signals, assumed deterministic,
- v_n is a Gaussian white noise (zero mean, covariance σ² I_M).

θ_1, ..., θ_K are the parameters of interest of the K sources; they can be frequencies, directions of arrival (DoA), etc.
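As a concrete illustration, the observation model above can be simulated in a few lines of numpy. The half-wavelength uniform linear array geometry and the numerical values (M, N, K, σ, the angles) are assumptions for this sketch; the slides keep a(θ) generic.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 20, 40, 2                     # antennas, snapshots, sources (K < M)
sigma = 0.5                             # noise standard deviation
thetas = np.array([-0.2, 0.35])         # illustrative DoA parameters (radians)

def a(theta):
    # Steering vector of a half-wavelength uniform linear array (assumed geometry),
    # normalized so that ||a(theta)|| = 1.
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta)) / np.sqrt(M)

# M x K steering matrix A = [a(theta_1), ..., a(theta_K)]
A = np.column_stack([a(t) for t in thetas])

# Source signals S_N (deterministic in the talk; drawn once here for the example)
S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

# Gaussian white noise V_N: zero mean, covariance sigma^2 I_M per snapshot
V = sigma * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

Y = A @ S + V                           # Y_N = A S_N + V_N, stacking y_1, ..., y_N
```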


slide-8
SLIDE 8

We collect N observations of the previous model, stacked in Y_N = [y_1, ..., y_N], and we can write Y_N = A S_N + V_N, with S_N and V_N built like Y_N. The goal is to infer the angles θ_1, ..., θ_K from Y_N. There are essentially two common approaches:

- Maximum Likelihood (ML) estimation,
- the subspace method.


slide-11
SLIDE 11

The ML estimator is given by

argmin_ω (1/N) Tr[ (I_M − A(ω)(A(ω)^* A(ω))^{-1} A(ω)^*) Y_N Y_N^* ],

where A(ω) is the matrix in which [θ_1, ..., θ_K] has been replaced by the variable ω = [ω_1, ..., ω_K]. This estimator is consistent when M, N → ∞; however, it clearly requires a multidimensional optimization. An alternative, requiring only a one-dimensional search, is provided by the subspace method.
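The ML criterion can be sketched directly in numpy: the trace term is the energy of the data left outside the candidate steering subspace. The array geometry, sizes, angles and σ below are illustrative assumptions, not values from the talk.

```python
import numpy as np

def ml_cost(omegas, Y, a):
    # ML criterion: (1/N) Tr[(I_M - P_A(omega)) Y_N Y_N^*], where P_A(omega) is the
    # orthogonal projector onto span{a(omega_1), ..., a(omega_K)}.
    M, N = Y.shape
    Aw = np.column_stack([a(w) for w in omegas])
    P = Aw @ np.linalg.solve(Aw.conj().T @ Aw, Aw.conj().T)
    return float(np.real(np.trace((np.eye(M) - P) @ (Y @ Y.conj().T / N))))

# Illustrative data (half-wavelength ULA; sizes, angles and sigma are assumptions)
rng = np.random.default_rng(5)
M, N, K, sigma = 20, 40, 2, 0.5
thetas = [-0.2, 0.35]
a = lambda t: np.exp(1j * np.pi * np.arange(M) * np.sin(t)) / np.sqrt(M)
A = np.column_stack([a(t) for t in thetas])
S = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
V = sigma * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
Y = A @ S + V

print(ml_cost(thetas, Y, a), ml_cost([0.0, 0.9], Y, a))   # true angles give a smaller cost
```

Minimizing this cost over all K angles at once is exactly the multidimensional search the slide mentions, which motivates the one-dimensional subspace alternative.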


slide-14
SLIDE 14

Assuming S_N has full rank K, then (1/N) A S_N S_N^* A^* has K non-null eigenvalues:

0 = λ_{1,N} = ... = λ_{M−K,N} < λ_{M−K+1,N} < ... < λ_{M,N}.

We denote by Π_N the projector onto the eigenspace associated with the eigenvalue 0. Since span{a(θ_1), ..., a(θ_K)} is the eigenspace associated with the non-null eigenvalues λ_{M−K+1,N}, ..., λ_{M,N}, it is possible to determine the (θ_k)_{k=1,...,K}.

MUSIC algorithm: the angles θ_1, ..., θ_K are the (unique) solutions of the equation η(θ) := a(θ)^* Π_N a(θ) = 0.
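A minimal numpy sketch of the MUSIC principle, using the noise-free matrix (1/N) A S_N S_N^* A^* so that Π_N is exact; the ULA geometry, sizes and angles are illustrative assumptions.

```python
import numpy as np

def noise_projector(R, K):
    # Pi_N: orthogonal projector onto the eigenspace of the M - K smallest
    # eigenvalues of the Hermitian matrix R (eigh returns ascending order).
    _, U = np.linalg.eigh(R)
    Un = U[:, :R.shape[0] - K]
    return Un @ Un.conj().T

def eta(theta, Pi, a):
    # eta(theta) = a(theta)^* Pi_N a(theta): zero exactly at the source angles.
    v = a(theta)
    return float(np.real(v.conj() @ Pi @ v))

# Illustrative setup (half-wavelength ULA, hypothetical sizes and angles)
rng = np.random.default_rng(1)
M, N, K = 12, 30, 2
thetas = [-0.2, 0.35]
a = lambda t: np.exp(1j * np.pi * np.arange(M) * np.sin(t)) / np.sqrt(M)
A = np.column_stack([a(t) for t in thetas])
S = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
R = (A @ S) @ (A @ S).conj().T / N      # (1/N) A S_N S_N^* A^*, noise-free

Pi = noise_projector(R, K)
print(eta(thetas[0], Pi, a))            # ~ 0 at a true angle
print(eta(0.9, Pi, a))                  # > 0 away from the sources
```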


slide-17
SLIDE 17

We denote by λ̂_{1,N} ≤ ... ≤ λ̂_{M,N} the eigenvalues of (1/N) Y_N Y_N^*, and by û_{1,N}, ..., û_{M,N} the associated eigenvectors. In practice, to estimate the angles, we only know Y_N, and we estimate the function η(θ) by

η̂_trad(θ) := a(θ)^* Π̂_N a(θ), with Π̂_N = Σ_{k=1}^{M−K} û_{k,N} û_{k,N}^*

the projector onto the eigenspace associated with λ̂_{1,N}, ..., λ̂_{M−K,N}. In the case where N → ∞ while M stays constant, this estimator is consistent because

‖ (1/N) Y_N Y_N^* − (1/N) A S_N S_N^* A^* ‖ → 0 a.s.

However, when M, N → ∞ while c_N = M/N → c > 0, the previous convergence fails and the estimator is no longer consistent.
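The traditional estimator η̂_trad can be written directly from the data Y_N; a sketch, with the ULA geometry and all numerical values again assumed. In the classical regime (M fixed, N large) it is close to η(θ), which is the consistency statement above; when M and N are of the same order it is not.

```python
import numpy as np

def eta_trad(theta, Y, K, a):
    # Traditional subspace estimate a(theta)^* hat-Pi_N a(theta), where hat-Pi_N
    # projects onto the eigenvectors of (1/N) Y Y^* with the M - K smallest eigenvalues.
    M, N = Y.shape
    _, U = np.linalg.eigh(Y @ Y.conj().T / N)
    Un = U[:, :M - K]
    w = Un.conj().T @ a(theta)
    return float(np.real(w.conj() @ w))

# Classical regime: M fixed, N large, so eta_trad at a true angle is close to 0
rng = np.random.default_rng(2)
M, N, K, sigma = 8, 2000, 2, 0.3
thetas = [-0.2, 0.35]
a = lambda t: np.exp(1j * np.pi * np.arange(M) * np.sin(t)) / np.sqrt(M)
A = np.column_stack([a(t) for t in thetas])
S = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
V = sigma * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
Y = A @ S + V
print(eta_trad(thetas[0], Y, K, a))     # small when N >> M
```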


slide-21
SLIDE 21

For notational convenience, we rewrite the main model as

Σ_N := Y_N/√N, B_N := A S_N/√N, W_N := V_N/√N,

so that Σ_N = B_N + W_N is the well-known Gaussian information-plus-noise model.

Problem: find a consistent estimator of the quadratic form d_N^* Π_N d_N in the case where sup_N ‖B_N‖ < ∞, sup_N ‖d_N‖ < ∞, W_N is Gaussian i.i.d. (zero mean, entries of variance σ²/N), and M, N → ∞ while c_N = M/N → c ∈ (0, 1).


slide-23
SLIDE 23

Some related works:

- Mestre (2008) derived an estimator of the previous quadratic form in the case where the source signal matrix S_N is Gaussian i.i.d. In that case, Σ_N has the same distribution as (A A^* + σ² I_M)^{1/2} X_N, with X_N Gaussian i.i.d.
- Couillet et al. (2010) extended this work to the case where S_N is i.i.d. but not necessarily Gaussian.

For the remainder of the talk, we define some shortcuts:

- "N → ∞" stands for the previous regime of convergence: M, N → ∞ while c_N = M/N → c ∈ (0, 1).
- For two sequences of r.v. (X_N), (Y_N), we write X_N ≍ Y_N when X_N − Y_N → 0 a.s. as N → ∞.


slide-28
SLIDE 28

Table of Contents

1. Introduction
2. Random matrix theory results
3. Consistent estimation of eigenspace
4. Numerical evaluations

slide-29
SLIDE 29

Let μ̂_N = (1/M) Σ_{k=1}^{M} δ_{λ̂_{k,N}} be the e.s.d. of Σ_N Σ_N^*, and let

m̂_N(z) := ∫_{R⁺} dμ̂_N(λ)/(λ − z) = (1/M) Tr(Σ_N Σ_N^* − z I_M)^{-1}, for z ∈ C∖R⁺,

be its Stieltjes transform.

Theorem (Dozier-Silverstein (2007)). As N → ∞, m̂_N(z) ≍ m_N(z), with m_N(z) the Stieltjes transform of a deterministic distribution μ_N and the unique solution of the equation m_N(z) = (1/M) Tr T_N(z), with

T_N(z) := [ B_N B_N^* / (1 + σ² c_N m_N(z)) − z(1 + σ² c_N m_N(z)) I_M + σ²(1 − c_N) I_M ]^{-1}.

The same result holds for quadratic forms of the resolvent (Hachem et al. (2010)): for d_N uniformly bounded in N,

d_N^* (Σ_N Σ_N^* − z I_M)^{-1} d_N ≍ d_N^* T_N(z) d_N.
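The Dozier-Silverstein equation can be solved numerically by a damped fixed-point iteration on m (a standard approach for such canonical equations; the damping factor, iteration count and tolerance below are ad hoc choices). With B_N = 0 the equation reduces to the Marchenko-Pastur case, which gives an easy sanity check against the empirical Stieltjes transform:

```python
import numpy as np

def m_N(z, BBstar, sigma2, cN, iters=400, damp=0.5):
    # Damped fixed-point iteration m <- (1/M) Tr T_N(z; m) for the
    # Dozier-Silverstein equation, for z with Im z > 0.
    M = BBstar.shape[0]
    I = np.eye(M)
    m = -1.0 / z                        # Stieltjes transform of delta_0 as a start
    for _ in range(iters):
        d = 1.0 + sigma2 * cN * m
        T = np.linalg.inv(BBstar / d - z * d * I + sigma2 * (1.0 - cN) * I)
        m = damp * m + (1.0 - damp) * (np.trace(T) / M)
    return m

# Sanity check with B_N = 0 (pure noise): compare with the empirical
# hat-m_N(z) = (1/M) Tr (Sigma Sigma^* - z I)^{-1} for one noise realization.
rng = np.random.default_rng(3)
M, N, sigma2 = 200, 400, 1.0
W = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * N)
z = 1.0 + 1.0j
m_emp = np.trace(np.linalg.inv(W @ W.conj().T - z * np.eye(M))) / M
m_det = m_N(z, np.zeros((M, M)), sigma2, M / N)
print(abs(m_emp - m_det))               # small: hat-m_N(z) is close to m_N(z)
```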


slide-32
SLIDE 32

The following result is a rephrasing of the result of Dozier-Silverstein (2007) on the support of μ_N. Let f_N(w) = (1/M) Tr(B_N B_N^* − w I_M)^{-1} and

φ_N(w) = w(1 − σ² c_N f_N(w))² + σ²(1 − c_N)(1 − σ² c_N f_N(w)).

Theorem. The support supp(μ_N) is the union of 1 ≤ Q ≤ K + 1 compact intervals,

supp(μ_N) = ∪_{q=1}^{Q} [x⁻_{q,N}, x⁺_{q,N}],

with {x⁻_{q,N}, x⁺_{q,N}}_{q=1,...,Q} the positive local extrema of φ_N, and x⁻_{1,N} > 0.
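This characterization of supp(μ_N) via the positive local extrema of φ_N can be checked numerically. Below, φ_N is evaluated from the eigenvalues of B_N B_N^* and its extrema are located by a sign change of the discrete derivative (the grid and its resolution are ad hoc choices). With B_N = 0 and σ = 1 the support should reduce to the Marchenko-Pastur interval [(1 − √c)², (1 + √c)²]:

```python
import numpy as np

def phi(w, lam, sigma2, cN):
    # phi_N(w) built from the eigenvalues lam of B_N B_N^*
    f = np.mean(1.0 / (lam - w))            # f_N(w) = (1/M) Tr (B_N B_N^* - w I)^{-1}
    g = 1.0 - sigma2 * cN * f
    return w * g**2 + sigma2 * (1.0 - cN) * g

def positive_extrema(lam, sigma2, cN, grid):
    # Values of phi_N at its local extrema on the grid, keeping the positive ones;
    # the grid must avoid the poles of f_N (the eigenvalues lam).
    vals = np.array([phi(w, lam, sigma2, cN) for w in grid])
    d = np.diff(vals)
    idx = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
    return sorted(v for v in vals[idx] if v > 0)

# Pure-noise check: B_N = 0, sigma = 1, c = 1/4. The two positive extrema of
# phi_N are the Marchenko-Pastur edges (1 - 1/2)^2 = 0.25 and (1 + 1/2)^2 = 2.25.
lam = np.zeros(50)
edges = []
for seg in (np.linspace(-3.0, -1e-3, 4000), np.linspace(1e-3, 3.0, 4000)):
    edges += positive_extrema(lam, 1.0, 0.25, seg)
print(sorted(edges))
```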


slide-35
SLIDE 35

[Figure: graph of φ(w) with its local extrema at w⁻_q, w⁺_q mapping to the support edges x⁻_q, x⁺_q for q = 1, 2, 3; the eigenvalues λ_1, ..., λ_4 and the support S are marked on the axes.]

slide-36
SLIDE 36

- Each eigenvalue 0, λ_{1,N}, ..., λ_{M,N} of B_N B_N^* belongs to an interval ]w⁻_{q,N}, w⁺_{q,N}[.
- An eigenvalue λ_{k,N} of B_N B_N^* is said to be associated with the cluster [x⁻_{q,N}, x⁺_{q,N}] if λ_{k,N} ∈ ]w⁻_{q,N}, w⁺_{q,N}[.
- Two eigenvalues of B_N B_N^* are "separated" if they are associated with two different clusters.
- If the eigenvalues of B_N B_N^* are sufficiently spaced, and σ and/or c_N are small enough, then all the eigenvalues of B_N B_N^* are separated, i.e. we have exactly Q = K + 1 disjoint compact intervals in the support of μ_N.


slide-39
SLIDE 39

Introduction Random matrix theory results Consistent estimation of eigenspace Numerical evaluations

Each eigenvalue 0,λ1,N,...,λM,N of BNB∗

N belongs to an

interval ]w−

q,N,w+ q,N[.

An eigenvalue λk,N of BNB∗

N is said to be associated to the

cluster [x−

q,N,x+ q,N] if λk,N ∈]w− q,N,w+ q,N[.

Two eigenvalues of BNB∗

N are "separated" if they are associated

with two different clusters. If the eigenvalues of BNB∗

N are sufficiently spaced, σ and/or cN

are small enough, all the eigenvalues of BNB∗

N are separated,

i.e we have exactly Q = K +1 disjoint compact intervals in the support of µN.

39 / 68

slide-40
SLIDE 40

Theorem. Assume 0 is separated from the other eigenvalues, i.e.

- 0 is the unique eigenvalue associated with [x⁻_{1,N}, x⁺_{1,N}],
- there exist t⁻_1, t⁺_1, t⁻_2 independent of N such that 0 < t⁻_1 < inf_N x⁻_{1,N} and t⁻_2 > t⁺_1 > sup_N x⁺_{1,N}.

Then, for all large N, w.p. 1,

λ̂_{1,N}, ..., λ̂_{M−K,N} ∈ ]t⁻_1, t⁺_1[ and λ̂_{M−K+1,N} > t⁻_2.

[Figure: the real line with t⁻_1 < x⁻_1 < x⁺_1 < t⁺_1 < t⁻_2 < x⁻_2 < x⁺_2.]


slide-43
SLIDE 43

Table of Contents

1. Introduction
2. Random matrix theory results
3. Consistent estimation of eigenspace
4. Numerical evaluations

slide-44
SLIDE 44

We want to estimate the quadratic form η_N := d_N^* Π_N d_N of the noise subspace projector, under the assumption that 0 is the unique eigenvalue associated with [x⁻_{1,N}, x⁺_{1,N}] for all large N. No assumption is made on the number of sources K, which may scale up with N or stay constant. From the residue theorem, we get

η_N = (1/2πi) ∮_{C⁻} d_N^* (B_N B_N^* − λ I_M)^{-1} d_N dλ,

with C⁻ a clockwise-oriented closed path enclosing 0 and no other eigenvalue of B_N B_N^*.

The fundamental point is that we can find such a path which can be parametrized by a function of m_N, the Stieltjes transform of μ_N.


slide-48
SLIDE 48

Consider the function

w_N(z) = z(1 + σ² c_N m_N(z))² − σ² c_N (1 + σ² c_N m_N(z)), z ∈ C∖R⁺.

The following limit exists (Dozier-Silverstein (2007)): for x ∈ R,

w_N(x) := lim_{z→x, Im{z}>0} w_N(z).

We consider the contour

C = { w_N(x) : x ∈ [t⁻_1, t⁺_1] } ∪ { w_N(x)^* : x ∈ [t⁻_1, t⁺_1] }.


slide-51
SLIDE 51

[Figure: the contour C in the complex plane (Re{w(x)} vs. Im{w(x)}), running from w(t⁻_1) to w(t⁺_1) and closed by conjugation, with w(x⁻_1) = w⁻_1 and w(x⁺_1) = w⁺_1; the eigenvalue λ_1 lies outside the contour.]

slide-52
SLIDE 52

This allows us to rewrite the previous integral as

η_N = (1/π) Im{ ∫_{t⁻_1}^{t⁺_1} d_N^* (B_N B_N^* − w_N(x) I_M)^{-1} d_N w′_N(x) dx }.

Dominated convergence can be applied to obtain

η_N = (1/π) lim_{y↓0} Im{ ∫_{t⁻_1}^{t⁺_1} d_N^* (B_N B_N^* − w_N(x+iy) I_M)^{-1} d_N w′_N(x+iy) dx }
    = lim_{y↓0} (1/2πi) ∮_{∂R⁻_y} d_N^* (B_N B_N^* − w_N(z) I_M)^{-1} d_N w′_N(z) dz,

with, for y > 0, ∂R⁻_y the clockwise-oriented boundary of the rectangle

R_y = { u + iv : u ∈ [t⁻_1, t⁺_1], v ∈ [−y, y] }.

The previous limit can be dropped, due to the holomorphy of d_N^* (B_N B_N^* − w_N(z) I_M)^{-1} d_N w′_N(z) on C∖supp(μ_N).

slide-55
SLIDE 55

Introduction Random matrix theory results Consistent estimation of eigenspace Numerical evaluations

The previous integrand can be written as gN(z) := d∗

N

  • BNB∗

N −wN(z)IM

−1 dNw′

N(z)

= d∗

NTN(z)dN

w′

N(z)

1+σ2cNmN(z). From the previous result, we have the following convergence mN(z) ≍ ˆ mN(z) = 1 M TrQN(z) and d∗

NTN(z)dN ≍ d∗ NQN(z)dN,

with QN(z) = (ΣNΣ∗

N −zIM)−1.

Let ˆ gN(z) := d∗

NQN(z)dN ˆ w′

N(z)

1+σ2cN ˆ mN(z). We can show that

  • 1

2πi

  • ∂R−

y

  • gN(z)− ˆ

gN(z)

  • dz
  • ≍ 0,

with ˆ wN(z) = z(1+σ2cN ˆ mN(z))2 −σ2cN(1+σ2cN ˆ mN(z)).

55 / 68


slide-58
SLIDE 58


The new estimator is thus given by

η̂_N^new = (1/2πi) ∮_{∂R_y^-} ĝ_N(z) dz.

This integral can be evaluated with the residue theorem. Since the noise cluster is separated by assumption, we deduce from the separation property that, for N large enough, with probability one,

λ̂_{1,N}, ..., λ̂_{M−K,N} ∈ R_y   and   λ̂_{M−K+1,N}, ..., λ̂_{M,N} ∉ R_y.

Using the argument principle, it is possible to show that, for N large enough, with probability one,

ω̂_{1,N}, ..., ω̂_{M−K,N} ∈ R_y   and   ω̂_{M−K+1,N}, ..., ω̂_{M,N} ∉ R_y,

with ω̂_{1,N} ≤ ... ≤ ω̂_{M,N} the solutions of the equation 1 + σ² c_N m̂_N(x) = 0. We obtain

η̂_N^new = Σ_{k=1}^{M} ξ̂_{k,N} d_N^* û_{k,N} û_{k,N}^* d_N,

with weights (ξ̂_{k,N}) depending on λ̂_{1,N}, ..., λ̂_{M,N} and ω̂_{1,N}, ..., ω̂_{M,N}.

58 / 68
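The roots ω̂_{1,N} ≤ ... ≤ ω̂_{M,N} of 1 + σ² c_N m̂_N(x) = 0 can be located by bisection: m̂_N(x) = (1/M) Σ_k 1/(λ̂_k − x) increases from −∞ to +∞ on each interval between consecutive sample eigenvalues, so there is exactly one root per interval plus one above λ̂_M. A sketch (the data matrix and σ² are illustrative; the slides do not prescribe a root-finding method):

```python
import numpy as np

def omega_roots(lam, sigma2, cN, iters=200):
    """All M solutions of 1 + sigma^2*cN*m_hat(x) = 0, where
    m_hat(x) = (1/M) sum_k 1/(lam_k - x) is the empirical Stieltjes
    transform. One root lies in each interval (lam_k, lam_{k+1}),
    and one more above lam_M; each is found by bisection."""
    lam = np.sort(np.asarray(lam, dtype=float))
    f = lambda x: 1.0 + sigma2 * cN * np.mean(1.0 / (lam - x))
    eps = 1e-9 * (lam[-1] - lam[0] + 1.0)
    hi = lam[-1] + 1.0
    while f(hi) <= 0.0:        # f -> 1 as x -> +inf, so this terminates
        hi += 1.0
    brackets = list(zip(lam[:-1] + eps, lam[1:] - eps)) + [(lam[-1] + eps, hi)]
    roots = []
    for a, b in brackets:      # f(a) < 0 < f(b) on each bracket
        for _ in range(iters):
            mid = 0.5 * (a + b)
            if f(mid) < 0.0:
                a = mid
            else:
                b = mid
        roots.append(0.5 * (a + b))
    return np.array(roots)

# demo on a small random data matrix
rng = np.random.default_rng(2)
M, N = 10, 20
X = rng.standard_normal((M, N)) / np.sqrt(N)
lam = np.linalg.eigvalsh(X @ X.T)
omega = omega_roots(lam, sigma2=1.0, cN=M / N)
print(len(omega))   # M roots, interlacing the sample eigenvalues
```

The interlacing of the ω̂'s with the λ̂'s is what the argument-principle step above exploits: R_y captures exactly as many ω̂'s as λ̂'s.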


slide-62
SLIDE 62


Table of Contents

1. Introduction
2. Random matrix theory results
3. Consistent estimation of eigenspace
4. Numerical evaluations

62 / 68

slide-63
SLIDE 63


We evaluate the estimator in the following context:

a(θ) = (1/√M) [1, e^{iπ sin θ}, ..., e^{i(M−1)π sin θ}]^T,

the source signals are AR(1) processes with correlation coefficient 0.9,

M = 20, N = 40, K = 2, and θ_1 = 16°, θ_2 = 18°.

63 / 68
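This simulation setup can be reproduced as follows. A sketch: the noise level σ² (equivalently, the SNR point), the AR(1) innovation scaling, and the random seed are illustrative choices not fixed by the slides:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, K = 20, 40, 2
theta = np.deg2rad([16.0, 18.0])
sigma2 = 10.0 ** (-20 / 10)   # e.g. SNR = 20 dB, since SNR = 10 log10(1/sigma^2)

# steering vectors a(theta) = (1/sqrt(M)) [1, e^{i pi sin(theta)}, ..., e^{i(M-1) pi sin(theta)}]
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(theta)[None, :]) / np.sqrt(M)

# AR(1) source signals with correlation coefficient 0.9 (scaled to unit stationary variance)
rho = 0.9
innov = rng.standard_normal((K, N))
S = np.zeros((K, N))
S[:, 0] = innov[:, 0]
for n in range(1, N):
    S[:, n] = rho * S[:, n - 1] + np.sqrt(1.0 - rho ** 2) * innov[:, n]

# observations y_n = A s_n + v_n with circular complex Gaussian noise of covariance sigma^2 I_M
V = np.sqrt(sigma2 / 2.0) * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
Y = A @ S + V

print(Y.shape)   # (20, 40)
```

With θ_1 and θ_2 only 2° apart and N = 2M, this is exactly the regime where the traditional subspace estimate degrades and the corrected estimator is expected to help.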


slide-67
SLIDE 67

Figure: Mean of the MSE of θ̂_1 and θ̂_2 versus SNR = 10 log_10(1/σ²). (SNR from 5 to 30 dB on the x-axis; MSE on a logarithmic scale, 10^{-3} to 10^2, on the y-axis; curves: Trad-MUSIC and G-MUSIC.)

67 / 68

slide-68
SLIDE 68


Thank you for your attention.

68 / 68