Performance analysis in wireless communications and large deviations of extreme eigenvalues of deformed random matrices (PowerPoint presentation)


SLIDE 1

Performance analysis in wireless communications and large deviations of extreme eigenvalues of deformed random matrices

Mylène Maïda

LM Orsay, Université Paris-Sud

Joint work with P. Bianchi, M. Debbah, J. Najim and F. Benaych-Georges, A. Guionnet

SLIDE 7

Outline of the talk

◮ Performance analysis of a test in wireless communications
  ◮ Presentation of the source detection problem
  ◮ Performances of the GLRT
  ◮ Study of the largest eigenvalue in a one-spike model
◮ Large deviations of extreme eigenvalues of some deformed models
  ◮ Presentation of the models
  ◮ General results
  ◮ Application to some classical models
◮ Conclusion

SLIDE 8

Source detection in cooperative spectrum sensing

Secondary sensors try to find a bandwidth to occupy. Those K sensors can share information, each of them receiving N samples of the signal.

SLIDE 12

Modelling of the statistical test

We want to test:

◮ Hypothesis H0: no signal. Secondary sensor number k receives a series of data yk(n) of length N of the form

  yk(n) = wk(n), n = 1, …, N,

  where wk(n) ∼ CN(0, σ²) is a white noise.

◮ Hypothesis H1: presence of a signal. The data received by sensor number k is now of the form

  yk(n) = hk s(n) + wk(n), n = 1, …, N,

  where s(n) is a Gaussian primary signal and hk is the fading coefficient associated with secondary sensor k.

As σ and h are unknown, the Neyman-Pearson test cannot be implemented.
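The two hypotheses above are easy to simulate; a minimal numpy sketch (function name and all parameter values are my illustrative choices, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_observations(K, N, sigma2, h=None):
    """Return the K x N observation matrix Y = [y_k(n)].

    h=None simulates H0 (pure noise); otherwise h is the length-K
    fading vector and a common primary signal s(n) ~ CN(0, 1) is added.
    """
    # w_k(n) ~ CN(0, sigma2): real and imaginary parts are
    # independent N(0, sigma2 / 2).
    w = np.sqrt(sigma2 / 2) * (rng.standard_normal((K, N))
                               + 1j * rng.standard_normal((K, N)))
    if h is None:
        return w                               # H0: y_k(n) = w_k(n)
    s = np.sqrt(0.5) * (rng.standard_normal(N)
                        + 1j * rng.standard_normal(N))
    return np.outer(h, s) + w                  # H1: y_k(n) = h_k s(n) + w_k(n)

Y0 = sample_observations(K=8, N=200, sigma2=1.0)                      # under H0
Y1 = sample_observations(K=8, N=200, sigma2=1.0, h=0.5 * np.ones(8))  # under H1
```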

SLIDE 16

Generalized Likelihood Ratio Test (GLRT)

We gather the observations in the matrix Y = [yk(n)], k = 1:K, n = 1:N.

◮ Under H0, the entries of Y are i.i.d. CN(0, σ²). The likelihood writes

  p0(Y; σ²) = (πσ²)^{−NK} exp(−(N/σ²) tr R),

  where R = (1/N) YY∗ is the empirical covariance matrix.

◮ Under H1, the column vectors of Y are i.i.d. CN(0, hh∗ + σ²I_K), where h = [h1, …, hK]^T is the fading vector corresponding to the K secondary sensors. The likelihood writes

  p1(Y; h, σ²) = (π^K det(hh∗ + σ²I_K))^{−N} exp(−N tr(R (hh∗ + σ²I_K)^{−1})).

SLIDE 20

Generalized Likelihood Ratio Test (GLRT)

Recall that σ² and h are unknown. The GLRT rejects H0 for high values of the statistic

  sup_{h,σ²} p1(Y; h, σ²) / sup_{σ²} p0(Y; σ²).

After some standard computations, we get the following test: reject H0 whenever the statistic

  TN := λmax / ((1/K) tr R)

is above a threshold γ, where λmax is the largest eigenvalue of R := (1/N) YY∗.
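The statistic TN is straightforward to compute from the observation matrix; a numpy sketch (sizes and seed are illustrative). Note that rescaling Y leaves TN unchanged, which is exactly why the test works without knowing σ²:

```python
import numpy as np

def glrt_statistic(Y):
    """T_N = lambda_max / ((1/K) tr R), with R = (1/N) Y Y*."""
    K, N = Y.shape
    R = Y @ Y.conj().T / N                 # empirical covariance, K x K
    lam_max = np.linalg.eigvalsh(R)[-1]    # largest eigenvalue of Hermitian R
    return lam_max / (np.trace(R).real / K)

rng = np.random.default_rng(1)
K, N = 50, 500                             # c = K/N = 0.1
Y = np.sqrt(0.5) * (rng.standard_normal((K, N))
                    + 1j * rng.standard_normal((K, N)))
t = glrt_statistic(Y)                      # under H0, close to (1 + sqrt(0.1))^2
# Scale invariance: the unknown noise level cancels out of T_N.
t_scaled = glrt_statistic(2.0 * Y)
```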

SLIDE 24

Performance analysis of the GLRT

For a given threshold γ, we define:

◮ the Type I error (probability of false alarm) P0[TN > γ], the probability of deciding H1 when H0 holds;

◮ the Type II error P1[TN < γ], the probability of deciding H0 when H1 holds (N.B. the Type II error depends on h and σ²).

The Receiver Operating Characteristic (ROC curve) is the set of points (Type I error, Type II error) for all possible thresholds:

  ROC := {(P0[TN > γ], P1[TN < γ]) : γ ∈ R+}.

⇒ We study the ROC curve in the asymptotic regime K → ∞, N → ∞, K/N → c ∈ (0, 1).

SLIDE 30

Asymptotic behavior of TN under H0

Recall that R := (1/N) YY∗ with Y having i.i.d. CN(0, σ²) entries.

◮ By the law of large numbers, (1/K) tr R → σ² under H0 as N → ∞.

◮ λmax → σ²(1 + √c)² under H0 as N → ∞, the right edge of the Marchenko-Pastur distribution, and has Tracy-Widom fluctuations.

◮ We get that, if TN = λmax / ((1/K) tr R) and cN = K/N, then

  T̃N = N^{2/3} (TN − (1 + √cN)²) / ((1 + √cN)(1 + 1/√cN)^{1/3})

converges in distribution to a Tracy-Widom distribution.

⇒ This determines the asymptotic threshold γ for a fixed probability of false alarm.
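Tracy-Widom quantiles have no closed form; in practice one can also calibrate the threshold by Monte Carlo under H0, which must land at the same place. A sketch (sizes, seed, and the 5% PFA are my illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def t_statistic(K, N):
    """One draw of T_N under H0 (sigma^2 = 1; T_N is scale-invariant anyway)."""
    Y = np.sqrt(0.5) * (rng.standard_normal((K, N))
                        + 1j * rng.standard_normal((K, N)))
    R = Y @ Y.conj().T / N
    return np.linalg.eigvalsh(R)[-1] / (np.trace(R).real / K)

K, N, pfa = 20, 200, 0.05
samples = np.array([t_statistic(K, N) for _ in range(400)])
gamma = np.quantile(samples, 1 - pfa)   # empirical threshold for PFA ~ 5%

# gamma sits slightly above the Marchenko-Pastur right edge (1 + sqrt(K/N))^2,
# at distance O(N^{-2/3}), as the Tracy-Widom fluctuations predict.
edge = (1 + np.sqrt(K / N)) ** 2
```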

SLIDE 35

Asymptotic behavior of TN under H1

Recall that R := (1/N) YY∗ with

  Y = (hh∗ + σ²I_K)^{1/2} X_{K×N}, with X_{i,j} i.i.d. ∼ CN(0, 1).

Hypothesis: ρ := ‖h‖²/σ² > √c.

◮ λmax converges out of the bulk of the Marchenko-Pastur distribution [Baik-Silv-06]:

  λmax → σ²(1 + ρ)(1 + c/ρ) under H1 as N → ∞.

◮ Consequently, TN = λmax / ((1/K) tr R) converges to

  λspiked := (1 + ρ)(1 + c/ρ) > (1 + √c)² =: λ+.

SLIDE 39

Analysis of the ROC curve

As TN → λ+ under H0, P0[TN > γ] is a rare event whenever γ > λ+. As TN → λspiked under H1, P1[TN < γ] is a rare event whenever γ < λspiked.

We show that under H0 (resp. H1), TN satisfies a large deviations principle in the scale N with rate function E0 (resp. E1). Otherwise stated,

  P0[TN > γ] ≃ e^{−N E0(γ)}, P1[TN < γ] ≃ e^{−N E1(γ)}.

The set of couples (E0(γ), E1(γ)) is called the asymptotic error exponent curve.

SLIDE 44

A few words on the proof

TN = λmax / (K^{−1} tr R)

◮ The denominator of TN is strongly localised around its limit σ²:

  lim_{N→∞} (1/N) log P{K^{−1} tr R ∉ [σ² − δ, σ² + δ]} = −∞.

  The large deviations of TN are therefore governed by those of λmax.

◮ Deviations of λmax under H0 (cf. Ben Arous, Dembo, Guionnet); deviations of λmax under H1 ("spiked" model) (cf. Maïda).

SLIDE 48

Comparison with a reference test

A statistic that drew a lot of attention in this context is the Extreme Eigenvalue Ratio (EER), λmax/λmin. One can do a very similar analysis, compare the error exponent curves, and show that the GLRT is more powerful than the EER.

SLIDE 57

Large deviation principles: the model

Xn diagonal, deterministic, with eigenvalues λ^n_1, …, λ^n_n such that

  (H1) (1/n) Σ_{i=1}^n δ_{λ^n_i} → µX, λ^n_1 → a, λ^n_n → b,

with µX compactly supported, with edges of support a and b.

Rn finite rank perturbation:

  X̃n = Xn + Rn = Xn + Σ_{j=1}^r θj U^n_j (U^n_j)∗,

with θ1 ≥ … ≥ θ_{r0} > 0 > θ_{r0+1} ≥ … ≥ θr, and if G = (g1, …, gr) is a random vector satisfying E(e^{α Σ_i |g_i|²}) < ∞ for some α > 0 (and not charging any hyperplane), then G^n_i is a random vector whose entries are 1/√n times independent copies of G, and U^n_i is obtained by orthonormalization.
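A minimal numerical instance of this deformed model, with choices that are entirely mine for illustration: µX uniform on [0, 1] (so a = 0, b = 1), r = r0 = 1, one positive spike θ. For this µX the Stieltjes transform is G(z) = log(z/(z − 1)), so the outlier eigenvalue of X̃n should sit near the z > 1 solving G(z) = 1/θ:

```python
import numpy as np

rng = np.random.default_rng(4)
n, theta = 2000, 2.0

# X_n: deterministic diagonal matrix whose empirical spectrum -> mu_X = U[0, 1].
lam = (np.arange(n) + 0.5) / n
# U^n_1: a normalized Gaussian vector, i.e. the orthonormalization of a single
# G^n_1 with i.i.d. standard Gaussian entries (r = 1, so no Gram-Schmidt needed).
u = rng.standard_normal(n)
u /= np.linalg.norm(u)

# Deformed matrix X~_n = X_n + theta * u u^T and its largest eigenvalue.
Xt = np.diag(lam) + theta * np.outer(u, u)
lam_max = np.linalg.eigvalsh(Xt)[-1]

# Typical outlier location: solve log(z / (z - 1)) = 1 / theta for z > 1,
# i.e. z = 1 / (1 - exp(-1 / theta)), about 2.54 for theta = 2.
z_typ = 1 / (1 - np.exp(-1 / theta))
```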

SLIDE 63

Large deviation principle: the statement

Theorem
The law of the r0 largest eigenvalues of X̃n satisfies a LDP in the scale n with a good rate function L. It has a unique minimizer towards which we have almost sure convergence.

This means that for any open set O ⊂ R^{r0},

  lim inf_{n→∞} (1/n) log P[(λ̃1, …, λ̃_{r0}) ∈ O] ≥ − inf_O L,

and for any closed set F ⊂ R^{r0},

  lim sup_{n→∞} (1/n) log P[(λ̃1, …, λ̃_{r0}) ∈ F] ≤ − inf_F L.

Remark: the minimizers depend on G only through its covariance matrix.

Important generalisation: we can relax the hypothesis on the extreme eigenvalues, provided the law of G/√n satisfies a LDP.

SLIDE 69

LDP for Wishart matrices

Consider the i.i.d. case when Xn = 0. If Gn are n × r matrices whose rows are i.i.d. copies of G and Θ = diag(θ1, …, θr), we can study the eigenvalues of Wn = (1/n) Gn∗ Θ Gn (see Fey, van der Hofstad, Klok for Θ = Id).

Theorem
The law of the eigenvalues of Wn satisfies a LDP in the scale n with a good rate function.

When G is a Gaussian vector with positive definite covariance matrix, the rate function can be made very explicit. In the standard case,

  L(α1, …, αr) = (1/2) Σ_{i=1}^r (αi/θi − 1 − log(αi/θi)).

From there it is easy to deduce the rate function for the largest eigenvalue:

  Lmax(x) = (1/2)(x − 1 − log x) if x ≥ 1,
  Lmax(x) = (r/2)(x − 1 − log x) if x ∈ (0, 1).
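The explicit rate function above is trivial to evaluate; a sketch in numpy (the function name is mine):

```python
import numpy as np

def L_max(x, r):
    """Rate function of the largest eigenvalue of W_n, standard Gaussian case:
    (1/2)(x - 1 - log x) for x >= 1 and (r/2)(x - 1 - log x) for 0 < x < 1."""
    if x <= 0:
        return np.inf                      # eigenvalues of W_n are positive
    base = x - 1 - np.log(x)
    return 0.5 * base if x >= 1 else 0.5 * r * base

# L_max vanishes only at x = 1, the almost-sure limit of the largest
# eigenvalue in the standard case, and is positive elsewhere. Deviations
# below the limit carry the extra factor r: pushing the largest eigenvalue
# below x < 1 forces all r eigenvalues below x.
```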

SLIDE 74

LDP for perturbed Coulomb gases

We consider the case when Xn is now random, with a law of density ∼ e^{−n tr V(X)}. We assume the U^n_i's to be a family of orthonormal vectors, either deterministic or independent of Xn.

Theorem
Under appropriate assumptions on V, for any fixed k, the law of the k largest eigenvalues of X̃n satisfies a large deviation principle with a good rate function.

Remark: we first condition on the deviations of the eigenvalues of Xn, so that we can consider those as outliers.

SLIDE 77

Rough sketch of proof

  fn(z) = det( [G^n_{i,j}(z)]_{i,j=1}^r − diag(θ1^{−1}, …, θr^{−1}) ),

with G^n_{i,j}(z) = ⟨U^n_i, (z − Xn)^{−1} U^n_j⟩.

If

  K^n_{i,j}(z) = ⟨G^n_i, (z − Xn)^{−1} G^n_j⟩ = (1/n) Σ_{k=1}^n g_i(k) g_j(k) / (z − λk)

and

  C^n_{i,j} = (1/n) Σ_{k=1}^n g_i(k) g_j(k),

then fn(z) = P_{Θ,r}(K^n(z), C^n).
slide-78
SLIDE 78

18

Conclusion

slide-79
SLIDE 79

18

Conclusion

Can we use large deviation principles of this type to analyse the performance for some other models relevant in wireless communication context ?