Principal components and linear mixed models

Zhou Fan
Yale University, Statistics and Data Science
(joint w/ Iain Johnstone, Yi Sun, Zhichao Wang)
Random Matrices and Related Topics, Korea Institute for Advanced Study, Seoul, May 6, 2019

Linear mixed models capture multiple “levels” of variation in data. They were introduced by R. A. Fisher in 1918 to study genetic and non-genetic components of variance in quantitative traits. This talk will describe some applications of random matrix theory to understand spectral behavior and principal components analysis for classical covariance estimates in these models.

Outline

  • Model and motivation
  • Results on spectral behavior
  • A few general tools

Model and motivation

Example: A twin study

Measure p quantitative traits in n/2 pairs of twins. For i = 1, ..., n/2, model this with two “levels” of variation as
$$Y_{i,1} = \alpha_i + \varepsilon_{i,1} \in \mathbb{R}^p, \qquad Y_{i,2} = \alpha_i + \varepsilon_{i,2} \in \mathbb{R}^p.$$
Here, $\alpha_i \in \mathbb{R}^p$ is the shared genetic effect in the ith twin pair, and $\varepsilon_{i,1}, \varepsilon_{i,2} \in \mathbb{R}^p$ are individual variations. Assume these are random and independent,
$$\alpha_i \overset{\text{iid}}{\sim} N(0, \Sigma_A), \qquad \varepsilon_{i,j} \overset{\text{iid}}{\sim} N(0, \Sigma_E).$$
Only the $Y_{i,j}$'s (not the $\alpha_i$'s or $\varepsilon_{i,j}$'s) are observed. From this, we wish to separately understand $\Sigma_A$ and $\Sigma_E$.
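Why $\Sigma_A$ and $\Sigma_E$ are separately identifiable is already visible at the level of second moments; a short computation (not spelled out on the slide, but immediate from the model above) gives
$$\operatorname{Cov}(Y_{i,j}) = \Sigma_A + \Sigma_E, \qquad \operatorname{Cov}(Y_{i,1}, Y_{i,2}) = \Sigma_A, \qquad \operatorname{Cov}(Y_{i,1} - Y_{i,2}) = 2\,\Sigma_E,$$
so cross-twin covariances isolate $\Sigma_A$ and within-pair differences isolate $\Sigma_E$.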

Example: Mutations in fruit flies [McGuigan et al ’14]

[Diagram: Ancestors → Inbreeding → Offspring]

In inbred lines of fruit flies, how much phenotypic variation arises due to genetic mutations across the generations? Model traits (gene expression measurements) in the jth offspring of the ith inbred line as $Y_{i,j} = \alpha_i + \varepsilon_{i,j}$. The covariance $\Sigma_A$ of the $\alpha_i$'s is the mutational variation of interest.


Example: Genome-wide association studies

In n individuals, measure:
  • p quantitative traits, $Y \in \mathbb{R}^{n \times p}$
  • genotypes {0, 1, 2} at m SNPs, $X \in \mathbb{R}^{n \times m}$

Fisher's infinitesimal model: $Y = XA + E$
  • $A \in \mathbb{R}^{m \times p}$ has independent rows $\alpha_1, \ldots, \alpha_m$. Each $\alpha_i \in \mathbb{R}^p$ is the contribution of the ith SNP to the observed traits.
  • $E \in \mathbb{R}^{n \times p}$ has independent rows $\varepsilon_1, \ldots, \varepsilon_n$. Each $\varepsilon_j \in \mathbb{R}^p$ is the residual trait variation in the jth individual.

The covariance $\Sigma_A$ of the $\alpha_i$'s is the (additive) genetic covariance. The relative size of $\Sigma_A$ to $\Sigma_E$ provides a measure of heritability.
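In the notation of the general model on the next slide, this example is the case k = 2 (an identification implicit in the slides rather than stated on this one):
$$Y = XA + E = U_1 A_1 + U_2 A_2, \qquad U_1 = X, \quad U_2 = \mathrm{Id}_n, \quad n_1 = m, \quad n_2 = n, \quad \Sigma_1 = \Sigma_A, \quad \Sigma_2 = \Sigma_E.$$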


The linear mixed model

A general model with k levels of variation is
$$Y = U_1 A_1 + \cdots + U_k A_k \in \mathbb{R}^{n \times p}$$
  • $A_1, \ldots, A_k$ are random and unobserved, with $n_1, \ldots, n_k$ independent rows distributed as $N(0, \Sigma_1), \ldots, N(0, \Sigma_k)$.
  • $U_1, \ldots, U_k$ are known, deterministic, and specified by the experimental design. E.g. for the twin study, k = 2 and
$$U_1 = \begin{pmatrix} 1 & & \\ 1 & & \\ & \ddots & \\ & & 1 \\ & & 1 \end{pmatrix} \in \mathbb{R}^{n \times (n/2)}, \qquad U_2 = \mathrm{Id}.$$
($k = 1$, $U_1 = \mathrm{Id}$ is the setting of n independent observations in $\mathbb{R}^p$.)
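Equivalently (a standard identity, not written on the slide), the Gaussian assumptions say that the np entries of Y have a Kronecker-structured covariance,
$$\operatorname{Cov}\big(\operatorname{vec}(Y)\big) = \sum_{r=1}^{k} \Sigma_r \otimes \big(U_r U_r^\top\big),$$
where vec stacks the columns of Y; for the twin design, $U_1 U_1^\top$ is block-diagonal with $2 \times 2$ blocks of ones and $U_2 U_2^\top = \mathrm{Id}$.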


The MANOVA covariance estimator

For $r \in \{1, \ldots, k\}$, a classical estimator for $\Sigma_r$ is the MANOVA estimator. This is a matrix
$$\hat\Sigma = Y^\top B Y.$$
Here, $B \in \mathbb{R}^{n \times n}$ is symmetric and chosen so that $\mathbb{E}[\hat\Sigma] = \Sigma_r$. Some examples:
  • For k = 1 and independent observations, we take $B = \frac{1}{n}\,\mathrm{Id}$. This gives the usual sample covariance matrix $\hat\Sigma = \frac{1}{n} Y^\top Y$.
  • For k = 2 and the twin study, we take $B = \frac{1}{n}(\pi - \pi^\perp)$, where $\pi, \pi^\perp$ are the orthogonal projections onto the column span of $U_1$ and its complement.
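As a concrete numerical illustration, here is a minimal sketch of the twin-study MANOVA estimator computed directly from this definition; the sample sizes and the particular $\Sigma_A$, $\Sigma_E$ below are arbitrary choices for illustration, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Twin-study design: n/2 pairs, p traits (illustrative sizes and covariances only).
n_pairs, p = 300, 4
n = 2 * n_pairs
Sigma_A = np.diag([3.0, 1.0, 1.0, 1.0])   # "genetic" covariance (arbitrary example)
Sigma_E = np.eye(p)                       # "individual" covariance (arbitrary example)

# Design matrix U1 maps the n/2 pair effects to the n individuals; U2 = Id is implicit.
U1 = np.kron(np.eye(n_pairs), np.ones((2, 1)))
A1 = rng.multivariate_normal(np.zeros(p), Sigma_A, size=n_pairs)   # rows alpha_i
A2 = rng.multivariate_normal(np.zeros(p), Sigma_E, size=n)         # rows eps_{i,j}
Y = U1 @ A1 + A2                          # observed n x p data matrix

# MANOVA estimator of Sigma_A: B = (pi - pi_perp)/n, pi = projection onto col span of U1.
pi = U1 @ np.linalg.pinv(U1)
B = (pi - (np.eye(n) - pi)) / n
Sigma_A_hat = Y.T @ B @ Y
print(np.round(Sigma_A_hat, 2))           # noisy, approximately unbiased for Sigma_A
```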


Substituting $Y = \sum_r U_r A_r$, we may express the estimator as
$$\hat\Sigma = \sum_{r=1}^{k} \sum_{s=1}^{k} H_r^\top G_r^\top F_{rs} G_s H_s$$
  • $H_r \equiv \Sigma_r^{1/2}$ and $F_{rs} \equiv U_r^\top B U_s$ are deterministic
  • $G_r$ are independent and random, with i.i.d. Gaussian entries

This is the random matrix model that I'll discuss.
  1. What is the bulk eigenvalue distribution for large $n, n_1, \ldots, n_k, p$?
  2. What is the behavior of principal components in spiked settings?


Aside: The case of isotropic noise

A simple statistical null hypothesis in this model is $\Sigma_r = \sigma_r^2\,\mathrm{Id}$ for every $r \in \{1, \ldots, k\}$, i.e. the distribution of every random effect is isotropic noise. This setting is special: we may write
$$\hat\Sigma = G^\top F G \qquad \text{where} \qquad G = \begin{pmatrix} G_1 \\ \vdots \\ G_k \end{pmatrix}, \quad F = (\sigma_r F_{rs} \sigma_s)_{r,s=1}^{k}.$$
Here, $\hat\Sigma$ is related to the sample covariance model. We've obtained more refined results in this setting than in the general case, due to a known local law [Knowles, Yin '17].
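For completeness, the one-line reduction behind this display (my own check, using the expression for $\hat\Sigma$ above): with $H_r = \sigma_r\,\mathrm{Id}$,
$$\hat\Sigma = \sum_{r,s} H_r^\top G_r^\top F_{rs} G_s H_s = \sum_{r,s} G_r^\top (\sigma_r F_{rs} \sigma_s) G_s = G^\top F G.$$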


Results on spectral behavior


Empirical spectral distribution

Theorem (Fan, Johnstone '16)
As $n, n_1, \ldots, n_k, p \to \infty$ proportionally, the e.s.d. $\hat\mu$ of $\hat\Sigma$ is approximated (weakly a.s.) by a deterministic equivalent measure $\mu_0$.

This has Stieltjes transform $m_0(z)$ characterized by
$$a_r(z) = -n_r^{-1}\operatorname{Tr}\big[(z\,\mathrm{Id} + b \cdot \Sigma)^{-1}\Sigma_r\big],$$
$$b_r(z) = -n_r^{-1}\operatorname{Tr}_{rr}\big[(\mathrm{Id} + F D(a))^{-1} F\big],$$
$$m_0(z) = -p^{-1}\operatorname{Tr}\big[(z\,\mathrm{Id} + b \cdot \Sigma)^{-1}\big]$$
for $r = 1, \ldots, k$. Here, $b \cdot \Sigma \equiv b_1(z)\Sigma_1 + \cdots + b_k(z)\Sigma_k$, $F \equiv (F_{rs})_{r,s=1}^{k}$, $D(a) \equiv \operatorname{diag}(a_1(z)\,\mathrm{Id}_{n_1}, \ldots, a_k(z)\,\mathrm{Id}_{n_k})$, and $\operatorname{Tr}_{rr}$ is the trace of the $(r,r)$ block, of size $n_r$.
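A minimal numerical sketch of these fixed-point equations (a naive damped iteration of my own; the function name, damping, and stopping rule are not from the paper, and convergence of such a simple iteration is not guaranteed). Assuming the usual Stieltjes-transform convention, the density of $\mu_0$ near $x$ is then approximately $\operatorname{Im} m_0(x + i\eta)/\pi$ for small $\eta > 0$.

```python
import numpy as np

def m0_fixed_point(z, Sigmas, F, ns, p, n_iter=5000, damp=0.3, tol=1e-10):
    """Naive damped iteration for a_r(z), b_r(z), m_0(z) in the equations above.

    Sigmas : list of k population covariances Sigma_r, each p x p
    F      : sum(ns) x sum(ns) matrix with blocks F_rs = U_r^T B U_s
    ns     : list of block sizes n_1, ..., n_k
    """
    k = len(Sigmas)
    edges = np.concatenate(([0], np.cumsum(ns)))      # block boundaries inside F
    a = np.zeros(k, dtype=complex)
    b = np.zeros(k, dtype=complex)
    Ip, In = np.eye(p), np.eye(int(sum(ns)))
    for _ in range(n_iter):
        bSigma = sum(b[r] * Sigmas[r] for r in range(k))
        Rz = np.linalg.inv(z * Ip + bSigma)
        # a_r(z) = -(1/n_r) Tr[(z Id + b.Sigma)^{-1} Sigma_r]
        a_new = np.array([-np.trace(Rz @ Sigmas[r]) / ns[r] for r in range(k)])
        # D(a) = diag(a_1 Id_{n_1}, ..., a_k Id_{n_k}); F D(a) scales the columns of F
        d = np.concatenate([np.full(ns[r], a_new[r]) for r in range(k)])
        M = np.linalg.inv(In + F * d[None, :]) @ F
        # b_r(z) = -(1/n_r) Tr_rr[(Id + F D(a))^{-1} F]
        b_new = np.array([-np.trace(M[edges[r]:edges[r + 1], edges[r]:edges[r + 1]]) / ns[r]
                          for r in range(k)])
        if max(np.abs(a_new - a).max(), np.abs(b_new - b).max()) < tol:
            a, b = a_new, b_new
            break
        a = damp * a_new + (1 - damp) * a
        b = damp * b_new + (1 - damp) * b
    bSigma = sum(b[r] * Sigmas[r] for r in range(k))
    m0 = -np.trace(np.linalg.inv(z * Ip + bSigma)) / p
    return m0, a, b
```

For example, with the twin design one would take k = 2, `Sigmas = [Sigma_A, Sigma_E]`, `ns = [n // 2, n]`, and `F` assembled from the blocks $U_r^\top B U_s$ with $B = (\pi - \pi^\perp)/n$ as above.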

[Figure: histogram of sample eigenvalues (frequency vs. eigenvalue). Spectrum of $\hat\Sigma_A$ in a twin study design, 300 twin pairs, 300 traits.]

[Figure: histogram of sample eigenvalues (frequency vs. eigenvalue). Spectrum of $\hat\Sigma_A$ in a twin study design, 150 twin pairs, 600 traits.]


A free approximation

The measure $\mu_0$ is the $\tau$-law of an operator
$$w = \sum_{r=1}^{k} \sum_{s=1}^{k} h_r^* g_r^* f_{rs} g_s h_s$$
in a non-commutative probability space $(\mathcal{A}, \tau)$:
  • $\{h_r\}_{r=1}^{k}$ and $\{f_{rs}\}_{r,s=1}^{k}$ have the same joint moments under $\tau$ as $\{H_r\}_{r=1}^{k}$ and $\{F_{rs}\}_{r,s=1}^{k}$ under the normalized matrix trace.
  • Each $g_r^* g_r$ has Marchenko-Pastur moments under $\tau$.
  • The families $\{h_r : r = 1, \ldots, k\}$, $\{f_{rs} : r, s = 1, \ldots, k\}$, and elements $g_1, \ldots, g_k$ are free with amalgamation over a diagonal sub-algebra of projections. [Benaych-Georges '09]

We show that this approximation is asymptotically correct, due to the independence and rotational invariance of $G_1, \ldots, G_k$.


Computation of fixed-point equations

In $(\mathcal{A}, \tau)$, let $\tau_{\mathcal{H}} : \mathcal{A} \to \mathcal{H}$ be the conditional expectation onto the subalgebra $\mathcal{H} = \langle h_1, \ldots, h_k \rangle \subset \mathcal{A}$, and similarly for $\tau_{\mathcal{G}}, \tau_{\mathcal{F}}$. Let
$$w = \sum_{r=1}^{k}\sum_{s=1}^{k} h_r^* g_r^* f_{rs} g_s h_s, \qquad v = \sum_{r=1}^{k}\sum_{s=1}^{k} g_r^* f_{rs} g_s, \qquad u = \sum_{r=1}^{k}\sum_{s=1}^{k} f_{rs}.$$
We relate the $\tau_{\mathcal{H}}$ Stieltjes transform of $w$, the $\tau_{\mathcal{G}}$ Stieltjes transform of $v$, and the $\tau_{\mathcal{F}}$ Stieltjes transform of $u$ using conditional cumulant relations, and compute $\tau \circ \tau_{\mathcal{H}}((z - w)^{-1})$. [Speicher, Vargas '12]


All eigenvalues stick to the support

Theorem (Fan, Sun, Wang '19)
For any fixed $\delta > 0$, almost surely for all large n,
$$\operatorname{spec}(\hat\Sigma) \subset \operatorname{supp}(\mu_0)_\delta \equiv \{x \in \mathbb{R} : \operatorname{dist}(x, \operatorname{supp}(\mu_0)) < \delta\}.$$
We get this from a strong asymptotic freeness result for GOE and deterministic matrices, by embedding $G_1, \ldots, G_k$ into GOE.

Theorem (Fan, Johnstone '17)
When $\Sigma_r = \sigma_r^2\,\mathrm{Id}$ for every r, the extremal eigenvalue at each “regular” edge of $\operatorname{supp}(\mu_0)$ has GOE Tracy-Widom fluctuations.

This is a distinct analytic argument specific to the isotropic setting. Establishing Tracy-Widom for general $\Sigma_r$ is interesting and open.


Spiked model and outliers

For understanding principal components analysis in these models, we are interested in spiked settings
$$\Sigma_r = \mathring\Sigma_r + \Gamma_r \Gamma_r^\top$$
where $\Gamma_r \in \mathbb{R}^{p \times \ell_r}$, so that $\Gamma_r \Gamma_r^\top$ is a perturbation of fixed rank $\ell_r$.

[Figure: mean eigenvalue locations (frequency vs. eigenvalue) for $\hat\Sigma_A$ in a twin-study design, where each $\Sigma_A$ and $\Sigma_E$ is a rank-1 perturbation of $\mathrm{Id}$. The spike-to-outlier mapping is not 1-to-1, and $\Sigma_E$ produces outliers in $\hat\Sigma_A$.]


Define $\mu_0$ by the bulk components $\mathring\Sigma_1, \ldots, \mathring\Sigma_k$. Recall $b_1(z), \ldots, b_k(z)$ from the fixed-point equations for $\mu_0$, and set
$$T(z) = \mathrm{Id} + \Big(\Gamma_r^\top \big(z\,\mathrm{Id} + b \cdot \mathring\Sigma\big)^{-1} \Gamma_s\, b_s(z)\Big)_{r,s=1}^{k} \in \mathbb{C}^{\ell_+ \times \ell_+}$$
where $\ell_+ = \ell_1 + \cdots + \ell_k$ and $b \cdot \mathring\Sigma = b_1 \mathring\Sigma_1 + \cdots + b_k \mathring\Sigma_k$.

Theorem (Fan, Sun, Wang '19)
Eigenvalues $\hat\lambda$ of $\hat\Sigma$ separated from $\operatorname{supp}(\mu_0)$ are in correspondence with roots $\lambda$ of $0 = \det T(\lambda)$, such that $\hat\lambda - \lambda \to 0$. If $\lambda$ is an isolated root, then the eigenvector $\hat v$ and unit vector $u \in \ker T(\lambda)$ satisfy
$$(\Gamma_1^\top \hat v, \ldots, \Gamma_k^\top \hat v) - \alpha^{-1/2} u \to 0.$$

For $\mathring\Sigma_r = \sigma_r^2\,\mathrm{Id}$, we established also in [Fan, Johnstone, Sun '18] a Gaussian CLT for $\hat\lambda - \lambda$.
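A schematic numerical sketch of how these predicted outlier locations might be found (this grid-scan approach, the function names, and the interface are my own, not the paper's algorithm): evaluate $\det T(\lambda)$ on a real grid outside $\operatorname{supp}(\mu_0)$ and look for sign changes. The callable `b_of` is assumed to return $(b_1(\lambda), \ldots, b_k(\lambda))$, e.g. from a solver like `m0_fixed_point` above evaluated just off the real axis.

```python
import numpy as np

def detT(lam, b_of, Gammas, Sigmas0):
    """det T(lambda) for lambda outside supp(mu_0).

    b_of    : callable lam -> (b_1(lam), ..., b_k(lam)), approximately real off supp(mu_0)
    Gammas  : list of spike matrices Gamma_r, each p x ell_r
    Sigmas0 : list of bulk components Sigma0_r, each p x p
    """
    k, p = len(Gammas), Sigmas0[0].shape[0]
    b = b_of(lam)
    core = np.linalg.inv(lam * np.eye(p) + sum(b[r] * Sigmas0[r] for r in range(k)))
    blocks = [[Gammas[r].T @ core @ Gammas[s] * b[s] for s in range(k)] for r in range(k)]
    T = np.eye(sum(G.shape[1] for G in Gammas)) + np.block(blocks)
    return np.linalg.det(T)

def predicted_outliers(grid, b_of, Gammas, Sigmas0):
    """Brackets [grid[i], grid[i+1]] where det T changes sign (candidate outlier roots)."""
    vals = np.array([np.real(detT(x, b_of, Gammas, Sigmas0)) for x in grid])
    idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    return [(grid[i], grid[i + 1]) for i in idx]
```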


Bias and phase transition of eigenvalues

The number of outlier eigenvalues is predicted by $|\{\lambda \notin \operatorname{supp}(\mu_0) : 0 = \det T(\lambda)\}|$. This is 0 if all population spikes are small. More generally, the number and locations of outliers depend on
  • Alignments between the bulk components $\mathring\Sigma_1, \ldots, \mathring\Sigma_k$
  • Their alignments with all spike directions $\Gamma_1, \ldots, \Gamma_k$

Qualitatively, for the twin-study design:
  • Large spikes in $\Sigma_E$ generate two outliers of opposite sign in $\hat\Sigma_A$.
  • Large eigenvalues in $\Sigma_A$ are observed with upward bias in $\hat\Sigma_A$, where the bias is larger if this eigenvector of $\Sigma_A$ is aligned with eigenvectors of $\Sigma_E$.

[Figure: mean eigenvalue locations (frequency vs. eigenvalue) illustrating this bias and phase transition.]


Bias of principal eigenvectors

In contrast to the sample covariance setting of k = 1, here there may also be bias in the outlier eigenvectors:

[Figure: in high dimensions, principal component eigenvectors of $\hat\Sigma_A$ may be biased towards eigenvectors of $\Sigma_E$.]


Debiasing the principal components

For $\mathring\Sigma_r = \sigma_r^2\,\mathrm{Id}$, we developed in [Fan, Johnstone, Sun '18] an algorithm to estimate the population eigenvalues, and also debias the estimated principal eigenvectors:
  1. Track the trajectories of outlier eigenvalues of $\hat\Sigma = Y^\top B Y$, as $B$ varies within a $(k-1)$-dimensional family.
  2. For $B$ where an outlier $\hat\lambda$ satisfies $b_2(\hat\lambda) = \ldots = b_k(\hat\lambda) = 0$, $\hat v$ is unbiased for an eigenvector of $\Sigma_1$, and $\hat\lambda$ is related to the eigenvalue of $\Sigma_1$. Use a grid search to find such $B$.

[Figure: mean and 90%-ellipsoid of MANOVA and debiased principal eigenvector estimates, for true eigenvector (1, 0) and eigenvalue µ ∈ {6, 8, 10}.]

Mean and st. dev. of MANOVA and debiased eigenvalue estimates:

              µ = 6          µ = 8          µ = 10
  MANOVA      10.57 (1.74)   11.98 (1.85)   13.59 (1.99)
  Estimated    6.28 (1.56)    8.21 (1.72)   10.15 (1.91)

Developing estimation procedures for general $\mathring\Sigma_r \neq \sigma_r^2\,\mathrm{Id}$ is open.


A few general tools


1. ℓ2 fluctuation averaging

In the setting $\Sigma_r = \sigma_r^2\,\mathrm{Id}$, recall that $\hat\Sigma = G^\top F G$. The resolvent $R(z) = (\hat\Sigma - z\,\mathrm{Id})^{-1}$ has a deterministic approximation $R_0(z)$ in the form of an anisotropic local law [Knowles, Yin '17]. For z separated from the spectral support, $\operatorname{dist}(z, \operatorname{supp}(\mu_0)) > \delta$, and any deterministic unit vectors u and v, this says
$$u^*(R(z) - R_0(z))v \prec 1/\sqrt{n}.$$
To approximate other combinations of resolvent entries, e.g. $\operatorname{Tr} R(z)$, one may apply weak dependence of these entries and “fluctuation averaging” techniques [Erdős, Yau, Yin '11].
Let $x_1, \ldots, x_n$ be independent random quantities. For $\mathcal{Y}$ a scalar function of $x_1, \ldots, x_n$, denote
$$P_i[\mathcal{Y}] = \mathbb{E}_{x_i}[\mathcal{Y}], \qquad Q_i[\mathcal{Y}] = \mathcal{Y} - P_i[\mathcal{Y}], \qquad Q_S[\mathcal{Y}] = \Big(\prod_{i \in S} Q_i\Big)[\mathcal{Y}].$$

Lemma (Fan, Johnstone, Sun '18)
Suppose $Y_1, \ldots, Y_n$ satisfy $P_i[Y_i] = 0$, $Y_i \prec 1$, and the weak dependence $Q_S[Y_i] \prec n^{-|S|/2}$ for $i \notin S$. Then
$$\sum_{i=1}^{n} u_i Y_i \prec \Big(\sum_{i=1}^{n} |u_i|^2\Big)^{1/2}.$$

Compared with existing results that I'm aware of, this does not require $u_i \equiv 1/n$ or use an upper bound on $\|u\|_\infty$.
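Two instructive special cases of this bound (my own remark, not on the slide): uniform weights recover the classical fluctuation-averaging rate, while a standard basis vector gives only the trivial bound,
$$u_i \equiv \tfrac{1}{n}: \quad \frac{1}{n}\sum_{i=1}^{n} Y_i \prec \Big(\sum_{i=1}^{n} \tfrac{1}{n^2}\Big)^{1/2} = n^{-1/2}, \qquad\qquad u = e_j: \quad Y_j \prec 1.$$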

Lemma (Fan, Johnstone, Sun '18)
Suppose $(Y_{ij})_{i \neq j}$ satisfy $P_i[Y_{ij}] = P_j[Y_{ij}] = 0$, $Y_{ij} \prec 1$, and $Q_S[Y_{ij}] \prec n^{-|S|/2}$ for $i, j \notin S$. Then
$$\sum_{i \neq j} u_{ij} Y_{ij} \prec \Big(\sum_{i \neq j} |u_{ij}|^2\Big)^{1/2}.$$

For z with $\operatorname{dist}(z, \operatorname{supp}(\mu_0)) > \delta$, we obtain from these and the entrywise local law, for any deterministic matrix M, that
$$\operatorname{Tr}\big[(R(z) - R_0(z))M\big] \prec \|M\|_{\mathrm{HS}}/\sqrt{n}.$$
This recovers the anisotropic local law for rank-one $M = vu^*$, but provides a strengthened guarantee when $\operatorname{rank}(M) \gg 1$. We use this to establish the CLT for fluctuations of outlier eigenvalues.

2. Anisotropic resolvent approximation

For $\Sigma_r \neq \sigma_r^2\,\mathrm{Id}$, we still wish to approximate $u^* R(z) v$ to analyze outliers by a “master equation” approach [Benaych-Georges, Nadakuditi '12], but we currently don't have a local law.

We show using free probability techniques, for general self-adjoint polynomial matrix models, that for $\operatorname{dist}(z, \operatorname{supp}(\mu_0)) > \delta$ we have $u^*(R(z) - R_0(z))v \to 0$ almost surely, for a certain deterministic approximation $R_0$.

Note: $R_0$ is not necessarily isotropic. The approximation is not $u^* R(z) v \approx u^* v \cdot m_0(z)$ if u, v are aligned with $R_0(z)$.

Let
  • $H_1, \ldots, H_q \in \mathbb{C}^{n \times n}$ be deterministic matrices,
  • $G_1, \ldots, G_p \in \mathbb{C}^{n \times n}$ be random and jointly orthogonally invariant in law, so that $\{H_1, \ldots, H_q\}$ is asymptotically free of $\{G_1, \ldots, G_p\}$.

Define the von Neumann algebras
  • $(\mathcal{A}_1, \tau_1) \equiv (\mathbb{C}^{n \times n}, n^{-1}\operatorname{Tr})$, containing $H_1, \ldots, H_q$,
  • $(\mathcal{A}_2, \tau_2)$ containing $\{g_1, \ldots, g_p\}$ which approximate $\{G_1, \ldots, G_p\}$ in joint law.

Let $(\mathcal{A}, \tau)$ be the free deterministic equivalent model defined by their von Neumann free product, and let $\tau_{\mathcal{H}} : \mathcal{A} \to \mathcal{H} \equiv \langle H_1, \ldots, H_q \rangle$ be the conditional expectation. Note: for any $a \in \mathcal{A}$, $\tau_{\mathcal{H}}(a) \in \mathcal{H} \subset \mathcal{A}_1$ is an $n \times n$ matrix!

Theorem (Fan, Sun, Wang '19)
Fix a self-adjoint $*$-polynomial $Q$ and $\delta > 0$. Let
$$W = Q(G_1, \ldots, G_p, H_1, \ldots, H_q) \in \mathbb{C}^{n \times n}, \qquad w = Q(g_1, \ldots, g_p, H_1, \ldots, H_q) \in \mathcal{A}.$$
Let $R(z) = (W - z\,\mathrm{Id})^{-1}$, and define a deterministic approximation $R_0(z) = \tau_{\mathcal{H}}((w - z)^{-1}) \in \mathbb{C}^{n \times n}$. Then for any deterministic unit vectors $u, v \in \mathbb{C}^n$, as $n \to \infty$,
$$u^*(R(z) - R_0(z))v \to 0$$
uniformly over $\{z \in \mathbb{C} : \operatorname{dist}(z, \operatorname{spec}(W) \cup \operatorname{spec}(w)) > \delta\}$.

3. Augmented Cauchy and R-transforms

Our computations in the approximating free model use relations between (conditional) Cauchy and R-transforms: let $\kappa_\ell(a_1, \ldots, a_\ell)$ be the $\ell$th order non-crossing cumulant, and
$$G_a(z) = \tau((z - a)^{-1}) = \sum_{\ell=0}^{\infty} z^{-(\ell+1)} \tau(a^\ell), \qquad R_a(z) = \sum_{\ell=1}^{\infty} z^{\ell-1} \kappa_\ell(a, \ldots, a).$$
The moment-cumulant relations for non-crossing partitions give
$$G_a(z) = \big(z - R_a(G_a(z))\big)^{-1}.$$
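A standard worked example of this relation (my addition, under the usual variance-one normalization): for a semicircular element s, only $\kappa_2$ is nonzero, so $R_s(z) = z$ and
$$G_s(z) = \frac{1}{z - G_s(z)} \;\Longrightarrow\; G_s(z)^2 - z\,G_s(z) + 1 = 0 \;\Longrightarrow\; G_s(z) = \frac{z - \sqrt{z^2 - 4}}{2},$$
the Cauchy transform of the semicircle law.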

In addition to approximating $n^{-1}\operatorname{Tr} R(z)$, we also need $n^{-1}\operatorname{Tr} A R(z)$ for certain (random) matrices A. We introduce augmented transforms and some associated moment-cumulant relations:
$$G_{a,w}(z) = \tau\big(a (z - w)^{-1}\big) = \sum_{\ell=0}^{\infty} z^{-(\ell+1)} \tau(a w^\ell), \qquad R_{a,w}(z) = \sum_{\ell=1}^{\infty} z^{\ell-1} \kappa_\ell(a, w, \ldots, w).$$

Lemma (Fan, Sun, Wang '19)
$$G_{a,w}(z) = R_{a,w}(G_w(z))\, G_w(z).$$
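A quick consistency check of this lemma (my own, using the standard fact that free cumulants of order $\geq 2$ vanish when one argument is the unit): taking $a = 1$,
$$R_{1,w}(z) = \kappa_1(1) = 1 \quad\Longrightarrow\quad G_{1,w}(z) = 1 \cdot G_w(z) = \tau\big((z - w)^{-1}\big) = G_w(z),$$
as it must be.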

4. Strong asymptotic freeness for GOE

In the setting $\Sigma_r \neq \sigma_r^2\,\mathrm{Id}$, we use a strong asymptotic freeness result to show that eigenvalues stick to $\operatorname{supp}(\mu_0)$:

Theorem (Fan, Sun, Wang '19)
Let $W_1, \ldots, W_p$ be independent GOE, $H_1, \ldots, H_q$ deterministic with bounded norm, and $w_1, \ldots, w_p$ free semicircular elements. For any fixed self-adjoint $*$-polynomial $Q$ and $\delta > 0$, a.s. for large n,
$$\operatorname{spec}\big(Q(\{W_i\}_{i=1}^{p}, \{H_j\}_{j=1}^{q})\big) \subset \operatorname{spec}\big(Q(\{w_i\}_{i=1}^{p}, \{H_j\}_{j=1}^{q})\big)_\delta.$$

The GUE analogue was proven in [Male '12], and extended to complex Wigner matrices in [Belinschi, Capitaine '17]. We follow proof ideas of these works, and of [Schultz '05] in the pure GOE setting.

References

  • Zhou Fan, Iain M. Johnstone, “Eigenvalue distributions of variance components estimators in high-dimensional random effects models,” The Annals of Statistics, 2019+.
  • Zhou Fan, Yi Sun, Zhichao Wang, “Principal components in linear mixed models with general bulk,” arXiv:1903.09592, 2019.
  • Zhou Fan, Iain M. Johnstone, “Tracy-Widom at each edge of real covariance and MANOVA estimators,” arXiv:1707.02352, 2018.
  • Zhou Fan, Iain M. Johnstone, Yi Sun, “Spiked covariances and principal components analysis in high-dimensional random effects models,” arXiv:1806.09529, 2018.


Thank you!