SLIDE 1

High-Dimensional Covariance Decomposition into Sparse Markov and Independence Domains

Majid Janzamin and Anima Anandkumar

U.C. Irvine

SLIDE 2

High-Dimensional Covariance Estimation

• n i.i.d. samples, p variables: X := [X1, . . . , Xp]^T.
• High-dimensional regime: both n, p → ∞ with n ≪ p.
• Covariance estimation: Σ∗ := E[XX^T].
• Challenge: the empirical (sample) covariance

  Σ̂ⁿ := (1/n) ∑_{k=1}^{n} x^(k) (x^(k))^T

  is ill-posed when n ≪ p.

Solution: Imposing Sparsity for Tractable High-Dimensional Estimation
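For concreteness, a minimal numpy sketch of this estimator and of why it is ill-posed; the data matrix X (one row per sample) and the dimensions are hypothetical, not from the talk:

```python
import numpy as np

def sample_covariance(X):
    """Uncentered sample covariance (1/n) * sum_k x^(k) x^(k)^T,
    as defined above (zero-mean model assumed)."""
    n = X.shape[0]
    return (X.T @ X) / n

# Hypothetical high-dimensional regime: n << p.
rng = np.random.default_rng(0)
n, p = 50, 500
X = rng.standard_normal((n, p))
S = sample_covariance(X)
# rank(S) <= n < p, so S is singular: the plain estimate is ill-posed.
print(np.linalg.matrix_rank(S))
```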

SLIDE 3-6

Incorporating Sparsity in High Dimensions

Sparse Covariance
• Σ∗ = Σ_R, with the covariance Σ_R itself sparse.

Sparse Inverse Covariance
• Σ∗ = J_M⁻¹, with the inverse covariance J_M sparse.

Relationship with Statistical Properties (Gaussian)
• Sparse covariance = independence model: marginal independence.
• Sparse inverse covariance = Markov model: conditional independence.

Guarantees under Sparsity Constraints in High Dimensions
• Consistent estimation when n = Ω(log p), so estimation is feasible even with n ≪ p.

Going beyond sparsity in high dimensions?

SLIDE 7-10

Going Beyond Sparse Models

Motivation
• Sparsity constraints are too restrictive for a faithful representation: the data need not be sparse in any single domain.
• Solution: sparsity in multiple domains.

One Possibility: Sparse Markov + Sparse Independence Models
• Sparsity in multiple domains captures multiple statistical relationships:

  Σ∗ = J_M⁻¹ + Σ_R.

• Efficient decomposition and estimation in high dimensions? Unique decomposition? Good sample requirements?

SLIDE 11-14

Summary of Results

  Σ∗ = (J∗_M)⁻¹ + Σ∗_R.

Contribution 1: Novel Method for Decomposition
• Decomposition into Markov and residual domains.
• Unification of sparse covariance and sparse inverse covariance estimation.

Contribution 2: Guarantees for Estimation
• Conditions for unique decomposition (exact statistics).
• Sparsistency and norm guarantees in both the Markov and independence domains (sample analysis).
• Sample requirement: n = Ω(log p) samples for p variables.

Efficient Method for Covariance Decomposition and Estimation

SLIDE 15-19

Related Works

Sparse Covariance / Inverse Covariance Estimation
• Sparse covariance estimation: covariance thresholding.
  ◮ (Bickel & Levina), (Wagaman & Levina), (Cai et al.)
• Sparse inverse covariance estimation:
  ◮ ℓ1 penalization (Meinshausen & Bühlmann), (Ravikumar et al.)
  ◮ Non-convex methods (Anandkumar et al.), (Zhang)

Beyond Sparse Models: Decomposition Issues
• Sparse + low rank (Chandrasekaran et al.), (Candès et al.)
• Decomposable regularizers (Negahban et al.)
• Multi-resolution Markov + independence models (Choi et al.): decomposition in the inverse covariance domain; lacks theoretical guarantees.

Our contribution: Guaranteed Decomposition and Estimation

SLIDE 20

Outline
1. Introduction
2. Algorithm
3. Guarantees
4. Experiments
5. Conclusion

SLIDE 21-23

Some Intuitions and Ideas

  Σ∗ = (J∗_M)⁻¹ + Σ∗_R;  Σ̂ⁿ: sample covariance from n i.i.d. samples.

Review of the ideas for the special cases: sparse covariance / sparse inverse covariance.

Sparse Covariance Estimation (Independence Model)
• Σ∗ = Σ∗_R; Σ̂ⁿ: sample covariance from n samples of p variables, p ≫ n.
• Thresholding estimator for the off-diagonals (Bickel & Levina), with threshold chosen as √(log p / n).
• Sparsistency (support recovery) and norm guarantees when n = Ω(log p), so n ≪ p suffices.
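A minimal sketch of this thresholding estimator; the constant c in the threshold level is a hypothetical tuning choice (the slide only specifies the √(log p / n) scaling):

```python
import numpy as np

def threshold_covariance(S, n, c=1.0):
    """Hard-threshold the off-diagonal entries of the sample covariance S
    at level c * sqrt(log p / n), keeping the diagonal untouched
    (Bickel & Levina-style estimator; c is a tuning constant)."""
    p = S.shape[0]
    t = c * np.sqrt(np.log(p) / n)
    S_hat = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(S_hat, np.diag(S))
    return S_hat
```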

SLIDE 24-28

Recap of Inverse Covariance (Markov) Estimation

  Σ∗ = (J∗_M)⁻¹ + Σ∗_R;  Σ̂ⁿ: sample covariance from n i.i.d. samples.

ℓ1-MLE for Sparse Inverse Covariance (Ravikumar et al. '08)

  Ĵ_M := argmin_{J_M ≻ 0} ⟨Σ̂ⁿ, J_M⟩ − log det J_M + γ‖J_M‖_{1,off}

Max-entropy Formulation (Lagrangian Dual)

  Σ̂_M := argmax_{Σ_M ≻ 0} log det Σ_M
  s.t.  ‖Σ̂ⁿ − Σ_M‖_{∞,off} ≤ γ,  (Σ_M)_d = (Σ̂ⁿ)_d.

Consistent estimation under certain conditions, with n = Ω(log p).
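As a sketch, the ℓ1-MLE above is a convex program and can be written down directly. This uses cvxpy (an assumption of this example, not the solver used in the paper), with γ a hypothetical tuning parameter:

```python
import cvxpy as cp
import numpy as np

def l1_mle(S, gamma):
    """l1-penalized Gaussian MLE with the penalty on off-diagonals only:
    minimize <S, J> - log det J + gamma * ||J||_{1,off}."""
    p = S.shape[0]
    off = 1.0 - np.eye(p)                  # mask selecting off-diagonal entries
    J = cp.Variable((p, p), symmetric=True)
    objective = cp.Minimize(
        cp.sum(cp.multiply(S, J))          # <S, J> = trace(S J) for symmetric S
        - cp.log_det(J)                    # log_det keeps J positive definite
        + gamma * cp.sum(cp.abs(cp.multiply(off, J))))
    cp.Problem(objective).solve()
    return J.value
```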

SLIDE 29-32

Extension to Markov+Independence Models?

  Σ∗ = (J∗_M)⁻¹ + Σ∗_R.

Sparse Covariance Estimation
• Threshold the off-diagonal entries of Σ̂ⁿ.

Sparse Inverse Covariance Estimation
• Add an ℓ1 penalty to the maximum-likelihood program (which estimates the inverse covariance matrix).

Is it possible to unify the above methods and guarantees?

Challenges and Insights
• The penalties in the above methods live in different domains.
• Insight: consider the dual program of the MLE; for the Markov model, the dual is in the covariance domain.

SLIDE 33-39

Our Algorithm: Covariance Decomposition

  Σ∗ = (J∗_M)⁻¹ + Σ∗_R.

Idea: extend the ℓ1-penalized MLE through its Lagrangian dual, the max-entropy formulation above. Introduce the residual Σ_R into the fidelity constraint and penalize its off-diagonal ℓ1 norm:

Max-entropy Formulation + ℓ1-penalized Residuals (This work)

  (Σ̂_M, Σ̂_R) := argmax_{Σ_M ≻ 0, Σ_R} log det Σ_M − λ‖Σ_R‖_{1,off}
  s.t.  ‖Σ̂ⁿ − Σ_M − Σ_R‖_{∞,off} ≤ γ,  (Σ_M)_d = (Σ̂ⁿ)_d,  (Σ_R)_d = 0.

In the primal, this adds an ℓ∞ constraint to the ℓ1-MLE of Ravikumar et al.:

ℓ1-ℓ∞-penalized MLE (This work)

  Ĵ_M := argmin_{J_M ≻ 0} ⟨Σ̂ⁿ, J_M⟩ − log det J_M + γ‖J_M‖_{1,off}
  s.t.  ‖J_M‖_{∞,off} ≤ λ.
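A minimal cvxpy sketch of this primal program, in the same style as the ℓ1-MLE snippet above (γ and λ are hypothetical tuning parameters; recovering Σ̂_R from the dual is not shown):

```python
import cvxpy as cp
import numpy as np

def l1_linf_mle(S, gamma, lam):
    """l1-l_inf-penalized MLE: the l1-MLE objective plus the elementwise
    constraint |J_ij| <= lam on off-diagonal entries."""
    p = S.shape[0]
    off = 1.0 - np.eye(p)                  # mask selecting off-diagonal entries
    J = cp.Variable((p, p), symmetric=True)
    objective = cp.Minimize(
        cp.sum(cp.multiply(S, J))          # <S, J>
        - cp.log_det(J)
        + gamma * cp.sum(cp.abs(cp.multiply(off, J))))
    constraints = [cp.abs(cp.multiply(off, J)) <= lam]  # ||J||_{inf,off} <= lam
    cp.Problem(objective, constraints).solve()
    return J.value
```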
SLIDE 40-43

Observations regarding the Proposed Method

The ℓ1-ℓ∞-penalized MLE (primal) and the max-entropy + ℓ1-penalized-residuals program (dual) are as stated above.

Case λ → 0 (Sparse Covariance Estimation)
• Yields the threshold estimator for the off-diagonals of Σ∗_R (under exact statistics).
• With samples, λ = Θ(√(log p / n)) recovers the threshold estimator.

Case λ → ∞ (Sparse Inverse Covariance Estimation)
• The residual matrix Σ̂_R = 0: the program reduces to the ℓ1-penalized MLE of Ravikumar et al.

Unification of Sparse Covariance & Inverse Covariance Estimation
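With the l1_linf_mle sketch above, the two limiting regimes can be exercised directly; S is a sample covariance as before, and all numeric values are purely illustrative:

```python
# lambda large: the l_inf constraint is inactive and the program collapses
# to the plain l1-penalized MLE (pure Markov / graphical-lasso regime).
J_markov = l1_linf_mle(S, gamma=0.1, lam=1e6)

# lambda of order sqrt(log p / n): the constraint clips large entries of
# J_M, pushing the remaining structure into the residual domain.
J_clipped = l1_linf_mle(S, gamma=0.1, lam=0.05)
```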

SLIDE 45

Guarantees for High-Dimensional Estimation

  Σ∗ = (J∗_M)⁻¹ + Σ∗_R.

Conditions for Recovery
• Maximum degree ∆ in the Markov graph (the graph of J∗_M).
• Number of samples n and number of nodes p satisfy n = Ω(∆² log p).
• Regularization constant: λ = max_{i≠j} |J∗_M(i, j)| + Θ(√(log p / n)).

Theorem
The proposed method outputs estimates (Ĵ_M, Σ̂_R) that
• are sparsistent and sign-consistent;
• satisfy the norm guarantees ‖Ĵ_M − J∗_M‖_∞, ‖Σ̂_R − Σ∗_R‖_∞ = O(√(log p / n)).

Guaranteed Sparsistency and Efficient Estimation in Both Domains

SLIDE 46

Observations

Corollary 1 (Sparse Covariance Estimation)
With λ = Θ(√(log p / n)), our method reduces to the threshold estimator (Bickel & Levina) and is sparsistent for covariance estimation.

Corollary 2 (Sparse Inverse Covariance Estimation)
With λ → ∞, our method reduces to the ℓ1-penalized MLE (Ravikumar et al.) and is sparsistent for inverse covariance estimation.

Conditions for Recovery
• Mutual-incoherence-type conditions.
• Sample complexity n = Ω(∆² log p), comparable to inverse covariance estimation (Ravikumar et al.).

SLIDE 48

Synthetic Data

  Σ∗ = (J∗_M)⁻¹ + Σ∗_R,  J∗ := (Σ∗)⁻¹.

Setup
• 8 × 8 2-d grid for the Markov model.
• Mixed Markov model (both positive and negative correlations).
• Arbitrary-valued sparse residuals.
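A sketch of one way to generate such a ground-truth model; the edge weights, residual scale, and sparsity level are hypothetical choices, not the paper's exact values:

```python
import numpy as np

def grid_markov_plus_residual(side=8, rho=0.2, n_res=15, seed=0):
    """Sigma* = inv(J_M*) + Sigma_R*: Markov model on a side x side grid
    with mixed-sign edges, plus a sparse symmetric residual (zero diagonal)."""
    rng = np.random.default_rng(seed)
    p = side * side
    J = np.eye(p)
    for u in range(p):
        i, j = divmod(u, side)
        if j < side - 1:               # right neighbor on the grid
            J[u, u + 1] = J[u + 1, u] = rho * rng.choice([-1.0, 1.0])
        if i < side - 1:               # bottom neighbor on the grid
            J[u, u + side] = J[u + side, u] = rho * rng.choice([-1.0, 1.0])
    # Grid degree <= 4 and |edge| = 0.2 < 1/4: J is diagonally dominant, so PD.
    Sigma_R = np.zeros((p, p))
    rows, cols = np.triu_indices(p, k=1)
    for k in rng.choice(rows.size, size=n_res, replace=False):
        Sigma_R[rows[k], cols[k]] = Sigma_R[cols[k], rows[k]] = 0.05 * rng.normal()
    Sigma = np.linalg.inv(J) + Sigma_R
    assert np.linalg.eigvalsh(Sigma).min() > 0   # Sigma* must stay a covariance
    return Sigma, J, Sigma_R
```

The small residual scale keeps Σ∗ positive definite; the assert guards that choice.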

[Figure: J estimation: ‖Ĵ − J∗‖_∞ versus the number of samples n (1000-6000), comparing the ℓ1 + ℓ∞ method against the plain ℓ1 method.]

[Figure: performance under loopy belief propagation (LBP): average mean error versus iteration, for LBP applied to the full J∗ model and to the Markov component J∗_M.]

Advantage over existing techniques.

SLIDE 49

Experiments on Stock Market Data

Setup

• Monthly stock returns of companies on the S&P index.
• Companies in the divisions E.Trans, Comm, Elec&Gas, and G.Retail Trade.
• Apply the proposed method.

[Figure: estimated graph over the tickers CBS, NSC, SNS, BNI, HD, CMCSA, TGT, MCD, WMT, CVS, FDX, ETR, EXC, T, VZ. Solid lines: Markov graph; dotted lines: independence graph.]

SLIDE 51-53

Conclusion

Summary
• Covariance decomposition and estimation in high dimensions.
• Combination of Markov and independence models.
• Efficient method, with estimation guarantees in both domains.
• Unifies sparse covariance and sparse inverse covariance estimation.

Not covered in this talk
• Analysis under exact statistics: conditions for unique decomposition.
• Sample analysis: careful control of perturbations in both domains.
• A longer version is available on the authors' webpage.

Outlook
• Discrete models (via pseudo-likelihood).
• Other forms of residuals (e.g., low rank).