Graphical models for Neuroscience Part I - Giuseppe Vinci - PowerPoint PPT Presentation

SLIDE 1

Graphical models for Neuroscience Part I

Giuseppe Vinci

Department of Statistics Rice University

SLIDE 2

Outline

  • 1. Neural connectivity
  • 2. Graphical models
  • 3. Gaussian Graphical models
  • 4. Functional connectivity in Macaque V4

SLIDE 3

Neural connectivity

How does the brain work? Through the concerted actions of many connected neurons. We need to know these connections to:

  • Understand causes of brain disorders, e.g. Parkinson's and Alzheimer's disease.
  • Interface with prosthetic devices.

SLIDE 4

Neural connectivity

Two different perspectives

  • structural connectivity = physical connections
  • functional connectivity = statistical dependence between neurons' activities

→ Helps us understand brain function under different experimental conditions.
→ Must be inferred from (in vivo) experimental recordings, i.e. we need to observe how neurons behave.

SLIDE 5

Functional Connectivity = neurons' activity dependence

Dependence between brain areas (macroscopic scale): fMRI, EEG, ECoG, LFP

SLIDE 6

Functional Connectivity = neurons' activity dependence

Dependence between individual neurons (microscopic scale)

video link

SLIDE 7

Dependence

  • Let X = (X1, ..., Xd) ∼ P be a vector of neural signals.
  • How can we describe the dependence structure of X?
    – Analytically
    – Contour density plots, spectral analysis
    – Graphs

SLIDE 8

Dependence Graph

Graph G = (N, E), where N = {1, ..., d} is the set of nodes and E = [Eij], with Eij ∈ {0, 1}, encodes the edges.
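To make the definition concrete, here is a minimal sketch (in Python with NumPy; the 4-node chain graph is an arbitrary illustration, not from the talk) of encoding E as a binary adjacency matrix:

```python
import numpy as np

# A small dependence graph G = (N, E) with N = {0, 1, 2, 3}, encoded as a
# symmetric adjacency matrix E with E[i, j] in {0, 1}.
E = np.zeros((4, 4), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3)]:   # a chain graph 0 - 1 - 2 - 3
    E[i, j] = E[j, i] = 1

print(E)
print(E[0, 3])   # nodes 0 and 3 are not adjacent, so this entry is 0
```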

SLIDE 9

Markov Random Field

  • A random vector X forms a Markov Random Field with respect to a graph G if it satisfies the Markov properties:
    1. Pairwise: no edge between i and j ⇔ Xi and Xj are independent conditionally on all other variables.
    2. Local: Xi is independent of its non-neighbors conditionally on its neighbors.
    3. Global: two sets of nodes are independent conditionally on a separating set.

SLIDE 10

Gaussian Graphical Model

  • A Gaussian Graphical model is a Markov Random Field for a Gaussian random vector.
  • The p.d.f. of X ∼ N(0, Θ⁻¹), with Θ = [θij] ≻ 0, is

    pΘ(x) = (2π)^(−d/2) √det(Θ) exp(−(1/2) x′Θx)
          = exp( −(1/2) Σ_{i,j=1}^d θij xi xj + const. )

  • θij = 0 ⇔ no edge between Xi, Xj ⇔ conditional independence.
  • Note that the partial correlation ρij = −θij / √(θii θjj) ∈ [−1, 1] is a more interpretable measure of conditional dependence.
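The correspondence between zeros of Θ and missing edges can be checked numerically. A minimal sketch (the 3-node precision matrix is a made-up example) computing the partial correlation matrix from Θ:

```python
import numpy as np

# Hypothetical 3-node precision matrix Θ (symmetric, positive definite);
# theta[0, 2] = 0 means no edge between nodes 0 and 2.
theta = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.5],
                  [0.0, 0.5, 2.0]])

# Partial correlations: rho_ij = -theta_ij / sqrt(theta_ii * theta_jj)
d = np.sqrt(np.diag(theta))
rho = -theta / np.outer(d, d)
np.fill_diagonal(rho, 1.0)

print(np.round(rho, 3))   # zeros of Θ stay zeros: nodes 0 and 2 share no edge
```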

SLIDE 11

Gaussian Graphical Model

SLIDE 12

GGM - Estimation

  • We need to estimate Θ.
  • Suppose we collect data vectors Data = {X^1, ..., X^n} from n repeated experiments.
  • To estimate Θ, we start by defining the likelihood function

    L(Θ; Data) = Π_{r=1}^n pΘ(X^r)

SLIDE 13

GGM - MLE

  • Θ̂_MLE = arg max_{Θ≻0} L(Θ; Data)
  • Simple, but it can have undesirable statistical properties:
  • Major problem: the number of parameters in Θ is d + d(d−1)/2 = d(d+1)/2, while the sample size may be small in some applications. E.g. d = 100 ⇒ 5050 parameters, with only n = 200.
  • Minor problem: Θ̂_MLE does not give exact zeros.
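Both problems can be seen on simulated data. A minimal sketch (toy Θ and sample size are arbitrary; the mean is assumed known to be zero, so the MLE is the inverse of the empirical second-moment matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# True 3-node precision matrix with theta[0, 2] = 0 (no edge 0-2).
theta = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.5],
                  [0.0, 0.5, 2.0]])
sigma = np.linalg.inv(theta)

# n i.i.d. samples X^1, ..., X^n ~ N(0, Θ^{-1}).
n = 500
X = rng.multivariate_normal(np.zeros(3), sigma, size=n)

# Mean-known MLE: invert the empirical second-moment matrix.
S = X.T @ X / n
theta_mle = np.linalg.inv(S)

# The estimate is close to Θ, but its (0, 2) entry is not exactly zero.
print(np.round(theta_mle, 2))
```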

SLIDE 14

GGM - Penalized MLE

  • Penalized MLE:

    Θ̂(λ) = arg max_{Θ≻0} log L(Θ; Data) − Pλ(Θ)

    where Pλ ≥ 0 and λ is chosen in some way (CV, BIC, stability). E.g. Pλ(Θ) = λ‖Θ‖_q^q (usually excluding the diagonal elements):
  • q = 0 penalizes the number of edges.
  • q = 1 is the graphical lasso¹.
  • q = 2 is ridge regularization.

¹Yuan and Lin (2007), Friedman et al. (2008), Rothman et al. (2008), Fan et al. (2009), Ravikumar et al. (2011), Mazumder and Hastie (2012), Hsieh et al. (2014).
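As an illustration of the q = 1 case, a minimal sketch using scikit-learn's GraphicalLasso on simulated data (assuming scikit-learn is available; the toy Θ and the value of alpha, which plays the role of λ, are arbitrary choices):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Sparse toy precision matrix: theta[0, 2] = 0 (no edge between 0 and 2).
theta = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.5],
                  [0.0, 0.5, 2.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(theta), size=500)

# Graphical lasso: penalized MLE with an L1 penalty on the off-diagonal
# entries of the precision matrix, which shrinks small entries toward zero.
model = GraphicalLasso(alpha=0.05).fit(X)
theta_hat = model.precision_

print(np.round(theta_hat, 2))
```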

SLIDE 15

GGM - Bayesian regularization

  • Penalized MLE can be seen as the maximum of the posterior distribution

    π(Θ | Data) ∝ L(Θ; Data) × π(Θ | λ),
                  [likelihood]   [prior]

    where π(Θ | λ) ∝ e^(−Pλ(Θ)) I(Θ ≻ 0).

  • Note that we can also compute posterior means instead of the maximum.
  • The parameter λ can be selected by imposing a hyperprior π(λ).

SLIDE 16

What if the data are not Gaussian?

  • Transform the data: e.g. square-root transformed spike counts can be approximately Gaussian.
  • Hierarchical model: Y | X ∼ P_X, with latent X ∼ N(0, Θ⁻¹).
  • Use another graphical model: e.g. Poisson graphical models.
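The variance-stabilizing effect of the square-root transform can be sketched with simulated Poisson spike counts (the firing rates and sample size here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Poisson counts have variance equal to the mean, so the spread grows with
# the firing rate. After a square-root transform the variance is roughly
# constant (about 1/4) across rates, closer to Gaussian behavior.
for rate in (5.0, 20.0, 80.0):
    counts = rng.poisson(rate, size=100_000)
    print(rate, np.var(counts).round(2), np.var(np.sqrt(counts)).round(3))
```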

SLIDE 17

Example: macaque V4 (Vinci et al. 2018)

Square-root transformed spike counts (500 ms bins) of d = 100 neurons, with n = 126 trials per condition.