Generalized Linear Models & Logistic Regression, Jonathan Pillow (PowerPoint presentation)



SLIDE 1

Generalized Linear Models & Logistic Regression

Jonathan Pillow, Mathematical Tools for Neuroscience (NEU 314), Spring 2016, Lecture 17

SLIDE 2

Example 3: unknown neuron

[scatter plot: spike count (0 to 100) vs. contrast (−25 to 25)]

What model would you use to fit this neuron?

SLIDE 3

Linear-Nonlinear-Poisson model

  • example of a generalized linear model (GLM)

[model diagram: stimulus → stimulus filter k → nonlinearity f → conditional intensity λ(t) ("spike rate") → Poisson spiking]
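The cascade can be simulated directly in a few lines of numpy. This is a minimal sketch; the filter shape, the scaling of the nonlinearity, and the bin size are all illustrative assumptions, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.001                          # time bin (s), assumed
T = 5000                            # number of bins
stim = rng.normal(size=T)           # white-noise stimulus

# linear stage: stimulus filter k (an assumed biphasic shape)
t = np.arange(50)
k = np.exp(-t / 10) - 0.5 * np.exp(-t / 20)
drive = np.convolve(stim, k, mode="full")[:T]

# nonlinear stage: exponential nonlinearity gives the rate lambda(t), in spikes/s
rate = 10.0 * np.exp(drive)

# Poisson stage: spike count per bin drawn from Poisson(lambda * dt)
spikes = rng.poisson(rate * dt)
```

Because the nonlinearity and the Poisson spike-generation step are both exponential-family choices, this cascade is exactly the LNP special case of the GLM.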

SLIDE 4

Aside on GLMs:

  • 1. Be careful about terminology:

General Linear Model (GLM) ≠ Generalized Linear Model (GLM)

(Nelder 1972)

SLIDE 5

2003 interview with John Nelder...

Stephen Senn: I must confess to having some confusion when I was a young statistician between general linear models and generalized linear models. Do you regret the terminology?

John Nelder: I think probably I do. I suspect we should have found some more fancy name for it that would have stuck and not been confused with the general linear model, although general and generalized are not quite the same. I can see why it might have been better to have thought of something else.

Senn (2003), Statistical Science

SLIDE 6

Moral: Be careful when naming your model!

SLIDE 7
  • 2. General Linear Model

[model diagram: input → Linear ("dimensionality reduction") → Noise (exponential family)]

Examples:

  • 1. Gaussian
  • 2. Poisson

SLIDE 8
  • 3. Generalized Linear Model

[model diagram: input → Linear → Nonlinear → Noise (exponential family)]

Examples:

  • 1. Gaussian
  • 2. Poisson

SLIDE 9
Linear-Nonlinear-Poisson

  • output: Poisson process

[model diagram: stimulus → stimulus filter k → exponential nonlinearity f → conditional intensity λ(t) ("spike rate") → Poisson spiking]

SLIDE 10

GLM with spike-history dependence

[model diagram: stimulus → stimulus filter; spike train → post-spike filter; the two filter outputs are summed (+) and passed through an exponential nonlinearity to give the conditional intensity (spike rate), which drives probabilistic spiking]

(Truccolo et al. 2004)

  • output: the conditional intensity is the product of a stimulus term and a spike-history term (since exp(a + b) = exp(a) · exp(b))
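A simulation makes the feedback loop concrete. A minimal sketch with assumed filter shapes (a suppressive post-spike filter gives refractory-like behavior); none of the numbers come from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)

dt, T = 0.001, 5000
stim = rng.normal(size=T)

# assumed filters (illustrative shapes)
k = 0.3 * np.exp(-np.arange(50) / 10)    # stimulus filter
h = -5.0 * np.exp(-np.arange(20) / 5)    # post-spike filter: negative -> self-suppression

stim_drive = np.convolve(stim, k, mode="full")[:T]

spikes = np.zeros(T, dtype=int)
h_drive = np.zeros(T + len(h))           # running spike-history input
for t in range(T):
    # sum stimulus drive and spike-history drive, then exponential nonlinearity
    rate = 30.0 * np.exp(stim_drive[t] + h_drive[t])
    spikes[t] = rng.poisson(rate * dt)
    if spikes[t] > 0:
        # each spike feeds back through the post-spike filter h
        h_drive[t + 1 : t + 1 + len(h)] += spikes[t] * h
```

Changing the shape of h is what produces the different dynamic behaviors shown on the following slides.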
SLIDE 11

GLM dynamic behaviors

  • irregular spiking (Poisson process)

[simulated traces: stimulus, post-spike filter h(t), filter outputs ("currents"), p(spike)]

SLIDE 12

GLM dynamic behaviors

  • regular spiking

[simulated traces: stimulus, post-spike filter h(t), filter outputs ("currents"), p(spike)]

SLIDE 13

GLM dynamic behaviors

  • bursting

[simulated traces: stimulus, post-spike filter h(t), filter outputs ("currents"), p(spike)]

SLIDE 14

GLM dynamic behaviors

  • adaptation

[simulated traces: stimulus, post-spike filter h(t), filter outputs ("currents"), p(spike)]
SLIDE 15

multi-neuron GLM

[model diagram: stimulus → stimulus filter for neuron 1 and neuron 2; each neuron's spikes → its own post-spike filter; for each neuron, the filter outputs are summed (+) and passed through an exponential nonlinearity to drive probabilistic spiking]

SLIDE 16

multi-neuron GLM

[model diagram: as before, with the addition of coupling filters: each neuron's spikes also pass through a coupling filter that feeds into the other neuron's summed input (+), then the exponential nonlinearity and probabilistic spiking]

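The coupled model can be sketched by extending the single-neuron simulation loop. Filter shapes, coupling strength, and baseline rates below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

dt, T, nh = 0.001, 5000, 20
stim = rng.normal(size=T)

# assumed filters: one stimulus filter per neuron, plus post-spike
# and coupling filters (all illustrative shapes)
k = [0.3 * np.exp(-np.arange(50) / 10), 0.3 * np.exp(-np.arange(50) / 20)]
h = -4.0 * np.exp(-np.arange(nh) / 5)    # post-spike filter (self-suppression)
c = 1.5 * np.exp(-np.arange(nh) / 5)     # coupling filter (cross-excitation)

stim_drive = [np.convolve(stim, ki, mode="full")[:T] for ki in k]
spikes = np.zeros((2, T), dtype=int)
fb = np.zeros((2, T + nh))               # spike-history + coupling input per neuron

for t in range(T):
    for i in range(2):
        rate = 20.0 * np.exp(stim_drive[i][t] + fb[i, t])
        spikes[i, t] = rng.poisson(rate * dt)
    for i in range(2):
        if spikes[i, t] > 0:
            fb[i, t + 1 : t + 1 + nh] += spikes[i, t] * h      # own post-spike filter
            fb[1 - i, t + 1 : t + 1 + nh] += spikes[i, t] * c  # coupling to the other neuron
```

With a positive coupling filter, a spike in one neuron transiently raises the other neuron's conditional intensity.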
SLIDE 17

GLM equivalent diagram:

[diagram: the conditional intensity (spike rate) unrolled across time bins ... up to time t]

SLIDE 18

Logistic Regression

  • 1. Linear weights
  • 2. sigmoid ("logistic") function
  • 3. Bernoulli (coin flip)

GLM for binary classification

[model diagram: input → weights → sigmoid → Bernoulli → "0" or "1"]

SLIDE 19

Logistic Regression

GLM for binary classification

[model diagram: input → weights → sigmoid → Bernoulli → "0" or "1"]

compact expression:

P(y | x) = exp(y · wx) / (1 + exp(wx))

(note: when y = 1, this equals exp(wx) / (1 + exp(wx)), which equals 1 / (1 + exp(−wx)))
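The algebraic step in the note is easy to check numerically; a quick sketch:

```python
import numpy as np

wx = np.linspace(-5, 5, 101)           # values of the linear stage w.x
a = np.exp(wx) / (1 + np.exp(wx))      # exp(wx) / (1 + exp(wx))
b = 1 / (1 + np.exp(-wx))              # 1 / (1 + exp(-wx))
assert np.allclose(a, b)               # the two sigmoid forms agree
```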

SLIDE 20

Logistic Regression

GLM for binary classification

[model diagram: input → weights → sigmoid → Bernoulli → "0" or "1"]

fit w by maximizing the log-likelihood; from the compact expression:

log P(Y | X, w) = Σᵢ [ yᵢ · wxᵢ − log(1 + exp(wxᵢ)) ]
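There is no closed-form maximizer, but the gradient of the Bernoulli log-likelihood is simply Xᵀ(y − sigmoid(Xw)), so gradient ascent works. A minimal sketch on synthetic data; the data, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic binary data from a known weight vector
w_true = np.array([1.5, -2.0])
X = rng.normal(size=(500, 2))
y = (rng.random(500) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

# gradient ascent on the log-likelihood
#   sum_i [ y_i * w.x_i - log(1 + exp(w.x_i)) ]
# whose gradient is X^T (y - sigmoid(X w))
w = np.zeros(2)
for _ in range(2000):
    w += 0.005 * X.T @ (y - 1 / (1 + np.exp(-X @ w)))
```

The recovered w lands near w_true; the log-likelihood is concave in w, so gradient ascent (or an off-the-shelf solver such as scipy.optimize or scikit-learn's LogisticRegression) finds the global optimum.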

SLIDE 21

Logistic Regression

geometric view

[scatter plot: neuron 1 spikes vs. neuron 2 spikes, with the p = 0.5 contour (the classification boundary)]

SLIDE 22

Bayesian Estimation

three basic ingredients:

  • 1. Likelihood
  • 2. Prior
  • 3. Loss function

The likelihood and prior jointly determine the posterior p(θ|m); the loss function L(θ̂, θ) gives the "cost" of making the estimate θ̂ when the true value is θ. Together they fully specify how to generate an estimate from the data.

The Bayesian estimator is defined as:

θ̂(m) = arg min_θ̂ ∫ L(θ̂, θ) p(θ|m) dθ

where the integral over the posterior is the expected loss, or "Bayes' risk".

SLIDE 23

Typical Loss functions and Bayesian estimators

1. squared-error loss:

L(θ̂, θ) = (θ̂ − θ)²

We need to find the θ̂ minimizing the expected loss:

E[L] = ∫ (θ̂ − θ)² p(θ|m) dθ

Differentiate with respect to θ̂ and set to zero:

2 ∫ (θ̂ − θ) p(θ|m) dθ = 0   ⟹   θ̂ = ∫ θ p(θ|m) dθ

This is the "posterior mean", also known as the Bayes least squares (BLS) estimator.

SLIDE 24

Typical Loss functions and Bayesian estimators

2. "zero-one" loss:

L(θ̂, θ) = 1 − δ(θ̂ − θ)

(equal to 1 unless θ̂ = θ)

Expected loss:

E[L] = ∫ (1 − δ(θ̂ − θ)) p(θ|m) dθ = 1 − p(θ̂|m)

which is minimized by:

θ̂ = arg max_θ p(θ|m)

  • the posterior maximum (or "mode")
  • known as the maximum a posteriori (MAP) estimate
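Both estimators can be computed numerically from a gridded posterior. A sketch using an assumed, skewed (gamma-shaped) posterior, where the two estimators visibly disagree:

```python
import numpy as np

# discretized posterior p(theta | m) on a grid (illustrative, skewed shape)
theta = np.linspace(0, 10, 1001)
dtheta = theta[1] - theta[0]
post = theta**2 * np.exp(-theta)       # unnormalized Gamma(3, 1) shape
post /= post.sum() * dtheta            # normalize so it integrates to ~1

# BLS estimate: posterior mean (minimizes expected squared-error loss)
bls = (theta * post).sum() * dtheta    # near 3, the mean of Gamma(3, 1)

# MAP estimate: posterior mode (minimizes expected zero-one loss)
map_est = theta[np.argmax(post)]       # 2, the mode of Gamma(3, 1)
```

For a symmetric, unimodal posterior (e.g. Gaussian) the two estimators coincide; skew in the posterior is what pulls them apart.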