NEPR208 - Adaptation properties and mechanisms


SLIDE 1

Functional advantages in properties of a neural code and changes in those properties:

- What is adaptation?
- Why do neural systems have a particular nonlinearity and filter?
- Why do the nonlinearity and filter change?
- A hierarchy of systems
- What biophysical mechanisms can cause adaptation?

[Diagram: Linear-Nonlinear (LN) model: a spatiotemporal filter (the receptive field, or preferred visual feature) followed by a static nonlinearity]
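To make the LN model concrete, here is a minimal sketch in Python; the filter shape, the rectifying nonlinearity, and all numbers are illustrative assumptions, not values from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear stage: a biphasic temporal filter, a 1-D stand-in for the
# spatiotemporal receptive field (the "preferred visual feature").
dt = 0.01                                  # time step (s)
tau = np.arange(0.0, 0.3, dt)              # filter time axis
filt = np.exp(-tau / 0.05) - 0.5 * np.exp(-tau / 0.1)

stim = rng.normal(size=2000)               # white-noise stimulus
drive = np.convolve(stim, filt)[:len(stim)] * dt  # filter output over time

# Nonlinear stage: a static rectifier maps filter output to firing rate.
rate = 100.0 * np.maximum(drive, 0.0)      # firing rate (Hz)
spikes = rng.poisson(rate * dt)            # Poisson spike counts per time bin
```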

SLIDE 2

What happens when stimulus statistics change?

[Figure: stimulus intensity over time]

SLIDE 3

What happens when stimulus statistics change?

[Figure: stimulus intensity over time]

SLIDE 4

Sakmann and Creutzfeldt, Scotopic and mesopic light adaptation in the cat’s retina (1969)

Ganglion cell response curves shift to the mean light intensity

SLIDE 5

Neurons have a limited dynamic range set by maximum and minimum output levels, and by noise

SLIDE 6

Events with Poisson statistics

µ = mean number of events in a time interval; n = number of events in a time interval

Probability distribution P[n; µ]:

$$P\!\left(X = n \mid E(X) = \mu\right) = \frac{e^{-\mu}\,\mu^{n}}{n!}$$

variance = mean = µ

[Figure: expected frequency vs. number of events]
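These properties are easy to check numerically; a minimal sketch, with the mean µ = 5 chosen arbitrarily:

```python
import numpy as np
from math import exp, factorial

mu = 5.0                                  # mean number of events per interval
counts = np.random.default_rng(0).poisson(mu, size=200_000)
print(counts.mean(), counts.var())        # both approach mu: variance = mean

# Poisson pmf: P(X = n) = exp(-mu) * mu**n / n!
pmf = np.array([exp(-mu) * mu**n / factorial(n) for n in range(20)])
empirical = np.bincount(counts, minlength=20)[:20] / counts.size
print(np.abs(pmf - empirical).max())      # small: samples match the formula
```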

SLIDE 7

How to Model Noise?

- Poisson distribution: independent events occurring at an average rate (photons, spiking). Variance: $\sigma^2 = \mu$
- Gaussian distribution: sum of many independent processes, through the central limit theorem (membrane potential noise). Variance: $\sigma^2$ = constant parameter
- Binomial distribution: n independent outcomes, each with probability p (channel gating, vesicle fusion). Variance: $\sigma^2 = np(1-p)$. Approximated by a Poisson distribution at low probability p.
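The low-p approximation can be verified directly; a sketch assuming n = 1000 sites with probability p = 0.005, so both distributions have mean µ = np = 5:

```python
from math import comb, exp, factorial

n, p = 1000, 0.005                 # many independent sites, low probability
mu = n * p                         # matched Poisson mean

for k in range(8):
    binom_pmf = comb(n, k) * p**k * (1 - p)**(n - k)    # binomial pmf
    poisson_pmf = exp(-mu) * mu**k / factorial(k)       # Poisson pmf
    print(k, f"{binom_pmf:.5f}", f"{poisson_pmf:.5f}")  # nearly identical

print(n * p * (1 - p), mu)         # variances np(1-p) vs. mu: close for small p
```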

SLIDE 8

Nonlinearity

SLIDE 9

Turtle Cones: Sensitivity and Kinetics change with mean luminance

Baylor & Hodgkin 1974

[Figure: cone responses on a dim background vs. a bright background]

SLIDE 10

[Figure: signals with Poisson statistics at mean rates from 0.1 to 1000 events/s, raw and filtered, plotted over time (s)]

SLIDE 11

Retinal bipolar cell receptive field

What receptive field maximizes information transmission?

Baccus, Olveczky, Manu & Meister, 2008

SLIDE 12

A Mathematical Theory of Communication Claude Shannon (1948)

Bell System Technical Journal 27, 379-423

- Entropy – measure of uncertainty, in bits
- Bit – unit of information
- Mutual information – what you can learn about one signal by observing another
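A small sketch of entropy in bits; the toy distributions are arbitrary, and mutual information follows the same recipe, as H(response) − H(response | stimulus):

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy H = -sum_i p_i * log2(p_i), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # by convention, 0 * log(0) = 0
    return float(-(p * np.log2(p)).sum())

print(entropy_bits([0.5, 0.5]))     # 1.0 bit: a fair coin
print(entropy_bits([0.25] * 4))     # 2.0 bits: uniform over four outcomes
print(entropy_bits([0.9, 0.1]))     # ~0.47 bits: a predictable signal carries less
```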

SLIDE 13

J.H. van Hateren, Spatiotemporal contrast sensitivity of early vision. Vision Res., 33:257-67 (1993)

Natural visual scenes are dominated by low spatial and temporal frequencies.

A theory of maximizing information in a noisy neural system.

J.H. van Hateren. Real and optimal neural images in early vision. Nature 360:68-70 (1992)

‘Efficient Coding’ - Horace Barlow

SLIDE 14

Linear filter and frequency response

[Figure: stimulus, filter, and response traces with the corresponding frequency responses]

Convolution theorem: a convolution in the time domain is a simple product in the frequency domain.

$$h(t) = f(t) \ast g(t) \;\Leftrightarrow\; \tilde{h}(\omega) = \tilde{f}(\omega)\,\tilde{g}(\omega)$$
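The theorem can be checked numerically with the FFT; a sketch using random signals, where zero-padding to length len(f) + len(g) − 1 makes the circular FFT convolution equal the linear one:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=200)
g = rng.normal(size=200)

m = len(f) + len(g) - 1                    # full linear-convolution length
h_time = np.convolve(f, g)                 # convolution in the time domain
h_freq = np.real(np.fft.ifft(np.fft.fft(f, m) * np.fft.fft(g, m)))

print(np.allclose(h_time, h_freq))         # True: product in the frequency domain
```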

SLIDE 15

Optimal filter whitens but also cuts out noise

[Diagram: a 1/f stimulus spectrum plus a noise floor passes through a 'whitening' filter, giving the response spectrum]
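A sketch of this idea, assuming a 1/f² signal power spectrum and a flat noise floor; the gain combines a whitening term with a Wiener-style suppression of noise-dominated frequencies (an illustrative construction, not van Hateren's exact derivation):

```python
import numpy as np

freq = np.linspace(0.1, 100.0, 1000)   # temporal frequency (arbitrary units)
S = 1.0 / freq**2                      # assumed signal power spectrum (~1/f^2)
N = 1e-3                               # assumed flat noise power

whiten = 1.0 / np.sqrt(S)              # equalizes signal power across frequencies
suppress = S / (S + N)                 # attenuates bands where noise dominates
gain = whiten * suppress               # band-pass: rises, then rolls off

print(freq[np.argmax(gain)])           # peak near 1/sqrt(N) ≈ 31.6: a band-pass filter
```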

SLIDE 16

Theory of maximizing information in a noisy neural system:

- Low background intensity: integrates over time (real and theoretical optimum)
- High background intensity: emphasizes change, is more differentiating (real and theoretical optimum)
- Both, scaled in time to the first peak

Filter of fly Large Monopolar Cells (2nd-order visual neurons)

J.H. van Hateren. Real and optimal neural images in early vision. Nature 360:68-70 (1992)

SLIDE 17

Spatial adaptation in retinal ganglion cells

Barlow, FitzHugh & Kuffler (1957)

SLIDE 18

Theories of efficient coding:

- An ideal encoder should use all output values with equal probability.
- Low frequencies dominate in natural scenes; an efficient encoder should therefore amplify higher frequencies more than low frequencies.
- But when signals are more noisy, such as when the signal is weak, higher frequencies should be reduced, as they carry little information.

SLIDE 19

A Simple Model of Visual Responses

[Diagram: cone → bipolar cell → ganglion cell, with synaptic weights w1, w2, w3; summarized as a Linear-Nonlinear (LN) model: a spatiotemporal filter (receptive field / preferred visual feature) followed by a static nonlinearity]

SLIDE 20

Deep Learning Object Recognition

SLIDE 21

“Convolutional” layer: like a mosaic of retinal neurons
SLIDE 22

[Diagram: each unit applies a Filter → Scale → Threshold]

Multiple cell types: like the multiple cell types in the retina

SLIDE 23

[Diagram: each unit applies a Filter → Scale → Threshold]

Multiple cell types: like the multiple cell types in the retina

Multiple layers: like the hierarchy of retinal circuitry

SLIDE 24

Object Recognition Deep Network

SLIDE 25

Different sensory models for different questions

[Diagram: a spectrum of models from LN model to LN-LN model to minimal convolutional neural network (stimulus → convolutional layer 1 → convolutional layer 2 → dense layer) to deep CNN; deeper models are more expressive, simpler models are more computationally and mechanistically interpretable]

SLIDE 26

Adaptation to mean and variance

[Figure: stimulus intensity over time]

SLIDE 27

Why study biophysical mechanisms?

[Diagram: theory and computations link higher-level function to lower-level mechanisms]

Biophysics provides a tool kit; biophysics provides constraints.

SLIDE 28

Change in sensitivity by depletion:

- Short-term synaptic plasticity (synaptic depression)
- Receptor desensitization
- Ion channel inactivation

Change in sensitivity by modulation:

- Feedforward inhibition
- Feedback inhibition
- Spike-dependent conductances

SLIDE 29

‘Equivalent’ circuit model of a neuron

[Circuit diagram: injected current $I_e$ divides between a capacitive branch ($C$, current $I_c$) and an ionic branch ($G_K = 1/R_K$ in series with the battery $E_K$, current $I_K$), spanning the membrane from inside to outside]

$$I_K + I_c = I_e$$

$$C\frac{dV}{dt} = I_e - G_K\,(V - E_K)$$
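A forward-Euler integration of this membrane equation; a minimal sketch, with illustrative parameter values and current step:

```python
import numpy as np

C, GK, EK = 1.0, 0.05, -77.0           # nF, uS, mV (illustrative values)
dt, T = 0.1, 300.0                     # time step and duration (ms)
t = np.arange(0.0, T, dt)
Ie = np.where((t > 50) & (t < 200), 1.0, 0.0)       # 1 nA current step

V = np.full_like(t, EK)                # start at the K+ reversal potential
for i in range(1, len(t)):
    dVdt = (Ie[i - 1] - GK * (V[i - 1] - EK)) / C   # C dV/dt = Ie - GK (V - EK)
    V[i] = V[i - 1] + dt * dVdt        # forward-Euler step

print(V.max())                         # approaches EK + Ie/GK = -57 mV
print(C / GK)                          # membrane time constant tau = 20 ms
```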

SLIDE 30

A mathematical model of a neuron

Alan Hodgkin & Andrew Huxley, 1952

[Circuit diagram: membrane capacitance $C$ in parallel with Na+, K+, and leak branches ($g_{Na}$, $E_{Na}$; $g_K$, $E_K$; $g_L$, $E_L$), driven by injected current $I_e$; accompanying plots show $V_m$ and conductance vs. time (ms)]

$$C\frac{dV}{dt} = \sum_n I_n(t) = I_e(t) - \sum_i g_i\,(V - E_i)$$

SLIDE 31

Ionic currents (time dependence)

[Figure: $G_K(t)$ shows activation after the start of a voltage pulse; $G_{Na}(t)$ shows activation followed by inactivation; both plotted vs. time after start of pulse (ms)]

$$I_K(t) = G_K(V,t)\,(V - E_K)$$

$$I_{Na}(t) = G_{Na}(V,t)\,(V - E_{Na})$$

SLIDE 32

Kinetic model

$$\mathrm{R}\;(\text{closed}) \;\underset{k_{\text{off}}}{\overset{[L]\,k_{\text{on}}}{\rightleftharpoons}}\; \mathrm{A}\;(\text{open})$$

Rate constants: $[L]\,k_{\text{on}}$ and $k_{\text{off}}$. State occupancies sum to 1.

Change in activity = inflow − outflow:

$$\frac{dA}{dt} = R\,[L]\,k_{\text{on}} - A\,k_{\text{off}}$$
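Integrating the two-state scheme numerically; a sketch with arbitrary rate constants and ligand pulse:

```python
import numpy as np

kon, koff = 2.0, 1.0               # rate constants (illustrative units)
dt = 1e-3                          # time step (s)
t = np.arange(0.0, 10.0, dt)
L = np.where(t < 5.0, 1.0, 0.0)    # ligand pulse: [L] = 1 for the first 5 s

A = np.zeros_like(t)               # occupancy of the active (open) state
for i in range(1, len(t)):
    R = 1.0 - A[i - 1]             # occupancies sum to 1
    inflow = R * L[i - 1] * kon    # R -> A at rate [L] * kon
    outflow = A[i - 1] * koff      # A -> R at rate koff
    A[i] = A[i - 1] + dt * (inflow - outflow)

print(A[int(4.9 / dt)])            # near steady state [L]kon / ([L]kon + koff) = 2/3
print(A[-1])                       # decays back toward zero after the pulse ends
```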

SLIDE 33

Input and output in a kinetic model

$$\mathrm{R}\;(\text{resting}) \;\underset{k_{\text{off}}}{\overset{[L]\,k_{\text{on}}}{\rightleftharpoons}}\; \mathrm{A}\;(\text{active})$$

[Figure: fractional occupancy (0 to 1) of the resting and active states vs. time, for given rate constants (1/s)]

SLIDE 34

Steps for computing the model, focusing on the K+ current:

$$C\frac{dV(t)}{dt} = I_e(t) - \bar{G}_K\,n^4(t,\alpha_n,\beta_n)\,(V - E_K) - \ldots$$

Gating scheme: Closed $(1-n) \rightleftharpoons$ Open $(n)$, with voltage-dependent rates $\alpha(V)$ and $\beta(V)$:

$$\frac{dn}{dt} = \alpha(V)\,(1-n) - \beta(V)\,n$$

Here $n$ is the gating state variable and $\bar{G}_K$ is the maximum conductance.

SLIDE 35

Steps for computing the model, focusing on the K+ current (a code sketch follows the list):

1. Start with $V_m$ at time step $t$.
2. Compute the rate constants $\alpha$ and $\beta$ as a function of $V_m$.
3. Compute $\frac{dn}{dt}$ and integrate one time step to get $n(t)$.
4. Compute the K+ current: $I_K = \bar{G}_K\,n^4\,(V - E_K)$.
5. Compute the total membrane current: $I_m = I_K + I_{Na} + I_L$.
6. Integrate $\frac{dV_m}{dt}$ to get $V_m$ at the next time step.

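A sketch of this loop in Python, keeping only the K+ and leak currents and using Hodgkin and Huxley's original rate constants (voltages in mV relative to rest); without the Na+ current the model cannot spike, but the integration steps are the same:

```python
import numpy as np

def alpha_n(V):  # K+ gate opening rate (1/ms), original HH form, V relative to rest
    return 0.01 * (10.0 - V) / (np.exp((10.0 - V) / 10.0) - 1.0)

def beta_n(V):   # K+ gate closing rate (1/ms)
    return 0.125 * np.exp(-V / 80.0)

C, GK, EK = 1.0, 36.0, -12.0        # uF/cm^2, mS/cm^2, mV (relative to rest)
GL, EL = 0.3, 10.6                  # leak conductance and reversal
dt, T = 0.01, 50.0                  # ms
t = np.arange(0.0, T, dt)
Ie = np.where(t > 10.0, 20.0, 0.0)  # uA/cm^2 current step

V, n = np.zeros_like(t), np.zeros_like(t)
n[0] = alpha_n(0.0) / (alpha_n(0.0) + beta_n(0.0))    # steady-state gate at rest

for i in range(1, len(t)):
    a, b = alpha_n(V[i - 1]), beta_n(V[i - 1])        # steps 1-2: rates from Vm
    n[i] = n[i - 1] + dt * (a * (1 - n[i - 1]) - b * n[i - 1])  # step 3: dn/dt
    IK = GK * n[i] ** 4 * (V[i - 1] - EK)             # step 4: K+ current
    IL = GL * (V[i - 1] - EL)                         # leak stands in for INa here
    V[i] = V[i - 1] + dt * (Ie[i - 1] - IK - IL) / C  # steps 5-6: integrate dVm/dt
```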

SLIDE 36

Hodgkin-Huxley Model

Voltage state variable (membrane equation):

$$C\frac{dV}{dt} = I(t) - \bar{g}_K\,n^4\,(V - E_K) - \bar{g}_{Na}\,m^3 h\,(V - E_{Na}) - \bar{g}_l\,(V - E_l)$$

Conductance state variables: $n$, $m$, $h$

Rate "constants": $\alpha_x(V)$, $\beta_x(V)$, functions of voltage

Constants: $E_K$, $E_{Na}$, $E_l$

Note: these are given in the original form, relative to $V_{\text{rest}} \approx -66$ mV.

SLIDE 37

SLIDE 38

- Adaptation to the mean and variance of signals is similar across a number of systems.
- The kinetics and gain of the response change when the stimulus statistics change.
- These adaptive properties can be interpreted as avoiding saturation and maximizing information in the presence of noise.
- Many mechanisms can contribute to these adaptive nonlinear properties at different timescales.