Decoding an arbitrary continuous stimulus



  1. Decoding an arbitrary continuous stimulus. E.g. Gaussian tuning curves: what is P(r_a|s)?

  2. Decoding an arbitrary continuous stimulus. Many neurons “voting” for an outcome. Work through a specific example: assume independence; assume Poisson firing. Noise model: Poisson distribution, P_T[k] = (λT)^k exp(−λT) / k!

  3. Need to know the full P[r|s]. Assume Poisson firing and assume independence across cells, so that P[r|s] factorizes over the population. Population response of 11 cells with Gaussian tuning curves.
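To make the setup concrete, here is a minimal Python sketch of these assumptions; the preferred stimuli, tuning width, peak rate, and counting window are made-up values rather than the ones on the slide. It builds Gaussian tuning curves for 11 cells, draws independent Poisson counts, and evaluates log P[r|s] = Σ_a [r_a log(f_a(s)T) − f_a(s)T − log r_a!].

```python
import numpy as np
from scipy.special import gammaln

# Hypothetical parameters (not taken from the slides): 11 cells with
# Gaussian tuning curves whose centres tile the stimulus range.
s_pref = np.linspace(-5.0, 5.0, 11)   # preferred stimuli s_a
sigma  = 1.0                          # common tuning width
r_max  = 50.0                         # peak rate (Hz)
T      = 0.1                          # counting window (s)

def tuning(s):
    """Gaussian tuning curves f_a(s) for all cells."""
    return r_max * np.exp(-0.5 * ((s - s_pref) / sigma) ** 2)

def sample_counts(s, rng):
    """Independent Poisson spike counts r_a given stimulus s."""
    return rng.poisson(tuning(s) * T)

def log_likelihood(r, s):
    """log P[r|s] = sum_a [ r_a log(f_a(s) T) - f_a(s) T - log r_a! ]."""
    lam = tuning(s) * T
    return np.sum(r * np.log(lam) - lam - gammaln(r + 1))

rng = np.random.default_rng(0)
r = sample_counts(0.5, rng)            # population response to s = 0.5
print(r, log_likelihood(r, 0.5))
```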

  4. ML. Apply ML: maximize ln P[r|s] with respect to s. Set the derivative to zero and use the fact that the sum of the tuning curves, Σ_a f_a(s), is approximately constant. From the Gaussianity of the tuning curves, if all tuning widths σ_a are the same, the estimate takes a simple closed form (see the sketch below).
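For independent Poisson neurons with equal-width Gaussian tuning curves, the maximization gives the familiar rate-weighted centre-of-mass estimate s_ML = Σ_a r_a s_a / Σ_a r_a (the standard textbook result; the slide's own derivation was in the figures). Continuing the sketch above (it reuses s_pref, sample_counts, and rng defined there):

```python
def ml_estimate(r):
    """ML decoder for equal-width Gaussian tuning curves:
    s_ML = sum_a r_a s_a / sum_a r_a  (rate-weighted centre of mass)."""
    return np.dot(r, s_pref) / np.sum(r)

print(ml_estimate(sample_counts(0.5, rng)))   # should land near 0.5
```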

  5. MAP. Apply MAP: maximize ln p[s|r] with respect to s. Set the derivative to zero and again use the constant-sum approximation; from the Gaussianity of the tuning curves, the prior term simply adds to the ML solution (worked form below).
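For reference, a worked form of this step under the same assumptions as above (equal tuning widths σ_r and a Gaussian prior with mean s_prior and variance σ_prior²); this is the standard result, not copied from the slide:

```latex
% MAP estimate: independent Poisson neurons, equal-width Gaussian tuning
% curves (width \sigma_r), Gaussian prior N(s_{\rm prior}, \sigma_{\rm prior}^2).
\frac{\partial}{\partial s}\ln p[s\mid\mathbf r]
  \;=\; \sum_a \frac{n_a\,(s_a - s)}{\sigma_r^2}
        \;-\; \frac{s - s_{\rm prior}}{\sigma_{\rm prior}^2} \;=\; 0
\quad\Longrightarrow\quad
s_{\rm MAP} \;=\;
  \frac{\sum_a n_a s_a/\sigma_r^2 \;+\; s_{\rm prior}/\sigma_{\rm prior}^2}
       {\sum_a n_a/\sigma_r^2 \;+\; 1/\sigma_{\rm prior}^2},
\qquad n_a = \text{spike count of cell } a.
```

With a flat (constant) prior the prior terms drop out and s_MAP reduces to s_ML, which is the comparison shown on the next slide.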

  6. Given this data: compare the MAP estimate under a prior with mean −2 and variance 1 against the estimate under a constant (flat) prior.

  7. How good is our estimate? For stimulus s we have an estimate s_est. Relevant quantities: bias, variance, mean square error, the Cramér-Rao bound, and the Fisher information (ML is unbiased: b = b′ = 0).
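The formulas these labels refer to were figures on the slide; written out, they are the standard definitions:

```latex
% Bias, variance, mean square error:
b(s) = \langle s_{\rm est}\rangle - s, \qquad
\sigma_{\rm est}^2(s) = \big\langle (s_{\rm est} - \langle s_{\rm est}\rangle)^2 \big\rangle, \qquad
\big\langle (s_{\rm est} - s)^2 \big\rangle = \sigma_{\rm est}^2(s) + b(s)^2 .
% Cramer-Rao bound and Fisher information:
\sigma_{\rm est}^2(s) \;\ge\; \frac{\big(1 + b'(s)\big)^2}{I_F(s)}, \qquad
I_F(s) = \Big\langle -\frac{\partial^2}{\partial s^2}\ln P[\mathbf r\mid s] \Big\rangle
       = \Big\langle \Big(\frac{\partial}{\partial s}\ln P[\mathbf r\mid s]\Big)^2 \Big\rangle .
```

For an unbiased estimator (b = b′ = 0) the bound is σ²_est ≥ 1/I_F(s). The second, equivalent form of I_F is the "alternatively" referred to on the next slide; for the independent Poisson population used here, I_F(s) = T Σ_a f_a′(s)²/f_a(s), which is why it measures local stimulus discriminability.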

  8. Fisher information. In either of its equivalent forms, it quantifies local stimulus discriminability.

  9. Entropy and Shannon information

  10. Entropy and Shannon information. For a random variable X with distribution p(x), the entropy is H[X] = −Σ_x p(x) log₂ p(x). The information (surprise) of an outcome x is I(x) = −log₂ p(x). The mutual information between X and Y is MI[X,Y] = H[X] − E_y[H[X|Y=y]] = H[Y] − E_x[H[Y|X=x]].
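A small Python sketch of these definitions, using a made-up 2×2 joint distribution purely for illustration:

```python
import numpy as np

def entropy(p):
    """H[X] = -sum_x p(x) log2 p(x), ignoring zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(p_xy):
    """MI[X,Y] = H[X] + H[Y] - H[X,Y] (equivalent to H[X] - E_y[H[X|Y=y]])."""
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)
    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())

# Illustrative joint distribution (not from the slides)
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
print(entropy(p_xy.sum(axis=1)))   # H[X] = 1 bit
print(mutual_information(p_xy))    # ~0.28 bits
```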

  11. Information in single spikes. How much information does a single spike convey about the stimulus? Key idea: the information that a spike gives about the stimulus is the reduction in entropy between the distribution of spike times not knowing the stimulus and the distribution of times knowing the stimulus. The response to an (arbitrary) stimulus sequence s is r(t). Without knowing that the stimulus was s, the probability of observing a spike in a given bin is proportional to the mean rate r̄ and the size of the bin. Consider a bin Δt small enough that it can only contain a single spike. Then, in the bin at time t, the probability is ≈ r̄Δt without knowledge of the stimulus and ≈ r(t)Δt given the stimulus.

  12. Information in single spikes. Now compute the entropy difference between the prior distribution (spike in the bin, stimulus unknown) and the conditional distribution (spike in the bin, stimulus known). Note the substitution of a time average for an average over the ensemble. Assuming r̄Δt is small and using log(1 − p) ≈ −p, the bin size drops out. In terms of information per spike (divide by the mean number of spikes), the result is given below.
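The expression the calculation arrives at (the equations were figures in the original; this is the result from Brenner et al., Neural Comp., 2000, which the later slides reuse) is, in bits per spike:

```latex
I_{\rm one\;spike}
  \;=\; \frac{1}{T}\int_0^T \! dt\; \frac{r(t)}{\bar r}\,\log_2\!\frac{r(t)}{\bar r},
\qquad \bar r = \frac{1}{T}\int_0^T \! dt\; r(t).
```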

  13. Using information to evaluate neural models. We can use the information about the stimulus to evaluate our reduced-dimensionality models.

  14. Evaluating models using information. Mutual information is a measure of the reduction of uncertainty about one quantity that is achieved by observing another. Uncertainty is quantified by the entropy of a probability distribution, −∑_x p(x) log₂ p(x). We can compute the information in the spike train directly, without direct reference to the stimulus (Brenner et al., Neural Comp., 2000). This sets an upper bound on the performance of the model. Repeat a stimulus of length T many times and compute the time-varying rate r(t), which sets the probability of spiking given the stimulus.
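A minimal sketch of this "direct" estimate, under assumed inputs (a 0/1 spike array over repeats and a bin width; the variable names are hypothetical): build r(t) from the repeats, then evaluate the per-spike information formula above as a sum over bins.

```python
import numpy as np

def info_per_spike(spikes, dt):
    """Direct estimate of information per spike (Brenner et al. 2000).

    spikes : (n_repeats, n_bins) array of 0/1 spike counts to a repeated stimulus
    dt     : bin width (s)
    """
    r_t   = spikes.mean(axis=0) / dt       # time-varying rate r(t)
    r_bar = r_t.mean()                     # mean rate over the repeat
    T     = spikes.shape[1] * dt           # duration of the repeated segment
    ratio = r_t[r_t > 0] / r_bar           # empty bins contribute zero
    # (1/T) * sum over bins of dt * (r/r_bar) * log2(r/r_bar)
    return np.sum(ratio * np.log2(ratio)) * dt / T

# Toy usage with fabricated data, for illustration only.
rng = np.random.default_rng(1)
rate = 20 + 15 * np.sin(np.linspace(0, 4 * np.pi, 1000))         # Hz
spikes = (rng.random((200, 1000)) < rate * 0.002).astype(float)  # dt = 2 ms
print(info_per_spike(spikes, 0.002), "bits/spike")
```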

  15. Evaluating models using information. Information in the timing of one spike: start from the definition of mutual information.

  16. Evaluating models using information. Given the definition, apply Bayes’ rule to the stimulus-conditional distribution.

  17. Evaluating models using information. Given the definition and Bayes’ rule, apply the dimensionality reduction: replace the full stimulus by its projection onto the model’s dimensions.

  18. Evaluating models using information. Combining the definition, Bayes’ rule, and the dimensionality reduction, the information in the K-dimensional model is evaluated using the distribution of projections (a sketch follows below).
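Put together, these steps give the standard expression I_model = ∫ dx P(x|spike) log₂[P(x|spike)/P(x)], where x is the projection of the stimulus onto the model's K dimensions. A histogram-based sketch for K = 1 (e.g. projection onto the STA); the function and variable names are hypothetical:

```python
import numpy as np

def info_in_projection(x_all, x_spike, n_bins=32):
    """Information captured by a single stimulus projection:
    I = sum_x P(x|spike) log2[ P(x|spike) / P(x) ], estimated with histograms.

    x_all   : projections of all stimulus segments onto the feature
    x_spike : projections of the spike-triggering segments only
    """
    edges = np.histogram_bin_edges(x_all, bins=n_bins)
    p_prior, _ = np.histogram(x_all,   bins=edges)
    p_spike, _ = np.histogram(x_spike, bins=edges)
    p_prior = p_prior / p_prior.sum()
    p_spike = p_spike / p_spike.sum()
    mask = (p_spike > 0) & (p_prior > 0)
    return np.sum(p_spike[mask] * np.log2(p_spike[mask] / p_prior[mask]))

# Toy check with fabricated projections: prior N(0,1) vs spike-conditional N(1,1)
rng = np.random.default_rng(2)
print(info_in_projection(rng.normal(0, 1, 100_000), rng.normal(1, 1, 5_000)))  # ~0.7 bits
```

Comparing this against the direct single-spike information from slide 14 gives the fraction of the information that the reduced model captures.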

  19. Using information to evaluate neural models. Here we used information to evaluate reduced models of the Hodgkin-Huxley neuron. [Figure: “twist” model; 2D description: two covariance modes; 1D description: STA only.]

  20. Information in 1D. The STA is the single most informative dimension. [Figure: information in each eigenvector, Mode 1 and Mode 2 (bits), plotted against information in the STA (bits).]

  21. Information in 1D. The information is related to the eigenvalue of the corresponding eigenmode; negative eigenmodes are much more informative; the STA and the leading negative eigenmodes carry up to 90% of the total information. [Figure: information fraction vs. eigenvalue (normalised to stimulus variance).]

  22. Information in 2D. We recover significantly more information from a 2-dimensional description. [Figure: information about two features (normalized) vs. information about the STA (normalized).]

  23. Calculating information in spike trains. How can one compute the entropy and information of spike trains? Entropy: discretize the spike train into binary words w with letter size Δt and word length T. This takes into account correlations between spikes on timescales from Δt up to T. Compute p_i = p(w_i); the naïve entropy is then H_naive = −Σ_i p_i log₂ p_i (Strong et al., 1997; Panzeri et al.).
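A sketch of the word-entropy computation; the bin and word sizes are illustrative:

```python
import numpy as np
from collections import Counter

def naive_word_entropy(spike_binary, word_len):
    """Naive entropy (bits per word) of binary words of `word_len` letters.

    spike_binary : 1-D array of 0/1 spike counts in bins of width dt
    """
    n_words = len(spike_binary) // word_len
    words = [tuple(spike_binary[i * word_len:(i + 1) * word_len]) for i in range(n_words)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# e.g. 10-letter words with 2 ms letters capture correlations up to 20 ms
```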

  24. Calculating information in spike trains. Information: the difference between the variability driven by stimuli and that due to noise. Take a stimulus sequence s and repeat it many times. For each time in the repeated stimulus, get a distribution of words P(w|s(t)). The average over s becomes an average over time: H_noise = ⟨H[P(w|s_i)]⟩_i. Choose the length of the repeated sequence long enough to sample the noise entropy adequately. Finally, do this as a function of word length T and extrapolate to infinite T. (Reinagel and Reid, ’00)
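Continuing the sketch above, the noise entropy averages the word entropy across repeats at each time in the repeated segment; the information per word is then the difference between the total and noise entropies (the extrapolation to infinite word length is not shown):

```python
import numpy as np
from collections import Counter

def noise_word_entropy(spikes_rep, word_len):
    """H_noise = < H[P(w | s(t))] >_t  (bits per word) from a repeated stimulus.

    spikes_rep : (n_repeats, n_bins) array of 0/1 spike counts
    """
    entropies = []
    for t in range(0, spikes_rep.shape[1] - word_len + 1, word_len):
        words = [tuple(row[t:t + word_len]) for row in spikes_rep]
        counts = np.array(list(Counter(words).values()), dtype=float)
        p = counts / counts.sum()
        entropies.append(-np.sum(p * np.log2(p)))
    return np.mean(entropies)

# information rate (bits/s) ~ (H_total - H_noise) / (word_len * dt), then extrapolate in word length
```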

  25. Calculating information in spike trains. Fly H1: obtain an information rate of ~80 bits/sec, or 1-2 bits/spike.

  26. Calculating information in the LGN. Another example: temporal coding in the LGN (Reinagel and Reid ’00).

  27. Calculating information in the LGN. Apply the same procedure: collect word distributions for a random, then repeated, stimulus.

  28. Information in the LGN. Use this to quantify how precise the code is, and over what timescales correlations are important.
