
Cognitive computational neuroscience of vision, by Nikolaus Kriegeskorte

  1. Cognitive computational neuroscience of vision. Nikolaus Kriegeskorte, Department of Psychology, Department of Neuroscience, Zuckerman Mind Brain Behavior Institute; affiliated member, Electrical Engineering, Columbia University. [Diagram: the overlap of cognitive science, computational neuroscience, and artificial intelligence.]

  2. Cognitive computational neuroscience. [Diagram: cognitive science, computational neuroscience, and artificial intelligence overlap; at their intersection, neural network models serve as a common language for expressing theories about brain information processing.] Kriegeskorte & Douglas 2018

  3. How can we test neural network models with brain-activity data?

  4. Predicting representational spaces. [Diagram: experimental stimuli are presented to both a model and the brain; each yields a set of activity patterns (one per stimulus) and activity profiles (one per response); how should the two representational spaces be compared?] Diedrichsen & Kriegeskorte 2017, Kriegeskorte & Diedrichsen 2019

  5. Predicting representational spaces. [Diagram: the stimuli-by-responses activity matrix; a row (one stimulus across responses) is an activity pattern, a column (one response across stimuli) is an activity profile.] Diedrichsen & Kriegeskorte 2017, Kriegeskorte & Diedrichsen 2019

  6. Predicting representational spaces. [Diagram: two routes from model to data. Encoding model: fitted weights (with an L2 weight penalty) map model features to each measured response (e.g. a neuron or voxel), predicting its activity profile across stimuli. Representational similarity analysis: a stimulus-by-stimulus distance matrix is computed from the model and compared with the data.] Diedrichsen & Kriegeskorte 2017, Kriegeskorte & Diedrichsen 2019
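
A minimal sketch of the encoding-model route (my illustration, not the talk's implementation; shapes and the closed-form ridge solution are assumptions): model features predict each measured response under an L2 weight penalty.

```python
import numpy as np

def fit_encoding_model(X, Y, lam=1.0):
    """Ridge-regression encoding model (illustrative helper).

    X   : (n_stimuli, n_features) model feature matrix
    Y   : (n_stimuli, n_responses) measured responses (neurons/voxels)
    lam : L2 weight penalty
    Returns W : (n_features, n_responses), one weight vector per response.
    """
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Usage: fit on training stimuli, predict held-out responses.
rng = np.random.default_rng(0)
X_train, Y_train = rng.normal(size=(80, 100)), rng.normal(size=(80, 50))
X_test = rng.normal(size=(20, 100))
W = fit_encoding_model(X_train, Y_train, lam=10.0)
Y_pred = X_test @ W  # predicted activity patterns for 20 held-out stimuli
```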

  7. Predicting representational spaces. [Diagram: three approaches. Encoding models fit weights (L2 weight penalty) to model features to model each response separately. Pattern-component modeling models the second-moment matrix of the activity profiles. Representational similarity analysis models the stimulus-by-stimulus distance matrix of summary statistics.] Core commonality: all three test hypotheses about the second moment of the activity-profiles distribution. Diedrichsen & Kriegeskorte 2017, Kriegeskorte & Diedrichsen 2019
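
To make the core commonality concrete, a small sketch (assumed shapes; U is the stimuli-by-channels activity matrix): the second-moment matrix G of the activity profiles fixes all pairwise squared distances, which is why the three methods probe the same underlying object.

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.normal(size=(10, 200))        # activity: 10 stimuli x 200 channels

G = U @ U.T / U.shape[1]              # second-moment matrix of the activity profiles
diag = np.diag(G)
sq_dist_from_G = diag[:, None] + diag[None, :] - 2 * G   # d^2_ij = G_ii + G_jj - 2 G_ij

# The same distances computed directly from the activity patterns:
diffs = U[:, None, :] - U[None, :, :]
sq_dist_direct = (diffs ** 2).sum(-1) / U.shape[1]
assert np.allclose(sq_dist_from_G, sq_dist_direct)
```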

  8. The onion of brain representations. Information potentially extracted by researchers: (1) spatially organized neuronal population code (neuronal locations L and activity profiles U); (2) activity-profiles distribution U (activity profiles, or all moments of the activity-profiles distribution); (3) representational geometry (2nd moment G of the activity profiles, or representational distance matrix D). Information potentially used by single readout neurons: (4) total encoded information (a downstream neuron can perform arbitrary linear or nonlinear readout from all neurons); (5) linear neuronal readout (linear readout from all neurons); (6) restricted-input linear readout (linear readout from a limited number of neurons); (7) local linear readout (linear or radial-basis readout from neurons in a restricted spatial neighborhood). Researcher information ranges from comprehensive to focused; encoded information from implicit to explicit. Kriegeskorte & Diedrichsen 2019

  10. Representational similarity analysis. [Diagram: experimental stimuli are presented to both a model and the brain; from each set of activity patterns, a stimuli-by-stimuli representational dissimilarity matrix (RDM) is computed, using a dissimilarity measure such as a crossvalidated Mahalanobis distance estimator; the model and brain RDMs are then compared.] Kriegeskorte et al. 2008
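
A bare-bones RSA sketch (shapes are assumptions; correlation distance stands in here for the crossvalidated Mahalanobis estimator named on the slide):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed vector of stimulus-pair dissimilarities.
    patterns : (n_stimuli, n_channels)."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(2)
brain = rng.normal(size=(30, 100))    # e.g. 30 stimuli x 100 voxels
model = rng.normal(size=(30, 512))    # 30 stimuli x 512 model units

# RSA compares the two representational geometries, not the raw activities,
# so brain and model need not have the same number of measurement channels.
rho, _ = spearmanr(rdm(brain), rdm(model))
print(f"RDM correlation (Spearman): {rho:.3f}")
```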

  11. Representational feature weighting with non-negative least-squares. [Diagram: model features f1, f2, ..., fk yield the model RDM; the weighted features w1·f1, w2·f2, ..., wk·fk define the weighted-model RDM.]

  12. Representational feature weighting with non-negative least-squares. The predicted squared distance between stimuli $i$ and $j$ under weighted model features is
$$\hat{d}_{i,j}^{\,2} = \sum_{k=1}^{n} \left[ w_k f_k(i) - w_k f_k(j) \right]^2 = \sum_{k=1}^{n} w_k^2 \left[ f_k(i) - f_k(j) \right]^2,$$
so the squared-distance RDM of the weighted model features equals a weighted sum of single-feature RDMs: weighted-model RDM $= w_1^2 \cdot$ feature-1 RDM $+ w_2^2 \cdot$ feature-2 RDM $+ \dots + w_n^2 \cdot$ feature-$n$ RDM. The weights are fitted by non-negative least squares, minimizing the sum of squared errors between the measured and predicted distances:
$$\hat{\mathbf{w}} = \arg\min_{\mathbf{w} \in \mathbb{R}_+^{n}} \sum_{i \neq j} \left( d_{i,j} - \hat{d}_{i,j} \right)^2,$$
where $w_k$ is the weight given to model feature $k$, $f_k(i)$ is model feature $k$ for stimulus $i$, and $d_{i,j}$ is the measured distance between stimuli $i$ and $j$; $\mathbf{w} = [w_1 \; w_2 \; \dots \; w_n]$ is the weight vector.
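
Because the predicted squared-distance RDM is linear in the squared weights v_k = w_k^2, the fit reduces to a standard non-negative least-squares problem. A sketch under assumed shapes (not the authors' code):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_stim, n_feat = 20, 5
F = rng.normal(size=(n_stim, n_feat))            # model features f_k(i)

def sq_dist_rdm(x):
    """Condensed vector of squared pairwise distances for one feature column."""
    d = x[:, None] - x[None, :]
    iu = np.triu_indices(len(x), k=1)
    return (d ** 2)[iu]

# Design matrix: one column of single-feature squared-distance RDM entries per feature.
A = np.stack([sq_dist_rdm(F[:, k]) for k in range(n_feat)], axis=1)

# Synthetic "measured" RDM generated from known weights, to show recovery.
true_v = np.array([1.0, 0.5, 0.0, 0.0, 2.0])      # v_k = w_k^2
data_rdm = A @ true_v + 0.01 * rng.normal(size=A.shape[0])

v, residual = nnls(A, data_rdm)   # non-negative least squares on v_k
w_hat = np.sqrt(v)                # recover the feature weights w_k
```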

  13. Deep convolutional networks predict the IT representational geometry. [Bar graph: accuracy of predicting human IT (group average of Spearman's ρ) for the network's convolutional and fully connected layers, and for a weighted combination of layers and SVM discriminants, compared with the performance range of computer-vision features. The noise ceiling, the highest accuracy any model can achieve, is estimated using the other subjects' average dissimilarity matrix as the model prediction; markers indicate accuracy above 0, accuracy below the noise ceiling, and pairwise model comparisons (stimulus bootstrap, p < 0.05, Bonferroni corrected for all pairwise comparisons).] Khaligh-Razavi & Kriegeskorte 2014, Nili et al. 2014 (RSA Toolbox), Storrs et al. (in prep.)
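
The noise ceiling mentioned on the slide can be estimated roughly as follows (a sketch in the spirit of Nili et al. 2014, with assumed shapes): the lower bound correlates each subject's RDM with the average of the other subjects' RDMs; the upper bound uses the average including that subject.

```python
import numpy as np
from scipy.stats import spearmanr

def noise_ceiling(subject_rdms):
    """subject_rdms : (n_subjects, n_dissimilarities), one condensed RDM per row."""
    lower, upper = [], []
    for s in range(len(subject_rdms)):
        others = np.delete(subject_rdms, s, axis=0).mean(axis=0)
        lower.append(spearmanr(subject_rdms[s], others)[0])
        upper.append(spearmanr(subject_rdms[s], subject_rdms.mean(axis=0))[0])
    return np.mean(lower), np.mean(upper)

rng = np.random.default_rng(4)
rdms = rng.random(size=(8, 435))    # e.g. 8 subjects, 30 stimuli -> 435 pairs
lo, hi = noise_ceiling(rdms)        # a model at or above lo is indistinguishable
                                    # from the true (noiseless) model
```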

  14. Do recurrent neural networks provide better models of vision? (with Courtney Spoerer)

  15. Recurrent networks can recycle their limited computational resources over time. This might boost the performance of a physically finite model or brain. Kriegeskorte & Golan 2019
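
A minimal sketch of this idea in PyTorch (my illustration, not Spoerer et al.'s architecture): a lateral connection feeds the previous time step's output back into the layer, so a fixed set of weights is recycled across time steps.

```python
import torch
import torch.nn as nn

class RecurrentConvLayer(nn.Module):
    """Bottom-up plus lateral recurrent convolution (illustrative sketch)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.bottom_up = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.lateral = nn.Conv2d(out_ch, out_ch, 3, padding=1)

    def forward(self, x, n_steps=4):
        h = torch.relu(self.bottom_up(x))       # t = 0: pure feedforward pass
        for _ in range(n_steps - 1):            # recycle the same weights over time
            h = torch.relu(self.bottom_up(x) + self.lateral(h))
        return h

layer = RecurrentConvLayer(3, 16)
out = layer(torch.randn(1, 3, 32, 32), n_steps=4)  # more steps, same parameter count
```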

  16. Layer-1 lateral connectivity is consistent with primate V1 connectivity. [Figure: RCNN layer-1 lateral-connectivity templates, first 5 principal components.] Spoerer et al. pp2019

  17. Recurrent models can trade off speed of computation for accuracy. [Plot: accuracy versus computational cost (floating-point operations × 10^11); recurrent convolutional models trace out a speed-accuracy curve, compared with feedforward models.] Spoerer et al. pp2019
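
One simple way to realize this trade-off (a sketch; the paper's actual readout rule, e.g. an entropy threshold, may differ): read out at the first time step whose confidence crosses a threshold, so lower thresholds buy speed at the cost of accuracy.

```python
import numpy as np

def thresholded_readout(probs_per_step, threshold=0.9):
    """probs_per_step : (n_steps, n_classes) softmax outputs over recurrent time.
    Decide at the first step whose top probability exceeds the threshold;
    deciding later costs more floating-point operations."""
    for t, p in enumerate(probs_per_step):
        if p.max() >= threshold:
            return int(p.argmax()), t + 1            # (decision, reaction time)
    return int(probs_per_step[-1].argmax()), len(probs_per_step)

# Confidence typically grows over recurrent time steps:
probs = np.array([[0.4, 0.6], [0.2, 0.8], [0.05, 0.95]])
decision, rt = thresholded_readout(probs, threshold=0.9)  # -> (1, 3)
# Sweeping the threshold traces out the speed-accuracy curve on the slide.
```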

  19. RCNN reaction times tend to be slower for images humans are uncertain about. [Plot: correlation between human certainty and RCNN reaction time (Spearman).]
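
The reported relationship can be quantified as follows (hypothetical arrays standing in for the per-image measurements; the slide's result implies a negative rank correlation between certainty and reaction time):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
human_certainty = rng.random(200)            # per-image human certainty scores
rcnn_rt = rng.integers(1, 10, size=200)      # per-image model reaction times (steps)

rho, p = spearmanr(human_certainty, rcnn_rt)  # expect rho < 0 on real data
```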

  20. Can recurrent neural network models capture the representational dynamics in the human ventral stream? (with Tim Kietzmann)

  21. Fitting model representational dynamics with deep representational distance learning (McClure & Kriegeskorte 2016). Task: find an image-computable network that models the first 300 ms of representational dynamics of the ventral stream.
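
A sketch of the kind of objective representational distance learning uses (my paraphrase, with assumed shapes; not the authors' code): penalize mismatches between the network's pairwise distances and a measured RDM, summed over latencies.

```python
import torch

def rdm_loss(model_acts, target_rdm):
    """Squared-error loss between the model's pairwise squared distances
    and a target RDM (e.g. from MEG at one latency).
    model_acts : (n_stimuli, n_units), requires grad
    target_rdm : (n_stimuli, n_stimuli) measured dissimilarities."""
    d = torch.cdist(model_acts, model_acts) ** 2
    iu = torch.triu_indices(len(model_acts), len(model_acts), offset=1)
    return ((d[iu[0], iu[1]] - target_rdm[iu[0], iu[1]]) ** 2).mean()

# Summing this loss over latencies fits the representational dynamics;
# it can be combined with a task loss (e.g. classification) during training.
```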

  22. Recurrent models better explain representations and their dynamics. [Figure: feedforward versus recurrent model fits to magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) data.] Recurrent networks significantly outperform ramping feedforward models in predicting ventral-stream representations (MEG and fMRI).

  23. How can we build neural network models of mind and brain? Big world data, big models, big brain data, big behavioral data.
Divergent: exploring the space of computational models with big world data
• training: different sets of stimuli, different tasks
• units: stochasticity, context-modulation
• architecture: skipping connections, recurrent connections
Convergent: constraining models with brain and behavioral data
• inferential model selection (model parameters learned for a task)
• reweighting of units
• linear remixing of units
• deep learning of model parameters from brain-activity data
