Xaq Pitkow Rajkumar Vasudeva Raju
Inferring Inference
NICE workshop 2017
part of the MICrONS project with Tolias, Bethge, Patel, Zemel, Urtasun, Xu, Siapas, Paninski, Baraniuk, Reid, Seung
Hypothesis: The brain approximates probabilistic inference
using a message-passing algorithm implicit in population dynamics.
[Figure: "World" and "Brain" models, matched to each other]
What algorithms can we learn from the brain?
Architectures? cortex, hippocampus, cerebellum, basal ganglia, …
Transformations? nonlinear dynamics from population responses
Learning rules? short- and long-term plasticity
Principles: probabilistic, nonlinear, distributed
Details: graphical models, message-passing inference, multiplexed across neurons
Events in the world can cause many neural responses, and neural responses can be caused by many events. So neural computation is inevitably statistical, which provides us with mathematical predictions.
Why does it matter whether processing is linear or nonlinear? If all computation were linear, we wouldn’t need a brain.
[Figure: classifying apples — a nonlinearly separable problem vs. a linearly separable one]
Product rule: p(x, y) = p(x) ∙ p(y|x)
Sum rule: L(x) = log ∑_y exp L(x, y)
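The sum rule in the log domain is just a log-sum-exp. A minimal sketch in NumPy (the joint values are illustrative):

```python
import numpy as np

# Joint log-probabilities L(x, y) for two binary variables (illustrative values).
L_xy = np.log(np.array([[0.3, 0.1],
                        [0.2, 0.4]]))  # rows: x, cols: y

# Sum rule in the log domain: L(x) = log sum_y exp L(x, y).
# logaddexp.reduce avoids underflow when log-probabilities are very negative.
L_x = np.logaddexp.reduce(L_xy, axis=1)

print(np.exp(L_x))  # marginal p(x) = [0.4, 0.6]
```

Working in log-probabilities keeps the sum rule numerically stable, which matters once distributions involve many variables.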
Relationships between uncertainties: posteriors generally have nonlinear dependencies, even for the simplest variables.
Relationships between latent variables: Image = Light × Reflectance
[Figure: graphical model with nodes I, L, R]
Probabilistic Graphical Models: simplify the joint distribution p(x|r) by specifying how variables interact:

p(x|r) ∝ ∏_α ψ_α(x_α)

[Figure: factor graph with variables x1, x2, x3 and factor ψ123]
Example: pairwise Markov random field
[Figure: variables x1, x2, x3 with singleton parameters J1, J2, J3 and pairwise couplings J12, J23]
Approximate inference by message-passing:
[Figure: general update equation — each variable's posterior parameters are updated from the interaction parameters and the posterior parameters of its neighbors]
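A minimal sketch of message-passing on a pairwise Markov random field like the one above, assuming binary spins and illustrative parameter values (not from the talk): sum-product belief propagation, checked against brute-force marginals. BP is exact on a tree such as this three-variable chain.

```python
import numpy as np
from itertools import product

# Pairwise binary MRF on the chain x1 - x2 - x3:
#   p(x) ~ exp( sum_i J_i x_i + J12 x1 x2 + J23 x2 x3 ),  x_i in {-1, +1}
J = np.array([0.2, -0.1, 0.3])        # singleton parameters J1, J2, J3
Jp = {(0, 1): 0.5, (1, 2): -0.4}      # pairwise parameters J12, J23
spins = [-1, 1]

def exact_marginal(i):
    """p(x_i = +1) by brute-force enumeration over all 2^3 states."""
    Z = p_plus = 0.0
    for x in product(spins, repeat=3):
        w = np.exp(sum(J[k] * x[k] for k in range(3))
                   + sum(Jp[e] * x[e[0]] * x[e[1]] for e in Jp))
        Z += w
        if x[i] == 1:
            p_plus += w
    return p_plus / Z

def bp_marginal(i, iters=10):
    """p(x_i = +1) by sum-product belief propagation (exact on a tree)."""
    edges = list(Jp) + [(b, a) for a, b in Jp]      # directed edges
    msg = {e: np.ones(2) / 2 for e in edges}        # messages m_{a->b}(x_b)
    for _ in range(iters):
        new = {}
        for a, b in edges:
            Jab = Jp.get((a, b), Jp.get((b, a)))
            m = np.zeros(2)
            for kb, xb in enumerate(spins):
                for ka, xa in enumerate(spins):
                    # product of messages into a from neighbors other than b
                    inc = np.prod([msg[(c, cc)][ka]
                                   for c, cc in edges if cc == a and c != b])
                    m[kb] += np.exp(J[a] * xa + Jab * xa * xb) * inc
            new[(a, b)] = m / m.sum()
        msg = new
    belief = np.array([np.exp(J[i] * x) for x in spins])
    for a, b in edges:
        if b == i:
            belief = belief * msg[(a, b)]
    belief /= belief.sum()
    return belief[1]

for i in range(3):
    print(i, exact_marginal(i), bp_marginal(i))
```

On loopy graphs the same updates give only approximate marginals, which is the sense in which message-passing is "approximate inference."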
Example message-passing algorithms
[Figure: neural response r_i over neuron index i, and the posterior p(x|r) it represents, with mean µ = b·r / a·r and precision 1/σ² = a·r]
Spatial representation of uncertainty (e.g. Probabilistic Population Codes, PPCs)
Ma, Beck, Latham, Pouget 2006, etc
Pattern of activity represents probability. More spikes generally means more certainty
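A sketch of decoding such a linear PPC, assuming independent Poisson neurons with Gaussian tuning curves (all parameter values are illustrative, not from the talk): the posterior precision and mean are linear readouts of the spike counts, so more spikes mean a tighter posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population with Gaussian tuning curves over a stimulus x.
# All parameter values here are illustrative.
prefs = np.linspace(-10, 10, 41)   # preferred stimuli c_i
sigma_tc = 2.0                     # tuning-curve width
gain = 5.0
x_true = 1.5

f = gain * np.exp(-(x_true - prefs) ** 2 / (2 * sigma_tc ** 2))
r = rng.poisson(f)                 # Poisson spike counts

# For independent Poisson noise with Gaussian tuning, the posterior over x
# is Gaussian with parameters that are linear readouts of r:
#   precision 1/sigma^2 = a . r,  with a_i = 1 / sigma_tc^2
#   mean      mu = (b . r) / (a . r),  with b_i = c_i / sigma_tc^2
a = np.full_like(prefs, 1 / sigma_tc ** 2)
b = prefs / sigma_tc ** 2
precision = a @ r
mu = (b @ r) / precision

print(mu, 1 / np.sqrt(precision))  # posterior mean and standard deviation
```

Doubling the gain doubles the expected spike count and hence the decoded precision, without changing the decoded mean.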
Message-passing updates embed in neural dynamics:
singleton populations r1, r2, r3 and pairwise populations r12, r23 represent the posterior parameters, coupled through linear and nonlinear connections.
[Figure: graphical model x1, x2, x3 with parameters J1, J2, J3, J12, J23, alongside the neural populations implementing its message-passing updates]
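One simple way inference updates can embed in dynamics, sketched here with a mean-field update (a simpler stand-in for full message-passing) on a binary pairwise MRF, run as a discrete-time recurrent network with illustrative parameters:

```python
import numpy as np

# Binary pairwise MRF:  p(x) ~ exp( sum_i J_i x_i + sum_(ij) J_ij x_i x_j ),
# x_i in {-1, +1}. Parameter values are illustrative, not from the talk.
J = np.array([0.2, -0.1, 0.3])       # singleton parameters
W = np.array([[0.0, 0.5, 0.0],       # symmetric pairwise parameters J_ij
              [0.5, 0.0, -0.4],
              [0.0, -0.4, 0.0]])

# "Population activity" m_i plays the role of the estimated mean E[x_i];
# iterating the nonlinear recurrent update drives it to the mean-field
# fixed point  m_i = tanh( J_i + sum_j J_ij m_j ).
m = np.zeros(3)
for _ in range(200):
    m = np.tanh(J + W @ m)

print(m)  # approximate marginal means E[x_i]
```

The point of the sketch: the inference algorithm is not stored anywhere explicitly, it is implicit in the network's recurrent dynamics.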
Neural activity → (neural encoding) → information encoded
Neural interactions → information interactions → probability distributions
Example:
[Figure: true vs. inferred posterior parameters (mean and variance over time), decoded from the noiseless activity of N neurons, for Nparams = 1 and Nparams = 5]
Network activity can implicitly perform inference
Raju and Pitkow 2016
Inferring inference with a simulated brain:
Encode: the simulated brain infers latent variables b(t) and encodes them in neural activity r(t)
Decode: recover the information encoded in r(t)
Fit*: recover the message-passing parameters and interactions (*within the message-passing family)
Compare: true vs. learnt parameters
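A much-simplified sketch of the "fit within a family" step, assuming a linear-in-features update rule with quadratic (pairwise) features as a stand-in for the message-passing family: simulate one-step transitions from known parameters, then recover them by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(b):
    # singleton terms plus pairwise products: a quadratic interaction family
    return np.concatenate([b, np.outer(b, b)[np.triu_indices(3, k=1)]])

theta_true = rng.normal(0, 0.5, size=(3, 6))   # "true" update parameters

# Simulate one-step transitions b -> b' from the known update rule
# (the "simulated brain"); here b' = theta_true @ features(b).
B, Bnext = [], []
for _ in range(500):
    b = rng.normal(0, 1, 3)
    B.append(features(b))
    Bnext.append(theta_true @ features(b))
B, Bnext = np.array(B), np.array(Bnext)

# Fit within the family: least squares on (features, next state) pairs
# recovers the update parameters in this noiseless, well-posed case.
theta_hat = np.linalg.lstsq(B, Bnext, rcond=None)[0].T

print(np.max(np.abs(theta_hat - theta_true)))  # recovery error
```

With noise, limited data, or a mismatched family the problem becomes ill-posed, which is exactly where the degeneracies discussed next appear.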
Recovery results for the simulated brain
[Figure: true vs. recovered interactions J_ij and message-passing parameters G_αβγ]
Analysis reveals a degenerate family of solutions
[Figure: mean squared error surface over distance toward local minima 1 and 2, showing the global minimum and degenerate valleys]
From simulated neural data we have recovered:
Graphical model: which variables interact, and how
Message-passing algorithm: how the interactions are used
Representation: how variables are encoded
Applying message-passing to novel tasks:
apply the learnt algorithm to a new graphical model structure, OR relax it into a novel neural network
[Figure: brain → neural network → message-passing nonlinearity]
Next up: applying these methods to real brains
Stimulus: orientation field
Recordings: V1 responses* (*not to the same stimulus; recordings from the Tolias lab)
Model: distributed across a population, modeling transformations of decoded task variables
xaqlab.com
Acknowledgements
Kaushik Lakshminarasimhan, Qianli Yang, Emin Orhan, Aram Giahi-Saravani, KiJung Yoon, James Bridgewater, Zhengwei Wu, Saurabh Daptardar, Rajkumar Vasudeva Raju
Collaborators: Alex Pouget, Jeff Beck, Dora Angelaki, Andreas Tolias, Jacob Reimer, Fabian Sinz, Alex Ecker, Ankit Patel
Funding: