Pattern Separation and Completion in the Hippocampus
Computational Models of Neural Systems
Lecture 3.5
David S. Touretzky
October 2017

Overview
● Pattern separation
10/09/17 Computational Models of Neural Systems 2
– Pulling similar patterns apart reduces memory interference.
● Pattern completion
  – Noisy or incomplete patterns should be mapped to more complete or correct versions.
● How are both achieved in the hippocampal architecture?
  – Use conjunction (codon units; DG) for pattern separation.
  – Learned weights plus thresholding gives pattern completion.
  – Recurrent connections (CA3) can help with completion, but aren't used in the model described here.
● EC provides the initial representation of an event.
● DG and CA3 recode it, forming a new representation better suited to storage and retrieval.
● CA1 forms a representation that can reconstitute the EC pattern.
[Diagram: perforant path and mossy fiber projections]
● Inhibition may regulate overall activity levels, as in a k-WTA network.
● DG has less activity than CA3/CA1.
  – Less activity means the representation is sparser, hence can be more nearly orthogonal.
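To see why sparsity buys orthogonality: two independent random patterns that each activate k of N units share k²/N active units on average, so the shared fraction of each pattern's active units is k/N. A quick numerical sketch (illustrative sizes, not from the lecture):

```python
def expected_overlap(N, k):
    """Expected number of shared active units between two independent
    random patterns, each with k of N units active (hypergeometric mean)."""
    return k * k / N

# Shared fraction of each pattern's k active units is k/N:
print(expected_overlap(1000, 100), 100 / 1000)   # dense code:  10.0 shared, 10% of pattern
print(expected_overlap(1000, 10), 10 / 1000)     # sparse code: 0.1 shared,   1% of pattern
```

Sparser codes thus overlap proportionally less, which is the sense in which they are "more orthogonal."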
● Each DG granule cell receives ~5,000 inputs from EC.
● Each CA3 pyramidal cell receives 3,750–4,500 inputs from EC.
  – This is about 2% of the rat's 200,000 EC layer II neurons.
● CA3 has 160,000 pyramidal cells; CA1 has 250,000.
● NMDA-dependent LTP at perforant path and Schaffer collateral synapses. LTP is also demonstrated in the mossy fiber pathway (non-NMDA).
● Begin with a simple two-layer k-WTA model (like Marr): an EC input layer (i) projecting to a CA3 output layer (o).
[Diagram: fan-in F = 9; hits Ha = 4]
● ki = number of active units in one input pattern
● αi = ki/Ni = fraction of activity in the layer; αo = ko/No
● F = fan-in of each unit in the output layer (must be < Ni)
● Hypergeometric (not binomial; we're drawing without replacement). Expected number of hits:

    ⟨Ha⟩ = (ki / Ni) · F = αi · F
● What is the probability of Ha hits from an input pattern with ki active units, given that the fan-in is F and the total input size is Ni?
  – C(ki, Ha) ways of choosing active units to be hits
  – C(Ni−ki, F−Ha) ways of choosing inactive units for the remaining connections
  – C(Ni, F) ways of sampling F inputs from a population of size Ni

    P(Ha | ki, Ni, F) = C(ki, Ha) · C(Ni−ki, F−Ha) / C(Ni, F)
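This distribution is easy to check numerically with Python's math.comb; the sizes below are toy values, far smaller than the real network:

```python
from math import comb

def hit_pmf(Ni, ki, F):
    """P(Ha | ki, Ni, F): probability that F connections, sampled without
    replacement from Ni inputs, land on exactly Ha of the ki active units."""
    denom = comb(Ni, F)
    lo, hi = max(0, F - (Ni - ki)), min(ki, F)
    return {Ha: comb(ki, Ha) * comb(Ni - ki, F - Ha) / denom
            for Ha in range(lo, hi + 1)}

pmf = hit_pmf(Ni=100, ki=10, F=20)            # illustrative sizes
assert abs(sum(pmf.values()) - 1.0) < 1e-12   # a proper distribution
mean = sum(Ha * p for Ha, p in pmf.items())
print(round(mean, 6))                         # 2.0, matching alpha_i * F = (10/100) * 20
```

The computed mean agrees with the ⟨Ha⟩ = αi·F formula above.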
● To achieve a desired output activity level αo, set the firing threshold t so that the expected fraction of output units with Ha ≥ t equals αo:

    αo = Σ over Ha = t to min(ki, F) of P(Ha)
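A sketch of that threshold search, reusing the hypergeometric P(Ha); the function name and sizes are illustrative, not from the lecture:

```python
from math import comb

def kwta_threshold(Ni, ki, F, alpha_o):
    """Smallest integer threshold t such that the expected fraction of output
    units with Ha >= t (the tail of the hit distribution) is at most alpha_o."""
    denom = comb(Ni, F)
    hi = min(ki, F)
    pmf = [comb(ki, h) * comb(Ni - ki, F - h) / denom for h in range(hi + 1)]
    for t in range(hi + 2):
        if sum(pmf[t:]) <= alpha_o:   # tail probability at or below target
            return t

print(kwta_threshold(Ni=100, ki=10, F=20, alpha_o=0.05))
```

Because Ha is an integer, the achievable αo values are discrete; the search returns the smallest threshold whose tail does not exceed the target.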
● To measure separation in the two-layer model, consider two patterns A and B.
  – Measure the input overlap Ωi = number of units in common.
  – Compute the expected output overlap Ωo as a function of Ωi.
● The key quantity is the number of hits an output unit receives for pattern B given that it is already known to be part of the representation for pattern A.
● If the patterns were unrelated, the hits would be independent, and Hab would be distributed like Ha.
● As input overlap grows, the distribution shifts right (more hits expected) and narrows: output overlap increases.
● The relationship between Ωi and Ωo is nonlinear.
[Figure: (a) hits from pattern A; (b) Hab = overlap of the A and B hits]
● Distribution of B's hits given Ha and the input overlap Ωi. Split B's hits into hits on connections that were also hits for A (Hab) and hits on the remaining connections (Hāb); note Hb = Hab + Hāb.

    Pb(Hab, Hāb | Ha, Ωi) = [ C(Ha, Hab) · C(ki−Ha, Ωi−Hab) / C(ki, Ωi) ]
                          × [ C(F−Ha, Hāb) · C(Ni−ki−(F−Ha), (ki−Ωi)−Hāb) / C(Ni−ki, ki−Ωi) ]

  – First factor: # of ways of achieving Hab hits within the overlap Ωi among A's Ha hit connections.
  – Second factor: # of ways of achieving Hāb hits among the F−Ha non-hit connections, drawn from the Ni−ki units inactive in A, of which ki−Ωi are active in B.
● To calculate P(Hb) we must sum Pb over the ways of splitting Hb = Hab + Hāb.
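The output overlap these distributions imply can also be estimated by direct simulation. Below is a Monte Carlo sketch of a two-layer k-WTA net with toy sizes; the function and parameters are illustrative, and the lecture's curves come from the analytic formula instead:

```python
import random

def output_overlap(Ni, ki, No, ko, F, omega_i, trials=200, seed=1):
    """Monte Carlo estimate of the expected output overlap Omega_o between
    two input patterns sharing omega_i active units, in a two-layer k-WTA
    network whose output units each sample F random inputs."""
    rng = random.Random(seed)
    A = set(range(ki))                              # pattern A's active units
    B = set(range(ki - omega_i, 2 * ki - omega_i))  # shares omega_i units with A
    total = 0
    for _ in range(trials):
        fan = [rng.sample(range(Ni), F) for _ in range(No)]

        def winners(pattern):
            hits = [sum(j in pattern for j in f) for f in fan]
            jitter = [rng.random() for _ in range(No)]   # random tie-breaking
            order = sorted(range(No), key=lambda u: (-hits[u], jitter[u]))
            return set(order[:ko])

        total += len(winners(A) & winners(B))
    return total / trials

# Fractional output overlap Omega_o / ko at matched input overlap (6 of 12 units):
dense  = output_overlap(Ni=60, ki=12, No=60, ko=18, F=15, omega_i=6)
sparse = output_overlap(Ni=60, ki=12, No=60, ko=4,  F=15, omega_i=6)
print(round(dense / 18, 2), round(sparse / 4, 2))
```

Running this with a sparser output layer (smaller ko) illustrates the slides' point that sparse patterns yield greater separation.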
McClelland chose numbers close to the biology but tailored to avoid round-off problems in the overlap formula.
Pattern separation performance of a generic network with activity levels comparable to EC, CA3, or DG. Sparse patterns yield greater separation.
● Mossy fiber synapses may have higher strength. Let M = mossy fiber strength.
● Separation is better in CA3 with DG than in CA3 w/o DG.
● For M ≥ 15, the mossy fiber projection alone is as good as DG+EC.
● Separation depends on the tail of the hit distribution.
  – DG hit distribution has std. dev. of 0.76.
  – EC hit distribution has std. dev. of 15.
  – Setting M = 20 would balance the effects of the two projections.
● Performance with both inputs lies between the M=0 line (EC only) and the “M only” line.
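The M = 20 figure can be read off as the ratio of the two spreads: scaling the DG hit distribution by M multiplies its standard deviation by M, and M · 0.76 matches the EC value of 15 when M ≈ 20:

```python
sigma_dg = 0.76   # std. dev. of the DG (mossy fiber) hit distribution (from the slide)
sigma_ec = 15.0   # std. dev. of the EC (perforant path) hit distribution
M_balance = sigma_ec / sigma_dg
print(round(M_balance, 1))   # 19.7, so M ~ 20 balances the two projections
```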
● Less separation between A and subset(A) than between patterns A and B, because there are no noise inputs. But Ωo is still less than Ωi.
● Two learning rules were tried:
  – WI: Weight Increase (like Marr)
  – WID: Weight Increase/Decrease, which additionally decreases weights to the F−Ha non-hit inputs exponentially, multiplying by (1−Lrate).
● Learned weights support pattern completion.
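A sketch of one update step for a winning unit. The multiplicative (1 − Lrate) decrease is from the slide; the exact form of the weight increase (moving each hit weight toward 1 at rate Lrate) is an assumption for illustration:

```python
def update_unit(w, fan_in, active, lrate=0.1, rule="WID"):
    """One learning step for a winning output unit.
    w: dict input index -> weight; fan_in: the unit's F input indices;
    active: set of active input units (the unit's hits)."""
    for j in fan_in:
        if j in active:
            w[j] += lrate * (1.0 - w[j])   # strengthen hit connections (assumed form)
        elif rule == "WID":
            w[j] *= 1.0 - lrate            # exponentially decrease non-hit weights
    return w

w = update_unit({0: 0.5, 1: 0.5, 2: 0.5}, fan_in=[0, 1, 2], active={0})
print({j: round(v, 3) for j, v in w.items()})   # {0: 0.55, 1: 0.45, 2: 0.45}
```

Under WI the non-hit weights are left unchanged; under WID they decay toward zero, which sharpens completion but, as noted below, reduces separation.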
[Figure: percent of possible improvement, no learning (learning rate = 0) vs. learning rate = 0.1]
[Figure: a “sweet spot” in the learning rate]
● (Cf. Buckingham's comparison of Marr models.)
  – With noisy cues, completion produces a somewhat noisy result, which would lead to further separation at the next stage.
  – Perhaps partial EC inputs aren't strong enough to drive DG.
  – Learning reduces pattern separation. Real mossy fibers undergo LTP, but it's not NMDA-dependent (so non-Hebbian).
  – There is an optimal trade-off between separation and completion.
highly activated by EC input.
representations of stored patterns.
– So don't use it. Use MSEPO or FMSEPO.
separation and completion can be accomplished in the same architecture.
and connections.
between separation and completion.
completion.
and EC→DG→CA3 connections.
– Store A, measure overlap with B.
– No attempt to measure memory capacity.
Measured by IEG (Immediate Early Genes): Arc/H1a catFISH method
Expose rats to two environments 30 minutes apart. Environments can be (i) identical, (ii) similar but with changes to local or distal cues, or (iii) completely different.
Guzowski, Knierim, and Moser (2004)
[Diagram: medial septum projects ACh through the fimbria/fornix to EC, CA3, and CA1; pp = perforant path, Sch = Schaffer collaterals]
● Acetylcholine reduces synaptic efficacy (preventing CA3 from altering the CA1 pattern) and enhances synaptic plasticity.
slightly, plus additional foils. Asked for an unrelated judgment about each image (indoor vs. outdoor object).
previously seen object, (iii) slightly different version of a previously seen object: a lure.
areas.