

SLIDE 1


Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. 2011-XXXXP

Hippocampus-inspired Adaptive Neural Algorithms

Brad Aimone Center for Computing Research Sandia National Laboratories; Albuquerque, NM

SLIDE 2

Can neural computing provide the next Moore’s Law?

SLIDE 3

Moore’s Law was based on scientific discovery and successive innovations

Adapted from Wikipedia

SLIDE 4

Each successive advance made more computing feasible

Adapted from Wikipedia

SLIDE 5

Better devices made better computers, which allowed engineering new devices…

Images from Wikipedia

Circa 1980

SLIDE 6

Better devices made better computers, which allowed engineering new devices…

Images from Wikipedia

Circa 2017

SLIDE 7

If we extrapolate capabilities out, it is not obvious that better devices are the answer…

[Chart: computing intelligence per second per $ over time, conventional computing vs. ANNs, projected to 2020; marks when deep nets became efficient. What comes next: devices, or neural knowledge?]

SLIDE 8

The cycle of computing scaling has already begun to influence neuroscience

SLIDE 9

Even if Moore’s Law ends, computing will continue to scale to be smarter

[Timeline: two scaling eras, 1950–2020 and 2010–???]

SLIDE 10

The reservoir of known neuroscience untapped for computing inspiration is enormous

James et al., BICA 2017

SLIDE 11

The brain has many mechanisms for adaptation; which are important for computing?

Current hardware focuses on synaptic plasticity, if anything

SLIDE 12

There are different algorithmic approaches to neural learning

  • In situ adaptation: incorporate “new” forms of known neural plasticity into existing algorithms
  • Ex situ adaptation: design entirely new algorithms or algorithmic modules to provide cognitive learning abilities

SLIDE 13

Neurogenesis Deep Learning

SLIDE 14

Deep Networks are a function of training sets

[Figure: sample digit images, e.g., 1 and 7]

SLIDE 15

Deep Networks are a function of training sets


SLIDE 16

Deep networks often struggle to generalize outside of the training domain

SLIDE 17

Neurogenesis can be used to capture new information without disrupting old information

  • Brain incorporates new neurons in a select number of regions
  • Particularly critical for novelty detection and encoding of new information
  • “Young” hippocampal neurons exhibit increased plasticity (learn more) and are dynamic in their representations
  • “Old” hippocampal neurons appear to have reduced learning and maintain their representations
  • Cortex does not have neurogenesis (or similar mechanisms) in adulthood, but does during development

Aimone et al., Neuron 2011

SLIDE 18

Neurogenesis can be used to capture new information without disrupting old information

  • Brain incorporates new neurons in a select number of regions
  • Hypothesis: Can new neurons be used to facilitate adapting deep learning?

SLIDE 19

Neurogenesis can be used to capture new information without disrupting old information

  • Brain incorporates new neurons in a select number of regions
  • Hypothesis: Can new neurons be used to facilitate adapting deep learning?
  • Neurogenesis Deep Learning Algorithm
  • Stage 1: Check autoencoder reconstruction to ensure appropriate representations
SLIDE 20

Neurogenesis can be used to capture new information without disrupting old information

  • Brain incorporates new neurons in a select number of regions
  • Hypothesis: Can new neurons be used to facilitate adapting deep learning?
  • Neurogenesis Deep Learning Algorithm
  • Stage 1: Check autoencoder reconstruction to ensure appropriate representations
  • Stage 2: If mismatch, add and train new neurons
  • Train new nodes with novel inputs coming in (reduced learning for existing nodes)
SLIDE 21

Neurogenesis can be used to capture new information without disrupting old information

  • Brain incorporates new neurons in a select number of regions
  • Hypothesis: Can new neurons be used to facilitate adapting deep learning?
  • Neurogenesis Deep Learning Algorithm (a minimal code sketch follows below)
  • Stage 1: Check autoencoder reconstruction to ensure appropriate representations
  • Stage 2: If mismatch, add and train new neurons
  • Train new nodes with novel inputs coming in (reduced learning for existing nodes)
  • Intrinsically replay “imagined” training samples from top-level statistics to fine-tune representations
  • Stage 3: Repeat neurogenesis until reconstructions drop below error thresholds
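The three stages above can be condensed into a toy implementation. The sketch below is a minimal single-hidden-layer, tied-weight autoencoder in NumPy, not the stacked-autoencoder system of the actual NDL work; the names (GrowingAutoencoder, ndl_update), the error threshold, and the learning rates are illustrative assumptions, and the intrinsic replay of “imagined” samples is approximated by retraining on whatever stand-in replay set the caller supplies.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowingAutoencoder:
    """Single hidden layer, tied weights; hidden units can be added."""

    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.young = np.zeros(n_hidden, dtype=bool)  # flags newborn units

    def reconstruct(self, X):
        return sigmoid(X @ self.W) @ self.W.T

    def error(self, X):
        return float(np.mean((X - self.reconstruct(X)) ** 2))

    def add_units(self, k):
        self.W = np.hstack([self.W, rng.normal(0.0, 0.1, (self.W.shape[0], k))])
        self.young = np.concatenate([self.young, np.ones(k, dtype=bool)])

    def train(self, X, epochs=200, lr_young=0.5, lr_old=0.05):
        # Young units learn quickly; mature units keep a reduced rate.
        lr = np.where(self.young, lr_young, lr_old)
        for _ in range(epochs):
            H = sigmoid(X @ self.W)           # hidden code
            E = H @ self.W.T - X              # reconstruction error
            dH = (E @ self.W) * H * (1.0 - H)
            grad = X.T @ dH + E.T @ H         # tied-weight MSE gradient
            self.W -= (lr / len(X)) * grad

def ndl_update(ae, X_novel, X_replay, err_thresh=0.02, grow=8, max_rounds=20):
    for _ in range(max_rounds):
        if ae.error(X_novel) <= err_thresh:  # Stage 1: representations OK?
            break
        ae.add_units(grow)                   # Stage 2: neurogenesis
        ae.train(X_novel)                    # new units absorb novel inputs
        ae.train(X_replay, lr_young=0.1)     # replay protects old memories
        ae.young[:] = False                  # newborns mature after this round
    return ae                                # Stage 3: loop until below threshold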
SLIDE 22

Neurogenesis algorithm effectively balances stability and plasticity

SLIDE 23

Neurogenesis algorithm effectively balances stability and plasticity

SLIDE 24

NDL applied to the MNIST data set

SLIDE 25

A New View of the Hippocampus

SLIDE 26

Deep learning ≈ Cortex. What ≈ Hippocampus?

SLIDE 27

Can a new framework for studying the hippocampus help inspire computing?

  • Desired functions
  • Learn associations between cortical modalities
  • Encoding of temporal, contextual, and spatial information into associations
  • Ability for “one-shot” learning
  • Cue-based retrieval of information
  • Desired properties
  • Compatible with spiking representations
  • Network must be stable with adaptation
  • Capacity should scale nicely
  • Biologically plausible in context of the extensive hippocampus literature
  • Ability to formally quantify costs and performance
  • This requires a new model of CA3

[Diagram: Entorhinal Cortex → Dentate Gyrus → CA3 → CA1]

SLIDE 28

Formal model of DG provides lossless encoding of cortical inputs

  • Constraining EC inputs to have “grid cell” structure sets DG size to the biological level of expansion (~10:1)
  • A mixed code of broadly tuned (immature) neurons and narrowly tuned (mature) neurons confirms the predicted ability to encode novel information (see the toy sketch below)

William Severa, NICE 2016; Severa et al., Neural Computation, 2017
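As a rough illustration of the expansion idea (not the formal construction of Severa et al.), the toy below projects EC-like inputs into a ~10:1 larger binary DG code through a mix of narrowly tuned, high-threshold “mature” units and broadly tuned, low-threshold “immature” units, then fits a linear decoder to check how much input information the code preserves. The sizes, thresholds, and random projection are all assumptions, and grid-cell structure on the inputs is not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

n_ec = 50            # EC input dimension
n_dg = 10 * n_ec     # ~10:1 biological expansion

# Random EC -> DG projection with mixed tuning: most units "mature"
# (narrow tuning via a high threshold), the rest "immature" (broad, low threshold).
W = rng.normal(size=(n_dg, n_ec))
mature = rng.random(n_dg) < 0.8
theta = np.where(mature, 8.0, 2.0)   # dot products have std ~ sqrt(n_ec) ~ 7

# Encode a batch of toy EC patterns as sparse binary DG codes.
X = rng.normal(size=(2000, n_ec))
C = (X @ W.T > theta).astype(float)  # one binary DG code per row

# How much EC information does the code keep? Fit a linear decoder.
D, *_ = np.linalg.lstsq(C, X, rcond=None)
rel_err = np.mean((C @ D - X) ** 2) / np.mean(X ** 2)
print(f"DG sparsity: {C.mean():.2f}, relative decode error: {rel_err:.3f}")
```

With enough units the thresholded code stays linearly decodable, which is the sense in which an expansion layer can keep the encoding near-lossless.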

SLIDE 29

Classic model of CA3 uses Hopfield-like recurrent network attractors

Problems

  • “Auto-associative” attractors make more sense in a frequency-coding regime than in spiking networks
  • Capacity of classic Hopfield networks is generally low
  • Quite difficult to perform stable one-shot updates to recurrent networks

Deng, Aimone, Gage, Nat Rev Neuro 2010

SLIDE 30

Moving away from the Hopfield “learned auto-association” function for CA3

Hopfield dynamics are discrete state transitions


Hillar and Tran, 2014
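For readers who have not seen one, a classic Hopfield network makes both earlier complaints concrete: recall is a walk through discrete state transitions into a fixed-point attractor, and Hebbian outer-product storage caps capacity at roughly 0.14·N patterns. This is a generic textbook sketch, not the Hillar and Tran analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

n, p = 100, 8
patterns = rng.choice([-1, 1], size=(p, n))

# Hebbian outer-product storage; capacity is only ~0.14 * n patterns.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def recall(x, sweeps=5):
    # Discrete state transitions: asynchronous sign updates are guaranteed
    # to settle into a fixed-point attractor (a local energy minimum).
    x = x.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

# Cue with a corrupted copy of a stored pattern; recall cleans it up.
cue = patterns[0].copy()
cue[rng.choice(n, size=15, replace=False)] *= -1
print("overlap with stored pattern:", recall(cue) @ patterns[0] / n)
```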

SLIDE 31

Spiking dynamics are inconsistent with fixed point attractors in associative models

Does biology instead use a sequence of spiking neurons?

[Figure: discrete Hopfield state transitions vs. a spike sequence, over time]

SLIDE 32

Spiking dynamics are inconsistent with fixed point attractors in associative models


One can see how sequences can replace fixed populations

SLIDE 33

Path attractors, such as orbits, are consistent with spiking dynamics
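A minimal caricature of a path attractor, assuming nothing beyond threshold units: wire neurons in a ring so that the network's stable object is a repeating firing sequence (a periodic orbit) rather than a fixed point. The proposed CA3 model involves far richer spiking dynamics; this sketch only shows the orbit idea in its simplest form.

```python
import numpy as np

n = 12

# Ring connectivity: each neuron excites its successor, so activity
# settles into a repeating sequence (a periodic orbit), not a fixed point.
W = np.zeros((n, n))
for i in range(n):
    W[(i + 1) % n, i] = 1.0

state = np.zeros(n)
state[0] = 1.0                               # kick one neuron to start

for t in range(2 * n):                       # two laps around the orbit
    state = (W @ state > 0.5).astype(float)  # threshold "spiking" update
    print(f"t={t:2d}  active neuron: {int(np.argmax(state))}")
```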

SLIDE 34

A new dynamical model of CA3

Problems

  • “Auto-associative” attractors make more sense in a frequency-coding regime than in spiking networks
  • Capacity of classic Hopfield networks is generally low
  • Quite difficult to perform stable one-shot updates to recurrent networks

Orbits of Spiking Neurons

SLIDE 35

Neuromodulation can shift dynamics of recurrent networks


Carlson, Warrender, Severa and Aimone; in preparation

SLIDE 36

Cortex and subcortical inputs can modulate CA3 attractor access

  • Modulation can be provided mechanistically by several sources
  • Spatial distribution of CA3 synaptic inputs suggests EC inputs could be considered modulatory
  • Metabotropic modulators (e.g., serotonin, acetylcholine) can bias neuronal timings and thresholds
  • An attractor network can thus have many “memories”, but only a fraction are accessible within each context (see the sketch below)
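To make the last bullet concrete, the toy below uses a Hopfield-style network as a stand-in for the spiking orbit model: a weak modulatory field representing context biases which of two stored memories an ambiguous cue falls into, so each context exposes only a fraction of the stored attractors. The network size, field strength beta, and the use of a memory itself as the context signal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
mem_a = rng.choice([-1, 1], size=n)   # memory tagged to context A
mem_b = rng.choice([-1, 1], size=n)   # memory tagged to context B

W = (np.outer(mem_a, mem_a) + np.outer(mem_b, mem_b)) / n
np.fill_diagonal(W, 0)

def recall(x, context, beta=0.3, sweeps=5):
    # The modulatory field (EC input / neuromodulation, in the slide's terms)
    # enters as a weak bias on each neuron's threshold.
    x = x.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            x[i] = 1 if W[i] @ x + beta * context[i] >= 0 else -1
    return x

# An ambiguous cue overlapping both memories about equally.
cue = np.sign(mem_a + mem_b + 0.1 * rng.normal(size=n)).astype(int)
for name, ctx in [("context A", mem_a), ("context B", mem_b)]:
    out = recall(cue, ctx)
    print(f"{name}: overlap A = {out @ mem_a / n:+.2f}, "
          f"overlap B = {out @ mem_b / n:+.2f}")
```

The same cue lands in different attractors depending on the context field, which is the gating effect the slide describes.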

SLIDE 37

A new modulated, dynamical model of CA3

Problems

  • “Auto-associative” attractors make more sense in a frequency-coding regime than in spiking networks
  • Capacity of classic Hopfield networks is generally low
  • Quite difficult to perform stable one-shot updates to recurrent networks

Orbits of Spiking Neurons; Context modulation

SLIDE 38

CA1 encoding can integrate cortical input with transformed DG/CA3 input

  • CA1 plasticity is dramatic
  • Synapses appear to be structurally volatile
  • Representations are temporally volatile
  • Consistent with one-shot learning
  • Can consider EC-CA1-EC loosely as an autoencoder, with DG/CA3 “guiding” what representation is used within CA1

[Figure: current state, average state of the CA3 orbit, and the combined representation across time]

SLIDE 39

A new modulated, dynamical model of CA3

Problems

  • “Auto-associative” attractors make more sense in a frequency-coding regime than in spiking networks
  • Capacity of classic Hopfield networks is generally low
  • Quite difficult to perform stable one-shot updates to recurrent networks

Orbits of Spiking Neurons; Context modulation; Schaffer Collateral (CA3→CA1) Learning

SLIDE 40

Thanks!

HAANA Grand Challenge LDRD; DOE NNSA Advanced Simulation and Computing Program

Neurogenesis Deep Learning: Tim Draelos, Nadine Miner, Chris Lamb, Jonathan Cox, Craig Vineyard, Kris Carlson, William Severa, and Conrad James

Hippocampus Algorithm: Kris Carlson, William Severa, Ojas Parekh, Frances Chance, and Craig Vineyard