SLIDE 1

Introduction

  • The goal of neuromorphic engineering is to design and implement microelectronic systems that emulate the structure and function of the brain.

  • Address-event representation (AER) is a communication protocol originally proposed as a means to communicate sparse neural events between neuromorphic chips.

  • Previous work has shown that AER can also be used to construct large-scale networks with arbitrary, configurable synaptic connectivity.

  • Here, we further extend the functionality of AER to implement arbitrary, configurable synaptic plasticity in the address domain.

SLIDE 2

Address-Event Representation (AER)

[Figure: sender and receiver chips joined by a shared data bus with REQ/ACK handshake lines. An encoder on the sender serializes spike addresses (1, 2, 3, …) onto the bus over time, and a decoder on the receiver routes each address-event to its destination.]

(Mahowald, 1994; Lazzaro et al., 1993)

  • The AER communication protocol emulates massive connectivity between cells by time-multiplexing many connections on the same data bus.

  • For a one-to-one connection topology, the required number of wires is reduced from N to ∼log2 N.

  • Each spike is represented by:
      • Its location: explicitly encoded as an address.
      • The time at which it occurs: implicitly encoded.
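
A minimal Python sketch of these two points (illustrative only, not the hardware protocol; the names and values are assumptions): the wire count scales as ∼log2 N because only a binary address crosses the chip boundary, and each event's time is simply the moment its address appears on the bus.

```python
import math

N = 256  # neurons on the sender chip (illustrative)

# One-to-one wiring needs N dedicated lines; an AER bus needs only
# enough lines to carry a binary address (plus REQ/ACK handshaking).
print(math.ceil(math.log2(N)))  # 8 address wires instead of 256

# Spikes from several cells time-multiplexed onto one shared bus. Each
# event carries only an address; its time of occurrence is implicit in
# *when* the address appears on the bus.
bus = []
def send_spike(address):
    bus.append(address)  # REQ/ACK handshake omitted in this sketch

for addr in (1, 2, 3, 3, 1):  # an example spike sequence
    send_spike(addr)
print(bus)  # [1, 2, 3, 3, 1]
```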

SLIDE 3

Learning on Silicon

  • Adaptive hardware systems commonly employ learning circuitry embedded into the individual cells.

  • Executing learning rules locally requires inputs and outputs of the algorithm to be local in both space and time.

  • Implementing learning circuits locally increases the size of repeating units.

  • This approach can be effective for small systems, but it is not efficient when the number of cells increases.

[Figure: a fully connected two-layer network with inputs x1…x4, weights w11…w42, and outputs y1, y2.]

SLIDE 4

Address Domain Learning

  • By performing learning in the address domain, we can:
      • Move learning circuits to the periphery.
      • Create scalable adaptive systems.
      • Maintain the small size of our analog cells.
      • Construct arbitrarily complex and reconfigurable learning rules.

  • Because any measure of cellular activity can be made globally available using AER, many adaptive algorithms based on incremental outer-product computations can be implemented in the address domain (see the sketch below).

  • By implementing learning circuits on the periphery, we reduce restrictions of locality on constituents of the learning rule.

  • Spike timing-based learning rules are particularly well-suited for implementation in the address domain.
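
As a minimal sketch of the outer-product point (not the system's circuitry; η, the sizes, and the event lists are illustrative assumptions): because AER exposes every cell's activity as a stream of addresses, a peripheral processor can accumulate a Hebbian update ∆W = η · y · xᵀ simply by pairing presynaptic and postsynaptic address-events.

```python
import numpy as np

eta = 0.01           # illustrative learning rate
n_in, n_out = 4, 2   # sizes matching the small network on the previous slide

W = np.zeros((n_out, n_in))

# Hypothetical address-events observed on the bus in one update interval.
pre_events = [0, 2, 2, 3]   # presynaptic addresses (x)
post_events = [1, 1]        # postsynaptic addresses (y)

# Incremental outer product: every (post, pre) address pair increments
# the corresponding weight in the look-up table.
for j in post_events:
    for i in pre_events:
        W[j, i] += eta
print(W)
```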

SLIDE 5

Enhanced AER

  • In its original formulation, AER implements a one-to-one connection topology.

  • To create more complex neural circuits, convergent and divergent connections are required.

  • The connectivity of AER systems can be enhanced by routing address-events to multiple receiver locations via a look-up table (Andreou et al., 1997; Deiss et al., 1999; Boahen, 2000; Higgins & Koch, 1999).

  • Continuous-valued synaptic weights can be obtained by manipulating event transmission (Goldberg et al., 2001):

        W = n × p × q

    where W is the synaptic weight, n is the number of spikes sent, p is the probability of transmission, and q is the amplitude of the postsynaptic response.
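
A minimal sketch of how the three factors compose (the function and variable names are assumptions, values illustrative): each presynaptic spike is expanded into n bus events, each is forwarded with probability p, and each delivered event contributes a postsynaptic response of amplitude q.

```python
import random

def postsynaptic_total(spikes, n, p, q):
    """Accumulated postsynaptic response under the W = n x p x q scheme."""
    total = 0.0
    for _ in range(spikes * n):      # n events sent per presynaptic spike
        if random.random() < p:      # probabilistic transmission
            total += q               # amplitude of each postsynaptic response
    return total

random.seed(0)
# Effective weight per spike: W = n * p * q = 4 * 0.5 * 0.1 = 0.2
print(postsynaptic_total(spikes=1000, n=4, p=0.5, q=0.1) / 1000)  # ~0.2
```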

SLIDE 6

Enhanced AER: Example

[Figure: a two-layer network of "Sender" and "Receiver" neurons mapped onto enhanced AER hardware. Events pass through a decoder/encoder pair and an event generator (EG) driving an integrate-and-fire array over REQ/POL lines. The look-up table lists, for each sender address and synapse index, the receiver address, the weight polarity, and the weight magnitude.]

  • A two-layer neural network is mapped to the AER framework by means of a look-up table (LUT).

  • The event generator (EG) sends as many events as are specified in the weight magnitude field of the LUT.

  • The integrate-and-fire array transceiver (IFAT) spatially and temporally integrates events.
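
In software, the routing step might look like the following sketch (a reconstruction from the table fields named above; the LUT entries are made up): each incoming sender address indexes its LUT rows, and the EG replays the receiver address as many times as the weight magnitude specifies, tagged with the weight polarity.

```python
# Hypothetical LUT keyed by sender address; each entry mirrors the fields
# on this slide: (synapse index, receiver address, polarity, magnitude).
lut = {
    1: [(0, 1, +1, 2), (1, 2, -1, 1)],
    2: [(0, 1, +1, 4), (1, 2, +1, 8)],
}

def route(sender_address):
    """Expand one incoming address-event into weighted output events."""
    out = []
    for _, receiver, polarity, magnitude in lut[sender_address]:
        # The EG emits `magnitude` copies of the event at this polarity;
        # the IFAT then integrates them in space and time.
        out.extend([(receiver, polarity)] * magnitude)
    return out

print(route(1))  # [(1, 1), (1, 1), (2, -1)]
```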

SLIDE 7

Architecture: IFAT System

[Figure: block diagram of the IFAT system on a PC board. A microcontroller (MCU) mediates between the RAM-based look-up table (addressed by sender address and synapse index; returning receiver address, weight polarity, and weight magnitude) and the IFAT chip. On-chip input-control and event-scanning blocks implement the row and column handshakes (RREQ/RACK, CREQ/CACK), address lines (AIN, AOUT), event polarity (POL, CPOL), and scanning signals (RSEL, RSCAN, MATCH, SCAN), with INREQ/INACK and OUTREQ/OUTACK handshakes at the system boundary.]

SLIDE 8

Implementation: IFAT System

[Figure: photographs of the fabricated system: the IFAT chip (with row and column decoding, row and column scanning and encoding, and a single IF cell highlighted), alongside the RAM and the MCU.]

SLIDE 9

Spike Timing-Dependent Plasticity

  • In spike timing-dependent plasticity (STDP), changes in synaptic strength depend on the time between each pair of presynaptic and postsynaptic events.

  • The most recent inputs to a postsynaptic cell make larger contributions to its membrane potential than past inputs due to passive leakage currents.

  • Postsynaptic events immediately following incoming presynaptic spikes are considered to be causal and induce weight increments.

  • Presynaptic inputs that arrive shortly after a postsynaptic spike are considered to be anti-causal and induce weight decrements.

[Figure: experimentally measured STDP modification curve, from (Bi & Poo, 1998).]
SLIDE 10

Address Domain STDP: Event Queues

  • To implement our STDP synaptic modification rule in the address domain, we augmented our AER architecture with two event queues, one for presynaptic events and one for postsynaptic events.

  • When an event occurs, its address is entered into the appropriate queue along with an associated value ϕ initialized to τ+ or τ−. This value is decremented over time.

[Figure: snapshots of the presynaptic and postsynaptic event queues over time. Each entry pairs a cell address with its value ϕpre or ϕpost; newer entries hold larger ϕ, and older entries decay toward zero.]

ϕpre(t − tpre) =  τ+ − (t − tpre),   if t − tpre < τ+
                  0,                 if t − tpre ≥ τ+

ϕpost(t − tpost) =  τ− − (t − tpost),   if t − tpost < τ−
                    0,                  if t − tpost ≥ τ−
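
A software sketch of the queue mechanics (the class and parameter names are assumptions; the τ values are illustrative): each event is stamped with its arrival time, and ϕ is recovered as a countdown from τ that clips at zero, matching the piecewise definitions above.

```python
from collections import deque

class EventQueue:
    """Queue of (address, arrival time); phi is derived from elapsed time."""
    def __init__(self, tau):
        self.tau = tau        # tau+ for the presynaptic queue, tau- for the postsynaptic
        self.events = deque()

    def push(self, address, t):
        self.events.append((address, t))

    def phi(self, t_event, t_now):
        # Counts down from tau and clips at zero, as in the piecewise
        # definition above.
        return max(0.0, self.tau - (t_now - t_event))

pre_queue = EventQueue(tau=3.0)   # illustrative tau+
pre_queue.push(address=1, t=0.0)
print(pre_queue.phi(t_event=0.0, t_now=0.9))  # 2.1 (= 3.0 - 0.9)
```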

SLIDE 11

Address Domain STDP: Weight Updates

  • Weight update procedure: for each postsynaptic event, we iterate backwards through the presynaptic queue to find the causal spikes and increment the appropriate weights in the LUT. For each presynaptic event, we iterate backwards through the postsynaptic queue to find the anti-causal spikes and decrement the appropriate weights in the LUT.

[Figure: the two queues during an update. A postsynaptic event triggers increments ∆w weighted by ϕpre within the window τ+ of the presynaptic queue; a presynaptic event triggers decrements weighted by ϕpost within the window τ− of the postsynaptic queue.]

  • The magnitude of each weight update is specified by the value stored in the queue:

∆w =  −η · ϕpost(tpre − tpost),   if 0 ≤ tpre − tpost ≤ τ−
      +η · ϕpre(tpost − tpre),    if −τ+ ≤ tpre − tpost ≤ 0
      0,                          otherwise
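
The same rule as a sketch (η and the τ values are illustrative assumptions). Note how the queues bound the search: pairs older than τ+ or τ− contribute nothing, so the backwards iteration can stop once ϕ reaches zero.

```python
ETA = 0.1   # illustrative learning rate

def delta_w(t_pre, t_post, tau_plus=3.0, tau_minus=6.0):
    """STDP update for one (pre, post) spike pair, per the rule above."""
    dt = t_pre - t_post
    if 0 <= dt <= tau_minus:            # anti-causal: pre arrives after post
        return -ETA * (tau_minus - dt)  # -eta * phi_post(dt)
    if -tau_plus <= dt < 0:             # causal: pre arrives before post
        return +ETA * (tau_plus + dt)   # +eta * phi_pre(-dt)
    return 0.0                          # outside both windows

print(delta_w(t_pre=1.0, t_post=2.0))   # causal pair -> +0.2
print(delta_w(t_pre=2.0, t_post=1.0))   # anti-causal pair -> -0.5
```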

SLIDE 12

Address Domain STDP: Details

[Figure: the synaptic modification curve ∆w(tpre − tpost), with the causal (presynaptic-before-postsynaptic) lobe over [−τ+, 0] and the anti-causal lobe over [0, τ−].]

  • For stable learning, the area under the synaptic modification curve in the anti-causal regime must be greater than that in the causal regime. This ensures convergence of the synaptic strengths (Song et al., 2000).

  • In our implementation of STDP, this constraint is met by setting τ− > τ+.
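
A quick numerical check of why τ− > τ+ satisfies the area constraint (our arithmetic under the triangular ϕ windows defined earlier, with the same illustrative parameters): each lobe of the curve is a triangle of base τ and height η·τ, so the areas are η·τ+²/2 and η·τ−²/2, and the anti-causal area dominates exactly when τ− > τ+.

```python
eta, tau_plus, tau_minus = 0.1, 3.0, 6.0      # illustrative values

causal_area = 0.5 * eta * tau_plus ** 2       # integral of eta * phi_pre
anticausal_area = 0.5 * eta * tau_minus ** 2  # integral of eta * phi_post
assert anticausal_area > causal_area          # stability (Song et al., 2000)
print(causal_area, anticausal_area)           # 0.45 1.8
```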

SLIDE 13

Experiment: Grouping Correlated Inputs

[Figure: twenty input-layer neurons converging on one output neuron y; cells x1…x17 are driven by the uncorrelated group and cells x18…x20 by the correlated group.]

  • Each of the 20 neurons in the input layer is driven by an externally supplied, randomly generated list of events.

  • Our randomly generated list of events simulates two groups of neurons, one correlated and one uncorrelated. The uncorrelated group drives input layer cells x1 . . . x17, and the correlated group drives input layer cells x18 . . . x20.

  • Although each neuron in the input layer has the same average firing rate, neurons x18 . . . x20 fire synchronous spikes more often than any other combination of neurons.
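
The slides do not give the generation procedure, but a plausible sketch (all names and parameters are assumptions) draws part of each correlated cell's spikes from a shared source, so the average rate matches the uncorrelated cells while synchronous firing among x18…x20 is elevated.

```python
import random

random.seed(1)
T, RATE, MIX = 1000, 0.05, 0.5   # timesteps, per-cell rate, correlation mix

events = []                      # (time, cell address) list fed to the input layer
for t in range(T):
    shared = random.random() < RATE          # common source for cells 18-20
    for cell in range(1, 21):
        if cell >= 18:
            # With probability MIX, copy the shared source; otherwise fire
            # independently. Average rate stays MIX*RATE + (1-MIX)*RATE = RATE.
            fired = shared if random.random() < MIX else random.random() < RATE
        else:
            fired = random.random() < RATE   # uncorrelated cells x1..x17
        if fired:
            events.append((t, cell))
```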

SLIDE 14

Experimental Results

[Figure: resulting synaptic weight distributions, shown for a single trial and averaged over 20 trials.]

  • STDP has been shown to be effective at detecting correlations between groups of inputs (Song et al., 2000). We demonstrate that this can be accomplished in hardware in the address domain.

  • Given a random starting distribution of synaptic weights for a set of presynaptic inputs, a neuron using STDP should maximize the weights of correlated inputs and minimize the weights of uncorrelated inputs.

  • Our results illustrate this principle when all synaptic weights are initialized to a uniform value and the network is allowed to process 200,000 input events.

SLIDE 15

Conclusion

  • The address domain provides an efficient representation to implement synaptic plasticity based on the relative timing of events.
      • Learning circuitry can be moved to the periphery.
      • The constituents of learning rules need not be constrained in space or time.

  • We have implemented an address domain learning system using a hybrid analog/digital architecture.

  • Our experimental results illustrate an application of this approach using a temporally-asymmetric Hebbian learning rule.

SLIDE 16

Extensions

  • The mixed-signal approach provides the best of both worlds:
      • Analog cells are capable of efficiently modelling sophisticated neural dynamics in continuous time.
      • Nearest-neighbor connectivity can be incorporated into an address-event framework to exploit the parallel processing capabilities of analog circuits.
      • Storing the connections in a digital LUT affords the opportunity to implement learning rules that reconfigure the network topology on the fly.

  • In the future, we will combine all of the system elements on a single chip. The local embedding of memory will enable high-bandwidth distribution of events.
