Dynamics of pruning in simulated large-scale spiking neural networks - - PowerPoint PPT Presentation



SLIDE 1

Dynamics of pruning in simulated large-scale spiking neural networks

Javier Iglesias1,2,3

joint work with J. Eriksson2, F. Grize1, M. Tomassini1, A.E.P. Villa2,3

Nonlinear Dynamics and Noise in Biological Systems Workshop: Torino, 2004-04-19

1: Information Management Department, University of Lausanne, Switzerland 2: Laboratory of Neuro-heuristics, University of Lausanne, Switzerland 3: Laboratory of Neurobiophysics, University Joseph-Fourier, France

<javier.iglesias@hec.unil.ch>

SLIDE 2

description of the experiment

  • model synaptic pruning after the over-growth observed during brain maturation
  • size: 100 × 100 2D lattice, torus wrapped
  • duration: 1 · 10⁶ time steps (ms)
  • compatible with hardware implementation
  • Iglesias, J., Eriksson, J., Grize, F., Tomassini, M., Villa, A.E.P., submitted. Dynamics of pruning in simulated large-scale spiking neural networks. BioSystems.

SLIDE 3

leaky integrate and fire neuro-mimetic model

[figure: one unit receiving ~250 excitatory and ~100 inhibitory afferents; state variables V(t), S(t), B(t), w(t)]

Type I = excitatory (80%), Type II = inhibitory (20%)
Vrest = −76 [mV], θi = −40 [mV], τmem = 8 [ms], trefract = 1 [ms], λi = 10 [spikes/s], n = 50

Vi(t+1) = Vrest[q] + (1 − Si(t)) · (Vi(t) − Vrest[q]) · kmem[q] + Σj wji(t) + Bi(t)
Si(t) = H(Vi(t) − θ[qi])
wji(t+1) = Sj(t) · Aji(t) · P[qj,qi]
Bi(t+1) = Preject(λ[qi]) · n · P[q1,qi]
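The membrane update above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the per-step decay factor kmem = exp(−dt/τmem) with dt = 1 ms is an assumption (the slide gives only τmem = 8 ms), and the synaptic and background inputs are passed in as plain numbers.

```python
import math

# Parameters from the slide (units: mV, ms)
V_REST = -76.0                       # resting potential V_rest
THETA = -40.0                        # firing threshold θ
TAU_MEM = 8.0                        # membrane time constant τ_mem
K_MEM = math.exp(-1.0 / TAU_MEM)     # assumed k_mem = exp(-dt/τ_mem), dt = 1 ms

def lif_step(v, s_prev, synaptic_input, background):
    """One discrete-time update of a leaky integrate-and-fire unit.

    v              : membrane potential V_i(t)
    s_prev         : spike state S_i(t) in {0, 1}; a spike drops the leak term,
                     resetting the potential toward V_rest
    synaptic_input : Σ_j w_ji(t), summed incoming weights this step
    background     : B_i(t), background input
    Returns (V_i(t+1), S_i(t+1)).
    """
    v_next = V_REST + (1 - s_prev) * (v - V_REST) * K_MEM + synaptic_input + background
    s_next = 1 if v_next >= THETA else 0   # Heaviside H(V − θ)
    return v_next, s_next

# a quiet unit decays toward rest; a large input drives a spike
v1, s1 = lif_step(-50.0, 0, 0.0, 0.0)
v2, s2 = lif_step(-50.0, 0, 40.0, 0.0)
```

Note how the (1 − S) factor implements the post-spike reset: when s_prev = 1, the decayed distance-from-rest term vanishes and the unit restarts from V_rest plus any new input.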

SLIDE 4

digression: random number generators

acceptance/rejection Poisson process of λ = 10 spikes/s, n = 10⁷

[figure: spike-count traces comparing the default C RNG implementation (GNU/Linux, MacOS X, ...) with the default GSL (GNU Scientific Library) RNG implementation]
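The acceptance/rejection scheme itself is simple enough to sketch; the slide's point is that the quality of the underlying uniform generator matters. Below, Python's Mersenne Twister stands in for the GSL generator (an assumption; the slide does not name Python), and a per-ms acceptance probability λ·dt is used.

```python
import random

def poisson_spike_train(rate_hz, n_steps, rng, dt_ms=1.0):
    """Acceptance/rejection Poisson process: at each time step, draw a
    uniform number and accept a spike with probability rate * dt."""
    p = rate_hz * dt_ms / 1000.0   # spike probability per 1-ms step
    return [1 if rng.random() < p else 0 for _ in range(n_steps)]

rng = random.Random(42)
train = poisson_spike_train(10.0, 100_000, rng)   # λ = 10 spikes/s
rate = sum(train) / (len(train) / 1000.0)         # empirical rate, spikes/s
```

With a good generator the empirical rate converges to λ; a biased generator shows up as structure in the inter-spike intervals rather than in the mean rate alone.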

SLIDE 5

STDP - spike timing dependent plasticity

wji(t + 1) = Sj(t) · Aji(t) · P[qj,qi]

[figure: STDP curve — weight change as a function of the time difference between presynaptic (j) and postsynaptic (i) spikes, with LTP and LTD regions]

Aji(t) ∈ {0, 1, 2, 4} for P[1,1]; Aji(t) = 1 for the others
LTP: Long Term Potentiation; LTD: Long Term Depression
P[1,1] = P[1,2] = +1.34 [mV]
P[2,1] = P[2,2] = −2.40 [mV]
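A minimal sketch of how a discrete efficacy could move between the four levels {0, 1, 2, 4} under LTP/LTD events. The step-by-one-level rule and the absorbing state at 0 are illustrative assumptions here; the actual transition rule driven by the activation variable L is given on the next slide.

```python
# Discrete activation levels for excitatory-excitatory synapses (slide: A ∈ {0, 1, 2, 4});
# moving one level per LTP/LTD event is an illustrative assumption, not the paper's rule.
LEVELS = [0, 1, 2, 4]

def step_level(a, event):
    """Move A_ji one level up on 'LTP' or down on 'LTD'.
    A = 0 is absorbing: a pruned synapse never recovers."""
    if a == 0:
        return 0
    i = LEVELS.index(a)
    if event == "LTP":
        return LEVELS[min(i + 1, len(LEVELS) - 1)]
    return LEVELS[i - 1]

# effective weight delivered on a presynaptic spike: w = A_ji · P[q_j, q_i]
P_EXC = 1.34                        # [mV], P[1,1] = P[1,2]
w = step_level(2, "LTP") * P_EXC    # potentiated to level 4
```

The key design point is that pruning falls out of the plasticity rule for free: once depression drives A_ji to 0, the synapse transmits nothing and is effectively removed.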

SLIDE 6

pruning dynamics

[figure: time course of the activation variable Lji across thresholds L0–L4 (panels a–c), with the corresponding synaptic levels A4–A1]

Lji(t + 1) = kact · Lji(t) + (Si(t) · Mj(t)) − (Sj(t) · Mi(t))
Lji ∈ ]0, Lmax], τact = 11000 [ms]

Mi(t + 1) = Si(t) · Mmax + (1 − Si(t)) · (Mi(t) · klearn)
Lmax = 10 · Mmax, τlearn = 2 · τmem
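The two update rules above translate directly into code. A hedged sketch, assuming decay factors of the form k = exp(−dt/τ) with dt = 1 ms and an arbitrary Mmax = 1 (the slide fixes only the ratio Lmax = 10 · Mmax):

```python
import math

TAU_MEM = 8.0
TAU_ACT = 11_000.0                   # [ms], decay of activation variable L
TAU_LEARN = 2.0 * TAU_MEM            # [ms], slide: τ_learn = 2 · τ_mem
K_ACT = math.exp(-1.0 / TAU_ACT)     # assumed k = exp(-dt/τ), dt = 1 ms
K_LEARN = math.exp(-1.0 / TAU_LEARN)
M_MAX = 1.0                          # trace ceiling (value assumed)
L_MAX = 10.0 * M_MAX                 # slide: L_max = 10 · M_max

def update_trace(m, s):
    """M_i(t+1): reset to M_max on a spike (S=1), otherwise decay."""
    return s * M_MAX + (1 - s) * m * K_LEARN

def update_activation(l, s_pre, s_post, m_pre, m_post):
    """L_ji(t+1) = k_act·L_ji + S_i·M_j − S_j·M_i, kept inside ]0, L_max].
    Post-spike shortly after pre-activity (S_i·M_j) potentiates;
    pre-spike while the post trace is high (S_j·M_i) depresses."""
    l_next = K_ACT * l + s_post * m_pre - s_pre * m_post
    return max(min(l_next, L_MAX), 1e-12)   # clip to the half-open interval

m_pre = update_trace(0.0, 1)                          # presynaptic spike primes M_j
l_up = update_activation(1.0, 0, 1, m_pre, 0.0)       # post fires after pre: L grows
l_down = update_activation(1.0, 1, 0, 0.0, m_pre)     # pre fires after post: L shrinks
```

Crossing the thresholds L1–L4 in the figure would then move the synapse between the discrete levels A1–A4 of the previous slide.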

SLIDE 7

laying out the two unit types


space-filling quasi-random Sobol distribution of 20% of inhibitory neurons on the 100 × 100 2D lattice
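A space-filling quasi-random placement can be sketched as follows. A Halton sequence is used here purely as a simple, self-contained stand-in for the Sobol sequence named on the slide; both are low-discrepancy sequences that spread points more evenly than uniform random draws.

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += (index % base) * f
        index //= base
        f /= base
    return result

def place_inhibitory(lattice=100, fraction=0.2):
    """Assign a space-filling quasi-random subset of lattice sites as inhibitory.
    Bases 2 and 3 give a 2-D low-discrepancy point set; duplicate grid cells
    are skipped until the target count is reached."""
    n_inhib = int(lattice * lattice * fraction)
    sites, k = set(), 1
    while len(sites) < n_inhib:
        x = int(halton(k, 2) * lattice)
        y = int(halton(k, 3) * lattice)
        sites.add((x, y))
        k += 1
    return sites

inhib = place_inhibitory()   # 2000 inhibitory sites on the 100 × 100 lattice
```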

SLIDE 8

random local connectivity

[figure: random local connectivity on the 100 × 100 lattice (panels a–h) — connection probability (0.0–0.6) as a function of x/y offset (−50 to +50) for excitatory (e) and inhibitory (i) projections, with cell-count and connection-count histograms]
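Distance-dependent connectivity on a torus-wrapped lattice can be sketched like this. The Gaussian fall-off and the values p0 = 0.6 and σ = 10 are assumptions chosen to echo the 0.0–0.6 probability axis in the figure; the slide gives only the qualitative shape.

```python
import math
import random

def torus_delta(a, b, size=100):
    """Shortest signed displacement from a to b on a wrapped axis."""
    d = (b - a) % size
    return d - size if d > size // 2 else d

def connect_prob(src, dst, p0=0.6, sigma=10.0, size=100):
    """Connection probability falling off with torus distance
    (Gaussian profile and parameter values are illustrative assumptions)."""
    dx = torus_delta(src[0], dst[0], size)
    dy = torus_delta(src[1], dst[1], size)
    return p0 * math.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))

rng = random.Random(0)
# sample the outgoing projections of one unit at the lattice centre
targets = [(x, y) for x in range(100) for y in range(100)
           if (x, y) != (50, 50) and rng.random() < connect_prob((50, 50), (x, y))]
```

The torus wrap means a unit at the edge sees the same local neighbourhood as one in the centre, which is what makes the lattice boundary-free.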

SLIDE 9

result 1: effect of random generator seed

[figure: distributions of the percentage of [A4] synapses for two conditions R1 and R2, n = 100]

Same simulation settings, different random generator seed: variation.
Same network, different random generator seed: small variation (not shown).

SLIDE 10

result 2: no change in preferential direction or length

[figure: projection fields (y vs. x, −50 to +50) at t = 1·10⁵, 2·10⁵ and 8·10⁵ (panels a–d), with ratio-vs-distance plots (distances 25, 50, 71): the ratio stays near 1.0, i.e. pruning does not alter the preferential direction or length of projections]

SLIDE 11

result 3: effect of size of network

[figure: active synapses [A4] (%) and tmax[A4] (·10⁴) as a function of network size (1–10), with tsteady indicated]

SLIDE 12

discussion

  • many oversimplified hypotheses
  • bimodal distribution of activation levels at steady state
  • no distortion of geometrical properties induced by pruning
  • try other synaptic transfer functions
  • use more realistic transfer functions for other projection types
  • add content-related inputs