


  1. Dynamics of pruning in simulated large-scale spiking neural networks
     Javier Iglesias 1,2,3, joint work with J. Eriksson 2, F. Grize 1, M. Tomassini 1, A.E.P. Villa 2,3
     Nonlinear Dynamics and Noise in Biological Systems Workshop: Torino, 2004-04-19
     1: Information Management Department, University of Lausanne, Switzerland
     2: Laboratory of Neuro-heuristics, University of Lausanne, Switzerland
     3: Laboratory of Neurobiophysics, University Joseph-Fourier, France
     <javier.iglesias@hec.unil.ch>

  2. description of the experiment
     • model synaptic pruning after the over-growth observed during brain maturation
     • size: 100 × 100 2D lattice, torus-wrapped
     • duration: 1 × 10^6 time steps (ms)
     • compatible with hardware implementation
     • Iglesias, J., Eriksson, J., Grize, F., Tomassini, M., Villa, A.E.P., submitted. Dynamics of pruning in simulated large-scale spiking neural networks. BioSystems.

  3. leaky integrate-and-fire neuro-mimetic model
     Type I = excitatory, 80% of cells, ~250 excitatory afferents each
     Type II = inhibitory, 20% of cells, ~100 inhibitory afferents each
     V_rest = −76 [mV], θ_i = −40 [mV], τ_mem = 8 [ms], t_refract = 1 [ms], λ_i = 10 [spikes/s], n = 50

     V_i(t+1) = V_rest[q] + (1 − S_i(t)) · ((V_i(t) − V_rest[q]) · k_mem[q]) + Σ_j w_ji(t) + B_i(t)
     S_i(t) = H(V_i(t) − θ_{q_i})
     w_ji(t+1) = S_j(t) · A_ji(t) · P[q_j, q_i]
     B_i(t+1) = P_reject(λ_{q_i}) · n · P[q_1, q_i]
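The membrane update above can be sketched in a few lines of Python. Parameter values are taken from the slide; mapping τ_mem = 8 ms to a per-step leak factor via k_mem = exp(−dt/τ_mem) with dt = 1 ms is an assumption (the slide gives τ_mem but not k_mem directly), and the simple scalar form stands in for the full lattice of units.

```python
import math

# Sketch of the discrete-time leaky integrate-and-fire update at 1-ms steps.
# k_mem = exp(-dt/tau_mem) is an assumed mapping from the slide's tau_mem.
V_REST = -76.0                   # resting potential [mV]
THETA = -40.0                    # firing threshold [mV]
K_MEM = math.exp(-1.0 / 8.0)     # per-step leak factor, tau_mem = 8 ms

def lif_step(v, spiked, syn_input, background=0.0):
    """V_i(t+1) = V_rest + (1 - S_i(t)) * (V_i(t) - V_rest) * k_mem
                  + sum_j w_ji(t) + B_i(t); a spike resets the membrane to rest."""
    v_next = V_REST + (0.0 if spiked else (v - V_REST) * K_MEM) \
             + syn_input + background
    s_next = 1 if v_next >= THETA else 0   # Heaviside threshold S(t)
    return v_next, s_next

v1, s1 = lif_step(V_REST, 0, syn_input=40.0)   # strong input: -36 mV, spike
v2, s2 = lif_step(V_REST, 0, syn_input=10.0)   # weak input: -66 mV, no spike
```

Because the update is purely local (previous potential, spike flag, and summed input), it maps directly onto the hardware-compatible implementation mentioned on slide 2.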

  4. digression: random number generators
     acceptance/rejection Poisson process, λ = 10 spikes/s, n = 10^7 samples
     [Figure: two interval histograms (0–1000 ms bins) comparing the default GSL (GNU Scientific Library) RNG implementation with the default C RNG implementation (GNU/Linux, MacOS X, ...).]
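The background process being compared can be sketched as follows: an acceptance/rejection Poisson process with λ = 10 spikes/s sampled at 1-ms steps. The slide's point is that the quality of the underlying uniform RNG matters at n = 10^7 samples; Python's Mersenne Twister, used here, is comparable in quality to the GSL default, so this sketch illustrates the sampling scheme rather than the defective C RNG.

```python
import random

# Acceptance/rejection Poisson process: at each 1-ms step, accept a spike
# with probability lambda * dt.
RATE_HZ = 10.0
DT_S = 0.001                  # 1-ms time step
P_SPIKE = RATE_HZ * DT_S      # acceptance probability per step (0.01)

def poisson_train(n_steps, rng):
    """Binary spike train; each step is an independent Bernoulli trial."""
    return [1 if rng.random() < P_SPIKE else 0 for _ in range(n_steps)]

train = poisson_train(1_000_000, random.Random(42))
rate = sum(train) / (len(train) * DT_S)   # empirical rate [spikes/s]
```

With a sound generator the empirical rate converges to λ and the inter-spike-interval histogram is exponential; a poor RNG leaves periodic artifacts in that histogram, which is what the slide's left/right comparison exposes.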

  5. STDP - spike timing dependent plasticity
     w_ji(t+1) = S_j(t) · A_ji(t) · P[q_j, q_i]
     [Figure: weight change (delta weight) as a function of spike timing between presynaptic j and postsynaptic i: LTP (Long Term Potentiation) for causal order, LTD (Long Term Depression) for anti-causal order.]
     A_ji(t) ∈ {0, 1, 2, 4} for P[1,1]; A_ji(t) = 1 for the others
     P[1,1] = P[1,2] = +1.34 [mV]
     P[2,1] = P[2,2] = −2.40 [mV]
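The synaptic transmission rule above can be sketched directly: the postsynaptic contribution is the fixed amplitude P[q_j, q_i] scaled by the discrete activation level A_ji and gated by the presynaptic spike S_j(t). Only excitatory-to-excitatory synapses (type (1, 1)) are plastic, with A_ji restricted to the levels {0, 1, 2, 4}; all other projection types keep A_ji = 1.

```python
# Sketch of w_ji(t+1) = S_j(t) * A_ji(t) * P[q_j, q_i], with the slide's
# amplitudes. Type 1 = excitatory, type 2 = inhibitory.
P = {(1, 1): +1.34, (1, 2): +1.34,    # excitatory amplitudes [mV]
     (2, 1): -2.40, (2, 2): -2.40}    # inhibitory amplitudes [mV]
LEVELS = (0, 1, 2, 4)                 # allowed A_ji for type (1, 1)

def w(s_pre, a_ji, q_pre, q_post):
    """Contribution of synapse j -> i at the next time step."""
    return s_pre * a_ji * P[(q_pre, q_post)]

strong = w(1, 4, 1, 1)      # fully potentiated e->e synapse: +5.36 mV
silent = w(1, 0, 1, 1)      # pruned e->e synapse transmits nothing
inhib = w(1, 1, 2, 1)       # inhibitory synapse, fixed A = 1: -2.40 mV
```

Pruning in this scheme is simply A_ji reaching and staying at 0: the synapse still exists in the data structure but never contributes to the membrane potential.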

  6. pruning dynamics
     [Figure: three example time courses (a, b, c) of the activity level L_ji crossing the thresholds L_0 … L_4 that determine the activation level A_1 … A_4.]
     L_ji(t+1) = k_act · L_ji(t) + (S_i(t) · M_j(t)) − (S_j(t) · M_i(t)),  L_ji ∈ ]0, L_max]
     M_i(t+1) = S_i(t) · M_max + (1 − S_i(t)) · (M_i(t) · k_learn)
     τ_act = 11000 [ms],  L_max = 10 · M_max,  τ_learn = 2 · τ_mem
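A minimal sketch of these two coupled updates, at 1-ms steps: the decay factors are assumed to follow k = exp(−dt/τ), with τ_act = 11000 ms and τ_learn = 2·τ_mem = 16 ms, and M_max = 1.0 is an arbitrary normalization (the slide only fixes the ratio L_max = 10·M_max).

```python
import math

# Assumed decay mapping k = exp(-dt/tau); M_max = 1.0 is a normalization.
M_MAX = 1.0
L_MAX = 10.0 * M_MAX
K_ACT = math.exp(-1.0 / 11000.0)    # tau_act = 11000 ms
K_LEARN = math.exp(-1.0 / 16.0)     # tau_learn = 2 * tau_mem = 16 ms

def step(l_ji, m_i, m_j, s_i, s_j):
    """L_ji(t+1) = k_act*L_ji(t) + S_i(t)*M_j(t) - S_j(t)*M_i(t),
       M(t+1) = S(t)*M_max + (1 - S(t))*M(t)*k_learn."""
    l_next = min(K_ACT * l_ji + s_i * m_j - s_j * m_i, L_MAX)  # clamp at L_max
    m_i_next = s_i * M_MAX + (1 - s_i) * m_i * K_LEARN
    m_j_next = s_j * M_MAX + (1 - s_j) * m_j * K_LEARN
    return l_next, m_i_next, m_j_next

# Causal pairing (presynaptic j fires, then postsynaptic i) raises L_ji:
l, m_i, m_j = 1.0, 0.0, 0.0
l, m_i, m_j = step(l, m_i, m_j, s_i=0, s_j=1)   # j spikes, sets trace M_j
l, m_i, m_j = step(l, m_i, m_j, s_i=1, s_j=0)   # i spikes while M_j is high
```

The asymmetry of the two trace terms gives the STDP-like behavior of the figure: pre-before-post pushes L_ji up toward higher activation levels, post-before-pre pushes it down toward the pruning boundary at 0.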

  7. laying out the two unit types
     space-filling quasi-random Sobol distribution of the 20% inhibitory neurons on the 100 × 100 2D lattice
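The layout step can be sketched as follows. The slide uses a Sobol sequence; this stdlib-only stand-in uses a Halton sequence (van der Corput in bases 2 and 3), another low-discrepancy space-filling sequence, so the substitution of sequence family is an assumption for illustration only.

```python
# Place the 20% inhibitory cells at quasi-random, space-filling positions
# on the 100 x 100 lattice (Halton stand-in for the slide's Sobol sequence).
def van_der_corput(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

SIDE = 100
N_INHIB = SIDE * SIDE // 5        # 20% of the 10000 cells
inhibitory = set()
i = 1
while len(inhibitory) < N_INHIB:  # skip grid-cell collisions
    cell = (int(van_der_corput(i, 2) * SIDE),   # x from base-2 sequence
            int(van_der_corput(i, 3) * SIDE))   # y from base-3 sequence
    inhibitory.add(cell)
    i += 1
```

Compared with uniform random placement, a low-discrepancy sequence avoids clusters and voids of inhibitory cells, so local inhibition is roughly homogeneous across the lattice.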

  8. random local connectivity
     [Figure, panels a–h: for excitatory (e) and inhibitory (i) projection types, connection probability (up to ~0.6) as a function of distance, example projection maps on the ±50 torus coordinates, and connection count histograms (0–400) over cells.]
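A sketch of distance-dependent local connectivity on the torus-wrapped lattice is below. The Gaussian profile and its parameters (peak p0 = 0.6, σ = 10 cells) are assumptions for illustration; the slide's panels only show that connection probability decays with distance and differs between projection types.

```python
import math
import random

# Distance-dependent connection probability on a torus-wrapped lattice.
# Profile shape and parameters are illustrative assumptions.
SIDE = 100

def torus_distance(a, b):
    """Euclidean distance with wrap-around in both lattice dimensions."""
    dx = min(abs(a[0] - b[0]), SIDE - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), SIDE - abs(a[1] - b[1]))
    return math.hypot(dx, dy)

def connect_prob(a, b, p0=0.6, sigma=10.0):
    """Gaussian decay of connection probability with wrapped distance."""
    d = torus_distance(a, b)
    return p0 * math.exp(-d * d / (2.0 * sigma * sigma))

rng = random.Random(0)
linked = rng.random() < connect_prob((0, 0), (95, 0))   # wrapped d = 5
```

The torus wrap matters for the figure's projection maps: a cell at x = 0 is only 5 cells away from one at x = 95, so the connectivity profile has no edge effects anywhere on the lattice.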

  9. result 1: effect of random generator seed
     [Figure: histogram over n = 100 runs (two example runs R1 and R2 marked) of the percentage of synapses at activation level A_4, spanning roughly 0–5%.]
     Same simulation settings, except random generator seed: variation.
     Same network, different random generator seed: small variation (not shown).

  10. result 2: no change in preferential direction or length
      [Figure, panels a–d: projection maps (x, y ∈ [−50, 50]) and length ratio as a function of distance (0–71), shown at t = 1·10^5, 2·10^5, and 8·10^5 ms; the ratio remains near 1.0 throughout.]

  11. result 3: effect of size of network
      [Figure: percentage of active synapses [A_4] (10–60%) against t_steady and t_max (in units of 10^4 time steps) for nine network configurations.]

  12. discussion
      • many oversimplified hypotheses
      • bimodal distribution of activation levels at steady state
      • no distortion of geometrical properties was induced
      • try other synaptic transfer functions
      • use more realistic transfer functions for other projection types
      • add content-related inputs
