Supervised Learning in Structured Spiking Neural Networks (including a detour to SpiNNaker)


  1. Supervised Learning in Structured Spiking Neural Networks (including a detour to SpiNNaker). André Grüning, Eric Nichols, Brian Gardner and Ioana Sporea. Department of Computer Science, University of Surrey, Guildford, UK. 1st December 2015.


  4. Outline: 1 Introduction, 2 Background, 3 Our Approach, 4 Results, 5 Results, 6 SpiNNaker, 7 References.

  5. What are we doing?
  We formulate a supervised learning rule for spiking neural networks that can train spiking networks containing one or more hidden layers of neurons, and that can map arbitrary spatio-temporal input spike patterns onto arbitrary output spike patterns, i.e. multiple spike trains.
  Why is this worthwhile? To understand how spike-pattern-based information processing takes place in the brain; to obtain a learning rule for spiking neural networks with technical potential; to find a rule that is to spiking networks what backprop is to rate-neuron networks.

  6. Scientific Background
  Human Brain Project: the SPIKEFRAME project, "Structures of Learning Algorithms for Spiking Neural Networks", carried out as part of SP4 "Theoretical Neuroscience" in the Ramp-Up Phase and SGA1.
  Where are we scientifically? At the intersection of computational neuroscience, cognitive science, and artificial intelligence / machine learning.


  8. [Outline slide, repeated as a section divider.]

  9. Spiking Neurons
  [Figure: three panels (a)-(c), described below.]
  Spiking neurons: real neurons communicate with each other via sequences of pulses, called spikes.
  1. Dendritic tree, axon and cell body of a neuron.
  2. Top: spikes arrive from other neurons and the membrane potential rises. Bottom: incoming spikes on various dendrites elicit timed spike responses as the output.
  3. Response of the membrane potential to incoming spikes: if the threshold θ is crossed, the membrane potential is reset to a low value and a spike is fired.
  From: André Grüning and Sander Bohte. Spiking neural networks: Principles and challenges. In Proceedings of the 22nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Brugge, 2014. Invited contribution.

  10. Spiking Information Processing
  The precise timing of spikes generated by neurons conveys meaningful information. Synaptic plasticity forms the basis of learning: changes in synaptic strength depend on relative pre- and postsynaptic spike times, and on third signals. Challenge: to relate such localised plasticity changes to learning at the network level.

  11. General Learning Algorithms for Spiking NN?
  There is no general-purpose learning algorithm for spiking neural networks. Challenge: the discontinuous nature of spiking events. Various supervised learning algorithms exist, each with its own limitations, e.g. in network topology, adaptability (e.g. reservoir computing), spike encoding (e.g. latency coding, or spike vs. no spike), or network size.

  12. Some Learning Algorithms for Spiking NN
  SpikeProp: S. M. Bohte, J. N. Kok, and H. La Poutré. Spike-prop: Error-backpropagation in multi-layer networks of spiking neurons. Neurocomputing, 48(1–4):17–37, 2002.
  ReSuMe: Filip Ponulak and Andrzej Kasiński. Supervised learning in spiking neural networks with ReSuMe: Sequence learning, classification and spike shifting. Neural Computation, 22:467–510, 2010.
  Tempotron: Robert Gütig and Haim Sompolinsky. The tempotron: A neuron that learns spike timing-based decisions. Nature Neuroscience, 9(3), 2006. doi: 10.1038/nn1643.
  Chronotron: Răzvan V. Florian. The chronotron: A neuron that learns to fire temporally precise spike patterns. PLoS ONE, 7(8):e40233, 2012.
  SPAN: A. Mohemmed, S. Schliebs, and N. Kasabov. SPAN: Spike pattern association neuron for learning spatio-temporal sequences. Int. J. Neural Systems, 2011.
  R. Urbanczik and W. Senn. A gradient learning rule for the tempotron. Neural Computation, 21:340–352, 2009.
  Johanni Brea, Walter Senn, and Jean-Pascal Pfister. Matching recall and storage in sequence learning with spiking neural networks. The Journal of Neuroscience, 33(23):9565–9575, 2013.
  Nicolas Frémaux, Henning Sprekeler, and Wulfram Gerstner. Functional requirements for reward-modulated spike-timing-dependent plasticity. The Journal of Neuroscience, 30(40):13326–13337, 2010.
  Ioana Sporea and André Grüning. Supervised learning in multilayer spiking neural networks. Neural Computation, 25(2), 2013.


  14. [Outline slide, repeated as a section divider.]

  15. Our Approach: MultilayerSpiker
  Generalise backpropagation to spiking neural networks with hidden neurons. Use a stochastic neuron model to connect smooth quantities (for which derivatives exist) with discrete spike trains (for which no derivative exists).

  16. Neuron Model: Membrane Potential

  $$u_o(t) := \sum_h w_{oh} \int_0^t Y_h(t')\,\epsilon(t - t')\,\mathrm{d}t' + \int_0^t Z_o(t')\,\kappa(t - t')\,\mathrm{d}t' \tag{1}$$

  where $o$ is the postsynaptic neuron and $h$ ranges over the presynaptic neurons; $u_o$ is the membrane potential of $o$; $w_{oh}$ is the strength of the synaptic connection from $h$ to $o$; $Y_h(t) = \sum_{t_h < t} \delta(t - t_h)$ is the spike train of neuron $h$, where the $t_h$ are the firing times of $h$; and $Z_o(t) = \sum_{t_o < t} \delta(t - t_o)$ is the spike train of neuron $o$, where the $t_o$ are the firing times of $o$.

  17. Neuron Model: Spike Response Kernel ε and Reset Kernel κ

  $$\kappa(s) = \kappa_0\, e^{-s/\tau_m}\, \Theta(s), \qquad \epsilon(s) = \epsilon_0 \left[ e^{-s/\tau_m} - e^{-s/\tau_s} \right] \Theta(s) \tag{2}$$

  with spike response kernel amplitude $\epsilon_0 = 4\,\mathrm{mV}$, reset kernel amplitude $\kappa_0 = -15\,\mathrm{mV}$, membrane time constant $\tau_m = 10\,\mathrm{ms}$, synaptic rise time $\tau_s = 5\,\mathrm{ms}$, and Heaviside step function $\Theta(s)$.
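To make Eqs. (1)-(2) concrete, here is a minimal discrete-time sketch in Python; it is not the authors' code, and the time step, window, weights and spike times are illustrative assumptions. Spike trains live on a time grid, and the membrane potential is built from causal convolutions with the two kernels.

```python
import numpy as np

# Minimal discrete-time sketch of Eqs. (1)-(2); not the authors' code.
# Time step, window, weights and spike times below are illustrative.

dt = 0.1                      # time step (ms)
T = 100.0                     # simulation window (ms)
t = np.arange(0.0, T, dt)     # time grid

tau_m, tau_s = 10.0, 5.0      # membrane / synaptic time constants (ms)
eps0, kappa0 = 4.0, -15.0     # kernel amplitudes (mV)

def eps(s):
    """Spike response kernel: eps0 * [exp(-s/tau_m) - exp(-s/tau_s)] * Theta(s)."""
    return eps0 * (np.exp(-s / tau_m) - np.exp(-s / tau_s)) * (s >= 0)

def kappa(s):
    """Reset kernel: kappa0 * exp(-s/tau_m) * Theta(s)."""
    return kappa0 * np.exp(-s / tau_m) * (s >= 0)

def spike_train(spike_times):
    """Spike train on the grid; amplitude 1/dt approximates the delta spikes."""
    Y = np.zeros_like(t)
    Y[(np.asarray(spike_times) / dt).astype(int)] = 1.0 / dt
    return Y

def membrane_potential(w_oh, Y_hs, Z_o):
    """Eq. (1): u_o = sum_h w_oh (eps * Y_h) + (kappa * Z_o), causal convolutions.
    Z_o (the neuron's own past spikes) is taken as given here; in a full
    simulation it would be generated self-consistently from u_o."""
    u = np.zeros_like(t)
    for w, Y in zip(w_oh, Y_hs):
        u += w * np.convolve(Y, eps(t))[: len(t)] * dt
    return u + np.convolve(Z_o, kappa(t))[: len(t)] * dt

# Example: two presynaptic neurons and one earlier output spike.
Y_hs = [spike_train([10.0, 30.0]), spike_train([20.0])]
u_o = membrane_potential([0.5, 0.8], Y_hs, spike_train([35.0]))
```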

  18. Neuron Model: Stochastic Intensity (Instantaneous Firing Rate) and Spikes

  $$\rho(t) = \rho[u(t)] = \rho_0 \exp\!\left( \frac{u(t) - \vartheta}{\Delta u} \right) \tag{3}$$

  with firing rate at threshold $\rho_0 = 0.01\,\mathrm{ms}^{-1}$, "threshold" $\vartheta = 15\,\mathrm{mV}$, and threshold smoothness $\Delta u_o = 0.2\,\mathrm{mV}$ (output layer) or $\Delta u_h = 2\,\mathrm{mV}$ (hidden layer). Spikes are generated by a point process with stochastic intensity $\rho_o(t)$, i.e. in a small time interval $[t, t + \delta t)$ a spike is generated with probability $\rho_o(t)\,\delta t$.
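Continuing the sketch above (reusing np, dt and u_o), spikes can be drawn from the escape-noise model of Eq. (3) by a per-bin Bernoulli approximation of the point process; the sampling routine itself is our assumption, not the authors' implementation.

```python
rho0 = 0.01       # firing rate at threshold (1/ms)
theta = 15.0      # soft "threshold" (mV)
delta_u = 0.2     # threshold smoothness: 0.2 mV for output, 2 mV for hidden

def rho(u):
    """Eq. (3): stochastic intensity rho0 * exp((u - theta) / delta_u)."""
    return rho0 * np.exp((u - theta) / delta_u)

def sample_spikes(u, seed=0):
    """Point process via one Bernoulli draw per bin: P(spike) ~ rho(u) * dt."""
    rng = np.random.default_rng(seed)
    fired = rng.random(len(u)) < rho(u) * dt
    return fired.astype(float) / dt   # same 1/dt spike-train convention as above

Z_o = sample_spikes(u_o)
```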

  19. Backpropagation: Objective ("Error") Function

  $$P(z^{\mathrm{ref}}_o \mid x) = \exp\!\left( \int \left[ \log(\rho_o(t))\, Z^{\mathrm{ref}}_o(t) - \rho_o(t) \right] \mathrm{d}t \right) \tag{4}$$

  where $Z^{\mathrm{ref}}_o(t) = \sum_f \delta(t - \tilde t^f_o)$ is the target output spike train for input $x$.

  J.-P. Pfister, T. Toyoizumi, K. Aihara, and W. Gerstner. Optimal spike-timing dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18(6):1309–1339, 2006.

  Backprop approach:

  $$\Delta w_{oh} = \eta_o \frac{\partial \log P(z^{\mathrm{ref}} \mid x)}{\partial w_{oh}} \tag{5}$$
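Continuing the sketch (reusing np, dt and rho), the log of the objective in Eq. (4) reduces to a sum of log-intensities at the target spike times minus the integrated intensity; `target_times` below is an illustrative placeholder, not a name from the paper.

```python
def log_likelihood(u_o, target_times):
    """log P(z_ref | x) from Eq. (4): log-intensity summed over the target
    spike times, minus the intensity integrated over the trial."""
    idx = (np.asarray(target_times) / dt).astype(int)
    return np.sum(np.log(rho(u_o[idx]))) - np.sum(rho(u_o)) * dt
```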

  20. Backprop Approach: ... and Some Ten Slides Later
  Lots of derivatives, indices, and probabilities. The derivatives are only possible thanks to the smoothness of the probability function. We switch relatively freely between expected values and their best estimates, which is all you have when there is only a single realisation.

  21. Backprop Weight Update

  Backpropagated error signal:

  $$\delta_o(t) := \frac{1}{\Delta u_o} \left[ Z^{\mathrm{ref}}_o(t) - \rho_o(t) \right] \tag{6}$$

  Hidden-to-output weights:

  $$\Delta w_{oh} = \eta_o \int_0^T \delta_o(t)\, (Y_h \ast \epsilon)(t)\, \mathrm{d}t \tag{7}$$

  Input-to-hidden weights:

  $$\Delta w_{hi} = \frac{\eta_h}{\Delta u_h} \sum_o w_{oh} \int_0^T \delta_o(t)\, \big( [Y_h\, (X_i \ast \epsilon)] \ast \epsilon \big)(t)\, \mathrm{d}t \tag{8}$$

  Brian Gardner, Ioana Sporea, and André Grüning. Learning spatio-temporally encoded pattern transformations in structured spiking neural networks. Neural Computation, 27(12):2548–2586, 2015. doi: 10.1162/NECO_a_00790. Preprint available at http://arxiv.org/abs/1503.09129.
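A sketch of the three updates, Eqs. (6)-(8), on the same discrete grid (reusing np, dt, t, eps and rho from above); the learning rates and the bookkeeping over output neurons are illustrative assumptions. Note that rho above was set up with the output-layer Δu; for hidden neurons the hidden-layer value would apply.

```python
def backprop_error(Z_ref, u_o, delta_u_o=0.2):
    """Eq. (6): delta_o(t) = [Z_ref_o(t) - rho_o(t)] / Delta_u_o."""
    return (Z_ref - rho(u_o)) / delta_u_o

def dw_hidden_to_output(delta_o, Y_h, eta_o=0.1):
    """Eq. (7): Delta_w_oh = eta_o * int_0^T delta_o(t) (Y_h * eps)(t) dt."""
    psp = np.convolve(Y_h, eps(t))[: len(t)] * dt          # (Y_h * eps)(t)
    return eta_o * np.sum(delta_o * psp) * dt

def dw_input_to_hidden(w_oh, delta_os, Y_h, X_i, eta_h=0.1, delta_u_h=2.0):
    """Eq. (8): output errors delta_o are backpropagated through w_oh and paired
    with a doubly filtered coincidence of the hidden spikes and the input PSP."""
    inner = Y_h * (np.convolve(X_i, eps(t))[: len(t)] * dt)  # Y_h(t)(X_i * eps)(t)
    outer = np.convolve(inner, eps(t))[: len(t)] * dt        # [...] * eps
    return (eta_h / delta_u_h) * sum(
        w * np.sum(d_o * outer) * dt for w, d_o in zip(w_oh, delta_os)
    )
```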


  23. [Outline slide, repeated as a section divider.]
