Bio-inspired computation: Clock-free, grid-free, scale-free, and symbol-free



  1. Bio-inspired computation: Clock-free, grid-free, scale-free, and symbol-free (FA2386-12-1-4050). PI: Janet Wiles (University of Queensland). AFOSR Program Review: Mathematical and Computational Cognition Program; Computational and Machine Intelligence Program; Robust Decision Making in Human-System Interface Program (Jan 28 – Feb 1, 2013, Washington, DC).

  2. Bio-inspired computation (PI Janet Wiles)
Research Objectives: • Develop bio-inspired algorithms for robots which induce their own temporal terms (symbol-free); use transient micro-synchrony (clock-free); compute at multiple scales (scale-free); and navigate using experience graphs (grid-free).
Technical Approach: • Develop spiking neural networks that use dendritic computation, different interneuron classes and neural architectures inspired by hippocampus.
Budget ($453k) across YR 1 – YR 4: 151, 151, 151.
DoD Benefits: • Fundamental discoveries into computation in natural systems have the potential to provide new approaches to computation with robust and scalable features.
Project Start Date: awarded 19 Mar 2012. Project End Date: 1 Sept 2015 (42 months).

  3. List of Project Goals
1. Develop a system for extracting multi-scale structure from sequences where the elements in each sequence are not pre-specified and can vary in duration.
2. Develop neural systems that use inhibitory interneurons for clock-free coordination at multiple temporal scales.
3. Develop grid-free algorithms for navigation based on integrated sensory-motor coordination.
4. Test algorithms in simulation and on robot sensory streams using the iRat (a robot developed at UQ for research at the intersection of neurorobotics, neuroscience, and embodied cognition).

  4. Progress Towards Goals (or New Goals), 9 months into the project:
1. Progress on a prediction paradigm to extract structure from temporal sequences at multiple scales. Tests on Elman's simple recurrent network badiiguuu letter-prediction task show multiscale term prediction.
2. Progress developing neural networks with different inhibitory neuron classes that create an asynchronous temporal pipeline in excitatory neurons.
3. Development of a learning algorithm that explicitly adjusts dendritic delays to learn temporally extended patterns.
4. Ongoing development of the iRat as a test platform.

  5. Overview
1. Representation of multi-scale structure
2. Prediction at multiple scales
3. Learning transmission delays
4. iRat test platform

  6. Background
Task: represent multi-scale structure, building on simple recurrent networks (SRNs).
Jeffrey L. Elman, Finding Structure in Time, Cognitive Science, 14, 179-211 (1990): "Time underlies many interesting behaviors."
[Figure: Elman's Simple Recurrent Network architecture.]
Finding word boundaries from letter prediction: badiiguuu adiiguuubaguuudiidii … drawn from {ba | dii | guuu}*.
Extracting grammatical structure from word prediction.
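
As an aside to the letter-prediction task above, here is a minimal sketch (not from the deck) of generating a {ba | dii | guuu}* letter stream in Python. Inside a word the next letter is fully determined (b→a, d→i→i, g→u→u→u), so a learner's prediction error peaks at word onsets, which is how word boundaries emerge from letter prediction.

```python
# Generate a random letter stream from the {ba | dii | guuu}* "language"
# used in Elman-style letter-prediction experiments.
import random

WORDS = ["ba", "dii", "guuu"]

def make_stream(n_words: int, seed: int = 0) -> str:
    """Concatenate n_words randomly chosen words into one letter stream."""
    rng = random.Random(seed)
    return "".join(rng.choice(WORDS) for _ in range(n_words))

if __name__ == "__main__":
    print(make_stream(8))  # e.g. a stream like 'badiiguuubaguuudiidii...'
```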

  7. Background
An advantage is that SRNs have unbounded temporal histories.
• Prediction in context-free and context-sensitive grammars, e.g. a^n b^n: aaabbb aabb aaaabbbb … (a^3 b^3, a^2 b^2, a^4 b^4, …).
SRN: 2 inputs, 2 hidden units, 2 output units. Task: predict the next input.
Boden, Wiles, Tonkes & Blair, Learning to predict a context-free language: Analysis of dynamics in recurrent hidden units, ICANN 1999.
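
For concreteness, a minimal sketch of an Elman-style SRN trained with one-step backpropagation on next-symbol prediction for a^n b^n strings. The layer sizes follow the slide (2 inputs, 2 hidden units, 2 outputs once the symbols are one-hot coded), but the learning rate, epochs, and training corpus are illustrative assumptions rather than the settings of Boden, Wiles, Tonkes & Blair (1999).

```python
# Minimal Elman-style SRN (numpy): the context layer is a copy of the previous
# hidden state and gradients are truncated at one step, as in Elman (1990).
import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def make_anbn(max_n, reps, rng):
    """Concatenate strings a^n b^n for random n in [1, max_n]."""
    s = ""
    for _ in range(reps):
        n = int(rng.integers(1, max_n + 1))
        s += "a" * n + "b" * n
    return s

def train_srn(seq, n_hidden=2, lr=0.1, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    symbols = sorted(set(seq))
    n = len(symbols)
    idx = {c: i for i, c in enumerate(symbols)}
    W_in = rng.normal(0, 0.5, (n_hidden, n))          # input -> hidden
    W_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))  # context -> hidden
    W_out = rng.normal(0, 0.5, (n, n_hidden))         # hidden -> output
    for _ in range(epochs):
        h = np.zeros(n_hidden)
        for t in range(len(seq) - 1):
            x = one_hot(idx[seq[t]], n)
            target = one_hot(idx[seq[t + 1]], n)
            ctx = h.copy()                             # previous hidden state
            h = np.tanh(W_in @ x + W_ctx @ ctx)
            z = W_out @ h
            y = np.exp(z - z.max()); y /= y.sum()      # softmax prediction
            d_out = y - target                         # cross-entropy gradient
            d_h = (W_out.T @ d_out) * (1 - h ** 2)
            W_out -= lr * np.outer(d_out, h)
            W_in -= lr * np.outer(d_h, x)
            W_ctx -= lr * np.outer(d_h, ctx)
    return W_in, W_ctx, W_out, idx

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    seq = make_anbn(max_n=4, reps=200, rng=rng)
    train_srn(seq)
```

On such strings, only the b's after the first b and the a that follows the final b are deterministic, so prediction performance is usually scored on those positions.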

  8. The limitation is that each SRN has a given spatial scale for both elements and combinations. SRNs can use an unbounded past, but predict the future one step at a time.
[Figure: the letter stream b a d i i g u u u laid out on a syntagmatic axis (elements that can be combined: phonemes, letters, words), with the paradigmatic axis (elements that can be contrasted) showing the alternatives b / d / g at each word onset.]

  9. The limitation is that each SRN has a given spatial scale for both elements and combinations. SRNs can use an unbounded past, but predict the future one step at a time. Need to represent an unbounded future.
[Figure: as on the previous slide, with the paradigmatic alternatives now shown at the word scale (ba / dii / guuu) above the letter stream b a d i i g u u u on the syntagmatic axis.]

  10. Prediction at multiple scales: badiiguuu
Spiking neuron predictions using dendritic computation and delay learning.
[Tables: time-locked letter predictions (averaged over 3000 letters) for b, d, g, a, i, u at offsets t-5 through t+5, alongside duration-based predictions; at the word scale, ba, dii and guuu are each predicted at 33% one duration before and after the current word (t-d, t+d), while the letters of the current word (e.g. dii) are predicted at 100%.]

  11. 2. Coordination without global clocks
Neural networks with different inhibitory neuron classes can create an asynchronous temporal pipeline in excitatory neurons.
Clock-free systems: • Fully asynchronous • Every signal sends state information • Scalable to any size circuit • Require duplicate hardware and more complex circuits • Can run at maximum speed when needed (no oscillations required) • Can give rise to rhythms and mixtures of rhythms through emergent processes.
[Figure: delay-insensitive pipeline with control points (Data, Control, Reset) and processing states: input-ready, processing, output-ready, returning to baseline. From Smith, DeMar, Yuan, Hagedorn and Ferguson (2001), Delay-insensitive gate-level pipelining, Integration, the VLSI Journal, 30(2):103-131.]
[Figure: defined types of cortical interneurons structure space and spike timing in the hippocampus (Somogyi and Klausberger, 2005).]
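
To make the clock-free pipeline idea concrete, here is a rough event-driven sketch (an illustrative assumption, not code from Smith et al. 2001 or from the project): each stage cycles through the slide's control states, input-ready, processing, output-ready, returning to baseline, and advances only on local handshakes with its neighbours, so no global clock is needed.

```python
# Two pipeline stages passing tokens via local handshakes only.
from collections import deque

class Stage:
    """One pipeline stage; its state only changes on local transitions."""
    def __init__(self, work):
        self.work = work
        self.state = "input-ready"
        self.data = None

    def step(self, upstream, downstream):
        if self.state == "input-ready" and upstream:
            self.data = upstream.popleft()       # accept a token from upstream
            self.state = "processing"
        elif self.state == "processing":
            self.data = self.work(self.data)     # local computation, no clock
            self.state = "output-ready"
        elif self.state == "output-ready":
            downstream.append(self.data)         # hand the token downstream
            self.state = "returning-to-baseline"
        elif self.state == "returning-to-baseline":
            self.data = None
            self.state = "input-ready"

if __name__ == "__main__":
    source, mid, sink = deque([1, 2, 3]), deque(), deque()
    s1, s2 = Stage(lambda x: 2 * x), Stage(lambda x: x + 1)
    for _ in range(20):      # the loop stands in for events arriving asynchronously
        s2.step(mid, sink)
        s1.step(source, mid)
    print(list(sink))        # [3, 5, 7]
```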

  12. 3. Learning transmission delays in spiking neural networks: A novel approach to sequence learning based on spike delay variance
Characteristics of structured temporal sequences: in neural systems, timing is critical at the millisecond level.
Real neural systems:
• learn temporal order in perceptual input, motor control, sensorimotor coordination and memory tasks
• become faster with increased experience
• can change if the stimuli change
• don't need explicit reward to learn
• can learn sequences in one or a few trials

  13. Learning mechanisms: Neural plasticity when mean and variance are both adaptable
• A synapse encodes a mean and std dev for x; initially x is normally distributed N(0,1).
• Individual fitness is 1 if x is within 0.1 of the target mean, else 0.
Challenge: The environment changes and the optimal value for x changes radically. How does the mean value of x move from x=0 to x=20?
[Plot: probability density of x over the range -5 to 25.]
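
A minimal sketch of the fitness rule described above (function names and sample counts are illustrative): an individual draws x from a Gaussian with its own mean and std dev, and scores 1 only if x lands within 0.1 of the target.

```python
# All-or-nothing fitness on a single Gaussian draw.
import random

def fitness(mean: float, std: float, target: float, rng: random.Random) -> int:
    """1 if a draw x ~ N(mean, std**2) falls within 0.1 of the target, else 0."""
    x = rng.gauss(mean, std)
    return 1 if abs(x - target) <= 0.1 else 0

if __name__ == "__main__":
    rng = random.Random(0)
    hits = sum(fitness(0.0, 1.0, 20.0, rng) for _ in range(10_000))
    print(hits)   # ~0: N(0, 1) puts essentially no mass near x = 20
```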

  14. [Plot: densities with means 0, 5, 10, 15, 20 and std dev 1.] The signal in the target range [19.9, 20.1] is virtually nil.
[Plot: densities with mean 0 and std dev 1, 2, 3, 4, 5, 10, 20.] The signal can be enhanced by raising the variance.
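
The effect can be checked directly. A quick sketch (assuming "signal" means the probability mass the Gaussian places in the target window [19.9, 20.1]) shows the mass rising from numerically zero to a detectable level as the std dev grows.

```python
# Probability mass in the target window for increasing std dev.
import math

def mass_in_window(mean, std, lo, hi):
    """P(lo <= x <= hi) for x ~ N(mean, std**2), via the error function."""
    cdf = lambda z: 0.5 * (1.0 + math.erf((z - mean) / (std * math.sqrt(2))))
    return cdf(hi) - cdf(lo)

if __name__ == "__main__":
    for std in (1, 2, 3, 4, 5, 10, 20):
        print(std, mass_in_window(0.0, std, 19.9, 20.1))
```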

  15. Adapt both x and std dev (1+1 EA, 50 generations).
[Plot: probability density functions of x over the range -5 to 25, showing the population variation across generations.]
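
A rough (1+1) evolutionary-algorithm sketch of this idea, using the window mass above as the fitness signal; the mutation step sizes and generation budget are illustrative assumptions, not parameters from the deck. Growing variance first makes the fitness signal non-zero; selection can then pull the mean towards 20 and finally shrink the variance again.

```python
# (1+1) EA adapting both the value and its mutation std dev.
import math
import random

def window_mass(mean, std, lo=19.9, hi=20.1):
    """Probability that N(mean, std**2) lands in [lo, hi]."""
    cdf = lambda z: 0.5 * (1.0 + math.erf((z - mean) / (std * math.sqrt(2))))
    return cdf(hi) - cdf(lo)

def one_plus_one_ea(generations=500, seed=0):
    rng = random.Random(seed)
    mean, std = 0.0, 1.0
    fit = window_mass(mean, std)
    for _ in range(generations):
        cand_mean = mean + rng.gauss(0.0, std)                    # mutate the value
        cand_std = max(0.1, std * math.exp(rng.gauss(0.0, 0.3)))  # mutate the std dev
        cand_fit = window_mass(cand_mean, cand_std)
        if cand_fit >= fit:                                       # keep if no worse
            mean, std, fit = cand_mean, cand_std, cand_fit
    return mean, std, fit

if __name__ == "__main__":
    print(one_plus_one_ea())  # the mean typically ends near 20 once the variance has grown
```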

  16. Message: Variance affects fitness signals. Variance can move (Gaussian) mountains.

  17. A spiking neural network with spike delay variance learning (SDVL)
• 2-layer feed-forward network
• plasticity: Gaussian synapses
• winner-take-all output layer
Wright and Wiles (2012)
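
As a trivial illustration of the last point (an assumption about what the summary means, not code from Wright and Wiles 2012), a hard winner-take-all readout simply lets the output neuron with the largest accumulated input respond:

```python
# Hard winner-take-all over output-layer currents.
import numpy as np

def winner_take_all(output_currents: np.ndarray) -> int:
    """Index of the single output neuron allowed to fire."""
    return int(np.argmax(output_currents))

print(winner_take_all(np.array([0.2, 1.3, 0.7])))  # neuron 1 wins
```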

  18. Gaussian Synapse Model
[Equation shown as an image in the original slide], where:
• p is the peak postsynaptic current
• t0 is the time since a presynaptic spike in ms (t0 ≥ 0)
• μ is the mean
• v is the variance
The 'weight' of a synapse is the integral of the curve, so p is varied to match a given integral. Hence each synapse has 3 parameters (μ, v and integral).
Wright and Wiles (2012)
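
The equation itself is an image in the deck, so the sketch below assumes the natural reading of the variable list: a Gaussian-shaped postsynaptic current in the time since the presynaptic spike, with the peak p rescaled so the curve's integral matches the synaptic 'weight'. Treat the exact form as an assumption to be checked against Wright and Wiles (2012).

```python
# Gaussian release profile for a single synapse (assumed form).
import numpy as np

def gaussian_current(t0, mu, v, weight=1.0):
    """Postsynaptic current at time t0 (ms) after a presynaptic spike."""
    t0 = np.asarray(t0, dtype=float)
    p = weight / np.sqrt(2 * np.pi * v)       # peak chosen so the integral equals 'weight'
    current = p * np.exp(-(t0 - mu) ** 2 / (2 * v))
    return np.where(t0 >= 0, current, 0.0)    # no current before the spike arrives

if __name__ == "__main__":
    t = np.arange(0.0, 15.0, 0.5)
    print(gaussian_current(t, mu=5.0, v=0.1))   # sharp burst around 5 ms (low variance)
    print(gaussian_current(t, mu=5.0, v=10.0))  # slow, spread-out release (high variance)
```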

  19. Postsynaptic release profiles for mean and variance
[Figure panels: A. variable mean; B. fixed variance (fixed integral).]
With low variance (0.1), all the current is delivered as a single burst at a delay determined by the mean. At high variances (>5), current is slowly released from the synapse over a large period of time, peaking at the delay determined by the mean.
Wright and Wiles (2012)

  20. Spike Delay-Variance Learning (SDVL) Algorithm
The change of mean, Δμ, is determined by [equation shown as an image in the original slide], where:
• t0 is the time difference between the presynaptic and postsynaptic spike (ms)
• μ is the mean of the synapse in milliseconds [min 0, max 15]
• v is the variance of the synapse [min 0.1, max 10]
• k(v) is the learning accelerator, here k(v) = (v + 0.9)²
• η_μ is the mean learning rate
• α1, α2 are constants
The change of variance, Δv, is determined by [equation shown as an image in the original slide], where:
• η_v is the variance learning rate
• β1, β2 are constants
Wright and Wiles (2012)
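
The Δμ and Δv expressions are images in the original slide and are not reproduced here, so the sketch below covers only the bookkeeping the text does specify: the parameter bounds and the learning accelerator k(v) = (v + 0.9)². The actual update rules must be taken from Wright and Wiles (2012).

```python
# SDVL parameter bookkeeping only; the update expressions themselves are
# supplied externally (they are not given as text on this slide).
def k(v: float) -> float:
    """Learning accelerator given on the slide: k(v) = (v + 0.9)**2."""
    return (v + 0.9) ** 2

def apply_sdvl_update(mu: float, v: float, delta_mu: float, delta_v: float):
    """Apply externally supplied SDVL updates, then clamp to the slide's bounds."""
    mu = min(max(mu + delta_mu, 0.0), 15.0)   # mean delay bounded to [0, 15] ms
    v = min(max(v + delta_v, 0.1), 10.0)      # variance bounded to [0.1, 10]
    return mu, v

# Example: a synapse at (mu = 3 ms, v = 2) receiving some Delta-mu, Delta-v
# computed from the actual SDVL rules would be updated and clamped like this:
print(apply_sdvl_update(3.0, 2.0, delta_mu=0.5, delta_v=-0.2))  # (3.5, 1.8)
```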
