
Brainchip October 2017 | Neuromorphic Computing and the Akida NSoC - PowerPoint PPT Presentation



  1. BrainChip | October 2017

  2. Agenda: neuromorphic computing background; the Akida Neuromorphic System-on-Chip (NSoC).

  3. Neuromorphic Computing Background

  4. A Brief History of Neuromorphic Computing

  5. Semiconductor Compute Architecture Cycles. [Timeline figure: from the Intel 4004 and x86/RISC (1971), through CPU/MPU/GPU acceleration with FPUs, DSPs, GPUs, and FPGAs (circa 1990) and an eventual consolidation of vendors, to artificial intelligence acceleration after AlexNet won the ImageNet Challenge (2012); each cycle brings architectural disruption (Von Neumann, Harvard, VLIW, array, memory), a multiplicity of ISAs, vendors, and accelerators, and new datatypes (floating point, fixed point, binary), with convolutions accelerated today and spiking architectures next.]

  6. The Next Major Semiconductor Disruption: a $60B opportunity in the next decade. Training is important, but inference is the major market; machine learning requires dedicated acceleration. [Chart: AI acceleration chipset forecast in $M (y-axis 0 to 70,000), 2018-2025, segmented into training, inference, and general purpose. Source: Tractica Deep Learning Chipsets, Q2 2018.]

  7. Explosion of AI Acceleration. [Diagram of two parallel tracks: convolutional neural networks moved from software simulation of ANNs on x86 CPUs, through re-purposed hardware acceleration in the cloud and at the edge, to customized acceleration such as the Google TPU; neuromorphic computing moved from x86 software simulation to test chips such as Intel's Loihi and IBM's TrueNorth, plus internal ASIC development.]

  8. Traditional CPU Architecture Inefficient for ANNs. A traditional compute architecture (memory, control unit, arithmetic logic unit, accumulator, input, and output around a processor) is optimal for sequential execution; an artificial neural network architecture is distributed, parallel, and feed-forward. [Side-by-side block diagrams of the two architectures.]

  9. ANN Differences – Primary Compute Function. A convolutional neural network's primary compute function is linear algebra (matrix multiplication); a spiking neural network computes with neurons that integrate spikes arriving across synapses, with reinforced and inhibited connections. [Side-by-side diagrams of the two network types.]
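To make the compute-function difference concrete, here is a minimal Python sketch (illustrative only, not BrainChip code; the shapes, threshold, and spike rate are invented) contrasting a CNN-style layer, which reduces to matrix multiplication, with a spiking layer, which integrates binary spike events against synaptic weights and fires when a threshold is crossed:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((32, 64))                  # weights shared by both examples

# CNN-style layer: dense linear algebra (matrix multiply + ReLU).
x = rng.random(64)                        # real-valued activations
cnn_out = np.maximum(W @ x, 0.0)          # every output needs 64 multiply-accumulates

# Spiking layer: threshold logic on binary events, no multiplications.
spikes_in = rng.random(64) < 0.1          # binary spike vector for one time step
membrane = np.zeros(32)                   # membrane potentials of 32 neurons
threshold = 1.0

membrane += W[:, spikes_in].sum(axis=1)   # integrate: add weights of active inputs only
spikes_out = membrane >= threshold        # fire where the threshold is crossed
membrane[spikes_out] = 0.0                # reset fired neurons

print("CNN outputs:", np.round(cnn_out[:4], 2))
print("Spikes out :", spikes_out[:4])
```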

  10. Neural Network Comparison.
      Convolutional neural networks: computational functions are matrix multiplication, ReLU, pooling, and fully connected layers (math intensive, high power, custom acceleration blocks); training is backpropagation off-chip (requires large pre-labeled datasets and long, expensive training periods). Net result: math-intensive cloud compute.
      Spiking neural networks: computational functions are threshold logic and connection reinforcement (math-light, low power, standard logic); training is feed-forward, on- or off-chip (short training cycles, continuous learning). Net result: low-power edge deployments.
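The training row of the comparison can be sketched too. The snippet below assumes a simple STDP-flavoured reinforcement rule as a stand-in for the deck's "feed-forward connection reinforcement" (the actual Akida rule is not specified here): synapses into a neuron that just fired are strengthened for inputs that were active and slightly weakened otherwise, with no backpropagation and no labeled dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.random((32, 64)) * 0.1            # synaptic weights (neurons x inputs)

def reinforce(W, spikes_in, spikes_out, lr=0.01):
    """One feed-forward reinforcement step (illustrative, STDP-flavoured)."""
    for j in np.flatnonzero(spikes_out):
        W[j, spikes_in] += lr             # strengthen synapses from active inputs
        W[j, ~spikes_in] -= 0.1 * lr      # weaken synapses from silent inputs
    return np.clip(W, 0.0, 1.0)

# Continuous learning over a stream of unlabeled spike patterns.
for _ in range(100):
    spikes_in = rng.random(64) < 0.1
    potentials = W[:, spikes_in].sum(axis=1)
    spikes_out = potentials >= 0.9 * potentials.max() if potentials.max() > 0 else np.zeros(32, bool)
    W = reinforce(W, spikes_in, spikes_out)
```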

  11. Previous Neuromorphic Computing Programs. Primarily research programs, investigating neuron simulation (thousands of ways to emulate spiking neurons) and training methods; academic or government programs include SpiNNaker (Human Brain Project), IBM TrueNorth (DARPA), Neurogrid (Stanford), and the Intel Loihi test chip.

  12. [Image-only slide.]

  13. Culmination of Decades of Development

  14. World's First Neuromorphic System-on-Chip (NSoC): an efficient neuron model; innovative training methodologies; everything required for embedded/edge applications, including an on-chip processor and data-to-spike conversion; scalable for server/cloud; neuromorphic computing for multiple markets, including vision systems, cybersecurity, and financial tech.

  15. Akida NSoC Architecture

  16. Akida Neuron Fabric. The most efficient spiking neural network implementation: 1.2M neurons and 10B synapses, able to replicate most CNN functionality (convolution, pooling, fully connected). Right-sized for embedded applications: a CIFAR-10 network with 10 classifiers and 11 layers uses 517K neurons and 616M synapses. Meets demanding performance criteria: 1,100 fps on CIFAR-10 at 82% accuracy.
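As a quick way to relate these figures (illustrative arithmetic only, using just the numbers quoted on this slide), the CIFAR-10 network's footprint can be expressed as a fraction of the fabric's total capacity:

```python
# All values are the figures quoted on the slide above.
fabric_neurons, fabric_synapses = 1_200_000, 10_000_000_000
net_neurons, net_synapses = 517_000, 616_000_000

print(f"Neurons used : {net_neurons / fabric_neurons:.1%} of the fabric")    # ~43%
print(f"Synapses used: {net_synapses / fabric_synapses:.1%} of the fabric")  # ~6%
print(f"Average synapses per neuron in this network: {net_synapses / net_neurons:,.0f}")  # ~1,191
```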

  17. Neuron and Synapse Counts in the Animal Kingdom

  18. The Most Efficient Neuromorphic Computing Fabric. Keys to efficiency: a fixed neuron model with programmable training and firing thresholds; synapses minimized, with right-sized on-chip RAM (6 MB compared to 30-50 MB); flexible neural processor cores highly optimized to perform convolutions, as well as fully connected and pooling layers; efficient connectivity via a global spike bus that connects all neural processors; multi-chip expandable to 1.2 billion neurons. [Chart: relative implementation efficiency in neurons and synapses, with 3X and 300X markers.]
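The slide's "global spike bus" is a connectivity claim rather than a documented interface, but the general idea of event-driven connectivity can be illustrated as address-event delivery: a spike is just a source address broadcast on a shared bus, and each neural processor core reacts only to the sources in its own fan-in table. The sketch below is a software analogy with invented names, not the Akida hardware protocol:

```python
from collections import defaultdict

class SpikeBus:
    """Toy address-event bus: a spike is a source id broadcast to subscribed cores."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # source id -> interested cores

    def connect(self, source_id, core):
        self.subscribers[source_id].append(core)

    def broadcast(self, source_id):
        for core in self.subscribers[source_id]:
            core.receive(source_id)

class NeuralProcessorCore:
    """Toy core: accumulates weighted events and fires on threshold crossings."""
    def __init__(self, name, weights, threshold=1.0):
        self.name = name
        self.weights = weights                 # source id -> synaptic weight
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, source_id):
        self.potential += self.weights.get(source_id, 0.0)
        if self.potential >= self.threshold:
            print(f"{self.name} fires")
            self.potential = 0.0

bus = SpikeBus()
core_a = NeuralProcessorCore("core_a", {0: 0.6, 1: 0.5})
bus.connect(0, core_a)
bus.connect(1, core_a)
bus.broadcast(0)   # accumulates 0.6
bus.broadcast(1)   # crosses the threshold -> "core_a fires"
```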

  19. Neuromorphic Computing Benefits. Tremendous throughput with low power: math-lite, no MACs, no DRAM access for weights. Comparable accuracy: optimized synapses and neurons ensure precision. [Chart: frames per second per watt versus top-1 accuracy and approximate price. BrainChip Akida: CIFAR-10, 82%, ~$10, 6K fps/W. IBM TrueNorth: CIFAR-10, 83%, ~$1,000, 1.4K fps/W. Also plotted on CIFAR-10: Intel Myriad 2 (~$10) and Xilinx ZC709 (~$1,000) at 79-80% and 18 to 6K fps/W; on GoogLeNet: Myriad 2 (~$10) and Tegra TX2 (~$300), both 69%, at 4.2 to 15 fps/W.] Note: for comparison purposes only; data and pricing are estimated and subject to change.
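The efficiency figure on this slide can be tied back to the CIFAR-10 throughput quoted on slide 16 with one line of arithmetic (illustrative only; all inputs are the deck's own estimates):

```python
# ~6,000 frames per second per watt (this slide) at ~1,100 fps on CIFAR-10 (slide 16).
fps_per_watt = 6_000
fps = 1_100
print(f"Implied power draw: {fps / fps_per_watt:.2f} W")   # ~0.18 W, consistent with the <1 W claim on the vision slide
```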

  20. Akida NSoC Applications

  21. Vision Applications: Object Classification. A complete embedded solution, flexible for multiple data types (lidar, pixel, DVS, ultrasound), running at under 1 watt, with on-chip training available for continuous learning. [Diagram: sensor and data interfaces feed the data-to-spike conversion complex, which drives the neuron fabric running an SNN object-classification model and outputs metadata.]
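The data-to-spike "conversion complex" is not documented in this deck, so the following is only a generic sketch of one common conversion scheme, rate coding, in which brighter pixels emit spikes with higher probability at each time step; Akida's actual conversion may work differently:

```python
import numpy as np

def rate_code(image, steps=16, max_rate=0.5, seed=0):
    """Convert a [0, 1] grayscale image into a (steps, H, W) boolean spike train.

    Each pixel spikes independently at every time step with probability
    proportional to its intensity (generic rate coding, not necessarily
    what the Akida conversion complex implements).
    """
    rng = np.random.default_rng(seed)
    probs = np.clip(image, 0.0, 1.0) * max_rate
    return rng.random((steps, *image.shape)) < probs

frame = np.random.default_rng(1).random((32, 32))   # stand-in for a camera frame
spikes = rate_code(frame)
print(spikes.shape, "mean spike rate:", round(spikes.mean(), 3))
```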

  22. Financial Technology Applications: Fintech Data Analysis. Unsupervised learning on chip detects repeating patterns in trading data (clustering); these trading patterns and clusters can then be analyzed for effectiveness. Fintech data (distinguishing parameters for stock characteristics and trading information) can be converted to spikes in software on a CPU or by the Akida NSoC. [Diagram: fintech data passes through data interfaces or a CPU into the conversion complex and the neuron fabric running an SNN pattern-recognition model, which outputs metadata.]
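"Unsupervised learning on chip to detect repeating patterns (clustering)" can be illustrated with a winner-take-all sketch: the neuron whose weights best match the incoming spike pattern fires and pulls its weights toward that pattern, so recurring patterns gradually claim their own neuron. This is a generic clustering analogy with synthetic data, not BrainChip's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n_clusters, n_features = 4, 16
W = rng.random((n_clusters, n_features))        # one weight row per candidate cluster

def cluster_step(W, spike_pattern, lr=0.05):
    """Winner-take-all update: the best-matching neuron moves toward the input."""
    winner = int(np.argmax(W @ spike_pattern))
    W[winner] += lr * (spike_pattern - W[winner])
    return winner

# Stream of synthetic spike patterns drawn from two repeating templates,
# standing in for recurring trading patterns.
templates = (rng.random((2, n_features)) < 0.3).astype(float)
for _ in range(200):
    cluster_step(W, templates[rng.integers(2)])

# Rows that won during training have moved toward the repeating patterns.
print(np.round(W, 2))
```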

  23. Cybersecurity Applications: Malware Detection. Supervised learning classifies files based on file properties. File or packet properties (distinguishing parameters for files or network traffic) can be converted to spikes in software on a CPU or by the Akida NSoC. [Diagram: file or packet properties pass through data interfaces or a CPU into the conversion complex and the neuron fabric running an SNN file-classification model, which outputs metadata.]
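"Supervised learning for file classification based on file properties" can be illustrated with a simple labeled-readout sketch: normalized file or packet properties are converted to a spike pattern, and during training each labeled sample strengthens the synapses of its class neuron. Everything here (the feature count, the 0.5 spike threshold, the two classes, the synthetic data) is invented for illustration; it is not BrainChip's classifier:

```python
import numpy as np

rng = np.random.default_rng(4)
n_classes, n_props = 2, 24            # e.g. benign vs. malware, 24 file/packet properties

def to_spikes(props):
    """Convert normalized file/packet properties to a binary spike pattern."""
    return (props > 0.5).astype(float)

def train_supervised(samples, labels, lr=0.05):
    W = np.zeros((n_classes, n_props))
    for props, y in zip(samples, labels):
        W[y] += lr * to_spikes(props)  # reinforce synapses of the labeled class neuron
    return W

def classify(W, props):
    return int(np.argmax(W @ to_spikes(props)))

# Synthetic training data: two property distributions standing in for benign/malware.
benign = rng.random((100, n_props)) * 0.6
malware = 0.4 + rng.random((100, n_props)) * 0.6
samples = np.vstack([benign, malware])
labels = np.array([0] * 100 + [1] * 100)
W = train_supervised(samples, labels)
print("Predicted class for a malware-like sample:",
      classify(W, 0.4 + rng.random(n_props) * 0.6))   # expect class 1
```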

  24. Cybersecurity Applications: Anomaly Detection. Supervised learning on known-good behavior and anomalous behavior produces behavior classifiers. Behavior properties can be CPU loads for common applications, network packets, power consumption, fan speed, and so on. [Diagram: behavior properties pass through data interfaces or a CPU into the conversion complex and the neuron fabric running the SNN model, which outputs metadata.]

  25. Creating SNNs: The Akida Development Environment

  26. Akida Training Methods. Unsupervised learning from unlabeled data: detection of unknown patterns in the data; on-chip or off-chip. Unsupervised learning with label classification: the first layers learn unlabeled features, which are then labeled in the fully connected layer; on-chip or off-chip.
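The second method, unsupervised learning with label classification, can be sketched as a two-stage pipeline: the early layers learn features from unlabeled spike data, and only the final fully connected layer is associated with labels. The code below is a schematic of that flow with invented helper names and synthetic data; it is not the Akida Development Environment API:

```python
import numpy as np

rng = np.random.default_rng(3)

def learn_features_unsupervised(spike_data, n_features=8, lr=0.05):
    """Stage 1: winner-take-all feature learning on unlabeled spike patterns."""
    W = rng.random((n_features, spike_data.shape[1]))
    for x in spike_data:
        winner = int(np.argmax(W @ x))
        W[winner] += lr * (x - W[winner])
    return W

def label_fully_connected_layer(W, spike_data, labels, n_classes):
    """Stage 2: associate each learned feature with the label it fires for most."""
    votes = np.zeros((W.shape[0], n_classes))
    for x, y in zip(spike_data, labels):
        votes[int(np.argmax(W @ x)), y] += 1
    return votes.argmax(axis=1)           # feature index -> class label

# Synthetic unlabeled data for stage 1, then a small labeled set for stage 2.
unlabeled = (rng.random((500, 32)) < 0.2).astype(float)
W = learn_features_unsupervised(unlabeled)
labeled = (rng.random((50, 32)) < 0.2).astype(float)
labels = rng.integers(0, 3, size=50)
readout = label_fully_connected_layer(W, labeled, labels, n_classes=3)
print("Feature-to-label mapping:", readout)
```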

  27. World's first NSoC: the low power and small footprint of neuromorphic computing; the highest performance per watt and per dollar; estimated tape-out 1H 2019, samples 2H 2019; a complete solution for embedded/edge applications, yet scalable for cloud/server usage.
