
Binarized Neural Network to Compress DNN, Xianda (Bryce) Xu - PowerPoint PPT Presentation



1. Using Binarized Neural Network to Compress DNN
Xianda (Bryce) Xu, xxu373@wisc.edu
November 7th

2. Why is model compression so important?
Problem 1: Computation Cost
A = σ(X_i W^T + B). Multiplication is energy- and time-consuming!
Problem 2: Memory Cost
Figure 1. AlexNet architecture (ImageNet Large Scale Visual Recognition Challenge): ~60 M parameters, Top-1 accuracy 57.1%, Top-5 accuracy 80.2%.
At float-32, ~60 M parameters = 240 MB of memory! (A quick calculation follows below.)
The energy and memory are limited on mobile devices and embedded devices!
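As a quick sanity check on those numbers (my own back-of-the-envelope sketch, not part of the slides), the 240 MB figure follows directly from 60 M float-32 parameters, and full binarization would shrink it by the same 32x factor discussed later:

    # Rough memory estimate for an AlexNet-sized model (illustrative numbers only).
    params = 60_000_000          # ~60 M parameters
    bytes_per_float32 = 4        # 32 bits per full-precision parameter
    bytes_per_binary = 1 / 8     # 1 bit per binarized parameter

    print(params * bytes_per_float32 / 1e6)   # ~240 MB at full precision
    print(params * bytes_per_binary / 1e6)    # ~7.5 MB if fully binarized (32x smaller)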

3. How can we compress DNN?
-- Pruning: removing unimportant parameters.
-- Decomposition: apply Singular Value Decomposition to the weight matrix W (see the SVD sketch below).
-- Distillation: use the combined output of several large networks to train a simpler model.
-- Quantization: finding an efficient representation of each parameter.
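To make the decomposition bullet concrete, here is a minimal NumPy sketch of low-rank factorization of a weight matrix via SVD; the matrix sizes and the rank k are illustrative values I chose, not numbers from the talk:

    import numpy as np

    # Low-rank decomposition of a fully connected layer's weights via SVD (sketch).
    W = np.random.randn(1024, 4096)            # original weight matrix (out x in)
    U, S, Vt = np.linalg.svd(W, full_matrices=False)

    k = 64                                     # keep only the top-k singular values
    W1 = U[:, :k] * S[:k]                      # shape (1024, k)
    W2 = Vt[:k, :]                             # shape (k, 4096)

    # x @ W.T is approximated by (x @ W2.T) @ W1.T with far fewer parameters:
    # 1024 * 4096 ~ 4.2 M   vs   (1024 + 4096) * 64 ~ 0.33 M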

4. What is BNN?
In brief, it binarizes parameters and activations to +1 and -1.
Why should we choose BNN?
-- Reduce memory cost: a full-precision parameter takes 32 bits, a binary parameter takes 1 bit, so the network is compressed by 32x theoretically.
-- Save energy and speed up: full-precision multiplication becomes binary multiplication (XNOR, ⊙), so multiply-accumulations can be replaced by XNOR and bit-count (see the bit-level sketch below).
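A minimal sketch of why the +1/-1 dot product reduces to bit operations (my illustration, not code from the talk); with the encoding +1 -> 1, -1 -> 0 used here, the binary multiplication is an XNOR of the packed bits followed by a bit-count. The helper name binary_dot is mine:

    import numpy as np

    def binary_dot(a, b):
        """Dot product of two +/-1 vectors via XNOR and popcount (sketch)."""
        n = len(a)
        # Encode +1 -> 1, -1 -> 0 and pack each vector into one integer.
        a_bits = int("".join("1" if v > 0 else "0" for v in a), 2)
        b_bits = int("".join("1" if v > 0 else "0" for v in b), 2)
        xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # bit is 1 where the signs agree
        matches = bin(xnor).count("1")               # popcount
        return 2 * matches - n                       # agreements minus disagreements

    a = np.random.choice([-1, 1], size=16)
    b = np.random.choice([-1, 1], size=16)
    assert binary_dot(a, b) == int(np.dot(a, b))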

5. How do we implement BNN?
Problem 1: How to binarize?
-- Stochastic binarization
-- Deterministic binarization
Though stochastic binarization seems more reasonable, we prefer deterministic binarization for its efficiency.
Problem 2: When to binarize?
-- Forward propagation:
1. First layer: we do not binarize the input, but we binarize the Ws and As.
2. Hidden layers: we binarize all the Ws and As.
3. Output layer: we binarize the Ws and only binarize the output in training.
-- Back-propagation: we do not binarize the gradients, but we have to clip the weights when we update them.
(Both binarization rules and the weight clipping are sketched in code below.)
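The two binarization rules and the weight clipping can be sketched in PyTorch roughly as follows; the function names and the stand-in model are my own, and the paper's actual implementation may differ in details:

    import torch

    def binarize_deterministic(x):
        # sign(x), with sign(0) mapped to +1
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    def binarize_stochastic(x):
        # +1 with probability hard_sigmoid(x) = clip((x + 1) / 2, 0, 1), else -1
        p = torch.clamp((x + 1) / 2, 0, 1)
        return torch.where(torch.rand_like(x) < p, torch.ones_like(x), -torch.ones_like(x))

    # Clipping after each optimizer step keeps the real-valued weights in [-1, 1],
    # i.e. in the range the binarization can represent.
    model = torch.nn.Linear(128, 10)          # stand-in for one BNN layer
    with torch.no_grad():
        for w in model.parameters():
            w.clamp_(-1, 1)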

6. How do we implement BNN?
Problem 3: How to do back-propagation?
Recall: we compute g_l = ∂L/∂I_l, the gradient of the loss function with respect to I_l, the input of the l-th layer:
-- Activation gradients: g_{l-1} = g_l W_l^T
-- Weight gradients: g_{W_l} = g_l^T I_{l-1}
But since we use a binarizing function (sign(x)), the gradients are all zero!
So we use Htanh(x) = Clip(x, -1, 1) as our activation function and propagate its gradient through the sign function instead: the Straight-Through Estimator (STE).
Yoshua Bengio et al., "Estimating or propagating gradients through stochastic neurons for conditional computation" (15 Aug 2013).
Adapted for BNN (with a_k^b = sign(a_k)):
-- Last layer: g_{a_L} = ∂C/∂a_L
-- Hidden layers (STE): g_{a_k} = g_{a_k^b} ∘ 1_{|a_k| ≤ 1}
-- Back batch norm: g_{s_k} = BackBatchNorm(g_{a_k})
-- Activation gradients: g_{a_{k-1}^b} = g_{s_k} W_k^b
-- Weight gradients: g_{W_k^b} = g_{s_k}^T a_{k-1}^b
(A code sketch of the STE follows below.)
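A minimal PyTorch sketch of the STE described above (my illustration, not the author's training code): the forward pass applies sign(x), and the backward pass lets the gradient through only where |x| <= 1, which is exactly the gradient of Htanh(x) = Clip(x, -1, 1):

    import torch

    class BinarizeSTE(torch.autograd.Function):
        """sign(x) forward; Htanh gradient (straight-through estimator) backward."""

        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # Pass the gradient through only where |x| <= 1.
            return grad_output * (x.abs() <= 1).to(grad_output.dtype)

    x = torch.randn(4, requires_grad=True)
    y = BinarizeSTE.apply(x).sum()
    y.backward()
    print(x.grad)   # 1 where |x| <= 1, 0 elsewhere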

7. How was my experiment?
The architecture of the BNN in this paper, fed with CIFAR-10 (block diagram omitted).
In this paper: validation accuracy 89%.
My experiment: training accuracy 95%, validation accuracy 84%.
https://github.com/brycexu/BinarizedNeuralNetwork/tree/master/SecondTry

8. The problems with the current BNN model? Accuracy loss!
Possible reasons?
Problem 1: Robustness Issue
BNN always has larger output changes, which makes it more susceptible to input perturbation.
Problem 2: Stability Issue
BNN is hard to optimize due to problems such as gradient mismatch, which comes from the non-smoothness of the whole architecture.
Gradient mismatch: the effective activation function in a fixed-point network is a non-differentiable (discrete) function. That is why we cannot apply ReLU in BNN!
Darryl D. Lin et al., "Overcoming challenges in fixed point training of deep convolutional networks" (8 Jul 2016).

9. The potential ways to optimize the BNN model?
Robustness Issue
1. Adding more bits?
-- Ternary model (-1, 0, +1) (see the sketch below)
-- Quantization
Research shows that having more bits for the activations improves the model's robustness.
2. Weakening the learning rate?
Research shows that a higher learning rate can cause turbulence inside the model, so BNN needs finer tuning.
3. Adding more weights?
-- WRPN
4. Modifying the architecture?
-- AdaBoost (BENN)
-- Recursively using binarization
More bits per network, or more networks per bit?
Stability Issue
1. Better activation function?
2. Better back-propagation methods?
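As one concrete instance of the "adding more bits" direction, a ternary quantizer with a dead zone around zero might look like the sketch below; the threshold value is an assumption of mine, not a setting from any of the cited works:

    import torch

    def ternarize(x, threshold=0.05):
        # Map values near zero to 0, the rest to +1 / -1 (ternary model: -1, 0, +1).
        out = torch.sign(x)
        out[x.abs() < threshold] = 0
        return out

    w = torch.randn(5)
    print(w)
    print(ternarize(w))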

10. Thank you! xxu373@wisc.edu
