Outlier Channel Splitting: Improving DNN Quantization without Retraining




  1. Outlier Channel Splitting: Improving DNN Quantization without Retraining
     Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Christopher De Sa, Zhiru Zhang
     School of Electrical and Computer Engineering, Cornell University

  2. Specialized DNN Processors are Ubiquitous
     – Mobile: Apple (A12), Samsung (Exynos 9820), Huawei (Kirin 970), Qualcomm (Hexagon)
     – Cloud: Google (TPU), Microsoft (Brainwave), Xilinx (EC2 F1), Intel (FPGAs, Nervana), AWS offerings
     – Embedded: Google (Edge TPU), Intel (Movidius), Deephi/Xilinx (Zynq), ARM (announced), many startups

  3. Quantization is Key to Hardware Acceleration
     ▸ Lower precision → less energy and area per op
     ▸ Lower precision → fewer bits of storage per data
     [Figure: ResNet-50 performance, FPGA with float, 3-bit-mantissa, and 2-bit-mantissa formats vs. GPU]
     https://developer.nvidia.com/tensorrt
     E. Chung, J. Fowers et al. Serving DNNs in Real Time at Datacenter Scale with Project Brainwave. IEEE Micro, April 2018.
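As a quick illustration of the storage claim (our own back-of-envelope sketch; the ~25.6M-weight count for ResNet-50 is a commonly cited approximation, not a number from the slides):

```python
# Back-of-envelope weight storage for ResNet-50, assuming ~25.6M weights
# (a commonly cited approximation, not a figure from these slides).
n_weights = 25.6e6
for bits in (32, 8, 4):
    megabytes = n_weights * bits / 8 / 1e6   # bits -> bytes -> MB
    print(f"{bits:>2}-bit weights: {megabytes:.0f} MB")
# 32-bit: ~102 MB, 8-bit: ~26 MB, 4-bit: ~13 MB
```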

  4. Data-Free Quantization
     ▸ DNN quantization techniques that require training are discouraged by the current ML service model
     [Diagram: the ML customer trains a floating-point model on private data; the ML service provider performs optimization and model serving without access to that data]
     ▸ Reasons to prefer data-free quantization:
       1. ML providers typically cannot access customer training data
       2. Customer is using a pre-trained off-the-shelf model
       3. Customer is unwilling to retrain a legacy model
       4. Customer lacks the expertise for quantization training

  5. Paper Summary
     Baseline: Linear Quantizer
       − Poor quantizer resolution due to outliers
     Prior Art: Clipping
       + Reduces quantization noise
       + Used in NVIDIA TensorRT
       − Distorts outliers
     Our Method: Outlier Channel Splitting
       + Reduces quantization noise
       + Removes outliers
       − Model size overhead
     [Figure: weight distributions on a log-frequency axis, showing outliers under a linear quantizer, distorted outliers after clipping, and outliers removed by OCS]
     ▸ OCS improves quantization without retraining
     ▸ OCS can outperform existing methods with negligible size overhead (<2%) in both CNNs and RNNs
     ▸ We also perform a comprehensive evaluation of different clipping methods in the literature
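To make the baseline and prior art concrete, here is a minimal NumPy sketch of a symmetric linear quantizer with optional clipping (our own illustration, not the paper's code; the 4-bit width and 0.4 clipping threshold are arbitrary choices):

```python
import numpy as np

def linear_quantize(w, n_bits=4, clip_max=None):
    """Symmetric linear quantizer. Without clipping, the largest outlier
    sets the scale and wastes resolution on the bulk of the distribution;
    clipping distorts outliers in exchange for a finer grid."""
    t = np.abs(w).max() if clip_max is None else clip_max
    n_levels = 2 ** (n_bits - 1) - 1            # signed, symmetric range
    delta = t / n_levels                        # quantization step
    return np.round(np.clip(w, -t, t) / delta) * delta

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=10_000)
w[0] = 2.0                                      # inject a single outlier
mse_plain = np.mean((w - linear_quantize(w, 4)) ** 2)
mse_clip = np.mean((w - linear_quantize(w, 4, clip_max=0.4)) ** 2)
print(mse_plain, mse_clip)                      # clipping gives lower MSE here
```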

  6. Outlier Channel Splitting
     [Figure: a small network x1, x2 → y1, y2 → z with weights v1, v2 into z; panel (a) duplicates node y2, panel (b) duplicates weight v2]
     Original:          z = v1·y1 + v2·y2
     (a) Duplicate y2:  z = v1·y1 + (v2/2)·y2 + (v2/2)·y2
     (b) Duplicate v2:  z = v1·y1 + v2·(y2/2) + v2·(y2/2)
     ▸ OCS splits weights or activations, halving them
       – (a) Duplicate node y2 to halve the weight v2
       – (b) Duplicate weight v2 to halve the activation y2
       – Inspired by Net2Net, a paper on layer transformations
     T. Chen, I. Goodfellow, J. Shlens. Net2Net: Accelerating Learning via Knowledge Transfer. ICLR’16, May 2016.
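To make identity (a) concrete, here is a minimal NumPy sketch of weight-side splitting for a fully connected layer y = W·x (our own illustration, not the authors' released implementation; picking the largest-magnitude weight as the outlier follows the spirit of OCS):

```python
import numpy as np

def split_largest_channel(W, x):
    """Naive weight-side split (case (a) above) for y = W @ x:
    duplicate the input channel holding the largest-magnitude weight
    and halve both weight copies, so the output is unchanged."""
    c = np.unravel_index(np.argmax(np.abs(W)), W.shape)[1]
    W_split = np.concatenate([W, W[:, c:c + 1]], axis=1)
    W_split[:, c] /= 2                          # halve the original copy
    W_split[:, -1] /= 2                         # halve the duplicate
    x_split = np.concatenate([x, x[c:c + 1]])   # duplicate the activation
    return W_split, x_split

rng = np.random.default_rng(0)
W, x = rng.normal(size=(4, 8)), rng.normal(size=8)
W2, x2 = split_largest_channel(W, x)
assert np.allclose(W @ x, W2 @ x2)              # exact functional equivalence
```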

  7. Quantization-Aware Splitting
     Naïve splitting (Net2Net): x → (x/2, x/2)
       After quantization with step Δ, both halves round in the same direction
     Quantization-aware splitting: x → (x/2 − Δ/4, x/2 + Δ/4)
       The halves can round in opposite directions, which helps cancel out quantization noise
     [Figure: number lines with quantization grid points Δ, 2Δ, 3Δ illustrating both cases]
     ▸ In the paper, we show that QA splitting preserves the expected quantization noise on a single value
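A tiny numeric check of this effect (our own sketch; Δ = 1 and x = 1.25 are arbitrary values chosen so that the naïve halves both round up):

```python
import numpy as np

def quantize(v, delta):
    return np.round(v / delta) * delta          # round to the nearest grid point

def naive_split(x):
    return x / 2, x / 2                         # both halves round the same way

def qa_split(x, delta):
    # Offsetting the halves by +/- delta/4 keeps their sum exactly x
    # while letting them round in opposite directions after quantization.
    return x / 2 - delta / 4, x / 2 + delta / 4

delta, x = 1.0, 1.25
print(quantize(x, delta))                                   # 1.0 (error 0.25)
print(sum(quantize(h, delta) for h in naive_split(x)))      # 2.0 (error 0.75)
print(sum(quantize(h, delta) for h in qa_split(x, delta)))  # 1.0 (error 0.25)
```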

  8. Results on CNNs
     Quantized accuracy, reported as ± vs. the best clipping result. In these results OCS is constrained to ~2% size overhead.

     Network (Float Acc.)   Wt. Bits    OCS    OCS + Clip
     VGG-16 BN (73.4)          6       +1.0      +0.5
                               5       +3.3      +2.6
                               4      −33.1      +4.4
     ResNet-50 (76.1)          6       +0.4      +0.5
                               5       +2.0      +2.0
                               4      −26.8      +4.2
     DenseNet-121 (74.4)       6       +1.6      +1.7
                               5       +4.3      +5.3
                               4       −5.1     +13.9
     Inception-V3 (75.9)       6       +5.6      +5.5
                               5      +13.5     +19.5
                               4       −1.4      +0.7

     ▸ OCS constrained to 2% overhead outperforms Clipping at 6 and 5 bits
     ▸ OCS + Clipping outperforms Clipping alone at 4 bits

  9. Thank you!
     Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Christopher De Sa, Zhiru Zhang. Improving Neural Network Quantization without Retraining using Outlier Channel Splitting. ICML, June 2019.
     Code available at: https://github.com/cornell-zhang/dnn-quant-ocs
