  1. Incremental and Approximate Inference for Faster Occlusion-based Deep CNN Explanations. Supun Nakandala, Arun Kumar, and Yannis Papakonstantinou. University of California, San Diego

  2. Introduction. Deep Convolutional Neural Networks (CNNs) are revolutionizing many image analytics tasks, e.g., surveillance and autonomous vehicles.

  3. Background: What is a CNN? A CNN maps an input image (a 3-D array) through a series of convolution layer transformations to a predicted class probability, e.g., P(pneumonia). Each convolution layer transforms an input 3-D array into an output 3-D array. *Simplified representation of a CNN

  4. Explainability of CNN predictions is important in many critical applications, such as healthcare!

  5. How to Explain CNN Predictions? This is an active research area; occlusion-based explanation (OBE) is widely used by practitioners.

  6. Occlusion-based Explanations (OBE). Occlude a patch of the original image and re-run CNN inference to see how P(pneumonia) changes; repeating this over all patch positions yields an occlusion heatmap that localizes the region of interest. Source: http://blog.qure.ai/notes/visualizing_deep_learning
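
A minimal PyTorch sketch of this naive OBE loop (one full inference per occlusion position), to make the procedure concrete. The function name `occlusion_heatmap`, the patch/stride defaults, and the use of the probability drop as the heatmap value are illustrative assumptions, not the authors' exact setup.

    import torch

    def occlusion_heatmap(model, image, target_class, patch=16, stride=4, fill=0.0):
        """Slide an occluding patch over the image and record how the
        target-class probability changes (naive OBE: one inference per position)."""
        model.eval()
        _, H, W = image.shape                      # image: (C, H, W) tensor
        heat = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
        with torch.no_grad():
            base = model(image.unsqueeze(0)).softmax(dim=1)[0, target_class]
            for i, y in enumerate(range(0, H - patch + 1, stride)):
                for j, x in enumerate(range(0, W - patch + 1, stride)):
                    occluded = image.clone()
                    occluded[:, y:y + patch, x:x + patch] = fill   # occlude one patch
                    prob = model(occluded.unsqueeze(0)).softmax(dim=1)[0, target_class]
                    heat[i, j] = base - prob       # large drop => important region
        return heat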

  7. Problem: OBE is Highly Time Consuming. CNN inference is itself time consuming (e.g., Inception3: 35 MFLOPs, ResNet152: 65 MFLOPs), and OBE needs possibly thousands of re-inference requests, so it can take from several seconds to several minutes! Our idea: cast OBE as a query optimization task and apply database-inspired optimization techniques. *MFLOPs: Mega Floating Point Operations

  8. Outline: 1. Background on CNN Internals; 2. Incremental CNN Inference (inspired by MQO + IVM); 3. Approximate CNN Inference (inspired by AQP + Vision Science); 4. Experimental Results. *IVM: Incremental View Maintenance *MQO: Multi-Query Optimization *AQP: Approximate Query Processing

  9. Background: What is a CNN? A CNN maps an input image (a 3-D array) through a series of convolution layer transformations to a predicted class probability, e.g., P(pneumonia). Each convolution layer transforms an input 3-D array into an output 3-D array. *Simplified representation of a CNN

  10. Background: Convolution Layer. Each learned 3-D filter kernel K1, ..., Kn is slid over the input 3-D array X; each output value is SUM(Kn o X'), where X' is the local input patch under the kernel and o is the Hadamard (element-wise) product. Each kernel produces one 2-D slice of the output 3-D array.
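
To make the SUM(K o X') definition above concrete, here is a minimal NumPy sketch of a convolution layer with stride 1 and no padding (a simplification; real layers also have strides, padding, and biases).

    import numpy as np

    def conv_layer(X, K):
        """Direct form of the slide's definition: each output value is
        SUM(K_n o X'), the Hadamard product of kernel K_n with a local patch X'.
        X: (H, W, D) input 3-D array; K: (n, FH, FW, D) filter kernels.
        Stride 1, no padding."""
        H, W, D = X.shape
        n, FH, FW, _ = K.shape
        out = np.zeros((H - FH + 1, W - FW + 1, n))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                patch = X[y:y + FH, x:x + FW, :]            # X'
                for k in range(n):
                    out[y, x, k] = np.sum(K[k] * patch)     # SUM(K_k o X')
        return out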

  11. Reimagining Convolution as a Query. Input: A(X, Y, Z, V); Filter Kernels: K(X, Y, Z, N, V).
      SELECT A.X AS X, A.Y AS Y, K.N AS Z, SUM(A.V * K.V) AS V
      FROM K, (SELECT A.X, A.Y, A.Z, A.V,
                      A.X - T.X + FW/2 AS A_K.X,
                      A.Y - T.Y + FH/2 AS A_K.Y
               FROM A, A AS T
               WHERE ABS(A.X - T.X) <= FW/2
                 AND ABS(A.Y - T.Y) <= FH/2)
      WHERE A_K.X = K.X AND A_K.Y = K.Y
      GROUP BY A.X, A.Y, K.N
      A CNN performs a series of joins and aggregates. Takeaway: the linear algebra data model improves hardware utilization. *FW: Filter Width, FH: Filter Height
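
The takeaway about the linear algebra data model can be illustrated with a hedged NumPy sketch: lowering every local patch into a row (im2col) turns the joins and aggregates above into a single dense matrix multiply, which is what modern hardware executes efficiently. This illustrates the idea; it is not the system's code.

    import numpy as np

    def conv_as_matmul(X, K):
        """Same convolution as the direct form above, but as one matrix multiply
        (im2col). X: (H, W, D); K: (n, FH, FW, D). Stride 1, no padding."""
        H, W, D = X.shape
        n, FH, FW, _ = K.shape
        OH, OW = H - FH + 1, W - FW + 1
        # Lower every (FH, FW, D) patch into one row of a dense matrix.
        rows = np.stack([X[y:y + FH, x:x + FW, :].ravel()
                         for y in range(OH) for x in range(OW)])   # (OH*OW, FH*FW*D)
        Kmat = K.reshape(n, -1)                                    # (n, FH*FW*D)
        return (rows @ Kmat.T).reshape(OH, OW, n)                  # joins + aggregates as one GEMM

For random inputs, np.allclose(conv_layer(X, K), conv_as_matmul(X, K)) should hold, confirming that the two formulations compute the same result.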

  12. Outline: 1. Background on CNN Internals; 2. Incremental CNN Inference (inspired by MQO + IVM); 3. Approximate CNN Inference (inspired by AQP + Vision Science); 4. Experimental Results. *IVM: Incremental View Maintenance *MQO: Multi-Query Optimization *AQP: Approximate Query Processing

  13. Observation: Redundant Computations. Across the many occluded images, most of the input stays the same; only the occluded patch differs. Consequently, only a limited region of each layer's output (Layer 1, Layer 2, ..., Layer N) changes. This is a new instance of the Incremental View Maintenance (IVM) task in databases; the geometric properties of the CNN determine how to propagate the changes. *Only a cross section is shown; the changed region spans the depth dimension.

  14. Our Solution: Incremental Inference. Cast OBE as a set of "queries", one per occluded image. Run the original image once to get its P(pneumonia) and materialize the layer outputs as materialized views; then use an algebraic framework for incremental propagation (IVM) so that no computations are redundant. However, executing the many occluded-image queries sequentially throttles performance, especially on GPUs!
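
A minimal NumPy sketch of the incremental idea for a single stride-1, no-padding convolution layer: given the materialized output for the original image, recompute only the output cells whose receptive fields overlap the changed input patch and write them into the view. This is an illustration under simplifying assumptions, not the system's full algebraic framework (which also handles strides, padding, pooling, and batching).

    import numpy as np

    def incremental_conv(X_new, K, out_view, y0, x0, ph, pw):
        """Incrementally maintain the materialized output `out_view` of a
        stride-1, no-padding convolution after the input patch
        X_new[y0:y0+ph, x0:x0+pw, :] has changed (e.g., been occluded)."""
        H, W, D = X_new.shape
        n, FH, FW, _ = K.shape
        OH, OW = H - FH + 1, W - FW + 1
        # Output cells whose receptive field overlaps the changed input patch.
        oy0, oy1 = max(0, y0 - FH + 1), min(OH, y0 + ph)
        ox0, ox1 = max(0, x0 - FW + 1), min(OW, x0 + pw)
        out = out_view.copy()
        for y in range(oy0, oy1):
            for x in range(ox0, ox1):
                patch = X_new[y:y + FH, x:x + FW, :]
                for k in range(n):
                    out[y, x, k] = np.sum(K[k] * patch)
        # Coordinates of the changed output region, to propagate to the next layer.
        return out, (oy0, ox0, oy1 - oy0, ox1 - ox0)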

  15. Our Solution: Batched Incremental Inference. Share and reuse the materialized views across all occluded images, so multiple IVM queries run in one go (a form of MQO). We create a custom GPU kernel for parallel memory copies, which improves hardware utilization. Read additional context …
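
A hedged PyTorch sketch of the batched execution idea for the first convolution layer: crop an equal-sized read context around every occlusion position, stack the crops, and issue one conv2d call so all IVM "queries" share a single GPU kernel launch. Border clipping, deeper layers, and the custom memory-copy kernel are omitted here.

    import torch
    import torch.nn.functional as F

    def batched_incremental_conv(image, weight, positions, patch, fill=0.0):
        """Batched execution for one stride-1 conv layer. image: (C, H, W);
        weight: (n, C, FH, FW); positions: list of (y, x) occlusion corners,
        assumed far enough from the border that the crops are not clipped."""
        C, H, W = image.shape
        n, _, FH, FW = weight.shape
        ctx_h, ctx_w = patch + 2 * (FH - 1), patch + 2 * (FW - 1)   # read-context size
        crops = []
        for (y, x) in positions:
            occluded = image.clone()
            occluded[:, y:y + patch, x:x + patch] = fill
            crops.append(occluded[:, y - (FH - 1):y - (FH - 1) + ctx_h,
                                     x - (FW - 1):x - (FW - 1) + ctx_w])
        batch = torch.stack(crops)          # (B, C, ctx_h, ctx_w)
        return F.conv2d(batch, weight)      # (B, n, patch + FH - 1, patch + FW - 1)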

  16. What speedups can we expect? Theoretical speedups for popular deep CNNs with our IVM. Issue: the "Avalanche Effect" causes low speedups in some CNNs.

  17. Outline: 1. Background on CNN Internals; 2. Incremental CNN Inference; 3. Approximate CNN Inference (inspired by AQP + Vision Science); 4. Experimental Results.

  18. Approximate CNN Inference. Basic idea: trade off visual quality of the heatmap to reduce runtime. How do we quantify the quality of the new heatmap? Use the Structural Similarity Index (SSIM) between the exact heatmap and the approximate heatmap: SSIM ranges from -1.0 to 1.0, where 1.0 means identical, and SSIM values close to 0.9 are widely used.
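
A minimal sketch of how SSIM between the exact and approximate heatmaps could be computed, using the standard Wang et al. constants and a single global window for brevity; library implementations (e.g., scikit-image's structural_similarity) use local sliding windows instead.

    import numpy as np

    def global_ssim(a, b, data_range=1.0):
        """Global SSIM between two heatmaps: the Wang et al. formula applied
        once over the full arrays rather than over local windows."""
        c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
        mu_a, mu_b = a.mean(), b.mean()
        var_a, var_b = a.var(), b.var()
        cov = ((a - mu_a) * (b - mu_b)).mean()
        return ((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
                ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))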

  19. Overview of our Approximations. a. Projective Field Thresholding: combats the Avalanche Effect by pruning computations; makes every query in OBE faster. b. Adaptive Drill-down: lower-granularity queries for less sensitive regions; reduces the total number of queries in OBE.

  20. Avalanche Effect. Example: 1-D convolution. At each layer, the projective field of the occluded patch grows by the filter kernel's extent, so the gains from incremental inference diminish at later layers!
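
A tiny Python sketch that quantifies the effect: the changed region (the occluded patch's projective field) grows by the filter extent at every stride-1 layer, so the fraction of each layer's output that must be recomputed keeps rising. The widths and depths in the example are illustrative.

    def changed_fraction_per_layer(input_width, patch_width, filter_widths):
        """For a 1-D stack of stride-1, 'same'-padded convolutions, return the
        fraction of each layer's output that a single occluded patch touches.
        Shows why incremental-inference gains diminish at later layers."""
        width = patch_width
        fractions = []
        for fw in filter_widths:
            width = min(input_width, width + fw - 1)   # projective field grows by fw - 1
            fractions.append(width / input_width)
        return fractions

    # Example: a 224-wide input, a 16-wide patch, and ten 3-wide filters:
    # the changed fraction grows from ~8% toward ~16% layer by layer.
    print(changed_fraction_per_layer(224, 16, [3] * 10))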

  21. Our Solution: Projective Field Thresholding. Truncate the projective field, keeping only the central fraction τ of it (threshold τ = 5/9 in the example). In the 1-D example, the number of different paths from the occluded input cell to each output cell is 7 at the center and falls off to 6, 6, 3, 3, 1, 1 toward the edges, so the pruned peripheral cells contribute the least.
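
Extending the growth sketch above, a hedged illustration of thresholding: after each layer's growth, keep only the central τ fraction of the changed region. The system's exact truncation rule may differ; this only shows how τ caps the avalanche.

    def thresholded_fraction_per_layer(input_width, patch_width, filter_widths, tau=5/9):
        """Same 1-D growth model as above, but with projective field thresholding:
        after each layer, keep only the central tau fraction of the grown changed
        region, pruning the peripheral cells that few paths reach."""
        width = patch_width
        fractions = []
        for fw in filter_widths:
            width = min(input_width, width + fw - 1)   # grow as before...
            width = max(1, int(round(tau * width)))    # ...then truncate to tau of it
            fractions.append(width / input_width)
        return fractions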

  22. How do we pick τ? The visual quality vs. runtime trade-off depends on image and CNN properties (example heatmaps shown for τ = 1.0, 0.8, 0.6, 0.4). We auto-tune τ for an SSIM target using a sample image set; this is done once upfront during system configuration.
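
A hedged sketch of that upfront tuning step: sweep candidate τ values on a small sample image set, compare the approximate heatmaps against the exact ones with SSIM, and keep the smallest (fastest) τ that still meets the target. `approx_heatmap` and `ssim_fn` stand in for the system's heatmap routine and an SSIM implementation; both names are assumptions for illustration.

    import numpy as np

    def tune_tau(model, sample_images, exact_heatmaps, approx_heatmap, ssim_fn,
                 target=0.90, candidates=(1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4)):
        """Pick the smallest tau whose mean SSIM against the exact heatmaps
        still meets the target, using a small sample image set."""
        best = 1.0
        for tau in sorted(candidates, reverse=True):          # from 1.0 downward
            scores = [ssim_fn(exact, approx_heatmap(model, img, tau))
                      for img, exact in zip(sample_images, exact_heatmaps)]
            if np.mean(scores) >= target:
                best = tau                                     # still meets the target
            else:
                break                                          # quality fell below target
        return best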

  23. Outline: 1. Background on CNN Internals; 2. Incremental CNN Inference; 3. Approximate CNN Inference: a. Projective Field Thresholding, b. Adaptive Drill-down; 4. Experimental Results.

  24. Workload:
      Images: Chest X-Ray Images
      Task: Predicting Pneumonia
      CNNs: VGG16, ResNet18, Inception3
      Occluding Patch Color: Black
      Occluding Patch Size: 16 x 16
      Occluding Patch Stride: 4
      SSIM Target: 0.90
      More datasets in the paper.

  25. Experimental Setup:
      CPU: Intel i7 @ 3.4 GHz
      GPU: One Nvidia Titan Xp
      Memory: 32 GB
      Deep Learning Toolkit: PyTorch version 0.4.0

  26. [Results chart: runtimes of Naive, Incremental (Exact), and Incremental + Approximate OBE for VGG16, ResNet18, and Inception3, on GPU (runtime in seconds, 0-12 s scale) and CPU (runtime in minutes, 0-18 min scale). Speedup annotations over Naive on GPU: 0.7x, 2.3x, 3.9x, 1.6x, 8.6x, 3.1x; on CPU: 1.5x, 3.7x, 5.4x, 13.8x, 2.1x, 4.9x.]

  27. Summary. Explaining CNN predictions is important, and OBE is widely used. We present DB-inspired incremental and approximate inference optimizations to accelerate OBE; they make OBE more amenable to interactive diagnosis of CNN predictions. Project Web Page: https://adalabucsd.github.io/krypton.html Video: https://tinyurl.com/y2oy9hqq Contact: snakanda@eng.ucsd.edu

  28. System Architecture (flow of data and invocations between Krypton (Python), PyTorch, the Custom Kernel Interface (C), the Custom Kernel Implementation (CUDA), the cuDNN library, and GPU memory):
      0: Invoke incremental inference (Krypton, Python).
      1: Initialize the input tensors, kernel weights, and output buffer in GPU memory (PyTorch).
      2: Invoke the Custom Kernel Interface (written in C) using Python's foreign function interface (FFI) support; pass memory references of the input tensors, kernel weights, and output buffer.
      3: Forward the call to the Custom Kernel Implementation (written in CUDA).
      4: Copy the memory regions from the input tensor to an intermediate memory buffer in parallel.
      5: Invoke the CNN transformation using cuDNN.
      6: cuDNN reads the input from the intermediate buffer and writes the transformed output to the output buffer.
      7: Read the output to main memory, or pass a reference as the input to the next transformation.
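
A PyTorch-level analogue of steps 4-6, shown only to clarify the data flow: gather the needed regions into a preallocated intermediate buffer, then run one cuDNN-backed conv2d over the whole batch. In Krypton the gather in step 4 is done by the custom CUDA kernel; the buffer handling and function signature here are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def copy_then_convolve(full_input, weight, boxes, ctx_h, ctx_w, buffer=None):
        """full_input: (C, H, W) materialized tensor already on the GPU;
        boxes: list of (y, x) corners of equal-sized read-context crops."""
        C = full_input.shape[0]
        if buffer is None:
            buffer = full_input.new_empty(len(boxes), C, ctx_h, ctx_w)
        for b, (y, x) in enumerate(boxes):                       # step 4: memory copies
            buffer[b].copy_(full_input[:, y:y + ctx_h, x:x + ctx_w])
        return F.conv2d(buffer, weight)                          # steps 5-6: cuDNN transform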

  29. Read Context. To recompute the updated patch in the output, the transformation operator must read a slightly larger input patch (the read context): the updated patch in the input expanded by the filter kernel's extent, clipped against the zero padding at the borders.
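
A minimal 1-D sketch, under the stated assumptions, of computing which output rows must be recomputed and which input rows (the read context) must be read for a convolution with a given filter size, stride, and padding; the 2-D case applies the same calculation to rows and columns independently.

    import math

    def read_context_1d(y0, y1, H, FH, stride=1, pad=0):
        """Given that input rows [y0, y1) changed, return (a) the output rows
        [oy0, oy1) that must be recomputed and (b) the input rows [ry0, ry1)
        the operator must read (the read context), clipped to [0, H)."""
        OH = (H + 2 * pad - FH) // stride + 1
        oy0 = max(0, math.ceil((y0 - FH + 1 + pad) / stride))
        oy1 = min(OH, (y1 - 1 + pad) // stride + 1)
        ry0 = max(0, oy0 * stride - pad)
        ry1 = min(H, (oy1 - 1) * stride - pad + FH)
        return (oy0, oy1), (ry0, ry1)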

  30. Incremental and Approximate Inference for Faster Occlusion-Based Deep CNN Explanations. Supun Nakandala, Arun Kumar, and Yannis Papakonstantinou. University of California, San Diego. Observation: Explainability of CNN predictions is important; occlusion-based explainability (OBE) is widely used. Problem: OBE is highly compute intensive. This Work: Cast OBE as an instance of the view-materialization problem and perform incremental and approximate inference; ~5x and ~35x speedups for exact and approximate heatmaps.
