  1. Verification of Deep Learning Systems Xiaowei Huang, University of Liverpool December 25, 2017

  2. Outline Background Challenges for Verification Deep Learning Verification [2] Feature-Guided Black-Box Testing [3] Conclusions and Future Works

  3. Human-Level Intelligence

  4. Robotics and Autonomous Systems

  5. Figure: safety in image classification networks

  6. Figure: safety in natural language processing networks

  7. Figure: safety in voice recognition networks

  8. Figure: safety in security systems

  9. Microsoft Chatbot On 23 Mar 2016, Microsoft launched a new artificial intelligence chatbot that, it claimed, would become smarter the more you talk to it.

  10. Microsoft Chatbot after 24 hours ...

  11. Microsoft Chatbot

  12. Microsoft Chatbot

  13. Major problems and critiques ◮ unsafe, e.g., instability under adversarial examples ◮ hard to explain to human users ◮ ethics, trustworthiness, accountability, etc.

  14. Outline Background Challenges for Verification Deep Learning Verification [2] Feature-Guided Black-Box Testing [3] Conclusions and Future Works

  15. Automated Verification, a.k.a. Model Checking

  16. Robotics and Autonomous Systems Robotic and autonomous systems (RAS) are interactive, cognitive and interconnected tools that perform useful tasks in the real world where we live and work.

  17. Systems for Verification: Paradigm Shifting

  18. System Properties ◮ dependability (or reliability) ◮ human values, such as trustworthiness, morality, ethics, transparency, etc. (we have another line of work on the verification of social trust between humans and robots [1]) ◮ explainability?

  19. Verification of Deep Learning

  20. Outline Background Challenges for Verification Deep Learning Verification [2] Safety Definition Challenges Approaches Experimental Results Feature-Guided Black-Box Testing [3] Conclusions and Future Works

  21. Human Driving vs. Autonomous Driving Traffic image from “The German Traffic Sign Recognition Benchmark”

  22. Deep learning verification (DLV) Image generated from our tool Deep Learning Verification (DLV) (X. Huang and M. Kwiatkowska, Safety Verification of Deep Neural Networks, CAV 2017).

  23. Safety Problem: Tesla incident

  24. Deep neural networks all implemented with

  25. Safety Definition: Deep Neural Networks ◮ Let R^n be a vector space of images (points). ◮ f : R^n → C, where C is a (finite) set of class labels, models the human perception capability. ◮ A neural network classifier is a function f̂(x) which approximates f(x).

  26. Safety Definition: Deep Neural Networks A (feed-forward and deep) neural network N is a tuple (L, T, Φ), where ◮ L = {L_k | k ∈ {0, ..., n}} is a set of layers, ◮ T ⊆ L × L is a set of sequential connections between layers, ◮ Φ = {φ_k | k ∈ {1, ..., n}} is a set of activation functions φ_k : D_{L_{k-1}} → D_{L_k}, one for each non-input layer.
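
This view of a network as a tuple of layers with per-layer activation functions φ_k can be sketched directly in Python. A minimal sketch, assuming toy shapes, random weights, and ReLU/softmax activations (not the networks used in the talk):

    import numpy as np

    def relu(v):
        return np.maximum(v, 0.0)

    def make_layer(W, b, last=False):
        # phi_k : D_{L_{k-1}} -> D_{L_k}, an affine map followed by a non-linearity
        def phi(activation):
            z = W @ activation + b
            if last:
                e = np.exp(z - z.max())
                return e / e.sum()        # softmax on the output layer
            return relu(z)
        return phi

    rng = np.random.default_rng(0)
    phis = [
        make_layer(rng.normal(size=(16, 8)), np.zeros(16)),           # hidden layer
        make_layer(rng.normal(size=(4, 16)), np.zeros(4), last=True), # output layer
    ]

    def classify(x):
        # alpha_{x,0} is the input activation; argmax of alpha_{x,n} is the class
        a = x
        for phi in phis:
            a = phi(a)
        return int(np.argmax(a))

    print(classify(rng.normal(size=8)))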

  27. Safety Definition: Illustration

  28. Safety Definition: Traffic Sign Example

  29. Safety Definition: General Safety [General Safety] Let η_k(α_{x,k}) be a region in layer L_k of a neural network N such that α_{x,k} ∈ η_k(α_{x,k}). We say that N is safe for input x and region η_k(α_{x,k}), written as N, η_k ⊨ x, if for all activations α_{y,k} in η_k(α_{x,k}) we have α_{y,n} = α_{x,n}.
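
As a concrete (but incomplete) reading of the definition, one can sample activations α_{y,k} from the region and check that the classification α_{x,n} never changes. A minimal sketch, assuming an L∞ ball as the region η_k and a hypothetical classify_from_layer_k callback that runs the network from layer k onwards; sampling can only refute safety, not prove it:

    import numpy as np

    def sampled_safety_check(classify_from_layer_k, alpha_x_k, radius,
                             n_samples=1000, seed=0):
        # Check alpha_{y,n} = alpha_{x,n} for sampled alpha_{y,k} in the region.
        rng = np.random.default_rng(seed)
        reference_class = classify_from_layer_k(alpha_x_k)
        for _ in range(n_samples):
            perturbation = rng.uniform(-radius, radius, size=alpha_x_k.shape)
            if classify_from_layer_k(alpha_x_k + perturbation) != reference_class:
                return False   # counterexample found inside eta_k
        return True            # no counterexample among the samples (not a proof)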

  30. Challenges Challenge 1: continuous space, i.e., there are an infinite number of points to be tested in the high-dimensional space

  31. Challenges Challenge 2: the spaces are high-dimensional Note: a colour image of size 32*32 has 32*32*3 = 3,072 dimensions. Note: hidden layers can have many more dimensions than the input layer.

  32. Challenges Challenge 3: the functions f and f̂ are highly non-linear, i.e., safety risks may exist in the pockets of the spaces Figure: Input Layer and First Hidden Layer

  33. Challenges Challenge 4: not only heuristic search but also verification

  34. Approach 1: Discretisation by Manipulations Define manipulations δ_k : D_{L_k} → D_{L_k} over the activations in the vector space of layer k. Figure: Example of a set {δ_1, δ_2, δ_3, δ_4} of valid manipulations in a 2-dimensional space
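
A minimal sketch of such a set of valid manipulations in a 2-dimensional activation space, assuming a fixed step size τ along each axis (the value of τ is an illustrative assumption):

    import numpy as np

    tau = 0.5
    deltas = [
        lambda a, d=np.array([ tau, 0.0]): a + d,   # delta_1: step right
        lambda a, d=np.array([-tau, 0.0]): a + d,   # delta_2: step left
        lambda a, d=np.array([0.0,  tau]): a + d,   # delta_3: step up
        lambda a, d=np.array([0.0, -tau]): a + d,   # delta_4: step down
    ]

    alpha_x_k = np.array([0.0, 0.0])
    print([delta(alpha_x_k) for delta in deltas])   # the four neighbouring activations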

  35. Ladders, bounded variation, etc. Figure: Examples of ladders in region η_k(α_{x,k}). Starting from α_{x,k} = α_{x_0,k}, the activations α_{x_1,k}, ..., α_{x_j,k} form a ladder such that each consecutive activation results from some valid manipulation δ_k applied to a previous activation, and the final activation is outside the region η_k(α_{x,k}).
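
One way to read the ladder construction in code: start from α_{x,k} and keep applying valid manipulations until the activation leaves the region. A minimal sketch, assuming the manipulation at each step is picked at random (the actual exploration of ladders is systematic); in_region is a hypothetical membership test for η_k(α_{x,k}):

    import numpy as np

    def build_ladder(alpha_x_k, deltas, in_region, max_steps=1000, seed=0):
        # Returns alpha_{x_0,k}, alpha_{x_1,k}, ... where consecutive activations
        # are related by some valid manipulation delta_k.
        rng = np.random.default_rng(seed)
        ladder = [alpha_x_k]
        for _ in range(max_steps):
            delta = deltas[rng.integers(len(deltas))]
            ladder.append(delta(ladder[-1]))
            if not in_region(ladder[-1]):
                break   # the final activation lies outside eta_k(alpha_{x,k})
        return ladder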

  36. Safety wrt Manipulations [Safety wrt Manipulations] Given a neural network N, an input x and a set Δ_k of manipulations, we say that N is safe for input x with respect to the region η_k and manipulations Δ_k, written as N, η_k, Δ_k ⊨ x, if the region η_k(α_{x,k}) is a 0-variation for the set L(η_k(α_{x,k})) of its ladders, which is complete and covering. Theorem (⇒) N, η_k ⊨ x (general safety) implies N, η_k, Δ_k ⊨ x (safety wrt manipulations).

  37. Minimal Manipulations A manipulation is minimal if there does not exist a finer manipulation that results in a different classification. Theorem (⇐) Given a neural network N, an input x, a region η_k(α_{x,k}) and a set Δ_k of manipulations, we have that N, η_k, Δ_k ⊨ x (safety wrt manipulations) implies N, η_k ⊨ x (general safety) if the manipulations in Δ_k are minimal.

  38. Approach 2: Layer-by-Layer Refinement Figure: Refinement in general safety

  39. Approach 2: Layer-by-Layer Refinement Figure: Refinement in general safety and safety wrt manipulations

  40. Approach 2: Layer-by-Layer Refinement Figure: Complete refinement in general safety and safety wrt manipulations

  41. Approach 3: Exhaustive Search Figure: exhaustive search (verification) vs. heuristic search
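
The difference between exhaustive search (verification) and heuristic search can be made concrete as a breadth-first enumeration of every activation reachable by manipulations inside the region. A minimal sketch, assuming rounding-based deduplication of visited activations; classify, in_region, and deltas are the hypothetical ingredients from the earlier sketches:

    import numpy as np
    from collections import deque

    def exhaustive_search(alpha_x_k, deltas, in_region, classify, reference_class):
        # Enumerate all reachable activations; report a counterexample if any
        # of them changes the classification, otherwise declare safety wrt
        # this region and set of manipulations.
        seen = set()
        queue = deque([alpha_x_k])
        while queue:
            a = queue.popleft()
            key = tuple(round(float(v), 6) for v in np.ravel(a))
            if key in seen or not in_region(a):
                continue
            seen.add(key)
            if classify(a) != reference_class:
                return a          # unsafe: misclassified activation found
            for delta in deltas:
                queue.append(delta(a))
        return None               # no counterexample: safe wrt manipulations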

  42. Approach 4: Feature Discovery Natural data, for example natural images and sound, forms a high-dimensional manifold, which embeds tangled manifolds to represent their features. Feature manifolds usually have a lower dimension than the data manifold, and the task of a classification algorithm is to separate a set of tangled manifolds.

  43. Approach 4: Feature Discovery

  44. Experimental Results: MNIST Image Classification Network for the MNIST Handwritten Numbers 0 – 9 Total params: 600,810

  45. Experimental Results: MNIST

  46. Experimental Results: GTSRB Image Classification Network for The German Traffic Sign Recognition Benchmark Total params: 571,723

  47. Experimental Results: GTSRB

  48. Experimental Results: GTSRB

  49. Experimental Results: CIFAR-10 Image Classification Network for the CIFAR-10 small images Total params: 1,250,858

  50. Experimental Results: CIFAR-10

  51. Experimental Results: ImageNet Image Classification Network for the ImageNet dataset, a large visual database designed for use in visual object recognition software research. Total params: 138,357,544

  52. Experimental Results: ImageNet

  53. Outline Background Challenges for Verification Deep Learning Verification [2] Feature-Guided Black-Box Testing [3] Preliminaries Safety Testing Experimental Results Conclusions and Future Works

  54. Contributions ◮ feature-guided black-box testing ◮ theoretical safety guarantee, with evidence of practical convergence ◮ time efficiency, moving towards real-time detection ◮ evaluation of safety-critical systems ◮ counter-claiming a recent statement

  55. Black-box vs. White-box

  56. Human Perception by Feature Extraction Figure: Illustration of the transformation of an image into a saliency distribution. ◮ (a) The original image α, provided by ImageNet. ◮ (b) The image marked with relevant keypoints Λ(α). ◮ (c) The heatmap of the Gaussian mixture model G(Λ(α)).

  57. Human Perception as Gaussian Mixture Model SIFT keypoints are: ◮ invariant to image translation, scaling, and rotation, ◮ partially invariant to illumination changes, and ◮ robust to local geometric distortion.
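
The image-to-saliency-distribution step of the previous slide can be sketched with OpenCV's SIFT detector: extract the keypoints Λ(α) and place a 2-D Gaussian at each of them. Weighting components by keypoint response and using the keypoint size as the standard deviation are illustrative assumptions (OpenCV >= 4.4 is assumed for cv2.SIFT_create):

    import cv2
    import numpy as np

    def saliency_distribution(image_path):
        # Heatmap of the Gaussian mixture model G(Lambda(alpha)) over pixel locations.
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        keypoints = cv2.SIFT_create().detect(gray, None)
        h, w = gray.shape
        ys, xs = np.mgrid[0:h, 0:w]
        heatmap = np.zeros((h, w))
        for kp in keypoints:
            (cx, cy), sigma = kp.pt, max(kp.size, 1.0)
            heatmap += kp.response * np.exp(
                -((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        return heatmap / max(heatmap.sum(), 1e-12)   # normalise to a distribution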

  58. Pixel Manipulation Define pixel manipulations δ_{X,i} : D → D, for X ⊆ P_0 a subset of input dimensions and i ∈ I: δ_{X,i}(α)(x, y, z) = α(x, y, z) + τ if (x, y) ∈ X and i = +; α(x, y, z) − τ if (x, y) ∈ X and i = −; and α(x, y, z) otherwise.
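
A minimal sketch of this pixel manipulation, assuming the image α is stored as an array indexed by (x, y, z) with z the colour channel, and clipping the result to the usual [0, 255] pixel range (the clipping is an illustrative assumption):

    import numpy as np

    def pixel_manipulation(alpha, X, i, tau=1.0):
        # delta_{X,i}: add tau (i == '+') or subtract tau (i == '-') on every
        # channel of the pixels whose (x, y) coordinates are in X; all other
        # pixels are left unchanged.
        manipulated = alpha.astype(np.float64).copy()
        for (x, y) in X:
            manipulated[x, y, :] += tau if i == '+' else -tau
        return np.clip(manipulated, 0.0, 255.0)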

  59. Safety Testing as Two-Player Turn-based Game

  60. Rewards under Strategy Profile σ = (σ_1, σ_2) ◮ For terminal nodes ρ ∈ Path^F_{I,1}: R(σ, ρ) = sev_α(α'_ρ), where sev_α(α') is the severity of an image α' compared to the original image α. ◮ For non-terminal nodes, compute the reward by applying the suitable strategy σ_i to the rewards of the children nodes.
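
The reward computation can be read as a recursive backup over the game tree: terminal nodes score the severity of the manipulated image, and each non-terminal node combines its children's rewards according to the strategy of the player who moves there. A minimal sketch, assuming a hypothetical node interface (is_terminal, image, player, children) and strategies given as aggregation functions:

    def reward(node, sigma_1, sigma_2, severity):
        # R(sigma, rho) = sev_alpha(alpha'_rho) at terminal nodes; otherwise apply
        # the moving player's strategy to the children's rewards.
        if node.is_terminal:
            return severity(node.image)
        child_rewards = [reward(child, sigma_1, sigma_2, severity)
                         for child in node.children]
        strategy = sigma_1 if node.player == 1 else sigma_2
        return strategy(child_rewards)

    # Example usage with a maximising player 1 and a minimising player 2:
    # value = reward(root, sigma_1=max, sigma_2=min, severity=sev_alpha)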
