
THE IMPACT OF ADVERSARIALS WITHIN CNN-BASED IMAGE CLASSIFICATION



  1. THE IMPACT OF ADVERSARIALS WITHIN CNN-BASED IMAGE CLASSIFICATION By Josue Flores PI: Zhigang Zhu CCNY STEM Communities This material is based upon work supported by the National Science Foundation under Grant No. 1832567. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

  2. Introduction • Computer Vision • Artificial Neural Networks • What are CNNs? • Adversarial Examples

  3. Computer Vision • Computer vision is a subfield of computer science that focuses on imitating parts of the complex human visual system, enabling computers to identify and process objects in images and videos in much the same way humans do. • Computer vision is, to a large extent, pattern recognition. One way to teach a computer to understand visual data is to feed it a large collection of labeled images, thousands or even millions if possible, and then run those images through algorithms that let the computer hunt for the patterns in all the elements that relate to those labels. • Applications of CV: self-driving cars, facial recognition, healthcare, etc.
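To make the idea of learning from labeled images concrete, the short sketch below loads the CIFAR-10 dataset used later in this project; the variable names and print statements are illustrative, not part of the original experiments.

import tensorflow as tf

# CIFAR-10: 60,000 small color images, each labeled with one of 10 classes.
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar10.load_data()

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

print(train_images.shape)                  # (50000, 32, 32, 3): labeled training images
print(class_names[int(train_labels[0])])   # label of the first training image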

  4. Artificial Neural Networks • An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. • ANNs are considered nonlinear statistical data modeling tools, in which the complex relationships between inputs and outputs are modeled or patterns are found. • ANNs have three interconnected layers. The first layer consists of input neurons; those neurons send data on to the second layer, which in turn sends its output to the third layer of output neurons.
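As a concrete illustration of the input / hidden / output structure described above, here is a minimal three-layer network in Keras; the layer sizes and the flattened 28x28 input are illustrative assumptions, not taken from the slides.

import tensorflow as tf

# A minimal fully connected ANN: input layer -> hidden layer -> output layer.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),              # input neurons (e.g., a flattened 28x28 image)
    tf.keras.layers.Dense(128, activation='relu'),    # hidden layer modeling nonlinear relationships
    tf.keras.layers.Dense(10, activation='softmax')   # output neurons, one per class
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()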

  5. CONVOLUTIONAL NEURAL NETWORKS (CNNS) Convolutional Neural Networks (CNNs), also known as ConvNets, are Deep Learning algorithms that take an input image, assign importance (learnable weights and biases) to various aspects/objects in the image, and are able to differentiate one from the other. The pre-processing required by a ConvNet is much lower than for other classification algorithms. The architecture of a CNN is analogous to the connectivity pattern of neurons in the human brain and was inspired by the organization of the visual cortex. Most ConvNets can successfully capture the spatial and temporal dependencies in an image through the application of relevant filters. The architecture fits image datasets better thanks to the reduced number of parameters involved and the reusability of weights.
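To ground this, the sketch below defines a small ConvNet of the kind described above; the layer counts and filter sizes are illustrative choices, not the pretrained architectures (MobileNet_V2, ResNet50_V2, Inception_V3) used later in the project.

import tensorflow as tf

# A small ConvNet: convolutional filters extract spatial patterns while
# sharing (reusing) their weights across the whole image, which keeps the
# parameter count far below that of a fully connected network.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')   # 10 output classes, e.g. CIFAR-10
])
model.summary()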

  6. Adversarial Examples Adversarial examples are images that contain nuanced alterations that cause confusion or failure in a deep neural network's ability to accurately classify images or information. This is primarily done by incorporating perturbations (visual changes) into a notable number of pixels in an image. Adversarial examples can be contrived and applied in multiple ways against neural networks, especially ConvNets. These methods include: • Fast Gradient Sign Method (FGSM) • Basic Iterative Method (BIM) • DeepFool • Carlini & Wagner
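For reference, FGSM (shown in full code on slide 9) constructs its perturbation from the sign of the gradient of the loss with respect to the input image, scaled by a small factor epsilon:

    adv_x = x + epsilon * sign( ∇_x J(θ, x, y) )

where x is the input image, y its label, and J the loss of the network with parameters θ. BIM applies the same signed-gradient step repeatedly with a smaller step size; a sketch of it follows the FGSM code on slide 9.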

  7. MATERIALS AND METHODS • Python 3 with the TensorFlow library via Google Colaboratory • Generated CNN-based image classifiers on existing architectures/pretrained models (MobileNet_V2, ResNet50_V2, Inception_V3) • Generated adversarial attacks based on the pre-existing methods mentioned earlier • Applied these adversarials to various datasets, including MNIST, CIFAR-10, and FASHION_MNIST (a setup sketch follows below)
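A minimal sketch of how one of these classifiers might be set up, using MobileNet_V2 as a frozen feature extractor for CIFAR-10 in Keras; the input size, batch size, and training settings are illustrative assumptions rather than the exact configuration used in the project.

import tensorflow as tf

(train_x, train_y), (test_x, test_y) = tf.keras.datasets.cifar10.load_data()

# MobileNetV2 expects larger inputs scaled to [-1, 1], so the 32x32 CIFAR-10
# images are upsampled and rescaled on the fly with a tf.data pipeline.
def prepare(image, label):
    image = tf.image.resize(tf.cast(image, tf.float32), (96, 96))
    image = tf.keras.applications.mobilenet_v2.preprocess_input(image)
    return image, label

train_ds = tf.data.Dataset.from_tensor_slices((train_x, train_y)).map(prepare).batch(64)
test_ds = tf.data.Dataset.from_tensor_slices((test_x, test_y)).map(prepare).batch(64)

base = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3),
                                         include_top=False,
                                         weights='imagenet',
                                         pooling='avg')
base.trainable = False   # keep the pretrained convolutional weights fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation='softmax')   # CIFAR-10 classification head
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# 5 epochs, matching the training setup described on the Results slide.
model.fit(train_ds, epochs=5, validation_data=test_ds)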

  8. Results Fig (a). MobileNet_V2 Training Results. Fig (b). ResNet50_V2 Training Results. Fig (c). Inception_V3 Training Results. Figures (a), (b), and (c) show pretrained, CNN-based models trained on the CIFAR-10 dataset. Each was trained for 5 epochs, i.e., 5 full passes over the entire dataset.

  9. Code for FGSM adversarial against pretrained model

import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt

mpl.rcParams['figure.figsize'] = (8, 8)
mpl.rcParams['axes.grid'] = False

pretrained_model = tf.keras.applications.MobileNetV2(include_top=True,
                                                     weights='imagenet')
pretrained_model.trainable = False

# ImageNet labels
decode_predictions = tf.keras.applications.mobilenet_v2.decode_predictions

# Helper function to preprocess the image so that it can be inputted in MobileNetV2
def preprocess(image):
    image = tf.cast(image, tf.float32)
    image = tf.image.resize(image, (224, 224))
    image = tf.keras.applications.mobilenet_v2.preprocess_input(image)
    image = image[None, ...]
    return image

# Helper function to extract labels from probability vector
def get_imagenet_label(probs):
    return decode_predictions(probs, top=1)[0][0]

image_path = tf.keras.utils.get_file('Sports_Car.jpg',
    'https://www.autocar.co.uk/sites/autocar.co.uk/files/styles/flexslider_full/'
    'public/slideshow_image/0-pininfarina-battista.jpg?itok=3UsQ0zMD')
image_raw = tf.io.read_file(image_path)
image = tf.image.decode_image(image_raw)

image = preprocess(image)
image_probs = pretrained_model.predict(image)

# Show the clean image with its predicted class and confidence.
plt.figure()
plt.imshow(image[0]*0.5 + 0.5)  # To change [-1, 1] to [0, 1]
_, image_class, class_confidence = get_imagenet_label(image_probs)
plt.title('{} : {:.2f}% Confidence'.format(image_class, class_confidence*100), color='magenta')
plt.show()

loss_object = tf.keras.losses.CategoricalCrossentropy()

def create_adversarial_pattern(input_image, input_label):
    with tf.GradientTape() as tape:
        tape.watch(input_image)
        prediction = pretrained_model(input_image)
        loss = loss_object(input_label, prediction)

    # Get the gradients of the loss w.r.t. the input image.
    gradient = tape.gradient(loss, input_image)
    # Get the sign of the gradients to create the perturbation.
    signed_grad = tf.sign(gradient)
    return signed_grad

# Get the input label of the image.
retriever_index = 91
label = tf.one_hot(retriever_index, image_probs.shape[-1])
label = tf.reshape(label, (1, image_probs.shape[-1]))

perturbations = create_adversarial_pattern(image, label)
plt.imshow(perturbations[0]*0.5 + 0.5)  # To change [-1, 1] to [0, 1]

# Display helper (not shown on the original slide): plots an adversarial image
# together with the model's new prediction and confidence.
def display_images(image, description):
    _, label, confidence = get_imagenet_label(pretrained_model.predict(image))
    plt.figure()
    plt.imshow(image[0]*0.5 + 0.5)
    plt.title('{} \n {} : {:.2f}% Confidence'.format(description, label, confidence*100))
    plt.show()

epsilons = [0, 0.01, 0.1, 0.15, 0.30]
descriptions = [('Epsilon = {:0.3f}'.format(eps) if eps else 'Input')
                for eps in epsilons]

for i, eps in enumerate(epsilons):
    adv_x = image + eps*perturbations
    adv_x = tf.clip_by_value(adv_x, -1, 1)
    display_images(adv_x, descriptions[i])
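Since BIM is listed among the attack methods but only FGSM appears in the code above, here is a minimal sketch of the Basic Iterative Method under the same setup; it reuses pretrained_model and loss_object from the FGSM code, and the step size alpha, epsilon, and iteration count are illustrative assumptions.

def basic_iterative_method(input_image, input_label, eps=0.1, alpha=0.01, num_iter=10):
    # Start from the clean image and take repeated small FGSM-style steps.
    adv_image = tf.identity(input_image)
    for _ in range(num_iter):
        with tf.GradientTape() as tape:
            tape.watch(adv_image)
            prediction = pretrained_model(adv_image)
            loss = loss_object(input_label, prediction)
        gradient = tape.gradient(loss, adv_image)
        adv_image = adv_image + alpha * tf.sign(gradient)
        # Keep the result inside the epsilon-ball around the original image
        # and inside the valid [-1, 1] pixel range.
        adv_image = tf.clip_by_value(adv_image, input_image - eps, input_image + eps)
        adv_image = tf.clip_by_value(adv_image, -1, 1)
    return adv_image

# Example use, with the image and label built in the FGSM code above:
# adv_x = basic_iterative_method(image, label, eps=0.1)
# display_images(adv_x, 'BIM, epsilon = 0.10')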

  10. Results Continued… Figure 2. This figure illustrates how applying noise/perturbation to an image can incur misclassification in the case of a sports car.

  11. Results Continued… Figure 3. This figure illustrates how applying noise/perturbation to an image can incur misclassification in the case of a street sign.

  12. Discussion • As expected, I observed that the various CNN models reacted differently to the various attacks. • For example, FGSM and BIM seemed to impact Inception_V3 the most compared to the other CNN architectures. These attacks ultimately skewed the image classification process, especially as the epsilon of the perturbation gradually increased. • However, with this method, as the intensity of epsilon increases the adversarial perturbation becomes more easily discernible to human vision, which renders the surreptitious aspect of the attack ineffective.

  13. Next Steps / Conclusion The data illustrate how CNNs, despite their remarkable efficiency in image classification, remain vulnerable to many forms of interference that impede their function. This also demonstrates that CNNs must be continuously refined alongside the new technological advances released over time, as reflected in the creation of adversarial defenses meant to foil these examples, whether they arise naturally or are deliberately contrived. Additionally, I plan to learn more about these defensive measures and examine how they interact with prominent adversarial attack methods.
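As one concrete example of the defensive measures mentioned above, the sketch below outlines adversarial training, a common defense in which adversarial examples are generated on the fly and mixed into each training batch; this illustrates the general idea only and is not the specific defense the project plans to study. It assumes a Keras classifier named model and batches of (images, one_hot_labels) in the [-1, 1] range.

import tensorflow as tf

loss_fn = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

def fgsm_perturbation(images, labels):
    # Sign of the loss gradient with respect to the inputs (as in FGSM).
    with tf.GradientTape() as tape:
        tape.watch(images)
        loss = loss_fn(labels, model(images))
    return tf.sign(tape.gradient(loss, images))

def adversarial_train_step(images, labels, eps=0.05):
    # Build adversarial versions of the batch, then train on clean + adversarial.
    adv_images = tf.clip_by_value(images + eps * fgsm_perturbation(images, labels), -1, 1)
    combined_images = tf.concat([images, adv_images], axis=0)
    combined_labels = tf.concat([labels, labels], axis=0)
    with tf.GradientTape() as tape:
        loss = loss_fn(combined_labels, model(combined_images, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss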

  14. Acknowledgements • CCNY-NSF STEM REU • CCNY-SC project funded by the National Science Foundation Grant No. 1832567 • Professor Zhigang Zhu

  15. QUESTIONS?
