A Comparative Study on Adversarial Noise Generation for Single Image Classification

Rishabh Saxena (VIT University, Vellore, India), Amit Sanjay Adate (VIT University, Vellore, India) and Don Sasikumar (VIT University, Vellore, India)
Copyright: © 2020 | Pages: 13
DOI: 10.4018/IJIIT.2020010105

Abstract

With the rise of neural network-based classifiers, it is evident that these algorithms are here to stay. Even though various algorithms have been developed, these classifiers still remain vulnerable to misclassification attacks. This article outlines a new noise-layer attack based on adversarial learning and compares the proposed method against other attack methodologies such as the Fast Gradient Sign Method, the Jacobian-Based Saliency Map Algorithm, and DeepFool. It compares these algorithms for the use case of single-image classification and provides a detailed analysis of how each algorithm performs relative to the others.

1. Introduction

Generative models have become the dominant data-generation tools in recent years owing to their superior results and efficient training methods. Goodfellow (2017) showed how adversarial learning can be used to train two networks simultaneously under a single loss signal in order to produce better results. This paper applies this methodology of adversarially training samples to the task of producing noisy images for attacking image classifiers. Several previous models based on adversarial learning have been shown to create images that are extremely close to their original training samples (Arjovsky & Bottou, 2017), which motivates using this method to build a Deep Convolutional Generative Adversarial Network-based architecture that can create the aforementioned noisy images. Several tried and tested models exist that use Generative Adversarial Networks as their base. These include Deep Convolutional Generative Adversarial Networks (Radford, Metz, & Chintala, 2015), which use a convolutional neural network as the discriminator and a deconvolutional neural network as the generator. Radford et al. (2015) employ several techniques in their network, including the All-Convolutional Network (Springenberg, Dosovitskiy, Brox, & Riedmiller, 2014), which replaces the commonly used max-pooling layer with a convolutional layer of stride 2 that provides the same downsampling, together with the widely used Batch Normalization (Ioffe & Szegedy, 2015).
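The pooling-replacement idea above can be sketched in one dimension: a stride-2 convolution halves the spatial size exactly as non-overlapping max-pooling does, but with weights that can be learned. This is a minimal illustrative sketch (pure Python, 1-D, averaging kernel), not the paper's or Radford et al.'s actual 2-D network code.

```python
# Minimal 1-D sketch of the All-Convolutional idea: a stride-2 convolution
# downsamples like max-pooling, but with learned weights (fixed here for
# illustration). Real DCGANs use 2-D convolutions over feature maps.

def conv1d(signal, kernel, stride=1):
    """Valid cross-correlation of `signal` with `kernel` at the given stride."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(signal) - k + 1, stride)
    ]

def max_pool1d(signal, size=2):
    """Non-overlapping max-pooling with the given window size."""
    return [max(signal[i:i + size]) for i in range(0, len(signal) - size + 1, size)]

x = [1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 0.0, 7.0]
pooled = max_pool1d(x, size=2)             # 8 -> 4 samples
strided = conv1d(x, [0.5, 0.5], stride=2)  # 8 -> 4 samples, same shape
print(pooled)   # [3.0, 5.0, 6.0, 7.0]
print(strided)  # [2.0, 3.5, 5.0, 3.5]
```

Both paths halve the signal length; the strided convolution simply folds the downsampling into a trainable layer.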
Other variants modify the loss function. Wasserstein GAN (Arjovsky, Chintala, & Bottou, 2017) uses the Earth Mover's distance (Hou, Yu, & Samaras, 2016) as its loss to measure the discrepancy between the histogram of the original dataset and that of the generated samples, while Bayesian GAN (Saatchi & Wilson, 2017) approximates the probability densities of the original dataset and the generated samples with a Bayesian posterior and uses that approximation as the loss.
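For one-dimensional histograms of equal total mass, the Earth Mover's distance has a closed form: the sum of absolute differences between the two cumulative histograms. The sketch below illustrates that special case only (it is not WGAN's critic-based estimator, and the histograms are made up for illustration).

```python
# Sketch: Earth Mover's distance between two 1-D histograms of equal total
# mass. In 1-D, EMD reduces to the sum of absolute differences between the
# cumulative histograms (the mass carried past each bin boundary).
from itertools import accumulate

def emd_1d(p, q):
    """EMD between equal-mass 1-D histograms p and q."""
    assert abs(sum(p) - sum(q)) < 1e-9, "histograms must have equal mass"
    return sum(abs(cp - cq) for cp, cq in zip(accumulate(p), accumulate(q)))

real      = [0.0, 1.0, 0.0, 0.0]  # all mass in bin 1
generated = [0.0, 0.0, 0.0, 1.0]  # all mass in bin 3
print(emd_1d(real, generated))  # 2.0 -- one unit of mass moved two bins
```

Unlike a bin-wise difference, this distance grows with how far mass must travel, which is what makes it a useful training signal when the two distributions barely overlap.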

The aforementioned architectures produce remarkable results in their own domain of generating images from an original dataset. However, they do not meet the needs of adversarial image generation: here the image generated by the network must work in tandem with the original image, producing a noise layer that is applied to the original image so that the classifier misclassifies the result. This intricate process involves an intermediary step, the generation of the noise image, which these legacy networks cannot perform. Hence, this paper takes inspiration from Goodfellow et al. (Goodfellow, Shlens, & Szegedy, 2014), who use the Fast Gradient Sign Method. This method couples the loss function of the classifier and the noise generator through the following equation:

η = ϵ · sign(∇_x J(Θ, x, y))

In the equation above, the noise layer is denoted by η, the original image by x, the magnitude of the perturbation by ϵ, the truth label by y, and the model parameters by Θ. The loss function is J(Θ, x, y), and the adversarial image is obtained by applying the noise layer to the original image as x + η.
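The perturbation step can be made concrete on a toy model. The sketch below applies the Fast Gradient Sign Method to a two-feature logistic classifier (an illustrative stand-in, not the paper's network); the weights, inputs, and ε are made-up values.

```python
# Hedged sketch of FGSM on a toy logistic classifier: the adversarial input
# is x + eps * sign(dJ/dx), where J is the cross-entropy loss of the model
# p = sigmoid(w.x + b) against the true label y.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, w, b, y, eps):
    """Return x perturbed in the direction that increases the loss."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # Gradient of cross-entropy w.r.t. each input component: (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0   # toy model weights (illustrative only)
x, y = [1.0, 0.5], 1      # input currently classified as class 1
x_adv = fgsm(x, w, b, y, eps=0.25)
print(x_adv)  # [0.75, 0.75] -- each feature nudged to increase the loss
```

Only the sign of the gradient is used, so every feature moves by exactly ε; this is what bounds the perturbation's max-norm and keeps the noise layer visually small.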

In another method proposed by Papernot et al. (2015), the Jacobian-Based Saliency Map, the authors take an input image x to a model F with output classes j and a target classification t. The probability of the target class t is increased while the probabilities of all other classes j are decreased, selecting which input features i to perturb by the following saliency equation:

S(x, t)[i] = 0,  if ∂F_t(x)/∂x_i < 0 or Σ_{j≠t} ∂F_j(x)/∂x_i > 0
S(x, t)[i] = (∂F_t(x)/∂x_i) · |Σ_{j≠t} ∂F_j(x)/∂x_i|,  otherwise

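Given the Jacobian of the model's class scores with respect to its inputs, the saliency rule above is a per-feature filter-and-score. The sketch below computes it for a made-up 3-class, 3-feature Jacobian (illustrative numbers, not taken from the paper).

```python
# Hedged sketch of the Jacobian-based saliency map: given jacobian[j][i] =
# dF_j/dx_i, score each input feature by how much increasing it raises the
# target class t while lowering every other class combined.

def saliency_map(jacobian, t):
    """Return one saliency score per input feature i."""
    n_inputs = len(jacobian[0])
    scores = []
    for i in range(n_inputs):
        d_target = jacobian[t][i]
        d_others = sum(jacobian[j][i] for j in range(len(jacobian)) if j != t)
        if d_target < 0 or d_others > 0:
            scores.append(0.0)  # feature cannot help the target: zero saliency
        else:
            scores.append(d_target * abs(d_others))
    return scores  # JSMA perturbs the highest-scoring features first

# Toy Jacobian: 3 classes x 3 input features (illustrative values).
J = [
    [0.25, -0.25, 0.5],    # class 0
    [0.5,   0.25, -0.25],  # class 1 (target)
    [-0.75, 0.25, 0.25],   # class 2
]
print(saliency_map(J, t=1))  # [0.25, 0.0, 0.0] -- only feature 0 qualifies
```

Feature 0 is the only one that simultaneously raises the target class and lowers the others, so JSMA would perturb it first; this feature-at-a-time greed is what makes the attack sparse compared with FGSM's uniform perturbation.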