Evolutionary Algorithm With Self-Learning Strategy for Generation of Adversarial Samples

Aruna Animish Pavate, Rajesh Bansode
Copyright: © 2022 | Pages: 21
DOI: 10.4018/IJACI.300797

Abstract

Knowledge engineering algorithms such as deep learning models have exhibited tremendous success in solving complex problems. However, the linear nature of neural networks is a primary reason for their vulnerability to perturbed samples. Adversarial attacks pose a severe threat to deploying deep models, especially in safety-critical applications. This work proposes security attacks against neural architectures. In particular, we introduce a novel method to create adversarial samples. First, we propose a differential evolution population resizing scheme, which improves the generation of adversarial samples by allowing adversaries to speed up convergence. The proposed system is a novel self-adaptive population resizing-based adversarial mechanism. The results show success rates for targeted attacks of LeNet (60.07%), Network_in_Network (97%), Wide_ResNet50 (99%), Pure CNN (97%), DenseNet (54.11%), and ResNet50 (51%), and for non-targeted attacks of LeNet (85.13%), Network_in_Network (33.37%), WideResNet (24.40%), Pure_CNN (19.96%), DenseNet (63.67%), and ResNet (68.00%).
Article Preview

Introduction

With the advancement of many domains, machine learning and artificial intelligence have become of growing interest for automating processes such as robotic surgery (Lavanchy et al., 2021), driverless cars (Hironobu et al., 2019), agriculture (Shah et al., 2021), and many more. The convergence of deep learning algorithms with other technologies, such as sensor technology and cloud computing, is producing more flexible and cost-effective solutions. Security is the primary concern when using deep neural networks in safety-critical applications. Knowledge engineering models, including deep learning models, are susceptible to perturbed samples generated from the original input samples (Szegedy et al., 2013). These samples (adversarial examples) are crafted by applying small changes to the input samples. Adversarial samples are designed so that the changes are not noticeable to human eyes yet cause the classifier's output to change from the actual label (Su et al., 2019; Lin et al., 2020; Jun-Ichi et al., 2020; Pavate & Bansode, 2021). With the help of optimization methods, perturbed samples can drastically change the classification result. Not all pixels influence the classification label equally; a perturbation added in a specific direction is enough to cause misclassification, as illustrated in the sketch below.
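The following is a minimal, hypothetical sketch of this idea only, not the attack proposed in this paper: a toy linear classifier on a flattened image, where a small per-pixel change along a direction that favours a competing class can flip the predicted label. The classifier, weights, and step size are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's method): a toy linear classifier on a
# flattened 8x8 image, showing that a small, directed perturbation can change
# the predicted label while each pixel changes by at most 0.05.
rng = np.random.default_rng(0)
num_classes, num_pixels = 10, 64
W = rng.normal(size=(num_classes, num_pixels))   # hypothetical class weights
x = rng.uniform(0.0, 1.0, size=num_pixels)       # original (flattened) input sample

def predict(sample):
    """Index of the highest-scoring class for a flattened image."""
    return int(np.argmax(W @ sample))

scores = W @ x
original_label = int(np.argmax(scores))
runner_up = int(np.argsort(scores)[-2])          # closest competing class

# Move the input a small step in the direction that favours the runner-up class
# over the original one; the per-pixel change stays within +/- 0.05.
direction = W[runner_up] - W[original_label]
x_adv = np.clip(x + 0.05 * np.sign(direction), 0.0, 1.0)

print(predict(x), predict(x_adv))                # the two labels often differ
```

The point of the sketch is only that a perturbation aligned with a specific direction, rather than noise spread over all pixels, is what moves the sample across the decision boundary.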

Many previous adversarial mechanisms take advantage of gradient-based optimization techniques in various settings, such as white-box (Kurakin et al., 2017; Madry et al., 2018; Papernot et al., 2016) and black-box (Chen et al., 2017; Su et al., 2017; Zhao et al., 2017), to generate well-crafted perturbed input samples. Gradient-based optimization methods boost the attack when internal details of the model are available, such as the architecture, training sample domain, and parameters. In the real world, data often reaches machine learning models from physical devices such as mobile phones and cameras, and in such a scenario it is not easy to obtain information about the model. Training deep neural networks requires vast amounts of data and computational resources. To utilize resources and reduce cost and time, developers often reuse pre-trained neural network models when building new ones. Many developers use publicly available classifiers trained on millions of images and fine-tune them on a few samples for new applications. In this setting, an adversary might interfere with the neural network model to generate a specific output.

Deep neural network models are black-box models consisting of multiple layers of nonlinear transformations, so they are not easy to analyze layer by layer even when the internal details of the model are known. Therefore, designing a predictive model using deep neural networks is a challenging task. Many adversarial attack mechanisms, such as surrogate/proxy models, have proved impressively effective at deceiving networks. Su et al. (2017) demonstrated the effectiveness of the one-pixel attack, which misclassifies the classifier's output simply by altering one pixel, assuming no gradient information is available, using the differential evolution algorithm. Differential evolution is a powerful global optimizer that searches a vast solution space effectively (Price et al., 2006; Rahnamayan et al., 2008; Qin et al., 2009). Various improved evolutionary algorithms have also been used to create perturbed samples. So far, Su et al. (2017), Su et al. (2019), and Jun-Ichi (2020) have utilized the DE strategy to add small perturbations to inputs and generate attacks on different network models. However, in most of the existing research (Su et al., 2017; Su et al., 2019; Jun-Ichi, 2020), the population size remains fixed, which increases the population's uniformity to some extent; a sketch of a DE one-pixel attack with a shrinking population follows.
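The sketch below is a hedged illustration of that idea: a DE/rand/1 one-pixel attack whose population is periodically halved. The toy classifier, the (row, column, value) encoding of candidates, and the halving heuristic are assumptions made for this sketch and are not the authors' exact self-adaptive resizing scheme.

```python
import numpy as np

# Hedged sketch: a differential-evolution one-pixel attack with a population
# that shrinks over generations. The toy classifier, image size, and resizing
# rule are illustrative assumptions, not the paper's exact formulation.
rng = np.random.default_rng(1)
HEIGHT, WIDTH, CLASSES = 4, 4, 10                 # tiny image so one pixel can matter
weights = rng.normal(size=(CLASSES, HEIGHT * WIDTH))   # toy linear classifier
image = rng.uniform(0.0, 1.0, size=(HEIGHT, WIDTH))    # clean input sample

def confidence(img, label):
    """Softmax confidence assigned to `label` by the toy classifier."""
    scores = weights @ img.ravel()
    probs = np.exp(scores - scores.max())
    return probs[label] / probs.sum()

true_label = int(np.argmax(weights @ image.ravel()))

def apply_pixel(img, candidate):
    """A candidate encodes (row, col, value): perturb exactly one pixel."""
    r, c, v = candidate
    out = img.copy()
    out[int(r) % HEIGHT, int(c) % WIDTH] = np.clip(v, 0.0, 1.0)
    return out

def fitness(candidate):
    """Lower is better: confidence in the true label after the one-pixel change."""
    return confidence(apply_pixel(image, candidate), true_label)

# DE/rand/1 loop with greedy selection; crossover is omitted for brevity.
pop_size, F, max_gen, min_pop = 40, 0.5, 30, 8
pop = np.column_stack([rng.uniform(0, HEIGHT, pop_size),
                       rng.uniform(0, WIDTH, pop_size),
                       rng.uniform(0, 1, pop_size)])

for gen in range(max_gen):
    scores = np.array([fitness(p) for p in pop])
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = a + F * (b - c)                   # mutation
        if fitness(trial) < scores[i]:
            pop[i] = trial                        # keep the fitter candidate
    # Self-adaptive resizing (one simple heuristic among many): periodically keep
    # only the fitter half of the population, trading diversity for speed.
    if len(pop) > min_pop and (gen + 1) % 10 == 0:
        order = np.argsort([fitness(p) for p in pop])
        pop = pop[order[:max(min_pop, len(pop) // 2)]]

best = min(pop, key=fitness)
adversarial = apply_pixel(image, best)
print(true_label, int(np.argmax(weights @ adversarial.ravel())))  # differs if the attack succeeds
```

Shrinking the population concentrates fitness evaluations on promising candidates in later generations, which is the intuition behind resizing; a fixed population, by contrast, keeps spending evaluations on increasingly similar individuals.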
