Generation of Adversarial Mechanisms in Deep Neural Networks: A Survey of the State of the Art

Aruna Animish Pavate, Rajesh Bansode
Copyright: © 2022 | Pages: 18
DOI: 10.4018/IJACI.293111

Abstract

Deep learning is a subfield of machine learning that has achieved prominent results in almost all application domains. However, deep neural networks have been found to be susceptible to perturbed inputs, which cause a model to generate output other than the expected one: adding a visually insignificant perturbation to an input can lead computer vision models to make erroneous predictions. It remains an open question whether humans are prone to comparable errors. In this paper, we focus on this issue by reviewing the latest practices for generating adversarial examples in computer vision applications, considering diverse known parameters, unknown parameters, and architectures. The distinct techniques are analyzed with respect to a set of common parameters. Adversarial examples also transfer easily between models, which must be taken into account when designing computer vision applications that depend on correct label classification. The findings highlight that some methods, such as ZOO and DeepFool, achieve a 100% success rate for nontargeted attacks but are application-specific.

Introduction

Background

Nowadays, deep learning systems have achieved human-comparable success in predicting labels in almost all domains. Artificial intelligence techniques are applied in a wide variety of applications, from malware detection (Kumar, 2020), object recognition (Bayraktar, 2019), image classification (Ahuja, 2020; Rajagopal, 2020), speech recognition (Llombart, 2021), natural language processing (Do, 2021), medical science (Esteva, 2017), and satellite applications (Kumar, 2020) to facial recognition systems (Menon, 2021). With the growing adoption of deep neural networks (DNNs) by many companies, DNNs are now used in safety-critical applications including drones, robotics, voice recognition, self-driving cars such as those from Uber, Apple, Samsung, and Tesla (Lex, 2019), surveillance systems (Pillai, 2021), Apple Siri ("Apple," 2019), and Amazon Alexa (2019).

Figure 1. Deep learning timeline

However, deep neural networks are prone to adversarial attacks (Szegedy, 2014). With the increasing adoption of deep neural networks, their security has become an essential consideration in all industries. This study presents an empirical survey of different approaches to generating adversarial examples in the computer vision field. According to Krizhevsky et al. (2012), deep learning is a point of convergence in visual perception. Neural networks learn from large amounts of data, much as humans learn from experience: a network performs a task repeatedly, adjusting its parameters slightly each time to reduce the loss and improve the outcome. Figure 1 shows a timeline of deep neural networks from 1940 to 2018.
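To make the notion of a perturbed input concrete, the sketch below implements the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015), one of the classic gradient-based generation techniques typically covered in surveys of this area. This is a minimal PyTorch illustration, not the authors' code; the model, tensors, and epsilon value in the usage comments are assumptions chosen for the example.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Return adversarial copies of `images` for a nontargeted attack."""
    images = images.clone().detach().requires_grad_(True)
    # Gradient of the classification loss with respect to the input pixels.
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that increases the loss, then clamp
    # to the valid range so the perturbation stays visually insignificant.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a pretrained classifier (names are illustrative):
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
# adv = fgsm_attack(model, images, labels)
# model(adv).argmax(dim=1) frequently disagrees with `labels`.

A nontargeted attack such as this one only pushes the prediction away from the true label; targeted variants instead step toward a chosen label, which is generally harder to achieve.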
