Adversarial Attacks and Defense on Deep Learning Models for Big Data and IoT
Nag Nami (San Jose State University, USA) and Melody Moh (San Jose State University, USA)
DOI: 10.4018/978-1-5225-8407-0.ch003

Abstract

Intelligent systems are capable of performing tasks on their own with minimal or no human intervention. With the advent of big data and IoT, these intelligent systems have made their way into most industries and homes. With its recent advancements, deep learning has created a niche in the technology space and is actively used in big data and IoT systems globally. With this wider adoption, deep learning models have unfortunately become susceptible to attacks. Research has shown that many accurate, state-of-the-art models can be vulnerable to well-crafted adversarial examples. This chapter aims to provide a concise, in-depth understanding of attacks on and defenses of deep learning models. It first presents the key architectures and application domains of deep learning and their vulnerabilities. Next, it illustrates prominent adversarial examples, including the algorithms and techniques used to generate these attacks. Finally, it describes challenges and mechanisms to counter these attacks and suggests future research directions.
Chapter Preview

Chapter Organization

The chapter first provides an overview of deep learning models, their prominent architectures, and their applications in different domains of big data and IoT. It next discusses how these application domains can be exposed to adversarial-example attacks. It then delves into the concept and details of adversarial examples, the different techniques and algorithms used to generate them, and how these crafted attacks expose the vulnerabilities of popular deep learning applications. This is followed by the challenges of securing models against these attacks and a description of the existing security measures in place to prevent them. Finally, the chapter ends with some concluding remarks and suggestions for future research directions.
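As a concrete illustration of the kind of attack-generation technique surveyed here, one widely known algorithm, the fast gradient sign method (FGSM), perturbs an input by a small step in the sign direction of the loss gradient. The sketch below applies it to a toy logistic-regression classifier; the weights, input, and step size are invented for illustration and are not taken from the chapter.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fgsm(x, grad, eps):
    """FGSM: shift each feature by eps in the sign direction of the
    loss gradient, increasing the loss under an L-infinity budget."""
    step = lambda g: eps if g > 0 else (-eps if g < 0 else 0.0)
    return [xi + step(g) for xi, g in zip(x, grad)]

# Toy logistic-regression classifier (weights are hypothetical).
w, b = [2.0, -1.5, 0.5, 1.0], -0.2
x, y = [0.6, 0.1, 0.3, 0.4], 1.0   # clean input, true label 1

# Gradient of the logistic loss w.r.t. the input: dL/dx = (p - y) * w.
p = sigmoid(dot(w, x) + b)
grad = [(p - y) * wi for wi in w]

x_adv = fgsm(x, grad, eps=0.5)

p_clean = sigmoid(dot(w, x) + b)     # above 0.5: correctly class 1
p_adv = sigmoid(dot(w, x_adv) + b)   # below 0.5: prediction flipped
```

Even though no feature moves by more than the budget eps, the model's prediction flips from class 1 to class 0, which is the core observation behind adversarial-example attacks on far larger deep models.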
