Attacks by Hardware Trojans on Neural Networks

Naveenkumar R., N.M. Sivamangai, P. Malin Bruntha, V. Govindaraj, Ahmed A. Elngar
Copyright: © 2023 | Pages: 28
DOI: 10.4018/978-1-6684-6596-7.ch010

Abstract

The security of neural networks (NNs) has become a crucial and timely theme for basic research as a result of recent developments in neural networks and their use in deep learning techniques. In this chapter, the authors examine the security issues, and potential solutions, that arise in computing hardware for deep neural networks (DNNs). The latest hardware-based attacks against DNNs are then described, with an emphasis on fault injection (FI), hardware Trojan (HT) insertion, and side-channel analysis (SCA). The chapter presents the security issues raised by hardware-based attacks, focusing on hardware Trojans and side-channel analysis, and discusses countermeasures against hardware Trojan and side-channel attacks (SCA) on neural networks.
Chapter Preview

Introduction

Artificial intelligence (AI) and machine learning (ML) challenges that have stood for some time are now being solved to a great extent thanks to DNN-oriented approaches (Buchanan et al., 2015). Deep learning (DL) solutions are crucial to the upcoming features of autonomous systems because DNN models achieve superhuman performance on tasks such as object identification, natural language processing (NLP), and gaming. Although DNN-based designs are highly effective at solving challenging problems, thorough security analyses for DL-based trusted and explainable methods, applications, and platforms are still being developed. The merit of DL approaches owes much to the development of tensor processing units (TPUs) and graphics processing units (GPUs) and their use in data-intensive computational workloads. The architectures of GPUs and TPUs are straightforward but massively parallel, and they lack security features. Creating a DNN model also requires a substantial material investment: for instance, the GPU hardware cost for one current NLP system, Generative Pre-trained Transformer 3 (GPT-3), is projected to be $5 million.

By developing an intricate attack tactic, researchers in this study demonstrate the viability and applicability of trojan attacks on neural networks. Taking a pretrained model and a target prediction output as inputs, the attack engine creates a small piece of input data known as the trojan trigger. The modified model will produce the specified classification output for any valid model input stamped with the trojan trigger. The proposed attack derives the trigger from the original model in a way that causes significant activation in selected neural network (NN) neurons. It is comparable to scanning a person's brain to determine what stimuli would unconsciously excite them, and then using those stimuli as the trojan trigger. Compared with using an arbitrary trigger, this avoids the intensive training needed for the individual to memorize the trigger, which might interfere with the person's prior knowledge. Then, to implant the malicious behaviour, the attack mechanism retrains the model to create a causal link between the small set of neurons that the trigger activates and the desired classification outcome. To compensate for the weight changes (needed to establish the malicious causality) and preserve the original model's functionality, it reverse-engineers a model input for every output classification. The designer then retrains the model using the reverse-engineered inputs and their trigger-stamped equivalents.
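To make the trigger-generation step concrete, the following is a minimal PyTorch sketch of the idea described above: optimize a masked input region so that chosen internal neurons fire strongly. It is an illustration under stated assumptions, not the authors' implementation; `model`, `target_layer`, `neuron_idx`, and `mask` are hypothetical placeholders.

```python
# Hedged sketch of trojan-trigger generation: gradient ascent on a masked
# input region to maximize the activation of attacker-chosen neurons.
# `model`, `target_layer`, `neuron_idx`, and `mask` are hypothetical.
import torch

def generate_trigger(model, target_layer, neuron_idx, mask,
                     steps=200, lr=0.1, target_activation=10.0):
    """Optimize the masked input region so the chosen neurons fire strongly."""
    model.eval()
    trigger = torch.rand(1, 3, 224, 224, requires_grad=True)  # trigger canvas
    activation = {}

    def hook(module, inputs, output):
        activation["value"] = output  # capture the hooked layer's output

    handle = target_layer.register_forward_hook(hook)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(trigger * mask)  # only the masked region acts as the trigger
        act = activation["value"].flatten(1)[:, neuron_idx]
        loss = ((act - target_activation) ** 2).mean()  # drive neurons upward
        loss.backward()
        opt.step()
        with torch.no_grad():
            trigger.clamp_(0, 1)  # keep pixel values in a valid range
    handle.remove()
    return (trigger * mask).detach()
```

After the trigger is found, the retraining step described above would fine-tune the model on the reverse-engineered inputs and their trigger-stamped copies, binding the selected neurons to the attacker's target class.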

DNN techniques are advancing quickly thanks to their strong performance, and because DNNs have entered numerous security-critical applications, the security of DNN systems has grown into a serious and urgent concern. Even though DNNs improve our lives in many instances, attacks on DNNs are extremely harmful and may have dire repercussions (Akhtar et al., 2018). A DNN-based autonomous vehicle, for example, might be tricked into reading a stop sign carrying imperceptible noise as a speed-limit sign, resulting in a serious collision.
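The stop-sign scenario is an instance of an adversarial example. A minimal sketch using the well-known fast gradient sign method (FGSM) illustrates how such imperceptible noise can be crafted; `model`, `image`, and `true_label` are hypothetical placeholders, and the method is one common choice, not necessarily the one used in the attacks the chapter surveys.

```python
# Hedged FGSM sketch: a small, bounded perturbation in the direction that
# most increases the loss often flips the prediction while remaining
# visually unchanged. `model`, `image`, `true_label` are hypothetical.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step by epsilon along the sign of the input gradient.
    adv = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
    return adv  # frequently misclassified despite looking identical
```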

Beyond autonomous vehicles, numerous other “life-and-death” scenarios rely on comparable DNN security, including face recognition, reconnaissance, drones, and automation. The accompanying security issue will become a major concern as billions more DNN-powered devices are anticipated to emerge and take on a bigger part in various facets of our daily lives, particularly in light of the widespread deployment of convolutional neural networks (CNNs) in applications involving images and video.

In this investigation, the researchers concentrate mainly on the confidentiality of CNN-powered systems. Previous research examines the innate characteristics of DNN resilience from the algorithmic perspective. The security of the accompanying hardware platforms, a crucial component of DNN mechanisms, is typically taken for granted. For simpler and quicker system integration, modern integrated circuits (ICs) frequently incorporate third-party intellectual property (IP) blocks, and the trend toward globalization in semiconductor design and manufacturing gives attackers opportunities to launch HT attacks. The HT is one of the most significant hardware attacks, embedding harmful alterations in the target ICs. Trojan attacks are effectively covert because infected systems behave normally in everyday situations, just as uninfected systems do, and fail only when trigger inputs are present (Naveenkumar et al., 2023).
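The stealth property described above can be sketched behaviorally: a trojaned arithmetic unit in a hypothetical accelerator is indistinguishable from a clean one until a rare trigger pattern appears on its inputs. This is a simplified illustration, not a model of any specific Trojan from the literature; the trigger value and the corruption payload are invented for the example.

```python
# Behavioral sketch of a hardware Trojan in a multiply-accumulate (MAC)
# unit. Under ordinary inputs the trojaned unit matches the clean one, so
# functional testing rarely exposes it. All values are illustrative.
TRIGGER_PATTERN = 0xDEAD  # rare input value chosen by the attacker

def mac_clean(acc, a, w):
    return acc + a * w

def mac_trojaned(acc, a, w):
    if a == TRIGGER_PATTERN:       # payload fires only on the trigger input
        return acc - a * w         # silently corrupt the accumulation
    return mac_clean(acc, a, w)    # otherwise identical to the clean unit
```

Because the two functions agree on every input except the trigger, exhaustive comparison is infeasible at realistic bit widths, which is precisely why post-fabrication detection of HTs is hard.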
