Deep Learning Approaches to Overcome Challenges in Forensics

Kiruthigha M. (Anna University, Chennai, India) and Senthil Velan S. (Amity University, Dubai, UAE)
Copyright: © 2021 |Pages: 12
DOI: 10.4018/978-1-7998-4900-1.ch005

Abstract

Cyber forensics deals with collecting, extracting, analysing, and finally reporting the evidence of a crime. Investigating a crime typically takes time, and involving deep learning methods in cyber forensics can speed up the investigation procedure. Deep learning covers areas like image classification, morphing, and behaviour analysis. Forensics happens where the data is, and people share their activities, pictures, videos, and visited locations on the most readily available platform: social media. The abundance of information available on social networking platforms renders them a favourite of cybercriminals. By compromising a profile, a hacker can gain access to its data and modify and use it for various activities. Unscrupulous activities on such platforms include stalking, bullying, defamation, and the circulation of illegal or pornographic material. Social network forensics is more than the application of computer investigation and analysis techniques such as collecting information from online sources. CNNs and autoencoders can learn and extract features from an image.

Introduction to Deep Learning

Deep learning is a machine learning technique that enables a computer to learn by example. Deep learning models are built from artificial neural networks (ANNs). An ANN consists of layers of nodes, much as the human brain consists of layers of neurons. Each node in a layer is connected to the nodes of the adjacent layer. Signals are transmitted between nodes as they are between neurons in the brain, and each connection carries a weight; a more heavily weighted connection has a larger effect on the next layer. A system that learns through deep learning does so much as a toddler does: each layer applies a non-linear transformation to its input and uses what it has learnt to build a statistical output. The learning is iterated until the required level of accuracy is reached. The term "deep" refers to the number of layers the input has to pass through.

Figure 1. Artificial Neural Networks
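
The layer-by-layer idea can be illustrated with a minimal Python/NumPy sketch of a forward pass. The layer sizes, random weights, and ReLU activation below are illustrative assumptions, and the iterative weight updates of training are omitted.

```python
import numpy as np

# Minimal sketch of a feed-forward ANN: each layer applies a weighted sum
# followed by a non-linear transformation. Layer sizes and weights are
# illustrative assumptions, not values from the chapter.

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 2]          # input, two hidden layers, output

# Randomly initialised weights and biases for each layer.
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input vector through every layer ("deep" = many layers)."""
    a = x
    for W, b in zip(weights, biases):
        a = relu(a @ W + b)          # non-linear transformation per layer
    return a

print(forward(rng.normal(size=4)))
```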

Convolutional Neural Networks (CNN)

CNNs have proven their efficiency in image classification and processing. Although they are ideal for image data, they can be applied to non-image data as well. A CNN can be explained in four steps (a short code sketch follows the list):

1. Convolution: detects features with the help of kernels (filters) and creates feature maps.

2. ReLU (Rectified Linear Unit): applies the rectified linear unit activation to increase the non-linearity of the image.

3. Pooling: helps the CNN detect features across images irrespective of lighting, position, angle, etc. Max pooling helps to preserve the important features of the image.

4. Flattening: the feature-map matrix is flattened into a single column and fed to a neural network.
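
The four steps can be illustrated with a short sketch. The example below is a minimal, hypothetical model written with the TensorFlow Keras API; the 32×32 RGB input shape, the filter count, and the 10-class softmax output are illustrative assumptions rather than values from the chapter, and training is omitted.

```python
import tensorflow as tf

# Minimal CNN sketch following the four steps above.
# Input shape, filter count, and class count are illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),                 # a small RGB image
    # 1. Convolution: kernels (filters) produce feature maps.
    tf.keras.layers.Conv2D(filters=16, kernel_size=3),
    # 2. ReLU: increases the non-linearity.
    tf.keras.layers.ReLU(),
    # 3. Pooling: max pooling preserves the strongest features.
    tf.keras.layers.MaxPooling2D(pool_size=2),
    # 4. Flattening: feature maps become a single column fed to a dense layer.
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()
```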

Figure 2. Convolutional Neural Networks

Recurrent Neural Networks (RNN)

In an RNN, nodes are connected to form a directed graph along a temporal sequence. An RNN has memory: it retains what it has calculated at earlier time steps and carries that information forward through time. This feature makes RNNs effective for time-series prediction. A widely used variant of the RNN is the LSTM (Long Short-Term Memory) network.

How an RNN works (a short code sketch follows the list):

1. A single temporal input is provided at each time step.

2. The current state is calculated from the current input and the previous state.

3. This is iterated through any number of time steps.

4. Once all time steps are over, the final state is used to calculate the output.
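
A minimal sketch of these steps in Python/NumPy is shown below. The dimensions, random weights, and tanh activation are illustrative assumptions; training and the LSTM gating mechanism are omitted.

```python
import numpy as np

# Minimal RNN sketch following the four steps above.
# Sizes and random weights are illustrative assumptions.
rng = np.random.default_rng(0)
input_size, hidden_size, output_size = 3, 5, 2

W_xh = rng.normal(scale=0.1, size=(input_size, hidden_size))   # input -> state
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # state -> state
W_hy = rng.normal(scale=0.1, size=(hidden_size, output_size))  # state -> output

def rnn_forward(inputs):
    """inputs: a sequence of vectors, one per time step."""
    h = np.zeros(hidden_size)                  # initial state (the "memory")
    for x_t in inputs:                         # 1. one temporal input at a time
        h = np.tanh(x_t @ W_xh + h @ W_hh)     # 2./3. current state from input + previous state
    return h @ W_hy                            # 4. output from the final state

sequence = [rng.normal(size=input_size) for _ in range(4)]
print(rnn_forward(sequence))
```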

Figure 3. Recurrent Neural Networks

Boltzmann Machine (BM)

Boltzmann machines are stochastic deep learning models that consist only of visible and hidden nodes; they contain no output node. The input (visible) nodes activate certain nodes in the hidden layer, and the activated hidden nodes in turn reconstruct the visible nodes. Boltzmann machines are generative models: in the forward pass they produce activations of the hidden nodes, and in the backward pass they perform a reconstruction of the input.
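
These forward and backward passes can be sketched for a restricted Boltzmann machine (a common variant with no connections within a layer). The layer sizes, random weights, and sigmoid activation below are illustrative assumptions, and training (for example, contrastive divergence weight updates) is omitted.

```python
import numpy as np

# Minimal restricted Boltzmann machine sketch: visible and hidden nodes only,
# no output node. Sizes and random weights are illustrative assumptions.
rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3

W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # visible-hidden weights
b_visible = np.zeros(n_visible)
b_hidden = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(v):
    """Forward pass: visible nodes stochastically activate hidden nodes."""
    p_h = sigmoid(v @ W + b_hidden)
    return (rng.random(n_hidden) < p_h).astype(float)

def backward(h):
    """Backward pass: hidden nodes reconstruct the visible (input) nodes."""
    return sigmoid(h @ W.T + b_visible)

v = rng.integers(0, 2, size=n_visible).astype(float)   # a binary input vector
h = forward(v)
v_reconstructed = backward(h)
print(v, h, v_reconstructed, sep="\n")
```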

Figure 4. Boltzmann Machine
