A Comprehensive Study on Architecture of Neural Networks and Its Prospects in Cognitive Computing

Sushree Bibhuprada B. Priyadarshini
Copyright © 2020 | Pages: 19
DOI: 10.4018/IJSE.2020070103

Abstract

This paper offers an overview of neural networks, covering early neural network architectures, learning methods, and applications. Neural networks are simplified models of biological nervous systems, which is why they have drawn considerable attention from the research community in the domain of artificial intelligence. Such networks are highly interconnected structures composed of a large number of processing elements known as neurons. They learn from examples and exhibit mapping capability, generalization, and fault resilience, together with a high rate of information processing. The current paper discusses the various types of learning methods employed in neural networks. It then details the deep neural network (DNN), its key concepts, optimization strategies, and the activation functions used. Afterwards, logistic regression and conventional optimization approaches are described. Finally, applications of neural networks across various domains are presented before the paper concludes.

2. Artificial Neuron: An Abstract Representation

The human brain is a highly complex structure that can be viewed as a densely connected network of neurons (Neural Network, n.d.; Sivanandam & Deepa, 2011). Accordingly, the biological neuron can be modeled as an artificial neuron, with each constituent of the model bearing an analogy to an actual component of the biological neuron. Figure 1 shows the simple model of the artificial neuron on the basis of which artificial neural networks are built. In the diagram, x1, x2, …, xn represent the n inputs supplied to the artificial neuron, and w1, w2, …, wn represent the weights associated with those inputs, respectively. As with the biological neuron, the total input I received by the artificial neuron can be expressed as shown in the following equation:

I = x1w1 + x2w2 + … + xnwn

or, equivalently,

I = Σ(i=1 to n) xiwi.

This sum is then passed through a non-linear filter Φ, known as the activation function or squashing function, to produce the output y:

y = Φ(I).

In this context, a commonly used activation function is the threshold function. Here the sum I is compared with a threshold value θ: if I is greater than or equal to θ, the output becomes 1; otherwise, it becomes 0.

y = Φ(I − θ),

where Φ is the Heaviside function:

Φ(x) = 1 if x ≥ 0, and Φ(x) = 0 if x < 0.
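The threshold neuron described above can be sketched in a few lines of code. This is an illustrative sketch, not code from the paper; the function names `heaviside` and `neuron_output` are hypothetical:

```python
def heaviside(x):
    """Heaviside step function: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def neuron_output(inputs, weights, theta):
    """Weighted sum I = sum(x_i * w_i), thresholded at theta."""
    I = sum(x * w for x, w in zip(inputs, weights))
    return heaviside(I - theta)

# The neuron fires (outputs 1) only when the weighted input
# reaches the threshold theta:
print(neuron_output([1, 1], [0.6, 0.6], 1.0))  # I = 1.2 >= 1.0, prints 1
print(neuron_output([1, 0], [0.6, 0.6], 1.0))  # I = 0.6 < 1.0, prints 0
```

With suitably chosen weights and threshold, such a neuron can realize simple logical functions (the example above behaves like a two-input AND gate for binary inputs).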
