Fundamental Categories of Artificial Neural Networks

Arunaben Prahladbhai Gurjar, Shitalben Bhagubhai Patel
Copyright: © 2022 | Pages: 30
DOI: 10.4018/978-1-6684-2408-7.ch001

Abstract

The new era of the world uses artificial intelligence (AI) and machine learning. The combination of AI and machine learning is called an artificial neural network (ANN). Artificial neural networks can be implemented as hardware- or software-based components, and they use different topologies and learning algorithms. An artificial neural network works similarly to the human nervous system. An ANN is a nonlinear computing model based on activities performed by the human brain, such as classification, prediction, decision making, and visualization, drawing only on previous experience. ANNs are used to solve complex, hard-to-manage problems by accruing knowledge about the environment. There are different types of artificial neural networks available in machine learning. All types of artificial neural networks work on the basis of mathematical operations and require a set of parameters to obtain results. This chapter gives an overview of the various types of neural networks, such as feed-forward, recurrent, feedback, and classification-prediction networks.
Chapter Preview

Convolutional Neural Networks

A CNN architecture consists of several ConvNet stages, each pairing a convolution module with a pooling/subsampling module. Whereas traditional ConvNet pooling modules take the average or the maximum of the pooled values, this architecture uses Lp pooling, and its normalization is subtractive rather than divisive: the mean of each unit's immediate neighbourhood is subtracted from the unit's own output (Lawrence, 1997). Finally, multi-stage features are used in place of single-stage ones.
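
To make the pooling and normalization ideas concrete, here is a minimal NumPy sketch of Lp pooling over non-overlapping windows and of subtractive (rather than divisive) local normalization. The window size, neighbourhood radius, and value of p are illustrative assumptions, not values taken from the chapter.

```python
import numpy as np

def lp_pool1d(x, window=2, p=2.0):
    # Lp pooling: (sum_i |x_i|^p)^(1/p) over each non-overlapping window.
    # Large p approaches max pooling; p = 1 is proportional to average pooling.
    x = np.asarray(x, dtype=float)
    x = x[: len(x) - len(x) % window].reshape(-1, window)
    return (np.abs(x) ** p).sum(axis=1) ** (1.0 / p)

def subtractive_normalize1d(x, radius=1):
    # Subtractive normalization: subtract the mean of each value's local
    # neighbourhood from the value itself (no division, unlike divisive normalization).
    x = np.asarray(x, dtype=float)
    padded = np.pad(x, radius, mode="edge")
    local_mean = np.array([padded[i:i + 2 * radius + 1].mean() for i in range(len(x))])
    return x - local_mean

x = np.array([1.0, 3.0, 2.0, 8.0, 4.0, 4.0])
print(lp_pool1d(x, window=2, p=2.0))         # Lp-pooled feature map
print(subtractive_normalize1d(x, radius=1))  # locally mean-centred features
```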

One-dimensional convolution is an operation between a weight vector m ∈ R^m and an input vector viewed as a sequence s ∈ R^s. The vector m is the filter of the convolution. Concretely, we think of s as an input sentence, where s_i ∈ R is a single feature value associated with the i-th word of the sentence. The idea behind one-dimensional convolution is to take the dot product of the vector m with every m-gram in the sentence s to obtain another sequence c:

c_j = m^T s_{j−m+1:j}    (1)

Equation 1 gives rise to two types of convolution depending on the range of the index j. The narrow type of convolution requires s ≥ m and yields a sequence c ∈ R^(s−m+1), with j ranging from m to s. The wide type of convolution places no requirements on s or m and yields a sequence c ∈ R^(s+m−1), where the index j ranges from 1 to s + m − 1. Out-of-range input values s_i, where i < 1 or i > s, are taken to be zero. The result of the narrow convolution is a subsequence of the result of the wide convolution (Cireşan, 2011). The two types of one-dimensional convolution are illustrated in Figure 1.
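
As a concrete illustration of Equation 1, the following NumPy sketch computes both the narrow and the wide one-dimensional convolution of a toy input sequence; the input values and the filter are made up for the example.

```python
import numpy as np

def conv1d(s, m, mode="narrow"):
    """One-dimensional convolution from Equation 1: c_j = m^T s_{j-m+1:j}."""
    s = np.asarray(s, dtype=float)
    m = np.asarray(m, dtype=float)
    k = len(m)
    if mode == "wide":
        # Pad with k - 1 zeros on both sides so j can run from 1 to s + m - 1;
        # out-of-range inputs are treated as zero, as stated in the text.
        s = np.concatenate([np.zeros(k - 1), s, np.zeros(k - 1)])
    elif len(s) < k:
        raise ValueError("narrow convolution requires s >= m")
    # Dot product of the filter with every m-gram of the (possibly padded) input.
    return np.array([m @ s[j - k:j] for j in range(k, len(s) + 1)])

s = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 7.0, 6.0])  # toy "sentence" of feature values
m = np.array([1.0, 0.0, -1.0])                     # filter of width m = 3

narrow = conv1d(s, m, "narrow")  # length s - m + 1 = 5
wide = conv1d(s, m, "wide")      # length s + m - 1 = 9
print(narrow)
print(wide)
```

Printing both results shows that the narrow output (length s − m + 1 = 5) reappears as a contiguous slice of the wide output (length s + m − 1 = 9), matching the statement above.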

Figure 1. Narrow and wide types of one-dimensional convolution (m = 5)

Figure 2. A classic convolutional network

Why ConvNets Over Feed-Forward Neural Nets?

Figure 3. Flattening of a 3x3 image matrix into a 9x1 vector

An image is just a matrix of pixel values, isn't it? So why not simply flatten the image (for example, a 3x3 image matrix into a 9x1 vector) and feed it to a multilayer perceptron for classification?

For simple images (Kalchbrenner, 2014), this method may achieve adequate accuracy when predicting classes, but it performs poorly on complex images with pixel dependencies throughout.

A ConvNet is able to efficiently capture the spatial and temporal dependencies in an image through the application of relevant filters. The architecture fits the image data better because of the reduced number of parameters involved and the reusability of weights. In other words, the network can be trained to understand the sophistication of the image better.
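
A rough back-of-the-envelope sketch (plain Python; the layer sizes are assumptions chosen for illustration) shows why weight reuse matters: a fully connected layer on a flattened image learns one weight per input-pixel/output-unit pair, while a convolutional layer reuses one small filter across every image location.

```python
# Assumed sizes for illustration only.
height, width, channels = 32, 32, 3   # small RGB input
hidden_units = 128                    # hypothetical MLP hidden-layer size

# Fully connected layer on the flattened image: one weight per pixel per unit.
dense_params = height * width * channels * hidden_units   # 393,216 weights

# Convolutional layer: one 3x3 filter per output channel, shared over all locations.
kernel, filters = 3, 128
conv_params = kernel * kernel * channels * filters         # 3,456 weights

print(dense_params, conv_params)
```

With these illustrative numbers, the convolutional layer needs under 1% of the weights of the dense layer, because the same filter slides over every image location.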

Input Image

Figure 4. A 4x4x3 RGB image

In the figure we have an RGB image that has been separated into its three color planes: red, green, and blue. Images exist in a number of such color spaces: grayscale, RGB, HSV, CMYK, and so on.
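
A small NumPy sketch of the kind of image shown in Figure 4: the last axis holds the three color planes, which can be inspected separately. The pixel values are random placeholders.

```python
import numpy as np

# 4x4 RGB image: height x width x channels, with random placeholder pixel values.
rgb_image = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Separate the three color planes along the last axis.
red_plane = rgb_image[..., 0]
green_plane = rgb_image[..., 1]
blue_plane = rgb_image[..., 2]

print(rgb_image.shape, red_plane.shape)  # (4, 4, 3) (4, 4)
```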

You can imagine how computationally intensive things get once images reach larger dimensions, for example 8K (7680 × 4320). The role of the ConvNet is to reduce the images into a form that is easier to process, without losing the features that are critical for a good prediction. This is important when designing an architecture that is not only good at learning features but also scalable to massive datasets.
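
For a sense of scale, the arithmetic below (plain Python; the three 2x2 subsampling stages are a hypothetical example, not an architecture from the chapter) shows how many raw values an 8K RGB frame contains and how quickly pooling-style downsampling shrinks the spatial resolution.

```python
height, width, channels = 4320, 7680, 3
print(height * width * channels)   # 99,532,800 raw input values in one 8K RGB frame

for stage in range(1, 4):          # three hypothetical 2x2 pooling/subsampling stages
    height, width = height // 2, width // 2
    print(stage, height, width)    # 2160x3840, then 1080x1920, then 540x960
```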
