Artificial Neural Networks and Data Science

Trevor Bihl, William A. Young II, Adam Moyer, Steven Frimel
Copyright: © 2023 | Pages: 23
DOI: 10.4018/978-1-7998-9220-5.ch052


Artificial neural networks (ANNs) are a family of techniques commonly employed to recognize and interpret patterns in big data, with applications in prediction, clustering, classification, and the identification of previously unknown data patterns. This article describes foundational concepts that relate to ANNs, including how ANNs are linked to biological concepts and the underlying ANN families. The article includes an explanation of common ANN methods, architecture/hyperparameter determination for initializing ANNs, and current research directions. The article concludes with a discussion of the need for algorithmic transparency and repeatability of research.
Chapter Preview


Computational ANNs are based on biological neuron models and consist of multiple interconnected nodes termed “neurons.” However, unlike biological neurons, ANNs employ statistical methods to learn patterns between inputs and outputs (Jain, Duin, & Mao, 2000). A conceptualization of the interconnected neurons of an ANN is presented in Figure 1. Through organizational and iterative principles, connection weights between neurons, inputs, and outputs are computed to learn a nonlinear input-output relationship (Jain, Duin, & Mao, 2000).
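The interconnected-neuron computation described above can be sketched in a few lines of Python. The sketch below shows a single forward pass through one hidden layer: each neuron forms a weighted sum of its inputs and passes it through a nonlinear transfer function. The weight values are hypothetical, chosen only for illustration, and in practice would be learned rather than fixed.

```python
import math

def sigmoid(x):
    # Logistic transfer function: squashes any real value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden neuron computes a weighted sum of the inputs and
    # passes it through the transfer function.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # The output neuron does the same over the hidden activations,
    # yielding a nonlinear input-output mapping overall.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Illustrative (hypothetical) connection weights.
y = forward([1.0, 0.5],
            hidden_weights=[[0.4, -0.2], [0.3, 0.8]],
            output_weights=[0.7, -0.5])
```

Stacking such layers, with weights adjusted iteratively during training, is what allows an ANN to approximate complex nonlinear relationships.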

Figure 1.

Basic conceptualization of an ANN


Key Terms in this Chapter

Back-Propagation: A supervised learning method used to determine the weights of an ANN, where the difference between the desired and the model’s output is minimized.
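As a minimal sketch of this idea, the snippet below trains a single sigmoid neuron by gradient descent on a hypothetical toy task (learning logical OR), repeatedly nudging the weights to shrink the difference between the desired output and the model's output. A full back-propagation implementation would propagate these error gradients backward through multiple layers; the learning rate and epoch count here are illustrative choices, not values from the chapter.

```python
import math

def sigmoid(x):
    # Logistic transfer function used by the neuron.
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical supervised training set: desired output is the OR of two inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [0.0, 0.0]   # connection weights
b = 0.0          # bias weight
lr = 0.5         # learning rate

for epoch in range(5000):          # one epoch = one pass over the training set
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Gradient of the squared error for a sigmoid unit; the update
        # moves the model's output toward the desired output.
        delta = (out - target) * out * (1.0 - out)
        w[0] -= lr * delta * x[0]
        w[1] -= lr * delta * x[1]
        b    -= lr * delta
```

After training, the learned weights reproduce the desired outputs; the same error-minimizing update, applied layer by layer, is the core of back-propagation.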

Neuron: An individual building block of an ANN in which weighted input values are transformed via a transfer function into an output, which is typically passed to other portions of the network.

Knowledge Extraction: The process of discovering how input attributes are used within an ANN to formulate the output such that one can validate functional relationships within the model.

Over-Fitting: Occurs when a mathematical model describes random error or noise instead of the real underlying relationships within a dataset, which artificially produces desirable goodness of fit metrics for training data, but produces poor metrics for testing data.
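A compact way to see this effect, sketched below with hypothetical data rather than an ANN, is exact polynomial interpolation: the model passes through every noisy training point, so training error is zero, yet between points it chases the noise rather than the simple underlying trend.

```python
def lagrange_predict(xs, ys, x):
    # Evaluate the interpolating polynomial that passes exactly
    # through every (xs[i], ys[i]) training point.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical training data: a noisy sample of the simple trend y = x.
train_x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
train_y = [0.1, 0.9, 2.2, 2.8, 4.1, 4.9]

# The degree-5 interpolant reproduces every training point exactly,
# producing deceptively perfect goodness-of-fit metrics on training data.
train_errors = [abs(lagrange_predict(train_x, train_y, x) - y)
                for x, y in zip(train_x, train_y)]

# At held-out inputs, however, the model's swings reflect the random
# noise it memorized rather than the underlying relationship.
test_prediction = lagrange_predict(train_x, train_y, 4.5)
```

The same pathology arises in an over-parameterized ANN trained too long: training metrics look excellent while testing metrics degrade.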

Post-Processing: A process of utilizing a trained mathematical model in order to improve the understanding of the database that has been modeled.

Unsupervised Learning: A learning strategy in which the desired output, or dependent attribute, is unknown.

Epoch: A single complete pass of the entire training set of sample data through the learning algorithm, during which an ANN’s weights are updated.

Pre-Processing: A process of preparing a dataset in order to develop a mathematical model.

Supervised Learning: A learning strategy in which the desired output, or dependent attribute, is known.
