Artificial Neural Networks and Their Applications in Business

Copyright: © 2018 | Pages: 16
DOI: 10.4018/978-1-5225-2255-3.ch576

Abstract

Despite the natural advantage humans have for recognizing and interpreting patterns, large and complex datasets, as in Big Data, preclude efficient human analysis. Artificial neural networks (ANNs) provide a family of pattern recognition approaches for prediction, clustering, and classification applicable to KDD, with ANN model complexity ranging from simple (for small problems) to highly complex (for large problems). To provide a starting point for readers, this chapter first describes foundational concepts that relate to ANNs. A listing of commonly used ANN methods, heuristics, and criteria for initializing ANNs is then discussed. Common pre- and post-processing methods for dimensionality reduction and data quality issues are then described. The authors then provide a tutorial example of ANN analysis. Finally, the authors list and describe applications of ANNs to specific business-related endeavors for further reading.
Chapter Preview

Introduction

Perspectives of Chapter

Knowledge discovery in datasets (KDD), the process of finding non-obvious patterns or trends in datasets primarily to assist in understanding complicated systems (Mannila, 1996), leverages computational pattern recognition methods to predict, describe, classify, group, and categorize data (Jain, Duin, & Mao, 2000). The nonlinearity of ANNs lends itself to modeling complex data structures; however, it also makes ANNs complex and opaque to many users (Weckman, et al., 2009). While ANNs see use in business applications, some hesitation and misconceptions persist due to the ‘black-box’ nature of ANNs (Dewdney, 1997); hence, traditional statistical models remain far more widely used in practice. Although some interpretability aspects exist when applying ANNs (de Marchi, Gelpi, & Grynaviski, 2004), it should be noted that the interpretability of ANNs is subjective relative to other commonly used tools.

Objectives of Chapter

ANN methods are interconnected networks of nodes that are trained on patterns in data through statistical learning methods (Jain, Duin, & Mao, 2000). Although ANNs are computationally complex, they are inherently statistical in nature and epistemologically similar in function to Bayesian (Beck, King, & Zeng, 2004) and likelihood methods (Verikas & Bacauskiene, 2002). Various software packages are now available for practitioners, including NeuroDimensions (2005), Matlab (2010), JMP (Sall, Lehman, Stephens, & Creighton, 2012), and R (2008). The objective of this chapter is to provide readers with a general background on ANNs, their business applications, and how to develop quality ANN models. The target audience is readers who may not be familiar with this form of mathematical modeling but may want to pursue it for their business needs.

In Young, Bihl, and Weckman (2014), the authors provided a brief overview of ANNs and their applications in business. Herein, the authors have revised this discussion and included new material on ANN architectures and an example end-to-end analysis of business data using ANNs with the JMP 13 Pro platform.

Biological Inspiration of ANNs

ANNs are neurologically inspired computational machine learning models intended to represent complex non-linear input-output relationships. The earliest known development of the ANN model was in the 1940s (McCulloch & Pitts, 1943), with the fundamental building block of the ANN model being the neuron. Figure 1 displays a basic sketch of the biological neural network model (Neuralpower, 2007).

Figure 1. Biological neuron (Young, Holland, & Weckman, 2008)

Computational ANNs extend biological neuron models by considering multiple interconnected nodes, termed “neurons,” which employ statistical methods to learn patterns between inputs and outputs (Jain, Duin, & Mao, 2000). Through organizational and iterative learning principles, the connection weights between neurons, inputs, and outputs are computed so that the network captures a nonlinear input-output relationship (Jain, Duin, & Mao, 2000).
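To make this concrete, the following is a minimal sketch of one forward pass through a small feedforward network in Python with NumPy; the layer sizes, random weights, and logistic (sigmoid) transfer function are illustrative assumptions rather than details taken from the chapter.

```python
import numpy as np

def sigmoid(z):
    """Logistic transfer function, applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative network: 3 inputs -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))   # input-to-hidden connection weights
b_hidden = np.zeros(4)               # hidden-layer biases
W_output = rng.normal(size=(4, 1))   # hidden-to-output connection weights
b_output = np.zeros(1)

x = np.array([0.2, -1.0, 0.5])       # one input pattern

# Each neuron forms a weighted sum of its inputs and passes the result
# through the transfer function; the output layer repeats the same step.
hidden = sigmoid(x @ W_hidden + b_hidden)
output = sigmoid(hidden @ W_output + b_output)
print(output)
```

In a trained network, the weights above would be set by a learning algorithm such as back-propagation rather than drawn at random.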

Key Terms in this Chapter

Pre-Processing: A process of preparing a dataset in order to develop a mathematical model.
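As a minimal illustration of one common pre-processing step (the raw values below are hypothetical), the sketch rescales each attribute to the [0, 1] interval so that no input dominates the network’s weighted sums purely because of its units.

```python
import numpy as np

# Hypothetical raw business data (e.g., order quantity and unit price).
X = np.array([[120.0,  9.99],
              [ 45.0, 24.50],
              [300.0,  4.75]])

# Min-max scaling: rescale each column to the [0, 1] interval.
X_min, X_max = X.min(axis=0), X.max(axis=0)
X_scaled = (X - X_min) / (X_max - X_min)
print(X_scaled)
```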

Epoch: One complete presentation of the entire training set of sample data to the learning algorithm so that an ANN’s weights can be determined.

Supervised Learning: A learning strategy in which the desired output, or dependent attribute, is known.

Knowledge Extraction: The process of discovering how input attributes are used within an ANN to formulate the output such that one can validate functional relationships within the model.

Neuron: An individual building block of an ANN in which weighted input values are transformed via a transfer function into an output, which is typically passed to other portions of the network.

Post-Processing: A process of utilizing a trained mathematical model in order to improve the understanding of the dataset that has been modeled.

Unsupervised Learning: A learning strategy in which the desired output, or dependent attribute, is unknown.

Over-Fitting: Occurs when a mathematical model describes random error or noise instead of the real underlying relationships within a dataset, which artificially produces desirable goodness of fit metrics for training data, but produces poor metrics for testing data.
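As a generic illustration of this behavior (using simple polynomial fits rather than an ANN, purely for brevity), the sketch below shows training error continuing to fall as model flexibility increases while the held-out test error worsens; the data and polynomial degrees are assumptions chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Random split of the 30 samples into training and testing sets.
idx = rng.permutation(x.size)
train, test = idx[:20], idx[20:]

for degree in (1, 3, 9):
    coeffs = np.polyfit(x[train], y[train], degree)
    train_mse = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
    print(degree, round(float(train_mse), 3), round(float(test_mse), 3))
```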

Back-Propagation: A supervised learning method used to determine the weights of an ANN by iteratively minimizing the difference between the desired output and the model’s output.
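The following is a minimal sketch of back-propagation for a single-hidden-layer network trained on the XOR problem; the logistic activations, squared-error loss, plain batch gradient descent, learning rate, and epoch count are all illustrative assumptions, not methods prescribed by the chapter. Each pass through the full training set corresponds to one epoch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy supervised problem: XOR inputs and their desired outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
lr = 0.5                                        # learning rate

for epoch in range(10000):      # one epoch = one pass over the training set
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the network.
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates that shrink the desired-vs-actual difference.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0)

print(np.round(out, 2))   # outputs should move close to the XOR targets
```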
