Artificial Neural Network What-If Theory

Paolo Massimo Buscema, William J. Tastle
DOI: 10.4018/978-1-7998-0414-7.ch001


Data sets collected independently using the same variables can be compared using a new artificial neural network called the Artificial Neural Network What-If Theory (AWIT). Given a data set that is deemed the standard reference for some object, e.g., a flower, industry, disease, or galaxy, other data sets can be compared against it to identify their proximity to the standard. Thus, data that might not lend itself well to traditional methods of analysis could reveal new perspectives or views of the data and, potentially, new perceptions of novel and innovative solutions. This method comes from the field of artificial intelligence, particularly artificial neural networks, and uses both machine learning and pattern recognition to produce an innovative analysis.
Chapter Preview

General Theory

Using an auto-encoder ANN, we approximate the implicit function of any dataset during the learning phase and assign a fuzzy output to any new input during the recall phase. A fuzzy output is a value in the range [0, 1], in which zero means complete absence or non-membership in the output and one means complete membership. Any other value indicates the degree of partial membership.
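As a brief illustration of this membership reading, the logistic (sigmoid) activation, a common choice for squashing a unit's activation into (0, 1), is assumed here; the chapter itself does not name a specific activation:

```python
import numpy as np

def sigmoid(z):
    # Maps any real activation into (0, 1), read as a degree of membership.
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(-6.0))  # close to 0: near non-membership
print(sigmoid(0.0))   # exactly 0.5: maximal ambiguity of partial membership
print(sigmoid(6.0))   # close to 1: near complete membership
```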

Recent research has improved auto-encoder ANNs in order to optimize a deep learning process (Hinton, Osindero & Teh, 2006), to select the fundamental hidden features of a large dataset (Le et al., 2012), or to reduce the dimensionality of data (Hinton & Salakhutdinov, 2006; Raina, Madhavan & Ng, 2009; Raiko, Valpola & LeCun, 2012). Other approaches have tried to use auto-encoders as unsupervised ANNs able to perform supervised tasks (Larochelle & Bengio, 2008; Bengio, 2009).

We have chosen to look at auto-encoders from a different point of view: we define the testing phase of a trained auto-encoder as the interpretation of a dataset (traditionally referred to as the Testing Dataset) using the logic present in another dataset (the Training Dataset). We regard this point of view as the seminal approach for a new theory, named AWIT (Artificial Neural Networks What If Theory).

The algorithm to implement this new approach to data analysis follows:

  • Let DB1 and DB2 be two different datasets with the same types of variables but possessing different records;

  • The function f() is a non-linear function that optimally interpolates DB1 by means of an auto-encoder ANN consisting of one hidden layer:

    x̂_DB1 = f(h, v*)    (1)
    h = g(x_DB1, w*)    (2)

    • v*, w* = parameters to be estimated by the ANN,

    • x = input variables.

  • The dataset DB2 is rewritten using the ANN trained on the DB1 dataset:

    x̂_DB2 = f(h, v)    (3)
    h = g(x_DB2, w)    (4)

    • v, w = trained parameters estimated by the ANN using DB1,

    • x̂ = output (reconstructed) variables.
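The steps above can be sketched in code. This is a minimal illustration, not the chapter's implementation: it assumes logistic activations and squared-error training, and the names `train_autoencoder` and `awit_rewrite` are illustrative. Equations (1)-(2) correspond to training on DB1; equations (3)-(4) correspond to rewriting DB2 through the frozen weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden=3, lr=0.5, epochs=3000):
    """One-hidden-layer auto-encoder: h = g(x, w), x_hat = f(h, v) (eqs. 1-2)."""
    n_vars = X.shape[1]
    w = rng.normal(scale=0.1, size=(n_vars, n_hidden))  # encoder weights w*
    v = rng.normal(scale=0.1, size=(n_hidden, n_vars))  # decoder weights v*
    for _ in range(epochs):
        h = sigmoid(X @ w)        # eq. (2): h = g(x_DB1, w)
        x_hat = sigmoid(h @ v)    # eq. (1): x_hat = f(h, v)
        # Backpropagate the squared reconstruction error through both layers.
        d_out = (x_hat - X) * x_hat * (1.0 - x_hat)
        d_hid = (d_out @ v.T) * h * (1.0 - h)
        v -= lr * (h.T @ d_out) / len(X)
        w -= lr * (X.T @ d_hid) / len(X)
    return w, v

def awit_rewrite(X2, w, v):
    """Eqs. (3)-(4): pass DB2 through the net trained on DB1, weights frozen."""
    return sigmoid(sigmoid(X2 @ w) @ v)

# DB1: the reference dataset (toy values in [0, 1]); DB2: the dataset to reinterpret.
DB1 = rng.random((40, 4))
DB2 = rng.random((10, 4))
w, v = train_autoencoder(DB1)
X2_hat = awit_rewrite(DB2, w, v)
# One plausible proximity score per DB2 record: how faithfully the DB1-trained
# net reconstructs it (smaller error = closer to the DB1 "logic").
proximity = np.abs(X2_hat - DB2).mean(axis=1)
```

The key design point is that the rewrite of DB2 uses only the frozen parameters v, w estimated on DB1, so any reconstruction error measures how far DB2's records depart from the implicit function of the reference dataset.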
