Specification and Construction of the Perceptron Multicapa Model

Paola Andrea Sánchez-Sánchez (University Simón Bolívar, Colombia), José Rafael García-González (University Simón Bolívar, Colombia) and Leidy Haidy Perez Coronell (University Simón Bolívar, Colombia)
DOI: 10.4018/978-1-7998-3351-2.ch007


The objective of this chapter is to analyze the problems surrounding the task of forecasting with neural networks and the factors that affect model construction and often lead to inconsistent results, with emphasis on the selection of the training algorithm, the number of neurons in the hidden layer, and the input variables. The methodology focuses on time series forecasting, motivated by the growing need for tools that facilitate decision-making, especially for series whose noise and variability suggest nonlinear dynamics. Neural networks have emerged as an attractive approach for representing such behaviors because of their adaptability, generalization, and learning capabilities. Practical evidence shows that the Delta-Delta and RProp training methods exhibit behaviors different from those expected.

1. Introduction

Time series forecasting has received a great deal of attention in recent decades, owing to the growing need for effective tools that facilitate decision-making and overcome the theoretical, conceptual, and practical limitations of traditional approaches. This motivation has led to the emergence of a wide range of models, among which neural networks have demonstrated broad potential because of their adaptability, generalization, learning capability, and ability to represent nonlinear relationships. Formally, the objective of time series forecasting is to find a flexible mathematical functional form that approximates the data-generating process with sufficient precision, so that it adequately represents the regular and irregular patterns the series may present, allowing future behavior to be extrapolated from the constructed representation (Lachtermacher & Fuller, 1995). However, the choice of the appropriate model for each series depends on the characteristics the series possesses, and its utility is associated with the degree of similarity between the dynamics of the series-generating process and the mathematical formulation made of it (Contreras Juárez, Atziry Zuniga, Martínez Flores, & Sánchez Partida, 2016; Velásquez & Franco, 2012; Sánchez P., 2008).
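As an illustration (not taken from the chapter), the functional form described above is commonly obtained by feeding the network lagged values of the series, so that it approximates the unknown function f in y_t = f(y_{t-1}, ..., y_{t-p}) + e_t. The following sketch, with a hypothetical helper name and a toy series, shows how a univariate series can be framed as such a supervised problem:

```python
import numpy as np

def make_lagged_matrix(series, p):
    """Build (X, y) pairs: each row of X holds p lagged values of the
    series, and y holds the corresponding one-step-ahead target."""
    series = np.asarray(series, dtype=float)
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    return X, y

# Toy series for illustration only.
series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
X, y = make_lagged_matrix(series, p=2)
# X rows: [1, 2], [2, 3], [3, 4], [4, 5]; targets y: [3, 4, 5, 6]
```

A multilayer perceptron trained on these (X, y) pairs plays the role of the flexible functional form: whatever nonlinear mapping it learns from the lagged inputs to the target is the approximation of the data-generating process used for extrapolation.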

The attractiveness of neural networks for time series prediction lies in their ability to identify hidden dependencies, especially nonlinear ones, from a finite sample, which has earned them recognition as universal function approximators (Cybenko, 1989; Franses & Van Dijk, 2000; Hornik, 1991; Hornik, Stinchcombe, & White, 1989). Perhaps the main advantage of this approach over other models is that neural networks do not start from a priori assumptions about the functional relationship between the series and its explanatory variables, a highly desirable characteristic when the data-generating mechanism is unknown or unstable (Qi & Zhang, 2001). In addition, their high capacity for generalization allows them to learn behaviors and extrapolate them, which leads to better forecasts (De Gooijer & Kumar, 1992; Plazas-Nossa & Torres, 2015).

The growing interest in the development of forecasting applications with neural networks is evidenced by the more than 5,000 research articles published in the literature (Crone & Kourentzes, 2009). However, as Zhang, Patuwo, and Hu (1998) state, inconsistent results about the performance of neural networks in time series forecasting are often reported; many conclusions are drawn from empirical studies, yielding limited results that frequently cannot be extended to general applications and are not replicable. In cases where a neural network performs worse than linear statistical models or other models, the cause may be that the series studied do not exhibit high volatility, that the neural network used for comparison was not adequately trained, that the criteria for selecting the best model are not comparable, or that the configuration used is not appropriate for the characteristics of the data. Meanwhile, many of the publications that report superior performance of neural networks concern novel paradigms or extensions of existing methods, architectures, and training algorithms, but lack a reliable and valid empirical evaluation of their performance (Correa Henao & Montoya Suárez, 2013). The large number of factors that must be determined to arrive at a suitable network model for forecasting, spanning the network configuration, the training, validation, and forecasting process, and the data sample, makes neural networks an unstable technique, since any change in training or in some parameter produces large changes in the prediction (Yu, Wang, & Lai, 2009; Sánchez & García, 2017; Bienvenido-Huertas, Marin, Sanchez-Garcia, Fernandez-Valderrama, & Moyano, 2019).
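The instability described above can be made concrete with a small experiment. In the sketch below, which is illustrative and not from the chapter (the helper name, data, and hyperparameters are all assumed), the same one-hidden-layer perceptron is trained twice on identical data, changing only the random seed used for weight initialization; the two fitted models generally produce different forecasts:

```python
import numpy as np

def fit_mlp(X, y, hidden, seed, lr=0.05, epochs=500):
    """Train a one-hidden-layer perceptron (tanh hidden units, linear
    output) by plain gradient descent on mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        err = (H @ W2 + b2) - y             # prediction error
        gW2 = H.T @ err / len(y)            # gradients of 0.5 * MSE
        gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H**2)
        gW1 = X.T @ dH / len(y)
        gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2

# Identical data and architecture; only the initialization seed changes.
X = np.array([[0.1, 0.2], [0.2, 0.3], [0.3, 0.4], [0.4, 0.5]])
y = np.array([0.3, 0.4, 0.5, 0.6])
f_a = fit_mlp(X, y, hidden=3, seed=0)
f_b = fit_mlp(X, y, hidden=3, seed=1)
# At an input outside the training range, the two forecasts
# typically diverge, illustrating the sensitivity discussed above.
```

The same kind of divergence appears when the number of hidden neurons or the training algorithm is varied, which is why the chapter treats these choices as central to obtaining consistent forecasting results.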
