Neural Networks and Bootstrap Methods for Regression Models with Dependent Errors


Francesco Giordano, Michele La Rocca, Cira Perna
DOI: 10.4018/978-1-59904-982-3.ch016

Abstract

This chapter introduces the use of the bootstrap in a nonlinear, nonparametric regression framework with dependent errors. The aim is to construct approximate confidence intervals for the regression function, which is estimated by a single hidden layer feedforward neural network. In this framework, a standard residual bootstrap scheme is not appropriate and may lead to inconsistent results. As an alternative, we investigate the AR-sieve bootstrap and the moving block bootstrap, which generate bootstrap replicates with a proper dependence structure. Both are nonparametric bootstrap schemes, a natural choice when dealing with neural network models, which are often used as an accurate nonparametric estimation and prediction tool. In this context, both procedures may lead to satisfactory results, but the AR-sieve bootstrap seems to outperform the moving block bootstrap, delivering confidence intervals with coverages closer to the nominal levels.
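By way of illustration, the sketch below shows how the two resampling schemes discussed above might act on a series of regression residuals: a moving block bootstrap that concatenates randomly chosen overlapping blocks, and an AR-sieve bootstrap that fits an autoregression to the residuals and resamples its innovations. The function names, the Yule-Walker fitting step, the block length, and the AR order are illustrative assumptions, not the procedures as implemented in the chapter.

```python
import numpy as np

def moving_block_bootstrap(resid, block_length, rng):
    """Resample the residual series by concatenating randomly chosen
    overlapping blocks, so that short-range dependence is preserved."""
    n = len(resid)
    n_blocks = int(np.ceil(n / block_length))
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    return np.concatenate([resid[s:s + block_length] for s in starts])[:n]

def ar_sieve_bootstrap(resid, p, rng, burn_in=100):
    """Fit an AR(p) to the residuals (Yule-Walker), resample the centred
    innovations i.i.d., and regenerate a series with the fitted AR structure."""
    resid = np.asarray(resid) - np.mean(resid)
    n = len(resid)
    # sample autocovariances and Yule-Walker estimates of the AR coefficients
    r = np.array([resid[:n - k] @ resid[k:] / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:])
    # innovations implied by the fitted AR(p)
    lagged = np.column_stack([resid[p - k - 1:n - k - 1] for k in range(p)])
    innov = resid[p:] - lagged @ phi
    innov -= innov.mean()
    # regenerate a bootstrap residual series from resampled innovations
    boot = np.zeros(n + burn_in)
    draws = rng.choice(innov, size=n + burn_in, replace=True)
    for t in range(p, n + burn_in):
        boot[t] = boot[t - p:t][::-1] @ phi + draws[t]
    return boot[-n:]

# illustrative usage on an artificial AR(1) residual series
rng = np.random.default_rng(0)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.6 * e[t - 1] + rng.normal()
resid_star_mb = moving_block_bootstrap(e, block_length=10, rng=rng)
resid_star_ar = ar_sieve_bootstrap(e, p=1, rng=rng)
```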

Neural Networks In Regression Models

Let $\{Y_t\}$, $t = 1, \ldots, n$, be a (possibly non-stationary) process modelled as:

$$Y_t = f(\mathbf{x}_t) + \varepsilon_t, \qquad (1)$$

where $f$ is a nonlinear continuous function, $\mathbf{x}_t$ is a vector of $d$ non-stochastic explanatory variables, and $\{\varepsilon_t\}$ is a stationary noise process with zero mean. The unknown function $f$ in model (1) can be approximated with a single hidden layer feedforward neural network of the form:

$$g(\mathbf{x}_t; \boldsymbol{\theta}) = c_0 + \sum_{k=1}^{m} c_k \, \phi\!\left(\mathbf{a}_k' \mathbf{x}_t + a_{k0}\right), \qquad (2)$$

where $\boldsymbol{\theta} = (c_0, c_1, \ldots, c_m, a_{10}, \mathbf{a}_1', \ldots, a_{m0}, \mathbf{a}_m')'$ with $\mathbf{a}_k = (a_{k1}, \ldots, a_{kd})'$; $c_k$, $k = 1, \ldots, m$, is the weight of the link between the $k$-th neuron in the hidden layer and the output; $a_{kj}$ is the weight of the connection between the $j$-th input neuron and the $k$-th neuron in the hidden layer. We suppose that the activation function of the hidden layer is the logistic function $\phi(u) = (1 + e^{-u})^{-1}$ and that of the output layer is the identity function. Hornik, Stinchcombe, & White (1989) showed that this class of nonlinear functions can approximate any continuous function uniformly on compact sets by increasing the size $m$ of the hidden layer. Barron (1993) showed that, for sufficiently smooth functions, the $L_2$ approximation error with these activation functions is of order $O(1/\sqrt{m})$.
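As a minimal numerical sketch of the network in (2), the following snippet (assuming NumPy; the function names and parameter shapes are illustrative, not the chapter's code) evaluates a single hidden layer feedforward network with logistic hidden units and an identity output unit.

```python
import numpy as np

def logistic(u):
    # hidden-layer activation phi(u) = 1 / (1 + exp(-u))
    return 1.0 / (1.0 + np.exp(-u))

def network(x, c0, c, A, a0):
    """Single hidden layer feedforward network as in (2).
    x  : (n, d) inputs              c0 : output bias
    c  : (m,) hidden-to-output weights
    A  : (m, d) input-to-hidden weights, a0 : (m,) hidden biases."""
    hidden = logistic(x @ A.T + a0)   # (n, m) hidden-layer outputs
    return c0 + hidden @ c            # identity output layer

# tiny usage example with d = 2 inputs and m = 3 hidden units (illustrative values)
rng = np.random.default_rng(1)
m, d = 3, 2
x = rng.uniform(size=(5, d))
y_hat = network(x, 0.1, rng.normal(size=m), rng.normal(size=(m, d)), rng.normal(size=m))
```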
