Neural Networks and Bootstrap Methods for Regression Models with Dependent Errors

Francesco Giordano (University of Salerno, Italy), Michele La Rocca (University of Salerno, Italy) and Cira Perna (University of Salerno, Italy)
DOI: 10.4018/978-1-59904-982-3.ch016

Abstract

This chapter introduces the use of the bootstrap in a nonlinear, nonparametric regression framework with dependent errors. The aim is to construct approximate confidence intervals for the regression function, which is estimated by a single hidden layer feedforward neural network. In this framework, the standard residual bootstrap scheme is not appropriate and may lead to inconsistent results. As an alternative, we investigate the AR-sieve bootstrap and the moving block bootstrap, which generate bootstrap replicates with a proper dependence structure. Both are nonparametric bootstrap schemes, a coherent choice when dealing with neural network models, which are themselves widely used as accurate nonparametric estimation and prediction tools. In this context, both procedures can give satisfactory results, but the AR-sieve bootstrap seems to outperform the moving block bootstrap, delivering confidence intervals with coverages closer to the nominal levels.
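The two resampling schemes can be sketched as follows. The snippet below is a minimal illustration, not the authors' implementation: it assumes the residuals of the fitted neural network regression are available as a NumPy array, and the function names, the least-squares fit of the sieve coefficients, and the choices of block length and autoregressive order are illustrative assumptions.

```python
import numpy as np


def moving_block_bootstrap(residuals, block_length, rng=None):
    """Resample overlapping blocks of residuals to preserve the local
    dependence structure (moving block bootstrap)."""
    rng = np.random.default_rng() if rng is None else rng
    residuals = np.asarray(residuals)
    n = len(residuals)
    n_blocks = int(np.ceil(n / block_length))
    # Overlapping blocks may start at any position 0, ..., n - block_length.
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    blocks = [residuals[s:s + block_length] for s in starts]
    return np.concatenate(blocks)[:n]


def ar_sieve_bootstrap(residuals, order, n_burn=100, rng=None):
    """Fit an AR(order) 'sieve' to the residuals and generate a bootstrap
    series by recursion with i.i.d. resampled innovations (AR-sieve bootstrap)."""
    rng = np.random.default_rng() if rng is None else rng
    e = np.asarray(residuals) - np.mean(residuals)
    n = len(e)
    # Least-squares estimate of the AR coefficients (Yule-Walker would also do).
    y = e[order:]
    X = np.column_stack([e[order - j:n - j] for j in range(1, order + 1)])
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    innov = y - X @ phi
    innov = innov - innov.mean()            # centred estimated innovations
    # Recursively build a bootstrap residual series with resampled innovations,
    # discarding an initial burn-in stretch.
    sim = np.zeros(n + n_burn)
    eps_star = rng.choice(innov, size=n + n_burn, replace=True)
    for t in range(order, n + n_burn):
        sim[t] = phi @ sim[t - order:t][::-1] + eps_star[t]
    return sim[n_burn:]
```

Typically, the resampled residual series is then added back to the fitted regression function to obtain bootstrap replicates of the data, from which the approximate confidence intervals for the regression function are built.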
Chapter Preview

Neural Networks In Regression Models

Let $\{Y_t\}$, $t = 1, \dots, n$, be a (possibly non-stationary) process modelled as

$$Y_t = f(\mathbf{X}_t) + \varepsilon_t, \qquad (1)$$

where $f$ is a nonlinear continuous function, $\mathbf{X}_t$ is a vector of $d$ non-stochastic explanatory variables, and $\{\varepsilon_t\}$ is a stationary noise process with zero mean. The unknown function $f$ in model (1) can be approximated with a single hidden layer feedforward neural network of the form:

$$f(\mathbf{x}) \simeq g(\mathbf{x}; \boldsymbol{\theta}) = c_0 + \sum_{k=1}^{r} c_k \, \phi\!\left(\mathbf{a}_k^{\top}\mathbf{x} + a_{k0}\right), \qquad (2)$$

where $\boldsymbol{\theta} = \left(c_0, c_1, \dots, c_r, a_{10}, \dots, a_{r0}, \mathbf{a}_1^{\top}, \dots, \mathbf{a}_r^{\top}\right)^{\top}$ with $\mathbf{a}_k = (a_{k1}, \dots, a_{kd})^{\top}$; $c_k$, $k = 1, \dots, r$, is the weight of the link between the $k$-th neuron in the hidden layer and the output; $a_{kj}$ is the weight of the connection between the $j$-th input neuron and the $k$-th neuron in the hidden layer. We suppose that the activation function $\phi$ of the hidden layer is the logistic function and that of the output layer is the identity function. Hornik, Stinchcombe, & White (1989) showed that this class of nonlinear functions can approximate any continuous function uniformly on compact sets by increasing the size $r$ of the hidden layer. Barron (1993) showed that, for sufficiently smooth functions, the $L_2$ approximation error with these activation functions is of order $O(1/\sqrt{r})$.
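As an illustration, the following minimal sketch generates data from a model of the form (1) with AR(1) errors and fits a network of the form (2). It assumes scikit-learn's MLPRegressor as the single hidden layer estimator (logistic hidden activation, identity output); the regression function, the design, the noise coefficient and the hidden layer size are arbitrary illustrative choices, not taken from the chapter.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, d, r = 500, 2, 5                          # sample size, inputs, hidden neurons


def f(x):
    """Example nonlinear regression function (illustrative)."""
    return np.sin(2 * np.pi * x[:, 0]) + x[:, 1] ** 2


X = rng.uniform(size=(n, d))                 # design of d explanatory variables

# Stationary, zero-mean dependent noise: an AR(1) process with coefficient 0.6.
eps = np.zeros(n)
innov = rng.normal(scale=0.1, size=n)
for t in range(1, n):
    eps[t] = 0.6 * eps[t - 1] + innov[t]

y = f(X) + eps                               # data generated from model (1)

# Network of the form (2): logistic hidden layer, identity output layer.
net = MLPRegressor(hidden_layer_sizes=(r,), activation='logistic',
                   solver='lbfgs', max_iter=5000)
net.fit(X, y)
f_hat = net.predict(X)                       # neural network estimate of f
residuals = y - f_hat                        # dependent residuals for the bootstrap
```

The residuals computed in the last line are the dependent quantities that the moving block and AR-sieve schemes sketched after the abstract would resample.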
