Trigonometric Polynomial Higher Order Neural Network Group Models and Weighted Kernel Models for Financial Data Simulation and Prediction
Lei Zhang (University of Technology, Australia), Simeon J. Simoff (University of Western Sydney, Australia) and Jing Chun Zhang (IBM, Australia)
Copyright: © 2009
This chapter introduces trigonometric polynomial higher order neural network models. In the area of financial data simulation and prediction, no single neural network model can handle the wide variety of data and perform well in the real world. One way to address this difficulty is to develop a number of new models with different algorithms: a wider variety of models gives financial operators a better chance of finding one suited to their data. This was the major motivation for this chapter. The theoretical principles of these improved models are presented and demonstrated, and experiments are conducted using real-life financial data.
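To make the notion of a trigonometric polynomial model concrete, the sketch below fits a degree-N trigonometric polynomial (a truncated Fourier-style sum of sines and cosines) to a series by linear least squares. This illustrates only the trigonometric polynomial basis itself, not the chapter's higher order neural network group models or their training algorithms; the function names and the synthetic data are illustrative assumptions.

```python
# Minimal sketch, assuming a basis of 1, cos(kx), sin(kx) for k = 1..degree.
import numpy as np

def trig_design_matrix(x, degree):
    """Columns: 1, cos(x), sin(x), ..., cos(degree*x), sin(degree*x)."""
    cols = [np.ones_like(x)]
    for k in range(1, degree + 1):
        cols.append(np.cos(k * x))
        cols.append(np.sin(k * x))
    return np.column_stack(cols)

def fit_trig_poly(x, y, degree):
    """Least-squares coefficients of the trigonometric polynomial."""
    A = trig_design_matrix(x, degree)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def eval_trig_poly(x, coeffs, degree):
    """Evaluate the fitted trigonometric polynomial at points x."""
    return trig_design_matrix(x, degree) @ coeffs

# Synthetic series with a periodic component plus small noise
# (stands in for a financial series; purely illustrative data).
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
y = 1.5 + 0.8 * np.sin(x) - 0.3 * np.cos(2 * x) + 0.01 * rng.standard_normal(200)

coeffs = fit_trig_poly(x, y, degree=3)
y_hat = eval_trig_poly(x, coeffs, degree=3)
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
print(round(rmse, 3))
```

In the chapter's models, terms of this kind appear inside network units whose weights are learned rather than solved in closed form; the least-squares fit here is only the simplest way to see what the basis can represent.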
The basic ideas behind Artificial Neural Networks (ANNs) are not new. McCulloch & Pitts developed their simplified single-neuron model over 50 years ago. Widrow developed his ‘ADALINE’ and Rosenblatt the ‘PERCEPTRON’ during the 1960s. Multi-layer feed-forward networks (Multi-Layer Perceptrons, or MLPs) and the back-propagation algorithm were developed during the late 1970s, and Hopfield devised his recurrent (feedback) network during the early 1980s. The development of MLPs and ‘Hopfield nets’ heralded a resurgence of worldwide interest in ANNs, which has continued unabated ever since.
ANNs are a type of computer based on (inspired by) models of biological neural networks (brains). It should be emphasized that nobody fully understands how biological neural networks work. Despite this, ANNs have captured the imagination of research scientists and practitioners alike: the prospect of producing computers based on the workings of the human brain is truly inspiring.