Artificial Higher Order Neural Network Nonlinear Models: SAS NLIN or HONNs?

Ming Zhang (Christopher Newport University, USA)
DOI: 10.4018/978-1-59904-897-0.ch001

Abstract

This chapter presents a general format for Higher Order Neural Networks (HONNs) for nonlinear data analysis, together with six different HONN models. It proves mathematically that HONN models can converge with mean squared errors close to zero, and it illustrates the learning algorithm with its weight-update formulas. The HONN models are compared with SAS Nonlinear (NLIN) models, and the results show that the HONN models are 3% to 12% more accurate. Moreover, the chapter shows how to use HONN models to find the best model, order, and coefficients without writing a regression expression, declaring parameter names, or supplying initial parameter values.

Introduction

Background of Higher-Order Neural Networks (HONNs)

Although traditional Artificial Neural Network (ANN) models are recognized for their strong performance in pattern matching, pattern recognition, and mathematical function approximation, they often become stuck in local, rather than global, minima. In addition, ANNs take an unacceptably long time to converge in practice (Fulcher, Zhang, and Xu, 2006). Moreover, ANNs are unable to handle non-smooth, discontinuous training data and the complex mappings that arise in financial time series simulation and prediction. ANNs are also 'black box' in nature, meaning the explanations for their outputs are not obvious. These limitations motivate the study of Higher Order Neural Networks (HONNs).

HONNs are characterized by their neuron activation functions, preprocessing of the neuron inputs, and connections to more than one layer (Bengtsson, 1990). In this chapter, HONN refers to the neuron type, which can be linear, power, multiplicative, sigmoid, logarithmic, and so on. First-order neural networks can be formulated using linear neurons, which are only capable of capturing first-order correlations in the training data (Giles & Maxwell, 1987). Second- and higher-order HONNs capture higher-order correlations in the training data, which requires more complex neuron activation functions (Barron, Gilstrap & Shrier, 1987; Giles & Maxwell, 1987; Psaltis, Park & Hong, 1988). Neurons that include terms up to and including degree k are referred to as kth-order neurons (Lisboa and Perantonis, 1991).
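As a concrete illustration of a kth-order neuron for k = 2 (this is a minimal sketch, not one of the chapter's six HONN models; the function and variable names are illustrative), the neuron responds to pairwise products of its inputs as well as to the inputs themselves:

```python
import numpy as np

def second_order_neuron(x, w1, w2, b=0.0):
    """A second-order neuron: its output depends on second-order
    (pairwise-product) correlations in the input, not just the
    first-order terms a linear neuron can capture.

    x  : input vector, shape (n,)
    w1 : first-order weights, shape (n,)
    w2 : second-order weights, shape (n, n)
    b  : bias term
    """
    linear = w1 @ x          # sum_i  w1[i] * x[i]
    quadratic = x @ w2 @ x   # sum_ij w2[i, j] * x[i] * x[j]
    return np.tanh(linear + quadratic + b)

x = np.array([0.5, -1.0, 2.0])
w1 = np.zeros(3)
w2 = np.zeros((3, 3))
w2[0, 2] = 1.0  # only the product x[0] * x[2] contributes

print(second_order_neuron(x, w1, w2))  # tanh(0.5 * 2.0) = tanh(1.0)
```

A linear (first-order) neuron with the same inputs could never isolate the product x[0] * x[2]; capturing such interaction terms is precisely what distinguishes second- and higher-order neurons.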

Rumelhart, Hinton, and McClelland (1986) develop 'sigma-pi' neurons and show that the generalized standard backpropagation algorithm applies beyond simple additive neurons. Both Hebbian and Perceptron learning rules can be employed when no hidden layers are involved (Shin, 1991). The performance of first-order ANNs can be improved by utilizing sophisticated learning algorithms (Karayiannis and Venetsanopoulos, 1993). Redding, Kowalczy, and Downs (1993) develop a constructive HONN algorithm. Zhang and Fulcher (2004) develop Polynomial, Trigonometric, and other HONN models. Giles, Griffin, and Maxwell (1988) and Lisboa and Perantonis (1991) demonstrate the use of multiplicative interconnections within ANNs in many applications, including invariant pattern recognition.

Others suggest groups of individual neurons (Willcox, 1991; Hu and Pan, 1992). ANNs can approximate any nonlinear function to any degree of accuracy (Hornik, 1991; Leshno, 1993).

Zhang, Fulcher, and Scofield (1997) show that ANN groups offer superior performance compared with single ANNs when dealing with discontinuous and non-smooth piecewise nonlinear functions. Compared with the Polynomial Higher Order Neural Network (PHONN) and the Trigonometric Higher Order Neural Network (THONN), the Neural Adaptive Higher Order Neural Network (NAHONN) offers greater flexibility and more accurate approximation, since its hidden-layer variables are adjustable (Zhang, Xu, and Fulcher, 2002). Zhang, Xu, and Fulcher (2002) also prove that NAHONN groups are capable of approximating any kind of piecewise continuous function to any degree of accuracy. Moreover, these models are capable of automatically selecting both the optimum model for a particular time series and the appropriate model order.
