Cosine and Sigmoid Higher Order Neural Networks for Data Simulations

DOI: 10.4018/978-1-5225-0788-8.ch029

Abstract

A new open-box, nonlinear model, the Cosine and Sigmoid Higher Order Neural Network (CS-HONN), is presented in this chapter. A new learning algorithm for CS-HONN is also developed in this study, and a time series data simulation and analysis system, the CS-HONN Simulator, is built on the CS-HONN models. Test results show that the average error of the CS-HONN models ranges from 2.3436% to 4.6857%, while the average error of the Polynomial Higher Order Neural Network (PHONN), Trigonometric Higher Order Neural Network (THONN), and Sigmoid Polynomial Higher Order Neural Network (SPHONN) models ranges from 2.8128% to 4.9077%. This means that the CS-HONN models are 0.1174% to 0.4917% better than the PHONN, THONN, and SPHONN models.

Introduction

This chapter introduces a new Higher Order Neural Network (HONN) model, which is tested in the area of data simulation. The contributions of this chapter are:

  • Present a new model – CS-HONN (a sketch of the assumed model structure is given after this list).

  • Build a time series simulation system – the CS-HONN simulator – based on the CS-HONN models.

  • Develop the CS-HONN learning algorithm and weight update formulae.

  • Show that CS-HONN can do better than the Polynomial Higher Order Neural Network (PHONN), Trigonometric Higher Order Neural Network (THONN), and Sigmoid Polynomial Higher Order Neural Network (SPHONN) models in the data simulation examples.
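
Since the closed-form CS-HONN equations and the weight update formulae are not reproduced in this preview, the following is a minimal sketch of what a second-order cosine-and-sigmoid HONN forward pass and one gradient-descent coefficient update could look like. It assumes the usual HONN form in which cosine powers of one input are multiplied by sigmoid powers of the other, in the same pattern as PHONN, THONN, and SPHONN; all names (c, lr, order) are illustrative, not the chapter's notation.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def cs_honn_forward(x, y, c, order=2):
    """Sketch of a CS-HONN output for scalar inputs x and y.

    Assumed form: z = sum_{k,j} c[k, j] * cos(x)**k * sigmoid(y)**j,
    i.e. cosine terms on one input and sigmoid terms on the other.
    """
    z = 0.0
    for k in range(order + 1):
        for j in range(order + 1):
            z += c[k, j] * np.cos(x) ** k * sigmoid(y) ** j
    return z

def cs_honn_update(x, y, d, c, lr=0.01, order=2):
    """One gradient-descent step on the squared error E = 0.5 * (d - z)**2.

    Since z is linear in the coefficients,
    dE/dc[k, j] = -(d - z) * cos(x)**k * sigmoid(y)**j.
    """
    z = cs_honn_forward(x, y, c, order)
    err = d - z
    for k in range(order + 1):
        for j in range(order + 1):
            c[k, j] += lr * err * np.cos(x) ** k * sigmoid(y) ** j
    return c, err
```

In a simulator of the kind described above, such updates would be iterated over a historical time series until the average simulation error stops improving.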

Background

Many studies use traditional artificial neural network models. Blum and Li (1991) studied approximation by feed-forward networks. Gorr (1994) studied the forecasting behavior of multivariate time series using neural networks. Barron, Gilstrap, and Shrier (1987) used polynomial neural networks for analogies and engineering applications. However, all of the studies above use traditional artificial neural network models - black box models that do not provide users with a function describing the relationship between the input and output. The first motivation of this chapter is to develop nonlinear “open box” neural network models that provide a rationale for the network’s decisions and also deliver better results.

Jiang, Gielen, and Wang (2010) investigated the combined effects of quantization and clipping on Higher Order Function Neural Networks (HOFNN) and multilayer feedforward neural networks (MLFNN). Statistical models were used to analyze the effects of quantization in a digital implementation. For a true nonlinear neuron, the study established and analyzed the relationships between the bit resolution of inputs and outputs, the training and quantization methods, the number of network layers, the network order, and the resulting performance degradation, all based on statistical models and for both on-chip and off-chip training. The experimental simulation results verify the presented theoretical analysis.
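
The preview does not include the statistical derivations from that study, but the basic phenomenon it analyzes - output degradation as weight precision drops - can be illustrated with a toy experiment. The uniform b-bit quantizer with clipping below is a common textbook choice, not necessarily the scheme used in their work, and the linear network is only a stand-in for a full HOFNN.

```python
import numpy as np

def quantize(w, bits, w_max=1.0):
    """Uniformly quantize weights to the given bit resolution,
    clipping to [-w_max, w_max] (quantization plus clipping, as in
    the digital-implementation setting discussed above)."""
    levels = 2 ** bits
    step = 2.0 * w_max / (levels - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=100)          # full-precision weights
x = rng.uniform(-1, 1, size=(1000, 100))  # test inputs
y_ref = x @ w                             # reference network output

for bits in (4, 6, 8, 12):
    y_q = x @ quantize(w, bits)
    mse = np.mean((y_ref - y_q) ** 2)
    print(f"{bits:2d}-bit weights: output MSE = {mse:.2e}")
```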

Lu, Song, and Shieh (2010) studied polynomial kernel higher order neural networks. As a general framework to represent data, the kernel method can be used whenever the interactions between elements of the domain occur only through inner products. As major strides towards nonlinear feature extraction and dimension reduction, two important kernel-based feature extraction algorithms, kernel principal component analysis and kernel Fisher discriminant analysis, have been proposed. However, such kernel-based representations are typically dense and computationally expensive. In an attempt to mitigate these drawbacks, this study focused on applying the newly developed polynomial kernel higher order neural networks to improve sparsity and thereby obtain a succinct representation for kernel-based nonlinear feature extraction algorithms. In particular, the learning algorithm is based on linear programming support vector regression, which outperforms conventional quadratic programming support vector regression in model sparsity and computational efficiency.
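
Of the kernel-based feature extraction algorithms named above, kernel principal component analysis with a polynomial kernel is the easiest to demonstrate. The scikit-learn call below is a standard way to run it and is offered only as background for the discussion, not as the LP-based method of that study; the toy data set is an assumption chosen to make the nonlinearity visible.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Toy data: points on two concentric arcs, which are not
# linearly separable in the original input space.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# Polynomial-kernel PCA: interactions enter only through the
# inner product (x . x')**degree, so nonlinear features are
# extracted without forming the polynomial expansion explicitly.
kpca = KernelPCA(n_components=2, kernel="poly", degree=3)
Z = kpca.fit_transform(X)
print(Z.shape)  # (200, 2) nonlinear features
```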

Murata (2010) found that a Pi-Sigma higher order neural network (Pi-Sigma HONN) is a type of higher order neural network where, as its name implies, weighted sums of inputs are calculated first, and the sums are then multiplied by each other to produce the higher order terms that constitute the network outputs. This type of higher order neural network has good function approximation capabilities. In that study, the structural feature of Pi-Sigma HONNs is discussed in contrast to other types of neural networks, and the reason for their good function approximation capabilities is given based on pseudo-theoretical analysis together with empirical illustrations.
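
The Pi-Sigma structure just described - weighted sums first, then a product - is compact enough to write down directly. The sketch below assumes a single output and a sigmoid output activation, both common choices that are not stated in the summary above.

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Pi-Sigma HONN forward pass.

    x : input vector, shape (n,)
    W : weights, shape (k, n) -- one row per summing unit
    b : biases, shape (k,)

    Each summing unit computes a weighted sum of the inputs (the
    "sigma" layer); the k sums are then multiplied together (the
    "pi" unit), so the output is a k-th order polynomial in x.
    """
    h = W @ x + b                       # sigma layer: k weighted sums
    net = np.prod(h)                    # pi unit: product of the sums
    return 1.0 / (1.0 + np.exp(-net))   # assumed sigmoid output

# Example: a 3rd-order Pi-Sigma network on a 4-dimensional input.
rng = np.random.default_rng(0)
print(pi_sigma_forward(rng.normal(size=4),
                       rng.normal(size=(3, 4)),
                       rng.normal(size=3)))
```

Because only the sigma-layer weights are trainable while the pi unit is fixed, the number of adjustable parameters grows linearly with the order, which is part of what makes this architecture attractive for function approximation.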
