Data Simulations Using Cosine and Sigmoid Higher Order Neural Networks

DOI: 10.4018/978-1-7998-3563-9.ch008

Abstract

A new open box and nonlinear model, the cosine and sigmoid higher order neural network (CS-HONN), is presented in this chapter. A new learning algorithm for CS-HONN is also developed. In addition, a time series data simulation and analysis system, the CS-HONN simulator, is built based on the CS-HONN models. Test results show that the average error of the CS-HONN models ranges from 2.3436% to 4.6857%, while the average errors of the polynomial higher order neural network (PHONN), trigonometric higher order neural network (THONN), and sigmoid polynomial higher order neural network (SPHONN) models range from 2.8128% to 4.9077%. This suggests that the CS-HONN models are 0.1174% to 0.4917% more accurate than the PHONN, THONN, and SPHONN models.
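
To make the model family concrete, here is a minimal Python sketch of how a CS-HONN-style output could be computed. The chapter's own equations are not reproduced in this excerpt, so the product form below (a cosine basis on one input and a sigmoid basis on the other, following the general structure shared by HONN variants such as PHONN and THONN) is an assumption, and all names (cs_honn_output, c_o, c_x, c_y) are hypothetical.

```python
import numpy as np

def sigmoid(t):
    """Logistic sigmoid: 1 / (1 + exp(-t))."""
    return 1.0 / (1.0 + np.exp(-t))

def cs_honn_output(x, y, c_o, c_x, c_y):
    """Output of a hypothetical CS-HONN for inputs (x, y).

    Assumed product form (not taken from the chapter itself):
        z = sum_{k,j} c_o[k, j] * cos(c_x[k] * x)**k * sigmoid(c_y[j] * y)**j
    """
    n = c_o.shape[0]  # model order + 1
    z = 0.0
    for k in range(n):
        for j in range(n):
            z += c_o[k, j] * np.cos(c_x[k] * x) ** k * sigmoid(c_y[j] * y) ** j
    return z

# Example: a second-order model with small random weights.
rng = np.random.default_rng(0)
c_o = rng.normal(scale=0.1, size=(3, 3))  # output-layer weights
c_x = rng.normal(scale=0.1, size=3)       # cosine-branch input weights
c_y = rng.normal(scale=0.1, size=3)       # sigmoid-branch input weights
print(cs_honn_output(0.5, 0.2, c_o, c_x, c_y))
```

A form like this is what makes the model an "open box": once trained, the weights directly define an explicit function of the inputs rather than an opaque mapping.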

Introduction

This chapter introduces a new Higher Order Neural Network (HONN) model. This new model is tested in the data simulation area. The contributions of this chapter are:

  • Present a new model – CS-HONN.

  • Based on the CS-HONN models, build a time series simulation system – the CS-HONN simulator.

  • Develop the CS-HONN learning algorithm and weight update formulae (a minimal sketch appears after this list).

  • Show that CS-HONN can perform better than Polynomial Higher Order Neural Network (PHONN), Trigonometric Higher Order Neural Network (THONN), and Sigmoid Polynomial Higher Order Neural Network (SPHONN) models in the data simulation examples.
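
The chapter's actual weight update formulae are not shown in this excerpt. As a hedged illustration of what such a learning step could look like, the sketch below applies plain gradient descent to the output-layer weights of the model form assumed earlier, under a squared-error objective; train_step and all parameter names are hypothetical.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def train_step(c_o, c_x, c_y, x, y, d, lr=0.1):
    """One gradient-descent step on the output weights c_o for target d.

    With squared error E = 0.5 * (d - z)**2 and
    z = sum_{k,j} c_o[k, j] * basis[k, j], the gradient is
    dE/dc_o[k, j] = -(d - z) * basis[k, j], so the update is
    c_o[k, j] += lr * (d - z) * basis[k, j].
    """
    n = c_o.shape[0]
    basis = np.empty_like(c_o)
    for k in range(n):
        for j in range(n):
            basis[k, j] = np.cos(c_x[k] * x) ** k * sigmoid(c_y[j] * y) ** j
    z = np.sum(c_o * basis)        # forward pass
    c_o += lr * (d - z) * basis    # in-place weight update
    return z
```

Updates for the input-side weights c_x and c_y would follow by applying the chain rule through the cosine and sigmoid basis functions, respectively.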


Background

Many studies use traditional artificial neural network models. Blum and Li (1991) studied approximation by feed-forward networks. Gorr (1994) studied the forecasting behavior of multivariate time series using neural networks. Barron, Gilstrap, and Shrier (1987) used polynomial neural networks for analogies and engineering applications. However, all the studies above use traditional artificial neural network models – black box models that do not provide users with a function describing the relationship between input and output. The first motivation of this chapter is to develop nonlinear “open box” neural network models that provide a rationale for the network’s decisions and also deliver better results.

Jiang, Gielen, and Wang (2010) investigated the combined effects of quantization and clipping on higher order function neural networks (HOFNN) and multilayer feedforward neural networks (MLFNN). Statistical models were used to analyze the effects of quantization in a digital implementation. Based on these statistical models, and for both on-chip and off-chip training, the study established and analyzed the relationships for a true nonlinear neuron among input and output bit resolution, training and quantization methods, the number of network layers, network order, and performance degradation. The experimental simulation results verify the presented theoretical analysis.

Randolph and Smith (2000) presented a new approach to object classification in binary images, addressing the problem of classifying binary objects using a cascade of a binary directional filter bank (DFB) and a higher order neural network (HONN). Rovithakis, Maniadakis, and Zervakis (2000) presented a genetically optimized artificial neural network structure for feature extraction and classification of vascular tissue fluorescence spectra. They addressed the optimization of artificial neural network structures for feature extraction and classification by employing genetic algorithms; more precisely, they used a nonlinear filter based on high order neural networks whose weights are updated by the genetic algorithm. Zhang, Liu, Li, Liu, and Ouyang (2002) discussed the problems of translation and rotation invariance of a physiological signal in long-term clinical monitoring, presenting a solution using high order neural networks, which have the advantage of handling large sample sizes. Rovithakis, Chalkiadakis, and Zervakis (2004) designed a high order neural network structure for function approximation applications using genetic algorithms, which entails both parametric learning (weight determination) and structural learning (structure selection). Siddiqi (2005) proposed a direct encoding method to design higher order neural networks, since there are two major ways of encoding a higher order neural network into a chromosome, as required in the design of a genetic algorithm (GA): explicit (direct) and implicit (indirect) encoding. The second motivation of this chapter is to use artificial HONN models for applications in the computer science and engineering areas.

Key Terms in this Chapter

THONN: Artificial trigonometric higher order neural network.

PHONN: Artificial polynomial higher order neural network.

HONN: Artificial higher order neural network.

CS-HONN: Artificial cosine and sigmoid higher order neural network.

SPHONN: Artificial sigmoid polynomial higher order neural network.
