Ultra High Frequency SINC and Trigonometric Higher Order Neural Networks for Data Classification

DOI: 10.4018/978-1-5225-0063-6.ch005

Abstract

This chapter develops a new nonlinear model, the Ultra High Frequency SINC and Trigonometric Higher Order Neural Network (UNT-HONN), for data classification. UNT-HONN includes the Ultra high frequency siNc and Sine Higher Order Neural Network (UNS-HONN) and the Ultra high frequency siNc and Cosine Higher Order Neural Network (UNC-HONN). Data classification using the UNS-HONN and UNC-HONN models is tested. Results show that the UNS-HONN and UNC-HONN models outperform Polynomial Higher Order Neural Network (PHONN) and Trigonometric Higher Order Neural Network (THONN) models, since the UNS-HONN and UNC-HONN models can classify the data with error approaching 0.0000%.

Introduction

The contributions of this chapter are to:

  • Introduce the background of HONNs and their applications in the classification area.

  • Develop two new HONN models, UNS-HONN and UNC-HONN, for ultra-high frequency data classification.

  • Provide the UNS-HONN and UNC-HONN learning algorithms and weight update formulae.

  • Compare UNS-HONN and UNC-HONN models with other HONN models.

  • Apply the UNS-HONN and UNC-HONN models to classification problems.

This chapter is organized as follows: the Background section gives the background knowledge of HONNs and their applications in the classification area. The HONN Models section introduces the UNS-HONN and UNC-HONN structures. The Update Formulae section provides the UNS-HONN and UNC-HONN model update formulae, learning algorithms, and HONN convergence theories. The Test section describes the UNS-HONN and UNC-HONN testing results in the data classification area. Conclusions are presented in the last section.
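As informal context for the model sections that follow, the sketch below illustrates one plausible reading of the UNS-HONN and UNC-HONN output computation. It assumes the general two-input HONN form used in Zhang's related chapters, Z = sum over k, j of a_o[k,j] * (a_hx[k,j] * f_k(a_x[k] * x)) * (a_hy[k,j] * f_j(a_y[j] * y)), with the first factor a k-th power of a sinc term and the second a j-th power of sine (UNS) or cosine (UNC). All function and weight names here (sinc, unt_honn_output, a_o, a_hx, a_hy, a_x, a_y) are illustrative, not the chapter's notation; the HONN Models section gives the authoritative definitions.

import numpy as np

def sinc(u):
    # sin(u)/u with the removable singularity at u = 0 filled in as 1.
    u = np.asarray(u, dtype=float)
    safe = np.where(np.isclose(u, 0.0), 1.0, u)
    return np.where(np.isclose(u, 0.0), 1.0, np.sin(safe) / safe)

def unt_honn_output(x, y, a_o, a_hx, a_hy, a_x, a_y, trig=np.sin):
    # Z = sum_{k,j} a_o[k,j] * (a_hx[k,j] * sinc(a_x[k]*x)**k)
    #                        * (a_hy[k,j] * trig(a_y[j]*y)**j)
    # trig=np.sin gives a UNS-HONN-style output, trig=np.cos a UNC-HONN-style one.
    n = a_o.shape[0]
    z = 0.0
    for k in range(n):
        for j in range(n):
            fx = a_hx[k, j] * sinc(a_x[k] * x) ** k
            fy = a_hy[k, j] * trig(a_y[j] * y) ** j
            z += a_o[k, j] * fx * fy
    return float(z)

# Example: an order-4 model with random weights evaluated on one input pair.
n, rng = 4, np.random.default_rng(1)
Z = unt_honn_output(0.3, 0.7,
                    a_o=rng.normal(size=(n, n)), a_hx=rng.normal(size=(n, n)),
                    a_hy=rng.normal(size=(n, n)), a_x=rng.normal(size=n),
                    a_y=rng.normal(size=n))
print(Z)

The sinc factor is what lets these models track ultra-high frequency components: raising sinc and sine/cosine terms to increasing powers k and j gives the network basis functions with progressively sharper oscillations.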


Background

Artificial Neural Networks (ANNs) have been widely used in classification. Lippmann (1989) studies pattern classification using neural networks. Moon and Chang (1994) study the classification and prediction of the critical heat flux using fuzzy clustering and artificial neural networks. Lin and Cunningham (1995) develop a new approach to fuzzy-neural system modelling. Behnke and Karayiannis (1998) present competitive neural trees for pattern classification. Bukovsky, Bila, Gupta, Hou, and Homma (2010) provide a foundation and classification of nonconventional neural units and a paradigm of non-synaptic neural interaction.

Artificial Higher Order Neural Networks (HONNs) have been widely used in the classification area as well. Reid, Spirkovska, and Ochoa (1989) research simultaneous position, scale, and rotation invariant pattern classification using third-order neural networks. Shin (1991) investigates the Pi-Sigma network, an efficient higher-order neural network for pattern classification and function approximation. Ghosh and Shin (1992) show efficient higher order neural networks for function approximation and classification. Shin, Ghosh, and Samani (1992) analyze computationally efficient invariant pattern classification with higher-order Pi-Sigma networks. Husken and Stagge (2003) extend recurrent neural networks for time series classification. Fallahnezhad, Moradi, and Zaferanlouei (2011) contribute a hybrid higher order neural classifier for handling classification problems.

Shawash and Selviah (2010) test artificial higher order neural network training on limited precision processors. They investigate on-line training with the Back Propagation (BP) and Levenberg-Marquardt algorithms under limited precision, using a new type of HONN known as the Correlation HONN (CHONN) together with the discrete XOR dataset and a continuous optical waveguide sidewall roughness dataset, and simulate to find the precision at which training and operation remain feasible while achieving high overall calculation accuracy. The BP algorithm converged up to a certain precision, beyond which performance did not improve. The results support previous findings in the Artificial Neural Network literature that discrete datasets require lower precision than continuous datasets. The importance of these findings is that they demonstrate the feasibility of on-line, real-time, low-latency training on limited precision electronic hardware.
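The limited-precision effect is easy to reproduce in software. The sketch below is a minimal illustration only, not the CHONN or the authors' fixed-point implementation: it trains an ordinary first-order sigmoid network on the discrete XOR dataset with back-propagation, rounding every weight to a fixed-point grid after each update. The quantize helper and the frac_bits parameter are assumptions introduced for this example.

import numpy as np

def quantize(w, frac_bits=8):
    # Snap values to a fixed-point grid with `frac_bits` fractional bits.
    # A coarse software simulation of reduced precision, not the exact
    # scheme tested by Shawash and Selviah (2010).
    scale = 2.0 ** frac_bits
    return np.round(w * scale) / scale

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# A small 2-4-1 network trained on the discrete XOR dataset with plain
# back-propagation; weights and biases are re-quantized after every update.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([[0.], [1.], [1.], [0.]])
W1, b1 = rng.normal(0.0, 1.0, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0.0, 1.0, (4, 1)), np.zeros(1)
lr = 0.5

for _ in range(20000):
    h = sig(X @ W1 + b1)               # hidden activations
    y = sig(h @ W2 + b2)               # network output
    d2 = (y - t) * y * (1.0 - y)       # output-layer delta
    d1 = (d2 @ W2.T) * h * (1.0 - h)   # hidden-layer delta
    W2 = quantize(W2 - lr * (h.T @ d2)); b2 = quantize(b2 - lr * d2.sum(0))
    W1 = quantize(W1 - lr * (X.T @ d1)); b1 = quantize(b1 - lr * d1.sum(0))

print(y.round(3))   # should approach [0, 1, 1, 0] with 8 fractional bits

Reducing frac_bits eventually makes the weight updates round to zero so that learning stalls, which mirrors the reported observation that BP converges only up to a certain precision, beyond which added precision brings no improvement.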
