Rainfall Estimation Using Neuron-Adaptive Higher Order Neural Networks

Ming Zhang (Christopher Newport University, USA)
DOI: 10.4018/978-1-61520-711-4.ch007
Real world data is often nonlinear and discontinuous, and may comprise high frequency, multi-polynomial components. Not surprisingly, it is hard to find the best models for such data. Classical neural network models are unable to automatically determine the optimum model and appropriate order for data approximation. To solve this problem, Neuron-Adaptive Higher Order Neural Network (NAHONN) models have been introduced. Definitions of one-dimensional, two-dimensional, and n-dimensional NAHONN models are studied, and specialized NAHONN models are also described. NAHONN models are shown to be “open box” and capable of automatically finding not only the optimum model but also the appropriate order for high frequency, multi-polynomial, discontinuous data. Rainfall estimation experimental results confirm model convergence. We further demonstrate that NAHONN models are capable of modeling satellite data. When the Xie and Scofield (1989) technique was used, the average error of the operator-computed IFFA rainfall estimates was 30.41%. For the Artificial Neural Network (ANN) reasoning network, the training error was 6.55% and the test error was 16.91%. When the neural network group was used on the same fifteen cases, the average training error of rainfall estimation was 1.43% and the average test error was 3.89%. When the neuron-adaptive artificial neural network group models were used on these same fifteen cases, the average training error was 1.31% and the average test error was 3.40%. When the artificial neuron-adaptive higher order neural network model was used on these same fifteen cases, the average training error was 1.20% and the average test error was 3.12%.

1 Introduction

Artificial Higher Order Neural Network (HONN) models are a growing trend in emerging computer science and engineering applications. An, Mniszewski, Lee, Papcun, and Doolen (1988a, 1988b) test a learning procedure (HIERtalker), based on a default hierarchy of higher order neural networks, which exhibits enhanced generalization and learns efficiently to read English aloud. HIERtalker learns the 'building blocks', or clusters of symbols, that appear repeatedly in a stream. Salem and Young (1991) study the interpretation of line drawings with higher order neural networks, presenting a higher order neural network solution to line labeling. Line labeling constraints in trihedral scenes are designed into a Hopfield-type network; however, these constraints require a higher order of interaction than a Hopfield-type network provides. Liou and Azimi-Sadjadi (1993) present dim target detection using a high order correlation method: clutter rejection and dim target track detection from infrared (IR) satellite data are performed with neural networks, using a high-order correlation method that recursively computes the spatio-temporal cross-correlations. Liatsis, Wellstead, Zarrop, and Prendergast (1994) propose a versatile visual inspection tool for the manufacturing process, since the dynamically changing nature and complex behavior of processes in manufacturing cells dictate the need for lean, agile, and flexible manufacturing systems. Tseng and Wu (1994) present constant-time neural decoders for some BCH codes; higher order neural networks are shown to decode some BCH codes in constant time with very low hardware complexity. A HONN is a direct extension of the linear perceptron: it uses a polynomial consisting of a set of product terms as its discriminant. Zardoshti-Kermani and Afshordi (1995) apply higher-order neural networks to the classification of chromosomes.
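The polynomial discriminant mentioned above can be made concrete with a minimal sketch of a second-order HONN output: the usual linear perceptron terms plus weighted pairwise product terms x_i x_j. The function name, the specific second-order form, and the example values are illustrative assumptions, not the chapter's exact formulation.

```python
import numpy as np

def honn_discriminant(x, w1, w2, b):
    """Second-order HONN discriminant (illustrative sketch).

    Extends the linear perceptron w1.x + b with second-order
    product terms sum_ij w2[i, j] * x[i] * x[j].
    """
    linear = float(w1 @ x)                          # first-order (perceptron) part
    products = float(np.sum(w2 * np.outer(x, x)))   # pairwise product terms x_i * x_j
    return linear + products + b

x = np.array([0.5, -1.0])
w1 = np.array([1.0, 2.0])
w2 = 0.1 * np.ones((2, 2))
print(honn_discriminant(x, w1, w2, 0.0))  # -1.475
```

Higher orders follow the same pattern, adding triple products x_i x_j x_k and beyond, which is what gives HONNs their capacity for multi-polynomial data at the cost of a combinatorial growth in weights.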
In their work, a higher-order neural network is applied to the classification of human chromosomes; the network's inputs are a 30-dimensional feature space extracted from chromosome images. Starke, Kubota, and Fukuda (1995) research combinatorial optimization with higher order neural networks, using cost oriented competing processes (COCP) in flexible manufacturing systems; the method adapts well to complicated problems. Miyajima, Yatsuki, and Kubota (1995) study the dynamical properties of neural networks with product connections. Higher order neural networks with product connections, which compute a weighted sum of products of input variables, have been proposed as a new concept, and in some applications they are shown to be superior in ability to traditional neural networks. Wang (1996) researches suppressing chaos with hysteresis in a higher order neural network; artificial higher order neural networks attempt to mimic various features of a most powerful computational system, the human brain. Randolph and Smith (2000) take a new approach to object classification in binary images, addressing the problem of classifying binary objects using a cascade of a binary directional filter bank (DFB) and a higher order neural network (HONN). Rovithakis, Maniadakis, and Zervakis (2000) present a genetically optimized artificial neural network structure for feature extraction and classification of vascular tissue fluorescence spectra. They optimize artificial neural network structures for feature extraction and classification by employing genetic algorithms; more precisely, they use a non-linear filter based on higher order neural networks whose weights are updated.
Zhang, Liu, Li, Liu, and Ouyang (2002) discuss the problems of translation and rotation invariance of a physiological signal in long-term clinical custody, presenting a solution that uses higher order neural networks and exploits the advantage of a large sample size. Rovithakis, Chalkiadakis, and Zervakis (2004) design a higher order neural network structure for function approximation applications using genetic algorithms, which entails both parametric learning (weight determination) and structural learning (structure selection). Siddiqi (2005) proposes a direct encoding method to design higher order neural networks; there are two major ways of encoding a higher order neural network into a chromosome, as required in the design of a genetic algorithm (GA): the explicit (direct) and implicit (indirect) encoding methods. The first motivation of this chapter is to use artificial HONN models for applications in the computer science and engineering areas.
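The neuron-adaptive idea summarized in the abstract can be illustrated with a minimal sketch of an adaptive activation function: instead of a fixed sigmoid, each neuron's response is a learnable combination of basis components whose coefficients are trained alongside the network weights. The specific three-term mix below (sigmoid, sine, and quadratic) and all parameter names are our assumptions for illustration, not the chapter's exact NAHONN formulation.

```python
import numpy as np

def adaptive_activation(x, a, b):
    """Illustrative neuron-adaptive activation function.

    A learnable mix of a sigmoid, a sine, and a quadratic component.
    The coefficients a[i] and shape parameters b[i] are trained with
    the network weights, so each neuron adapts its own response shape
    (and hence the model order) to the data.
    """
    sigmoid = 1.0 / (1.0 + np.exp(-b[0] * x))
    sine = np.sin(b[1] * x)
    quad = (b[2] * x) ** 2
    return a[0] * sigmoid + a[1] * sine + a[2] * quad

# At x = 0 the sine and quadratic terms vanish and the sigmoid is 0.5:
print(adaptive_activation(0.0, a=[2.0, 1.0, 1.0], b=[1.0, 1.0, 1.0]))  # 1.0
```

Because the components include both smooth (sigmoid) and oscillatory (sine) terms, such a neuron can in principle fit high frequency, discontinuous data that defeats a fixed-activation network, which is the motivation the abstract gives for NAHONN models.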
