Improved Wavelet Neural Networks and Its Applications in Function Approximation

Zarita Zainuddin, Ong Pauline
Copyright: © 2015 | Pages: 18
DOI: 10.4018/978-1-4666-5888-2.ch627
Chapter Preview

Introduction

The advent of the computer has enabled the emergence of artificial neural networks (ANNs), a research area closely tied to scientists' efforts to emulate the complex functions of the human brain. ANNs, which conceptually model the human brain with a vast number of interconnected artificial neurons, have surged in popularity owing to their effectiveness and suitability for solving large-scale problems whose physical processes are highly complex and ill-defined. Furthermore, in diverse domains of science and engineering, the problems at hand often have no algorithmic solution, or the solution is too complex to obtain by conventional methods. ANNs, with their adaptive learning ability and nonparametric character, are therefore extremely valuable in such undertakings.

Extensive and intensive research in ANNs has spawned a variety of network paradigms, among which the integration of wavelets and feedforward neural networks has borne fruit in a special variant of ANNs, namely the wavelet neural networks (WNNs) (Zhang & Benveniste, 1992). Employing wavelet activation functions as the nonlinear transfer functions in their modeling framework, WNNs offer three attractive characteristics: (1) they preserve the universal approximation property; (2) they establish an explicit link between the network coefficients and the wavelet transform; and (3) they achieve the same quality of approximation with a network of reduced size. Moreover, their superiority in alleviating the deficiencies of the popular multilayer perceptrons, which suffer from slow learning and local minima, has been asserted. Given these capabilities, WNNs have gained prevalence and popularity in a wide range of practical situations, including forecasting drought conditions (Belayneh, Adamowski, Khalil, & Ozga-Zielinski, 2014), monitoring harmonic pollution in power-electronics-based devices (Jain & Singh, 2014), predicting pulp and paper properties (Zainuddin, Daud, Ong, & Shafie, 2011), detecting microbial contaminants in clinical samples (Kodogiannis, 2013), approximating wind speed (Yao & Yu, 2013), classifying heterogeneous cancer profiles (Zainuddin & Ong, 2011b), and diagnosing diseases at an early stage (Dheeba, Singh, & Singh, 2014; San, Ling, & Nguyen, 2013; Zainuddin & Ong, 2012) – real-world uses in which WNNs have demonstrated excellent viability.
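To make the architecture described above concrete, the following is a minimal sketch of a WNN forward pass in NumPy. The function names, the choice of the Mexican hat (Ricker) wavelet, and the use of the Euclidean norm over dilated, translated inputs are illustrative assumptions, not the chapter's specific formulation.

```python
import numpy as np

def mexican_hat(t):
    # Mexican hat (Ricker) wavelet: proportional to the second
    # derivative of a Gaussian; integrates to zero and is localized.
    return (1.0 - t**2) * np.exp(-0.5 * t**2)

def wnn_forward(x, translations, dilations, weights, bias):
    # x: (n_samples, n_inputs); translations/dilations: (n_hidden, n_inputs)
    # Each hidden unit applies the wavelet to the norm of its
    # translated and dilated input, then a linear output layer combines them.
    z = (x[:, None, :] - translations[None, :, :]) / dilations[None, :, :]
    hidden = mexican_hat(np.linalg.norm(z, axis=2))  # (n_samples, n_hidden)
    return hidden @ weights + bias                   # linear output layer

# Toy usage with random (untrained) parameters, just to show the shapes.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(5, 1))
y = wnn_forward(x, rng.uniform(-1, 1, (4, 1)), np.ones((4, 1)),
                rng.normal(size=(4, 1)), 0.0)
print(y.shape)  # (5, 1)
```

In practice the translations, dilations, and output weights would be fitted to data; the three-layer structure (input, wavelet hidden layer, linear output) is what distinguishes a WNN from a generic multilayer perceptron.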

Since numerous application problems depend heavily on the accuracy of WNNs, and accuracy is always a key factor in decision making, improving the predictive competence of WNNs forms the basis of this study. A host of approaches have been put forward in this regard, including the use of different learning algorithms to optimize the weight vectors of WNNs (Chen, 2011; Tzeng, 2010), the alteration of the network architecture (Wan, Li, & Cheng, 2004), the modification of the activation functions used in the hidden layer (Zainuddin & Ong, 2011a), the exploitation of fuzzy rules for knowledge representation during the learning process of WNNs (Abiyev, 2011; Liu, Wu, Zhang, Wang, & Chen, 2011), and the initialization of the wavelet translation vectors (Zainuddin & Ong, 2011b, 2012), which is the core interest of this work. For plausible convergence and good generalization performance, a judicious initialization of the number of wavelet functions and their locations (translation vectors) is crucial. Well-chosen initial translation vectors reflect the essential attributes of the input space, so the WNN can model the underlying input-output mapping from a set of good starting points, which potentially improves the validity of the model. Moreover, with proper initialization of the network architecture, a better-configured WNN with lower computational complexity can be achieved.
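One common way to place translation vectors on the essential attributes of the input space is to cluster the training inputs and use the cluster centers as initial translations. The sketch below uses plain k-means as a generic stand-in; the chapter's own initialization schemes (e.g., clustering based on a symmetry similarity level or fuzzy c-means) differ in the similarity measure, and the function name `init_translations` is an assumption.

```python
import numpy as np

def init_translations(x, n_hidden, n_iter=20, seed=0):
    """Initialize WNN translation vectors as k-means cluster centers
    of the training inputs (a generic illustration of clustering-based
    initialization, not the chapter's specific algorithm)."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), n_hidden, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to its nearest center.
        dists = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Move each center to the mean of its assigned samples.
        for k in range(n_hidden):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean(axis=0)
    return centers

# Toy usage: three well-separated blobs yield three translation vectors.
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(m, 0.1, (20, 2)) for m in (-1.0, 0.0, 1.0)])
print(init_translations(x, 3).shape)  # (3, 2)
```

Each returned center then serves as the translation vector of one hidden wavelet unit, so the wavelets start out positioned over the dense regions of the input space rather than at arbitrary locations.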

Key Terms in this Chapter

Clustering: The grouping of patterns into clusters with respect to a similarity measure, such that patterns within a cluster are similar to each other and dissimilar to patterns in other clusters.

Symmetry Similarity Level: A novel similarity measure that assigns a pattern to a specific cluster based on the concept of symmetry.

Wavelet: A mathematical function that satisfies certain requirements: it integrates to zero, oscillates, and is well localized in time.

Fuzzy C-Means: A classical clustering algorithm that assigns each pattern to more than one cluster, where the degree of belongingness of a pattern to each cluster is specified by a membership function.

Function Approximation: The task of estimating a function f(x) such that the underlying relationship between the input and output spaces is reflected by this function.

Translation Parameter: A parameter that indicates where the wavelet activation function is located.

Wavelet Neural Networks: A three-layered artificial neural network with wavelets as the activation functions in the hidden layer.
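The fuzzy c-means membership function defined above can be sketched as follows. This is the standard membership update from the classical algorithm, with the conventional fuzziness exponent m = 2; the function name and the small-distance guard are illustrative choices, not taken from the chapter.

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0):
    """Standard fuzzy c-means membership update: each pattern belongs
    to every cluster with a degree in [0, 1], and the degrees for one
    pattern sum to one. m > 1 controls the fuzziness."""
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)      # guard against division by zero
    inv = d ** (-2.0 / (m - 1.0)) # closer centers get larger weight
    return inv / inv.sum(axis=1, keepdims=True)

# Toy usage: patterns sitting on the centers get near-crisp memberships.
x = np.array([[0.0, 0.0], [1.0, 1.0]])
c = np.array([[0.0, 0.0], [1.0, 1.0]])
u = fcm_memberships(x, c)
print(np.round(u.sum(axis=1), 6))  # each row sums to 1
```

In a full fuzzy c-means run, this membership step alternates with recomputing each center as the membership-weighted mean of the patterns until convergence.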
