
Saeed Panahian Fard (Universiti Sains Malaysia) and Zarita Zainuddin (Universiti Sains Malaysia)

Copyright: © 2017
Pages: 15

DOI: 10.4018/978-1-5225-0788-8.ch055

Chapter Preview

The function approximation problem can be stated as follows: Let *f* be a continuous function on a compact set. We intend to find a simple function *g* such that ||*f* – *g*|| < ε. This problem has attracted many researchers' attention over the last century. According to Tikk et al. (2003), in 1900 Hilbert presented his 23 problems at the Second International Congress of Mathematicians in Paris. His 13th problem conjectures that there exist continuous functions of several variables which cannot be represented as finite superpositions of continuous functions of fewer variables. In 1957, Arnold disproved this conjecture. In the same year, Kolmogorov proved his representation theorem with a constructive proof. This theorem shows that a continuous function of several variables can be decomposed as a finite superposition of continuous functions of one variable. In 1965, Sprecher improved Kolmogorov's representation theorem, and in 1966 Lorentz improved it further.

In the approximation theory of artificial neural networks (ANNs), the problem reduces to finding a network ANN that approximates *f*, i.e. ||*f* – ANN|| < ε. In 1980, de Figueiredo generalized this theorem to multilayer feedforward artificial neural networks. In 1989, Poggio and Girosi argued that the theorem is irrelevant to artificial neural networks because the nodes of a Kolmogorov network compute wild and complex functions. Since then, many researchers have addressed the function approximation problem with artificial neural networks, including Cybenko (1989), Funahashi (1989), Park and Sandberg (1991, 1993), Mhaskar (1993), Leshno (1993), Suzuki (1998), Hahm and Hong (2004), Li (2008), Ismailov (2012), Wang et al. (2012), Lin et al. (2013), and Arteaga and Marrero (2013).

The standard form of the universal approximation capability of feedforward neural networks specifies under what conditions an arbitrary continuous function can be approximated by a single-hidden-layer feedforward neural network to any degree of accuracy. Comprehensive surveys of the universal approximation capability of feedforward neural networks can be found in Nong (2013); Bouaziz et al. (2014); Wang (2010); Ismailov (2014); Arteaga and Marrero (2014); and Costarelli (2014). Recently, the history of the development of universal approximation by artificial neural networks has been presented by Principe and Chen (2015).
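To make the statement concrete, the following is a minimal numerical sketch, not taken from the chapter: a single-hidden-layer sigmoid network is fitted to a continuous function on the compact set [0, 1] and the sup-norm error ||*f* – *g*|| is measured on a grid. The random-feature least-squares fit, the target function, and all parameter values are illustrative assumptions, not the chapter's construction.

```python
import numpy as np

# Target continuous function f on the compact set [0, 1] (an assumed example).
def f(x):
    return np.sin(2 * np.pi * x)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = f(x)

# Single hidden layer of sigmoid units. Hidden weights/biases are fixed at
# random and only the output weights are solved by least squares -- a
# shortcut used purely to keep this sketch short, not a training method
# discussed in the chapter.
n_hidden = 50
W = rng.normal(scale=10.0, size=(1, n_hidden))   # hidden weights (assumed)
b = rng.normal(scale=10.0, size=(1, n_hidden))   # hidden biases (assumed)
H = 1.0 / (1.0 + np.exp(-(x @ W + b)))           # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights
y_hat = H @ beta                                 # the approximant g

sup_error = np.max(np.abs(y - y_hat))            # ||f - g|| on the grid
print(sup_error)
```

With enough hidden units the sup-norm error can be driven below any ε > 0, which is exactly the density property the universal approximation theorems formalize.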

In the present chapter, we are motivated to extend the scheme of univariate universal approximation capability to the scheme of multivariate universal approximation capability. In other words, the motivation of this chapter is to develop the theory of the universal approximation capability of a class of feedforward higher order neural networks based on approximate identity in multivariate function spaces. We should address what we can expect from higher order neural networks based on approximate identity. In fact, higher order neural networks based on approximate identity merge higher order neural networks and approximate identity neural networks.
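The approximate-identity ingredient can be illustrated numerically. A family of normalized kernels φ_σ with unit integral and mass concentrating at 0 is an approximate identity: the convolution f ∗ φ_σ converges to f as σ → 0, which is the recovery property such networks exploit. The sketch below, an illustrative assumption rather than the chapter's construction, uses Gaussian kernels and checks that the sup-norm error shrinks as the kernel narrows.

```python
import numpy as np

# A continuous (but non-smooth) test function on [-pi, pi] (assumed example).
def f(x):
    return np.abs(np.sin(3 * x))

x = np.linspace(-np.pi, np.pi, 2001)
dx = x[1] - x[0]

def smoothed(sigma):
    """Convolve f with a normalized Gaussian phi_sigma (unit integral)."""
    phi = np.exp(-x**2 / (2 * sigma**2))
    phi /= phi.sum() * dx                        # enforce integral = 1
    return np.convolve(f(x), phi, mode="same") * dx

# Sup-norm distance between f and f * phi_sigma for a wide and a narrow kernel.
err_wide = np.max(np.abs(smoothed(0.5) - f(x)))
err_narrow = np.max(np.abs(smoothed(0.05) - f(x)))
print(err_wide, err_narrow)
```

The narrow kernel yields a markedly smaller error, matching the defining property f ∗ φ_σ → f of an approximate identity.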
