Ability of the 1-n-1 Complex-Valued Neural Network to Learn Transformations: Computer Science & IT Book Chapter | IGI Global


Nitta, Tohru. "Ability of the 1-n-1 Complex-Valued Neural Network to Learn Transformations." Computational Modeling and Simulation of Intellect: Current State and Future Perspectives. IGI Global, 2011. 566-596. Web. 20 Jan. 2019. doi:10.4018/978-1-60960-551-3.ch022


The ability of the 1-n-1 complex-valued neural network to learn 2D affine transformations has been applied to the estimation of optical flow and the generation of fractal images. The complex-valued neural network possesses adaptability and generalization ability as inherent properties. This is the key difference between the 1-n-1 complex-valued neural network's learning of 2D affine transformations and standard techniques for 2D affine transformations such as the Fourier descriptor. Clarifying the properties of complex-valued neural networks is important for accelerating their practical application. In this chapter, the behavior of a 1-n-1 complex-valued neural network that has learned a transformation on the Steiner circles is demonstrated, and the relationship between the values of the complex-valued weights after training and a linear transformation related to the Steiner circles is clarified via computer simulations. Furthermore, the relationship between the weight values of a 1-n-1 complex-valued neural network that has learned 2D affine transformations and the learning patterns used is elucidated. These results make it possible to solve complicated problems more simply and efficiently with 1-n-1 complex-valued neural networks. As an example, an application of the 1-n-1 type complex-valued neural network to an associative memory is presented.

In the early 1940s, the pioneers of the field, McCulloch and Pitts, proposed a computational model based on a simple neuron-like element (McCulloch & Pitts, 1943). Since then, various types of neurons and neural networks have been developed independently of their direct similarity to biological neural networks. They can now be considered a powerful branch of present science and technology.

Neurons are the atoms of neural computation; all neural networks are built up from these simple computational elements. An illustration of a (real-valued) neuron is given in Figure 1. The activity x of neuron n is defined as:

x = Σ_{m=1}^{N} W_{nm} X_m + V_n, (1)

Figure 1.

Real-valued neuron model. Weights W_{nm}, m = 1, ..., N and threshold V_{n} are all real numbers. The activation function f is a real function.

where W_{nm} is the real-valued weight connecting neurons n and m, X_{m} is the real-valued input signal from neuron m, and V_{n} is the real-valued threshold value of neuron n. The output of the neuron is then given by f(x). Although several types of activation functions f can be used, the most commonly used are the sigmoidal function and the hyperbolic tangent function.
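The activity in Eq. (1) and the output f(x) can be sketched as follows; the function and variable names here are illustrative, and the sigmoidal activation is one of the common choices mentioned above.

```python
import math

def neuron_output(weights, inputs, threshold):
    """Output f(x) of a real-valued neuron, where the activity x follows
    Eq. (1): a weighted sum of the inputs plus the threshold term."""
    x = sum(w_nm * x_m for w_nm, x_m in zip(weights, inputs)) + threshold
    # Sigmoidal activation function f (one common choice)
    return 1.0 / (1.0 + math.exp(-x))
```

For example, with all weights and the threshold equal to zero the activity x is 0, and the sigmoid maps it to an output of 0.5.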

Neural networks can be grouped into two categories: feedforward networks, whose graphs have no loops, and recurrent networks, in which loops occur because of feedback connections. A feedforward network is made up of a certain number of neurons, arranged in layers and connected to each other through links whose values determine the weights of the connections. Each neuron in a layer is connected to all of the neurons in the following layer and to all of the neurons in the preceding layer; there are no connections among neurons within the same layer. A feedforward network can be trained using a learning rule to map the input data so as to match the desired target at the network output. The most popular learning rule is the back-propagation learning algorithm (Rumelhart, Hinton, & Williams, 1986). It is well known that feedforward neural networks can generalize to unlearned input data; this characteristic is called the generalization property.
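A forward pass through such a layered, fully connected network can be sketched as follows. This is a minimal illustration only; the list-of-layers representation and the names are assumptions, not the chapter's notation, and training (e.g. back-propagation) is omitted.

```python
import math

def sigmoid(x):
    """Sigmoidal activation function, a common choice for f."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(layers, inputs):
    """Forward pass through a fully connected feedforward network.

    `layers` is a list of (weight_matrix, thresholds) pairs, one per layer;
    every neuron receives all outputs of the preceding layer, and there are
    no connections within a layer.
    """
    activations = inputs
    for weights, thresholds in layers:
        activations = [
            sigmoid(sum(w * a for w, a in zip(row, activations)) + t)
            for row, t in zip(weights, thresholds)
        ]
    return activations
```

For instance, a 2-1 network whose single output neuron has zero weights and a zero threshold produces an output of 0.5 regardless of the input.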