Artificial Higher Order Neural Network Training on Limited Precision Processors

Janti Shawash (University College London, UK) and David R. Selviah (University College London, UK)
DOI: 10.4018/978-1-61520-711-4.ch014


Previous research suggested that Artificial Neural Network (ANN) operation in a limited precision environment is particularly sensitive to the precision and cannot take place below a certain threshold level of precision. This study investigates, by simulation, on-line training of networks in limited precision using the Back Propagation (BP) and Levenberg-Marquardt (LM) algorithms, with the aim of achieving high overall calculation accuracy. The networks include a new type of Higher Order Neural Network (HONN) known as the Correlation HONN (CHONN), and are trained on a discrete XOR dataset and a continuous optical waveguide sidewall roughness dataset to find the precision at which training and operation are feasible. The BP algorithm converged at a certain precision, beyond which the performance did not improve. The results support previous findings in the literature for ANN operation that discrete datasets require lower precision than continuous datasets. The importance of our findings is that they demonstrate the feasibility of on-line, real-time, low-latency training on limited precision electronic hardware, such as Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs), to achieve high overall operational accuracy.
Chapter Preview

1. Introduction

There is a need for high speed, low latency (input to output delay), embedded computing for use in control systems for aircraft, vehicles, and robots, for example. Digital electronic hardware, such as Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs), can achieve this real-time, high speed, low latency operation, but with the associated penalty of reduced precision. Indeed, there is a trade-off between low latency and high precision. Artificial Neural Networks (ANNs) offer the possibility of achieving low latency in a limited precision environment: the precision of individual parts of the calculation is reduced as far as possible to give low latency, without unduly sacrificing the overall output accuracy of the full system. There have been many studies on the operation of ANNs on real-time, low precision electronic hardware (Jung & Kim, 2007; Sahin, Becerikli, & Yazici, 2006; Zhu & Sutton, 2003). It was found that the ANN output error depends on the number of hidden layers (Piche, 1995; Stevenson, Winter, & Widrow, 1990), so reducing the size of the ANN allows simpler operating and learning algorithms and higher accuracy. A number of researchers have found that they must train ANNs offline on high precision floating point CPUs on PCs to preserve accuracy during training, then truncate the final weights to a lower precision and download them into a limited precision environment such as a DSP or FPGA. ANN size also limits offline learning in software, as the time and memory requirements grow with network size. Parallel hardware processors significantly increase the speed (Lopez-Garcia, Moreno-Armendariz, Riera-Babures, Balsi, & Vilasis-Cardona, 2005; Maguire et al., 2007), but only if the area occupied by the ANN circuit is minimized.
In real-time hardware, the ANN size poses a more serious problem than in software running on a floating point CPU, due to the more limited circuit resources, such as memory.
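To make the limited precision setting concrete, a simulation of such hardware typically represents every value in signed fixed-point form with a chosen number of fractional bits, saturating at the range limits. The helper below is an illustrative sketch of that quantization model (the function name and parameter choices are our own, not from the chapter):

```python
def quantize(x, frac_bits=8, total_bits=16):
    """Round x to the nearest value representable in signed fixed-point
    with `frac_bits` fractional bits out of `total_bits` total, saturating
    at the range limits -- a simple model of DSP/FPGA arithmetic."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))          # most negative raw integer
    hi = (1 << (total_bits - 1)) - 1       # most positive raw integer
    raw = max(lo, min(hi, round(x * scale)))
    return raw / scale
```

For example, with 8 fractional bits the value 0.1 is stored as 26/256 = 0.1015625, and values outside roughly [-128, 128) saturate; sweeping `frac_bits` downward is one way to probe the precision threshold the chapter investigates.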

Dias, Antunes, Vieira, and Mota (2006) demonstrated that it is possible to implement an on-line Levenberg-Marquardt (LM) training algorithm in software; the use of on-line learning, as opposed to batch learning, reduces the memory requirements and operation time. The ability to run LM training on-line with reduced memory and operation complexity suggests that the LM algorithm may be ideally suited for implementation on real-time, reduced precision hardware, where it has not previously been used. Therefore, we compare on-line LM training with on-line Back Propagation (BP) training in a limited precision environment to find the lowest precision at which learning is feasible. Another way to reduce the size of an ANN is to use a Higher Order Neural Network (HONN) structure. So we investigate the implementation of the recently introduced Correlation HONN (CHONN) (Selviah & Shawash, 2008) and compare it with that of a first order ANN in a limited precision environment.
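The size reduction from a HONN comes from augmenting the inputs with higher-order terms so that a smaller network can capture nonlinear interactions. The sketch below shows a generic second-order expansion using pairwise input products; it is illustrative only, as the exact set of correlation terms used by the CHONN is defined in Selviah and Shawash (2008):

```python
from itertools import combinations

def second_order_expand(x):
    """Augment an input vector with its pairwise products -- the kind of
    correlation terms a second-order HONN feeds to each neuron.
    Illustrative only; not the exact CHONN term set."""
    return list(x) + [a * b for a, b in combinations(x, 2)]
```

A first order network fed with this expanded vector can then model second-order input correlations with fewer neurons, which matters when circuit area is the limiting resource.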

To our knowledge, no one has yet succeeded in training ANNs adequately in a very limited precision environment. It has been found that if training is performed in a limited precision environment, the ANN converges correctly at high precision, but below some threshold level of precision the training does not converge correctly. Moreover, to our knowledge, no one has trained, or even run, a HONN in a limited precision environment. We present the first demonstration of running a HONN in a limited precision environment, show how to reduce the threshold precision that had earlier prevented training in very low precision environments, and demonstrate for the first time training of both ANNs and HONNs in a very limited precision environment to achieve high overall calculation accuracy.
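As a minimal sketch of what training in limited precision involves, the following toy example runs on-line BP on the discrete XOR set with a 2-2-1 sigmoid network, rounding every weight to a fixed-point grid after each update. The network size, learning rate, and 12-bit grid are our own illustrative choices, not the chapter's experimental settings:

```python
import math
import random

def q(x, frac_bits=12):
    """Snap x to a signed fixed-point grid with frac_bits fractional bits."""
    s = 1 << frac_bits
    return round(x * s) / s

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor_bp(epochs=200, lr=0.5, frac_bits=12, seed=1):
    """On-line BP on XOR with weights quantized after every update
    (illustrative; real studies sweep frac_bits to find the threshold)."""
    rng = random.Random(seed)
    # 2-2-1 network: hidden weights w1[2][3] (incl. bias), output weights w2[3]
    w1 = [[q(rng.uniform(-1, 1), frac_bits) for _ in range(3)] for _ in range(2)]
    w2 = [q(rng.uniform(-1, 1), frac_bits) for _ in range(3)]
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(epochs):
        for (a, b), t in data:
            x = (a, b, 1.0)                                   # inputs plus bias
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
            hb = h + [1.0]
            y = sigmoid(sum(w * hi for w, hi in zip(w2, hb)))
            dy = (y - t) * y * (1.0 - y)                      # output delta
            dh = [dy * w2[i] * h[i] * (1.0 - h[i]) for i in range(2)]
            for j in range(3):                                # output layer update
                w2[j] = q(w2[j] - lr * dy * hb[j], frac_bits)
            for i in range(2):                                # hidden layer update
                for j in range(3):
                    w1[i][j] = q(w1[i][j] - lr * dh[i] * x[j], frac_bits)
    return w1, w2
```

After every update the weights lie exactly on the 2^-12 grid, so the simulation never accumulates precision the target hardware could not hold; lowering `frac_bits` reproduces the convergence failures described above.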

Section 2 describes HONNs and on-line learning algorithms. Section 3 details the experimental method, while sections 4, 5, and 6 present the simulations and the results. Discussions, conclusions, and acknowledgements are presented in sections 7 and 8, followed by references.
