Evaluating the Effects of Size and Precision of Training Data on ANN Training Performance for the Prediction of Chaotic Time Series Patterns


Lei Zhang
DOI: 10.4018/IJSSCI.2019010102

Abstract

In this research, artificial neural networks (ANNs) with various architectures are trained to generate the chaotic time series patterns of the Lorenz attractor. The ANN training performance is evaluated based on the size and precision of the training data. The nonlinear autoregressive (NAR) model is first trained in open-loop mode. The trained model is then used with closed-loop feedback to predict the chaotic time series outputs. The research goal is to use the designed NAR ANN model for the simulation and analysis of electroencephalogram (EEG) signals in order to study brain activities. A simple ANN topology with a single hidden layer of 3 to 16 neurons and 1 to 4 input delays is used. The training performance is measured by the averaged mean squared error (MSE). It is found that the training performance cannot be improved by solely increasing the training data size; however, it can be improved by increasing the precision of the training data. This provides useful knowledge towards reducing the number of EEG data samples, and the corresponding acquisition time, required for prediction.
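To make the open-loop/closed-loop scheme concrete, the following is a minimal Python sketch: a single-hidden-layer network is trained on delayed samples of a series (open loop), then iterated on its own outputs (closed loop). The sine placeholder series, the scikit-learn stand-in for the ANN, and the helper names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of NAR open-loop training and closed-loop prediction.
# Assumptions: scikit-learn's MLPRegressor stands in for a single-hidden-layer
# ANN, and a sine wave stands in for a Lorenz component used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_open_loop_data(series, delays):
    """Build (input, target) pairs y(t-delays..t-1) -> y(t)."""
    X = np.array([series[t - delays:t] for t in range(delays, len(series))])
    y = series[delays:]
    return X, y

def closed_loop_predict(model, seed, steps):
    """Feed the model's own outputs back as inputs (closed-loop mode)."""
    history = list(seed)                      # last `delays` true samples
    for _ in range(steps):
        x = np.array(history[-len(seed):]).reshape(1, -1)
        history.append(float(model.predict(x)[0]))
    return np.array(history[len(seed):])

series = np.sin(np.linspace(0, 60, 3000))     # placeholder time series
delays, hidden = 4, 16                        # upper end of the studied ranges (1-4, 3-16)
X, y = make_open_loop_data(series, delays)
net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000).fit(X, y)
mse = np.mean((net.predict(X) - y) ** 2)      # averaged MSE as the performance measure
pred = closed_loop_predict(net, series[-delays:], steps=100)
```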

Introduction

In the age of big data, both data scientists and machine learning experts seem to unanimously advocate for ever-increasing amounts of training data, devoting considerable time and effort to data collection from all sources (Deng, 2009; Wu, 2014; Labrinidis, 2012). Training outcomes with a mere few percentage points of improvement in learning accuracy can be considered hugely satisfying, and sufficient for raising a toast to the development of convolutional neural networks (CNNs) (Simonyan, 2014) and the advancement of deep learning (LeCun, 2015). In light of artificial neural network (ANN) technology inspired by brain research (Schalkoff, 1997), and with scientific prudence and inquisitiveness, this paper offers an alternative perspective on improving training efficacy: using a smaller training data size with better data precision, and employing an optimized ANN architecture with lower computational cost. It has been reported that deep learning (Krizhevsky, 2017) has surpassed the human brain in image classification accuracy. It is, however, necessary to point out that the human brain is by far the most energy-efficient design of evolution, whose ultimate goal is to save energy for survival rather than to spend it on achieving impressive but impractical accuracy. Although there are roughly a hundred billion neurons and on the order of a hundred trillion neural connections in a human brain, they are only partially activated when necessary, in that a specific brain function only activates the related brain region. That is perhaps why people often feel exhausted after trying to multitask, while creativity and innovation are better cultivated by allowing the brain to focus intensively on a single dedicated task. For a normal daily task such as face recognition, a human brain can easily succeed in feature detection and classification, accurately and efficiently, within a very short period of time and with a negligible amount of energy. In a real-life scenario, the ambitious may be willing to spend extra effort to achieve extraordinary performance in remembering thousands of faces in detail, while others may be able to get by with much less energy by strategically distinguishing a small number of key features. In this research, the goal is to investigate the “get by” solution for the generation and prediction of chaotic time series patterns using ANNs, by improving the quality rather than the quantity of the training data. This can consequently reduce the number of electroencephalogram (EEG) samples, and correspondingly the data acquisition time, required for ANN training (Zou et al., 2011), while lowering the energy consumption of the hardware implementation of a real-time prediction system for brain research (Wang et al., 2010).

From a practical point of view, it is difficult to acquire EEG signals in large quantities: setting up EEG equipment with multi-channel wet electrodes is time consuming, and the collected signals are generally noisy and subject-dependent. Therefore, it is necessary first to simulate the EEG signals with an ANN-based chaotic system generator model for theoretical research purposes; then to evaluate the training performance using data of various sizes and precisions in order to define the optimal training data for ANN training; and additionally to optimize the ANN architecture to improve training efficacy with a reduced number of EEG samples. The Lorenz system can be used to model many real-world chaotic phenomena, such as those arising in weather prediction. This study uses the Lorenz system as its research subject, with the potential to apply the research method and outcomes to other similar chaotic systems in general.
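As an illustration, the sketch below generates Lorenz training series of a chosen size and precision. The classic parameters (sigma = 10, rho = 28, beta = 8/3), the Runge-Kutta step size, and the use of decimal rounding to model lower precision are assumptions; the preview does not state the paper's integration settings.

```python
# Hedged sketch: Lorenz time series with controllable size and precision.
# Parameters and the rounding-based notion of "precision" are assumptions.
import numpy as np

def lorenz_series(n_samples, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate dx/dt = sigma(y - x), dy/dt = x(rho - z) - y,
    dz/dt = xy - beta*z with a 4th-order Runge-Kutta scheme."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    s = np.array([1.0, 1.0, 1.0])             # arbitrary initial condition
    out = np.empty((n_samples, 3))
    for i in range(n_samples):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        out[i] = s
    return out

data_full = lorenz_series(5000)               # size: number of samples kept
data_low = np.round(data_full, decimals=2)    # precision: rounded copy of the same series
```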

The rest of the paper provides the research background and explains the research approach of using an ANN-based chaotic system generator to simulate and predict EEG signals for brain research; describes the generation of the training data; demonstrates and discusses the different training data sets and the corresponding training results; and finally summarizes the research work.
