Introduction
Considerable attention has been given to the problem of financial data modeling and decision making (Davalos et al., 2009; Sun, 2010; Joseph & Mazouz, 2010; Hammami & Boujelbene, 2012; Lai & Joseph, 2012; Strang, 2012). However, due to the non-stationarity, strong volatility clustering, and chaotic properties of stock market data, the prediction of share prices is considered a hard and challenging task. Recently, multi-resolution techniques such as the wavelet transform (Mallat, 1989; Daubechies, 1992) have been successfully applied to engineering problems (Bhutada et al., 2011; Chen et al., 2012) and to financial data forecasting (Li et al., 2006; Huang & Wu, 2008, 2010; Hsieh et al., 2011; Huang, 2011; Wang et al., 2011) because of their powerful pattern-extraction capability. The wavelet transform (WT) is a mathematical technique that decomposes a given signal into approximation and detail components. The approximation components (coefficients) characterize the coarse structure of the data and identify its long-run trend, while the detail components (coefficients) capture discontinuities, ruptures, and singularities in the original data. Wavelet analysis thus allows the extraction of hidden and significant temporal patterns from the original data. At each level of decomposition, the data are decomposed further by applying the transform to the approximation components of the previous level.
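The decompose-then-recurse scheme described above can be illustrated with a minimal Python sketch using the Haar wavelet, the simplest orthonormal wavelet. The function names and the toy price series below are purely illustrative and do not correspond to any of the cited studies, which may use other wavelet families (e.g., Daubechies wavelets).

```python
import math

def haar_step(x):
    """One level of the Haar wavelet transform: split the signal into
    approximation (coarse trend) and detail (local fluctuation) coefficients."""
    half = len(x) // 2
    approx = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(half)]
    detail = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(half)]
    return approx, detail

def haar_decompose(x, levels):
    """Multi-level decomposition: each further level is applied to the
    approximation coefficients of the previous level."""
    details = []
    approx = list(x)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

# Hypothetical daily closing prices; the length must be divisible by 2**levels.
prices = [10.0, 10.2, 10.1, 10.4, 10.3, 10.5, 11.0, 10.9]
a, ds = haar_decompose(prices, 2)
# a holds the level-2 approximation (long-run trend);
# ds[0] and ds[1] hold the level-1 and level-2 detail coefficients.
```

Because the Haar transform is orthonormal, the coefficients preserve the energy of the original series, so no information is lost across decomposition levels.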
Previously, Li et al. (2006) applied the WT to decompose the Dow Jones Industrial Average (DJIA) index data and used a genetic programming algorithm for forecasting purposes. They concluded that wavelet analysis provides promising indicators and helps improve the forecasting performance of the genetic programming algorithm. Huang and Wu (2008) applied the WT to NASDAQ (US), NK225 (Japan), TWSI (Taiwan), and KOSPI (South Korea) data and used the extracted time-scale patterns as inputs to the relevance vector machine (RVM), a Bayesian version of the support vector machine, to perform non-parametric regression and forecasting. They found that the combined WT-RVM outperforms traditional forecasting models in terms of root-mean-squared forecasting error (RMSFE). In subsequent work, Huang and Wu (2010) used the WT to analyze the NASDAQ, S&P500 (US), CAC40 (France), FTSE100 (UK), DAX30 (Germany), MIB40 (Italy), TSX60 (Canada), NK225, TWSI, and KOSPI data. A recurrent self-organizing map neural network was used to partition and store the temporal context of the pattern space, and a multiple kernel partial least squares regression was used for forecasting. The simulation results indicated that the presented model achieved the lowest RMSFE in comparison with neural network (NN), support vector machine (SVM), and traditional generalized autoregressive conditional heteroskedasticity (GARCH) models. Hsieh et al. (2011) applied the WT to analyze the stock price data of the DJIA, FTSE-100, Nikkei-225, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). A recurrent neural network was used to perform the forecasting task, and the artificial bee colony algorithm was adopted to optimize its weights and biases. Based on the obtained simulation results, the proposed system was found to be highly promising.
Huang (2011) combined the WT with kernel partial least squares regression for stock index forecasting, including the NASDAQ, S&P500, TSX60, NK225, TWSI, CAC40, FTSE100, DAX30, and MIB40. In terms of forecasting errors, the empirical results showed that the presented model outperformed traditional NN, SVM, and GARCH models. Wang et al. (2011) used the WT to decompose Shanghai Stock Exchange (SSE) prices, and a backpropagation neural network (BPNN) was adopted to predict future SSE values. The authors found that the BPNN fed with WT coefficients outperforms a BPNN that uses past values of the original data.