Support Vector Regression for Missing Data Estimation

Tshilidzi Marwala
DOI: 10.4018/978-1-60566-336-4.ch006


This chapter develops and compares, using accuracy measures, the merits of three data imputation models: auto-associative neural networks, principal component analysis, and support vector regression, each combined with a cultural genetic algorithm to impute missing variables. Principal component analysis improves the overall performance of the auto-associative network, while support vector regression shows promising potential for future investigation. Imputation accuracies of up to 97.4% are achieved for some of the variables.
Chapter Preview


The problem with data collection in surveys is that the data invariably suffer some loss of information, for example as a consequence of incorrect data entry or unfilled fields. This chapter explores three methods for data imputation, each combining a cultural genetic algorithm with a learning method: neural networks, principal component analysis, and support vector regression.

The general approach pursued in this chapter is to model the inter-relationships between data variables with regression models: neural networks (Chang & Tsai, 2008), principal component analysis (Adams et al., 2002), and support vector regression (Cheng, Yu, & Yang, 2007). Thereafter, a controlled and planned approximation of the missing data is conducted using an optimization method; in this chapter, a cultural genetic algorithm (Yuan & Yuan, 2006) is selected.
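The regression-plus-optimization idea can be sketched as follows. This is a minimal illustration, not the chapter's implementation: a fixed random map stands in for a trained auto-associative regression model, a plain genetic algorithm (selection, averaging crossover, Gaussian mutation) stands in for the cultural variant, and the names `reconstruct` and `error`, together with all parameter values, are hypothetical. The optimizer searches over the missing entries for the values that minimize the model's reconstruction error on the completed record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained auto-associative regression model: it maps a
# complete record back onto itself.  A real system would first train this
# map (e.g. a neural network) on complete records.
W = 0.5 * rng.normal(size=(4, 4))

def reconstruct(x):
    return np.tanh(x @ W)

# A record with one missing entry (index 2).
record = np.array([0.2, -0.5, np.nan, 0.8])
missing = np.isnan(record)

def error(candidates):
    """Reconstruction error of the record for each candidate fill-in."""
    errs = []
    for c in candidates:
        x = record.copy()
        x[missing] = c
        errs.append(np.sum((x - reconstruct(x)) ** 2))
    return np.array(errs)

# Plain genetic algorithm over the missing value(s): selection of the
# fittest, averaging crossover, Gaussian mutation, with elitism.
pop = rng.uniform(-1.0, 1.0, size=(30, missing.sum()))
for _ in range(100):
    order = np.argsort(error(pop))
    parents = pop[order[:10]]
    idx = rng.integers(0, 10, size=(28, 2))
    children = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])
    children += rng.normal(scale=0.05, size=children.shape)
    pop = np.vstack([parents[:2], children])   # elitism: keep the top two

best = pop[np.argmin(error(pop))]
record[missing] = best                         # imputed record
```

The same objective applies unchanged when the reconstruction model is principal component analysis (project and back-project) or support vector regression; only `reconstruct` changes.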

Data imputation using auto-associative neural networks as a regression model has been conducted, as explained in earlier chapters, by Abdella and Marwala (2006); Abdella (2005); Leke, Marwala, and Tettey (2006); and Nelwamondo, Mohamed, and Marwala (2007), while other variations include expectation maximization (Nelwamondo, 2008), rough sets as described in Chapter V, and decision trees (Barcena & Tussel, 2002). The use of auto-associative networks comes with a trade-off between computational complexity and time. Their advantage, however, is that they give good results, as observed in Chapters III and IV.

Auto-associative networks are used in this chapter because they have been applied successfully to many problems, including fault diagnosis in the temperature-controlled production of yeast by Shimizu et al. (1997). The auto-associative network based system was able to detect faults accurately in real time, whereas linear principal component analysis applied to the same problem could not detect these faults.

Shen, Fu, and Lu (2005) presented a support vector regression based color image watermarking scheme in which the watermark, together with information supplied by reference positions, is adaptively embedded into the blue channel of the host image, taking the human visual system into account. Other successful implementations of support vector machines include Marwala, Chakraverty, and Mahola (2006), who applied them to fault classification in mechanical systems, and Msiza, Nelwamondo, and Marwala (2007), who used them for water demand time-series forecasting.

Pan, Flynn, and Cregan (2007) successfully applied principal component analysis and sub-space principal component analysis to the monitoring of a combined cycle gas turbine, while Marwala and Hunt (2001) applied principal component analysis and neural networks to damage detection in a population of cylindrical shells. Other successful applications of principal component analysis include the classification of pasteurized milk (Horimoto & Nakai, 1998) and the work of Brito et al. (2006), who classified heat-treated liver pastes according to container type, using heavy metal content and manufacturer's data.
