1. Introduction
Data quality can greatly affect the analytical performance of data mining algorithms, for example in data classification. There are many possible causes of data quality problems, such as errors in manual data entry procedures, incorrect measurements, equipment errors, and others. Consequently, it is a common problem that many real-life datasets contain missing (attribute) values, i.e., missing data (Lakshminarayan et al., 1999).
Without effective data preparation, high-quality data mining results cannot be obtained. This is the major reason why the data preparation stage is so important and consumes such a large proportion of the effort in data mining processes such as the knowledge discovery in databases (KDD) process, usually about 60% of the whole effort (Cios and Kurgan, 2002).
Since most existing data mining and machine learning algorithms cannot deal with incomplete data, efforts are made to simplify the data preparation stage. Case deletion (or listwise deletion), that is, directly ignoring the examples with missing values, is the simplest approach; the remaining data are then used for the data analysis stage. However, this method is generally appropriate only when the chosen dataset contains a very small amount of missing data. Alternatively, missing data imputation can be considered, which aims to estimate missing values by reasoning from the observed data (Batista and Monard, 2003). Statistical analyses of case deletion and some conventional imputation methods are discussed in Little (1992).
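Case deletion is simple enough that a minimal sketch makes its cost obvious: every example with at least one missing attribute value is discarded, so even modest missing rates can remove a large share of the data. The following illustration uses pandas; the dataset and column names are purely hypothetical, not from the paper.

```python
import numpy as np
import pandas as pd

# Hypothetical toy dataset with missing (attribute) values.
df = pd.DataFrame({
    "age":    [25, np.nan, 47, 31],
    "income": [50000, 62000, np.nan, 58000],
    "label":  ["yes", "no", "no", "yes"],
})

# Case deletion (listwise deletion): drop every example that has
# at least one missing attribute value; keep only complete cases.
complete_cases = df.dropna()

print(len(df), "->", len(complete_cases))  # 4 -> 2
```

Here only two of four missing entries shrink the dataset by half, which is why case deletion is usually recommended only when the proportion of incomplete examples is very small.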
Recently, some novel imputation methods have been proposed (e.g., Zhang, 2008; Zhu et al., 2011), and different imputation methods have been compared through simulations of small to large missing rates over different kinds of datasets (e.g., Acuna and Rodriguez, 2004; Batista and Monard, 2003; Farhangfar et al., 2008). While these results show that case deletion is appropriate only when the chosen dataset has a very small proportion of missing values, and they may allow us to understand which imputation method performs better under which circumstances (i.e., for which datasets), they rarely answer the question: “When should we ignore the examples with missing values during the data preparation stage?” In other words, although datasets may contain categorical (i.e., discrete), numerical (i.e., continuous), or both types of data, very few studies have examined the effect of case deletion at different missing rates over various kinds of datasets on the results of data analysis (see Section 2.3 for further discussion of the limitations of related studies).
Therefore, the aim of this paper is to provide clearer guidelines for determining when the case deletion method can be applied directly, without performing imputation, for a given kind of incomplete dataset with a given amount of missing values. To assess the applicability of the case deletion method over various kinds of datasets with different missing rates, 40 datasets of different attribute types are used, containing discrete, continuous, and both types of data. Missing values are introduced into all attributes of each dataset at missing rates from 5% to 50% at 5% intervals (cf. Section 3.2). In addition, a decision tree is constructed with the data characteristics as input variables and their missing rates as the output variable. The resulting decision rules can be used to determine when to apply the case deletion method to what kind of dataset with different rates of missing values (cf. Section 3.3). A comparison is also made with the k-nearest neighbor imputation (kNNI) algorithm (k = 10) (Jonsson and Wohlin, 2004) to observe the final difference in classification accuracy between case deletion and missing data imputation (cf. Section 3.4).
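The two ingredients of this experimental setup, injecting missing values at a controlled rate and imputing them with k-nearest neighbors, can be sketched as follows. This is a simplified illustration under stated assumptions: missing entries are injected uniformly at random (an MCAR-style scheme; the paper's exact injection procedure may differ), and scikit-learn's `KNNImputer` is used as a stand-in for the kNNI algorithm of Jonsson and Wohlin (2004); the data are synthetic.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)

# Synthetic complete numeric data; in the paper, missing values are
# injected into all attributes at rates from 5% to 50% in 5% steps.
X = rng.normal(size=(200, 4))

def inject_missing(X, rate, rng):
    """Set roughly `rate` of all entries to NaN, uniformly at random
    (an MCAR-style injection; illustrative only)."""
    X = X.copy()
    mask = rng.random(X.shape) < rate
    X[mask] = np.nan
    return X

X_miss = inject_missing(X, rate=0.20, rng=rng)

# kNN imputation with k = 10: each missing entry is replaced by an
# average over the 10 nearest complete-feature neighbors.
X_imp = KNNImputer(n_neighbors=10).fit_transform(X_miss)

print("observed missing rate:", np.isnan(X_miss).mean())
print("missing after imputation:", np.isnan(X_imp).sum())
```

Repeating this for each missing rate and then comparing a classifier trained on the imputed data against one trained on the complete cases alone yields the kind of accuracy comparison reported in Section 3.4.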
The rest of this paper is organized as follows. Section 2 contains an overview of the related literature, including the missingness mechanisms and the kNNI method; the limitations of related work are also discussed. The experimental setup and results are presented in Section 3. Finally, Section 4 concludes the paper.