When Should We Ignore Examples with Missing Values?

Wei-Chao Lin, Shih-Wen Ke, Chih-Fong Tsai
Copyright: © 2017 |Pages: 11
DOI: 10.4018/IJDWM.2017100104

Abstract

In practice, datasets collected for data mining usually contain some missing values. It is common practice to perform case deletion, i.e., to ignore the examples with missing values, when the missing rate is sufficiently small. The aim of this paper is to answer the following question: when should one directly ignore sampled data with missing values? Using different types of datasets with various numbers of attributes, data samples, and classes, it is found that there are specific patterns under which case deletion can be applied to different datasets without significant performance degradation. In particular, these patterns are extracted as decision rules by a decision tree model. In addition, case deletion and imputation are compared over different datasets using the allowed missing rates and the decision rules. The results show that the classification performance obtained by case deletion and by imputation is similar, which demonstrates the reliability of the extracted decision rules.

1. Introduction

Data quality can greatly affect the analytical performance of data mining algorithms, for example in data classification. There are a variety of reasons for data quality problems, such as errors in manual data entry procedures, incorrect measurements, equipment errors, and many others. Consequently, it is a common problem that many real-life datasets contain missing (attribute) values, or missing data (Lakshminarayan et al., 1999).

Without effective data preparation, high-quality data mining results cannot be obtained. This is the major reason that the data preparation stage is so important and requires such a large proportion of the effort in data mining processes such as the knowledge discovery in databases (KDD) process, usually about 60% of the whole effort (Cios and Kurgan, 2002).

Since most existing data mining and machine learning algorithms cannot deal with incomplete data, efforts are made to simplify the data preparation stage. Case deletion (or listwise deletion), that is, directly ignoring the examples with missing values, is the simplest way to do this. The remaining data are then used for the data analysis stage. However, this method is generally appropriate only when the chosen dataset contains a very small amount of missing data. Alternatively, missing data imputation can be considered, which aims to estimate missing values by reasoning from the observed data (Batista and Monard, 2003). Statistical analyses of case deletion and some conventional imputation methods are discussed in Little (1992).
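As an illustration (not part of the paper itself), case deletion can be sketched in a few lines of Python; the function name and sample data below are assumptions for the example, with missing values represented as None.

```python
def case_deletion(examples):
    """Listwise deletion: keep only examples with no missing (None) attribute values."""
    return [row for row in examples if all(v is not None for v in row)]

# Hypothetical dataset: two attributes plus a class label per example.
data = [
    [5.1, 3.5, "A"],
    [4.9, None, "B"],   # has a missing attribute value -> dropped
    [6.2, 2.9, "A"],
]

complete = case_deletion(data)
print(len(complete))  # 2 of the 3 examples remain
```

The remaining complete examples would then be passed to the data analysis stage; the trade-off is that every observed value in a dropped example is discarded along with the missing one.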

Recently, some novel imputation methods have been proposed (e.g. Zhang, 2008; Zhu et al., 2011), and different imputation methods have been compared through simulations of small to large missing rates over different kinds of datasets (e.g. Acuna and Rodriguez, 2004; Batista and Monard, 2003; Farhangfar et al., 2008). While these results show that case deletion is appropriate only when the chosen dataset has a very small proportion of missing values, and they may allow us to understand which imputation method performs better under what circumstances (i.e., for which datasets), they rarely answer the question: "When should we ignore the examples with missing values during the data preparation stage?" In other words, although datasets may contain categorical (i.e., discrete), numerical (i.e., continuous), or both types of data, very few studies have examined the effect of different missing rates under case deletion over various kinds of datasets on the results of data analysis (see Section 2.3 for further discussion of the limitations of related studies).

Therefore, the aim of this paper is to provide clearer guidelines for determining when the case deletion method can be applied directly, without imputation, to a given kind of incomplete dataset with a given amount of missing values. In order to assess the applicability of the case deletion method over various kinds of datasets with different missing rates, 40 datasets of different attribute types are used, containing discrete, continuous, and both types of data. Missing values are introduced into all attributes of each dataset at missing rates from 5% to 50% in 5% intervals (cf. Section 3.2). In addition, a decision tree is constructed with the data characteristics as the input variables and the allowed missing rates as the output variable. The resulting decision rules can be used to determine when to apply the case deletion method to a given kind of dataset with a given rate of missing values (cf. Section 3.3). A comparison is also made with the k-nearest neighbor imputation (kNNI) algorithm (k = 10) (Jonsson and Wohlin, 2004) to observe the final difference in classification accuracy between case deletion and missing data imputation (cf. Section 3.4).
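The two experimental steps described above can be sketched as follows; this is a simplified Python illustration under assumptions not stated in the paper (numeric attributes only, missing values as None, and kNNI approximated as the mean of each missing attribute over the k nearest complete examples by Euclidean distance on the jointly observed attributes; it assumes at least one complete example exists).

```python
import math
import random

def inject_missing(rows, rate, seed=0):
    """Randomly replace attribute values with None at the given missing rate."""
    rng = random.Random(seed)
    out = [list(r) for r in rows]
    for r in out:
        for j in range(len(r)):
            if rng.random() < rate:
                r[j] = None
    return out

def knn_impute(rows, k=10):
    """Fill each missing value with the mean of that attribute over the
    k nearest complete rows (distance computed on observed attributes)."""
    complete = [r for r in rows if all(v is not None for v in r)]
    out = [list(r) for r in rows]
    for r in out:
        if all(v is not None for v in r):
            continue
        def dist(c):
            obs = [(a, b) for a, b in zip(r, c) if a is not None]
            return math.sqrt(sum((a - b) ** 2 for a, b in obs))
        neighbors = sorted(complete, key=dist)[:k]
        for j, v in enumerate(r):
            if v is None:
                r[j] = sum(n[j] for n in neighbors) / len(neighbors)
    return out

# Hypothetical usage: the last row has a missing second attribute.
rows = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [2.0, None]]
print(knn_impute(rows, k=2)[-1])  # → [2.0, 3.0]
```

In the experiments the missing rates would be swept from 0.05 to 0.50 in steps of 0.05, and the classifier's accuracy after case deletion compared against its accuracy after imputation at each rate.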

The rest of this paper is organized as follows. Section 2 contains an overview of the related literature, including the missingness mechanisms and the kNNI method; the limitations of related work are also discussed. The experimental setup and results are presented in Section 3. Finally, Section 4 concludes the paper.
