Combining kNN Imputation and Bootstrap Calibrated Empirical Likelihood for Incomplete Data Analysis

Yongsong Qin, Shichao Zhang, Chengqi Zhang
Copyright: © 2010 | Pages: 13
DOI: 10.4018/jdwm.2010100104

Abstract

The k-nearest neighbor (kNN) imputation, as one of the most important research topics in incomplete data discovery, has been developed with great success on industrial data. However, it is difficult to obtain a mathematically valid and simple procedure for constructing confidence intervals to evaluate the imputed data. This paper studies a new estimation method for missing (or incomplete) data that combines kNN imputation with bootstrap calibrated empirical likelihood (EL). The combination not only relieves the burden of seeking a mathematically valid asymptotic theory for kNN imputation, but also inherits the advantages of the EL method over the normal approximation method. Simulation results demonstrate that the bootstrap calibrated EL method performs quite well in constructing confidence intervals for data imputed with the kNN method.

Introduction

Item non-response is usually handled by some form of imputation to fill in the missing item values. Imputation not only permits the creation of a general-purpose complete public-use data file that can be used for standard analyses, but also recovers part of the lost information. For example, we may intend to obtain observations $(X_i, Y_i)$, $i = 1, \ldots, n$, from n individuals. If the X-values are observed completely while some of the Y-values are missing, we can fill in the missing values of Y using the relationship between the variables X and Y, since each pair $(X_i, Y_i)$ comes from one individual. This is better than dropping the ith pair $(X_i, Y_i)$ when $Y_i$ alone is missing. The k-nearest neighbor (kNN) imputation is one of the most important hot deck methods used to compensate for nonresponse in incomplete data mining. It is a nonparametric imputation method, which is simple but effective for many applications. With this method, a case is imputed using values from its k nearest neighbors (points). A thorough description of the kNN technique was presented by McRoberts et al. (2002).
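To make the idea concrete, below is a minimal sketch of kNN donor imputation for a scalar response Y with a fully observed covariate X (Python/NumPy; the donor-mean rule and the distance on a scalar X are illustrative assumptions, not details fixed by the paper):

```python
import numpy as np

def knn_impute(x, y, k):
    """Fill missing entries of y (marked NaN) with the mean response of
    the k complete cases whose x-values are closest to the missing case.
    Illustrative sketch: scalar x, absolute-difference distance."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    observed = ~np.isnan(y)
    y_imputed = y.copy()
    for i in np.where(~observed)[0]:
        # distance from the incomplete case to every complete case
        dist = np.abs(x[observed] - x[i])
        donors = np.argsort(dist)[:k]                # k nearest neighbors
        y_imputed[i] = y[observed][donors].mean()    # donor-mean imputation
    return y_imputed
```

For multivariate X, the absolute difference would be replaced by, for example, a Euclidean norm over standardized covariates.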

The nearest neighbor (NN) imputation, a special case of kNN with k = 1, has been widely used in data analysis applications, such as surveys conducted at Statistics Canada, the U.S. Bureau of Labor Statistics, and the U.S. Census Bureau (Chen & Shao, 2000). Despite this widespread use, it is difficult to obtain a mathematically valid and simple procedure for constructing confidence intervals to evaluate the imputed data. Instead of building a validation theory, the efficiency of kNN imputation has been evaluated with only a few experiments in data mining and machine learning. This leaves open the question of how much confidence can be placed in the values compensated by kNN imputation.

In this paper, we propose a new estimation method for missing (or incomplete) data, named kNN-BEL, which combines kNN imputation with bootstrap calibrated empirical likelihood (EL). The kNN-BEL method first uses cross-validation to choose k; missing values are then estimated from their k nearest neighbors; finally, a bootstrap calibrated EL method is applied to construct confidence intervals for the mean of the dependent variable, that is, to evaluate the confidence of the values compensated by kNN imputation. Sketches of these steps are given below.
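The preview does not spell out the cross-validation scheme at this point. One plausible sketch, reusing the knn_impute helper above, hides a subset of the complete cases and scores each candidate k by the squared error of its reconstructions (the holdout fraction and error criterion are assumptions for illustration):

```python
def choose_k_by_cv(x, y, k_grid, holdout_frac=0.2, seed=0):
    """Pick the k in k_grid minimizing squared error on complete cases
    whose responses are hidden and then re-imputed (illustrative scheme)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    complete = np.where(~np.isnan(y))[0]
    held = rng.choice(complete,
                      size=max(1, int(holdout_frac * complete.size)),
                      replace=False)
    best_k, best_err = None, np.inf
    for k in k_grid:
        y_masked = y.copy()
        y_masked[held] = np.nan              # artificially delete known values
        y_hat = knn_impute(x, y_masked, k)
        err = np.mean((y_hat[held] - y[held]) ** 2)
        if err < best_err:
            best_k, best_err = k, err
    return best_k
```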

In the complete data setting, the original idea of empirical (or nonparametric) likelihood (EL) dates back to Hartley and Rao (1968) in the context of sample surveys, and Owen (2001) made a systematic study of the EL method. Unlike normal approximation based intervals, EL confidence intervals are range preserving and transformation respecting, and their shape and orientation are determined entirely by the data.
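For a complete sample $z_1, \ldots, z_n$, the empirical log-likelihood ratio at a candidate mean $\mu$ is obtained by profiling out a Lagrange multiplier. The routine below follows the standard textbook construction (see Owen, 2001); it is shown only as a sketch and is not code from the paper:

```python
def el_log_ratio(z, mu, tol=1e-10, max_iter=200):
    """Return -2 log R(mu), the EL ratio statistic for the mean, by solving
    sum_i d_i / (1 + lam * d_i) = 0 with d_i = z_i - mu (damped Newton).
    Requires mu to lie strictly inside the range of z for a proper root."""
    d = np.asarray(z, dtype=float) - mu
    if np.all(d == 0.0):
        return 0.0
    lam = 0.0
    for _ in range(max_iter):
        w = 1.0 + lam * d
        grad = np.sum(d / w)             # derivative of sum log(1 + lam*d)
        hess = -np.sum((d / w) ** 2)     # strictly negative, so concave
        step = grad / hess
        # damp the step so every weight 1 + lam*d_i stays positive
        while np.any(1.0 + (lam - step) * d <= 0):
            step *= 0.5
        lam -= step
        if abs(step) < tol:
            break
    return 2.0 * np.sum(np.log(1.0 + lam * d))
```

The maximizing weights are $p_i = 1 / \{n(1 + \lambda d_i)\}$, so the statistic equals $2 \sum_i \log(1 + \lambda d_i)$ at the solved multiplier.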

In the incomplete data setting (Banek et al., 2008; Golfarelli & Rizzi, 2009; Gomez et al., 2009; Nikulin, 2008; Pighin & Ieronutti, 2008; Yu et al., 2009), kNN imputation is used in conjunction with the bootstrap calibrated EL. Using cross-validation to select k helps minimize the mean squared error in the imputation process, and the bootstrap calibrated EL method relieves the burden of deriving a mathematically valid asymptotic theory for the estimators of the parameters of interest, while keeping the advantages of the EL method over the normal approximation method. Our simulations of kNN-BEL demonstrate that the combination of kNN imputation and the bootstrap calibrated EL method performs quite well in analyzing data with missing values.
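A hedged sketch of the bootstrap calibration: rather than comparing the EL statistic to a chi-squared quantile, its null distribution is approximated by resampling the imputed data and evaluating the statistic at the point estimate. The simple i.i.d. resampling of imputed values below is an illustrative assumption; the paper develops the exact procedure and its justification.

```python
def bootstrap_el_interval(y_imputed, alpha=0.05, B=500, seed=1):
    """(1 - alpha) EL confidence interval for the mean of the imputed data,
    with the critical value calibrated by the bootstrap instead of chi^2_1.
    Illustrative resampling scheme, not the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    y_imputed = np.asarray(y_imputed, dtype=float)
    theta_hat = y_imputed.mean()            # point estimate from imputed data
    # bootstrap distribution of the EL statistic at the point estimate
    stats = np.array([
        el_log_ratio(rng.choice(y_imputed, size=y_imputed.size, replace=True),
                     theta_hat)
        for _ in range(B)
    ])
    crit = np.quantile(stats, 1.0 - alpha)  # calibrated critical value
    # invert the statistic over a grid: keep mu with -2 log R(mu) <= crit
    grid = np.linspace(y_imputed.min(), y_imputed.max(), 1000)[1:-1]
    inside = [mu for mu in grid if el_log_ratio(y_imputed, mu) <= crit]
    return min(inside), max(inside)
```

Because the EL statistic is unimodal in mu with its minimum at the sample mean of the imputed data, the accepted set is an interval, and a grid scan recovers its endpoints.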
