

Óscar Pérez (Universidad Autónoma de Madrid, Spain) and Manuel Sánchez-Montañés (Universidad Autónoma de Madrid, Spain)

DOI: 10.4018/978-1-59904-849-9.ch044

One common assumption in supervised learning algorithms is that the training and test datasets share the same statistical structure (Hastie, Tibshirani & Friedman, 2001). That is, the test set is assumed to have the same attribute distribution p(**x**) and the same class distribution p(c|**x**) as the training set. However, this is usually not the case in real applications, for various reasons. For instance, in many problems the training dataset is obtained in a specific manner that differs from the way the test dataset will be generated later. Moreover, the nature of the problem may evolve in time. These phenomena cause p^{Tr}(**x**, c) ≠ p^{Test}(**x**, c), which can degrade the performance of the model constructed in training.

Here we present a new algorithm that re-estimates a model constructed in training using the unlabelled test patterns. We show the convergence properties of the algorithm and illustrate its performance on an artificial problem. Finally, we demonstrate its strengths on a heart disease diagnosis problem in which the training set is taken from a different hospital than the test set.
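The chapter's specific re-estimation algorithm is not reproduced here, but its general spirit can be sketched with a generic EM-style update: fit a simple class-conditional Gaussian model on the labelled training set, then treat the unknown test classes as hidden variables and re-estimate the parameters on the unlabelled test patterns. All modelling choices below (1-D Gaussian classes with unit variance, the particular shift amounts) are illustrative assumptions, not the chapter's experiment:

```python
import numpy as np

rng = np.random.default_rng(2)

# Labelled training set: two 1-D Gaussian classes (means 0 and 3)
x_tr = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
c_tr = np.array([0] * 200 + [1] * 200)
# Unlabelled test set whose class means have shifted (hypothetical)
x_te = np.concatenate([rng.normal(0.5, 1, 200), rng.normal(3.5, 1, 200)])

# Initial model estimated from the labelled training data
mu = np.array([x_tr[c_tr == 0].mean(), x_tr[c_tr == 1].mean()])
pi = np.array([(c_tr == 0).mean(), (c_tr == 1).mean()])

# Re-estimate the model on the unlabelled test patterns with EM,
# treating the unknown test classes as hidden variables.
for _ in range(30):
    # E step: responsibility of each class for each test pattern
    lik = pi * np.exp(-0.5 * (x_te[:, None] - mu) ** 2)
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M step: update mixing proportions and class means
    pi = resp.mean(axis=0)
    mu = (resp * x_te[:, None]).sum(axis=0) / resp.sum(axis=0)

# The class means have moved toward the shifted test distribution.
```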

Chapter Preview

In practical problems, the statistical structure of training and test sets can be different, that is, pTr(x, c) ≠ pTest(x, c). This effect can have different causes, for instance biases in the sample selection of the training set (Heckman, 1979; Salganicoff, 1997). Another possible cause is that the training and test sets are related to different contexts. Consider, for instance, a heart disease diagnosis model that is used in a hospital different from the one where the training dataset was collected. If the hospitals are located in cities where people have different habits, average age, etc., the test set will have a different statistical structure than the training set.

The special case pTr(x) ≠ pTest(x) with pTr(c | x) = pTest(c | x) is known in the literature as “covariate shift” (Shimodaira, 2000). In the context of machine learning, covariate shift can degrade the performance of standard learning algorithms. Different techniques have been proposed to deal with this problem, see for example (Heckman, 1979; Salganicoff, 1997; Shimodaira, 2000; Sugiyama, Krauledat & Müller, 2007). Transductive learning has also been suggested as another way to improve performance when the statistical structure of the test set is shifted with respect to the training set (Vapnik, 1998; Chen, Wang & Dong, 2003; Wu, Bennett, Cristianini & Shawe-Taylor, 1999).
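One standard family of techniques for covariate shift is importance weighting: each training pattern is weighted by the density ratio pTest(x) / pTr(x), so that a weighted learner emphasizes the region of input space that the test set actually occupies. A minimal sketch, assuming 1-D inputs and crude single-Gaussian density estimates (a simplification, not the specific method of any reference above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D training and test inputs with shifted p(x):
# same p(c|x), different input distributions (covariate shift).
x_train = rng.normal(0.0, 1.0, 500)
x_test = rng.normal(1.0, 1.0, 500)

def gaussian_pdf(x, mu, sigma):
    # Density of a univariate Gaussian N(mu, sigma^2)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Estimate the density ratio w(x) = p_test(x) / p_train(x) by fitting
# a single Gaussian to each input sample (a crude but common baseline).
mu_tr, sd_tr = x_train.mean(), x_train.std()
mu_te, sd_te = x_test.mean(), x_test.std()
weights = gaussian_pdf(x_train, mu_te, sd_te) / gaussian_pdf(x_train, mu_tr, sd_tr)

# Training patterns that resemble test patterns receive higher weight,
# so a weight-aware learner focuses on the test set's input region.
print(weights[x_train > 1.0].mean() > weights[x_train < -1.0].mean())  # True
```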

The statistics of the patterns x can also change in time, for example in a company that has a continuous flow of new and leaving clients (figure 1). If we are interested in constructing a model for prediction, the statistics of the clients when the model is exploited will differ from the statistics in training. Finally, the concept to be learned is often not static but evolves in time (for example, predicting whether emails are spam), causing pTr(x, c) ≠ pTest(x, c). This problem is known as “concept drift” and different algorithms have been proposed to cope with it (Black & Hickey, 1999; Wang, Fan, Yu, & Han, 2003; Widmer & Kubat, 1996).
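A common baseline among windowing approaches to concept drift is to learn only from a sliding window of recent examples, so that outdated examples are automatically forgotten as the concept evolves. A toy sketch (the class name, window size, and majority-vote predictor are illustrative assumptions):

```python
from collections import deque

class WindowedMajority:
    """Toy drift-aware learner: predicts the majority class
    among only the most recent examples."""

    def __init__(self, window_size=100):
        self.window = deque(maxlen=window_size)  # old examples fall out

    def update(self, label):
        self.window.append(label)

    def predict(self):
        # Majority class inside the current window (ties -> class 1)
        return int(2 * sum(self.window) >= len(self.window))

model = WindowedMajority(window_size=50)
for t in range(200):
    label = 0 if t < 100 else 1  # the concept drifts abruptly at t = 100
    model.update(label)

print(model.predict())  # 1: the window has forgotten the old concept
```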

Classifier: function that assigns a class c to each input pattern x of interest. A classifier can be constructed directly from a set of pattern examples with their respective classes, or indirectly from a statistical model.

Statistical model: mathematical function that models the statistical structure of the problem. For classification problems, the statistical model is p(x, c), or equivalently {p(x | c), p(c)}, since p(x, c) = p(x | c) p(c).

EM (Expectation-Maximization algorithm): standard iterative algorithm for estimating the parameters θ of a parametric statistical model. EM finds the specific parameter values that maximize the likelihood of the observed data D given the statistical model, p(D | θ). The algorithm alternates between the Expectation step and the Maximization step, finishing when θ meets some convergence criterion.
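As an illustration of the E step / M step alternation described in this entry, the following sketch runs EM on a two-component, 1-D Gaussian mixture with known unit variances, estimating only the means and mixing proportions (all data values and initializations are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Observed data D: samples from two 1-D Gaussians (means -2 and 2)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])

mu = np.array([-1.0, 1.0])  # initial guesses for the component means
pi = np.array([0.5, 0.5])   # initial mixing proportions

for _ in range(50):
    # E step: posterior responsibility of each component for each point
    lik = pi * np.exp(-0.5 * (data[:, None] - mu) ** 2)
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M step: re-estimate parameters to increase the data likelihood
    pi = resp.mean(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)

# The estimated means converge near the true values (-2 and 2).
```

In practice the loop would stop when the parameter change (or the log-likelihood increase) falls below a tolerance, rather than after a fixed iteration count.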

Missing value: special value of an attribute denoting that the attribute's value is unknown or cannot be measured.

Attribute: each of the components that constitute an input pattern.

Training/Test sets: in the context of this chapter, the training set is composed of all labelled examples that are provided for constructing a classifier. The test set is composed of the new unlabelled patterns whose classes should be predicted by the classifier.

Semi-Supervised Learning: machine learning technique that uses both labelled and unlabelled data for constructing the model.

Supervised Learning: type of learning where the objective is to learn a function that associates a desired output (‘label’) to each input pattern. Supervised learning techniques require a training dataset of examples with their respective desired outputs. Supervised learning is traditionally divided into regression (the desired output is a continuous variable) and classification (the desired output is a class label).
