Context-Sensitive Attribute Evaluation

Marko Robnik-Šikonja
Copyright: © 2009 | Pages: 5
DOI: 10.4018/978-1-60566-010-3.ch052

Abstract

Research in machine learning, data mining, and statistics has produced a number of methods that estimate the usefulness of an attribute (feature) for predicting the target variable. The estimates of attributes' utility are subsequently used in various important tasks, e.g., feature subset selection, feature weighting, feature ranking, feature construction, data transformation, decision and regression tree building, data discretization, visualization, and comprehension. These tasks frequently occur in data mining, robotics, and in the construction of intelligent systems in general. The majority of attribute evaluation measures in use are myopic in the sense that they estimate the quality of one feature independently of the context of the other features. In problems that may involve many feature interactions, these measures are not appropriate. The measures historically based on the Relief algorithm (Kira & Rendell, 1992) take context into account through the distance between instances and are effective in problems with strong dependencies between attributes.

Background

The majority of feature evaluation measures are impurity based, meaning that they measure the impurity of the class value distribution. These measures evaluate each feature separately, by measuring the impurity of the splits that result from partitioning the learning instances according to the values of the evaluated feature. Because they assume the conditional independence of the features given the class, these measures are myopic: they do not take the context of other features into account. If the target concept is a discrete variable (the classification problem), well-known and widely used measures of this kind are information gain (Hunt et al., 1966), Gini index (Breiman et al., 1984), j-measure (Smyth & Goodman, 1990), Gain ratio (Quinlan, 1993), and MDL (Kononenko, 1995). A large difference between the impurity of the class values before the split and after the split on a given feature implies purer splits and therefore a more useful feature. These measures cannot be applied directly to numerical features, but discretization techniques can be used first and the discretized features then evaluated. If the target concept is presented as a real-valued function (the regression problem), the impurity-based evaluation heuristics used are, e.g., the mean squared and the mean absolute error (Breiman et al., 1984).
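
To make the myopia concrete, here is a minimal sketch (in Python; the function names are ours, not the chapter's) of information gain for a discrete feature. On an XOR-like parity concept, each feature evaluated in isolation scores zero, even though the two features together fully determine the class:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Impurity of the class distribution before a split, minus the
    weighted impurity of the partitions induced by one discrete
    feature -- a myopic estimate, as no other feature is consulted."""
    n = len(labels)
    after = 0.0
    for v in set(feature_values):
        part = [c for f, c in zip(feature_values, labels) if f == v]
        after += len(part) / n * entropy(part)
    return entropy(labels) - after

# XOR-like parity concept: each feature alone is useless, although
# together they determine the class -- the myopic measure scores both zero.
a = [0, 0, 1, 1]
b = [0, 1, 0, 1]
y = [0, 1, 1, 0]
print(information_gain(a, y), information_gain(b, y))  # 0.0 0.0
```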

The term context here denotes related features which interact and only together contain sufficient information for the classification of instances. Note that the relevant context may not be the same for all instances of a given problem. Measures that take this context into account through the distance between instances, and are effective in classification problems with strong dependencies between attributes, are Relief (Kira & Rendell, 1992), Contextual Merit (Hong, 1997), and ReliefF (Robnik-Sikonja & Kononenko, 2003). RReliefF is a measure proposed to address regression problems (Robnik-Sikonja & Kononenko, 2003). A sketch of the original Relief is given below.
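
The following is a minimal, illustrative sketch of the original two-class Relief under common assumptions (numeric features scaled to [0, 1], Manhattan distance); it is not the authors' reference implementation. On a continuous XOR concept it assigns clearly positive weights to the two interacting features, which the myopic measure above cannot do:

```python
import numpy as np

def relief(X, y, n_iter=200, rng=None):
    """Sketch of two-class Relief: for a sampled instance, find its
    nearest hit (same class) and nearest miss (other class); reward
    features that differ on the miss, penalize those that differ on
    the hit.  Context enters through the distance metric, which
    involves all features at once."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)   # Manhattan distance
        dist[i] = np.inf                      # never match the instance itself
        same = y == y[i]
        same[i] = False
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(~same, dist, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iter
    return w

# Continuous XOR: the first two features interact, the third is noise.
rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = ((X[:, 0] > 0.5) ^ (X[:, 1] > 0.5)).astype(int)
print(relief(X, y).round(3))  # first two weights clearly larger than the third
```

ReliefF refines this scheme by averaging over the k nearest hits and misses and by handling multi-class problems and missing values (Robnik-Sikonja & Kononenko, 2003).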

For a more thorough overview of feature quality evaluation measures, see (Kononenko & Kukar, 2007). Breiman (2001) proposed the random forest learning algorithm which, as a byproduct, can output the utility of the attributes. With a large enough data sample, which ensures sufficiently large and diverse trees in the forest, these estimates are also context-sensitive. For an overview of other recent work, especially in the context of feature subset selection, see (Guyon & Elisseeff, 2003). Note that this chapter takes a solely machine learning view of feature selection and omits methods for model selection in regression that amount to feature selection. A recent work trying to bridge the two worlds is (Zhou et al., 2006).
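
As a small illustration, and assuming scikit-learn is available, a random forest's impurity-based importances recover the same XOR interaction that a myopic measure misses, because later splits in each tree are evaluated in the context of earlier ones:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Same continuous-XOR setup as above: a large, diverse forest recovers
# the interaction between the first two features as high importance.
rng = np.random.default_rng(1)
X = rng.random((500, 3))                      # third feature is pure noise
y = ((X[:, 0] > 0.5) ^ (X[:, 1] > 0.5)).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(forest.feature_importances_.round(3))   # interacting features dominate
```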
