
Source Title: Feature Selection and Ensemble Methods for Bioinformatics: Algorithmic Classification and Implementations

Copyright: © 2011
Pages: 17
DOI: 10.4018/978-1-60960-557-5.ch009

## Feature Selection Based on the Shapley Value

### Main Idea

Throughout, *v*(*S*) denotes the reward associated with a coalition *S*. The reward can be negative, zero, or positive; a negative or zero reward implies no benefit from including player *i* in the current coalition. Here *n* is the total number of players, Π is the set of permutations over the *n* players, *S_i*(π) is the set of players appearing before player *i* in permutation π, *d* is a bound on the size of the coalitions considered, and |·| denotes the cardinality of a set. Thus, the Shapley value of a given player is the mean of its marginal importance, averaged over all possible coalitions of players.

This chapter describes an approach to feature selection originating from the principles of game theory, in particular, in the context of coalition games (Cohen, Ruppin, & Dror, 2005), (Cohen, Dror, & Ruppin, Feature selection via coalition game theory, 2007). Feature selection is iteratively done through the Contribution-Selection algorithm aiming at optimizing classifier performance on unseen data. This algorithm combines both filter and wrapper approaches to feature selection. In contrast to the conventional filters, features are iteratively re-ranked by a classifier. The ranking itself employs the Shapley value in order to estimate the importance of each feature for the given task.
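The iterative select-and-re-rank loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `contribution(f, selected)` is a hypothetical callable standing in for the Shapley-based contribution estimate, and the stopping rule (no remaining feature contributes positively) is one reasonable choice.

```python
def contribution_selection(features, contribution, select_k=1):
    """Sketch of a forward Contribution-Selection loop: repeatedly rank the
    remaining candidate features by their estimated contribution given the
    already-selected set, and move the top-ranked ones into the selection.
    `contribution(f, selected)` is assumed to return feature f's estimated
    contribution to classifier performance on the validation set."""
    selected, candidates = [], list(features)
    while candidates:
        ranked = sorted(candidates,
                        key=lambda f: contribution(f, selected),
                        reverse=True)
        if contribution(ranked[0], selected) <= 0:  # no feature still helps
            break
        for f in ranked[:select_k]:
            selected.append(f)
            candidates.remove(f)
    return selected

# Toy contribution (hypothetical): features 0 and 1 each add 0.5 accuracy.
toy = lambda f, selected: 0.5 if f in (0, 1) else 0.0
print(contribution_selection(range(5), toy))  # [0, 1]
```

With the toy contribution, the loop selects the two informative features and stops as soon as the best remaining candidate no longer improves performance.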

The algorithm assumes that two sets are available: training set and validation set. The former is employed for training a classifier while the latter is used to evaluate the trained classifier performance, given a subset of features - candidates for selection. The classifier performance is measured by classification accuracy, though, in principle, any other suitable characteristic, such as the area under the Receiver Operating Characteristic (ROC) curve or False Acceptance Rate (FAR), can be utilized as well.

Coalition games belonging to cooperative game theory involve a set of players and a reward associated with different groups or coalitions of players. The reward of a certain coalition depends on individual contributions of players composing this coalition to the game. The larger the contribution of a player is, the higher the benefit of having this player in a coalition. Coalitions with high reward are naturally preferable over those with small reward.

The contribution is based on the Shapley value (Shapley, 1953). This value is defined as follows. Let the marginal importance of player *i* to a coalition *S* (*i* ∉ *S*) be

Δ_*i*(*S*) = *v*(*S* ∪ {*i*}) − *v*(*S*).
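In code, the marginal importance Δ_*i*(*S*) = *v*(*S* ∪ {*i*}) − *v*(*S*) is a one-liner. The payoff `v` below is a hypothetical stand-in (coverage of two assumed "signal" features); in the chapter's setting it would be the validation accuracy of a classifier trained on the feature subset.

```python
# Hypothetical payoff: v(S) is the fraction of "signal" features covered by S.
def v(S):
    signal = {0, 1}              # assumption: features 0 and 1 carry the signal
    return len(S & signal) / len(signal)

def marginal_importance(i, S):
    """Delta_i(S) = v(S | {i}) - v(S): the gain from adding feature i to S."""
    return v(S | {i}) - v(S)

print(marginal_importance(0, set()))   # 0.5: feature 0 adds half the signal
print(marginal_importance(2, {0, 1}))  # 0.0: feature 2 adds nothing
```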

The Shapley value is then defined as

Φ_*i*(*v*) = (1/*n*!) Σ_{π ∈ Π} Δ_*i*(*S_i*(π)),

where Π is the set of permutations over the *n* players and *S_i*(π) is the set of players appearing before player *i* in permutation π.

Casting these game theory definitions into feature selection terminology maps the set of players to the set of features, and the reward given to a coalition of players to the value of the classifier performance characteristic (here, the accuracy) attained with a certain subset of features. From now on, we will use feature selection terminology instead of game theory terminology.
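For a small feature set, the Shapley value can be computed exactly by enumerating all permutations, directly mirroring the definition. This is a sketch with a hypothetical toy payoff, not the chapter's classifier-based one:

```python
from itertools import permutations
from math import factorial

def shapley(i, players, v):
    """Exact Shapley value of player i: its marginal importance averaged
    over all permutations of the players."""
    total = 0.0
    for pi in permutations(players):
        S = set(pi[:pi.index(i)])        # players appearing before i in pi
        total += v(S | {i}) - v(S)       # marginal importance Delta_i(S)
    return total / factorial(len(players))

# Toy payoff (hypothetical): the reward is 1 iff feature 0 is in the coalition.
v = lambda S: 1.0 if 0 in S else 0.0
players = [0, 1, 2]
print(shapley(0, players, v))  # 1.0: feature 0 carries all the reward
print(shapley(1, players, v))  # 0.0
```

As a sanity check, the Shapley values of all players sum to *v* of the grand coalition (the efficiency property), which the toy game satisfies.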

As can be seen from the definition of the Shapley value, summing is done over all possible permutations of features, which is impractical to compute. One solution is to devise a heuristic coupled with a limit *d* on the feature subset size, which corresponds to the fact that the number of significant interactions among features is, in general, much smaller than the total number of features *n*. As a result, Φ_*i*(*v*) is approximated by

Φ̂_*i*(*v*) = (1/|Π̂_*d*|) Σ_{π ∈ Π̂_*d*} Δ_*i*(*S_i*(π)),

where Π̂_*d* is a set of sampled permutations of size *d*.

Since the number of possible permutations is large even for moderate values of *d*, we replace the sampling of permutations of size *d* with randomly sampling *d* features with replacement from the original set of features.
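The resulting Monte Carlo estimate can be sketched as follows, again with a hypothetical toy payoff in place of validation accuracy; the sample count and seed are illustrative choices:

```python
import random

def shapley_estimate(i, features, v, d, n_samples=2000, seed=0):
    """Approximate the Shapley value of feature i: average its marginal
    importance over coalitions built by sampling d features with replacement
    from the full feature set (duplicates and i itself are dropped)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        S = {f for f in rng.choices(features, k=d) if f != i}
        total += v(S | {i}) - v(S)       # marginal importance Delta_i(S)
    return total / n_samples

# Toy payoff (hypothetical): the reward is 1 iff feature 0 is in the coalition.
v = lambda S: 1.0 if 0 in S else 0.0
features = list(range(10))
print(shapley_estimate(0, features, v, d=3))  # 1.0
print(shapley_estimate(1, features, v, d=3))  # 0.0
```

Because each sampled coalition has at most *d* members, each evaluation of *v* (a classifier training run, in the chapter's setting) stays cheap even when *n* is large.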

