Variable Importance Evaluation for Machine Learning Tasks

Martti Juhola, Tapio Grönfors
DOI: 10.4018/978-1-4666-5888-2.ch029

Chapter Preview

Background

The variables, also called features or attributes, of a data set represent the properties of a data case, or instance, in the context of a phenomenon or object under study. Such objects range from concrete visual patterns in digital images to abstract ones such as diseases that appear in individual subjects. Variable values are measured in several ways depending on the object type: patients, for example, are measured in physiological or other laboratory tests, or data are collected with questionnaires given to them. Sometimes data values are computed directly from a data source, such as the occurrences of certain words in an electronic document collection.

Variable types also differ in how they are formed. In images, for instance, features can be formed on the basis of geometric and statistical approaches. An object in an image may be characterized by the length of its perimeter, its shape compared to a circle, ellipse or square, its diameter, and many other measures derived from its size and shape. Statistical features are associated with the distributions of grey-level or colour intensities encoded numerically. For example, the statistical moments mean, variance, skewness and kurtosis can be calculated to characterize an object. Further, features can also be measured that relate to other factors, say, the location of an object in an image.
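As a minimal sketch of how such statistical moments could be computed, the following Python fragment derives the mean, variance, skewness and excess kurtosis of an image's grey-level distribution. The function name intensity_moments and the synthetic image are our own illustration, not taken from the chapter:

    import numpy as np

    def intensity_moments(image):
        # Flatten the image into a 1-D array of grey-level values.
        x = np.asarray(image, dtype=float).ravel()
        mean = x.mean()
        var = x.var()                                     # second central moment
        std = np.sqrt(var)
        skew = np.mean((x - mean) ** 3) / std ** 3        # third standardized moment
        kurt = np.mean((x - mean) ** 4) / var ** 2 - 3.0  # excess kurtosis
        return mean, var, skew, kurt

    # A small synthetic 8-bit grey-level "image" for demonstration.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64))
    print(intensity_moments(img))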

Variable types can also be viewed according to how they are represented in computation. Measuring devices typically record signals of continuous phenomena, e.g., temperature, but output discrete values. For instance, an analog-to-digital converter may take signal values (perhaps a voltage that has first been amplified) between -10 V and +10 V. These values are not continuous but discrete, because they are digitized, e.g., with 16-bit words into an interval such as [0, 2^16 - 1]. The result may be calibrated in one way or another to correspond to some property, for example, a subject's weight in kilograms. In the statistical sense, variable types are nominal, ordinal, interval or ratio (absolute) scale. All of these types may appear within a medical data set. For instance, the colour of eyes is nominal, and the grade of pain is ordinal, say 'no pain', 'slight' or 'severe'. Usually these are encoded with non-negative integers. However, we have to know which statistics can be computed for them. For nominal variables we can compute modes, but not means, standard deviations and several other quantities, which are reserved for interval and ratio scales. The difference between the two latter is that the interval type has no fixed zero or measurement unit; for example, temperature can be measured on different scales.
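The following sketch illustrates such a digitization and calibration, assuming a linear 16-bit converter over [-10 V, +10 V]; the function names and the calibration constants are hypothetical, chosen only for illustration:

    def quantize(voltage, v_min=-10.0, v_max=10.0, bits=16):
        # Map an analog voltage in [v_min, v_max] linearly onto the
        # discrete code words 0 .. 2**bits - 1, like an A/D converter.
        levels = 2 ** bits
        code = round((voltage - v_min) / (v_max - v_min) * (levels - 1))
        return max(0, min(levels - 1, code))

    def calibrate(code, scale=0.01, offset=0.0,
                  v_min=-10.0, v_max=10.0, bits=16):
        # Convert a code word back to a physical quantity (e.g. kilograms)
        # via the inverse mapping and a linear calibration; the scale and
        # offset here are made-up values.
        voltage = v_min + code / (2 ** bits - 1) * (v_max - v_min)
        return scale * voltage + offset

    code = quantize(3.2)      # a 3.2 V reading becomes a 16-bit code word
    print(code, calibrate(code))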

Frequently, variables are not of the same importance for a computation such as classification between the possible diseases of patients in a given medical specialty. It is therefore often useful to weight variables or to discard the less important ones. Nowadays a great number of variables is common in many applications. For example, document classification or text categorization is an area where there may be thousands of variables, representing the relative frequencies of relevant words present in documents. Therefore, we have to somehow reduce huge numbers of variables to enable computation in sensible running times and, in general, to leave out unnecessary variables whose usefulness for the computational task at hand is negligible.
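As one simple illustration of such a reduction (a common filter-style heuristic, not the chapter's own method), terms could be ranked by document frequency and only the top k kept; the function select_top_terms below is a hypothetical sketch:

    from collections import Counter

    def select_top_terms(documents, k):
        # Count for each term the number of documents it appears in
        # (document frequency), a simple filter-style selection criterion.
        df = Counter()
        for doc in documents:
            df.update(set(doc.lower().split()))
        # Keep the k terms with the highest document frequency.
        return [term for term, _ in df.most_common(k)]

    docs = ["the patient reported severe pain",
            "no pain was reported by the patient",
            "the image shows a small object"]
    print(select_top_terms(docs, k=5))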

Key Terms in this Chapter

Classification: A typical data mining task in which cases of a dataset are divided into different classes or groups according to similarity or distance.

Machine Learning: Computational methods that are used in data mining tasks such as clustering, classification and prediction.

Variable Selection: The process of recognizing and evaluating the most important or useful variables among the perhaps great number of all variables present in a data set.

Distance Measure: A quantity used to compute the similarity or dissimilarity of cases in the variable space determined by the variables used from a data set. Euclidean distance is probably the best-known distance measure.

Preprocessing of Data: Statistical and other computational techniques applied to select data and to modify it into an appropriate form for the actual data mining tasks.

Nearest Neighbour Searching: A computational method that searches for the nearest neighbour (case) of a given case in a data set on the basis of a distance measure. This technique is one of the simplest applied to classification; a minimal sketch is given after these definitions.
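The following sketch combines the last two terms above: a nearest neighbour search with the Euclidean distance measure, used for classification. The function names and example data are our own illustration:

    import math

    def euclidean(a, b):
        # Euclidean distance between two cases in the variable space.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def nearest_neighbour_class(query, cases, labels):
        # Classify the query case with the label of its nearest neighbour.
        distances = [euclidean(query, case) for case in cases]
        return labels[distances.index(min(distances))]

    cases = [(1.0, 2.0), (4.0, 4.0), (5.0, 1.0)]
    labels = ["class A", "class B", "class C"]
    print(nearest_neighbour_class((4.5, 3.5), cases, labels))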
