Support Vector Machines

Cecilio Angulo, Luis Gonzalez-Abril
Copyright: © 2009 | Pages: 6
DOI: 10.4018/978-1-59904-849-9.ch222

Abstract

Support Vector Machines (SVMs) are learning machines, originally designed for binary classification problems, that implement the well-known Structural Risk Minimization (SRM) inductive principle to achieve good generalization from a limited number of learning patterns (Vapnik, 1998). The optimization criterion for these machines is the maximization of the margin between the two classes, i.e. the distance between the two parallel hyperplanes that split the vectors of each class: the larger the margin separating the classes, the smaller the VC dimension of the learning machine, which theoretically ensures good generalization performance (Vapnik, 1998), as has been demonstrated in a number of real applications (Cristianini, 2000). The kernel trick is applicable to their formulation, which improves the capacity of these algorithms: learning is not performed directly in the original space of the data but in a new space called the feature space, and for this reason the SVM is one of the most representative of the so-called Kernel Machines (KMs). The main theory was originally developed in the sixties and seventies by V. Vapnik and A. Chervonenkis (Vapnik et al., 1963; Vapnik et al., 1971; Vapnik, 1995; Vapnik, 1998) on the basis of the separable binary classification problem; however, widespread use of these learning algorithms did not take place until the nineties (Boser et al., 1992). SVMs have been used thoroughly in all kinds of learning problems, mainly classification, but also in other problems such as regression (Schölkopf et al., 2004) or clustering (Ben-Hur et al., 2001). Optical Character Recognition (Cortes et al., 1995) and Text Categorization (Sebastiani, 2002) were the most important initial fields of application for SVMs. With the extended application of new kernels, novel applications have appeared in Bioinformatics, in particular the classification of gene expression data (microarrays) (Brown et al., 1997) and the detection of structures in proteins and their relationship with DNA chains (Jaakkola et al., 2000). Other applications include image identification, speech recognition, time-series prediction, etc. A more extensive list of applications can be found in (Guyon, 2006).
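As an illustration of the margin-maximization criterion described above, the following minimal sketch trains a linear SVM on a toy two-class problem. It assumes the scikit-learn and NumPy libraries, which are not part of the original chapter, and all variable names are chosen purely for illustration.

    import numpy as np
    from sklearn.svm import SVC

    # Toy two-class problem: two separable point clouds in the plane.
    rng = np.random.RandomState(0)
    X = np.vstack([rng.randn(20, 2) - 2, rng.randn(20, 2) + 2])
    y = np.hstack([np.zeros(20), np.ones(20)])

    # A linear SVM maximizes the margin between the two classes;
    # a large C keeps the problem close to the hard-margin formulation.
    clf = SVC(kernel="linear", C=1e3).fit(X, y)

    # The margin width is 2 / ||w||, where w is the normal of the separating hyperplane.
    w = clf.coef_[0]
    print("margin width:", 2.0 / np.linalg.norm(w))
    print("number of support vectors:", len(clf.support_))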

Background

Regularization Networks (RNs), obtained from the penalization inductive principle, are algorithms based on a deep theoretical background, but their purely asymptotic approximation properties and the expansion of the solution function on a large number of vectors make them an impractical choice in their original form. Looking for a more reduced expansion of the solution, some researchers observed the good behaviour of the SVM, which can take a finite training set as a hypothesis in its theoretical discourse and build the final solution by considering nested approximation spaces.

Key Terms in this Chapter

Structural Risk Minimization (SRM) Inductive Principle: The main idea of the principle is to minimize the test error by controlling two contradictory factors: a risk functional computed from empirical data and the capacity of the set of real-valued functions (Vapnik, 1998).
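A standard way of making this trade-off explicit is Vapnik's risk bound; the following formulation is quoted from the general SRM literature rather than from this chapter. With probability at least 1 − η, for a function class of VC dimension h and l training examples,

    R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha) \;+\; \sqrt{\frac{h\bigl(\ln(2l/h)+1\bigr) - \ln(\eta/4)}{l}}

SRM then selects, within a nested sequence of function classes, the class whose minimizer of the right-hand side is smallest.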

Kernel Machine or Kernel Methods: Kernel machines owe their name to the use of kernel functions, which enable them to operate in the feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. Kernel functions have been introduced for sequence data, text, images, as well as vectors (Schölkopf et al., 2002).
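As a minimal sketch of this idea (assuming NumPy; none of the names below come from the chapter), the Gram matrix of a Gaussian (RBF) kernel can be computed directly from pairwise squared distances, without ever forming the feature-space coordinates:

    import numpy as np

    def rbf_kernel_matrix(X, gamma=1.0):
        """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
        sq_norms = np.sum(X ** 2, axis=1)
        sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
        return np.exp(-gamma * sq_dists)

    X = np.random.RandomState(0).randn(5, 3)   # 5 points in R^3
    K = rbf_kernel_matrix(X)
    print(K.shape)   # (5, 5): inner products in an implicit feature space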

The VC Dimension (for Vapnik-Chervonenkis Dimension): It is a number defined as the cardinality of the largest set of points that an algorithm can shatter; that is, it is a measure of the capacity of a statistical classification algorithm (Vapnik et al., 1971). For example, the VC dimension of hyperplane classifiers in ℝ^d is d + 1.

Mercer’s Kernel: A function k: X × X → ℝ is a Mercer’s kernel if it is a continuous, symmetric and positive semi-definite function (Cristianini, 2000).
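A quick numerical way to see the positive semi-definite condition at work (a sketch assuming NumPy, not something taken from the chapter) is to check that the eigenvalues of the Gram matrix built from a candidate kernel are non-negative on any finite sample:

    import numpy as np

    def gram_matrix(kernel, X):
        """Gram matrix K[i, j] = kernel(x_i, x_j) for a finite sample X."""
        return np.array([[kernel(a, b) for b in X] for a in X])

    rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))   # Gaussian kernel, a Mercer kernel
    X = np.random.RandomState(1).randn(6, 2)

    eigvals = np.linalg.eigvalsh(gram_matrix(rbf, X))
    print(np.all(eigvals >= -1e-10))   # True: the Gram matrix is positive semi-definite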

Generalization: It is the process of formulating general concepts by abstracting common properties of instances. In the specific context of SVMs, it means that if a decision function h(x) is obtained from x_i ∈ X ⊂ ℝ^d, this function is considered valid for all x ∈ ℝ^d. It is the basis of all valid deductive inference, and a process of verification is necessary to determine whether a generalization holds true for any new given instance.

Regularization: It is any method of preventing a model from overfitting the data, and it is used for solving ill-conditioned parameter-estimation problems.
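In the learning setting this idea is usually written as a penalized functional; the following Tikhonov-style formulation is standard in the regularization literature and is not reproduced from this chapter. Given training pairs (x_i, y_i), a loss V and a regularization parameter λ > 0, one minimizes

    \min_{f \in \mathcal{H}} \; \frac{1}{l}\sum_{i=1}^{l} V\bigl(y_i, f(x_i)\bigr) \;+\; \lambda \,\lVert f \rVert_{\mathcal{H}}^{2}

where the norm term penalizes overly complex functions and thereby controls overfitting.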

Kernel Trick: This procedure consists of substituting the inner product in the space of input variables by an appropriate function k(x, x′) that associates a real number to each pair of inputs, that is k(x, x′) ∈ ℝ for any x, x′ ∈ X. Thus, it is a method for converting a linear classifier algorithm into a non-linear one by using a non-linear function to map the original observations into a higher-dimensional space; this makes a linear classification in the new space equivalent to a non-linear classification in the original space. A small sketch of this equivalence is given below.
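As a small sketch of the equivalence just described (assuming NumPy; all names are illustrative), a degree-2 polynomial kernel yields the same inner products as an explicit quadratic feature map, so a linear classifier in that feature space behaves as a non-linear classifier in the input space:

    import numpy as np

    def phi(x):
        """Explicit feature map for the kernel k(x, z) = (x . z)^2 in R^2."""
        return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

    def k(x, z):
        """Degree-2 homogeneous polynomial kernel."""
        return np.dot(x, z) ** 2

    x = np.array([1.0, 2.0])
    z = np.array([3.0, -1.0])

    # The inner product in the feature space equals the kernel evaluated in the input space.
    print(np.dot(phi(x), phi(z)), k(x, z))   # both are 1.0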
