Cecilio Angulo (Technical University of Catalonia, Spain) and Luis Gonzalez-Abril (Technical University of Catalonia, Spain)

DOI: 10.4018/978-1-59904-849-9.ch222

Chapter Preview

Top**Regularization** Networks (RNs), obtained from the penalization inductive principle, are algorithms based on a deep theoretical background, but their purely asymptotic approximation properties and the expansion of the solution function on a large number of vectors convert them in a no practical choice in its original definition. Looking for a more reduced expansion of the solution some researchers observe the good behaviour of the SVM, being able to consider a finite training set as hypothesis in its theoretical discourse as well as building the final solution by considering nested approximation spaces.

Structural Risk Minimization (SRM) Inductive Principle: The main idea of this principle is to minimize the test error by controlling two contradictory factors: a risk functional computed from the empirical data and the capacity of the set of real-valued functions (Vapnik, 1998).

Kernel Machines or Kernel Methods: Kernel machines owe their name to the use of kernel functions, which enable them to operate in a feature space without ever computing the coordinates of the data in that space; instead they simply compute the inner products between the images of all pairs of data points in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. Kernel functions have been introduced for sequence data, text, and images, as well as vectors (Schölkopf et al., 2002).
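The equivalence described above can be illustrated with a small sketch (names and the choice of a degree-2 polynomial kernel are illustrative, not from the chapter): the kernel evaluates one inner product in input space, yet returns the same value as the inner product of explicit feature-space images.

```python
import numpy as np

# Illustrative sketch: the degree-2 polynomial kernel k(x, z) = (x.z + 1)^2
# equals the inner product <phi(x), phi(z)> in an explicit feature space,
# without ever building phi's coordinates.

def poly_kernel(x, z):
    """Kernel evaluation: one inner product in input space, then a square."""
    return (np.dot(x, z) + 1.0) ** 2

def phi(x):
    """Explicit degree-2 feature map for 2-D input (one of several
    equivalent parameterizations)."""
    x1, x2 = x
    return np.array([x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     1.0])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
print(poly_kernel(x, z))        # -> 4.0, the same value...
print(np.dot(phi(x), phi(z)))   # ...as the explicit feature-space product
```

For higher input dimensions the gap widens quickly: the explicit map has O(d^2) coordinates, while the kernel still costs one O(d) inner product.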

The VC Dimension (for Vapnik-Chervonenkis Dimension): A number defined as the cardinality of the largest set of points that an algorithm can shatter; that is, it is a measure of the capacity of a statistical classification algorithm (Vapnik et al., 1971).
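Shattering can be checked directly on a small example (a hypothetical sketch, not from the chapter; the perceptron is used only as a convenient separator finder): linear classifiers in the plane realize all 2^3 labelings of three non-collinear points, which is why their VC dimension is 3.

```python
import itertools
import numpy as np

def perceptron_separates(points, labels, epochs=100):
    """Try to find a linear separator w.x + b with the perceptron rule;
    on linearly separable data it converges in finitely many updates."""
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(points, labels):
            if y * (np.dot(w, x) + b) <= 0:
                w, b = w + y * x, b + y
                errors += 1
        if errors == 0:
            return True
    return False

# Three non-collinear points: every one of the 2^3 labelings is linearly
# separable, so linear classifiers in R^2 shatter this set.
pts = [np.array(p) for p in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]]
shattered = all(perceptron_separates(pts, labels)
                for labels in itertools.product([-1, 1], repeat=3))
print(shattered)  # True
```

No set of four points in the plane can be shattered by linear classifiers (the XOR labeling of a square fails), which bounds the VC dimension at exactly 3.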

Mercer’s Kernel: A function k: X × X → R is a Mercer’s kernel if it is a continuous, symmetric and positive semi-definite function (Cristianini, 2000).
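The symmetry and positive semi-definiteness conditions can be spot-checked numerically (a sketch with illustrative names, not a proof of Mercer's condition): for a Mercer kernel, every Gram matrix built from sample points is symmetric with non-negative eigenvalues. The Gaussian RBF kernel is a standard example.

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian RBF kernel, a classic example of a Mercer kernel."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))   # 8 sample points in R^3

# Gram matrix K_ij = k(x_i, x_j): symmetric by construction
K = np.array([[rbf_kernel(xi, xj) for xj in X] for xi in X])

# Positive semi-definiteness on this sample: all eigenvalues >= 0
# (up to floating-point tolerance).
eigvals = np.linalg.eigvalsh(K)
print(np.allclose(K, K.T), eigvals.min() >= -1e-10)
```

A kernel failing this eigenvalue check on even one sample could not be a Mercer kernel, so the test is a useful quick filter when designing new kernels.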

Generalization: The process of formulating general concepts by abstracting common properties of instances. In the specific context of SVMs, it means that if a decision function h(x) is obtained from xi ∈ X ⊂ Rd, this function is considered valid for all x ∈ Rd. It is the basis of all valid deductive inference, and a verification process is necessary to determine whether a generalization holds true for any new given instance.

Regularization: Any method of preventing a model from overfitting the data; it is also used for solving ill-conditioned parameter-estimation problems.
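Both roles of regularization show up in ridge (L2-penalized) regression, sketched below under illustrative names and data: with two nearly collinear predictors the unpenalized normal equations are ill-conditioned and yield wildly large coefficients, while a small penalty term stabilizes the solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
x = rng.uniform(-1, 1, n)
# Two nearly identical columns -> ill-conditioned normal equations
A = np.column_stack([x, x + 1e-6 * rng.standard_normal(n), np.ones(n)])
y = 2.0 * x + 0.5 + 0.1 * rng.standard_normal(n)

def ridge(A, y, lam):
    """Solve (A^T A + lam I) w = A^T y: the L2 penalty lam shrinks the
    solution and conditions the system for lam > 0."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

w_plain = ridge(A, y, 0.0)   # unregularized: huge, unstable coefficients
w_reg = ridge(A, y, 1e-3)    # regularized: moderate, stable coefficients
print(np.linalg.norm(w_plain), np.linalg.norm(w_reg))
```

The penalized solution norm never exceeds the unpenalized one, and for the near-collinear design above it is smaller by several orders of magnitude.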

Kernel Trick: This procedure consists of substituting the inner product in the space of input variables by an appropriate function k(x, x’) that associates a real number to each pair of inputs, that is, k(x, x’) ∈ R for any x, x’ ∈ X. Thus, it is a method for converting a linear classifier algorithm into a non-linear one by using a non-linear function to map the original observations into a higher-dimensional space; this makes a linear classification in the new space equivalent to a non-linear classification in the original space.
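The conversion of a linear algorithm into a non-linear one can be sketched with the perceptron (a hypothetical example, not from the chapter): replacing its inner product by a degree-2 polynomial kernel yields a classifier that separates the XOR problem, which no linear classifier in the input space can.

```python
import numpy as np

def k(x, z):
    """Degree-2 polynomial kernel substituted for the plain inner product."""
    return (np.dot(x, z) + 1.0) ** 2

# XOR: not linearly separable in the input space
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])

# Kernel perceptron: the weight vector lives in feature space, so it is
# represented implicitly through coefficients alpha on the training points.
alpha = np.zeros(len(X))
for _ in range(200):
    updated = False
    for i in range(len(X)):
        f = sum(alpha[j] * y[j] * k(X[j], X[i]) for j in range(len(X)))
        if y[i] * f <= 0:
            alpha[i] += 1.0
            updated = True
    if not updated:
        break

pred = [np.sign(sum(alpha[j] * y[j] * k(X[j], x) for j in range(len(X))))
        for x in X]
print(pred)  # matches y: the kernelized classifier solves XOR
```

The only change from the linear perceptron is the substitution k(x, x’) for the dot product; the non-linearity comes entirely from the kernel, exactly as the definition above describes.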

