Problem of Data Classification
Data classification is a scientific discipline studying ways of assigning class membership values (labels) to unknown observations or samples (unknown in the sense that they have not previously been seen by an observer), based on a set of observations or samples provided with class membership values (labels). Each observation is represented by an associated feature vector. Unknown observations form a test set, while their labeled counterparts, together with the class labels, compose a training set. Labeling unknown observations is done by means of a classifier, which is a data classification algorithm implementing a mapping of feature vectors to class labels. An algorithm is a sequence of steps necessary for solving the data classification task at hand. Let us restrict ourselves to two-class problems.
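To make these notions concrete, here is a minimal sketch in Python; scikit-learn, the synthetic two-class data, and the nearest-neighbor classifier are our illustrative assumptions, not part of the original discussion:

```python
# A minimal sketch of the two-class setting described above; the data
# here are synthetic stand-ins, not a real gene expression set.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Training set: feature vectors with known class labels (0 or 1).
X_train = rng.normal(size=(100, 5))
y_train = (X_train[:, 0] > 0).astype(int)

# Test set: previously unseen feature vectors without labels.
X_test = rng.normal(size=(20, 5))

# A classifier implements the mapping from feature vectors to labels.
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)      # learn the mapping from the training set
y_pred = clf.predict(X_test)   # assign labels to the unknown observations
print(y_pred)
```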
Thus, the process of data classification involves one or both of the following steps:
1. Training: a classifier learns the mapping from feature vectors to class labels on the training set (this step may be skipped for classifiers that require no training);
2. Testing: the (trained) classifier assigns labels to the observations of the test set, and its performance is evaluated.
Many classifiers have one or several parameters to be pre-defined before classification starts. Without knowing the optimal values of these parameters, data classification would be akin to a random walk in search of the right solution. By 'optimal values' we mean parameter values that allow a classifier to learn the correct mapping from the features describing each observation to class labels. This learning from the training data is possible because the learner can always check its answer: it compares its output with the correct result as specified by the class labels assigned to the observations of the training set. If there is a mismatch (classification error) between the two, the learner knows there is still work to do.
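This parameter-selection loop can be sketched as follows; the candidate values of k, the hold-out split, and the synthetic data are our own illustrative assumptions:

```python
# A sketch of choosing a classifier parameter (here k, the number of
# neighbors in k-NN) by checking predictions against known labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hold out part of the labeled data to judge each candidate value of k.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                            random_state=1)

for k in (1, 3, 5, 7, 9):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    # A mismatch between predictions and the known labels is a
    # classification error; we keep the k with the fewest of them.
    err = np.mean(clf.predict(X_val) != y_val)
    print(f"k={k}: validation error {err:.3f}")
```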
Typically, the classification error on the training data is not exactly 0%, and sometimes it simply cannot be, due to the finite (limited) size of the training set. On the contrary, a zero error rate may indicate that you over-trained the classifier so that it learned every minute detail, which is often nothing but noise (garbage)1. Such a classifier will be unable to generalize properly: when presented with previously unseen data, its classification performance will be very poor. The smaller the training set, the higher your chances of over-training a classifier, because the different classes are likely to be under-represented. The more sophisticated a classifier is, the higher the chances of over-training it, since sophisticated classifiers can partition the data classes with more complex decision boundaries than simpler classifiers can. So, training can be both a curse and a blessing. Sometimes a classifier does not need training at all2, which, however, does not automatically imply that such a classifier will do its job well in all cases.
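The over-training effect can be observed in a small experiment. In this sketch (our own illustration, with synthetic noisy labels), an unconstrained decision tree drives its training error toward zero yet tests worse than a shallower tree:

```python
# A sketch of over-training: an unconstrained decision tree memorizes
# label noise and generalizes worse than a depth-limited one.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
# Labels depend on one feature, plus noise the learner should not memorize.
y = (X[:, 0] > 0).astype(int)
flip = rng.random(300) < 0.15
y[flip] = 1 - y[flip]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=2)

for depth in (None, 3):   # None = grow until every minute detail is fit
    clf = DecisionTreeClassifier(max_depth=depth,
                                 random_state=0).fit(X_tr, y_tr)
    tr_err = np.mean(clf.predict(X_tr) != y_tr)
    te_err = np.mean(clf.predict(X_te) != y_te)
    print(f"max_depth={depth}: train error {tr_err:.3f}, "
          f"test error {te_err:.3f}")
```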
Regardless of whether classifier training is required, the testing phase still has to be carried out to complete the data classification task. This is done by applying the trained classifier to the test data (if no training was needed, simply omit the word 'trained'). As a result of testing, the test error and other performance characteristics, such as the Area Under the Receiver Operating Characteristic (ROC) Curve, are computed; these can then be compared (by means of statistical tests) with the errors/characteristics attained by other classifiers on the same test set.
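A sketch of the testing phase might look as follows; the Naïve Bayes classifier and the synthetic data are illustrative stand-ins:

```python
# A sketch of the testing phase: apply the (trained) classifier to the
# test set, then compute the test error and the Area Under the ROC Curve.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=3)

clf = GaussianNB().fit(X_tr, y_tr)
test_error = np.mean(clf.predict(X_te) != y_te)
# AUC needs class-membership scores, not hard labels.
scores = clf.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, scores)
print(f"test error {test_error:.3f}, AUC {auc:.3f}")
```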
Given that microarray gene expression data are high dimensional, it is advisable, and even necessary, to reduce the number of features prior to data classification in order to alleviate the effect of classifier over-training. That is, dimensionality reduction should always precede classification when dealing with gene expression data sets.
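One simple way to realize this pipeline, shown here as an assumed sketch rather than a prescription, is a univariate filter that keeps only the top-ranked features before the classifier is trained:

```python
# A sketch of dimensionality reduction preceding classification: a
# univariate filter (f_classif) keeps the top-k features before a
# classifier is trained. Sample/feature counts and k are illustrative.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
# Microarray-like shape: few samples, thousands of features.
X = rng.normal(size=(60, 2000))
y = rng.integers(0, 2, size=60)
X[y == 1, :10] += 1.0   # only the first 10 features are informative

# Chaining selection and classification in one pipeline ensures that,
# under cross-validation, the filter is re-fit on each training fold
# only, avoiding feature-selection bias.
model = make_pipeline(SelectKBest(f_classif, k=10),
                      KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print(model.predict(X[:5]))
```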
There are many different classifiers that can be applied to gene expression data. In the series of chapters that follow, we pay attention to the most common of them: Naïve Bayes, Nearest Neighbor, Decision Tree, and Support Vector Machine. These classifiers were also recently named among the top 10 algorithms in data mining (Wu et al., 2008). In addition, they form the bulk of the base classifiers used in building classifier ensembles.
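For a first impression, the following sketch puts these four classifiers side by side on one synthetic test set; all hyperparameters are scikit-learn defaults, which is an assumption rather than a recommendation:

```python
# A sketch comparing the four classifiers named above on the same
# synthetic test set, using default hyperparameters throughout.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 5))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=5)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "Nearest Neighbor": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
}
for name, clf in classifiers.items():
    err = np.mean(clf.fit(X_tr, y_tr).predict(X_te) != y_te)
    print(f"{name}: test error {err:.3f}")
```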
Finally, it is always good to know how to build a good classifier. We believe that the paper of Braga-Neto (Braga-Neto, 2007) can help you avoid some common pitfalls in classifier design for microarray data.