Introduction
With the rapid growth of the Internet, the demand for information technology that can organize text is increasing, and building large corpora has become far easier than in earlier, isolated efforts. To handle news articles effectively, text documents must be organized. Automatic text categorization (TC) is the task of learning to assign a class label to a test document. Usually, a machine learning-based classifier is employed to predict the class label: it learns rules from the labeled training data and applies these rules to label test documents. Before a machine learning algorithm is applied, each document is first represented as a vector. In general, the vector is derived from a set of selected words, the so-called bag-of-words (BOW; Uma, Sankar, & Aghila, 2008; Yen, Lee, Wu, Ying, & Tseng, 2011; Galar, Fernandez, Barrenechea, Bustince, & Herrera, 2010; Katakis, Tsoumakas, & Vlahavas, 2010; Han & Karypis, 2000). However, the biggest challenges of this approach are out-of-vocabulary (unknown) words and the curse of high dimensionality. To address these problems, a number of dimension reduction-based approaches (Sebastiani, 2002; Bekkerman, El-Yaniv, Tishby, & Winter, 2003; Pereira, Tishby, & Lee, 1993) have been proposed over the past years. Examples include latent semantic indexing (Sebastiani, 2002; Baker & McCallum, 1998) and information-theoretic clustering (Bekkerman, El-Yaniv, Winter, & Tishby, 2001; Pereira, Tishby, & Lee, 1993). Dhillon et al. (Dhillon, Mallela, & Kumar, 2003) also report better results with a term-cluster representation.
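The BOW representation described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the vocabulary and example document are hypothetical, and it also makes the out-of-vocabulary problem visible, since any term not in the selected word set simply vanishes from the vector.

```python
from collections import Counter

def bag_of_words(doc, vocabulary):
    """Map a document to a term-count vector over a fixed vocabulary.

    Terms outside the vocabulary are dropped, which is the
    unknown-word (out-of-vocabulary) limitation noted above.
    """
    counts = Counter(doc.lower().split())
    return [counts[term] for term in vocabulary]

# Hypothetical set of selected words (in practice, thousands of terms,
# hence the curse of high dimensionality).
vocab = ["market", "stocks", "goal", "match"]
vec = bag_of_words("Stocks rose as the market rallied, stocks up", vocab)
print(vec)  # → [1, 2, 0, 0]
```

Note that "rose", "rallied", and "up" contribute nothing to the vector; real systems mitigate this with larger vocabularies or the dimension-reduction techniques cited above.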
Text categorization is a rich and broad research area; it provides the fundamental step for text mining (Janev, Dudukovic, & Vraneš, 2009; Krogstie, Veres, & Sindre, 2007; Lee, Wu, & Yang, 2009; Nour & Mouakket, 2011; Wu & Chang, Efficient Text Chunking using Linear Kernel with Mask Method, 2007; Wu, Lee, & Yang, Robust and efficient multiclass SVM models for phrase pattern recognition, 2008; Yang, Huang, Tsai, Chung, & Wu, 2009). Several well-known machine learning methods have been widely and successfully applied in this field in recent years, for example, support vector machines (SVM) (Joachims, 1997), linear classifiers with Widrow-Hoff weighting (Lewis, Schapire, Callan, & Papka, 1996), centroid-based learners (Han & Karypis, 2000), memory-based learning (k-nearest neighbors, kNN) (Uma, Sankar, & Aghila, 2008; Yang Y., 1999), the generalized instance set (GIS-W) method (Lam & Ho, 1998), decision trees (Apte, Damerau, & Weiss, 1994; Sebastiani, 2002), Naïve Bayes (Apte, Damerau, & Weiss, 1994; Bekkerman, El-Yaniv, Tishby, & Winter, 2003), and neural networks (Apte, Damerau, & Weiss, 1994; Uma, Sankar, & Aghila, 2008; Sebastiani, 2002; Yang Y., 1999).
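Of the classifier families listed above, the centroid-based learner (Han & Karypis, 2000) is the simplest to sketch: average the BOW vectors of each class into a centroid, then label a test document with the class whose centroid is most similar. The sketch below assumes cosine similarity and tiny hand-made count vectors; both are illustrative choices, not details taken from the cited work.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length count vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def classify(doc_vec, centroids):
    """Assign the label whose class centroid is most similar to doc_vec."""
    return max(centroids, key=lambda label: cosine(doc_vec, centroids[label]))

# Hypothetical training data: BOW vectors over a 4-term vocabulary.
train = {
    "finance": [[2, 1, 0, 0], [1, 2, 0, 0]],
    "sports":  [[0, 0, 2, 1], [0, 0, 1, 2]],
}
cents = {label: centroid(vecs) for label, vecs in train.items()}
print(classify([1, 1, 0, 0], cents))  # → finance
```

kNN differs only in comparing the test vector against every training vector instead of one centroid per class, trading accuracy on irregular class shapes against the centroid method's speed.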