1. Introduction
With the rapid development of Internet information technology, data has grown explosively. Among these massive data, text documents are a major form. Effectively mining the knowledge behind text documents has therefore become a popular problem and attracts increasing attention. For labeled text documents, automatic text categorization is a powerful processing technology. Traditional single-label text classification assumes that each text document belongs to exactly one label, and some state-of-the-art text categorization technologies, such as SVM, can achieve an accuracy of over 90% under this assumption. However, text documents are usually associated with multiple topics. For example, a document focusing on transgenic technology may be mainly related to the technology topic but also relevant to the health topic. The multi-topic nature of text documents makes traditional single-label text classification technologies ineffective when applied to documents with multiple labels (Yamamoto and Satoh, 2014). At present, multi-label text classification remains a challenging task, and there is considerable room for improving its performance, especially for text documents with unbalanced labels (Hmeidi et al., 2016).
To our knowledge, multi-label classification algorithms are usually extended from single-label algorithms. According to the way they are extended, they can be divided into two categories (Tsoumakas et al., 2009): algorithm adaptation (AA) and problem transformation (PT). AA algorithms modify existing single-label classification algorithms so that they can be applied directly to multi-label tasks. Representative AA algorithms include the multi-label decision tree (MLDT) (Zhang and Zhou, 2007; Piyatumrong et al., 2012), the multi-label k-nearest neighbor (MLkNN) (Zhang et al., 2007; Brinker and Hüllermeier, 2007; Savvas and Sofianidou, 2015), an improved AdaBoost algorithm (Schapire and Singer, 2000), and improved algorithms based on support vector machines (SVM) and neural networks (Elisseeff and Weston, 2001; Vinod and M.P., 2015; Zhang et al., 2006; Zhang, 2009). By contrast, PT algorithms transform the multi-label classification task into several single-label classification tasks and then combine the single-label results to determine the final labels. For example, binary relevance (BR) (Shoji et al., 2016; Tsoumakas and Taniar, 2007) learns a binary classifier for each label and then determines the labels associated with each unseen instance from the outputs of all the binary classifiers. Although BR is straightforward and easy to implement, it does not consider the correlations between labels. The label powerset (LP) algorithm (Boutell et al., 2004) incorporates label correlations to improve classification performance: it treats each distinct combination of labels appearing in the training set as a new label. However, when the numbers of labels and training examples are large, the number of such new labels may grow explosively. The random k-labelsets (RAkEL) algorithm (Tsoumakas et al., 2011) addresses this excessive growth by dividing the initial set of labels into a number of small subsets, thereby reducing the number of new labels. Other well-known PT algorithms include classifier chains (CC) (Read et al., 2011), multi-label learning by exploiting label dependency (LEAD) (Zhang and Zhang, 2010), and the conditional dependency network (CDN) (Guo and Gu, 2011).
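To make the contrast between the two main problem-transformation strategies concrete, the following minimal sketch shows how BR and LP reshape the label space of a toy document collection; the documents, label names, and helper functions are illustrative assumptions, not part of any cited algorithm's implementation, and the actual classifier training is omitted.

```python
# Illustrative sketch of the two problem transformations described above.
# The label sets and helper names are hypothetical; only the label-space
# reshaping is shown, not the downstream single-label classifiers.

def binary_relevance_targets(label_sets, all_labels):
    """BR: build one independent binary target vector per label."""
    return {lab: [1 if lab in s else 0 for s in label_sets]
            for lab in all_labels}

def label_powerset_targets(label_sets):
    """LP: map each distinct label combination to a new single class."""
    classes = {}   # frozenset of labels -> new class id
    targets = []
    for s in label_sets:
        key = frozenset(s)
        if key not in classes:
            classes[key] = len(classes)
        targets.append(classes[key])
    return targets, classes

# Toy training labels: e.g. a transgenic-technology article tagged with
# both the technology and health topics, as in the example above.
docs_labels = [{"tech", "health"}, {"tech"}, {"health"}, {"tech", "health"}]
labels = ["tech", "health"]

br = binary_relevance_targets(docs_labels, labels)
# br["tech"]   -> [1, 1, 0, 1]   one binary task per label
# br["health"] -> [1, 0, 1, 1]

lp, classes = label_powerset_targets(docs_labels)
# lp -> [0, 1, 2, 0]   three distinct combinations become three new classes
```

The sketch also makes LP's weakness visible: with q labels there can be up to 2^q distinct combinations, which is exactly the explosive growth that RAkEL mitigates by running LP on small label subsets.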