Document Indexing Techniques for Text Mining

José Ignacio Serrano
Copyright: © 2009 | Pages: 6
DOI: 10.4018/978-1-60566-010-3.ch111

Abstract

Owing to the growing amount of digital information stored in natural language, systems that automatically process text are of crucial importance and extremely useful. There is currently a considerable body of research work (Sebastiani, 2002; Crammer et al., 2003) applying a large variety of machine learning algorithms and other Knowledge Discovery in Databases (KDD) methods to Text Categorization (automatic labeling of texts according to category), Information Retrieval (retrieval of texts similar to a given cue), Information Extraction (identification of pieces of text that carry certain meanings), and Question Answering (automatic answering of user questions about a certain topic). The texts or documents used can be stored either in ad hoc databases or on the World Wide Web. Data mining in texts, better known as Text Mining, is a case of KDD with some particular issues: on the one hand, the features are obtained from the words contained in the texts or are the words themselves, so text mining systems face a huge number of attributes. On the other hand, the features are highly correlated to form meanings, so it is necessary to take the relationships among words into account, which implies considering syntax and semantics as human beings do. KDD techniques require input texts to be represented as a set of attributes in order to deal with them. This text-to-representation process is called text or document indexing, and the attributes are called indexes. Accordingly, indexing is a crucial process in text mining, because the indexed representations must capture, with only a set of indexes, most of the information expressed in natural language in the texts, with minimal loss of semantics, in order to perform as well as possible.
Chapter Preview

Background

The traditional “bag-of-words” representation (Sebastiani, 2002) has shown that a statistical distribution of word frequencies is, in many text classification problems, sufficient to achieve high performance. However, in situations where the available training data is limited in size or quality, as is frequently the case in real-life applications, mining performance decreases. Moreover, this traditional representation does not take into account the relationships among the words in the texts, so if the data mining task requires abstract information, the traditional representation cannot provide it. This is the case for the informal textual information in web pages and emails, which demands a higher level of abstraction and semantic depth to be handled successfully.
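For illustration, the following minimal Python sketch builds a bag-of-words index over a toy two-document corpus; the corpus, the tokenization, and the raw-frequency weighting are illustrative assumptions rather than a prescription from the chapter.

# Minimal bag-of-words indexing sketch (illustrative corpus and tokenization).
from collections import Counter
import re

documents = [
    "the cat sat on the mat",
    "the dog chased the cat",
]

# The vocabulary is the set of indexes shared by all documents.
vocabulary = sorted({w for doc in documents for w in re.findall(r"\w+", doc.lower())})

def bag_of_words(text):
    """Represent a text as a vector of word frequencies over the vocabulary."""
    counts = Counter(re.findall(r"\w+", text.lower()))
    return [counts[w] for w in vocabulary]

vectors = [bag_of_words(doc) for doc in documents]
print(vocabulary)
print(vectors)  # each document becomes a frequency vector; word order and word relationships are lost

Note how the resulting vectors record only how often each word occurs, which is precisely the limitation discussed above: any relationship between words is discarded.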

In the late nineties, word hyperspaces appeared on the scene, and they are still being updated and improved today. These kinds of systems build a representation, a matrix, of the linguistic knowledge contained in a given text collection. They are called word hyperspaces because words are represented in a space with a high number of dimensions. The representation, or hyperspace, takes into account the relationships between words and the syntactic and semantic contexts in which they occur, and stores this information in the knowledge matrix. This is the main difference from the common “bag-of-words” representation. Once the hyperspace has been built, word hyperspace systems represent a text as a vector whose size equals the size of the hyperspace, by using the information hidden in it and by performing operations on the rows and columns of the matrix corresponding to the words in the text.

LSA (Latent Semantic Analysis) (Landauer, Foltz & Laham, 1998; Lemaire & Denhière, 2003) was the first such system to appear. Given a text collection, LSA constructs a term-by-document matrix in which the component Aij represents the relative occurrence level of term i in document j. A dimension reduction process, namely SVD (Singular Value Decomposition) (Landauer, Foltz & Laham, 1998), is then applied to the matrix. This dimension-reduced matrix is the final linguistic knowledge representation, and each word is represented by its corresponding row of values (vector). After the dimension reduction, the matrix values capture the latent semantics of all the other words contained in each document. A text is then represented as a weighted average of the vectors of the words it contains, and the similarity between two texts is given by the cosine distance between the vectors that represent them.
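A minimal sketch of this pipeline in Python is given below, assuming the term-by-document matrix has already been built; the tiny corpus, the number of retained dimensions k, and the use of a simple (unweighted) average for text vectors are illustrative assumptions.

# LSA sketch: SVD-based dimension reduction of a term-by-document matrix,
# word vectors taken from the reduced space, texts as averages of word vectors,
# and cosine similarity between text vectors. Corpus and k are illustrative.
import numpy as np

terms = ["cat", "dog", "mat", "chased", "sat"]
# Rows = terms, columns = documents; entries = occurrence levels.
A = np.array([
    [1, 1, 0],   # cat
    [0, 1, 1],   # dog
    [1, 0, 0],   # mat
    [0, 1, 1],   # chased
    [1, 0, 0],   # sat
], dtype=float)

# Singular Value Decomposition, truncated to k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
word_vectors = U[:, :k] * s[:k]   # each row is a term's k-dimensional vector

def text_vector(words):
    """Represent a text as the average of the vectors of the words it contains."""
    idx = [terms.index(w) for w in words if w in terms]
    return word_vectors[idx].mean(axis=0)

def cosine(u, v):
    """Cosine similarity between two text vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

t1 = text_vector(["cat", "sat", "mat"])
t2 = text_vector(["dog", "chased", "cat"])
print(cosine(t1, t2))   # similarity of the two texts in the latent space

Here the truncation to k dimensions plays the role of the SVD-based reduction described above: words that occur in similar documents end up with similar vectors even when they never co-occur directly.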
