Hot Topic Sensing, Text Analysis, and Summarization

Guillaume Bouchard, Stephane Clinchant, William Darling
DOI: 10.4018/978-1-4666-6236-0.ch012

Abstract

Social monitoring platforms are software services that enable the rapid analysis of massive amounts of information (mostly text messages) expressed in social networks. Their use today is concentrated in the marketing industry, where companies need to reach their customers effectively. In the e-government domain, such tools have mostly been used by political parties to support election campaigns; few have been used to build a comprehensive understanding of citizens' needs in everyday life. In this chapter, the authors present a set of data analytics tools that can help public authorities extract and summarize textual content from Internet forums and social media feeds. These tools have many potential applications, such as visualizing the main political discussions in a city, detecting disagreement with local politics early, and connecting city services to social media.

Background

Computational parsing and understanding of natural language by machines is the goal of Natural Language Processing (NLP). A preliminary step toward this goal is choosing a computational representation of the data. Unfortunately, natural language lives in a very high-dimensional and highly structured space. Working directly in this space is computationally expensive, so dimensionality reduction that preserves relevant information is an important issue in NLP. One of the most common simplifications is to ignore word order. This simplification, known as the bag-of-words (BOW) model (Salton & McGill, 1983), represents documents as attribute-value tables where each word in a fixed vocabulary is an attribute and its value is the number of times that word appears in the given document. This representation is also known as the Vector Space Model (VSM) and is by far the most popular way to represent the content of a document in NLP.
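As a minimal illustration of the bag-of-words idea (a toy sketch, not code from the chapter; the vocabulary and document below are invented for the example), the following Python snippet counts word occurrences over a small fixed vocabulary:

```python
from collections import Counter

def bag_of_words(text, vocabulary):
    """Return the BOW vector of `text`: one count per vocabulary word."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

# Hypothetical vocabulary and document, chosen only for illustration.
vocabulary = ["city", "council", "budget", "transport", "school"]
document = "The city council approved the city transport budget"
print(bag_of_words(document, vocabulary))  # -> [2, 1, 1, 1, 0]
```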

The motivation underlying the VSM representation is that a vector of word counts is a lower-dimensional version of a document that still evokes, at least indirectly, its theme. It also lets us quantify the similarity between two documents, for example by measuring the angle between their vector representations with cosine similarity. However, while intuition suggests that word dimensions with high values describe the theme of a document and differentiate it from others, document vectors are often overwhelmed by low-content "stop" words such as "the", "of", "and", "to", and "a" that appear with high frequency but add no information to the thematic make-up of a document. This makes the representation less informative and pushes differentiating words into the background, rendering comparison methods such as cosine similarity much less effective.
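To make the angle comparison concrete, here is a small sketch of cosine similarity between two count vectors (the vectors are toy examples, invented for illustration):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

doc1 = [2, 1, 1, 1, 0]  # BOW vector from the previous sketch
doc2 = [1, 0, 2, 0, 1]  # another toy document
print(round(cosine_similarity(doc1, doc2), 3))  # -> 0.617
```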

One solution to this problem is to re-weight the term frequencies to put more emphasis on differentiating, and therefore content-heavy, words. The most popular of these approaches is known as tf-idf weighting. Here, instead of storing the raw frequency of each word, each entry of the document vector holds a value proportional both to the word's frequency in the document (tf) and to its inverse document frequency (idf). This has the effect of shrinking the weights of words that appear in many documents and are therefore not very differentiating. Beyond tf-idf, a number of other term re-weighting schemes have been proposed in the literature.
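Many tf-idf variants exist; one common formulation weights each count by log(N / df), where N is the number of documents and df is the number of documents containing the word. The sketch below uses that variant on invented documents:

```python
import math

def tf_idf_vectors(docs):
    """Weight each term count by log(N / df); words occurring everywhere get weight 0."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({w for toks in tokenized for w in toks})
    df = {w: sum(1 for toks in tokenized if w in toks) for w in vocab}
    return vocab, [
        [toks.count(w) * math.log(n / df[w]) for w in vocab] for toks in tokenized
    ]

docs = ["the city budget", "the city transport plan", "the school budget"]
vocab, vectors = tf_idf_vectors(docs)
# "the" appears in all three documents, so log(3/3) = 0 removes it entirely.
```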

While the VSM succeeds in many respects at dimensionality reduction, further reduction is possible and often of great value. We naturally imagine documents to be about one or more discrete topics, and representing a document in "topic" space can be the most useful choice depending on our ultimate goal. For example, document classification often seeks to group texts that address similar topics; if this is the task, then representing the corpus directly in topic space is clearly appropriate. A highly influential work along these lines is Latent Semantic Indexing (LSI), which computes the Singular Value Decomposition (SVD) of a term-document matrix to uncover words that commonly co-occur. Following the decomposition, word co-occurrence patterns are projected along the leading singular vectors (Deerwester, Dumais, Landauer, Furnas, & Harshman, 1990). Documents are then represented in a low-dimensional semantic space whose components can be seen as "concepts". LSI is in many ways the "ideological" precursor to modern probabilistic topic models.
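As a rough sketch of the LSI mechanics (the term-document counts are invented, not data from the chapter), a truncated SVD of a small matrix places documents in a low-dimensional concept space:

```python
import numpy as np

# Toy term-document matrix: rows are terms, columns are documents. The counts
# are invented so that docs 0/2 share "civic" terms and docs 1/3 share "school" terms.
X = np.array([
    [2.0, 0.0, 1.0, 0.0],  # "city"
    [1.0, 0.0, 2.0, 0.0],  # "council"
    [0.0, 3.0, 0.0, 1.0],  # "school"
    [0.0, 1.0, 0.0, 2.0],  # "teacher"
])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2  # keep only the two largest singular values ("concepts")
doc_concepts = np.diag(s[:k]) @ Vt[:k, :]  # each column: a document in concept space

# Documents 0 and 2 land together along one concept, documents 1 and 3 along the other.
print(doc_concepts.round(2))
```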
