Unsupervised Segmentation of Bibliographic Elements with Latent Permutations

Tomonari Masada (Nagasaki University, Japan)
DOI: 10.4018/joci.2011040104

Abstract

This paper introduces a new approach for large-scale unsupervised segmentation of bibliographic elements. The problem is to segment a citation, given as an untagged word token sequence, into subsequences so that each subsequence corresponds to a different bibliographic element (e.g., authors, paper title, journal name, publication year). Each bibliographic element should be referred to by contiguous word tokens; this requirement is called the contiguity constraint. The author meets this constraint by using generalized Mallows models, which were effectively applied to document structure learning by Chen, Branavan, Barzilay, and Karger (2009). However, that method works for this problem only after modification, so the author proposes strategies to make it applicable.

Introduction

Multi-topic modeling, inaugurated by the proposal of latent Dirichlet allocation (LDA) (Blei, Ng, & Jordan, 2003), has provided successful solutions to many applications. In this paper, we use multi-topic modeling to cluster word tokens so that the word tokens assigned to the same cluster (i.e., to the same topic) refer to the same real-world category.

The application this paper considers is segmentation of bibliographic elements. The input data is a set of citations. We assume that each citation is represented as a sequence of untagged word tokens. Our problem is to assign each word token to a topic so that the word tokens assigned to the same topic refer to the same bibliographic element, e.g., authors, paper title, journal name, or publication year. We solve this problem in an unsupervised manner. We do not assume any knowledge about transition patterns among bibliographic elements. We only assume that the number of different bibliographic elements is known. The number of topics can be set larger than the number of different bibliographic elements, because we can identify multiple topics with the same bibliographic element when interpreting the topic assignments obtained with multi-topic modeling. Figure 1 presents an example of segmentation obtained with our method.
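To make the goal concrete, the following sketch shows how per-token topic assignments induce a segmentation: contiguous tokens sharing a topic form one segment, and each segment is then interpreted as a bibliographic element. The citation and topic labels here are hypothetical, not taken from the paper's data.

```python
from itertools import groupby

def segments(tokens, topics):
    """Group contiguous word tokens that share a topic into segments.

    tokens: list of word tokens of one citation
    topics: one topic label per token (same length as tokens)
    Returns a list of (topic, [tokens]) pairs in citation order.
    """
    pairs = zip(topics, tokens)
    return [(t, [w for _, w in grp])
            for t, grp in groupby(pairs, key=lambda p: p[0])]

# Hypothetical citation: topic 1 = authors, 2 = title, 3 = venue, 4 = year.
tokens = ["J.", "Smith", "Topic", "Models", "JMLR", "2009"]
topics = [1, 1, 2, 2, 3, 4]
print(segments(tokens, topics))
# → [(1, ['J.', 'Smith']), (2, ['Topic', 'Models']), (3, ['JMLR']), (4, ['2009'])]
```

Note that this grouping only yields a valid segmentation when equal topics are already contiguous, which is exactly what the contiguity constraint guarantees.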

Figure 1.

An example of the segmentation our method provides for the DBLP dataset (see Table 1). Each line corresponds to a different citation. Long lines are cut off at the right side to present more citations in a larger font. Each subsequence separated by a delimiter symbol corresponds to a set of contiguous word tokens all assigned to the same topic. In our experiment, the number of topics is set to one more than the number of bibliographic elements. In this example, the number of topics is set to five, because we have four different bibliographic elements, i.e., author names, paper title, conference name (or journal name), and publication year.

Our target data is a set of citations obtained, for example, by OCR processing of the reference sections of printed academic papers. While correction of OCR errors is important and may be realized by extending our model, as Takasu (2003) did for a hidden Markov model (HMM), we regard it as future work. This paper concentrates on segmentation of bibliographic elements based only on word frequencies, assuming that OCR errors are already corrected. Further, publication lists presented by researchers on the Web can also be regarded as our target data, because they are often presented not as segmented data, e.g., in BibTeX format, but as sequences of untagged word tokens.

In any solution to our problem, each bibliographic element should be referred to by contiguous word tokens. In other words, the word tokens referring to the same bibliographic element should not be separated by word tokens referring to other elements. We call this the contiguity constraint. Existing HMM-based methods often meet the contiguity constraint by prescribing a set of transition patterns among hidden states, each of which corresponds to a bibliographic element (Connan & Omlin, 2000; Hetzner, 2008; Takasu, 2003). In contrast, we provide a more flexible solution by inferring a permutation of latent topics in multi-topic modeling, where latent topics are the counterpart of hidden states in an HMM. After inferring a topic permutation for each citation, we sort the topic draws according to the permutation, where the number of topic draws equals the number of word tokens in the citation. In this manner, we obtain an ordered sequence of topic draws satisfying the contiguity constraint (see Figure 2). By interpreting each topic as one of the prepared bibliographic elements, we obtain a segmentation of bibliographic elements.
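The sorting step above can be sketched as follows: given a citation's multiset of topic draws and an inferred permutation over topics, reordering the draws by each topic's rank in the permutation makes equal topics contiguous. The permutation and draw values below are hypothetical illustrations, not the paper's inference output.

```python
def order_topic_draws(topic_draws, permutation):
    """Sort a citation's topic draws by the inferred topic permutation.

    topic_draws: one topic draw per word token (unordered w.r.t. topics)
    permutation: inferred ordering of all topics for this citation
    After sorting, equal topics are contiguous, so the sequence
    satisfies the contiguity constraint.
    """
    rank = {topic: i for i, topic in enumerate(permutation)}
    return sorted(topic_draws, key=lambda t: rank[t])

# Hypothetical citation with 7 tokens and 5 topics; the inferred
# permutation says topic 2 comes first, then 1, then 5, then 3, then 4.
draws = [5, 2, 2, 1, 5, 2, 1]
perm = [2, 1, 5, 3, 4]
print(order_topic_draws(draws, perm))
# → [2, 2, 2, 1, 1, 5, 5]
```

A stable sort suffices here because only the topic order matters; the multiset of draws, and hence the number of tokens per topic, is preserved.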

Figure 2.

Procedure for obtaining a sequence of topic assignments satisfying the contiguity constraint. We infer a topic permutation and sort the topic draws according to the inferred permutation. In the resulting topic sequence, we interpret, for example, topic 2 as representing author names, topic 5 as the publication year, etc., and thereby obtain a segmentation.
