Word Segmentation in Indo-China Languages for Digital Libraries
Jin-Cheon Na, Tun Thura Thet, Dion Hoe-Lian Goh, Yin-Leng Theng, and Schubert Foo (Nanyang Technological University, Singapore)
Copyright: © 2009
This chapter introduces word segmentation methods for Indo-China languages. It describes six word segmentation methods developed for the Thai, Vietnamese, and Myanmar languages and compares the approaches in terms of their algorithms and the results achieved. The discussion and comparison of these word segmentation methods provide insights into how word segmentation can be achieved and employed in Indo-China languages to support search functionality in digital libraries.
The Thai Language
The Thai script is a member of the Indic family of scripts, descended from Brahmi. In Thai, a “word” is difficult to define, as the language does not mark explicit word boundaries. Like many other Asian languages, Thai does not use white space to delimit words. Each Thai letter is a consonant possessing an inherent vowel sound as well as an inherent tone. Both the inherent vowel and tone can be modified by means of vowel signs and tone marks attached to the base consonant letter. All of the tone marks and some of the vowel signs are rendered in the script as diacritics attached above or below the base consonant. In the Unicode memory representation, these combining signs and marks are encoded after the modified consonant (Unicode Consortium, 2004). A principal cause of difficulty in Thai word segmentation is the lack of a clear definition of a Thai “word” (Wirot, 2002). Traditional methods of Thai word segmentation rely on unclear criteria and procedures and have several limitations; most word segmentation approaches use a dictionary for segmenting running text.
A study conducted by Sornlertlamvanich, Potipiti, and Charoenporn (2000) used automatic corpus-based word extraction. It employed the C4.5 decision tree induction program (Quinlan, 1993) as the learning algorithm for word extraction. The induction algorithm evaluates a series of attributes and iteratively builds a tree; the leaves of the decision tree represent the values of the goal attribute. The method used C4.5 to prune the decision tree in order to reduce the effect of overfitting, recursively traversing each subtree to determine whether replacing it with a leaf or branch would reduce the expected error rate. The attributes of the learning algorithm are mutual information, entropy, frequency, and string length. The method was evaluated on a 1 MB corpus consisting of 75 articles from various fields. Thirty thousand strings were manually tagged and compared with the results produced by the method, which achieved an accuracy of 84.1% on the test dataset.
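The four attributes named above (mutual information, entropy, frequency, and string length) can be computed per candidate string before being fed to a decision-tree learner. The sketch below is a minimal illustration with assumed formulas: left/right boundary entropy and pointwise mutual information between the two halves of the candidate; the paper's exact definitions may differ.

```python
import math
from collections import Counter

def candidate_attributes(corpus, candidate):
    """Illustrative versions of the four decision-tree attributes used in
    Sornlertlamvanich et al. (2000): frequency, string length, boundary
    entropy, and mutual information.  The exact formulas here are
    assumptions for the sketch, not the paper's definitions."""
    n = len(candidate)
    freq = corpus.count(candidate)
    # Boundary entropy: count the characters adjacent to each occurrence.
    # A true word appears in varied contexts, giving high entropy.
    left, right = Counter(), Counter()
    start = corpus.find(candidate)
    while start != -1:
        if start > 0:
            left[corpus[start - 1]] += 1
        end = start + n
        if end < len(corpus):
            right[corpus[end]] += 1
        start = corpus.find(candidate, start + 1)

    def entropy(counter):
        total = sum(counter.values())
        if total == 0:
            return 0.0
        return -sum((c / total) * math.log2(c / total) for c in counter.values())

    # Pointwise mutual information between the two halves of the candidate:
    # high PMI means the halves co-occur far more often than chance.
    mid = n // 2
    a, b = candidate[:mid], candidate[mid:]
    pa = corpus.count(a) / len(corpus)
    pb = corpus.count(b) / len(corpus)
    pab = freq / len(corpus)
    mi = math.log2(pab / (pa * pb)) if pa and pb and pab else 0.0
    return {"frequency": freq, "length": n,
            "left_entropy": entropy(left), "right_entropy": entropy(right),
            "mutual_information": mi}
```

In the original study, vectors of such attributes for manually tagged candidate strings were the training input to C4.5, which then decided whether an unseen string is a word.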
Another study, conducted by Wirot (2002), used a two-part approach: a syllable-based trigram model for syllable segmentation, followed by maximum collocation for syllable merging. Syllable segmentation was done on the basis of trigram statistics, whereas syllable merging was done on the basis of collocation between syllables. Many word segmentation ambiguities were resolved during the syllable segmentation process. The training corpus of 553,372 syllables, drawn from newspaper text, was manually segmented into syllables. Witten-Bell discounting (Chen & Goodman, 1996) was used for smoothing, and the Viterbi algorithm was used to determine the best syllable segmentation. When tested on another corpus of 30,498 syllables, the method segmented 99.8% of the syllables correctly. After syllable segmentation, the strategy was to use collocation strength between syllables to merge them into words. The “longest matching” approach relies heavily on words listed in the dictionary and always prefers compound words over simple words; the maximum collocation approach does not exhibit such a preference.
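The Viterbi-style search above can be sketched as a dynamic program over character positions that keeps the best-scoring segmentation ending at each position. For brevity this sketch scores syllables with independent unigram log-probabilities; Wirot (2002) used a trigram model with Witten-Bell smoothing, so the scoring function here is a simplifying assumption, not the paper's model.

```python
import math

def viterbi_segment(text, syllable_logprob, max_len=4):
    """Pick the syllable sequence with the highest total log-probability.
    syllable_logprob maps candidate syllables to log-probabilities; any
    substring not in it is not a valid syllable.  This is an illustrative
    unigram simplification of the trigram model in Wirot (2002)."""
    n = len(text)
    best = [(-math.inf, None)] * (n + 1)   # (score, backpointer) per position
    best[0] = (0.0, None)
    for end in range(1, n + 1):
        for start in range(max(0, end - max_len), end):
            piece = text[start:end]
            if piece in syllable_logprob and best[start][0] > -math.inf:
                score = best[start][0] + syllable_logprob[piece]
                if score > best[end][0]:
                    best[end] = (score, start)
    if best[n][0] == -math.inf:
        return None                         # text cannot be segmented
    # Recover the best segmentation by following backpointers.
    out, pos = [], n
    while pos > 0:
        start = best[pos][1]
        out.append(text[start:pos])
        pos = start
    return out[::-1]
```

Replacing the unigram score with a smoothed trigram score only changes the state kept per position (the last two syllables instead of none); the overall dynamic-programming shape is the same.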
Key Terms in this Chapter
Word Segmentation: A word is a linguistic unit made up of one or more morphemes. Word segmentation is the process of determining the word boundaries in a sentence or a document by computer algorithms.
Recall: Recall is the ratio of the number of correctly segmented words to the number of all the words in the documents.
Maximum Collocation: Syllable collocation refers to the co-occurrence of syllables observed in the training corpus. If a word contains two or more syllables, those syllables will co-occur, so their probability of co-occurrence will be much greater than chance.
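The "much greater than by chance" comparison can be made concrete by dividing the observed adjacent co-occurrence probability of a syllable pair by the probability expected under independence. This is a minimal sketch of that idea; the exact collocation measure used in the study may differ.

```python
from collections import Counter

def collocation_strength(syllable_seqs, a, b):
    """Ratio of the observed probability that syllables a and b occur
    adjacently to the probability expected by chance (independence).
    A ratio well above 1 suggests the pair belongs to one word.
    Illustrative measure only; an assumption, not the chapter's formula."""
    unigrams, bigrams = Counter(), Counter()
    for seq in syllable_seqs:
        unigrams.update(seq)
        bigrams.update(zip(seq, seq[1:]))
    n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
    p_ab = bigrams[(a, b)] / n_bi            # observed co-occurrence
    p_chance = (unigrams[a] / n_uni) * (unigrams[b] / n_uni)
    return p_ab / p_chance if p_chance else 0.0
```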
F-Measure: F-measure, also known as balanced F-score, is the weighted harmonic mean of precision and recall.
Brahmi Script: Brahmi is the oldest member of the Brahmic family of scripts, from which most of the scripts of South Asia, Southeast Asia, and Tibet are descended.
Simple Word: A simple word may have one or more syllables; in a multisyllable simple word, the meaning of the word is not derived from the meaning of any individual syllable.
Syllable Segmentation: A syllable is a unit of organization for a sequence of speech sounds. Syllable segmentation is the process of determining the syllable boundaries in a sentence or a document by computer algorithms.
Compound Word: Compound words are words that are composed of two or more simple words. The meaning of a compound word may not be the sum of the meanings of its parts, though it can be related to the meanings of its parts.
Precision: Precision is the ratio of the number of correctly segmented words to the number of all the segmented words.
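The precision, recall, and F-measure definitions above can be computed for a segmentation by comparing it against a gold-standard segmentation. In this sketch, words are matched by their character spans, a common convention that is an assumption here rather than something the chapter specifies.

```python
def segmentation_metrics(gold, predicted):
    """Precision, recall, and F-measure for word segmentation, per the
    definitions above.  A predicted word counts as correct only if its
    character span exactly matches a gold word's span (an assumed,
    but common, matching convention)."""
    def spans(words):
        out, pos = set(), 0
        for w in words:
            out.add((pos, pos + len(w)))
            pos += len(w)
        return out

    g, p = spans(gold), spans(predicted)
    correct = len(g & p)                     # correctly segmented words
    precision = correct / len(p) if p else 0.0
    recall = correct / len(g) if g else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```

Because both segmentations cover the same text, precision and recall differ only when the two segmentations produce different numbers of words.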
ISO: The International Organization for Standardization (ISO) is an international standard-setting body composed of representatives from various national standards bodies.
Unicode: The Unicode standard is an industry standard that allows computer systems to consistently represent and manipulate text written in any language. The Unicode Consortium, a nonprofit organization, coordinates Unicode’s development.