A Quantitative Analysis of the English Lexicon in Wiktionaries and WordNet

Andrew Krizhanovsky
Copyright: © 2012 | Pages: 10
DOI: 10.4018/jiit.2012100102

Abstract

This paper presents a quantitative analysis of the English lexicon. Three electronic dictionaries are examined: the English Wiktionary, WordNet, and the Russian Wiktionary. The number of English words and meanings (senses) in these dictionaries is calculated. The distribution of words across parts of speech, the numbers of monosemous and polysemous words, and the distribution of words by number of meanings are calculated and compared across the dictionaries. The analysis shows that the average polysemy and the number and distribution of word senses follow similar patterns in both expert-built and collaboratively built resources, with relatively minor differences.
Article Preview

1. Introduction

The richness of a language is hidden in its lexicon, in multiple meanings and shades of meaning that change subtly over time. This is one of the reasons for the existence of the kind of dictionary called a thesaurus, a word with Latin roots signifying a “treasure, hoard.” With the appearance of large electronic dictionaries (containing tens and hundreds of thousands of entries), it has become possible to estimate these treasures numerically. The goal of this work is to estimate numerically some properties of dictionaries, to uncover some language regularities, and to compare the dictionaries themselves.

An analysis and comparison of lexical resources provides (1) an indication of which kind of resource is more suitable for dictionary users and software developers, and (2) an indication of gaps that may be present in the source material and in the dictionary itself. This information should help authors improve their dictionaries.

All investigations are performed on the basis of three electronic dictionaries: the English Wiktionary, WordNet, and the Russian Wiktionary. WordNet is a dictionary and thesaurus of the English language in machine-readable form. It is based on psycholinguistic theories of word meaning. WordNet data have been used to solve many linguistic problems, e.g., word sense disambiguation (Montoyo, Palomar, & Rigau, 2001; Resnik & Yarowsky, 2000; Yarowsky, 1995), text coherence analysis (Harabagiu & Moldovan, 1995; Teich & Fankhauser, 2004), and knowledge base construction.
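
The counts at the heart of this paper can be reproduced directly from the WordNet database. Below is a minimal sketch, assuming NLTK's WordNet interface as a stand-in for the WordNet distribution analysed in the paper, that counts distinct words (lemma names) and meanings (synsets) for each part of speech.

```python
# A minimal sketch (assumption: NLTK's WordNet interface, not the tooling used
# in the paper) that counts distinct words and meanings per part of speech.
from nltk.corpus import wordnet as wn  # requires a prior nltk.download('wordnet')

PARTS_OF_SPEECH = {'noun': wn.NOUN, 'verb': wn.VERB,
                   'adjective': wn.ADJ, 'adverb': wn.ADV}

for name, pos in PARTS_OF_SPEECH.items():
    words = set(wn.all_lemma_names(pos=pos))             # distinct lemma strings
    meanings = sum(1 for _ in wn.all_synsets(pos=pos))   # synsets, i.e., senses
    print(f'{name:9s}  words: {len(words):6d}  meanings: {meanings:6d}')
```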

Wiktionary is a multilingual and multifunctional dictionary and thesaurus. It contains not only word definitions, semantically related words (synonyms, hypernyms, etc.), and translations, but also pronunciations (phonetic transcriptions, audio files), hyphenations, etymologies, quotations, parallel texts (quotations with translations), and figures illustrating the meanings of words.

Wiktionary is popular because it is freely available and contains a huge database of words with translations into many languages. Its salient properties are its multilinguality, its size, and its speed of evolution. It is difficult to compare other dictionaries with Wiktionary, since the data quickly become outdated; e.g., PanDictionary was compared with Wiktionary data obtained in 2008, when Wiktionary had 403,413 translations (Mausam et al., 2010). Two years later, in 2010, the English Wiktionary contained twice as many translations (964,019) (Wiktionary, 2011c). Thus, Wiktionary is permanently growing both in number of entries and in the range of languages covered; the English Wiktionary now contains entries in about 800 different languages. Meyer and Gurevych (in press) investigated three Wiktionaries: English, German, and Russian. Wiktionary data are used, for example:

  • In machine translation between Dutch and Afrikaans (Otte & Tyers, 2011);

  • In the text parsing system NULEX, where some Wiktionary data (verb tense) were integrated with WordNet and VerbNet (McFate & Forbus, 2011);

  • In speech recognition and speech synthesis, as a basis for rapid creation of pronunciation dictionaries (He, 2009);

  • In ontology matching (Lin & Krizhanovsky, 2011).
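
As an illustration of how such entry counts can be obtained, the sketch below streams a Wiktionary XML dump and counts English entries and their part-of-speech sections. It rests on assumptions that should be checked against the actual dump: the standard MediaWiki export layout and the English Wiktionary's "==English==" and "===Noun===" heading conventions. It is not the extraction pipeline used in the paper.

```python
# A rough sketch (assumptions: an uncompressed MediaWiki XML dump and the
# English Wiktionary's "==English==" / "===Noun===" heading conventions) for
# counting English entries and their part-of-speech sections.
import re
import xml.etree.ElementTree as ET
from collections import Counter

NS = '{http://www.mediawiki.org/xml/export-0.10/}'  # namespace varies by dump version
ENGLISH_RE = re.compile(r'^==\s*English\s*==', re.MULTILINE)
POS_RE = re.compile(r'^===\s*(Noun|Verb|Adjective|Adverb)\s*===', re.MULTILINE)

def count_entries(dump_path):
    entries, pos_counts = 0, Counter()
    for _, page in ET.iterparse(dump_path):
        if not page.tag.endswith('page'):
            continue
        title = page.findtext(f'{NS}title') or ''
        text = page.findtext(f'{NS}revision/{NS}text') or ''
        if ':' not in title and ENGLISH_RE.search(text):  # skip non-article namespaces
            entries += 1
            pos_counts.update(POS_RE.findall(text))
        page.clear()  # keep memory bounded while streaming the dump
    return entries, pos_counts
```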

The paper has the following structure. Section 2 estimates the quantity of English words and meanings and the distribution of words for each part of speech. Section 3 calculates the ratio of polysemous to monosemous words and the average polysemy across the three dictionaries. Section 4 presents the distribution of words by number of meanings.
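
For concreteness, the quantities examined in Sections 3 and 4 can be illustrated with another short sketch over WordNet (again via NLTK, as an assumed stand-in for the three dictionaries compared in the paper): the split into monosemous and polysemous words, the average polysemy (senses per word), and the distribution of words by number of meanings.

```python
# A minimal sketch of the Section 3-4 quantities over WordNet (via NLTK, as an
# assumed stand-in): monosemy vs. polysemy, average polysemy, and the
# distribution of words by number of meanings.
from collections import Counter
from nltk.corpus import wordnet as wn

senses_per_word = {lemma: len(wn.synsets(lemma))        # meanings of each word
                   for lemma in wn.all_lemma_names()}

monosemous = sum(1 for n in senses_per_word.values() if n == 1)
polysemous = sum(1 for n in senses_per_word.values() if n > 1)
average_polysemy = sum(senses_per_word.values()) / len(senses_per_word)

distribution = Counter(senses_per_word.values())        # words grouped by sense count
print(f'monosemous: {monosemous}  polysemous: {polysemous}  '
      f'average polysemy: {average_polysemy:.2f}')
for n in sorted(distribution)[:10]:
    print(f'{n} meaning(s): {distribution[n]} words')
```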


2. Experiments: Parts of Speech

There are two topics we will discuss in this section: (1) the quantity of English words and meanings and (2) the distribution of words for each part of speech. The following dictionaries are under consideration:
