Lexical Granularity for Automatic Indexing and Means to Achieve It: The Case of Swedish MeSH®


Dimitrios Kokkinakis (University of Gothenburg, Sweden)
DOI: 10.4018/978-1-60566-274-9.ch002
The identification and mapping of terminology from large repositories of life science data onto concept hierarchies constitute an important initial step towards a deeper semantic exploration of unstructured textual content. Accurate and efficient mapping of this kind is likely to enhance the indexing and retrieval of text and to uncover subtle differences, similarities and useful patterns, and hopefully new knowledge, among complex surface realisations that are overlooked by shallow techniques based on various forms of lexicon look-up. However, a finer-grained mapping between terms as they occur in natural language and domain concepts is a cumbersome enterprise that requires several levels of processing in order to make the relevant linguistic structures explicit. This chapter highlights some of the challenges encountered in bridging free text to controlled vocabularies and thesauri, and vice versa. The author investigates how the extensive variability of lexical terms in authentic data can be efficiently projected onto hierarchically structured codes, and examines means of increasing the coverage of the underlying lexical resources.
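To make the contrast concrete, the shallow lexicon look-up that the abstract mentions can be sketched as a simple normalise-then-match step. The sketch below is illustrative only, not the chapter's actual pipeline, and the descriptor IDs and tree codes in the mini-lexicon are invented placeholders rather than real MeSH data; such an approach handles trivial surface variation (case, punctuation, spacing) but misses the deeper syntactic and morphological variability the chapter addresses.

```python
import re

# Hypothetical mini-lexicon: normalised surface term -> (descriptor ID, tree code).
# The IDs and codes are illustrative placeholders, not real MeSH entries.
MESH_LEXICON = {
    "heart attack": ("D000001", "C14.280.647"),
    "myocardial infarction": ("D000001", "C14.280.647"),
    "diabetes mellitus": ("D000002", "C18.452.394"),
}

def normalise(term: str) -> str:
    """Lower-case, replace punctuation with spaces, collapse whitespace."""
    term = term.lower()
    term = re.sub(r"[^\w\s]", " ", term)
    return re.sub(r"\s+", " ", term).strip()

def lookup(term: str):
    """Map a surface term to a (descriptor, tree code) pair, or None if unknown."""
    return MESH_LEXICON.get(normalise(term))

print(lookup("Myocardial  Infarction,"))  # ('D000001', 'C14.280.647')
print(lookup("heart-attack"))             # hyphen normalised away, same hit
print(lookup("infarction of the myocardium"))  # None: variation beyond look-up
```

The last call illustrates the limitation driving the chapter: a paraphrased realisation of the same concept falls outside any fixed look-up table, which is why finer-grained linguistic processing is needed.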
Chapter Preview


Large repositories of life science data in the form of domain-specific literature, textual databases and other large specialised text collections (corpora) in electronic form grow daily, to a level beyond what the human mind can grasp and interpret. As the volume of data continues to increase, the need for substantial support from new information technologies and computational techniques, grounded in the ever-increasing applications of the mining paradigm, is becoming apparent. In the biomedical domain, for instance, curators are struggling to process effectively the tens of thousands of scientific references added to the MEDLINE/PubMed database every month, while in the clinical setting vast amounts of health-related data are collected on a daily basis. These data constitute a valuable research resource, particularly if effective automated processing could better integrate and link them, and thus help scientists locate and make better use of the knowledge encoded in electronic repositories. One example would be the construction of hypotheses based on associations between extracted pieces of information that human readers may have overlooked. Web, text and data mining are therefore recognised as key technologies for advanced, exploratory and quantitative analysis of large and often complex data in unstructured or semi-structured form in document collections. Text mining addresses the problem of information overload by combining techniques from natural language processing (NLP), information retrieval, machine learning, visualisation and knowledge management: it analyses large volumes of unstructured data through the development of new tools and/or the integration and adaptation of state-of-the-art processing components.
“Text mining aims at extracting interesting non-trivial patterns of knowledge by discovering, extracting and linking sparse evidence from various sources” (Hearst, 1999). It is considered a variant of data mining, which tries to find interesting patterns in structured data; by the same analogy, web mining is the extraction of useful information directly from web documents (Markellos et al., 2004). These emerging technologies play an increasingly critical role in aiding research productivity: they reduce the workload of information access and decision support, and speed up and enhance the knowledge discovery process (Kao & Poteet, 2007; Feldman & Sanger, 2007; Sirmakessis, 2004).