A Content-Based Approach to Medical Images Retrieval


Mana Tarjoman, Emad Fatemizadeh, Kambiz Badie
DOI: 10.4018/jhisi.2013040102


Content-based image retrieval (CBIR) uses image features, such as color, texture, or shape, to index images with minimal human intervention, and it can be used to locate medical images in large databases. In this paper, the fundamentals of the key components of content-based image retrieval systems are first introduced to give an overview of the area. Then a case study is presented that describes the methodology of a CBIR system for retrieving human brain magnetic resonance images. The proposed method is based on Adaptive Neuro-Fuzzy Inference System (ANFIS) learning and can classify an image as normal or tumoral. This research applies the CBIR approach to medical decision support, discriminating between normal and abnormal medical images on the basis of extracted features. The experimental results indicate that the proposed method is reliable and achieves high retrieval efficiency.

1. Introduction

The growth of medical image databases has accelerated in the past few years. In the medical field, digital images such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, nuclear medicine imaging, endoscopy, and microscopy, used for diagnosis or therapy, are produced in medical centers at an ever-increasing rate and have resulted in large volumes of data (Smeulder, Worring, Santini, Gupta, & Jain, 2000).

In order to deal with these data, it is necessary to develop appropriate information systems that manage these collections efficiently. Image searching is one of the most important services such systems need to support. In general, two different approaches have been applied to searching image collections: one based on textual image metadata and another based on image content.

The first retrieval approach attaches textual metadata to each image and uses traditional techniques to retrieve images by keyword (Ogle & Stonebraker, 1995; Lieberman, Rosenzweig, & Singh, 2001). However, these systems require prior annotation of the database images, which is a laborious and time-consuming task. Furthermore, the annotation process is usually inefficient because users generally do not annotate in a systematic way; in fact, different users tend to use different words to describe the same image characteristic. This lack of systematization in the annotation process decreases the performance of keyword-based image search.

These shortcomings are addressed by the so-called Content-Based Image Retrieval (CBIR) systems (Smeulder, Worring, Santini, Gupta, & Jain, 2000; Flickner et al., 1995; Rui, Huang, & Chang, 1999), which were introduced in the early 1990s (Muller, Michoux, Bandon, & Geissbuhler, 2004). In these systems, image processing algorithms extract feature vectors that represent image properties such as color, texture, and shape, making it possible to retrieve images similar to one chosen by the user (query-by-example). One of the main advantages of this approach is the possibility of a fully automatic retrieval process, in contrast to the effort needed to annotate images. Generally speaking, CBIR aims at developing techniques that support effective searching and browsing of large digital image libraries on the basis of automatically derived image features (Chen, Wang, & Krovetz, 2003).
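As a minimal, purely illustrative sketch of such an automatically derived feature (not the feature set used in the paper), a normalized grayscale intensity histogram can serve as a simple feature vector for an image:

```python
def intensity_histogram(pixels, bins=8):
    """Map a flat list of grayscale pixel values (0-255) to a
    normalized histogram: one simple, automatically derived
    feature vector requiring no manual annotation."""
    counts = [0] * bins
    for p in pixels:
        # Clamp the top value (255) into the last bin.
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

# A toy 4-pixel "image": two dark pixels and two bright ones.
print(intensity_histogram([10, 20, 200, 250], bins=4))
# -> [0.5, 0.0, 0.0, 0.5]
```

Real CBIR systems combine many such descriptors (color, texture, shape) into richer vectors, but the principle is the same: the index is computed from pixel data rather than from human-supplied keywords.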

Images are particularly complex to manage. Besides the storage volume they occupy, retrieval is an application- and context-dependent task (Rui, Huang, & Chang, 1999). It requires translating high-level user perceptions into low-level image features (the so-called “semantic gap” problem). Moreover, image indexing is not just a matter of string processing (as it is in standard textual databases). To index visual features, it is common to assign numerical values to the n features and then represent the image or object as a point in an n-dimensional space (Aslandogan & Yu, 1999). Multi-dimensional indexing techniques (Gaede & Gunther, 1998; Bohm, Berchtold, & Keim, 2001) and common similarity metrics (Weber, Schek, & Blott, 1998) are factors to be taken into account. In this context, the main challenges are the specification of indexing structures to speed up image retrieval and the query specification as a whole. Furthermore, query processing also depends on cognitive aspects of visual interpretation. Several other problems, such as query languages and data mining, continue to attract computer scientists to this area.
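The point-in-n-dimensional-space view of query-by-example can be sketched as follows. The feature values, image names, and Euclidean distance metric here are all hypothetical placeholders, chosen only to show the ranking mechanics; practical systems use richer features and often approximate multi-dimensional indexes instead of a linear scan:

```python
import math

# Hypothetical 4-dimensional feature vectors (e.g., mean intensity,
# contrast, energy, homogeneity), keyed by image id. Values are
# illustrative only, not taken from the paper.
database = {
    "img_normal_1":  [0.62, 0.10, 0.85, 0.90],
    "img_normal_2":  [0.60, 0.12, 0.80, 0.88],
    "img_tumoral_1": [0.35, 0.40, 0.30, 0.45],
}

def euclidean(a, b):
    """Distance between two feature points in n-dimensional space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, db, k=2):
    """Query-by-example: rank database images by feature distance
    to the query vector and return the k nearest."""
    ranked = sorted(db.items(), key=lambda item: euclidean(query, item[1]))
    return [name for name, _ in ranked[:k]]

query = [0.61, 0.11, 0.82, 0.89]    # features of the example image
print(retrieve(query, database))
# -> ['img_normal_2', 'img_normal_1']
```

The linear scan above is O(n) per query; the multi-dimensional indexing structures cited in the text (e.g., Gaede & Gunther, 1998) exist precisely to avoid comparing the query against every stored vector.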
