Association-Based Image Retrieval

Arun Kulkarni (The University of Texas at Tyler, USA) and Leonard Brown (The University of Texas at Tyler, USA)
Copyright: © 2009 |Pages: 28
DOI: 10.4018/978-1-60566-188-9.ch016


With advances in computer technology and the World Wide Web, there has been an explosion in the amount and complexity of multimedia data that are generated, stored, transmitted, analyzed, and accessed. To extract useful information from this huge amount of data, many content-based image retrieval (CBIR) systems have been developed in the last decade. A typical CBIR system captures features that represent image properties such as color, texture, or the shape of objects in the query image, and tries to retrieve images from the database with similar features. Recent advances in CBIR systems include interactive systems based on relevance feedback. The main advantage of CBIR systems with relevance feedback is that they account for the gap between high-level concepts and low-level features, and for the subjectivity of human perception of visual content. CBIR systems with relevance feedback are more effective than conventional CBIR systems; however, they depend on human interaction. In this chapter, the authors describe a new approach to image storage and retrieval called association-based image retrieval (ABIR), which tries to mimic human memory: the human brain stores and retrieves images by association. They use a generalized bi-directional associative memory (GBAM) to store associations between feature vectors that represent images stored in the database. Section I introduces the reader to CBIR systems. In Section II, the authors present the architecture of the ABIR system; Section III deals with preprocessing and feature extraction techniques; Section IV presents various models of GBAM; and Section V presents case studies.
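The GBAM used in the chapter generalizes Kosko's bidirectional associative memory. As a rough orientation, a minimal sketch of a standard (non-generalized) discrete BAM, trained by correlation learning on bipolar pattern pairs, might look like the following; the specific vectors and function names are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Illustrative bipolar (+1/-1) pattern pairs; in an ABIR setting each
# pair would associate two feature vectors describing the same image.
pairs = [
    (np.array([1, 1, -1, -1]), np.array([1, -1, 1])),
    (np.array([1, -1, 1, -1]), np.array([-1, 1, 1])),
]

# Correlation (Hebbian) learning: W is the sum of outer products x * y^T.
W = sum(np.outer(x, y) for x, y in pairs)

def recall_y(x):
    # Forward recall: threshold W^T x back to a bipolar vector.
    return np.where(W.T @ x >= 0, 1, -1)

def recall_x(y):
    # Backward recall: threshold W y.
    return np.where(W @ y >= 0, 1, -1)
```

Because the memory is bidirectional, either vector of a stored pair can serve as the cue for recalling the other, which is the property ABIR exploits for retrieval by association.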
Chapter Preview


The rapid growth in the number of large-scale image repositories in domains such as medical image management, multimedia libraries, document archives, art collections, geographical information systems, law enforcement, environmental monitoring, biometrics, and journalism has created the need for efficient mechanisms for managing the storage and retrieval of images. Effective retrieval of image data is an important building block for general multimedia information management. DataBase Management Systems (DBMSs) typically have a wide variety of features and tools supporting various aspects of data management. Two such features, however, are essential: a DBMS must be able to store information about data objects efficiently, and it must facilitate user-driven searching and retrieval of that information. It follows, then, that a MultiMedia DataBase Management System (MMDBMS) must provide similar capabilities while handling images and other types of multimedia data, such as audio and video. Unlike traditional simple textual data elements, images are considered to have content when they are displayed to users. Consequently, one of the goals of an MMDBMS is to allow users to search the database using that visual content. This goal is commonly referred to as Content-Based Image Retrieval (CBIR): searches are performed on the visual content of the database images rather than on the actual images themselves, so for an image to be searchable, it must be indexed by its content. This goal is nontrivial for an MMDBMS to achieve because of the difficulty of representing the visual content of an image in a searchable form. Consequently, one of the most critical issues for any MMDBMS supporting CBIR queries is deciding how best to represent and extract that content.
Many ideas from fields including computer vision, database management, image processing, and information retrieval are used to address this issue.
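To make "representing visual content in a searchable form" concrete, content is commonly encoded as a numeric feature vector. A minimal sketch of one common representation, a quantized color histogram, might look like this (the function name, bin count, and assumed `H x W x 3` uint8 input layout are illustrative assumptions, not prescriptions from the chapter):

```python
import numpy as np

def color_histogram(image, bins=4):
    """Quantize each RGB channel into `bins` levels, count pixels per
    (r, g, b) bin, and normalize so images of different sizes yield
    comparable feature vectors."""
    # image: H x W x 3 uint8 array
    q = (image // (256 // bins)).reshape(-1, 3)            # quantized pixels
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]  # flat bin index
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length vector can be stored in an index and compared with other images' vectors, which is what makes the content searchable.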

Many systems associate text-based keywords with each image in the database so that users can search it by submitting simple text queries (Yoshitaka & Ichikawa, 1999). Text-based image retrieval using keyword annotation can be traced back to the 1970s. However, keywords must be attached manually, which requires a large amount of effort and often leads to incorrect, inconsistent, and incomplete descriptions of the images, since annotation depends on human interpretation of image content. In the early 1990s, with the emergence of large-scale image collections, the difficulties faced by the manual annotation approach became increasingly acute, and content-based image retrieval (CBIR) was proposed to overcome them. In CBIR, instead of being manually annotated with keywords, images are indexed by their own visual content.

Because it is often difficult to describe the visual content of an image with only textual keywords and phrases, a user may not be able to accurately describe his or her desired content solely using a text-based query. As an alternative, the system should provide facilities for a user to indicate the desired content visually. One method of achieving this is to support a Query-By-Example (QBE) interface where a user presents an image or a sketch to the system as a query object representing an example of his or her desired content. The system should search for images within its underlying database that have matching content. Unfortunately, it is not realistic for the user to expect to find database images that have identical content to the query object. So, the typical environment is for a user to present a query object, Q, to the MMDBMS requesting the system to search for and retrieve all images in the database that are similar to Q. Queries of this type are called similarity-based searches or similarity searches, for short. Note that after executing one of these queries, some of the retrieved images can be used as the bases for subsequent queries allowing the user to refine his or her search.
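A similarity search of the kind described above can be sketched in a few lines: given the feature vector of a query object Q and the vectors of the database images, rank the database by distance and return the closest matches. This is a minimal illustration assuming Euclidean distance over precomputed feature vectors; the function name and parameters are hypothetical:

```python
import numpy as np

def similarity_search(query_vec, db_vecs, k=3):
    """Return indices of the k database images whose feature vectors
    are closest to the query vector under Euclidean distance."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(dists)[:k]
```

Any retrieved image's feature vector can then be fed back in as the next `query_vec`, mirroring the query-refinement loop described above.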
