Interactive Visualization Tool for Analysis of Large Image Databases

Anca Doloc-Mihu
DOI: 10.4018/978-1-60960-102-7.ch016

Abstract

Navigation and interaction are essential features of an interface built as a tool for analyzing large image databases. A tool for actively searching for information in large image databases is called an Image Retrieval System; its more advanced version, which adapts to the user, is called an Adaptive Image Retrieval System (AIRS). In an AIRS, the user-system interaction takes place through an interface that supports the relevance feedback process. In this chapter, the author identifies two types of users for an AIRS: a user who seeks images, referred to as an end-user, and a user who designs and researches the collection and the retrieval systems, referred to as a researcher-user. In this context, she describes a new interactive multiple-views interface for an AIRS (Doloc-Mihu, 2007), in which each view illustrates the relationships between the images in the collection by using visual attributes (colors, shapes, proximities). With such views, the interface allows the user (both end-user and researcher-user) a more effective interaction with the system, which in turn helps during the analysis of the image collection. The author's qualitative evaluation of these multiple views in an AIRS shows that each view has its own limitations and benefits. Together, however, the views offer complementary information that helps the user improve his or her search effectiveness.
Chapter Preview

Introduction

This chapter focuses on visualization techniques used by Web-based Adaptive Image Retrieval Systems to allow different users to efficiently navigate, search and analyze large image databases.

In a Web-based Adaptive Retrieval System, the goal is to answer as quickly and accurately as possible with data (documents, images) that meet the user's request. Recent advances in Internet technology require the development of advanced Web-based tools for efficiently accessing images from tremendously large, and continuously growing, image collections. One such tool for actively searching for information is an Image Retrieval System, whose aim is to retrieve images relevant to the user's request from a large image collection. In this task, the visualization component of the system is responsible for conveying this information to the user, which makes it a key component of the retrieval system.

An Adaptive Image Retrieval System (AIRS) is an Image Retrieval System that is able to automatically adapt to the user's needs. It consists of several components, one for each of the following tasks: processing, indexing, retrieval, learning, fusion, and visualization. The adaptation is performed by the learning component, which is used in an image retrieval system as a solution to the semantic gap problem: the difference between what the retrieval system can distinguish (low-level features describing the images) and what people perceive from images (high-level semantic concepts given as query images). Again, this information must be properly conveyed to the user via the visualization component.
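The component breakdown above (processing, indexing, retrieval) can be sketched as a minimal pipeline. This is an illustrative sketch only; the class and function names, the toy histogram features, and the similarity measure are assumptions for exposition, not the system described in the chapter.

```python
# Illustrative sketch of an AIRS-style retrieval pipeline.
# All names and the feature choice are assumptions, not the chapter's design.

def extract_features(image):
    # Processing: reduce an image (here, a flat list of grayscale pixel
    # values 0-255) to a low-level feature vector: a 4-bin histogram.
    bins = [0, 0, 0, 0]
    for p in image:
        bins[min(p // 64, 3)] += 1
    total = len(image) or 1
    return [b / total for b in bins]

def similarity(a, b):
    # Retrieval: compare feature vectors by histogram intersection.
    return sum(min(x, y) for x, y in zip(a, b))

class ImageRetrievalSystem:
    def __init__(self):
        self.index = {}  # Indexing: image id -> feature vector

    def add(self, image_id, image):
        self.index[image_id] = extract_features(image)

    def retrieve(self, query_image, k=3):
        # Query-by-example: rank indexed images against the query's features.
        q = extract_features(query_image)
        ranked = sorted(self.index,
                        key=lambda i: similarity(q, self.index[i]),
                        reverse=True)
        return ranked[:k]
```

In a full AIRS, a learning component would re-weight these features from user feedback, and the visualization component would render the ranked results as one or more views.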

A variety of Image Retrieval Systems have been developed during the past decade of research, all sharing the same goal: to return images that are similar to the query according to the user's perception. These systems rely on different approaches for representing the contents of the image collection(s), such as content-based features (color, shape, texture, and layout), keywords, or both. For searching the collection, the user may specify feature representations of images, entire image(s) (the query-by-example approach), or both. The closeness between images (image semantics) is determined by the specific query the user is asking. The process of query formulation is a "conversational" activity between the user and the system, during which the meaning of an image is created (Santini, Gupta, & Ramesh, 1999). This user-system interaction takes place through the visualization component, the system's interface. However, many of these image retrieval systems focus on improving the performance of their retrieval component and disregard the particular visualization needs of the user, who is the main beneficiary of the system.
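The "conversational" query refinement described above is often realized through relevance feedback. One standard technique is a Rocchio-style query update, sketched below; this is an assumption chosen for illustration, not the specific method of the systems surveyed, and the weights are conventional defaults.

```python
# Rocchio-style relevance feedback sketch (illustrative, not the
# chapter's method): the query vector is moved toward the centroid of
# images the user marked relevant and away from non-relevant ones.

def rocchio_update(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    dims = len(query)

    def centroid(vectors):
        # Mean of a list of feature vectors; zero vector if the list is empty.
        if not vectors:
            return [0.0] * dims
        return [sum(v[d] for v in vectors) / len(vectors)
                for d in range(dims)]

    rel_c = centroid(relevant)
    non_c = centroid(nonrelevant)
    # New query = alpha * old query + beta * relevant centroid
    #             - gamma * non-relevant centroid
    return [alpha * q + beta * r - gamma * n
            for q, r, n in zip(query, rel_c, non_c)]
```

Each feedback round produces a refined query vector, which the system re-runs against the index; iterating this loop is what lets the interface turn user judgments into better rankings.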

The research presented in this chapter describes several visualization techniques for two types of users of an AIRS: a user who seeks images, whom we refer to as an end-user, and a user who designs and researches the collection and the retrieval systems, whom we refer to as a researcher-user. The focus of this chapter is on interfaces that include multiple views, in which each view illustrates different relations between images at different levels of detail and can be selected by the users according to their informational needs. With such views, the interface allows the user (end-user or researcher-user) a more effective interaction with the system: by seeing more information about the request sent to the system and by better understanding the results, the user is able to refine his or her query iteratively, which significantly improves the retrieval results. We make a direct correspondence between the types of information needed by the different types of users and the visual information that is displayed for them. This correspondence will help us build an interface that reflects only as much detail as necessary for the user to obtain the content information that supports the searching process.
