Navigating Through Video Stories Using Clustering Sets


Sheila M. Pinto-Cáceres, Jurandy Almeida, Vânia P. A. Neris, M. Cecília C. Baranauskas, Neucimar J. Leite, Ricardo da S. Torres
DOI: 10.4018/jmdem.2011070101

Abstract

The fast evolution of technology has led to a growing demand for video data, increasing the amount of research into efficient systems to manage those materials. Making efficient use of video information requires that data be accessed in a user-friendly way. Ideally, one would like to perform video search using an intuitive tool. Most existing browsers for the interactive search of video sequences, however, employ an overly rigid layout to arrange the results, restricting users to list- or grid-based exploration. This paper presents a novel approach to interactive search that displays the result set in a flexible manner. The proposed method is based on a simple and fast algorithm to build video stories and on an effective visual structure, called Clustering Set, to arrange the storyboards. It groups together videos with similar content and organizes the result set in a well-defined tree. Results from a rigorous empirical comparison with a subjective evaluation show that such a strategy makes navigation more coherent and engaging for users.

Background

The exploration of large collections of video data is non-trivial. When a user requests a search, formulating the query (the search criterion) can be quite difficult.

Most search systems are based on textual metadata, which leads to several problems when searching for visual content. Generally, the user lacks information about which keywords best represent the content in which he/she is interested. In fact, different users tend to use different words to describe the same visual content. The lack of systematization in choosing query words can significantly affect the search results (De Rooij et al., 2008).

Modern systems have addressed those shortcomings by automatically detecting visual concepts derived from visual properties, such as color, texture, and shape. However, a minimum of knowledge about the concept vocabulary is needed to perform a query, which is not appropriate for non-expert users (Zavesky & Chang, 2008).

Fully automated approaches have combined descriptors of multiple modalities (textual metadata, visual properties, and visual concepts). In spite of all the advances, the formulation of a query using such features is a difficult task for a human interested in a specific video (De Rooij & Worring, 2010).

Once the search results are returned, we can explore many different directions based on query type and user intention. Several visualization techniques have been proposed to assist users in the exploration of result sets (De Rooij et al., 2008; De Rooij & Worring, 2010; Zavesky & Chang, 2008; Zavesky et al., 2008).

Those methods often employ dimensionality reduction algorithms to map the high-dimensional feature space of visual properties onto a fixed display. Afterwards, a display strategy is applied to produce user-browsable content (Zavesky et al., 2008).
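As a minimal sketch of that first step, the following illustrates one common dimensionality reduction technique, a PCA-style linear projection, mapping high-dimensional visual descriptors to 2-D screen coordinates. This is an illustrative assumption: the systems cited above may use other algorithms, and the descriptor dimensionality here is made up.

```python
import numpy as np

def project_to_2d(features: np.ndarray) -> np.ndarray:
    """Project n feature vectors (an n x d matrix) onto their
    top-2 principal axes, yielding n x 2 display coordinates."""
    centered = features - features.mean(axis=0)
    # SVD of the centered matrix gives the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Hypothetical example: 100 synthetic 64-dimensional
# color/texture descriptors projected onto the display plane.
rng = np.random.default_rng(0)
coords = project_to_2d(rng.normal(size=(100, 64)))
print(coords.shape)  # (100, 2)
```

A display strategy would then arrange the videos' keyframes on screen according to these coordinates, so that visually similar items land near each other.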
