Analyzing Animated Movie Contents for Automatic Video Indexing

Bogdan Ionescu (University Politehnica of Bucharest, Romania), Patrick Lambert (University of Savoie, France), Didier Coquin (University of Savoie, France), Alexandru Marin (University Politehnica of Bucharest, Romania) and Constantin Vertan (University Politehnica of Bucharest, Romania)
DOI: 10.4018/978-1-61692-859-9.ch011


In this chapter the authors tackle the analysis and characterization of artistic animated movies with a view to building an automatic content-based retrieval system. First, they deal with temporal segmentation, proposing cut, fade and dissolve detection methods adapted to the constraints of this domain. They then discuss a fuzzy linguistic approach for automatic symbolic/semantic content annotation in terms of color techniques and action content, and test its potential in automatic video classification. The browsing issue is addressed by providing methods for both static and dynamic video abstraction. For a quick browse of the movie’s visual content the authors create a storyboard-like summary, while for a “sneak peek” at the movie’s exciting action content they propose a trailer-like video skim. Finally, the authors discuss the architecture of a prototype client-server 3D virtual environment for interactive video retrieval. Several experimental results are presented.
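The temporal segmentation step summarized above can be illustrated with a toy shot-cut detector. The sketch below is a generic histogram-difference approach, not the authors' method; the frame data, bin count and threshold are all illustrative.

```python
# Generic sketch of cut detection via colour-histogram differences.
# Frames are modelled as flat lists of 0-255 grey values; real systems
# work on decoded video frames and tune the threshold per domain.

def grey_histogram(frame, bins=8):
    """Quantize 0-255 pixel values into a normalized histogram."""
    hist = [0] * bins
    for p in frame:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [h / total for h in hist]

def detect_cuts(frames, threshold=0.5):
    """Flag frame indices where consecutive histograms differ sharply."""
    cuts = []
    prev = grey_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = grey_histogram(frames[i])
        # L1 distance between successive histograms
        diff = sum(abs(a - b) for a, b in zip(prev, cur))
        if diff > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two synthetic "shots": dark frames followed by bright frames
dark = [20] * 100
bright = [230] * 100
frames = [dark] * 5 + [bright] * 5
print(detect_cuts(frames))  # a single cut, at frame index 5
```

Fades and dissolves require more than a single-frame difference (e.g. tracking gradual luminance or histogram trends over a window), which is why the chapter treats them as separate detection problems.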
Chapter Preview


Recent advances in multimedia technology, especially in high-speed networking, storage devices and portable devices, have driven an exponential increase in the popularity of digital video libraries. Accessing the relevant information proves to be a difficult and time-consuming task, considering that video databases may contain many thousands of videos. To cope with this issue, content-based video indexing systems are specially designed to provide efficient content-based retrieval facilities, ideally in a manner close to human perception.

Video indexing primarily involves content annotation, which basically means adding extra content-related information to the actual data (i.e. indexes/attributes). This information provides key cues about the data content, thus allowing automatic cataloging. Content annotation is mandatory, as non-indexed data is practically nonexistent for the system (and ultimately for the user), since there is no trace of it. Besides content annotation, a video indexing system also provides searching capabilities, i.e. retrieving data according to the user's specifications, which is done by comparing data indexes from the database against the ones extracted from the user's query; and browsing capabilities, i.e. providing a visual interface for accessing and visualizing data contents, which is usually performed with the help of automatic content abstraction techniques. Most of the research in the field addresses the data annotation task, which is also the most difficult to perform (Naphade, Huang, 2002; Snoek, Worring, 2005). The challenge is to find methods that extract meaningful attributes, maximizing relevance and information coverage while minimizing the amount of data to deal with, and thus the dimensionality of the feature space. Moreover, to be useful and efficient, the annotation must be performed automatically, without human intervention.
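The annotate-then-search loop described above can be sketched in a few lines: each video is annotated with a small feature vector (its "indexes"), and a query is answered by comparing its features against every stored vector. The class, titles and feature values below are purely illustrative.

```python
# Minimal sketch of index-based retrieval: annotation stores feature
# vectors; search ranks stored vectors by distance to the query.
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class VideoIndex:
    def __init__(self):
        self.entries = {}                      # title -> feature vector

    def annotate(self, title, features):
        """Attach content-derived indexes/attributes to a video."""
        self.entries[title] = features

    def search(self, query, k=2):
        """Return the k titles whose indexes best match the query."""
        ranked = sorted(self.entries,
                        key=lambda t: euclidean(self.entries[t], query))
        return ranked[:k]

index = VideoIndex()
index.annotate("movie_a", [0.9, 0.1, 0.3])    # e.g. colour/action descriptors
index.annotate("movie_b", [0.2, 0.8, 0.5])
index.annotate("movie_c", [0.5, 0.5, 0.5])
print(index.search([0.9, 0.2, 0.3], k=2))     # ['movie_a', 'movie_c']
```

A real system replaces the hand-written vectors with automatically extracted descriptors and the linear scan with an indexed search structure, but the division of labor (annotation vs. query-time comparison) is the same.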

Unfortunately, due to the diversity of existing video materials, which entails a large variety of specific processing constraints, automatic understanding of video content remains an open problem. Despite a few attempts (Qian, Haering, Sezan, 1999; Chan, Qing, Yi, Yueting, 2001; Kim, Frigui, Fadeev, 2008), there is still no generic solution for indexing all kinds of video material. The usual compromise is to reduce the high complexity of this task by adopting simplifying assumptions, e.g. particular setups, “a priori” information, hypotheses, etc., which are facilitated by the specificity of each application domain. This makes existing systems highly application dependent. Many domains have been addressed, while new ones are still emerging, e.g. basketball sequences (Saur, Tan, Kulkarni, Ramadge, 1997), soccer sequences (Leonardi, Migliorati, Prandini, 2004), medical sequences (Fan, Luo, Elmagarmid, 2004), news footage (Lu, King, Lyu, 2003), TV programs (Kawai, Sumiyoshi, Yagi, 2007), animal hunts in wildlife documentaries (Haering, Qian, Sezan, 2000), etc.

In this chapter we address the indexing issue for a new and increasingly popular application domain: the animated movie entertainment industry. While the very few existing approaches are limited to either the analysis of classic cartoons or cartoon genre detection (Roach, Mason, Pawlewski, 2001; Snoek, Worring, 2005; Ianeva, Vries, Rohrig, 2003; Geetha, Palanivel, 2007), our approach is different, as it addresses artistic animated movies. One reference in the field is IAFF, the International Animated Film Festival (CITIA, 2009), which served as the validation platform for our approaches. CITIA, the company managing the festival, has assembled one of the world's first digital animated movie libraries. Today, this library accounts for more than 31,000 movie titles, 22,924 companies and 60,879 professionals, which are to be made available online for general and professional use. Managing thousands of videos is a tedious task; therefore an automatic content-based retrieval system is required. For the moment, the existing indexing capabilities for animated movies (the CITIA Animaquid Indexing System) are limited to textual information (e.g. synopses, descriptions, etc.), provided mainly by movie authors, which in many cases does not fully capture the rich artistic content of the animated movies.
