Statistical Audio-Visual Data Fusion for Video Scene Segmentation

Vyacheslav Parshin (Ecole Centrale de Lyon, France) and Liming Chen (Ecole Centrale de Lyon, France)
Copyright: © 2007 | Pages: 22
DOI: 10.4018/978-1-59904-370-8.ch004


Automatic video segmentation into semantic units is important for organizing effective content-based access to long video. In this work we focus on the problem of segmenting video into narrative units called scenes: aggregates of shots unified by a common dramatic event or locale. We derive a statistical video scene segmentation approach that detects scene boundaries in a single pass, fusing multi-modal audio-visual features in a symmetric and scalable manner. The approach properly handles the variability of real-valued features and models their conditional dependence on the context; it also integrates prior information about scene duration. Two kinds of features, extracted in the visual and audio domains, are proposed. Experimental evaluations carried out on ground-truth video show that our approach effectively fuses multiple modalities, yielding higher performance than an alternative rule-based fusion technique.
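To illustrate the general shape of such a statistical fusion scheme, the sketch below combines, at each shot transition, per-modality log-likelihood ratios under "scene boundary" vs. "no boundary" hypotheses with a log-prior on scene duration. The Gaussian feature models, the exponential duration prior, and all parameter values are illustrative assumptions for this sketch, not the chapter's actual models.

```python
import math

# Hypothetical sketch of single-pass statistical audio-visual fusion for
# scene boundary detection. Feature distributions, the duration prior,
# and all numeric parameters are illustrative assumptions only.

def log_likelihood_ratio(score, mean_boundary, mean_no_boundary, std=0.15):
    """Gaussian log-likelihood ratio for one real-valued feature score."""
    def log_gauss(x, mu, sigma):
        return (-0.5 * ((x - mu) / sigma) ** 2
                - math.log(sigma * math.sqrt(2 * math.pi)))
    return (log_gauss(score, mean_boundary, std)
            - log_gauss(score, mean_no_boundary, std))

def duration_log_prior(elapsed, mean_scene_len=120.0):
    """Exponential prior on scene duration: a boundary becomes more
    plausible as time since the last detected boundary grows."""
    rate = 1.0 / mean_scene_len
    return math.log(1.0 - math.exp(-rate * elapsed) + 1e-12)

def detect_scene_boundaries(transitions):
    """One-pass detection. `transitions` is a list of
    (time_sec, visual_score, audio_score) tuples, one per shot
    transition, with dissimilarity scores in [0, 1]."""
    boundaries = []
    last_boundary_time = 0.0
    for time_sec, visual, audio in transitions:
        # Symmetric fusion: each modality contributes one additive term.
        fused = (log_likelihood_ratio(visual, 0.8, 0.3)
                 + log_likelihood_ratio(audio, 0.7, 0.3)
                 + duration_log_prior(time_sec - last_boundary_time))
        if fused > 0.0:  # boundary hypothesis more likely than not
            boundaries.append(time_sec)
            last_boundary_time = time_sec
    return boundaries
```

Because each modality enters as one additive log-likelihood term, adding a further feature stream only adds another term to the sum, which is what makes this style of fusion symmetric and scalable.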
