Archive Film Comparison

Maia Zaharieva, Matthias Zeppelzauer, Dalibor Mitrovic, Christian Breiteneder
DOI: 10.4018/jmdem.2010070103

Abstract

In this paper, the authors present an approach for video comparison in which an instantiated framework allows for the straightforward comparison of the different methods required at each step of the comparison process. The approach is evaluated on a real-world scenario of challenging video data from archive documentaries. The experiments reported here evaluate the performance of established shot boundary detection algorithms and the influence of keyframe selection and feature representation.
Article Preview

Introduction

Video copy detection is an active research area driven by ever-growing video collections. The detection of video duplicates allows for the efficient search and retrieval of video content. Existing applications for content-based video copy detection comprise video content identification (Yuan, Duan, Tian, & Xu, 2004), copyright protection (Joly, Frélicot, & Buisson, 2003; Ke, Sukthankar, & Huston, 2004), identification of duplicated news stories (Zhang & Chang, 2004), and TV broadcast monitoring and detection of commercials (Shen, Zhou, Huang, Shao, & Zhou, 2007). The experiments reported in this literature are often limited to high-quality video clips of pre-defined, fixed length and to synthetically generated transformations such as resizing, frame shifting, contrast and gamma modification, and Gaussian noise addition.

In contrast, film and video comparison reaches beyond the boundaries of a single shot and aims at the identification of both reused and unique film material in two video versions. The compared videos can be two versions of the same feature film, e.g., director’s cut and original cut, or two different movies that share a particular amount of film material, such as documentary films and compilation films. Archive film material additionally challenges existing approaches for video analysis by the state and the nature of the material. The analysis of archive film material is often impeded by the loss of the original film versions. Remaining copies are usually low-quality backup copies from film archives and museums. Different versions vary significantly not only in the actual content (e.g., loss of frames or shots due to censorship or re-editing) but also in material-specific artifacts such as mold, film tears, flicker, and low contrast. The movies are often monochromatic and silent, which limits the set of available modalities and feasible techniques. Furthermore, existing algorithms often provide only limited robustness to illumination changes, affine transformations, cropping, and partial occlusions, which restricts their applicability to low-quality archive films. Archive film material is well suited for the evaluation of video comparison techniques since it contains a large number of natural (not synthetically generated) transformations among different film versions and represents a complex real-world scenario for film comparison and copy detection.

In general, a video comparison process proceeds through well-defined steps, from shot boundary detection to shot representation and matching. At each step, different algorithms can be applied. The combination of and the interaction between the selected methods are crucial for the overall comparison process. In this paper, we briefly describe a methodology for video comparison that accounts for the overall video structure at the frame, shot, and video levels, as presented in (Zaharieva, Zeppelzauer, Mitrović, & Breiteneder, 2009). The approach allows for the selection of the appropriate hierarchical level for a given task and, thus, enables different application scenarios such as the identification of missing shots or the reconstruction of the original film version.
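To make the interplay between these steps concrete, the following is a minimal Python sketch of such a pipeline under simplifying assumptions: frames are grayscale arrays, boundaries are detected from histogram differences, the middle frame of each shot serves as the keyframe, and shots are matched by the L1 distance between keyframe histograms. The function names and thresholds are illustrative and are not the implementation used in the paper.

import numpy as np

def grey_histogram(frame, bins=64):
    # Normalized grayscale histogram used as a simple frame/keyframe descriptor.
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def detect_shot_boundaries(frames, threshold=0.5):
    # Toy detector: flag a cut when successive histograms differ strongly (L1 distance).
    boundaries, prev = [0], None
    for i, frame in enumerate(frames):
        h = grey_histogram(frame)
        if prev is not None and np.abs(h - prev).sum() > threshold:
            boundaries.append(i)
        prev = h
    boundaries.append(len(frames))
    return boundaries  # shot i spans frames[boundaries[i]:boundaries[i+1]]

def select_keyframes(frames, boundaries):
    # One common heuristic: the middle frame of every shot serves as its keyframe.
    return [frames[(a + b) // 2] for a, b in zip(boundaries, boundaries[1:])]

def match_shots(keyframes_a, keyframes_b, max_dist=0.4):
    # Match each shot of video A to the closest shot of video B; shots without a
    # sufficiently similar counterpart are labeled as unknown (None).
    feats_a = [grey_histogram(k) for k in keyframes_a]
    feats_b = [grey_histogram(k) for k in keyframes_b]
    matches = []
    for i, fa in enumerate(feats_a):
        dists = [np.abs(fa - fb).sum() for fb in feats_b]
        j = int(np.argmin(dists))
        matches.append((i, j) if dists[j] <= max_dist else (i, None))
    return matches

Because each stage is an independent function, any of them can be replaced, which is exactly what makes it possible to compare different shot boundary detectors, keyframe selection strategies, and feature representations within one framework.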

In this paper we extend our previous work in the following aspects: First, we account for the temporal ordering of corresponding keyframes from matched shots (a small illustration of such an ordering check follows the list below). Second, to further increase the performance, we additionally investigate shots that are labeled as unknown by the system. Third, we extend the performed experiments: we evaluate the performance of established shot boundary detection algorithms on a larger set of archive documentaries and investigate the influence of keyframe selection on the video comparison. Finally, we extend the video data and account for four different types of artifacts:

1. Artifacts originating from the analog filmstrips, e.g., contrast and exposure changes, blurring, frame shift, dirt, and film tears;

2. Digitization artifacts, e.g., coding transformations;

3. Technical transformations, e.g., changes in video format, resizing, and cropping; and

4. Editorial operations such as frame/shot insertion and frame/shot deletion.
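As a companion to the first extension above, the following is a minimal sketch of one possible way to enforce the temporal ordering of keyframe matches: it keeps the largest subset of matched keyframe pairs whose indices increase in both videos (a longest increasing subsequence over the matched indices). This particular filter is an illustrative assumption and not necessarily the mechanism used in the paper.

from bisect import bisect_left

def temporally_consistent(matches):
    # `matches` is a list of (index_in_a, index_in_b) pairs sorted by index_in_a.
    # Keep the longest chain whose index_in_b values are strictly increasing.
    tails = []                    # tails[k]: smallest index_in_b ending a chain of length k+1
    chain_end = []                # position in `matches` ending the best chain of each length
    prev = [-1] * len(matches)    # back-pointers for reconstruction
    for pos, (_, b) in enumerate(matches):
        k = bisect_left(tails, b)
        if k == len(tails):
            tails.append(b)
            chain_end.append(pos)
        else:
            tails[k] = b
            chain_end[k] = pos
        prev[pos] = chain_end[k - 1] if k > 0 else -1
    result, pos = [], chain_end[-1] if chain_end else -1
    while pos != -1:
        result.append(matches[pos])
        pos = prev[pos]
    return result[::-1]

# Example: the out-of-order pair (2, 9) is discarded.
# temporally_consistent([(0, 1), (1, 4), (2, 9), (3, 5), (4, 7)])
# -> [(0, 1), (1, 4), (3, 5), (4, 7)]

Such an ordering constraint discards isolated matches that contradict the narrative order of the two film versions and thereby reduces false shot correspondences.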
