Generating Semantic Annotation of Video for Organizing and Searching Traffic Resources

Zheng Xu (The Third Research Institute of Ministry of Public Security, Shanghai, China), Fenglin Zhi (The Third Research Institute of Ministry of Public Security, Shanghai, China), Chen Liang (The Third Research Institute of Ministry of Public Security, Shanghai, China), Lin Mei (The Third Research Institute of Ministry of Public Security, Shanghai, China) and Xiangfeng Luo (Shanghai University, Shanghai, China)
DOI: 10.4018/ijcini.2014010104

Abstract

Image and video resources play an important role in the analysis of traffic events. With the rapid growth of video surveillance devices, a large number of image and video resources is increasingly being created. It is crucial to explore, share, reuse, and link these multimedia resources for better organizing traffic events. Most video resources are currently annotated in an isolated way, which means that they lack semantic connections. Thus, facilities for annotating these video resources are in high demand. Such facilities create semantic connections among video resources and allow their metadata to be understood globally. Adopting semantic technologies, this paper introduces a video annotation platform. The platform enables users to semantically annotate video resources using vocabularies defined by traffic-event ontologies. Moreover, the platform provides a search interface for annotated video resources. The results of initial development demonstrate the benefits of applying semantic technologies in terms of reusability, scalability, and extensibility.
Article Preview

1. Introduction

Nowadays, traffic events have become increasingly important in the field of emergency events. Traffic jams, crashes, and other traffic events influence the daily life of almost everyone. With the rapid growth of video surveillance devices such as cameras (some statistics suggest that China has about 630 thousand networked traffic cameras), a large number of image and video resources is increasingly being created. The data volume from all video surveillance devices in Shanghai reaches up to 1 TB every day. Thus, it is important to describe video content accurately and to enable the organizing and searching of potentially relevant videos in order to detect and analyze traffic events.

The Ministry of Public Security is the management department for traffic events in China. Different provincial and municipal branches of the Ministry of Public Security manage their own resources separately because the resources, especially video resources, are produced by different cameras at different locations and times. However, some resources are related to one another and can serve multiple traffic events. Therefore, it is crucial to annotate these video resources with useful content. Appropriate annotations can create semantic connections among video resources and allow their metadata to be understood globally. To this end, this paper identifies the following primary challenges.

  • 1.

    Video resources should be annotated precisely. It is important to use appropriate concepts to annotate video resources. In the traffic-events case especially, standard concepts should be provided so that users annotate video resources consistently; for example, given a car in an image, different users may otherwise describe it differently, such as “car”, “vehicle”, or “SUV”. Moreover, it is difficult to use a single general description to tell the whole story of a video resource, because one section of the video stream may carry plenty of information while other sections may be unrelated to the main points of the video. Therefore, besides standard concepts, a more accurate annotation mechanism based on the timeline of the video stream is required.

  • 2.

    The annotations of video resources should be accurate and machine-understandable, to support the related organizing and searching functionality. Although a standard, supervised terminology can provide accurate and machine-understandable vocabularies, it is impossible to build a single unified terminology that satisfies the different description requirements of different traffic events. For example, the annotation of a red traffic light in an image may be helpful for detecting red-light violations.

  • 3.

    Video resources should be linked using their annotations. Web resources do not exist in isolation; it is crucial to explore, share, reuse, and link these multimedia resources for better organizing traffic events. For example, a video resource about a crash event can be linked to a video resource about a traffic jam recorded at a nearby timestamp.
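
The first challenge can be illustrated with a small sketch. The class names, concept hierarchy, and `Annotation` fields below are hypothetical stand-ins for terms the paper draws from its traffic-events ontology; the sketch only shows how timeline-based annotations combined with a concept hierarchy let an “SUV” annotation satisfy a query for the broader concept “vehicle”.

```python
from dataclasses import dataclass

# Hypothetical concept hierarchy; in the paper's platform these terms
# would come from the traffic-events ontology, not a hard-coded dict.
SUBCLASS_OF = {"SUV": "car", "car": "vehicle", "truck": "vehicle"}

@dataclass
class Annotation:
    video_id: str
    concept: str   # ontology term, e.g. "SUV"
    start: float   # seconds from the start of the video stream
    end: float     # end of the annotated segment on the timeline

def is_a(concept, ancestor):
    """True if `concept` equals or is subsumed by `ancestor` in the hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUBCLASS_OF.get(concept)
    return False

# An "SUV" annotated on the segment 12.0-15.5s still matches a query
# for the broader concept "vehicle".
ann = Annotation("cam42_clip1", "SUV", 12.0, 15.5)
print(is_a(ann.concept, "vehicle"))  # True
```

Anchoring each annotation to a segment of the timeline, rather than to the whole video, is what allows one stream to carry several independent descriptions.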

This paper adopts Semantic Web (Berners-Lee, Hendler, & Lassila, 2001; Zhuge, 2009; Luo, Xu, Yu, & Chen, 2011; Luo, Wei, & Zhang, 2011; Luo, Fang, Hu, Yan, & Xiao, 2008) technology to address the above challenges. The major contributions of the proposed method are as follows.

  • 1.

    A video annotation ontology (Wang, Tian, & Hu, 2011) is designed following the traffic law of China. The ontology provides the foundation for annotating videos based on timestamps in the video streams. It provides not only precise description details but also standard, machine-understandable data.

  • 2.

    A semantic video annotation tool is implemented for annotating and organizing video resources based on the video annotation ontology. The tool allows annotators to use domain-specific vocabularies from the traffic field to describe video resources. The annotated video resources are managed based on the semantic relations between annotations.

  • 3.

    A semantic-based video organizing platform (Reeve & Han, 2005; Popov, Kiryakov, Kirilov, Manov, Ognyanoff, & Goranov, 2003) is provided for searching videos. It supports reasoning operations over the annotations of video resources.
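
The searching and linking contributions can be sketched in the same spirit. Everything below is a hypothetical, self-contained model (the `Annotation` shape, the tiny `SUBCLASS_OF` hierarchy, and the 300-second window are assumptions for illustration, not the paper's implementation): a search that uses subsumption reasoning over ontology concepts, and a linker that connects videos whose annotated events fall close together in time, as in the crash/traffic-jam example.

```python
from dataclasses import dataclass

# Hypothetical fragment of a traffic-events hierarchy.
SUBCLASS_OF = {"crash": "traffic_event", "traffic_jam": "traffic_event"}

@dataclass
class Annotation:
    video_id: str
    concept: str
    timestamp: float  # seconds since a shared epoch (e.g. wall-clock time)

def is_a(concept, ancestor):
    """Subsumption check: walk up the concept hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUBCLASS_OF.get(concept)
    return False

def search(annotations, concept):
    """All annotations whose concept is subsumed by the query concept."""
    return [a for a in annotations if is_a(a.concept, concept)]

def link_related(annotations, max_gap=300.0):
    """Pairs of distinct videos annotated within `max_gap` seconds of each other."""
    pairs = set()
    for a in annotations:
        for b in annotations:
            if a.video_id < b.video_id and abs(a.timestamp - b.timestamp) <= max_gap:
                pairs.add((a.video_id, b.video_id))
    return pairs

# A crash on camera A and a jam on camera B two minutes later:
anns = [Annotation("camA", "crash", 1000.0),
        Annotation("camB", "traffic_jam", 1120.0)]
print(len(search(anns, "traffic_event")))  # 2 — both match via reasoning
print(link_related(anns))                  # {('camA', 'camB')}
```

A production platform would express the hierarchy in an ontology language and run such queries through a semantic store rather than in-memory loops, but the subsumption and temporal-proximity logic is the same.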
