Content-Based Video Semantic Analysis

Shuqiang Jiang (Chinese Academy of Sciences, China), Yonghong Tian (Peking University, China), Qingming Huang (Graduate University of Chinese Academy of Sciences, China), Tiejun Huang (Peking University, China) and Wen Gao (Peking University, China)
Copyright: © 2009 |Pages: 25
DOI: 10.4018/978-1-60566-188-9.ch009

Abstract

With the explosive growth in the amount of video data and rapid advance in computing power, extensive research efforts have been devoted to content-based video analysis. In this chapter, the authors will give a broad discussion on this research area by covering different topics such as video structure analysis, object detection and tracking, event detection, visual attention analysis, and so forth. In the meantime, different video representation and indexing models are also presented.
Chapter Preview
Low Level Video Feature Extraction And Representation

Generally speaking, low-level feature representation involves visual feature extraction, description, dimension reduction, and indexing. After the video is segmented into shots and key frames are selected, low-level visual features such as color, texture, edge, and shape can be extracted from the key frames and represented as feature descriptors. After post-processing steps such as dimension reduction, these descriptors can be stored in a database using indexing models for future queries. Visual features fall into two categories: global features, which are extracted from a whole image, and local (or regional) features, which describe selected patches of a given image.
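To make the global-versus-local distinction concrete, the following is a minimal sketch of both feature types on a key frame, using a quantized RGB color histogram as the descriptor. The function names, the 8-levels-per-channel quantization, and the 4×4 patch grid are illustrative choices, not from the chapter:

```python
import numpy as np

def global_color_histogram(image, bins=8):
    """Global feature: quantize each RGB channel into `bins` levels and
    count joint color occurrences over the whole image."""
    q = (image.astype(np.int64) * bins) // 256           # per-channel bin index, 0..bins-1
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()                             # L1-normalize the descriptor

def local_patch_histograms(image, grid=4, bins=8):
    """Local features: split the frame into a grid x grid set of patches
    and describe each patch with its own color histogram."""
    h, w = image.shape[:2]
    feats = []
    for i in range(grid):
        for j in range(grid):
            patch = image[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            feats.append(global_color_histogram(patch, bins))
    return np.stack(feats)                               # shape: (grid*grid, bins**3)

# Example on a synthetic 64x64 RGB "key frame"
frame = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
g = global_color_histogram(frame)   # one 512-dimensional vector for the whole frame
l = local_patch_histograms(frame)   # sixteen 512-dimensional vectors, one per patch
print(g.shape, l.shape)
```

The global descriptor is compact and invariant to the spatial layout of colors, while the patch-level descriptors retain coarse spatial information at the cost of a larger representation; both would typically be dimension-reduced before indexing.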
