Spatio-Temporal Analysis for Human Action Detection and Recognition in Uncontrolled Environments

Dianting Liu, Yilin Yan, Mei-Ling Shyu, Guiru Zhao, Min Chen
DOI: 10.4018/ijmdem.2015010101

Abstract

Understanding the semantic meaning of human actions captured in unconstrained environments has broad applications in fields ranging from patient monitoring and human-computer interaction to surveillance systems. However, while great progress has been made on automatic human action detection and recognition in videos captured in controlled/constrained environments, most existing approaches perform unsatisfactorily on videos recorded under uncontrolled/unconstrained conditions (e.g., significant camera motion, background clutter, scaling, and lighting changes). To address this issue, the authors propose a robust human action detection and recognition framework that works effectively on videos taken in either controlled or uncontrolled environments. Specifically, the authors integrate the optical flow field and the Harris3D corner detector to generate a new spatio-temporal information representation for each video sequence, from which a general Gaussian mixture model (GMM) is learned. All the mean vectors of the Gaussian components in the generated GMM are concatenated to form the GMM supervector for video action recognition. The authors then build a boosting classifier from a set of sparse representation classifiers and Hamming distance classifiers to improve the accuracy of action recognition. Experimental results on two widely used public data sets, KTH and UCF YouTube Action, show that the proposed framework outperforms other state-of-the-art approaches in both action detection and recognition.
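To make the GMM-supervector step concrete, the following is a minimal sketch, assuming scikit-learn and NumPy are available: per-video spatio-temporal descriptors (here random placeholders) are modeled by a diagonal-covariance GMM, and the component means are concatenated into a fixed-length supervector. The descriptor extraction and the universal-background-model adaptation described in the paper are not reproduced here, and the component count and descriptor dimensionality are illustrative choices, not the authors'.

```python
# Minimal sketch of a GMM-supervector representation (not the authors' exact
# implementation): fit a GMM to one video's spatio-temporal descriptors and
# concatenate the component mean vectors into a single fixed-length vector.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_supervector(descriptors, n_components=16):
    """descriptors: (num_points, dim) array of spatio-temporal features."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=0)
    gmm.fit(descriptors)
    # Concatenate all component means: shape (n_components * dim,).
    return gmm.means_.reshape(-1)

# Illustrative usage with placeholder descriptors (500 points, 72 dimensions).
video_descriptors = np.random.rand(500, 72)
supervector = gmm_supervector(video_descriptors)
print(supervector.shape)  # (1152,)
```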

Introduction

Automatic human action detection and recognition systems have broad applications in areas ranging from public security and patient monitoring to human-computer interaction (Poppe, 2010). After years of research and development, many existing algorithms achieve reasonably good performance on action videos captured in controlled/constrained environments (Danafar & Gheissari, 2007). However, their performance is far from satisfactory on videos recorded under uncontrolled/unconstrained conditions, such as those shot by an amateur with a handheld camera, which introduces significant camera motion, background clutter, and changes in action appearance, scale, and lighting (Wang, Klaser, Schmid, & Liu, 2011). For example, as shown in Figure 1, two frames from the same category (e.g., Basketball) can differ considerably in illumination, background and foreground content, and camera motion.

Figure 1. Example of the UCF11 (UCF YouTube Action) data set, which contains approximately 1,168 videos in 11 categories

In general, existing action detection and recognition models impose certain restrictions in order to perform well. For instance, most existing models require static cameras or approximate compensation of camera motion. The work in (Mahadevan & Vasconcelos, 2010) restricted foreground actions to those that move in a consistent direction or vary in appearance faster than the background. In addition, background learning requires either a training set of “background-only” images (Zivkovic, 2004) or batch processing, such as median filtering (Cucchiara, Grana, Piccardi, & Prati, 2003), of a large number of video frames. The latter must be repeated for each scene and is difficult for dynamic scenes where the background changes continuously. Therefore, videos recorded under uncontrolled conditions pose significant challenges to existing state-of-the-art approaches.
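As a concrete illustration of the batch median-filtering approach mentioned above, the following is a minimal sketch assuming NumPy (the threshold and frame layout are arbitrary): the per-pixel median over a stack of frames serves as the background estimate, which is exactly why it must be recomputed for each scene and degrades when the background itself changes.

```python
# Hedged sketch of median-filter background estimation over a batch of frames.
import numpy as np

def median_background(frames):
    """frames: (num_frames, height, width) grayscale stack; per-pixel median."""
    return np.median(frames, axis=0)

def foreground_mask(frame, background, threshold=25.0):
    """Mark pixels that deviate strongly from the background estimate."""
    return np.abs(frame.astype(np.float32) - background) > threshold
```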

In our study, we improve action detection and recognition performance in uncontrolled/unconstrained environments by fully exploiting regions of action (ROAs), i.e., regions each of which corresponds to an action that is meaningful to the human visual system, such as swing or diving. Features extracted from the ROAs are used for video content representation because they carry more specific, action-related information. We also note that temporal features extracted from video sequences enable us to better estimate the ROAs and to recognize the category of the action in each of them. Therefore, we build on ideas from motion detection and propose a framework that detects ROAs by integrating multiple spatio-temporal cues and recognizes actions using static and motion features computed on the ROAs. The main contributions of this paper are summarized as follows.

  • 1.

    Propose a feature representation method that integrates spatio-temporal information from the optical flow field and the Harris3D detector into a new motion representation (a rough, illustrative sketch of this fusion follows this list). The method proves robust to video sequences captured in uncontrolled/unconstrained environments.

  • 2.

    Utilize the new motion representation in an unsupervised action detection method based on the idea of integral density, which locates regions with a high density of motion (see the sketch following this list).

  • 3.

    Learn a universal background model for video representation using features from the ROAs instead of the whole feature set. A boosting classifier is trained for action recognition by assembling sparse representation classifiers and Hamming distance classifiers (a simplified sketch of the sparse representation component appears at the end of this introduction).
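As referenced in contributions 1 and 2 above, the sketch below illustrates the general flavor of fusing motion and corner cues and then searching for the densest motion region. It is an approximation under stated assumptions, not the paper's implementation: OpenCV provides no Harris3D detector, so a per-frame 2-D Harris response stands in for the space-time corner detector, the two cues are combined with an arbitrary equal weighting, and the integral-density search uses a fixed window size.

```python
# Illustrative sketch only (assumes OpenCV and NumPy): fuse dense optical flow
# magnitude with a 2-D Harris response as a stand-in for the optical-flow +
# Harris3D representation, then locate the window of maximal motion density
# with an integral image (summed-area table).
import cv2
import numpy as np

def motion_saliency(prev_gray, curr_gray):
    """prev_gray, curr_gray: consecutive 8-bit grayscale frames."""
    # Dense optical flow between the two frames (Farneback method).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    # Per-frame 2-D Harris corner response (rough stand-in for Harris3D).
    harris = np.maximum(cv2.cornerHarris(np.float32(curr_gray),
                                         blockSize=2, ksize=3, k=0.04), 0)
    norm = lambda x: x / (x.max() + 1e-8)
    return 0.5 * norm(magnitude) + 0.5 * norm(harris)

def densest_window(density, win_h, win_w):
    """Return (top, left, score) of the win_h x win_w window with maximal
    summed density, found with a summed-area table."""
    # Summed-area table with a zero row/column prepended.
    integral = np.pad(density, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    h, w = density.shape
    best = (0, 0, -np.inf)
    for top in range(h - win_h + 1):
        for left in range(w - win_w + 1):
            s = (integral[top + win_h, left + win_w] - integral[top, left + win_w]
                 - integral[top + win_h, left] + integral[top, left])
            if s > best[2]:
                best = (top, left, s)
    return best
```

Once the integral image is built, each window sum is a constant-time lookup, so the search cost is dominated by a single pass over the density map rather than re-summing every candidate window.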

The rest of this paper is organized as follows. The related work is reviewed first, followed by the discussion of the proposed framework for action detection and recognition. The effectiveness of the proposed framework is verified via experiments on the KTH (Schuldt, Laptev, & Caputo, 2004) and UCF11 (Liu et al., 2009) data sets. Finally, a conclusion is drawn to summarize the paper.
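Finally, as an illustration of the sparse representation classifiers referenced in contribution 3, the following is a minimal sketch assuming scikit-learn: the sparse coding step uses orthogonal matching pursuit rather than whichever solver the authors employ, and a test supervector is assigned to the class whose training columns reconstruct it with the smallest residual. The Hamming distance classifiers and the boosting combination are not reproduced here.

```python
# Hedged sketch of a sparse representation classifier (SRC) over GMM supervectors.
# The sparse coding solver (OMP) and the sparsity level are illustrative choices.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(train_vectors, train_labels, test_vector, n_nonzero_coefs=10):
    """train_vectors: (n_train, dim); train_labels: (n_train,); test_vector: (dim,)."""
    A = train_vectors.T  # dictionary: columns are training supervectors
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs,
                                    fit_intercept=False)
    omp.fit(A, test_vector)          # sparse code of the test sample over A
    coef = omp.coef_
    best_label, best_residual = None, np.inf
    for label in np.unique(train_labels):
        idx = train_labels == label
        # Reconstruction using only this class's training samples.
        residual = np.linalg.norm(test_vector - A[:, idx] @ coef[idx])
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```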
