Introduction
Automatic human action detection and recognition systems have broad applications in areas ranging from public security and patient monitoring to human-computer interaction (Poppe, 2010). After years of research and development, many existing algorithms can achieve reasonably good performance for human detection and recognition in action videos captured in controlled/constrained environments (Danafar & Gheissari, 2007). However, their performance is far from satisfactory on videos recorded under uncontrolled/unconstrained conditions, such as videos shot by an amateur with a handheld camera, which introduces significant camera motion, background clutter, and changes in action appearance, scale, and lighting (Wang, Klaser, Schmid, & Liu, 2011). For example, as can be seen in Figure 1, two frames within the same category (e.g., Basketball) can differ considerably in illumination, background and foreground content, and camera motion.
Figure 1. Example frames from the UCF11 (UCF YouTube Action) data set, which contains approximately 1,168 videos in 11 categories
In general, existing action detection and recognition models impose certain restrictions in order to perform well. For instance, most existing models require static cameras or approximate compensation of camera motion. The work in (Mahadevan & Vasconcelos, 2010) restricted foreground actions to move in a consistent direction or to vary in appearance faster than the background. In addition, background learning requires either a training set of “background-only” images (Zivkovic, 2004) or batch processing, such as median filtering (Cucchiara, Grana, Piccardi, & Prati, 2003), of a large number of video frames. The latter must be repeated for each scene and is difficult for dynamic scenes where the background changes continuously. Therefore, videos with uncontrolled recording conditions pose significant challenges to existing state-of-the-art methods.
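To make the median-filtering approach mentioned above concrete: a per-pixel temporal median over a batch of frames yields a background estimate, and pixels that deviate strongly from that estimate are marked as foreground. The following is a minimal pure-Python sketch on grayscale frames stored as nested lists; the toy data and the threshold value are illustrative assumptions, not the cited method's exact procedure:

```python
from statistics import median

def median_background(frames):
    """Estimate the background as the per-pixel temporal median of grayscale frames."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)] for y in range(h)]

def foreground_mask(frame, background, threshold=30):
    """Flag pixels that deviate from the background estimate by more than threshold."""
    return [[abs(p - b) > threshold for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# Toy example: a static 2x3 scene with a bright transient "object" in one frame.
frames = [[[10, 10, 10], [10, 10, 10]] for _ in range(5)]
frames[2][0][1] = 200                 # transient foreground pixel in frame 2
bg = median_background(frames)        # the median suppresses the transient
mask = foreground_mask(frames[2], bg) # only the transient pixel is flagged
```

This also illustrates the weakness noted in the text: the median must be recomputed per scene, and it breaks down when the background itself changes over the batch.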
In our study, we improve action detection and recognition performance in uncontrolled/unconstrained environments by fully exploiting regions of action (ROAs), i.e., regions each of which corresponds to an action that is meaningful to the human visual system, such as swinging or diving. Features extracted from the ROAs are used for video content representation because they contain more action-specific information. We also note that temporal features extracted from video sequences enable us to better estimate the ROAs and to recognize the category of the action in each of them. Therefore, we build on ideas from motion detectors and propose a framework that detects ROAs by integrating multiple spatial-temporal cues and recognizes actions using static and motion features extracted from the ROAs. The main contributions of this paper are summarized as follows.
1. Propose a feature representation method that integrates spatial-temporal information from the optical flow field and the Harris3D detector into a new motion representation. The method is shown to be robust on video sequences captured in uncontrolled/unconstrained environments.
2. Utilize the new motion representation in an unsupervised action detection method based on the idea of integral density, which locates regions with a high density of motion.
3. Learn a universal background model for video representation using features from the ROAs instead of the whole feature set. A boosting classifier for action recognition is trained by assembling sparse representation classifiers and Hamming distance classifiers.
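The integral-density idea in the second contribution can be sketched as follows: accumulate detected motion points into a 2-D map, build an integral image (summed-area table) over it, and take the window with the highest point count as an ROA candidate. The sketch below is a schematic pure-Python illustration under assumed inputs (a binary motion-point map and a fixed window size), not the paper's exact detector:

```python
def integral_image(grid):
    """Summed-area table: ii[y][x] = sum of grid[0..y-1][0..x-1]."""
    h, w = len(grid), len(grid[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (grid[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def densest_window(grid, wh, ww):
    """Return (y, x, count) of the wh-by-ww window containing the most motion points.

    Each window sum costs O(1) via four integral-image lookups, so the full
    search is linear in the number of candidate windows.
    """
    ii = integral_image(grid)
    best = (0, 0, -1)
    for y in range(len(grid) - wh + 1):
        for x in range(len(grid[0]) - ww + 1):
            count = (ii[y + wh][x + ww] - ii[y][x + ww]
                     - ii[y + wh][x] + ii[y][x])
            if count > best[2]:
                best = (y, x, count)
    return best

# Motion points clustered in the lower-right corner of a 4x4 map.
motion = [[0, 0, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 1, 1],
          [0, 0, 1, 1]]
roa = densest_window(motion, 2, 2)  # the 2x2 window covering the cluster
```

In practice the motion map would come from the optical-flow/Harris3D representation of contribution 1, and the window search would run over multiple scales rather than a single fixed size.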
The rest of this paper is organized as follows. The related work is reviewed first, followed by the discussion of the proposed framework for action detection and recognition. The effectiveness of the proposed framework is verified via experiments on the KTH (Schuldt, Laptev, & Caputo, 2004) and UCF11 (Liu et al., 2009) data sets. Finally, a conclusion is drawn to summarize the paper.