Action Recognition

Qingdi Wei (National Laboratory of Pattern Recognition, China), Xiaoqin Zhang (National Laboratory of Pattern Recognition, China) and Weiming Hu (National Laboratory of Pattern Recognition, China)
Copyright: © 2010 |Pages: 16
DOI: 10.4018/978-1-60566-900-7.ch012


Action recognition is one of the most active research fields in computer vision. This chapter first reviews the action recognition methods in the literature from two aspects: action representation and recognition strategy. Then, a novel method for classifying human actions from image sequences is investigated. In this method, each human action is represented by a sequence of shape context features of the human silhouette during the action, and a dominant set-based approach is employed to classify the action into predefined classes. The dominant set-based approach is compared with K-means, mean shift, and fuzzy C-means approaches.

1. Introduction

Action recognition has been extensively studied in the computer vision and pattern recognition community due to its crucial value in numerous applications, including video surveillance and monitoring, human-computer interaction, and video indexing and browsing. Despite the increasing amount of work in this field in recent years, human action recognition remains challenging for several reasons: (i) it is hard to find a general descriptor for human action, because the human body is non-rigid and has many degrees of freedom; (ii) the time taken by an action is variable, which poses the problem of action segmentation; (iii) nuisance factors, such as the environment, self-occlusion, low-quality video, and irregularity of camera parameters, add further difficulty to the problem.

In this chapter, a novel action recognition approach is introduced. Unlike traditional methods, it does not require the segmentation of actions. In more detail, a set of shape context features (Serge, Jitendra, & Jan, 2002; Wang, Jiang, Drew, Li, & Mori, 2006; de Campos & Murray, 2006; Ling & Jacobs, 2007) is first extracted to represent the human action, and then a specialized clustering approach, the dominant set (Pavan & Pelillo, 2003; Hu & Hu, 2006), is employed to cluster the action features into an action database. Finally, each test video sequence is classified by comparing it to the action database.
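The dominant-set step can be illustrated with a short sketch. The snippet below is a minimal NumPy implementation of the replicator-dynamics iteration described by Pavan and Pelillo (2003), run on a toy affinity matrix; the function name, stopping criterion, and toy affinities are illustrative assumptions, not the chapter's exact formulation.

```python
import numpy as np

def dominant_set(A, tol=1e-8, max_iter=1000):
    """Extract one dominant set from a symmetric, non-negative affinity
    matrix A with zero diagonal, via replicator dynamics (after Pavan &
    Pelillo, 2003). Returns the characteristic vector x on the simplex;
    cluster members are the indices carrying non-negligible mass."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)          # start from the simplex barycenter
    for _ in range(max_iter):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)    # replicator-dynamics update
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Toy affinity matrix: items 0-2 are mutually similar, items 3-4 are not.
A = np.array([[0.0, 1.0, 1.0, 0.1, 0.1],
              [1.0, 0.0, 1.0, 0.1, 0.1],
              [1.0, 1.0, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.1, 0.0, 0.2],
              [0.1, 0.1, 0.1, 0.2, 0.0]])
x = dominant_set(A)
members = np.where(x > 1e-3)[0]      # mass concentrates on items 0, 1, 2
```

In the chapter's setting, the affinity matrix would be built from pairwise similarities between shape context feature sequences rather than the toy values used here.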

The rest of the chapter is organized as follows. Related work is discussed in Section 2. Section 3 presents the proposed method: the action feature in Section 3.1 and the recognition method in Section 3.2. Experimental results are shown in Section 4, and Section 5 concludes the chapter.

Figure 1.

Examples of video sequences and extracted silhouettes (adapted from the Weizmann action dataset, 2007)


2. Related Work

Generally, there are two main parts in a human action recognition system: human action representation and recognition strategy.

The human action representation model is a basic issue in an action recognition system. Contours, shapes (Elgammal, Shet, Yacoob, & Davis, 2003), and silhouettes (Weinland & Boyer, 2008; Cheung, Baker, & Kanade, 2003) of the human body contain rich information about human action. Liu and Ahuja (2004) used shape Fourier descriptors (SFD) to describe each silhouette. Gorelick, Meirav, Sharon, Basri, and Brandt (2006) suggested using the Poisson equation to represent each contour. Kale, Sundaresan, Rajagopalan, Cuntoor, Roy-Chowdhury, Kruger, and Chellappa (2004) utilized a vector of widths to the same end. Carlsson and Sullivan (2001) extracted shape information from individual frames to construct prototypes representing key frames of the action. Goldenberg, Kimmel, Rivlin, and Rudzsky (2005) adopted principal component analysis (PCA) to extract eigenshapes from silhouette images for behavior classification. Ikizler and Duygulu (2007) extracted rectangular regions from a human silhouette and formed a spatially oriented histogram of these rectangles.
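As a concrete illustration of one representation mentioned above, the sketch below computes a simple contour-based Fourier descriptor in the spirit of the shape Fourier descriptors used by Liu and Ahuja (2004); the function name and the particular normalization choices are assumptions made for illustration, not their published formulation.

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=16):
    """Invariant Fourier descriptor of a closed contour given as an
    (N, 2) array of (x, y) boundary points (illustrative variant)."""
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as a complex signal
    F = np.fft.fft(z)
    F[0] = 0                                # drop DC term: translation invariance
    mags = np.abs(F)                        # magnitudes: rotation invariance
    mags = mags / (mags[1] + 1e-12)         # divide by 1st harmonic: scale invariance
    return mags[1:n_coeffs + 1]
```

Because the descriptor discards the DC term and normalizes by the first harmonic, two silhouette contours that differ only by translation and uniform scaling yield the same feature vector, which is the property that makes such descriptors attractive for matching silhouettes across frames.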
