Vision Based Hand Posture Recognition

Kongqiao Wang (Nokia Research Center, China), Yikai Fang (Nokia Research Center, China) and Xiujuan Chai (Chinese Academy of Sciences, China)
Copyright: © 2011 | Pages: 16
DOI: 10.4018/978-1-60960-024-2.ch011


Vision based gesture recognition has been a hot research topic in recent years. Many researchers focus on differentiating various hand shapes, i.e. static hand gesture recognition or hand posture recognition, which is one of the fundamental problems in vision based gesture analysis. In general, the visual cues most frequently used to describe the hand are appearance and structure information, yet recognition with such information is difficult due to variations in hand shape and differences between subjects. To obtain a good representation of the hand area, methods based on local features and texture histograms have been attempted, and learning based classification strategies have been designed for the different descriptors or features. In this chapter, we mainly focus on 2D geometric and appearance models, the design of a local texture descriptor, and a semi-supervised learning strategy with different features for hand posture recognition.
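To make the texture-histogram idea concrete, the following is a minimal sketch of one common local texture descriptor, an 8-neighbor local binary pattern (LBP) histogram computed over a hand region. The specific descriptor designed in this chapter differs; this is only an illustrative example of histogram-based hand representation.

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """8-neighbor local binary pattern histogram of a grayscale image.

    Illustrative texture-histogram hand descriptor (not the chapter's
    own descriptor): each interior pixel gets an 8-bit code encoding
    which neighbors are at least as bright as the center, and the
    normalized histogram of codes describes the region's texture.
    """
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]                       # center pixels
    # 8 neighbors, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nbr = g[1 + dy : g.shape[0] - 1 + dy,
                1 + dx : g.shape[1] - 1 + dx]
        codes |= (nbr >= c).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, 256))
    return hist / hist.sum()                # normalized descriptor
```

Such a histogram (or a concatenation of histograms over sub-blocks of the hand area) can then serve as the feature vector fed to the learning-based classifier.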
Chapter Preview


Methods for vision based hand gesture recognition fall into two categories: 3D model based methods and appearance model based methods. A 3D model can describe hand movement and shape exactly, but most 3D models are computationally expensive to use. Recently, some methods obtain a 3D model from a 2D appearance model, such as ISOSOM (Haiying, Rogerio & Matthew, 2006) and PCA-ICA (Makoto, Yenwei & Gang, 2006). For ease of implementation and their non-intrusive character, we prefer 2D model and appearance based methods for hand posture recognition in this chapter.

Freeman and Weissman (1995) recognized gestures for television control using normalized correlation. This technique is efficient but may be sensitive to different users, deformations of the pose, and changes in scale and background. Cui and Weng (1996) proposed a hand tracking and sign recognition method using an appearance based approach. Although its accuracy was satisfactory, its performance was far from real-time. Elastic graphs with local jets of Gabor filters were applied to represent hands in different gestures in Triesch's work (Triesch & Malsburg, 1996). That method locates hands without a separate segmentation mechanism, but its classifier is learned from a small set of image samples, so its generalization is limited. These model based methods are intuitive for hand representation; with elaborate design of local features, no complicated classification strategy is necessary. Bretzner, Laptev & Lindeberg (2002) used scale-space feature detection to decompose the hand into palm and fingers. The decomposition is intuitive and effective, but the detection involves many Gaussian convolutions across the image and is therefore time-consuming in practice.
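The normalized-correlation matching mentioned above can be sketched as follows: slide a hand template over the image and score each position with zero-mean normalized cross-correlation (NCC), whose value lies in [-1, 1]. This is a toy illustration of the general technique, not Freeman and Weissman's actual system, which would use optimized correlation routines and handle scale.

```python
import numpy as np

def normalized_correlation(image, template):
    """Score every template position in `image` with zero-mean NCC.

    Toy sketch of normalized-correlation template matching: subtract
    the mean from template and patch, then divide their dot product
    by the product of their norms, giving a score in [-1, 1].
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    H, W = image.shape
    scores = np.full((H - th + 1, W - tw + 1), -1.0)
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            pnorm = np.sqrt((p * p).sum())
            if pnorm > 0 and tnorm > 0:
                scores[y, x] = (p * t).sum() / (pnorm * tnorm)
    return scores
```

The best-matching position is `np.unravel_index(scores.argmax(), scores.shape)`; the mean subtraction gives some robustness to uniform lighting changes, but, as noted above, not to pose deformation or scale change.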
