
Machine Learning for Human Motion Analysis: Theory and Practice

Release Date: December 2009. Copyright © 2010. 318 pages.
DOI: 10.4018/978-1-60566-900-7, ISBN13: 9781605669007, ISBN10: 1605669008, EISBN13: 9781605669014

Description

With the ubiquitous presence of video data and its increasing importance in a wide range of real-world applications, it is becoming increasingly necessary to automatically analyze and interpret object motions from large quantities of footage.

Machine Learning for Human Motion Analysis: Theory and Practice highlights the development of robust and effective vision-based motion understanding systems. This advanced publication addresses a broad audience, including practicing professionals working with specific vision applications such as surveillance, sport event analysis, healthcare, video conferencing, and motion video indexing and retrieval.


Table of Contents and List of Contributors

Table of Contents

Preface
Liang Wang, Li Cheng, Guoying Zhao

Chapter 1. Human Motion Tracking in Video: A Practical Approach
Tony Tung, Takashi Matsuyama

Chapter 2. Learning to Recognise Spatio-Temporal Interest Points
Olusegun T. Oshin, Andrew Gilbert, John Illingworth, Richard Bowden

Chapter 3. Graphical Models for Representation and Recognition of Human Actions
Pradeep Natarajan, Ramakant Nevatia

Chapter 4. Common Spatial Patterns for Real-Time Classification of Human Actions
Ronald Poppe

Chapter 5. KSM Based Machine Learning for Markerless Motion Capture
Therdsak Tangkuampien, David Suter

Chapter 6. Multi-Scale People Detection and Motion Analysis for Video Surveillance
YingLi Tian, Rogerio Feris, Lisa Brown, Daniel Vaquero, Yun Zhai, Arun Hampapur

Chapter 7. A Generic Framework for 2D and 3D Upper Body Tracking
Lei Zhang, Jixu Chen, Zhi Zeng, Qiang Ji

Chapter 8. Real-Time Recognition of Basic Human Actions
Vassilis Syrris

Chapter 9. Fast Categorisation of Articulated Human Motion
Konrad Schindler, Luc van Gool

Chapter 10. Human Action Recognition with Expandable Graphical Models
Wanqing Li, Zhengyou Zhang, Zicheng Liu, Philip Ogunbona

Chapter 11. Detection and Classification of Interacting Persons
Scott Blunsden, Robert Fisher

Chapter 12. Action Recognition
Qingdi Wei, Xiaoqin Zhang, Weiming Hu

Chapter 13. Distillation: A Super-Resolution Approach for the Selective Analysis of Noisy and Unconstrained Video Sequences
Dong Seon Cheng, Marco Cristani, Vittorio Murino

Reviews and Testimonials

This book contains an excellent collection of theoretical and technical chapters written by different authors who are worldwide-recognized researchers on various aspects of human motion understanding using machine learning methods.

– Liang Wang, Li Cheng, Guoying Zhao

Topics Covered

  • 2D and 3D upper body tracking
  • Detection and classification of interacting persons
  • Fast categorization of articulated human motion
  • Graphical models for human actions
  • Human action recognition
  • Human motion tracking in video
  • Machine learning for motion capture
  • Motion analysis for video surveillance
  • Multi-scale people detection
  • Real-time recognition of human actions
  • Spatial patterns for real-time classification
  • Spatio-temporal interest points

Preface


The goal of vision-based motion analysis is to provide computers with intelligent perception capabilities, so that they can sense objects and understand their behaviors from video sequences. With the ubiquitous presence of video data and its increasing importance in a wide range of applications such as visual surveillance, human-machine interfaces, and sport event interpretation, it is becoming increasingly necessary to automatically analyze and understand object motions from large amounts of video footage.

Not surprisingly, this exciting research area has received growing interest in recent years. Although there has been significant progress in the past decades, many challenging problems remain unsolved, e.g., robust object detection and tracking and unconstrained object activity recognition. The field of machine learning, on the other hand, is driven by the idea that the essential rules or patterns behind data can be learned automatically by a computer or a system. Statistical learning is one major frontier of computer vision research, and recent years have seen a growing number of successful applications of machine learning to vision problems. We firmly believe that machine learning technologies will contribute significantly to the development of practical systems for vision-based motion analysis.

This edited book presents and highlights a collection of recent developments along this direction. A brief summary of each chapter is presented as follows:

Chapter 01, “Human Motion Tracking in Video: A Practical Approach”, presents a new formulation for the problem of human motion tracking in video. Tracking is still a challenging problem when strong appearance changes occur, as in videos of humans in motion. A solution is to use an online method that iteratively updates a subspace of reference target models, integrating color and motion cues in a particle filter framework to track human body parts. The algorithm switches between two modes, detection and tracking. The detection steps involve trained classifiers that update the estimated positions of the tracking windows, whereas the tracking steps rely on an adaptive color-based particle filter coupled with optical flow estimations. The Earth Mover's Distance is used to compare color models in a global fashion, and constraints on flow features avoid drifting effects. The proposed method has proven efficient at tracking body parts in motion and can cope with full appearance changes.
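
As a concrete illustration of the tracking mode, the following is a minimal sketch of one predict/update/resample cycle of a color-based particle filter. It is a simplification under stated assumptions: single-channel “hue” frames, a fixed window size, and the Bhattacharyya coefficient as a stand-in where the chapter uses the Earth Mover's Distance; the detection mode and the adaptive subspace of reference models are omitted.

```python
import numpy as np

def hue_histogram(patch, bins=16):
    """Normalized histogram of hue values in an image patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def particle_filter_step(particles, weights, frame, ref_hist,
                         motion_std=4.0, sigma=0.2, win=16, rng=None):
    """One predict/update/resample cycle over (x, y) window centers."""
    rng = rng or np.random.default_rng()
    # Predict: diffuse particles with Gaussian motion noise.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    h, w = frame.shape
    particles[:, 0] = np.clip(particles[:, 0], win, w - win - 1)
    particles[:, 1] = np.clip(particles[:, 1], win, h - win - 1)
    # Update: weight particles by color-model similarity to the reference
    # (Bhattacharyya here; the chapter uses the Earth Mover's Distance).
    for i, (x, y) in enumerate(particles.astype(int)):
        patch = frame[y - win:y + win, x - win:x + win]
        bc = np.sum(np.sqrt(hue_histogram(patch) * ref_hist))
        weights[i] = np.exp(-(1.0 - bc) / (2 * sigma ** 2))
    weights = weights / weights.sum()
    # Resample: draw a new particle set proportionally to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(0)
frame = rng.random((240, 320))                     # synthetic "hue" frame
ref_hist = hue_histogram(frame[104:136, 144:176])  # model around (160, 120)
particles = np.tile([160.0, 120.0], (100, 1))
weights = np.full(100, 1.0 / 100)
particles, weights = particle_filter_step(particles, weights, frame,
                                          ref_hist, rng=rng)
print(particles.mean(axis=0))                      # tracked window center
```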

Chapter 02, “Learning to Recognise Spatio-Temporal Interest Points”, presents a generic classifier for detecting spatio-temporal interest points within video. The premise is that, given an interest point detector, a classifier can be learnt that duplicates its functionality while being both accurate and computationally efficient. This means that interest point detection can be achieved independently of the complexity of the original interest point formulation. The naive Bayesian classifier of Ferns is extended to the spatio-temporal domain, and classifiers are learnt that duplicate the functionality of common spatio-temporal interest point detectors. Results demonstrate that the detectors' output is reproduced accurately by a classifier that can be applied exhaustively to video at frame rate, without optimisation, in a scanning-window approach.
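
To illustrate the flavour of the approach, here is a minimal sketch of a Ferns-style naive Bayesian classifier over flattened (spatio-temporal) patches. The binary pixel-pair tests, fern count, and depth are illustrative assumptions, not the chapter's exact configuration.

```python
import numpy as np

class Ferns:
    """Semi-naive Bayesian classifier: groups ("ferns") of binary
    pixel-pair tests, combined under a naive independence assumption."""

    def __init__(self, n_ferns=10, depth=8, n_classes=2, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_ferns, self.depth, self.n_classes = n_ferns, depth, n_classes
        self.pairs = None                                   # test locations
        self.counts = np.ones((n_ferns, 2 ** depth, n_classes))  # Laplace

    def _index(self, X):
        # Each fern maps its binary test outcomes to one integer index.
        bits = X[:, self.pairs[..., 0]] > X[:, self.pairs[..., 1]]
        return (bits * (2 ** np.arange(self.depth))).sum(axis=-1)

    def fit(self, X, y):
        d = X.shape[1]
        self.pairs = self.rng.integers(0, d, (self.n_ferns, self.depth, 2))
        idx = self._index(X)                    # (n_samples, n_ferns)
        for f in range(self.n_ferns):
            np.add.at(self.counts[f], (idx[:, f], y), 1)

    def predict(self, X):
        idx = self._index(X)
        probs = self.counts / self.counts.sum(axis=2, keepdims=True)
        # Naive Bayes: sum log class-conditionals over the ferns.
        logp = sum(np.log(probs[f][idx[:, f]]) for f in range(self.n_ferns))
        return logp.argmax(axis=1)

rng = np.random.default_rng(1)
ramp = np.linspace(0, 1, 64)
X0 = ramp + rng.normal(0, 0.3, (100, 64))         # class 0: rising profile
X1 = ramp[::-1] + rng.normal(0, 0.3, (100, 64))   # class 1: falling profile
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 100)
model = Ferns()
model.fit(X, y)
print((model.predict(X) == y).mean())   # training accuracy on toy data
```

Because each fern reduces to a table lookup, the classifier can be evaluated exhaustively over a video volume at low cost, which is the point of the approach.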

Chapter 03, “Graphical Models for Representation and Recognition of Human Actions”, reviews graphical models, which provide a natural framework for representing state transitions in events as well as the spatio-temporal constraints between actors and events. Hidden Markov Models (HMMs) have been widely used in action recognition applications, but the basic representation has three key deficiencies: unrealistic models for the duration of a sub-event, no direct encoding of interactions among multiple agents, and no modeling of the inherent hierarchical organization of activities. Several extensions have been proposed to address one or more of these issues and have been successfully applied in various gesture and action recognition domains. More recently, Conditional Random Fields (CRFs) have become increasingly popular, since they allow complex potential functions for modeling observations and state transitions, and also outperform HMMs when sufficient training data is available. This chapter first reviews the various extensions of these graphical models, then presents the theory of inference and learning in them, and finally discusses their applications in various domains.
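
As background for the models the chapter reviews, here is a worked sketch of the HMM forward algorithm, which computes the log-likelihood of an observation sequence; the two-state transition and emission matrices are toy values, not taken from the chapter.

```python
import numpy as np

def hmm_forward(A, B, pi, obs):
    """Log-likelihood of an observation sequence under an HMM, via the
    forward algorithm with per-step normalization for stability."""
    alpha = pi * B[:, obs[0]]           # joint of state and first symbol
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

# Two hidden states ("walking", "running"), two symbols (slow, fast step).
A  = np.array([[0.9, 0.1],              # state transition probabilities
               [0.2, 0.8]])
B  = np.array([[0.8, 0.2],              # emission probabilities
               [0.3, 0.7]])
pi = np.array([0.5, 0.5])               # initial state distribution
print(hmm_forward(A, B, pi, [0, 0, 1, 1, 1]))
```

Action recognition with HMMs amounts to running this computation once per action model and picking the model with the highest likelihood; the extensions the chapter surveys enrich the state structure rather than change this basic recursion.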

Chapter 04, “Common Spatial Patterns for Real-Time Classification of Human Actions”, presents a discriminative approach to human action recognition. At the heart of the approach is the use of common spatial patterns (CSP), a spatial filter technique that transforms temporal feature data by using differences in variance between two classes. Such a transformation focuses on differences between classes, rather than on modeling each class individually. The most likely class is found by pairwise evaluation of all discriminant functions, which can be done in real time. Image representations are silhouette boundary gradients, spatially binned into cells. The method achieves scores of approximately 96% on the Weizmann human action dataset, and shows that reasonable results can be obtained when training on only a single subject.
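
Below is a minimal sketch of the core CSP computation: spatial filters are obtained from a generalized eigendecomposition of the two class covariance matrices, so that filtered outputs have maximally different variance between the classes. The toy data and the use of scipy are assumptions for illustration; the chapter's actual pipeline operates on silhouette-gradient features.

```python
import numpy as np
from scipy.linalg import eigh   # assumes scipy is available

def csp_filters(X1, X2):
    """X1, X2: (trials, channels, samples) data per class; returns spatial
    filters as rows, sorted from most to least discriminative."""
    def avg_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w: eigenvalues near
    # 1 or 0 mark filters whose output variance best separates the classes.
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(np.abs(vals - 0.5))[::-1]
    return vecs[:, order].T

rng = np.random.default_rng(0)
scale1 = np.array([3.0, 1, 1, 1, 1, 1])[:, None]   # class 1 loud on ch. 0
scale2 = np.array([1.0, 1, 1, 1, 1, 3])[:, None]   # class 2 loud on ch. 5
X1 = rng.normal(size=(20, 6, 100)) * scale1
X2 = rng.normal(size=(20, 6, 100)) * scale2
W = csp_filters(X1, X2)
print(W[0].round(2))   # most discriminative filter: weights ch. 0 or ch. 5
```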

Chapter 05, “KSM Based Machine Learning for Markerless Motion Capture”, proposes and tests a marker-less motion capture system based on machine learning, whose central concept the authors call Kernel Subspace Mapping (KSM). Pose information is inferred from images captured from multiple (as few as two) synchronized cameras. The images-to-pose learning can be done with large numbers of images of a large variety of people, with the ground-truth poses accurately known. What makes machine learning viable for human motion capture is that a high percentage of human motion is coordinated. Indeed, it is now relatively well known that there is large redundancy in the set of possible images of a human (these images form a relatively smooth lower-dimensional manifold in the huge-dimensional space of all possible images) and in the set of pose angles (again, a low-dimensional and smooth sub-manifold of the moderately high-dimensional space of all possible joint angles). KSM is based on the Kernel PCA (KPCA) algorithm, which is costly; the authors show that the Greedy Kernel PCA (GKPCA) algorithm can be used to speed up KSM with relatively minor modifications. At the core, then, are two KPCAs (or two GKPCAs), one for learning the pose manifold and one for learning the image manifold. A modification of Locally Linear Embedding (LLE) then bridges between the pose and image manifolds.
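
For orientation, here is a minimal sketch of plain kernel PCA with an RBF kernel, the costly building block that GKPCA accelerates; the toy data, kernel choice, and bandwidth are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=0.5):
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    # Center the kernel matrix in feature space.
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]        # reorder to descending
    alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
    return Kc @ alphas                            # projections of X

X = np.random.default_rng(0).normal(size=(50, 10))   # stand-in pose vectors
print(kernel_pca(X).shape)                           # (50, 2)
```

The eigendecomposition is cubic in the number of training samples, which is the cost GKPCA attacks by greedily selecting a small basis subset.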

Chapter 06, “Multi-Scale People Detection and Motion Analysis for Video Surveillance”, addresses the visual processing of people, including detection, tracking, recognition, and behavior interpretation, which is a key component of intelligent video surveillance systems. Computer vision algorithms capable of “looking at people” at multiple scales can be applied in different surveillance scenarios, such as far-field people detection for wide-area perimeter protection, midfield people detection for retail/banking applications or parking lot monitoring, and near-field people/face detection for facility security and access. In this chapter, the authors address the people detection problem at different scales, as well as human tracking and motion analysis for real video surveillance applications including people search, retail loss prevention, people counting, and display effectiveness.
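
As a rough sketch of the multi-scale mechanism, the following scans an image pyramid with a sliding window and maps hits back to original-image coordinates. The `score_window` function is a hypothetical placeholder for a trained person/face detector, and the nearest-neighbour downsampling is a simplification.

```python
import numpy as np

def score_window(window):
    """Hypothetical detector score; a real system uses a trained model."""
    return window.mean()

def detect_multiscale(image, win=32, stride=16, min_size=64, thresh=0.5):
    detections = []
    factor = 1.0   # scale of the current pyramid level vs. the original
    while min(image.shape) >= min_size:
        for y in range(0, image.shape[0] - win + 1, stride):
            for x in range(0, image.shape[1] - win + 1, stride):
                score = score_window(image[y:y + win, x:x + win])
                if score > thresh:
                    # Map the hit back into original-image coordinates.
                    detections.append((x / factor, y / factor,
                                       win / factor, score))
        image = image[::2, ::2]   # crude downsample; rescan coarser level
        factor *= 0.5
    return detections

img = np.random.default_rng(0).random((256, 256)) * 0.2
img[64:128, 64:128] += 0.7        # synthetic bright "person" region
hits = detect_multiscale(img)
print(len(hits))                  # windows covering the bright region
```

Coarse pyramid levels correspond to far-field detection (small apparent size), while the finest level handles near-field targets; real systems replace the placeholder score with scale-appropriate trained detectors.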

Chapter 07, “A Generic Framework for 2D and 3D Upper Body Tracking”, targets upper body tracking, the problem of tracking the pose of the human body from video sequences. It is difficult due to the high dimensionality of the state space, self-occlusion, appearance changes, and similar issues. In this chapter, the authors propose a generic framework that can be used for both 2D and 3D upper body tracking and can be easily parameterized without depending heavily on supervised training. They first construct a Bayesian Network (BN) to represent the human upper body structure and then incorporate into the BN various generic physical and anatomical constraints on the parts of the upper body. They also explicitly model part occlusion, which allows the system to automatically detect the occurrence of self-occlusion and to minimize the effect of occlusion-induced measurement errors on tracking accuracy. Using the proposed model, upper body tracking can be performed through probabilistic inference over time. A series of experiments on both monocular and stereo video sequences demonstrates the effectiveness of the model in improving upper body tracking accuracy and robustness.

Chapter 08, “Real-Time Recognition of Basic Human Actions”, describes a simple and computationally efficient appearance-based approach for real-time recognition of basic human actions. The authors apply a technique that computes the differences between two or more successive frames, followed by a threshold filter, to detect the regions of the video frames where some type of human motion is observed. From each frame difference, the algorithm extracts an incomplete and unformed human body shape and generates a skeleton model that represents it in an abstract way. Finally, the recognition process is formulated as a time-series problem and handled by a robust and accurate prediction method, Support Vector Regression. The proposed technique could be employed in applications such as vision-based autonomous robots and surveillance systems.
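
The front end of such a pipeline is easy to sketch: the snippet below thresholds frame differences into a binary motion mask and extracts the moving region's bounding box, the raw material from which a shape and skeleton model would be built. It is a toy under stated assumptions, not the chapter's implementation.

```python
import numpy as np

def motion_mask(prev_frame, frame, thresh=0.1):
    """Binary mask of pixels whose intensity changed between frames."""
    return np.abs(frame.astype(float) - prev_frame.astype(float)) > thresh

def motion_bbox(mask):
    """Bounding box (x0, y0, x1, y1) of the moving region, or None."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

rng = np.random.default_rng(0)
f0 = rng.random((120, 160)) * 0.05       # near-static background
f1 = f0.copy()
f1[40:80, 60:90] += 0.5                  # synthetic moving region
print(motion_bbox(motion_mask(f0, f1)))  # (60, 40, 89, 79)
```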

Chapter 09, “Fast Categorisation of Articulated Human Motion”, addresses the problem of visual categorisation of human motion in video clips. Most published methods either analyse an entire video and assign it a single category label, or use a relatively large look-ahead to classify each frame. Contrary to these strategies, the human visual system demonstrates that simple categories can be recognised almost instantaneously. Here the authors present a system for categorisation from very short sequences (“snippets”) of 1-10 frames, and systematically evaluate it on several data sets. It turns out that even local shape and optic flow for a single frame are enough to achieve 80-90% correct classification, and snippets of 5-7 frames (0.2-0.3 seconds of video) yield results on par with those that state-of-the-art methods obtain on entire video sequences.
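
A minimal sketch of snippet-based classification follows: per-frame descriptors (a coarse gradient-orientation histogram here, standing in for the chapter's shape and optic-flow features) are pooled over a short snippet and labelled by a nearest-centroid rule rather than the chapter's actual classifier.

```python
import numpy as np

def frame_features(frame):
    """Stand-in per-frame descriptor: coarse gradient-orientation histogram
    (the chapter uses local shape and optic-flow responses instead)."""
    gy, gx = np.gradient(frame.astype(float))
    ang = np.arctan2(gy, gx).ravel()
    hist, _ = np.histogram(ang, bins=8, range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy).ravel())
    return hist / max(hist.sum(), 1e-9)

def snippet_descriptor(frames):
    """Pool per-frame features over a short (1-10 frame) snippet."""
    return np.mean([frame_features(f) for f in frames], axis=0)

def classify(snippet, centroids):
    """Nearest-centroid labelling of one snippet."""
    d = snippet_descriptor(snippet)
    return min(centroids, key=lambda c: np.linalg.norm(d - centroids[c]))

rng = np.random.default_rng(0)
noise = [rng.random((64, 64)) for _ in range(5)]    # isotropic gradients
ramps = [np.tile(np.linspace(0, 1, 64), (64, 1)) for _ in range(5)]
centroids = {"noise": snippet_descriptor(noise),
             "ramp": snippet_descriptor(ramps)}
print(classify(ramps, centroids))   # "ramp"
```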

Chapter 10, “Human Action Recognition with Expandable Graphical Models”, proposes an action recognition system that is independent of the subjects who perform the actions, independent of the speed at which the actions are performed, robust against noisy extraction of the features used to characterize the actions, scalable to a large number of actions, and expandable with new actions. The authors describe a recently proposed expandable graphical model of human actions that promises to realize such a system. The chapter first presents a brief review of recent developments in human action recognition. The expandable graphical model is then presented in detail, and a system that learns and recognizes human actions from sequences of silhouettes using the model is developed.

Chapter 11, “Detection and Classification of Interacting Persons”, presents a way to classify interactions between people. Examples of the interactions investigated are people meeting one another, walking together, and fighting. A new feature set is proposed along with a corresponding classification method. Results are presented which show the new method performing significantly better than the previous state-of-the-art method of Oliver et al.

Chapter 12, “Action Recognition”, first reviews current action recognition methods from two aspects: action representation and recognition strategy. Then, a novel method for classifying human actions from image sequences is investigated. In this method, the human action is represented by a set of shape context features of the human silhouette, and a dominant-sets-based approach is employed to classify the predefined actions. A comparison of the dominant-sets-based approach with K-means, mean shift, and fuzzy C-means is also discussed.
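
For reference, the following is a minimal sketch of dominant-set extraction by replicator dynamics on a pairwise similarity matrix, the clustering rule compared against K-means, mean shift, and fuzzy C-means; the Gaussian similarities and toy 2D points are illustrative assumptions.

```python
import numpy as np

def dominant_set(A, iters=500, cutoff=1e-4):
    """Support of the dominant set of similarity matrix A (zero diagonal),
    found by replicator dynamics on the standard simplex."""
    x = np.full(len(A), 1.0 / len(A))
    for _ in range(iters):
        x = x * (A @ x)                 # replicator dynamics update
        s = x.sum()
        if s == 0:
            break
        x /= s
    return np.nonzero(x > cutoff)[0]

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (10, 2)),    # tight cluster of 10
                 rng.normal(3, 0.3, (5, 2))])    # smaller cluster of 5
d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
A = np.exp(-d2 / 0.5)
np.fill_diagonal(A, 0.0)
print(dominant_set(A))   # indices fall inside the 10-point cluster
```

Unlike K-means, the number of clusters need not be fixed in advance: peeling off one dominant set and repeating on the remaining points yields clusters one at a time.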

Chapter 13, “Distillation: A Super-Resolution Approach for the Selective Analysis of Noisy and Unconstrained Video Sequences”, argues that image super-resolution is one of the most appealing applications of image processing, capable of retrieving a high-resolution image by fusing several registered low-resolution images depicting an object of interest. However, employing super-resolution on video data is challenging: a video sequence generally contains scattered information about several objects of interest in cluttered scenes. The objective of this chapter is to demonstrate why standard image super-resolution fails on video data, what problems arise, and how they can be overcome. The authors propose a novel Bayesian framework for super-resolution of persistent objects of interest in video sequences, called Distillation. With Distillation, they extend and generalize the image super-resolution task, embedding it in a structured framework that accurately distills all the informative bits of an object of interest. They also extend the Distillation process to deal with objects of interest whose appearance transformations are not (only) rigid. The ultimate product of the overall process is a strip of images that describes the dynamics of the video at high resolution, switching between alternative local descriptions in response to visual changes. The approach is first tested on synthetic data, obtaining encouraging comparative results with respect to known super-resolution techniques and good robustness against noise. Second, real data from different videos are considered, aiming to recover the salient details of the objects in motion.
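
As background, here is a minimal sketch of classical shift-and-add super-resolution, the fusion idea that Distillation extends: registered low-resolution frames with known sub-pixel shifts are accumulated onto a finer grid. The synthetic frames and exact shifts are idealized assumptions; real video requires registration and is noisy, which is precisely where the chapter's Bayesian framework comes in.

```python
import numpy as np

def shift_and_add(lr_frames, shifts, factor=2):
    """Fuse registered low-resolution frames onto a grid `factor` times
    finer; `shifts` are per-frame (dy, dx) offsets in high-res pixels."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        for y in range(h):
            for x in range(w):
                hy, hx = y * factor + dy, x * factor + dx
                if 0 <= hy < acc.shape[0] and 0 <= hx < acc.shape[1]:
                    acc[hy, hx] += frame[y, x]
                    cnt[hy, hx] += 1
    cnt[cnt == 0] = 1                   # leave unobserved sites at zero
    return acc / cnt

rng = np.random.default_rng(0)
hi = rng.random((32, 32))               # ground-truth high-res image
frames = [hi[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
sr = shift_and_add(frames, shifts)
print(np.allclose(sr, hi))              # exact in this idealized setting
```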

In summary, this book contains an excellent collection of theoretical and technical chapters written by authors who are worldwide-recognized researchers on various aspects of human motion understanding using machine learning methods. The targeted audiences are mainly researchers, engineers, and graduate students in the areas of computer vision and machine learning. The book is also intended to be accessible to a broader audience, including practicing professionals working with specific vision applications such as video surveillance, sport event analysis, healthcare, video conferencing, and motion video indexing and retrieval. We hope this book will help toward the development of robust yet flexible vision systems.

    Dr. Liang Wang
    The University of Melbourne, Australia

    Li Cheng
    TTI-Chicago, USA

    Guoying Zhao
    University of Oulu, Finland


Author(s)/Editor(s) Biography

Liang Wang obtained the BEng and MEng degrees in electronic engineering from Anhui University and the PhD in pattern recognition and intelligent systems from the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. From July 2004 to January 2007, he worked at Imperial College London (UK) and at Monash University (Australia), respectively. He is currently a research fellow at The University of Melbourne (Australia). His main research interests include pattern recognition, machine learning, computer vision, and data mining. He has published widely in IEEE TPAMI, TIP, TKDE, TCSVT, TSMC, CVIU, PR, CVPR, ICCV, and ICDM. He serves many major international journals and conferences as an associate editor, reviewer, or PC member. He is currently an associate editor of IEEE TSMC-B, IJIG, and Signal Processing, a co-editor of four books to be published by IGI Global and Springer, and a guest editor of three special issues for the international journals PRL, IJPRAI, and IEEE TSMC-B, as well as co-chair of a special session and three workshops for VM’08, MLVMA’08, and THEMIS’08.
Li Cheng received the BS degree from Jilin University, China, the ME degree from Nankai University, China, and the PhD degree from the Department of Computing Science, University of Alberta, Canada, in 2004. He worked as a research associate in the same department at the University of Alberta, and is now a postdoc with the Machine Learning group at NICTA, Australia, and with TTI-Chicago, USA. He has published about 25 research papers. Together with A. Smola and M. Hutter, he co-organized a machine learning summer school (MLSS08, mlss08.rsise.anu.edu.au; see also www.mlss.cc). His research interests are mainly in image and video understanding, computer vision, and machine learning.
Guoying Zhao received the PhD degree in computer science from the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China in 2005. Since July 2005, she has been a Postdoctoral Research Fellow in the Machine Vision Group at the University of Oulu. Her research interests include gait analysis, dynamic texture recognition, facial expression recognition, human motion analysis, and person identification. She has authored over 50 papers in journals and conferences, and has served as a reviewer for many journals and conferences. She gave an invited talk, “Dynamic Texture Recognition Using Local Binary Patterns with an Application to Facial Expressions”, at the Institute of Computing Technology, Chinese Academy of Sciences, in July 2007. With Prof. Pietikäinen, she gave a tutorial, “Local Binary Pattern Approach to Computer Vision”, at the 18th ICPR, Aug. 2006, Hong Kong. She is authoring/editing three books to be published by IGI Global or Springer. She is a guest editor of the special issue “New Advances in Video-Based Gait Analysis and Applications: Challenges and Solutions” for IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics. She was a co-chair of the ECCV 2008 Workshop on Machine Learning for Vision-based Motion Analysis (MLVMA), and is a co-chair of the MLVMA workshop at ICCV 2009.