Classifying Behaviours in Videos with Recurrent Neural Networks

Javier Abellan-Abenza, Alberto Garcia-Garcia, Sergiu Oprea, David Ivorra-Piqueres, Jose Garcia-Rodriguez
Copyright: © 2017 |Pages: 15
DOI: 10.4018/IJCVIP.2017100101

Abstract

Human activity recognition in videos is an attractive research topic because of its vast range of possible applications. This article considers the analysis of behaviours and activities in videos obtained with low-cost RGB cameras. To that end, a system is developed that takes a video as input and produces as output the possible activities happening in it. This information could be used in many applications, such as video surveillance, assistance for disabled persons, home assistants, and employee monitoring. The developed system makes use of successful Deep Learning techniques: convolutional neural networks are used to detect features in the video frames, while recurrent neural networks analyse those features over time and predict the activity taking place in the video.
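The pipeline the abstract describes can be illustrated with a minimal, pure-Python sketch: a stand-in feature extractor plays the role of the CNN, and a single Elman-style recurrent step is applied once per frame so the hidden state accumulates temporal information before a linear read-out picks a class. This is not the authors' implementation; all function names are hypothetical, and the weights are random and untrained, so the example only demonstrates the data flow, not a trained classifier.

```python
import math
import random

random.seed(0)

def extract_features(frame):
    """Stand-in for the CNN: in the real system a convolutional network
    would map each RGB frame to a feature vector. Here we just flatten
    a tiny grayscale frame into a list of floats."""
    return [float(px) for row in frame for px in row]

def rnn_step(x, h, Wxh, Whh, bh):
    """One Elman RNN step: h_t = tanh(Wxh x_t + Whh h_{t-1} + bh)."""
    return [
        math.tanh(
            sum(Wxh[i][j] * x[j] for j in range(len(x)))
            + sum(Whh[i][k] * h[k] for k in range(len(h)))
            + bh[i]
        )
        for i in range(len(h))
    ]

def classify_video(frames, n_hidden=4, n_classes=3):
    """Feed per-frame features through the RNN, one step per frame,
    then read the activity class off the final hidden state."""
    feats = [extract_features(f) for f in frames]
    n_in = len(feats[0])
    rand = lambda r, c: [[random.uniform(-0.5, 0.5) for _ in range(c)]
                         for _ in range(r)]
    Wxh, Whh = rand(n_hidden, n_in), rand(n_hidden, n_hidden)
    bh = [0.0] * n_hidden
    Why = rand(n_classes, n_hidden)  # linear read-out layer
    h = [0.0] * n_hidden
    for x in feats:  # temporal loop: the recurrence over frames
        h = rnn_step(x, h, Wxh, Whh, bh)
    scores = [sum(Why[c][i] * h[i] for i in range(n_hidden))
              for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: scores[c])

# A "video" of three 2x2 grayscale frames.
video = [[[0, 1], [1, 0]], [[1, 1], [0, 0]], [[0, 0], [1, 1]]]
label = classify_video(video)
```

In a practical system the Elman step would be replaced by an LSTM or GRU cell, which handle longer temporal dependencies far better, but the structure (per-frame features feeding a recurrence, class read from the final state) is the same.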

State Of The Art

Human Behaviour Analysis (HBA) involves a wide range of applications: video surveillance, ambient-assisted living, etc. All of these applications share the need for an artificial intelligence that understands the human body and its natural movement across different activities. Human activities such as “walking” or “running” are relatively easy to recognize; more complex activities, such as “peeling an apple,” are harder to identify. Complex activities can be decomposed into simpler activities, which are generally easier to recognize. It is therefore necessary to understand the different HBA levels that exist. Moeslund, Hilton, and Krüger (2006) defined a classification of action taxonomies that has since been adopted in many other works. It defines three levels of abstraction, from smallest to largest:

1. Action primitive: basic motion recognition representing the atomic movement out of which actions are built.
2. Action: composed of different action primitives.
3. Activity: a higher level of abstraction which requires a semantic notion of the context and the objects involved.
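The three-level taxonomy above is a containment hierarchy, which can be made concrete with a small data-structure sketch. The names and the example decomposition of “peeling an apple” are hypothetical, chosen only to illustrate how an activity flattens down to its atomic movements:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionPrimitive:
    """Atomic movement out of which actions are built."""
    name: str

@dataclass
class Action:
    """Composed of different action primitives."""
    name: str
    primitives: List[ActionPrimitive]

@dataclass
class Activity:
    """Highest level: needs semantic context and involved objects."""
    name: str
    actions: List[Action]
    context_objects: List[str] = field(default_factory=list)

    def all_primitives(self) -> List[str]:
        """Flatten the activity down to its atomic movements."""
        return [p.name for a in self.actions for p in a.primitives]

# Hypothetical decomposition of the "peeling an apple" example.
peeling = Activity(
    name="peeling an apple",
    actions=[
        Action("grasp apple", [ActionPrimitive("extend arm"),
                               ActionPrimitive("close hand")]),
        Action("peel", [ActionPrimitive("rotate wrist")]),
    ],
    context_objects=["apple", "knife"],
)
```

Note that the `context_objects` field captures what separates an activity from a bare action sequence: the same motions with no apple in the scene would not constitute this activity.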

Although this taxonomy is widely used among researchers, some use their own taxonomies; for example, Ji, Liu, Li, and Brown (2008) include a higher level of abstraction called behaviour, which they defined as “human motion patterns involving high-level description of actions and interactions.”

Motion recognition is the foundation for detecting human activities or behaviours. Motion is decomposed into a series of poses through time. A pose can be described as the state of the body posture, represented by an articulated system of rigid segments connected by joints, like the models described in Andriluka, Roth, and Schiele (2009) and Sapp, Toshev, and Taskar (2010).
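The articulated model above can be sketched in a few lines: a pose is a mapping from joint names to positions, and the rigid segments are the bones connecting them. The joint names and coordinates here are hypothetical; the key property the model encodes is that segment lengths stay constant across frames while joint angles change:

```python
import math
from typing import Dict, Tuple

# A pose: 2D joint positions for a single frame (e.g. pixel coordinates).
Joint = Tuple[float, float]
Pose = Dict[str, Joint]

def segment_length(pose: Pose, a: str, b: str) -> float:
    """Length of the rigid segment connecting joints a and b."""
    (xa, ya), (xb, yb) = pose[a], pose[b]
    return math.hypot(xb - xa, yb - ya)

# One hypothetical pose of an arm: shoulder -> elbow -> wrist.
pose = {"shoulder": (0.0, 0.0), "elbow": (3.0, 4.0), "wrist": (6.0, 8.0)}

# The upper arm and forearm are the rigid segments of this sub-chain.
upper_arm = segment_length(pose, "shoulder", "elbow")
forearm = segment_length(pose, "elbow", "wrist")
```

A motion is then simply a time-ordered list of such poses, and checking that `segment_length` is (approximately) invariant across that list is one way to validate a tracked skeleton.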
