Human Action Recognition Using Median Background and Max Pool Convolution with Nearest Neighbor


Bagavathi Lakshmi, S. Parthasarathy
Copyright: © 2019 | Volume: 10 | Issue: 2 | Pages: 14
ISSN: 1941-6237 | EISSN: 1941-6245 | EISBN13: 9781522565079 | DOI: 10.4018/IJACI.2019040103

MLA

Lakshmi, Bagavathi, and S. Parthasarathy. "Human Action Recognition Using Median Background and Max Pool Convolution with Nearest Neighbor." International Journal of Ambient Computing and Intelligence (IJACI), vol. 10, no. 2, 2019, pp. 34-47. http://doi.org/10.4018/IJACI.2019040103


Abstract

Discovering human activities on mobile devices is a challenging task for human action recognition. The ability of a device to recognize its user's activity is important because it enables context-aware applications and behavior. Recently, machine learning algorithms have been increasingly used for human action recognition. During the past few years, principal component analysis and support vector machines have been widely used for robust human activity recognition (HAR). However, given the global dynamic tendency and the complexity of the tasks involved, such robust HAR suffers from error and complexity. To deal with this problem, a machine learning algorithm is proposed and its application to HAR is explored. In this article, a Max Pool Convolution Neural Network based on Nearest Neighbor (MPCNN-NN) is proposed to perform efficient and effective HAR using smartphone sensors by exploiting their inherent characteristics. The MPCNN-NN framework for HAR consists of three steps. In the first step, for each activity, the features of interest in the foreground frame are detected using median background subtraction. The second step organizes these features (i.e., postures) into the strongest generic discriminating features based on max pooling. The third and final step is HAR based on the nearest neighbor, which selects the posture that maximizes the probability. Experiments have been conducted to demonstrate the superiority of the proposed MPCNN-NN framework on a human action dataset, KARD (Kinect Activity Recognition Dataset).
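The three steps of the framework described above can be illustrated in miniature. The following is a hypothetical NumPy sketch on toy data, not the authors' implementation: the frame shapes, thresholds, and pooling size are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the three MPCNN-NN steps on toy data (not the paper's code).

# Step 1: median background subtraction.
# frames: (T, H, W) grayscale video; the background is the per-pixel median over time.
rng = np.random.default_rng(0)
frames = rng.random((20, 8, 8))                  # 20 toy 8x8 frames
background = np.median(frames, axis=0)           # per-pixel median background
foreground = np.abs(frames - background) > 0.25  # threshold (0.25 is an assumption)

# Step 2: max pooling to keep the strongest local responses (2x2 window, stride 2).
def max_pool2d(x, k=2):
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))

features = np.stack([max_pool2d(f.astype(float)) for f in foreground])  # (20, 4, 4)

# Step 3: nearest-neighbor classification — pick the training sample
# (posture) whose feature vector is closest to the query.
def nearest_neighbor(query, train_x, train_y):
    d = np.linalg.norm(train_x - query, axis=1)  # Euclidean distances
    return train_y[int(np.argmin(d))]            # label of the closest sample

train_x = features[:10].reshape(10, -1)          # first 10 frames as "training" set
train_y = np.array([0, 1] * 5)                   # toy action labels
pred = nearest_neighbor(features[10].reshape(-1), train_x, train_y)
```

Here the per-pixel median acts as a static background model, so thresholded differences isolate the moving person, and max pooling keeps only the dominant activation in each spatial window before the nearest-neighbor decision.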
