Spatio-Temporal Deep Feature Fusion for Human Action Recognition

Indhumathi C., Murugan V., Muthulakshmi G.
Copyright: © 2022 |Pages: 13
DOI: 10.4018/IJCVIP.296584

Abstract

Action recognition plays a vital role in many security-sensitive applications. The objective of this paper is to identify actions more accurately. The paper builds on a two-stream network in which a keyframe extraction method is applied before spatial features are extracted. Temporal features are extracted with the Attentive Correlated Temporal Feature (ACTF) module, which uses a Long Short-Term Memory (LSTM) network to obtain deep features. The spatial and temporal features are then fused and classified with a multi-class Support Vector Machine (multiSVM) classifier. Experiments are conducted on the HMDB51 and UCF101 datasets, and the results are compared with recent methods in terms of accuracy. The proposed method outperforms these methods, achieving an accuracy of 96% on HMDB51 and 98% on UCF101.
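The fusion-and-classification stage described above can be illustrated with a minimal sketch. The feature values here are synthetic placeholders standing in for the outputs of the two streams (a spatial CNN over keyframes and the ACTF/LSTM temporal branch, neither of which is reproduced here); the feature dimensions and the linear kernel are assumptions for illustration only, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, n_classes = 60, 3

# Placeholder deep features: in the paper these would come from the
# spatial stream (CNN on extracted keyframes) and the temporal stream
# (ACTF module with LSTM). Dimensions 128/64 are illustrative.
spatial = rng.normal(size=(n_clips, 128))
temporal = rng.normal(size=(n_clips, 64))
labels = rng.integers(0, n_classes, size=n_clips)

# Shift the toy features per class so the classifier has signal to learn.
spatial += labels[:, None] * 0.5

# Early fusion by concatenation, then a multi-class SVM (scikit-learn's
# SVC uses one-vs-one voting for multi-class problems).
fused = np.concatenate([spatial, temporal], axis=1)
clf = SVC(kernel="linear", decision_function_shape="ovo")
clf.fit(fused, labels)
print(clf.score(fused, labels))
```

Concatenation is the simplest fusion strategy; the key point is that the classifier sees one joint feature vector per clip rather than separate per-stream predictions.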

2. Related Works

This section discusses the related works against which the performance of the proposed method is compared, beginning with two-stream networks, followed by 3D networks, and finally recurrent networks. Simonyan et al. (2014) developed a two-stream network that outperformed earlier architectures for human action recognition. Although this method uses the temporal information in a video, it captures only short-term motion changes and misses the video's long-range temporal structure.
