A Discriminative Locality-Sensitive Dictionary Learning With Kernel Weighted KNN Classification for Video Semantic Concepts Analysis

Benjamin Ghansah, Ben-Bright Benuwa, Augustine Monney
Copyright © 2021 | Pages: 24
DOI: 10.4018/IJIIT.2021010105

Abstract

Video semantic concept analysis has received a lot of research attention in the area of human-computer interaction in recent times. Classification methods based on the reconstruction error of sparse coefficients do not consider discrimination between video samples, which is essential for classification performance. To further improve the accuracy of video semantic classification, this paper proposes a video semantic concept classification approach based on sparse coefficient vectors (SCV) and a kernel-based weighted KNN (KWKNN). The approach introduces a loss function that integrates reconstruction error and discrimination. The authors compute the loss function value between the test sample and the training samples of each class according to this criterion, vote on the resulting statistics, and finally adjust the vote counts with the kernel weight coefficient of each class to determine the video semantic concept. Experimental results show that the method improves classification accuracy for video semantic analysis and shortens classification time compared with several baseline approaches.
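To make the test-time pipeline described above concrete, the following is a minimal NumPy sketch. It assumes the dictionary D has already been learned by the discriminative locality-sensitive dictionary learning stage, and that X_train and y_train are NumPy arrays of training features and integer class labels. The ISTA sparse coder, the coefficient-distance discrimination term, the Gaussian kernel weighting, and all names (sparse_code, kwknn_predict, beta, sigma) are illustrative assumptions, not the authors' implementation.

import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    # Minimal ISTA solver for min_a 0.5*||x - D a||^2 + lam*||a||_1;
    # a stand-in for whichever sparse coder is actually used.
    a = np.zeros(D.shape[1])
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)   # 1/L, L = Lipschitz constant
    for _ in range(n_iter):
        z = a - step * (D.T @ (D @ a - x))     # gradient step on the quadratic
        a = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft-threshold
    return a

def kwknn_predict(x, D, X_train, y_train, k=7, beta=0.5, sigma=1.0):
    # 1) Sparse-code the test sample over the learned dictionary D.
    a = sparse_code(x, D)
    # 2) Score every training sample with a loss coupling reconstruction
    #    error and a discrimination term (here the distance between sparse
    #    coefficient vectors -- an assumed surrogate for the paper's criterion).
    losses = np.empty(len(X_train))
    for i, xi in enumerate(X_train):
        ai = sparse_code(xi, D)
        recon = np.linalg.norm(x - D @ ai) ** 2
        discrim = np.linalg.norm(a - ai) ** 2
        losses[i] = recon + beta * discrim
    # 3) Vote among the k smallest losses, then reweight each class's
    #    vote count with a Gaussian kernel of its mean loss.
    nn = np.argsort(losses)[:k]
    scores = {}
    for c in np.unique(y_train[nn]):
        members = nn[y_train[nn] == c]
        kernel_w = np.exp(-losses[members].mean() / (2.0 * sigma ** 2))
        scores[c] = len(members) * kernel_w
    return max(scores, key=scores.get)

In this sketch the kernel weight suppresses classes whose nearest neighbors, despite being numerous, fit the test sample poorly on average, which is the intuition behind weighting the raw KNN vote.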

1. Introduction

Video semantic content analysis is currently receiving a lot of research attention from the sports world, facilitating the work of sports experts, content providers, and end users (Babu, Tom, & Wadekar, 2016; Jiang, 2016). The key idea of video semantic analysis (VSA) is to learn an effective mapping between low-level visual features and high-level semantic concepts in multimedia datasets, so that high-level semantic concepts can be extracted efficiently from video data. VSA has recently become a flourishing research area, and significant progress has been made in the field (Deng, Hu, & Guo, 2012; Fu, Hu, Chen, & Ren, 2012; Huang, Shih, & Chao, 2006; Song, Shao, Yang, & Wu, 2017). For instance, a VSA approach based on the fusion and interaction of multiple features and multiple models for sports semantic analysis was presented in (Fu, Hu, Chen, & Ren, 2012); it used a semantic color ratio to classify video shots into in-shots, global shots, and out-shots for effective classification of sports video. To bridge the gap between low-level features and high-level semantic information, an ontology model based on semantic video objects was proposed in (Liang, Xiangming, Bo, & Wei, 2010). A video semantics approach for event detection and weakly supervised genre classification, using a naïve Bayes classifier and a hidden Markov model (HMM), was also proposed in (You, Liu, & Perkis, 2010).
