1. Introduction
Video semantic content analysis is currently receiving considerable research attention in the sports domain, where it facilitates the work of sports experts, content providers and end users (Babu, Tom, & Wadekar, 2016; Jiang, 2016). The key idea of video semantic analysis (VSA) is to exploit an effective mapping between low-level visual features and high-level semantic concepts in multimedia datasets, so that high-level semantic concepts can be extracted efficiently from video data. VSA has become a thriving research area, and significant progress has been made in recent years (Deng, Hu, & Guo, 2012; Fu, Hu, Chen, & Ren, 2012; Huang, Shih, & Chao, 2006; Song, Shao, Yang, & Wu, 2017). For instance, a VSA approach based on the fusion and interaction of multiple features and multiple models for sports semantic analysis was presented in (Fu, Hu, Chen, & Ren, 2012). It used a semantic color ratio to classify video shots into in-shots, global shots and out-shots for effective classification of sports video. To bridge the gap between low-level features and high-level semantic information, an ontology model based on semantic video objects was proposed in (Liang, Xiangming, Bo, & Wei, 2010). A video semantics approach for event detection and genre classification was also proposed in (You, Liu, & Perkis, 2010); it employed a naïve Bayesian classifier and a Hidden Markov Model (HMM) for video classification.
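To make the color-ratio shot-classification idea concrete, the following is a minimal sketch of maximum-likelihood classification under a one-dimensional Gaussian naïve Bayes model. The single feature, the class names, the class means, and the shared standard deviation are all invented for illustration; they are not taken from the cited papers.

```python
import math

# Hypothetical shot classes inspired by the "semantic color ratio" idea: each
# shot is summarised by one feature, e.g. the fraction of pixels matching the
# dominant field colour. All numbers below are assumed, not from the source.
CLASS_MEANS = {"in-shot": 0.15, "out-shot": 0.40, "global": 0.70}
SIGMA = 0.05  # shared standard deviation, assumed for simplicity


def gaussian_log_pdf(x, mu, sigma):
    """Log density of a univariate Gaussian N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)


def classify_shot(color_ratio):
    """Pick the class whose Gaussian likelihood of the observed ratio is highest.

    With equal priors and a shared sigma, this reduces to choosing the class
    whose mean is nearest to the observed colour ratio.
    """
    return max(CLASS_MEANS, key=lambda c: gaussian_log_pdf(color_ratio, CLASS_MEANS[c], SIGMA))


print(classify_shot(0.68))  # near the assumed 'global' mean -> global
print(classify_shot(0.15))  # near the assumed 'in-shot' mean -> in-shot
```

A full system would of course learn the class parameters from labelled shots and use richer features; the sketch only shows the decision rule that a naïve Bayes shot classifier applies at prediction time.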