Weighted Association Rule Mining for Video Semantic Detection

Lin Lin, Mei-Ling Shyu
DOI: 10.4018/jmdem.2010111203

Abstract

Semantic knowledge detection of multimedia content has become a very popular research topic in recent years. The association rule mining (ARM) technique has been shown to be an efficient and accurate approach for content-based multimedia retrieval and semantic concept detection in many applications. To further improve the performance of the traditional association rule mining technique, a video semantic concept detection framework whose classifier is built upon a new weighted association rule mining (WARM) algorithm is proposed in this article. Our proposed WARM algorithm is able to capture the different degrees of significance of the items (feature-value pairs) when generating the association rules for video semantic concept detection. Our proposed WARM-based framework first applies multiple correspondence analysis (MCA) to project the features and classes into a new principal component space and to discover the correlations between the feature-value pairs and the classes. Next, it uses both the correlation and the percentage information as the measurements to weight the feature-value pairs and to generate the association rules. Finally, it performs classification by using these weighted association rules. To evaluate our WARM-based framework, we compare its video semantic concept detection performance with that of several well-known classifiers using the benchmark data available from the 2007 and 2008 TRECVID projects. The results demonstrate that our WARM-based framework achieves promising performance and performs significantly better than the classifiers in the comparison.
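The abstract above outlines four steps: MCA-based projection, weighting of feature-value pairs by correlation and percentage information, rule generation, and rule-based classification. The minimal Python sketch below illustrates only the weighting-and-scoring idea and is not the authors' implementation: the absolute Pearson correlation stands in for the MCA-derived correlation between feature-value pairs and classes, only single-item rules are considered, and all data are synthetic.

```python
# Minimal sketch of the weighted association rule idea, under the assumptions
# stated above (Pearson correlation instead of MCA, single-item rules, toy data).
import numpy as np

def item_weights(X, y):
    """X: (n_samples, n_items) binary indicators of feature-value pairs;
    y: (n_samples,) binary concept labels.
    Returns one weight per item, combining correlation and percentage."""
    support = X.mean(axis=0)                     # percentage (support) per item
    corr = np.zeros(X.shape[1])
    for j in range(X.shape[1]):                  # stand-in for the MCA-derived correlation
        if X[:, j].std() > 0 and y.std() > 0:
            corr[j] = abs(np.corrcoef(X[:, j], y)[0, 1])
    return corr * support                        # combined weight per feature-value pair

def classify(x, weights, X, y):
    """Sum the weights of the matched feature-value pairs separately for each
    class, then assign the class with the highest score."""
    scores = {}
    for c in np.unique(y):
        matched = (X[y == c].sum(axis=0) > 0) & (x == 1)   # items co-occurring with class c
        scores[c] = weights[matched].sum()
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(100, 8))        # toy feature-value indicators
    y = (X[:, 0] | X[:, 3]).astype(int)          # toy concept labels
    w = item_weights(X, y)
    print(classify(X[0], w, X, y), y[0])
```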

1. Introduction

Managing multimedia databases requires the ability to retrieve meaningful information from the digital data, in order to help users find relevant multimedia data more effectively and to enable better entertainment experiences. Motivated by a large number of requirements and applications such as sports highlight detection, movie recommendation, image search engines, and music libraries, multimedia retrieval and semantic detection have become very popular research topics in recent years (Lew, Sebe, Djeraba & Jain, 2006; Shyu, Chen, Sun & Yu, 2007; Snoek & Worring, 2008). The general steps for supervised content-based multimedia retrieval are the segmentation of the multimedia data (i.e., detecting the basic units for processing), the representation of the multimedia data (i.e., extracting low-level features per unit), the training of a model using the low-level features, and the classification of the testing data using the trained model.
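As a rough illustration of these four general steps, the toy Python sketch below segments a synthetic one-dimensional signal, extracts simple statistics per unit, trains a nearest-centroid model, and classifies the units. The signal, the features, and the classifier are placeholder assumptions that only show the structure of the pipeline, not the methods used in this article.

```python
# Toy end-to-end illustration of segmentation, feature extraction,
# model training, and classification; all components are stand-ins.
import numpy as np

def segment(signal, unit_len=16):
    """Step 1: cut the data into basic processing units of fixed length."""
    n = len(signal) // unit_len
    return [signal[i * unit_len:(i + 1) * unit_len] for i in range(n)]

def extract_features(unit):
    """Step 2: represent each unit with simple low-level features."""
    return np.array([unit.mean(), unit.std(), np.abs(np.diff(unit)).mean()])

def train(features, labels):
    """Step 3: fit a model (here, one centroid per concept class)."""
    feats, labels = np.asarray(features), np.asarray(labels)
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(model, feature):
    """Step 4: assign an unseen unit to the class with the closest centroid."""
    return min(model, key=lambda c: np.linalg.norm(model[c] - feature))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    signal = np.concatenate([rng.normal(0, 1, 64), rng.normal(5, 1, 64)])
    units = segment(signal)
    feats = [extract_features(u) for u in units]
    labels = [0, 0, 0, 0, 1, 1, 1, 1]            # toy ground truth per unit
    model = train(feats, labels)
    print([classify(model, f) for f in feats])
```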

The most frequently used features for image retrieval are low-level features such as color, texture, and shape (Datta, Joshi, Li & Wang, 2008); for video retrieval, these visual features are combined with some low-level audio and motion features (Lew, Sebe, Djeraba & Jain, 2006). One of the biggest challenges of multimedia retrieval is that it is hard to bridge the semantic gap between the low-level features and the high-level features/concepts. Traditionally, these low-level features are considered to contribute equally to the models, and the models are trained using all the features they are provided with. Later, the models are expected to be able to select the features that better represent a certain concept class. In this manner, the features are selected before the model training process, and hence the models do not necessarily benefit from the feature selection process (Lin, Ravitz, Shyu, & Chen, 2008; Liu & Motoda, 1998). From another point of view, the importance of the features is then no longer treated as equal, but is judged only as “good” or “bad” while performing the feature selection.
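For concreteness, the short numpy sketch below shows this filter-style feature selection performed before, and independently of, any model training: features are ranked by a simple relevance score and either kept or dropped, in contrast to the weighting of feature-value pairs inside the model that the WARM approach adopts. The scoring function and data here are illustrative assumptions only.

```python
# Illustrative filter-style feature selection, decoupled from model training.
import numpy as np

def select_top_k(X, y, k):
    """Rank features by |correlation with the label| and keep only the top k."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    keep = np.argsort(scores)[::-1][:k]
    return keep, X[:, keep]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 10))                               # toy continuous features
    y = (X[:, 2] + 0.1 * rng.normal(size=200) > 0).astype(int)   # only feature 2 is informative
    keep, X_sel = select_top_k(X, y, k=3)
    print("selected feature indices:", keep)                     # any model is then trained on X_sel only
```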
