Rule-Based Semantic Concept Classification from Large-Scale Video Collections

Lin Lin (Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA), Mei-Ling Shyu (Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA) and Shu-Ching Chen (School of Computing and Information Sciences, Florida International University, Miami, FL, USA)
DOI: 10.4018/jmdem.2013010103

Abstract

The explosive growth and increasing complexity of multimedia data have created a high demand for multimedia services and applications that let people access and distribute the data easily. Unfortunately, traditional keyword-based information retrieval is no longer adequate; instead, multimedia data mining and content-based multimedia information retrieval have become key technologies in modern societies. Among the many data mining techniques, association rule mining (ARM) is one of the most popular approaches for extracting useful information from multimedia data in the form of relationships between variables. In this paper, a novel rule-based semantic concept classification framework using weighted association rule mining (WARM) is proposed to address the major issues and challenges in large-scale video semantic concept classification; it captures the significance degrees of the feature-value pairs to improve the applicability of ARM. Unlike traditional ARM, in which rules are generated by frequency count and all items in a rule are treated as equally important, the proposed WARM algorithm utilizes multiple correspondence analysis (MCA) to explore the relationships among features and concepts and to signify the different contributions of the features in rule generation. To the authors' best knowledge, this is one of the first WARM-based classifiers in the field of multimedia concept retrieval. Experimental results on the benchmark TRECVID data demonstrate that the proposed framework can handle large-scale and imbalanced video data with promising classification and retrieval performance.
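To make the idea concrete, the following is a minimal sketch of weighted association rule mining for concept classification. All names, weights, and transactions are hypothetical toy data: the item weights here are hard-coded stand-ins for the MCA-derived significance of each feature-value pair described in the abstract, and the rule search is a brute-force enumeration rather than the paper's actual algorithm.

```python
from itertools import combinations

# Hypothetical weights standing in for the MCA-derived significance
# of each feature-value pair (in the paper these come from multiple
# correspondence analysis; here they are simply made up).
WEIGHTS = {"f1=low": 0.9, "f2=high": 0.6, "f3=mid": 0.2}

# Toy transactions: each is a set of feature-value pairs plus a
# concept label for the video shot.
TRANSACTIONS = [
    ({"f1=low", "f2=high"}, "sports"),
    ({"f1=low", "f3=mid"}, "sports"),
    ({"f2=high", "f3=mid"}, "news"),
    ({"f1=low", "f2=high"}, "sports"),
]

def weighted_support(itemset, label):
    """Frequency of (itemset -> label) scaled by the mean item weight,
    so feature-value pairs with higher significance dominate rule
    generation, instead of pure frequency count as in classic ARM."""
    count = sum(1 for items, lab in TRANSACTIONS
                if lab == label and itemset <= items)
    mean_w = sum(WEIGHTS[i] for i in itemset) / len(itemset)
    return mean_w * count / len(TRANSACTIONS)

def mine_rules(min_wsup=0.2):
    """Keep candidate (itemset -> concept) rules whose weighted
    support clears the threshold; a simplified, exhaustive pass
    over itemsets of size 1 and 2."""
    items = sorted(WEIGHTS)
    labels = sorted({lab for _, lab in TRANSACTIONS})
    rules = []
    for k in (1, 2):
        for combo in combinations(items, k):
            for lab in labels:
                ws = weighted_support(set(combo), lab)
                if ws >= min_wsup:
                    rules.append((combo, lab, round(ws, 3)))
    return rules

print(mine_rules())
```

Note how the weighting changes the outcome: the pair `f2=high` co-occurs with "news" once, which a low frequency threshold might keep, but its moderate weight pushes the weighted support below the cutoff, so only rules dominated by heavily weighted items survive.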
Article Preview

1. Introduction

The development and advancement of digital recording techniques, communication platforms, and storage systems have made it much easier for people to access, collect, share, and distribute multimedia data in services and applications such as entertainment, distance education, e-commerce, social networks, homeland security, surveillance, and medicine. Multimedia services and applications enable even a simple terminal unit (such as a cell phone or a computer screen) to make use of multimedia data, and multimedia systems provide further convenience for end users. For example, many users benefit from intelligent multimedia applications such as digital libraries, image or music search engines, movie or game recommenders, sports or news highlighters, and personalized picture or video collection sites.

At the same time, the amounts of multimedia data have increased tremendously in recent years, growing from gigabytes (GB) to terabytes (TB). Regardless of which data model or storage device is used, the most critical functionality of a multimedia database or multimedia system is to provide effective and efficient search and retrieval of multimedia data, under a short real-time constraint whenever applicable. Advanced database and data warehouse technologies enable the management of multimedia data, and traditional keyword-based search and/or retrieval frameworks allow users to query the data on demand. However, such frameworks do not work well because they require heavy human effort for annotation, indexing, browsing, and performance evaluation of the retrieved results. This calls for the development of content-based techniques to effectively reduce manual effort in multimedia indexing, to efficiently search the data in a multimedia database, and to automatically retrieve accurate and meaningful information from the data (Chen, Zhang, Chen, & Chen, 2005; Chen, Rubin, Shyu, & Zhang, 2006; Huang, Chen, Shyu, & Zhang, 2002; Shyu et al., 2003; Shyu, Chen, Chen, & Zhang, 2004; Zhang et al., 2005).

Differing from keyword-based search technologies, content-based video concept classification and retrieval approaches automatically extract feature data and provide more powerful search abilities at the semantic level (Chen, Rubin, Shyu, & Zhang, 2006; Chen, Zhang, Chen, & Rubin, 2009; Chen, 2010; Datta, Joshi, Li, & Wang, 2008; Jiang, Yang, Ngo, & Hauptmann, 2010; Lew, Sebe, Djeraba, & Jain, 2006; Liu, Weng, Tseng, Chuang, & Chen, 2008; Shyu, Chen, Sun, & Yu, 2007; Snoek & Worring, 2008). Although many systems have been developed with sophisticated mathematical algorithms for the statistical analysis and search of media data, they become inefficient and unwieldy when dealing with large amounts of data, and they fail to capture the true semantics of the multimedia data. Therefore, researchers are becoming increasingly interested in exploring multimedia data mining for retrieval, since data mining is an important tool for transforming raw data into useful information and patterns (Witten, Frank, & Hall, 2011).
