Video Summarization Based on Multimodal Features

Yu Zhang, Ju Liu, Xiaoxi Liu, Xuesong Gao
DOI: 10.4018/IJMDEM.2020100104

Abstract

In this manuscript, the authors present a keyshot-based supervised video summarization method in which feature fusion and LSTM networks are used for summarization. The framework consists of three parts: 1) The authors formulate video summarization as a sequence-to-sequence problem that predicts the importance score of video content from the video feature sequence. 2) By simultaneously considering visual and textual features, the authors construct deeply fused multimodal features and summarize videos with a recurrent encoder-decoder architecture built on bi-directional LSTM. 3) Most importantly, to train the supervised video summarization framework, the authors adopt the number of users who selected the current video clip for their final video summary as the importance score and ground truth. Comparisons are performed with state-of-the-art methods and with different variants of FLSum and T-FLSum. The F-score and rank correlation coefficient results on TVSum and SumMe show the outstanding performance of the method proposed in this manuscript.

1. Introduction

Nowadays, information overload has become an increasingly serious problem. By 2021, videos were expected to account for more than 80% of all global Internet traffic (Fajtl, Sokeh, Argyriou, Monekosso, & Remagnino, 2019; Jiang, Cui, Peng, & Xu, 2019). Efficient video retrieval and video storage methods are therefore needed to manage this growing volume of information. However, unedited original videos are long and redundant, which poses a problem for most video tasks. Consequently, quickly selecting the most useful video content for users is very important.

Video summarization is the task of extracting keyframes or keyshots from original videos. It serves as an important way of comprehensively understanding videos while saving time on information acquisition (Jiang, Cui, Peng, & Xu, 2019). According to its final presentation form, video summarization can be divided into keyframe-based static video summarization and keyshot-based dynamic video summarization. In this paper, we focus on keyshot-based dynamic video summarization.

Because video is the richest and most diverse format in multimedia data, video summarization is a difficult task in the field of visual comprehension. To select keyshots from original video sequences, the main challenge is to identify the most important video content. Many researchers have worked on predicting the importance of video content (Fajtl, Sokeh, Argyriou, Monekosso, & Remagnino, 2019; Zhou, Qiao, & Xiang, 2018; Ji, Xiong, Pang, & Li, 2020). The first step of video summarization is visual feature extraction. However, we have observed that many summarization methods use only simple image features of video frames, which cannot accurately reflect the multi-level information of the video content. We agree that learning good visual representations can help a video summarization architecture improve its visual comprehension ability (Jiang, Cui, Peng, & Xu, 2019).
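As a concrete illustration of this first step, frame-level appearance features can be extracted with a pretrained CNN. The backbone choice (ResNet-50), the sampling of frames, and the helper name `frame_features` below are illustrative assumptions rather than details stated in this preview; the sketch simply shows the kind of pooled feature a summarizer consumes.

```python
# Minimal sketch: frame-level appearance features from a pretrained CNN.
# ResNet-50 is an assumed backbone choice, not one stated in the paper.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()              # expose the 2048-d pooled feature
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def frame_features(frames):
    """frames: list of PIL.Image frames sampled from the video."""
    batch = torch.stack([preprocess(f) for f in frames])
    return backbone(batch)               # shape: (num_frames, 2048)
```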

Consequently, inspired by the framework that utilizes multimodal features for efficient video-text retrieval (Mithun, Li, Metze, & Roy-Chowdhury, 2018) and the algorithm that ranked first in the video summarization competition of CoView 2019 (Jiang, Cui, Peng, & Xu, 2019), we consider appearance features of video frames, motion features of video clips, and textual features of the video title in this framework. We then apply a fusion strategy to obtain multimodal features that can be used to predict the importance of video content.
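A minimal sketch of one plausible fusion strategy follows: each modality is linearly projected into a shared space and the projections are concatenated. The exact deep fusion used by the authors is not detailed in this preview, so the feature dimensions (2048-d appearance, 1024-d motion, 300-d title embedding) and the use of simple concatenation are illustrative assumptions.

```python
# Sketch of a plausible fusion module: per-modality projection + concat.
# All dimensions below are assumptions chosen for illustration.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, d_app=2048, d_motion=1024, d_text=300, d_out=512):
        super().__init__()
        self.proj_app = nn.Linear(d_app, d_out)
        self.proj_motion = nn.Linear(d_motion, d_out)
        self.proj_text = nn.Linear(d_text, d_out)

    def forward(self, app, motion, text):
        # app, motion: (T, d) sequences over T clips; text: (d_text,)
        # for a single video title, broadcast across all T clips.
        text = self.proj_text(text).expand(app.size(0), -1)
        fused = torch.cat(
            [self.proj_app(app), self.proj_motion(motion), text], dim=-1)
        return torch.relu(fused)         # shape: (T, 3 * d_out)
```

With these assumed sizes, each time step yields a 1536-d multimodal feature, giving the sequence that is fed to the recurrent summarizer described next.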

After that, we formulate video summarization as a sequence-to-sequence problem and build a summarization framework based on a recurrent encoder-decoder architecture with bi-directional Long Short-Term Memory (bi-LSTM). The input of this framework is the multimodal feature sequence of the video, and the output is the predicted importance score of each video clip. Finally, the Kernel Temporal Segmentation (KTS) method (Potapov, Douze, Harchaoui, & Schmid, 2014) and the Knapsack algorithm (Zhou, Qiao, & Xiang, 2017) are used to select a subset of video clips by maximizing the total importance score while constraining the total summary length. The overall structure of the proposed video summarization framework is shown in Figure 1.
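The sketch below puts these two stages together: a generic bi-LSTM encoder-decoder that regresses per-clip importance scores, followed by a 0/1 knapsack that selects shots under a length budget. The hidden sizes, the sigmoid score head, and the 15%-of-duration budget are common conventions in the summarization literature, assumed here for illustration; KTS shot boundaries are taken as given.

```python
# Minimal sketch: bi-LSTM encoder-decoder scoring plus knapsack selection.
# Hidden sizes and the score head are assumptions; d_in=1536 matches the
# fusion sketch above.
import torch
import torch.nn as nn

class BiLSTMScorer(nn.Module):
    def __init__(self, d_in=1536, d_hidden=256):
        super().__init__()
        self.encoder = nn.LSTM(d_in, d_hidden, batch_first=True,
                               bidirectional=True)
        self.decoder = nn.LSTM(2 * d_hidden, d_hidden, batch_first=True)
        self.head = nn.Linear(d_hidden, 1)

    def forward(self, x):                       # x: (1, T, d_in)
        enc, _ = self.encoder(x)                # (1, T, 2 * d_hidden)
        dec, _ = self.decoder(enc)              # (1, T, d_hidden)
        return torch.sigmoid(self.head(dec)).squeeze(-1)  # (1, T)

def knapsack_select(shot_scores, shot_lengths, budget):
    """0/1 knapsack: maximize total score with total length <= budget."""
    n = len(shot_scores)
    dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = shot_lengths[i - 1], shot_scores[i - 1]
        for c in range(budget + 1):
            dp[i][c] = dp[i - 1][c]
            if w <= c and dp[i - 1][c - w] + v > dp[i][c]:
                dp[i][c] = dp[i - 1][c - w] + v
    picked, c = [], budget                      # backtrack chosen shots
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            picked.append(i - 1)
            c -= shot_lengths[i - 1]
    return sorted(picked)

# Example: keep shots filling at most 15% of a 1000-frame video.
# picked = knapsack_select(scores, lengths, budget=int(0.15 * 1000))
```

In practice, each shot's score is typically the average of the per-clip scores it contains, so that the knapsack operates over KTS segments rather than individual clips.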

Figure 1. Overall structure of the proposed video summarization framework
