1. Introduction
Nowadays, information overload has become an increasingly serious problem. It was predicted that by 2021 video would account for more than 80% of all global Internet traffic (Fajtl, Sokeh, Argyriou, Monekosso, & Remagnino, 2019; Jiang, Cui, Peng, & Xu, 2019). Therefore, efficient video retrieval and storage methods are needed to manage this growing volume of information. However, unedited original videos are long and redundant, which poses a major problem for most video tasks. Consequently, quickly selecting the video content most useful to users is very important.
Video summarization is a task that aims to extract keyframes or keyshots from original videos. It serves as an important way of comprehensively understanding videos while saving time on information acquisition (Jiang, Cui, Peng, & Xu, 2019). According to the final presentation form, video summarization can be divided into keyframe-based static video summarization and keyshot-based dynamic video summarization. In this paper, we focus on keyshot-based dynamic video summarization.
Because video is the richest and most diverse format in multimedia data, video summarization is a difficult task in the field of visual comprehension. To select keyshots from original video sequences, the main challenge is to identify the most important video contents, and many researchers have worked on predicting the importance of video content (Fajtl, Sokeh, Argyriou, Monekosso, & Remagnino, 2019; Zhou, Qiao, & Xiang, 2018; Ji, Xiong, Pang, & Li, 2020). The first step of video summarization is visual feature extraction. However, we have observed that many summarization methods use only simple image features of video frames, which cannot accurately reflect the multi-level information in the video content. We argue that learning good visual representations can improve the visual comprehension ability of a video summarization architecture (Jiang, Cui, Peng, & Xu, 2019).
Consequently, inspired by a novel framework that utilizes multi-modal features for efficient video-text retrieval (Mithun, Li, Metze, & Roy-Chowdhury, 2018) and by the algorithm that ranked first in the video summarization competition of CoView 2019 (Jiang, Cui, Peng, & Xu, 2019), we consider the appearance features of video frames, the motion features of video clips, and the textual features of the video title in this framework. We then apply a fusion strategy to obtain multimodal features that can be used to predict the importance of video contents.
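The fusion step can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes fusion by weighted concatenation of per-clip appearance and motion features with a single title embedding broadcast to every clip; the function name, dimensions, and weights are hypothetical.

```python
import numpy as np

def fuse_features(appearance, motion, text, weights=(1.0, 1.0, 1.0)):
    """Fuse per-clip multimodal features by weighted concatenation (illustrative).

    appearance: (T, Da) frame-level appearance features pooled per clip
    motion:     (T, Dm) clip-level motion features
    text:       (Dt,)   a single title embedding, shared by all clips
    """
    num_clips = appearance.shape[0]
    text_tiled = np.tile(text, (num_clips, 1))   # repeat the title vector per clip
    parts = [w * f for w, f in zip(weights, (appearance, motion, text_tiled))]
    return np.concatenate(parts, axis=1)         # shape (T, Da + Dm + Dt)

# toy example: 4 clips, 8-d appearance, 6-d motion, 5-d title embedding
fused = fuse_features(np.ones((4, 8)), np.ones((4, 6)), np.ones(5))
print(fused.shape)  # (4, 19)
```

In practice each modality would come from a pretrained extractor (e.g., a CNN for appearance, an action-recognition network for motion, a text encoder for the title); concatenation keeps the sketch simple, whereas learned fusion layers are also common.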
After that, we formulate video summarization as a sequence-to-sequence problem and build a summarization framework based on a recurrent encoder-decoder architecture with bi-directional Long Short-Term Memory (bi-LSTM). The input of this framework is the multimodal feature sequence of the video, and the output is the predicted importance score of each video clip. Finally, the Kernel Temporal Segmentation (KTS) method (Potapov, Douze, Harchaoui, & Schmid, 2014) and the Knapsack algorithm (Zhou, Qiao, & Xiang, 2017) are used to select a subset of video clips by maximizing the total importance score while constraining the total summary length. The overall structure of the proposed video summarization framework is shown in Figure 1.
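The final selection step is a standard 0/1 knapsack problem: each shot has a score (predicted importance) and a weight (duration), and the summary length is the capacity. A minimal dynamic-programming sketch, assuming integer shot lengths and a fixed length budget (the function name and toy values are hypothetical):

```python
def select_keyshots(scores, lengths, budget):
    """0/1 knapsack: pick shots maximizing total importance score
    subject to a total-length budget (e.g., 15% of video duration).

    scores:  importance score per shot
    lengths: duration per shot in integer units (e.g., frames)
    budget:  maximum total summary length in the same units
    """
    n = len(scores)
    dp = [0.0] * (budget + 1)                        # dp[c] = best score at capacity c
    keep = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        # iterate capacities downward so each shot is used at most once
        for c in range(budget, lengths[i] - 1, -1):
            cand = dp[c - lengths[i]] + scores[i]
            if cand > dp[c]:
                dp[c] = cand
                keep[i][c] = True
    # backtrack to recover the selected shot indices
    chosen, c = [], budget
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= lengths[i]
    return sorted(chosen)

# toy example: 4 shots with scores and lengths, budget of 10 frames
print(select_keyshots([0.9, 0.4, 0.7, 0.5], [6, 3, 4, 5], 10))  # [0, 2]
```

Shots 0 and 2 fill the budget exactly (6 + 4 = 10 frames) with a total score of 1.6, beating any other feasible subset; the selected shots are then concatenated in temporal order to form the summary.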
Figure 1. Overall structure of the proposed video summarization framework