Attention-Based Multimodal Neural Network for Automatic Evaluation of Press Conferences

Shengzhou Yi, Koshiro Mochitomi, Isao Suzuki, Xueting Wang, Toshihiko Yamasaki
DOI: 10.4018/IJMDEM.2020070101

Abstract

In this study, a multimodal neural network is proposed to automatically predict the evaluation of a professional consultant team for press conferences using text and audio data. Seven publicly available press conference videos were collected, and all the Q&A pairs between speakers and journalists were annotated by the consultant team. The proposed multimodal neural network consists of a language model, an audio model, and a feature fusion network. Each word representation is composed of a token embedding obtained with ELMo and a type embedding. The language model is an LSTM with an attention layer. The audio model is based on a six-layer CNN that extracts segmental features, together with an attention network that measures the importance of each segment. Two feature fusion approaches are proposed: a shared attention network and the product of text features and audio features. The former can explain the relative importance of speech content and speaking style. The latter achieved the best performance, with an average accuracy of 60.1% over all evaluation criteria.

Introduction

Press conferences are held on important occasions such as the announcement of new political initiatives, the inauguration of presidents, governors, or CEOs, and the handling of scandals. During a press conference, speakers must answer questions raised by journalists. Because press conferences have the power to shape public opinion, it is important that speakers be trained in advance, often with the help of consulting firms. However, such consulting services are relatively expensive.

To provide convincing analysis and feedback for this kind of service with greater convenience and effectiveness, many studies have applied machine learning technologies to the automatic analysis of speech data. Speech data carries multimodal information, including the text of the script, the audio of the delivery, and gestures made during a speech. Danner et al. (2018) focused on gesture detection to quantitatively analyze speeches. Although speeches are typically accompanied by gestures, obvious gestures are rare at press conferences. Yi et al. (2020) proposed an automatic evaluation system for press conferences based on the text content of the speakers' speeches. However, the speech content alone is insufficient for evaluating speakers because their speaking styles also affect the evaluation. To overcome this problem, this study takes multimodal data into consideration, including both the text data and the audio data of a press conference speech, to simulate the evaluation of the professional consultant team.

This study proposes a multimodal neural network that automatically predicts the evaluation of a speech at a press conference, simulating the professional consultant team. It contains a language model and an audio model that learn the representations of text features and audio features, respectively, as well as a feature fusion network for feature fusion and classification. The language model includes a long short-term memory (LSTM) network with an attention layer. Each word is represented by a token embedding, obtained using embeddings from language models (ELMo), and a type embedding. The audio model consists of a shared convolutional neural network (CNN) that extracts segmental features; the audio feature of a sample is the weighted sum of these segmental features. Two feature fusion approaches are proposed: a shared attention network and the product of text features and audio features. The former provides useful interpretability by dynamically weighing the importance of text features against audio features for each criterion. The latter achieves better prediction performance, as the sketch below illustrates.
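To make the architecture concrete, the following is a minimal PyTorch sketch of the components described above. It is an illustrative reconstruction rather than the authors' code: the layer widths, embedding dimensions, type vocabulary, and classifier head are assumptions, and the ELMo token embeddings are assumed to be precomputed.

```python
# Illustrative sketch of the described architecture (not the authors' code).
# Assumed: precomputed 1024-d ELMo embeddings, 64 mel bins, >= 64 audio frames
# per segment, and a 2-entry type vocabulary (e.g., question vs. answer).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageModel(nn.Module):
    """LSTM over ELMo token embeddings + type embeddings, with attention pooling."""
    def __init__(self, elmo_dim=1024, type_vocab=2, type_dim=16, hidden=256):
        super().__init__()
        self.type_emb = nn.Embedding(type_vocab, type_dim)
        self.lstm = nn.LSTM(elmo_dim + type_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)  # scores each timestep

    def forward(self, elmo_tokens, type_ids):
        # elmo_tokens: (B, T, elmo_dim); type_ids: (B, T)
        x = torch.cat([elmo_tokens, self.type_emb(type_ids)], dim=-1)
        h, _ = self.lstm(x)                                # (B, T, hidden)
        a = F.softmax(self.attn(h).squeeze(-1), dim=-1)    # attention over tokens
        return (a.unsqueeze(-1) * h).sum(1)                # (B, hidden) weighted sum

class AudioModel(nn.Module):
    """Six-layer CNN applied to each audio segment, with attention over segments."""
    def __init__(self, hidden=256):
        super().__init__()
        layers, c_in = [], 1
        for c_out in (16, 32, 64, 64, 128, hidden):        # six conv layers (widths assumed)
            layers += [nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
            c_in = c_out
        self.cnn = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1))
        self.attn = nn.Linear(hidden, 1)

    def forward(self, segments):
        # segments: (B, S, n_mels, frames) -- S spectrogram segments per sample
        B, S = segments.shape[:2]
        f = self.cnn(segments.flatten(0, 1).unsqueeze(1)).flatten(1)  # (B*S, hidden)
        f = f.view(B, S, -1)
        a = F.softmax(self.attn(f).squeeze(-1), dim=-1)    # importance of each segment
        return (a.unsqueeze(-1) * f).sum(1)                # (B, hidden) weighted sum

class SharedAttentionFusion(nn.Module):
    """Fusion variant 1: a shared attention network weighs the two modalities,
    exposing how much a prediction relies on content vs. speaking style."""
    def __init__(self, hidden=256):
        super().__init__()
        self.score = nn.Linear(hidden, 1)

    def forward(self, text_feat, audio_feat):
        feats = torch.stack([text_feat, audio_feat], dim=1)   # (B, 2, hidden)
        w = F.softmax(self.score(feats).squeeze(-1), dim=1)   # modality weights
        return (w.unsqueeze(-1) * feats).sum(1), w            # fused feature + weights

class MultimodalNet(nn.Module):
    """Fusion variant 2: elementwise product of text and audio features
    (the better-performing approach), followed by a 3-way classifier."""
    def __init__(self, hidden=256, n_classes=3):
        super().__init__()
        self.text = LanguageModel(hidden=hidden)
        self.audio = AudioModel(hidden=hidden)
        self.cls = nn.Linear(hidden, n_classes)  # positive / neutral / negative

    def forward(self, elmo_tokens, type_ids, segments):
        fused = self.text(elmo_tokens, type_ids) * self.audio(segments)
        return self.cls(fused)
```

As a usage sketch, `MultimodalNet()(elmo, types, segs)` with `elmo` of shape (batch, tokens, 1024), `types` of shape (batch, tokens), and `segs` of shape (batch, segments, 64, frames) returns per-class logits for one criterion; the shared attention variant additionally returns the modality weights that support the interpretability discussed above.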

The proposed automatic evaluation system is constructed and verified on a real-world press conference dataset collected from YouTube. The ground truth labels evaluating the speakers were collected from a professional consultant team, whose evaluation was based on several criteria rather than a single overall score. For each criterion, a speech is classified into one of three categories: positive, neutral, or negative. The proposed multimodal neural network achieved an average accuracy of 60.1% over the 11 evaluation criteria.
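As a worked example of the reported metric, the sketch below computes accuracy per criterion over the three classes and averages it across the 11 criteria. The label arrays are randomly generated stand-ins, not the paper's data.

```python
# Hypothetical illustration of the evaluation metric: per-criterion accuracy
# averaged over the 11 criteria. Labels are random placeholders, not real data.
import numpy as np

CLASSES = ("positive", "neutral", "negative")
n_criteria, n_samples = 11, 40

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=(n_criteria, n_samples))  # consultant labels
y_pred = rng.integers(0, 3, size=(n_criteria, n_samples))  # model predictions

per_criterion = (y_true == y_pred).mean(axis=1)  # accuracy for each criterion
print(f"average accuracy over {n_criteria} criteria: {per_criterion.mean():.3f}")
```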
