Unsupervised Estimation of Facial Expression Intensity for Emotional Scene Retrieval in Lifelog Videos

Shota Sakaue, Hiroki Nomiya, Teruhisa Hochin
International Journal of Software Innovation (IJSI), Volume 6, Issue 4 (2018), pp. 30-45
ISSN: 2166-7160 | EISSN: 2166-7179 | EISBN13: 9781522546863 | DOI: 10.4018/IJSI.2018100103

Abstract

To facilitate the retrieval of impressive scenes from lifelog videos, this article proposes a method to estimate the intensity of the facial expression of a person appearing in a lifelog video. Previous work made it possible to estimate facial expression intensity, but it requires training samples that must be selected manually and carefully, which makes that method quite inconvenient. This article addresses the problem by introducing an unsupervised learning method. The proposed method estimates the facial expression intensity via clustering, on the basis of several facial features computed from the positional relationships of a number of facial feature points. To evaluate the proposed method, an experiment on facial expression intensity estimation is performed using a lifelog video data set, and the estimation performance is compared with that of the previous method.
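The abstract does not specify the exact facial features or clustering algorithm used. The sketch below is only an illustration of the general idea under stated assumptions: each frame is (hypothetically) described by normalized pairwise distances between facial feature points, the frames are clustered with k-means, the largest cluster is assumed to be near-neutral, and a frame's intensity is taken as its distance to that cluster's centroid. None of these choices are confirmed by the source.

```python
# Minimal sketch (not the authors' method) of unsupervised facial
# expression intensity estimation from facial feature points.
import numpy as np
from sklearn.cluster import KMeans


def facial_features(landmarks: np.ndarray) -> np.ndarray:
    """Feature vector for one frame: scaled pairwise distances.

    landmarks: array of shape (n_points, 2) with (x, y) positions.
    The distances are divided by their mean so the features are
    roughly invariant to face size (an assumption, not from the paper).
    """
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)
    scale = dists[iu].mean() + 1e-8
    return dists[iu] / scale


def estimate_intensity(frames_landmarks: list[np.ndarray], k: int = 2) -> np.ndarray:
    """Unsupervised intensity estimate per frame, rescaled to [0, 1].

    Clusters the feature vectors with k-means, assumes the largest
    cluster corresponds to near-neutral faces, and uses each frame's
    distance to that cluster's centroid as its expression intensity.
    """
    X = np.vstack([facial_features(lm) for lm in frames_landmarks])
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    neutral = np.bincount(km.labels_).argmax()  # assumption: neutral frames dominate
    dist = np.linalg.norm(X - km.cluster_centers_[neutral], axis=1)
    return (dist - dist.min()) / (dist.max() - dist.min() + 1e-8)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for 100 frames of 68 facial feature points each.
    frames = [rng.normal(size=(68, 2)) for _ in range(100)]
    print(estimate_intensity(frames)[:10])
```

In practice the landmark arrays would come from a facial feature point detector rather than random data; the clustering-based intensity shown here simply illustrates how an estimate can be obtained without manually selected training samples.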
