Incorporation of Human Facial Expression Into Robot Control


Naoya Hasegawa (Kanagawa Institute of Technology, Japan) and Yoshihiko Takahashi (Kanagawa Institute of Technology, Japan)
DOI: 10.4018/978-1-7998-0137-5.ch012


An amusement robot that can recognize and respond to human facial expressions is proposed. Integrating human facial recognition into robot control systems is necessary to improve their entertainment value. We propose a new technology that uses a camera to read and respond to human facial expressions. As preliminary research, a portable bubble ejection robot, which produces soap bubbles like a fountain, was fabricated. The ejected soap bubbles are illuminated by LEDs, and the direction of the ejection nozzle and the amount of ejected bubbles are controlled by an Android tablet. Human facial expressions are then read by a facial expression detection system. This chapter explains the design of the bubble ejection robot, its control system that reads human facial expressions, and experimental results on human reactions to the bubble ejection. From these results, the relationship between soap bubble ejection and human emotion is revealed.
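The control loop described above maps a detected facial expression to two actuator parameters: the nozzle direction and the bubble ejection rate. The following is a minimal sketch of such a mapping; the expression labels, angles, and rates are illustrative assumptions, not the authors' actual parameters.

```python
# Hypothetical sketch of an expression-to-ejection mapping.
# All labels and numeric values are assumptions for illustration only.

def ejection_command(expression: str) -> dict:
    """Map a detected facial expression to a nozzle direction (degrees)
    and a bubble ejection rate (bubbles per second)."""
    table = {
        "smile":   {"nozzle_deg": 45, "bubble_rate": 10},  # eject more bubbles for a smile
        "neutral": {"nozzle_deg": 30, "bubble_rate": 4},
        "sad":     {"nozzle_deg": 20, "bubble_rate": 1},   # eject fewer bubbles for sadness
    }
    # Fall back to neutral behavior for unrecognized expressions
    return table.get(expression, table["neutral"])

print(ejection_command("smile"))
```

In practice, the expression label would come from a camera-based facial expression detector and the command would be sent to the nozzle servo and bubble pump via the Android tablet.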
Chapter Preview


At various exhibitions, amusement robots interact with humans. Questionnaires and interviews are used to evaluate how a robot performs and how humans feel (Odashima, Hata, & Goto, 2014). Studies of human facial expressions (Kitamura, Takemura, Iwai, & Sato, 2014) and of human reactions to robot facial expressions (Takahashi, Goto, & Shigemoto, 2006) have employed this strategy. However, the long time required to collect such feedback makes short-term or real-time evaluation difficult.

On the other hand, galvanic skin response (GSR) is often used to measure human emotion, and basic research on it dates back decades (Darrow & Gullickson, 1970; Yokota, Takahashi, Kondo, & Fujimori, 1959; Fujimori & Yokota, 1962; Iwama & Abe, 1953; Umehara & Sekido, 1976; Yokota, Sato, & Fujimori, 1963). Furthermore, GSR has been applied in various fields, such as music (Zimny & Weidenfeller, 1963), medicine (Lorens, Jr., & Darrow, 1962; Volavka, Matousek, & Roubicek, 1967), education (Watanabe & Amano, 1994; Inoshita, Ogata, Tokunaga, Bando, Yamada, & Marumoto, 1993; Kai, 1962), and robotic nursing (Takahashi, Hasegawa, Takahashi, & Hatakeyama, 2001; Takahashi, Hasegawa, Takahashi, & Hatakeyama, 2002; Shirai & Takahashi, 2018). Although GSR can detect whether a human is aroused or upset, it cannot distinguish emotions such as joyfulness or sadness.
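The limitation noted above can be made concrete with a small sketch: GSR-style detection typically flags arousal when skin conductance rises well above a resting baseline, yielding a binary arousal signal that says nothing about whether the emotion is positive or negative. The sample values and threshold below are assumptions for illustration, not measurements from the study.

```python
# Illustrative sketch of threshold-based arousal detection from GSR samples.
# Conductance values (microsiemens), baseline, and threshold are assumed.

def arousal_detected(samples, baseline, threshold=1.5):
    """Flag arousal when mean skin conductance exceeds the resting
    baseline by more than `threshold` microsiemens."""
    mean = sum(samples) / len(samples)
    return (mean - baseline) > threshold

print(arousal_detected([5.2, 5.4, 5.1], baseline=3.0))  # True: elevated conductance
print(arousal_detected([3.1, 3.0, 3.2], baseline=3.0))  # False: near resting level
```

The output is a single yes/no arousal flag, which is exactly why GSR alone cannot separate joy from sadness; this motivates the chapter's use of camera-based facial expression reading instead.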
