A Human Affect Recognition System for Socially Interactive Robots

Derek McColl, Goldie Nejat
Copyright: © 2014 |Pages: 20
DOI: 10.4018/978-1-4666-4607-0.ch015

Abstract

This chapter presents a real-time, robust affect classification methodology for socially interactive robots engaging in one-on-one human-robot interactions (HRI). The methodology is based on identifying a person’s body language in order to determine how accessible he/she is to a robot during the interactions. Static human body poses are determined by first identifying individual body parts and then utilizing an indirect 3D human body model that is invariant to different body shapes and sizes. The authors implemented and tested their technique using two different sensory systems in social HRI scenarios to demonstrate its robustness for the proposed application. In particular, the experiments consisted of integrating the proposed body language recognition and affect classification methodology with imaging-based sensory systems on the human-like socially interactive robot Brian 2.0, so that the robot could recognize affective body language during one-on-one interactions.

Introduction

Socially interactive robots are currently being designed to engage in convincing and natural social interactions with people for a wide variety of everyday applications. Emerging applications for these robots include assistants in health/elderly care (Chan et al., 2011; Tapus et al., 2009); helpers in the home/workplace (Hashimoto & Kobayashi, 2009); tour guides and greeters in museums, hospitals, and shopping malls (Haasch et al., 2004); and aids in security and defense (Belkhouche et al., 2006). In order for socially interactive robots to be effectively integrated and accepted within society, they must be able to communicate, function, and interact with people. Effective human-robot interaction (HRI) is therefore highly dependent on a robot’s ability to recognize various social spaces and social cues. Thus, an important design issue for social robots operating in person-centered environments is their ability to recognize and identify a person in an environment, and to judge that person’s intent and behavior in order to respond appropriately during interactions. By detecting a person’s mannerisms and actions during HRI, a social robot can work toward gaining the person’s acceptance and building a long-term relationship with the user. Observing this relationship can, in turn, provide insight into how humans adapt to and interact with social robots.

While both verbal and non-verbal (i.e., facial expressions, body gestures, and tone of voice) communication between humans can be used to convey affect, non-verbal communication has been found to be more meaningful than verbal content, particularly in Western cultures, in demonstrating affective qualities during one-on-one interactions (Mehrabian & Ferris, 1968; Argyle et al., 1971; Haase & Tepper, 1972; Tepper & Haase, 1978; Davis & Hadiks, 1994). To date, a great deal of work has been conducted on automated affect recognition techniques that determine human affect from paralanguage (pitch and volume of voice) (Sundberg et al., 2011; Hyun et al., 2007) and facial expressions (Mingli et al., 2010; Tian et al., 2005). In contrast, little attention has been paid to automated affect recognition systems that utilize body language, mainly due to the complexity and high number of degrees of freedom of the human body. However, body language has been found to play a vital role in conveying human intent, moods, attitudes, and affect (Gong et al., 2007). Thus, it is important that during social HRI a robot have the ability to recognize human body language so that it can better engage a person in an interaction through its own appropriate display of behaviors. In our work, we focus on developing a body language affect recognition technique for social robots in order to promote natural one-on-one interactions.

In this chapter, we present a unique, robust, automated methodology for identifying and categorizing human body language in order to determine how accessible a person is to a social robot during natural real-time HRI. The methodology utilizes imaging sensors (which can be placed directly on the robot) to perceive and characterize 3D upper body poses. Body poses are determined by first identifying individual body parts and then utilizing an indirect 3D human body model that is invariant to different body shapes and sizes. Once a person’s 3D pose is identified, the Davis Nonverbal States Scale (DNSS) is used to determine the person’s degree of accessibility (i.e., openness and rapport) toward the robot. In particular, the degree of accessibility is based on the static body poses naturally displayed by the person relative to the robot during one-on-one HRI. We have implemented and tested our technique using two different sensory systems in social HRI scenarios to demonstrate its robustness for the proposed application. Two sets of social HRI experiments are presented herein, each consisting of one-on-one interaction scenarios between a person and a human-like socially interactive robot, and each utilizing a different type of imaging sensor placed on the robot for body language recognition and classification.
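To make the pose-to-accessibility mapping more concrete, the Python sketch below illustrates one plausible way such a classification stage could be structured. It is a minimal illustration under stated assumptions: the joint set, the trunk-lean and arm-openness features, the thresholds, and the four-level scoring are placeholders introduced for the example, not the chapter's actual body-part identification, indirect 3D body model, or calibrated DNSS scoring.

    import numpy as np

    # Hypothetical upper-body joint set, e.g. from a depth-sensor skeleton tracker.
    JOINTS = ("head", "torso", "left_shoulder", "right_shoulder",
              "left_hand", "right_hand")

    def pose_features(joints):
        """Compute simple static-pose features from 3D joint positions (meters).

        `joints` maps joint names to np.array([x, y, z]) in the robot's frame,
        with +z pointing from the robot toward the person.
        """
        torso = joints["torso"]
        head = joints["head"]
        # Trunk lean: negative when the head is closer to the robot than the torso,
        # i.e., the person is leaning toward the robot.
        lean = head[2] - torso[2]
        # Arm openness: mean lateral distance of the hands from the torso,
        # normalized by shoulder width so it is invariant to body size.
        shoulder_width = np.linalg.norm(joints["left_shoulder"] - joints["right_shoulder"])
        hand_spread = (abs(joints["left_hand"][0] - torso[0]) +
                       abs(joints["right_hand"][0] - torso[0])) / 2.0
        openness = hand_spread / max(shoulder_width, 1e-6)
        return lean, openness

    def accessibility_level(joints, lean_thresh=0.05, open_thresh=0.5):
        """Map a static pose to a coarse accessibility score (1 = least, 4 = most).

        The thresholds and the 4-level mapping are illustrative assumptions,
        not the DNSS scoring used in the chapter.
        """
        lean, openness = pose_features(joints)
        toward = lean < -lean_thresh          # leaning toward the robot
        open_arms = openness > open_thresh    # arms held away from the trunk
        return 1 + int(toward) + 2 * int(open_arms)

    if __name__ == "__main__":
        # Example pose: leaning slightly toward the robot with open arms.
        example = {
            "head": np.array([0.00, 1.60, 1.40]),
            "torso": np.array([0.00, 1.20, 1.50]),
            "left_shoulder": np.array([-0.20, 1.45, 1.50]),
            "right_shoulder": np.array([0.20, 1.45, 1.50]),
            "left_hand": np.array([-0.35, 1.00, 1.40]),
            "right_hand": np.array([0.35, 1.00, 1.40]),
        }
        print(accessibility_level(example))  # prints 4 (most accessible)

Normalizing the hand spread by the shoulder width in this sketch mirrors the chapter's goal of invariance to different body shapes and sizes: the same open-arm pose yields a similar feature value for a small or a large person.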
