An Integrated Framework for Robust Human-Robot Interaction

Mohan Sridharan
DOI: 10.4018/978-1-4666-2672-0.ch016

Abstract

Developments in sensor technology and sensory input processing algorithms have enabled the use of mobile robots in real-world domains. As they are increasingly deployed to interact with humans in our homes and offices, robots need the ability to operate autonomously based on sensory cues and high-level feedback from non-expert human participants. Towards this objective, this chapter describes an integrated framework that jointly addresses the learning, adaptation, and interaction challenges associated with robust human-robot interaction in real-world application domains. The novel probabilistic framework consists of: (a) a bootstrap learning algorithm that enables a robot to learn layered graphical models of environmental objects and adapt to unforeseen dynamic changes; (b) a hierarchical planning algorithm based on partially observable Markov decision processes (POMDPs) that enables the robot to reliably and efficiently tailor learning, sensing, and processing to the task at hand; and (c) an augmented reinforcement learning algorithm that enables the robot to acquire limited high-level feedback from non-expert human participants, and merge human feedback with the information extracted from sensory cues. Instances of these algorithms are implemented and fully evaluated on mobile robots and in simulated domains using vision as the primary source of information in conjunction with range data and simplistic verbal inputs. Furthermore, a strategy is outlined to integrate these components to achieve robust human-robot interaction in real-world application domains.
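The third component named in the abstract, augmenting reinforcement learning with limited human feedback, can be pictured as a reward-shaping step: an occasional verbal cue from a participant is converted to a scalar and blended with the environmental reward before the value update. The sketch below is a minimal tabular Q-learning illustration of that idea, not the chapter's algorithm; the class name, the human_weight parameter, and the mapping of verbal cues to a value in [-1, 1] are assumptions made for this example.

```python
import random
from collections import defaultdict

class HumanAugmentedQLearner:
    """Tabular Q-learning in which occasional human feedback is folded
    into the reward signal as a shaping term (hypothetical illustration)."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1, human_weight=0.5):
        self.q = defaultdict(float)        # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.human_weight = human_weight   # trust placed in human feedback

    def select_action(self, state):
        # Epsilon-greedy selection over the current Q estimates.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, env_reward, human_feedback, next_state):
        # human_feedback is None when no input was given; otherwise a scalar
        # in [-1, 1] derived from a simple verbal cue ("good" / "bad").
        shaped = env_reward
        if human_feedback is not None:
            shaped += self.human_weight * human_feedback
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = shaped + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

Because the shaping term only biases the update, sparse or missing feedback leaves standard Q-learning behavior intact; how heavily human input should be weighted relative to sensory evidence is exactly the kind of question the chapter's framework addresses.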

Introduction

Mobile robots are increasingly being used in real-world application domains such as surveillance, navigation and healthcare due to the availability of high-fidelity sensors and the development of state-of-the-art algorithms to process sensory inputs. As we move towards deploying robots in our homes and offices, i.e., domains with a significant amount of uncertainty, there is a need to enable robots to learn from sensory cues and limited feedback from non-expert human participants. Human-robot interaction (HRI) poses many challenges such as autonomous operation, safety, engagement, robot design and interaction protocol design (Tapus, Mataric, & Scassellati, 2007). The focus of this chapter is on robust autonomy using sensory cues and high-level feedback from non-expert human participants. Many algorithms have been developed for autonomous operation based on sensory inputs, and for learning from manual training and domain knowledge. Real-world domains characterized by partial observability, non-deterministic action outcomes and unforeseen dynamic changes make it difficult for a robot to operate without any human feedback. At the same time, human participants may not have the expertise and time to provide elaborate and accurate feedback in complex domains (Fong, Nourbakhsh, & Dautenhahn, 2003; Thrun, 2004). Recent research has hence focused on enabling a robot to acquire human feedback when needed and merge human inputs with the information extracted from sensory cues. However, these algorithms require elaborate domain knowledge or fail to model the unreliability of human inputs, limiting their use to simplistic simulated domains or specific real-world applications (Knox & Stone, 2010; Rosenthal, Veloso, & Dey, 2011).

As an illustrative example, consider the robots in Figure 1(left) deployed to interact with humans in offices and homes. Such real-world domains are characterized by unforeseen dynamic changes, e.g., existing objects move, novel objects are introduced and the environmental factors change unpredictably. Assume that sensory cues consist primarily of vision (monocular and stereo) in conjunction with range data and simplistic verbal inputs. Also assume that the robots do not manipulate domain objects and do not have physical contact with humans. Each robot is equipped with core algorithms to process sensory cues with varying levels of reliability and computational complexity. Non-expert human participants provide limited high-level feedback in the form of simplistic verbal inputs that reinforce the robot’s actions or resolve ambiguities identified by the robot. Although it is not feasible to process all inputs or model the entire domain and still respond to dynamic changes, each robot has to exploit relevant sensory cues to operate reliably. Given such a scenario, this chapter focuses on the following key questions:

Figure 1.

(Left) examples of robot platforms relevant to the research described in this chapter; (right) integrated framework that uses the dependencies between learning, adaptation and interaction to achieve synergetic autonomy in real-world HRI

  • How to best enable a robot to adapt learning, sensing and processing to different scenarios and participants?

  • How to best enable a robot to seek limited high-level feedback from non-expert human participants, and robustly merge human inputs with the information extracted from sensory inputs? (A minimal fusion sketch follows this list.)
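The second question hinges on not taking human feedback at face value: since participants can be wrong or ambiguous, their input is better treated as one more noisy sensor and folded into the robot's belief probabilistically. The sketch below illustrates that idea with a simple Bayes update over candidate object labels; the function name, label set, and reliability value are assumptions for illustration, not the chapter's model.

```python
def fuse_human_input(belief, human_label, labels, reliability=0.8):
    """Merge a sensory belief (distribution over candidate labels) with a
    human verbal input, treating the human as a noisy sensor whose
    reliability is modelled explicitly rather than trusted outright."""
    fused = {}
    n = len(labels)
    for label in labels:
        # Likelihood of hearing `human_label` if `label` were true: the
        # stated reliability when they match, the residual probability
        # spread over the remaining labels otherwise.
        likelihood = reliability if label == human_label else (1.0 - reliability) / (n - 1)
        fused[label] = likelihood * belief[label]
    total = sum(fused.values())
    return {label: p / total for label, p in fused.items()}

# Example: visual processing is unsure whether an object is a "mug" or a
# "bowl"; a participant says "mug", and the belief sharpens accordingly.
belief = {"mug": 0.55, "bowl": 0.40, "box": 0.05}
print(fuse_human_input(belief, "mug", ["mug", "bowl", "box"]))
```

With a lower reliability value the same verbal input shifts the belief only slightly, which is one way a robot can remain robust to inaccurate or inconsistent human feedback.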

While sophisticated algorithms have been developed for the learning, adaptation and interaction challenges in isolation, the integration of these subfields to enable robust HRI remains an open problem, even as it presents new opportunities to address the existing challenges in the subfields (AAAI Symposium, 2012). This chapter describes a novel probabilistic framework that seeks to answer the questions listed above by jointly addressing the associated learning, adaptation and interaction challenges. The framework is composed of the following components:
