
Gesture Learning by Imitation Architecture for a Social Robot

Copyright © 2013. 20 pages.
DOI: 10.4018/978-1-4666-2672-0.ch013

MLA

Bandera, J.P., J.A. Rodríguez, L. Molina-Tanco and A. Bandera. "Gesture Learning by Imitation Architecture for a Social Robot." Robotic Vision: Technologies for Machine Learning and Vision Applications. IGI Global, 2013. 211-230. Web. 23 Jul. 2014. doi:10.4018/978-1-4666-2672-0.ch013



Abstract

This chapter describes a gesture learning by imitation architecture for a social robot. The description is based on the identification of a set of generic components that can be found in any learning by imitation architecture, and it highlights the main contribution of the proposed architecture: the use of an inner human model to help the robot perceive, recognize, and learn human gestures. This model allows different robots to share the same perceptual and knowledge modules. Experimental results show that the proposed architecture meets the requirements of learning by imitation scenarios, and that it can be integrated into complete software structures for social robots, involving complex attention mechanisms and decision layers.

Introduction

Robots have been used extensively in industrial environments for the last fifty years. Industrial robots are designed to perform repetitive, predictable tasks, but they cannot easily adapt or learn new behaviours (Craig, 1986). To execute their programmed tasks they need to sense only a constrained set of environmental parameters, so the perceptual systems mounted on industrial robots are simple, practical and task-oriented. Moreover, they are designed to work in environments in which human presence is limited and controlled, if allowed at all. Thus, while their usefulness is evident, industrial robots are strongly limited. To remove these limitations, a new generation of robots began to appear more than thirty-five years ago (Inoue, Tachi, Nakamura, Hirai, Ohyu, Hirai, Tanie, Yokoi & Hirukawa, 2001). These robots were designed to cooperate with people in everyday activities, to adapt to uncontrolled environments and new tasks, and to become engaging companions for people to interact with. They usually benefit from sharing human perceptual and motor capabilities, and thus the term humanoid robot was coined to name these agents. In the last decade, however, the difficulties of creating robots that resemble human beings have favoured the use of the more generic term social robot. Today it is accepted that, although humanoid robots are certainly designed to be social, social robots do not need to be humanoid.

According to an early definition (Dautenhahn & Billard, 1999), social robots are agents designed to be part of a heterogeneous group. They should be able to recognize, explicitly communicate with, and learn from other individuals in this group. They also possess a history: they sense and interpret their environment in terms of their own experience. While this is a generic definition, in practice social robots are designed to work in human societies. Thus, later definitions present social robots as agents that have to interact with people (Breazeal, Brooks, Gray, Hancher, McBean, Stiehl & Strickon, 2003). This chapter follows the same ideas, and understands social robots as “robots that work in real social environments, and that are able to perceive, interact with and learn from other individuals, being these individuals people or other social agents” (Bandera, 2010, p. 9).

Social robots can achieve learning in different ways. Individual learning mechanisms (e.g. trial-and-error, imprinting, classical conditioning) are one option. However, applying them to a social robot may lead it to learn incorrect, disturbing or even dangerous behaviours, so they should be restricted to specific scenarios and tasks, such as games based on controlled stigmergy (Breazeal et al., 2003; Bandera, 2010). Social learning mechanisms are a different option, in which a human teacher supervises the learning process and thereby avoids most of the issues of individual learning. Among the different social learning strategies, learning by imitation is one of the most intuitive and powerful.

This chapter describes a robot learning by imitation (RLbI) architecture that provides a social robot with the ability to learn and imitate upper-body social gestures. This architecture, the main topic of the first author's thesis (Bandera, 2010), uses an interface based on a pair of stereo cameras and a model-based perception component to capture human movements from input image data. Perceived human motion is segmented into discrete gestures and represented using features, which are subsequently employed to recognize and learn gestures. One of the main differences between this proposal and previous approaches is that all these processes are executed in the human motion space, not in the robot motion space. This strategy avoids constraining the perceptual capabilities of the robot by its physical limitations, and it eases sharing knowledge among different robots. Only when the social robot needs to perform physical imitation is a translation module used, which combines different strategies to produce valid robot motion from learned human gestures.
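The pipeline above (represent observed gestures with features, recognize and learn them in the human motion space, and translate to robot joints only when physical imitation is required) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the component names, the mean/range features and the nearest-neighbour matching are all simplifying assumptions introduced here.

```python
import math

def extract_features(gesture):
    """Summarize a gesture (a list of per-frame joint-angle tuples, in the
    *human* motion space) by the mean and range of motion of each joint."""
    features = []
    for trajectory in zip(*gesture):          # per-joint angle trajectories
        features.append(sum(trajectory) / len(trajectory))   # mean angle
        features.append(max(trajectory) - min(trajectory))   # range of motion
    return features

class GestureLibrary:
    """Learned gestures live in the human space, so robots with different
    bodies can share the same recognition and knowledge modules."""
    def __init__(self):
        self.known = {}                       # label -> feature vector

    def learn(self, label, gesture):
        self.known[label] = extract_features(gesture)

    def recognize(self, gesture, max_dist=1.0):
        """Return the label of the closest known gesture, or None."""
        feats = extract_features(gesture)
        best, best_dist = None, max_dist
        for label, ref in self.known.items():
            d = math.dist(feats, ref)
            if d < best_dist:
                best, best_dist = label, d
        return best

def translate_to_robot(gesture, joint_limits):
    """Map a human-space gesture onto a particular robot only when physical
    imitation is requested, clamping each angle to that robot's limits."""
    return [tuple(min(max(angle, lo), hi)
                  for angle, (lo, hi) in zip(frame, joint_limits))
            for frame in gesture]
```

Because `GestureLibrary` stores only human-space features, the same library could in principle be shared by several robots; each robot contributes only its own `joint_limits` to `translate_to_robot` when it actually has to move.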


Complete Chapter List

Table of Contents

Foreword
Eduardo Nebot

Preface
José García-Rodríguez, Miguel Cazorla

Chapter 1. Face Recognition with Active Appearance Model (AAM)
Patrycia Barros de Lima Klavdianos, Lourdes Mattos Brasil, Jairo Simão Santana Melo
Recognition of human faces has been a fascinating subject in research field for many years. It is considered a multidisciplinary field because it...

Chapter 2. Uniform Sampling of Rotations for Discrete and Continuous Learning of 2D Shape Models
Xavier Perez-Sala, Laura Igual, Sergio Escalera, Cecilio Angulo
Different methodologies of uniform sampling over the rotation group, SO(3), for building unbiased 2D shape models from 3D objects are introduced and...

Chapter 3. Comparative Analysis of Temporal Segmentation Methods of Video Sequences
Marcelo Saval-Calvo, Jorge Azorín-López, Andrés Fuster-Guilló
In this chapter, a comparative analysis of basic segmentation methods of video sequences and their combinations is carried out. Analysis of...

Chapter 4. Security Applications Using Computer Vision
Sreela Sasi
Computer vision plays a significant role in a wide range of homeland security applications. The homeland security applications include: port...

Chapter 5. Visual Detection in Linked Multi-Component Robotic Systems
Jose Manuel Lopez-Guede, Borja Fernandez-Gauna, Ramon Moreno, Manuel Graña
In this chapter, a system to identify the different elements of a Linked Multi-Component Robotic System (L-MCRS) is specified, designed, and...

Chapter 6. Building a Multiple Object Tracking System with Occlusion Handling in Surveillance Videos
Raed Almomani, Ming Dong
Video tracking systems are increasingly used day in and day out in various applications such as surveillance, security, monitoring, and robotic...

Chapter 7. A Robust Color Watershed Transformation and Image Segmentation Defined on RGB Spherical Coordinates
Ramón Moreno, Manuel Graña, Kurosh Madani
The representation of the RGB color space points in spherical coordinates allows to retain the chromatic components of image pixel colors, pulling...

Chapter 8. Computer Vision Applications of Self-Organizing Neural Networks
José García-Rodríguez, Juan Manuel García-Chamizo, Sergio Orts-Escolano, Vicente Morell-Gimenez, José Antonio Serra-Pérez, Anatassia Angelolopoulou, Alexandra Psarrou, Miguel Cazorla, Diego Viejo
This chapter aims to address the ability of self-organizing neural network models to manage video and image processing in real-time. The Growing...

Chapter 9. A Review of Registration Methods on Mobile Robots
Vicente Morell-Gimenez, Sergio Orts-Escolano, José García-Rodríguez, Miguel Cazorla, Diego Viejo
The task of registering three dimensional data sets with rigid motions is a fundamental problem in many areas as computer vision, medical images...

Chapter 10. Methodologies for Evaluating Disparity Estimation Algorithms
Ivan Cabezas, Maria Trujillo
The use of disparity estimation algorithms is required in the 3D recovery process from stereo images. These algorithms tackle the correspondence...

Chapter 11. Real-Time Structure Estimation in Dynamic Scenes Using a Single Camera
Ashwin P. Dani, Zhen Kan, Nic Fischer, Warren E. Dixon
In this chapter, an online method is developed for estimating 3D structure (with proper scale) of moving objects seen by a moving camera. In...

Chapter 12. Intelligent Stereo Vision in Autonomous Robot Traversability Estimation
Lazaros Nalpantidis, Ioannis Kostavelis, Antonios Gasteratos
Traversability estimation is the process of assessing whether a robot is able to move across a specific area. Autonomous robots need to have such an...

Chapter 13. Gesture Learning by Imitation Architecture for a Social Robot
J.P. Bandera, J.A. Rodríguez, L. Molina-Tanco, A. Bandera
This description is based on the identification of a set of generic components, which can be found in any learning by imitation architecture. It...

Chapter 14. Computer Vision for Learning to Interact Socially with Humans
Renato Ramos da Silva, Roseli Aparecida Francelin Romero
Computer vision is essential to develop a social robotic system capable to interact with humans. It is responsible to extract and represent the...

Chapter 15. Learning Robot Vision for Assisted Living
Wenjie Yan, Elena Torta, David van der Pol, Nils Meins, Cornelius Weber, Raymond H. Cuijpers, Stefan Wermter
This chapter presents an overview of a typical scenario of Ambient Assisted Living (AAL) in which a robot navigates to a person for conveying...

Chapter 16. An Integrated Framework for Robust Human-Robot Interaction
Mohan Sridharan
Developments in sensor technology and sensory input processing algorithms have enabled the use of mobile robots in real-world domains. As they are...

Chapter 17. Collaborative Exploration Based on Simultaneous Localization and Mapping
Domenec Puig
This chapter focuses on the study of SLAM taking into account different strategies for modeling unknown environments, with the goal of comparing...

Chapter 18. An Embedded Vision System for RoboCup
P. Cavestany Olivares, D. Herrero-Pérez, J. J. Alcaraz Jiménez, H. Martínez Barberá
In this chapter, the authors describe their vision system used in the Standard Platform League (SPL), one of the official leagues in RoboCup...

Chapter 19. Visual Control of an Autonomous Indoor Robotic Blimp
L. M. Alkurdi, R. B. Fisher
The problem of visual control of an autonomous indoor blimp is investigated in this chapter. Autonomous aerial vehicles have been an attractive...

Chapter 20. Selective Review of Visual Attention Models
Juan F. García, Francisco J. Rodríguez, Vicente Matellán
The purpose of this chapter is both to review some of the most representative visual attention models, both theoretical and practical, that have...

Chapter 21. Attentive Visual Memory for Robot Localization
Julio Vega, Eduardo Perdices, José María Cañas
Cameras are one of the most relevant sensors in autonomous robots. Two challenges with them are to extract useful information from captured images...

Chapter 22. Artificial Visual Attention Using Combinatorial Pyramids
E. Antúnez, Y. Haxhimusa, R. Marfil, W. G. Kropatsch, A. Bandera
Computer vision systems have to deal with thousands, sometimes millions of pixel values from each frame, and the computational complexity of many...