Direct Perception and Action Decision for Unknown Object Grasping

Hiroyuki Masuta, Tatsuo Motoyoshi, Kei Sawai, Ken'ichi Koyanagi, Toru Oshima, Hun-Ok Lim
Copyright © 2017 | Pages: 14
DOI: 10.4018/IJALR.2017010103

Abstract

This paper discusses the direct perception of an unknown object and the action decision for grasping it, using a depth sensor, for social robots. Conventional methods estimate accurate physical parameters whenever a robot attempts to grasp an unknown object. In contrast, we propose a perceptual system based on the invariant concept in ecological psychology, which perceives only the information relevant to the robot's action. We previously proposed a plane-detection-based approach for perceiving an unknown object. In this paper, we propose the "sensation of grasping," which is expressed using the inertia tensor and computed by fuzzy inference over the relations between the principal moments of inertia. The sensation of grasping supports the decision to grasp directly, without inferring physical values such as size, posture, and shape. Experimental results show that the sensation of grasping expresses, in a single parameter, the relative position and posture between the robot and the object as well as the embodiment of the robot arm. We also verify the validity of the action decision derived from the sensation of grasping.
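
For concreteness, the following is a minimal sketch of the inertia-tensor computation that underlies the sensation of grasping, assuming the object is given as a point cloud of unit-mass points from a depth sensor. The function name and the use of NumPy are illustrative choices, not the authors' implementation, and the subsequent fuzzy-inference step over the principal moments is omitted here.

```python
import numpy as np

def principal_moments(points):
    """Return the principal moments of inertia of a point cloud.

    points: (N, 3) array of x, y, z coordinates from a depth sensor,
    treated as unit point masses about the cloud's centroid.
    """
    p = points - points.mean(axis=0)  # shift to the centroid
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    # Standard inertia tensor for unit point masses.
    I = np.array([
        [np.sum(y**2 + z**2), -np.sum(x * y),       -np.sum(x * z)],
        [-np.sum(x * y),       np.sum(x**2 + z**2), -np.sum(y * z)],
        [-np.sum(x * z),      -np.sum(y * z),        np.sum(x**2 + y**2)],
    ])
    # Eigenvalues of the symmetric inertia tensor are the principal
    # moments of inertia; eigh returns them in ascending order.
    moments, _ = np.linalg.eigh(I)
    return moments  # [I1, I2, I3] with I1 <= I2 <= I3
```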

Introduction

Recently, various types of social robots have been developed for the next generation of human society (Beetz, 2011). For example, rehabilitation robots, education robots, and therapy robots are expected to advance society (Robins, 2005), while amusement robots, service robots, and partner robots are expected to make daily life more comfortable. Social robots have to work not only in specific environments such as factories, but also in general environments such as public facilities and homes (Mitsunaga, 2008). In a general environment, a social robot should act flexibly to fulfill specific tasks, even in an unknown environment; it therefore needs human-like perception and action decision in unknown environments. In discussing social robots, important considerations include embodiment, social interaction, perception, interpretation, communication, experience, and learning (Fong, 2003). In particular, Wainer et al. argue that a robot's physical body has a measurable effect on the performance and perception of social interaction (Wainer, 2006). The perception of a social robot cannot be separated from its embodiment. Therefore, we focus on the perception of a robot that has an embodiment, in order to realize a social robot working in an unknown environment.

Many previous studies have focused on recognizing unknown objects in real environments (Bay, 2008; Comaniciu, 2001). To perceive an unknown object, Jurie et al. proposed real-time 3D template matching to detect objects in 3D space (Jurie, 2001). However, this method requires a minimal level of predefined knowledge, either given by the operator or acquired by the robot itself. Consequently, a robot cannot perceive the environment and take suitable action immediately in an unexpected situation; in a real environment, it is nearly impossible for a human operator to provide all 3D template data beforehand. On the other hand, there are segmentation methods for detecting an unknown object. Random sample consensus (RANSAC) is a popular segmentation method (Fischler, 1981); RANSAC and progressive sample consensus (PROSAC) (Chum, 2005) are implemented in the Point Cloud Library (PCL) (Aldoma, 2012). These methods can detect common geometric features in point cloud data acquired by a depth sensor. However, they incur a significant computational cost to obtain accurate results. A minimal RANSAC plane-fitting sketch follows.
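
For concreteness, below is a minimal RANSAC plane-fitting sketch in Python/NumPy rather than the PCL implementation cited above; the iteration count and distance threshold are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=500, dist_thresh=0.01, rng=None):
    """Fit a dominant plane to a point cloud with basic RANSAC.

    points: (N, 3) array; dist_thresh is in the same units (e.g., meters).
    Returns (normal, d, inlier_mask) for the plane n . x + d = 0.
    """
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Hypothesis: a plane through 3 randomly sampled points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:  # degenerate (collinear) sample, try again
            continue
        n /= norm
        d = -n @ sample[0]
        # Consensus: points within dist_thresh of the candidate plane.
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    if best_plane is None:
        raise ValueError("no valid plane hypothesis found")
    return best_plane[0], best_plane[1], best_inliers
```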
