Fingers' Angle Calculation Using Level-Set Method


Ankit Chaudhary, Jagdish Lal Raheja, Karen Das, Shekhar Raheja
DOI: 10.4018/978-1-4666-6030-4.ch010

Abstract

In the current age, the use of natural communication in human-computer interaction is a familiar and well-established idea. Hand gesture recognition and gesture-based applications have gained significant popularity all over the world, with uses ranging from security to entertainment. These applications are generally real-time and require fast, accurate communication with machines. Gesture-based communication has few limitations, but vision-based techniques typically do not provide information about bent fingers. In this chapter, a novel method for fingertip detection and for calculating the angles of bent fingers on both hands is discussed. Angle calculation has previously been done with sensor-based gloves and devices; this study is conducted in the context of natural computing, calculating angles without any wired equipment, colors, markers, or other devices. Pre-processing and segmentation of the region of interest are performed in the HSV color space and in binary form, respectively. Fingertips are detected using the level-set method, and angles are calculated using geometrical analysis. The technique requires no training for the system to perform the task.
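As a rough illustration of the pre-processing and segmentation stage summarized above, the following Python/OpenCV sketch converts a frame to the HSV color space, thresholds it against an assumed skin-color range, and produces a binary region-of-interest mask. The threshold values are placeholders, not the chapter's calibrated parameters; the resulting mask is what would feed the level-set-based fingertip detection and the geometric angle calculation.

```python
import cv2
import numpy as np

def segment_hand(frame_bgr):
    """Segment a probable hand region and return a binary mask.

    The HSV bounds below are illustrative placeholders, not the
    chapter's calibrated values.
    """
    # Convert from OpenCV's default BGR ordering to HSV.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Assumed skin-color range in HSV; tune for lighting and skin tone.
    lower = np.array([0, 30, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # Clean up speckles and small holes with morphological operations.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Binary image: 255 inside the region of interest, 0 elsewhere.
    return mask
```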

1. Introduction

Robust and natural hand gesture recognition from video, or in real time, is one of the most important challenges for researchers working in the area of computer vision. Gesture recognition systems are very helpful in everyday life, as they can be used by ordinary people without any training: everybody knows how to use their hands and what a given sign means. If computers could understand gestures efficiently, they would be more useful to everyone. Gesture recognition can also help in controlling devices, interacting with machine interfaces, monitoring human activities, and many other applications. Generally defined as any meaningful body motion, gestures play a central role in everyday communication and often convey emotional information about the gesticulating person. Some specific gestures are pre-defined within a particular community or society, as in sign language, but many hand gestures are simply arbitrary shapes. Since not all hand shapes are pre-defined, one needs to track all shapes to control machines efficiently with hand gestures.

During the last few decades, researchers have been interested in automatically recognizing human gestures for several applications such as sign language recognition, socially assistive robotics, directional indication through pointing, control through gestures, alternative computer interfaces, immersive game technology, virtual controllers, affective computing, and remote control. For further details on gesture applications, see (Chaudhary et al., 2011; Mitra & Acharya, 2007). Mobile companies are also trying to make handsets that can recognize gestures and be operated over small distances (Kroeker, 2010; Tarrataca, Santos, & Cardoso, 2009). There have been many non-natural methods that rely on devices or colored papers and rings. In the past, researchers have employed gloves (Sturman & Zeltzer, 1994), color strips (Do et al., 2006; Premaratne & Nguyen, 2007; Kohler, 1996; Bretzner et al., 2001), or full-sleeve shirts (Kim & Fellner, 2004; Sawah et al., 2007) in image-processing-based methods to obtain better segmentation results. A preliminary part of this work has been published in (Chaudhary et al., 2012).

It has long been anticipated that natural computing methods would take over other technologies. Pickering (2005) stated that “initially touch-based gesture interfaces would be popular, but non-contact gesture recognition technologies would be more attractive finally”. Recently, human gesture recognition has attracted considerable research attention in both software and hardware environments. Many mobile companies, such as Samsung and Micromax, have implemented hand gestures as a way to control mobile applications, which has made the approach more popular in the public domain. Gesture recognition can also be used to control a robotic hand that mimics human hand actions, protecting human life in many commercial and military operations. One such robotic hand is Dexterous from Shadow Robot®.

Many mechanical (Huber & Grupen, 2002; Lim et al., 2000) and image-processing-based (Nolker & Ritter, 2002) techniques are available in the literature to interpret single-hand gestures. However, in a generic scenario humans express their actions with both hands together, and it is a new challenge to take into account the gestures depicted by both hands simultaneously. The computational time for both-hand gestures is greater than that required for a single hand. The approach employed for a single hand can also be used for this purpose with a slight modification of the algorithm; however, in a serial implementation the process may consume twice the time required for single-hand gesture recognition.
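To make the reuse of the single-hand approach concrete, the sketch below builds on the earlier segment_hand mask, keeps the two largest skin-colored contours as candidate hands, and passes each one to a single-hand analysis routine. Here analyze_hand is a hypothetical placeholder for the chapter's fingertip-detection and angle-calculation steps, and the serial loop illustrates why the processing time roughly doubles for two hands.

```python
import cv2

def analyze_both_hands(mask, analyze_hand):
    """Apply a single-hand routine to each of the two largest regions.

    `analyze_hand` is a hypothetical callable standing in for the
    chapter's fingertip detection and angle calculation on one hand.
    """
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Keep the two largest contours as the candidate hand regions.
    hands = sorted(contours, key=cv2.contourArea, reverse=True)[:2]

    results = []
    for contour in hands:  # serial loop: roughly doubles the run time
        results.append(analyze_hand(contour))
    return results
```

Running the two hand analyses in parallel (for example, in separate threads or processes) would be one way to avoid the doubled serial cost, at the price of added implementation complexity.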
