Both Hands’ Fingers’ Angle Calculation from Live Video

Ankit Chaudhary (Birla Institute of Technology and Science, Pilani, India), Jagdish L. Raheja (Machine Vision Lab, CSIR-Central Electronic Engineering Research Institute, India), Karen Das (Assam Don Bosco University, India) and Shekhar Raheja (Digital System Group, CSIR-Central Electronic Engineering Research Institute, India)
Copyright: © 2012 | Pages: 11
DOI: 10.4018/ijcvip.2012040101

Abstract

In the last few years, gesture recognition and gesture-based human-computer interaction have gained significant popularity among researchers all over the world, with applications ranging from security to entertainment. Gesture recognition is a form of biometric identification that relies on data acquired from the gesture depicted by an individual. This data, which can be either two-dimensional or three-dimensional, is compared against a database of individuals or checked against thresholds appropriate to the problem at hand. In this paper, a novel method for calculating the bending angles of the fingers of both hands is discussed, and its application to robotic hand control is presented. For the first time, such a study has been conducted in the area of natural computing for calculating angles without using any wired equipment, colors, markers, or other devices. The system deploys a simple camera and captures images. Pre-processing and segmentation of the region of interest are performed in the HSV color space and in binary format, respectively. The technique presented in this paper requires no training for the user to perform the task.
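As a rough illustration of the pre-processing and segmentation step described in the abstract, the sketch below converts a camera frame to HSV and thresholds it into a binary skin mask using OpenCV. The HSV bounds and the helper name segment_hand are illustrative assumptions, not the paper's published parameters.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for skin tones; the paper does not publish its
# exact thresholds, so these values are illustrative assumptions only.
SKIN_HSV_LOW = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HSV_HIGH = np.array([25, 255, 255], dtype=np.uint8)

def segment_hand(frame_bgr):
    """Return a binary mask of skin-colored pixels in a BGR camera frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_HSV_LOW, SKIN_HSV_HIGH)
    # Morphological opening/closing to suppress speckle noise in the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

cap = cv2.VideoCapture(0)  # a simple webcam, as in the paper's setup
ok, frame = cap.read()
if ok:
    binary = segment_hand(frame)
    cv2.imwrite("hand_mask.png", binary)
cap.release()
```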
Article Preview

1. Introduction

Gesture recognition from video sequences or interactive input in real time is one of the most important challenges for scientists and researchers working in computer vision and image understanding. Gesture recognition systems are very helpful in everyday life, as they give machines the ability to identify, recognize, and interpret human gestures and emotions. They can also help in controlling devices, interacting with machine interfaces, monitoring human activities, and many other applications. Generally defined as any meaningful body motion, gestures play a central role in everyday communication and often convey emotional information about the gesticulating person.

During the last few decades, researchers have been interested in automatically recognizing human gestures for several applications such as sign language recognition, socially assistive robotics, directional indication through pointing, control through gestures, alternative computer interfaces, immersive game technology, virtual controllers, affective computing, and remote control. For further details on gesture applications, see Mitra and Acharya (2007) and Chaudhary, Raheja, Das, and Raheja (2011). Mobile companies are also trying to make handsets that can recognize gestures and operate over small distances (Kroeker, 2010; Tarrataca, Santos, & Cardoso, 2009). In the past, researchers have employed gloves (Sturman & Zeltzer, 1994), color strips (Do, Jung, Jung, Jang, & Bien, 2006; Premaratne & Nguyen, 2007; Kohler, 1996; Bretzner, Laptev, Lindeberg, Lenman, & Sundblad, 2001), or full-sleeved shirts (Sawah, Joslin, Georganas, & Petriu, 2007; Kim & Fellner, 2004) in image-processing-based methods to obtain better segmentation results.

In 2005, Pickering stated that while touch-based gesture interfaces would be popular initially, non-contact gesture recognition technologies would ultimately prove more attractive. Today, Pickering's prediction has proven true: human gesture recognition now attracts intense research attention in both software and hardware. Hand gestures can be very useful, especially for issuing commands to a computer or a robotic system. They can be used to build a robotic hand that mimics human hand actions and can safeguard human lives when deployed in commercial and military operations.

Many mechanical (Huber & Grupen, 2002; Lim, Oh, Son, You, & Kim, 2000) and image-processing-based techniques are available in the literature for interpreting single-hand gestures. However, as humans express their actions with both hands along with words, accounting for the gestures depicted by both hands simultaneously is a new challenge. For both hands, the computational time required is obviously greater than that required for a single hand. The single-hand approach can be reused for this purpose with a slight modification of the algorithm, but the process then takes almost twice the time required for single-hand gesture recognition.

This approach does not always take double the time to calculate the finger angles of both hands: if both hands point in the same direction, the computational time is similar to the single-hand case. In real life, however, both hands do not always point in the same direction, so the algorithm must be applied twice to the image frame. This costs extra time, and in real-time applications the computational time must be kept very small. Therefore, a new approach is required for approximating the finger angles of both hands in parallel; a hedged sketch of such a scheme follows the figure caption below. Figure 1 shows the block diagram of our approach.

Figure 1. Algorithmic flow for angle approximation for both hands
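The parallel scheme motivated above could look roughly like the following sketch, which runs a single-hand angle routine on both hand regions concurrently rather than sequentially. Both estimate_finger_angles (a stub, not the paper's algorithm) and the connected-components split used to isolate the two hands are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

import cv2
import numpy as np

def estimate_finger_angles(hand_mask):
    """Hypothetical stand-in for the single-hand angle algorithm.

    The paper's actual per-hand method is not reproduced here; this stub
    only marks where that computation would run.
    """
    ...

def angles_for_both_hands(binary_mask):
    # Split the segmented frame into per-hand regions via connected
    # components, keeping the two largest blobs as the two hands.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary_mask)
    blob_ids = sorted(range(1, n), key=lambda i: stats[i, cv2.CC_STAT_AREA],
                      reverse=True)[:2]
    hand_masks = [np.uint8(labels == i) * 255 for i in blob_ids]

    # Run the single-hand algorithm on both hands concurrently, so the
    # wall-clock cost stays close to the one-hand case instead of doubling.
    with ThreadPoolExecutor(max_workers=2) as pool:
        return list(pool.map(estimate_finger_angles, hand_masks))
```

On CPython, a genuine speed-up from threads depends on the per-hand routine releasing the GIL (as OpenCV's C++ kernels do); a ProcessPoolExecutor would be the usual alternative for pure-Python workloads.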
