Wheelchair Control Based on Facial Gesture Recognition

J. Emmanuel Vázquez (Benemérita Universidad Autónoma de Puebla, Faculty of Computer Science, Puebla, Mexico), Manuel Martin-Ortiz (Benemérita Universidad Autónoma de Puebla, Laboratory of Supercomputing (LNS-BUAP- CONACYT), Puebla, Mexico), Ivan Olmos-Pineda (Benemérita Universidad Autónoma de Puebla, Faculty of Computer Science, Puebla, Mexico) and Arturo Olvera-Lopez (Benemérita Universidad Autónoma de Puebla, Faculty of Computer Science, Puebla, Mexico)
DOI: 10.4018/IJITSA.2019070106

Abstract

In this article, an approach for controlling a wheelchair using gestures from the user's face is presented; in particular, the commands for the basic operations required to drive a wheelchair are recognized. To recognize the facial gestures, an Artificial Neural Network is trained, since it is one of the most successful classifiers in Pattern Recognition. The proposed method is especially useful for controlling a wheelchair when the user has restricted (or no) mobility in parts of the body such as the legs, arms, or hands. According to the experimental results, the proposed approach provides a successful tool for controlling a wheelchair through a Natural User Interface based on machine learning.

1. Introduction

A common way of controlling devices is by using parts of the body such as the hands or legs, or even the voice; however, when users with restricted mobility need to operate a system or device, their disability must be considered when designing the interface. One way of building interfaces for such users is to automatically analyze the user's natural movements or gestures from some part(s) of the body, such as the face, head, arms, hands, or fingers. These kinds of interfaces are commonly known as Natural User Interfaces (NUI), and they are currently a topic of interest in the field of applications for users with restricted mobility (Kawarazaki et al., 2014; Lopes, 2017).

In Computer Science, the real-time analysis of gestures is essentially related to the Motion Capture (MoCap) process, which consists of reading and capturing movements through a digital sensor so that they can then be analyzed and recognized.

In particular, a wheelchair can be operated not only with a joystick but also by processing gestures from some part of the body; in the literature, these kinds of wheelchairs are usually called smart wheelchairs. Approaches for smart wheelchairs are based on some of the following operating modes (Leaman & La, 2017): Machine Learning, Following, Localization and Mapping, and Navigational Assistance:

  • Machine Learning: Includes specialized computer algorithms that are trained beforehand on descriptive examples; according to the resulting model (rules, separating hyperplanes, density functions, etc.), new cases are recognized;

  • Following: Focuses on tracking the user's body in order to detect its position and analyze its behavior. Bayesian methods (Kalman filters, Hidden Markov Models, etc.) are commonly used to estimate a trajectory during tracking, following maximum a posteriori joint-probability schemes;

  • Localization and Mapping: Since wheelchairs need to navigate safely in both indoor and outdoor spaces, this mode concerns systems that estimate coordinates in the real environment (using Global Positioning Systems (GPS), depth cameras, odometers, etc.) and map them to the virtual space handled by the wheelchair's perception system;

  • Navigational Assistance: The goal of this mode is to provide wheelchairs with obstacle-avoidance systems that help when a collision or an obstacle in the path is hard for the user to detect. Assistance algorithms are commonly based on reading information from sensors such as Laser Imaging Detection and Ranging (LIDAR), infrared cameras, stereoscopic cameras, and depth cameras.
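The Bayesian tracking mentioned in the Following mode can be illustrated with a minimal one-dimensional Kalman filter. This is only a sketch of the general technique, not the authors' implementation: the constant-position motion model and the noise variances below are illustrative assumptions.

```python
# Minimal 1-D Kalman filter sketch for the "Following" mode: estimating a
# user's position from noisy sensor readings. Motion model and noise
# variances are illustrative assumptions, not the authors' parameters.

def kalman_1d(measurements, process_var=1e-3, sensor_var=0.1):
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: constant-position model, so only uncertainty grows
        p += process_var
        # Update: blend the prediction with the new measurement
        k = p / (p + sensor_var)   # Kalman gain
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

# Noisy readings around a true position of 1.0
readings = [1.1, 0.9, 1.05, 0.98, 1.02]
est = kalman_1d(readings)
```

After a few updates the estimate settles near the true position, smoothing out the sensor noise; in a real smart wheelchair the state would also include velocity and two or three spatial dimensions.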

In this work, we propose a NUI based on the Machine Learning and Following modes for controlling a wheelchair through facial gestures made by users with restricted mobility; the gestures are automatically detected via Pattern Recognition using Artificial Neural Networks (ANN). This paper is organized as follows: the next section describes and analyzes related work on NUIs for controlling wheelchairs; after that, the research approach and artifact design are presented, followed by a proof of concept of the proposed NUI; finally, conclusions are drawn.
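The overall Pattern Recognition pipeline, mapping a facial-gesture feature vector to a wheelchair command, can be sketched as a single classification layer. Everything here is a hypothetical illustration: the feature names, the toy identity weights, and the command set are assumptions, not the authors' trained ANN.

```python
import numpy as np

COMMANDS = ["forward", "backward", "left", "right"]

# Hypothetical features extracted from the face:
# [mouth_open, eyebrow_raise, head_tilt_left, head_tilt_right]
W = np.eye(4)          # toy weights: each feature votes for one command
b = np.zeros(4)        # in practice, W and b come from training the ANN

def classify_gesture(features):
    """Map a gesture feature vector to one of the basic commands."""
    scores = W @ np.asarray(features, dtype=float) + b
    # Softmax gives class probabilities; argmax selects the command
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return COMMANDS[int(np.argmax(probs))]

cmd = classify_gesture([0.9, 0.1, 0.0, 0.2])   # strong "mouth open" gesture
```

A trained multilayer network would replace the single linear layer, but the command-dispatch structure stays the same: features in, probabilities out, highest-scoring command sent to the wheelchair controller.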

2. Related Work

In this section, we give a general description of some relevant approaches to smart wheelchair control based on NUIs. Interesting and detailed surveys of different approaches to smart wheelchairs can be found in (Leaman & La, 2017) and (Williams & Scheutz, 2017).

The earliest efforts to develop electric systems for moving wheelchairs were reported by George Klein during the Second World War. Since then, wheelchair systems have evolved with new methodologies not only for moving but also for controlling wheelchairs through NUIs, using several kinds of input data according to the restrictions imposed by the users' disabilities.
