1. Introduction
A common way of controlling devices is through parts of the body such as the hands or legs, or through voice; however, when users with restricted mobility need to control a system or device, their disability must be considered when designing the interface. One way of building control interfaces for these users is to automatically analyze the users' natural movements or gestures from parts of the body such as the face, head, arms, hands, or fingers. Interfaces of this kind are commonly known as Natural User Interfaces (NUI), and they have become an active topic in the field of applications for users with restricted mobility (Kawarazaki et al., 2014; Lopes, 2017).
In Computer Science, the real-time analysis of gestures is closely related to the Motion Capture (MoCap) process, which mainly consists of reading movements through a digital sensor and then carrying out their analysis and recognition.
In particular, a wheelchair can be operated not only with a joystick but also by processing gestures from some part of the body; in the literature, such wheelchairs are usually called smart wheelchairs. Approaches for smart wheelchairs are based on some of the following operating modes (Leaman & La, 2017): Machine Learning, Following, Localization and Mapping, and Navigational Assistance:
- Machine Learning: includes specialized computer algorithms that are first trained on descriptive examples; according to the resulting training model (rules, separating hyperplanes, density functions, etc.), new cases are recognized;
- Following: focused on tracking the user's body in order to detect its position and analyze its behavior. Bayesian methods (Kalman filters, Hidden Markov Models, etc.) are commonly used to estimate a trajectory during tracking, following maximum a posteriori joint probability schemes;
- Localization and Mapping: since wheelchairs need to navigate safely in both indoor and outdoor spaces, this mode concerns systems that estimate coordinates in the real environment (using Global Positioning System (GPS) receivers, depth cameras, odometers, etc.) and map them to the virtual space handled by the wheelchair's perception system;
- Navigational Assistance: the goal of this mode is to provide wheelchairs with obstacle-avoidance systems that help when a collision or obstacle in the path is hard for the user to detect. Assistance algorithms commonly read information from sensors such as Laser Imaging Detection and Ranging (LIDAR), infrared cameras, stereoscopic cameras, and depth cameras.
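As an illustration of the Following mode above, the Bayesian trajectory estimation it describes can be sketched with a minimal one-dimensional Kalman filter that smooths noisy position readings of a tracked body part. The noise variances and measurements below are illustrative assumptions, not values from any particular wheelchair system.

```python
def kalman_1d(measurements, process_var=1e-3, sensor_var=0.25):
    """Estimate a 1-D trajectory from noisy position measurements."""
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: static motion model; uncertainty grows by process noise
        p += process_var
        # Update: blend prediction and measurement via the Kalman gain
        k = p / (p + sensor_var)
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

# Illustrative noisy readings of a head position hovering around 1.0
readings = [1.0, 1.2, 0.9, 1.1, 1.05, 0.95]
track = kalman_1d(readings)
```

Each estimate is a weighted compromise between the prediction and the new measurement, which is why the filtered trajectory fluctuates less than the raw readings; a full tracker would extend the state with velocity and work in two or three dimensions.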
In this work, we propose a NUI based on the Machine Learning and Following modes for controlling a wheelchair through face gestures made by users with restricted mobility; the gestures are automatically detected via Pattern Recognition using Artificial Neural Networks (ANN). This paper is organized as follows: the first section describes related work on NUIs for controlling wheelchairs, and the second section provides a descriptive analysis of that work; the research approach and artifact design are then presented, followed by a proof of concept of the proposed NUI; finally, conclusions are drawn.
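To make the recognition step concrete, the following is a hypothetical sketch of a tiny feed-forward ANN whose forward pass maps a (here two-dimensional) face-gesture feature vector to one of four wheelchair commands. The feature encoding, network size, and hand-picked weights are illustrative assumptions; a real system, as described in this paper, would learn the weights from labelled gesture examples.

```python
import math

COMMANDS = ["forward", "backward", "left", "right"]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def forward_pass(features, w_hidden, w_out):
    """Classify a gesture feature vector into a wheelchair command."""
    # Hidden layer: weighted sums squashed through a sigmoid activation
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)))
              for row in w_hidden]
    # Output layer: one linear score per command; pick the largest
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
    return COMMANDS[scores.index(max(scores))]

# Hand-picked placeholder weights: each hidden neuron responds to one feature
w_hidden = [[5.0, 0.0],
            [0.0, 5.0]]
w_out = [[4.0, -1.0],    # "forward"
         [-1.0, 4.0],    # "backward"
         [1.0, 1.0],     # "left"
         [-4.0, -4.0]]   # "right"

command = forward_pass([1.0, 0.0], w_hidden, w_out)
```

With these placeholder weights, a feature vector dominated by the first component yields "forward" and one dominated by the second yields "backward"; training would replace the hand-picked weights with ones fitted to real face-gesture data.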
2. Related Works
In this section, we describe in a general way some of the relevant approaches to smart wheelchair control based on NUIs. Interesting and detailed surveys of smart wheelchair approaches can be found in (Leaman & La, 2017) and (Williams & Scheutz, 2017).
The earliest efforts to develop electric systems for moving wheelchairs were reported by George Klein during the Second World War era; since then, wheelchair systems have evolved, producing new methodologies not only for moving but also for controlling wheelchairs through NUIs that handle several kinds of input data according to the restrictions imposed by the users' disabilities.