Artificial Intelligence Biosensing System on Hand Gesture Recognition for the Hearing Impaired

Kayal Padmanandam, Rajesh M. V., Ajay N. Upadhyaya, K. Ramesh Chandra, Chandrashekar B., Swati Sah
DOI: 10.4018/IJORIS.306194

Abstract

AI technologies have the potential to help deaf individuals communicate. To address the complexity of sign segmentation and the difficulty of capturing hand gestures, the authors present a sign language recognition (SLR) system built on Deep SLR together with a wearable surface electromyography (sEMG) biosensing device that converts sign language into written text or speech, allowing hearing people to better understand sign language and hand motions. Two armbands, each containing a biosensor and multi-channel sEMG sensors, are mounted on the forearms to accurately capture arm and finger movements. An earlier system, SignSpeaker, has a considerable limitation: it cannot recognise two-handed signs with a smartphone and smartwatch. To overcome these issues, this research proposes a new real-time, end-to-end SLR method. Deep SLR was implemented on Android and iOS smartphones and evaluated through comprehensive testing: the average word error rate for continuous sentence recognition is 9.6%, and detecting signals and recognising a sentence of six sign words takes less than 0.9 s, demonstrating Deep SLR's real-time recognition capability.
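For context, the 9.6% figure is a word error rate (WER), the standard edit-distance metric for continuous sequence recognition: the number of word substitutions, insertions, and deletions needed to turn the recognised sentence into the reference, divided by the reference length. A minimal Python sketch (illustrative, not taken from the paper):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed as Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[-1][-1] / len(ref)

# One substituted word in a six-word signed sentence -> WER of 1/6 (about 16.7%)
print(word_error_rate("I want to drink cold water", "I want to drink hot water"))
```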

Introduction

The major mode of communication between hearing-impaired persons and other populations is sign language (SL), which is represented through both manual and non-manual elements. The need for sign language tools that enhance communication for hearing-impaired people has long been recognised by the scholarly community. Implementing such applications can be difficult due to the great number of sign languages, but recent breakthroughs in AI and ML have helped to automate and improve these systems. Sign language recognition (SLR) is the application of sophisticated ML algorithms that reliably map human actions to isolated signs or continuous phrases.

Because of advancements in size and comfort, wearable sensors are becoming more common in health-monitoring applications (Kim-Campbell et al., 2019). Wearable biosensors can use ML algorithms for signal processing to deliver real-time monitoring. Local (in-sensor) signal processing offers lower communication bandwidth and radio power requirements than wirelessly streaming raw data to an external compute unit (Liu-Sacks et al., 2017). However, when a classifier's underlying method fails to accommodate a broad range of constraints, the model's classification accuracy degrades (Milosevic, Farella and Benaui, 2018). Furthermore, systems capable of in-sensor inference generally do not support in-sensor model updates (Pancholi and Joshi, 2019).
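As a rough illustration of such an in-sensor pipeline, the sketch below segments multi-channel sEMG into windows, extracts classic time-domain features, and trains an incrementally updatable classifier. The channel count, window length, feature set, and classifier are assumptions for illustration, not the configuration of the cited systems.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical setup: 8 sEMG channels, 200-sample analysis windows.
CHANNELS, WIN = 8, 200

def features(window: np.ndarray) -> np.ndarray:
    """Classic time-domain sEMG features per channel: mean absolute
    value, waveform length, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc])

# SGDClassifier supports partial_fit, so the model could in principle be
# updated on-sensor as new labelled windows arrive, i.e. the update
# capability the text notes is often missing.
clf = SGDClassifier()
rng = np.random.default_rng(0)  # synthetic data stands in for real sEMG
X = np.stack([features(rng.standard_normal((WIN, CHANNELS))) for _ in range(32)])
y = rng.integers(0, 4, 32)      # four hypothetical gesture classes
clf.partial_fit(X, y, classes=np.arange(4))
print(clf.predict(X[:4]))
```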

A gesture is a physical movement of the hands, fingers, arms, or other parts of the human body that allows people to communicate meaning and information with one another. The data-glove method and the vision-based approach are two alternative approaches to human–computer interaction (HCI). Studies of the vision-based approach have examined the detection and classification of hand motions. Using hand gestures is one of the most natural ways to create a convenient and adaptable interface between devices and users, and HCI systems apply them in areas such as virtual object manipulation and gaming. Hand tracking is an area of computer vision that deals with three key elements: hand segmentation, hand part identification, and hand tracking. Hand gestures are among the most effective communication approaches and the central notion in a gesture recognition system. They take one of two forms: a posture is a static hand shape held without movement, and a gesture is a dynamic hand motion. Any camera can capture hand gestures, although different cameras offer different resolutions, and most finger gestures can be detected by ordinary two-dimensional (2D) cameras.
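Of the three elements, hand segmentation typically comes first. A minimal sketch using OpenCV skin-colour thresholding (OpenCV 4.x assumed; the HSV threshold values are illustrative, not from the paper, and depend on lighting and skin tone):

```python
import cv2
import numpy as np

def segment_hand(frame_bgr: np.ndarray):
    """Rough skin-colour segmentation in HSV space. Returns a binary mask
    and the largest contour, a common first step before hand part
    identification and tracking."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Illustrative skin-tone range; real systems calibrate per user/scene.
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea) if contours else None
    return mask, hand
```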

One of the most common instances of a hand gesture system is sign language. It is a linguistic system that uses hand motions in addition to other bodily motions. Hearing-impaired people all across the world use sign languages as a primary means of communication. The three basic components of sign language are word-level sign vocabulary, non-manual features, and fingerspelling. Sign language is one of the most effective ways to communicate with hearing-impaired people.

Experiments presented by researchers have included object detection and object motion. Three-dimensional (3D) hand tracking is a hot topic in the gaming world. Recent film releases, such as Avatar, revolutionised cinema at the start of the decade by integrating content development and 3D technology with real performers, giving birth to a new genre. Following the breakthrough of 3D film, various electronics businesses concentrated their efforts on developing three-dimensional television (3DTV) technology. Researchers have proposed dome autostereoscopic displays, although the viewing position remains constrained. Stereo and multi-view are two separate technologies that rely on the brain to merge two views and produce the illusion of 3D.
