Conversion of Tactile Sign Language into English for Deaf/Dumb Interaction

Urmila Shrawankar, Sayli Dixit
Copyright: © 2017 | Pages: 15
DOI: 10.4018/IJNCR.2017010104

Abstract

Natural language is the way normal human beings communicate; it includes spoken language, written language, and body gestures such as head gestures, hand gestures, facial expressions, and lip motion. Speech- and hearing-impaired people, on the other hand, use sign language, which most people do not understand, so they face communication problems in society. Human interpreters can bridge this gap, but they are costly and not an efficient solution. There is therefore a need for a system that translates sign language into a natural language that everyone can understand. The system proposed and explained in this paper addresses this problem. Sign recognition is performed using the CAMSHIFT and P2DHMM algorithms followed by a Haar cascade classifier. After sign recognition, the language-technology techniques of POS tagging and an LALR parser are used to convert the recognized sign words into an English sentence. To date, no other system has worked on sentence framing. Results show that the system achieves 92% accuracy, which will help bridge the gap between impaired and normal people.
Article Preview

Implementation Details

The complete system (Figure 1) is divided into two parts as follows:

  1. Identification of respective words from tactile sign language
  2. Using NLP for sentence formation (see the sketch after this list)
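
Phase II is not detailed in this preview, but the abstract names POS tagging followed by an LALR parser for sentence framing. The Python sketch below illustrates only the POS-tagging step and is not the authors' implementation; the recognized word list, the use of NLTK's tagger, and the example sentence are assumptions made for illustration.

# Illustrative only: POS tagging of recognized sign words before parsing.
# Assumes NLTK is installed and its tagger model has been downloaded, e.g.:
#   pip install nltk
#   python -m nltk.downloader averaged_perceptron_tagger
import nltk

def tag_sign_words(sign_words):
    # Attach a part-of-speech tag to each word recognized in Phase I.
    return nltk.pos_tag(sign_words)

if __name__ == "__main__":
    # Hypothetical Phase I output: a bare sequence of sign glosses.
    recognized = ["I", "go", "school"]
    print(tag_sign_words(recognized))
    # Output resembles [('I', 'PRP'), ('go', 'VBP'), ('school', 'NN')];
    # a grammar-based parser (the paper names an LALR parser) would then
    # frame a sentence such as "I go to school" from the tagged words.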

Figure 1. General workflow of the system

Phase I: Identification of Respective Words from Tactile Sign Language

Much work has already been done on sign-language detection and conversion using image-processing techniques combined with machine intelligence, but those systems only identify individual words. Phase I follows the steps below (a tracking sketch appears after the list):

  1. Tactile language video as input
  2. Framing
  3. Frame segmentation
  4. Tracking
  5. Feature retrieval
  6. Classification and recognition of the sign
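
The abstract names CAMSHIFT with a P2DHMM and a Haar cascade classifier for steps 4-6, but the preview gives no code. The OpenCV sketch below is only an illustration of the tracking and detection steps under stated assumptions: the file names input.avi and haar_hand.xml, the initial hand window, and the hue histogram used for back-projection are hypothetical, and the P2DHMM stage is omitted.

# Illustrative OpenCV sketch of the tracking and recognition steps (4-6).
# Assumptions: 'input.avi' is the captured clip, 'haar_hand.xml' is a locally
# available hand cascade, and the initial hand window is known.
import cv2

cap = cv2.VideoCapture("input.avi")
hand_cascade = cv2.CascadeClassifier("haar_hand.xml")  # assumed cascade file

track_window = (300, 200, 100, 100)  # assumed initial (x, y, w, h) of the hand

ok, frame = cap.read()
x, y, w, h = track_window
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([roi], [0], None, [180], [0, 180])  # hue histogram
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CAMSHIFT shifts and resizes the search window around the tracked hand.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    # Haar cascade detection of the hand in the current frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = hand_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    # Feature retrieval and P2DHMM-based classification would follow here.

cap.release()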

Tactile Language Video Input

A high-resolution 2D camera can be used to capture a sufficiently clear video, although a 2D camera cannot handle the problem of overlapping (occluded) hand portions. The captured video is given as input to the system; its properties are listed below, followed by a framing sketch.

  • Input video type: .avi

  • Video size: 7.49 MB

  • Video length: 5 sec

  • Frame width: 800 px

  • Frame height: 500 px
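
As a minimal sketch of the video-input and framing steps (assuming the clip above is saved locally as input.avi, a hypothetical name), the OpenCV snippet below opens the file, reports its properties, and writes each frame to disk for later segmentation and tracking.

# Minimal sketch of the video-input and framing steps (illustrative only).
# Assumes the captured clip is saved locally as 'input.avi' (hypothetical name).
import cv2

cap = cv2.VideoCapture("input.avi")
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # expected 800 px
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # expected 500 px
fps = cap.get(cv2.CAP_PROP_FPS)
print(f"Input video: {width}x{height} at {fps:.1f} fps")

frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:                      # end of the clip
        break
    # Write each frame to disk so later stages can segment and track the hands.
    cv2.imwrite(f"frame_{frame_index:04d}.png", frame)
    frame_index += 1

cap.release()
print(f"Extracted {frame_index} frames")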
