Generating Window of Sign Languages on ITU J.200-Based Middlewares

Felipe Lacet Silva Ferreira, Tiago Maritan Ugulino de Araújo, Felipe Hermínio Lemos, Gutenberg Pessoa Botelho Neto, José Ivan Bezerra Vilarouca Filho, Guido Lemos de Souza Filho
DOI: 10.4018/jmdem.2012040102

Abstract

Sign languages are natural languages used by deaf people to communicate. Currently, the use of sign language on TV is still limited to a window with a sign language interpreter shown over the original video program. This approach has some problems, such as high operational costs and the need for a full-time interpreter. Some works in the scientific literature propose solutions to this problem, but gaps remain to be addressed. In this paper, the authors propose a solution to provide support for sign language in middlewares compatible with the ITU J.200 specification. The solution allows sign language content to be signed by 3D avatars when human interpreters are not available. As a case study for the proposed solution, a prototype was implemented using Ginga, the Brazilian DTV middleware, which is compliant with ITU J.200. Tests with Brazilian deaf users were also performed to evaluate the proposal.

1. Introduction

Digital television (DTV) allows interactive applications to be transmitted along with TV programs. This feature enables the extension of the system's functions and, therefore, the development of new scenarios and applications for this technology. It allows, for example, interactive TV applications to be developed for different types of special needs, opening a new range of possibilities for people with disabilities. However, in practice, few works have explored these possibilities in the DTV context (Araújo, Souza Filho, Tavares, & Souza, 2009; Amorim, Assad, Lóscio, Ferraz, & Meira, 2010; Kaneko, Hamaguchi, Doke, & Inoue, 2010).

Currently, support for sign languages (the primary means of communication of the deaf) on DTV is, in general, still limited to manual approaches, in which a window with a sign language interpreter is displayed over the video program. According to Brito and Pereira (2009), this solution may be appropriate when full-time interpreters are available and the content is broadcast live. In other situations, for example, when the main video can be changed or edited, it is not adequate (Brito & Pereira, 2009). In that case, two sequences of signs cannot be aligned and merged, because it would be necessary to record the video with the same interpreter under the same conditions (Elliott, Glauert, & Kennaway, 2004). In addition, producing high-quality sign language videos is resource intensive and requires high-capacity data transmission. Finally, this model of transmission is inconvenient in analog TV systems, since hearing viewers cannot disable the sign language window.

Some works try to address these limitations by providing solutions to support sign language content on DTV (Amorim et al., 2010; Araújo et al., 2009; Kaneko et al., 2010). Amorim et al. (2010), for example, proposed a solution called RybenáTV, which adapts the Rybená tool (http://www.rybena.com.br/default/index.jsp) to DTV systems. It translates Brazilian Portuguese texts, extracted from closed captions, into Brazilian Sign Language (LIBRAS). Kaneko et al. (2010) define a text-based computer language (TVML) to generate graphic animations for TV. Araújo et al. (2009) proposed a solution for the automatic generation of sign language windows in digital television systems using automatic translation techniques.

To improve the way deaf people access information, in this paper we propose a solution to provide support for sign language in middlewares of DTV systems compatible with the ITU J.200 specification (2010), an ITU-T (ITU Telecommunication Standardization Sector) standard developed to promote harmonization between the application environments of different DTV systems. This work builds on the architecture previously proposed by Araújo et al. (2009). In addition to the architectural changes presented in Section 5, two major improvements were made. First, since Araújo et al. (2009) do not define the requirements, features, and APIs necessary to develop interactive DTV applications that support sign language content, it is not possible to standardize the development of interoperable applications for different DTV systems; the solution described in this work addresses this requirement and defines those features and the corresponding set of APIs. Second, the proposed solution includes a signaling protocol for transmitting sign language content from the TV station to the DTV receiver. An important characteristic of our proposal is that it relies only on functionalities, components, and APIs already defined in ITU J.200 and in the derived ITU J.201 (2009) and ITU J.202 (2010) specifications. Thus, it is not necessary to create or adapt any API or feature to provide support for sign languages, which makes the development of interoperable sign language applications possible.
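For illustration only, the sketch below outlines how an interactive application running on the procedural (Java) application environment harmonized by ITU J.202 could host the sign language window. The Xlet life-cycle interface (javax.tv.xlet) shown here comes from JavaTV, which is commonly used in the Java-based DTV environments that J.202 harmonizes; SignWindow and SignLanguageMonitor are hypothetical names standing in for the rendering component and the listener of the signaling protocol, and are not APIs defined by ITU J.200 or by this paper.

import javax.tv.xlet.Xlet;
import javax.tv.xlet.XletContext;
import javax.tv.xlet.XletStateChangeException;

// Hypothetical helpers: a component that renders the sign language window
// (interpreter video or 3D avatar) and a watcher of the signaling protocol
// that announces sign language content in the broadcast stream.
interface SignWindow { void show(); void hide(); void dispose(); }
interface SignLanguageMonitor { void start(SignWindow window); void stop(); }

public abstract class SignLanguageXlet implements Xlet {

    private XletContext context;
    private SignWindow window;            // assumed renderer for the sign window
    private SignLanguageMonitor monitor;  // assumed watcher of the signaling protocol

    public void initXlet(XletContext ctx) throws XletStateChangeException {
        this.context = ctx;
        // In a real receiver these would be bound to middleware services.
        this.window = createWindow();
        this.monitor = createMonitor();
    }

    public void startXlet() throws XletStateChangeException {
        // When the signaling protocol announces sign language content,
        // the monitor asks the window to present it.
        monitor.start(window);
    }

    public void pauseXlet() {
        monitor.stop();
        window.hide();
    }

    public void destroyXlet(boolean unconditional) throws XletStateChangeException {
        monitor.stop();
        window.dispose();
        context.notifyDestroyed();
    }

    // Placeholders: a receiver-specific subclass would supply the concrete bindings.
    protected abstract SignWindow createWindow();
    protected abstract SignLanguageMonitor createMonitor();
}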

Finally, the solution also allows the use of human interpreters, but we propose that the sign language content be generated by 3D avatars when human interpreters are not available.
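As a minimal sketch of that fallback policy, and not of the paper's actual components, the fragment below assumes the receiver can tell whether an interpreter video accompanies the program; SignLanguageContent, VideoOverlay, and AvatarEngine are hypothetical types introduced only to illustrate the decision.

// Hypothetical types: a descriptor of the received sign language content,
// an overlay that plays an interpreter video, and a 3D-avatar renderer.
interface SignLanguageContent {
    boolean hasInterpreterVideo();
    String interpreterVideoLocator();   // e.g., a locator for the broadcast video
    String[] glossSequence();           // sign language glosses to be animated
}
interface VideoOverlay { void play(String locator); }
interface AvatarEngine { void animate(String[] glosses); }

// Chooses between the human interpreter's video and avatar synthesis.
public final class SignWindowSelector {

    private final VideoOverlay videoWindow;
    private final AvatarEngine avatar;

    public SignWindowSelector(VideoOverlay videoWindow, AvatarEngine avatar) {
        this.videoWindow = videoWindow;
        this.avatar = avatar;
    }

    public void present(SignLanguageContent content) {
        if (content.hasInterpreterVideo()) {
            // Prefer the human interpreter whenever the broadcaster provides one.
            videoWindow.play(content.interpreterVideoLocator());
        } else {
            // Otherwise the receiver synthesizes the signs with a 3D avatar.
            avatar.animate(content.glossSequence());
        }
    }
}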
