Primary Research on Arabic Visemes, Analysis in Space, and Frequency Domain

Fatma Zohra Chelali (Houari Boumedienne University of Sciences and Technologies, Algeria) and Amar Djeradi (Houari Boumedienne University of Sciences and Technologies, Algeria)
DOI: 10.4018/978-1-4666-2163-3.ch020

Abstract

Visemes are the distinct facial positions required to produce phonemes, the smallest phonetic units distinguished by the speakers of a particular language. Each language has multiple phonemes and visemes, and each viseme can correspond to several phonemes. Indeed, the current literature on viseme research indicates that the mapping between phonemes and visemes is many-to-one: many phonemes look alike visually and hence fall into the same visemic category. To evaluate the performance of the proposed method, the authors collected a large number of audiovisual speech recordings of five Algerian speakers, male and female, pronouncing the 28 Arabic phonemes at different moments. For each frame, the lip area is manually located with a 120×160 rectangle centred on the mouth and converted to grayscale. Finally, the mean and the standard deviation of the pixel values of the lip area are computed over 20 images per phoneme sequence to classify the visemes. Pitch analysis is also investigated to show its variation for each viseme.
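The lip-region statistics described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the 20 lip crops per phoneme are already available as 120×160 grayscale arrays (here filled with synthetic random values) and simply pools the mean and standard deviation over all pixels of the sequence.

```python
import numpy as np

def lip_features(frames):
    """Mean and standard deviation of lip-region pixel intensities
    over a phoneme sequence of grayscale frames (120x160 crops)."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(), stack.std()

# Synthetic frames standing in for the 20 images per phoneme
# described in the chapter (real input would be the manually
# located, grayscale-converted lip crops).
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
          for _ in range(20)]
mu, sigma = lip_features(frames)
```

The resulting (mean, std) pair gives one two-dimensional feature per phoneme sequence, which can then be compared across phonemes to group visually similar ones into viseme classes.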

Introduction

Speech comprises a mixture of audio frequencies, and every speech sound belongs to one of two main classes: vowels and consonants. Both belong to the basic linguistic units known as phonemes, which can be mapped to visible mouth shapes called visemes. Visemes and phonemes can be used as the basic units of visible articulatory mouth shapes (Waters et al., 1993). A phoneme is an abstract representation of a sound, and the set of phonemes in a language is defined as the minimum number of symbols required to represent every word in that language (Breen et al., 1996).

The set of visemes in a language is often defined as the number of visibly different phonemes in that language. A simple definition is that a viseme can be generated from a set of archetypal sounds in a language based on the phonemes of that language (Breen et al., 1996). Möttönen et al. (2000) define a viseme set as phoneme realizations that are visually indistinguishable from each other. In lip-reading studies, viseme categories can be defined by clustering the distributions of responses to observed phoneme articulations; these clusters are then used to find phoneme articulations that are perceived as similar. Typically, a cluster is considered a viseme category if the responses falling within the cluster account for at least 70% of the total responses to the phoneme articulations included in it (Möttönen et al., 2000).
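The 70% criterion can be made concrete with a short sketch. The function and the confusion counts below are hypothetical illustrations, not data from the chapter: `response_counts` maps each stimulus phoneme to the tallies of what perceivers reported seeing.

```python
def is_viseme_category(response_counts, cluster):
    """Apply the 70% rule: a candidate cluster of phonemes counts as
    a viseme category if at least 70% of all responses to the
    cluster's phoneme articulations fall within the cluster itself."""
    total = within = 0
    for stimulus in cluster:
        for response, count in response_counts[stimulus].items():
            total += count
            if response in cluster:
                within += count
    return total > 0 and within / total >= 0.70

# Hypothetical lip-reading confusion counts for three bilabials.
responses = {
    "b": {"b": 5, "p": 3, "m": 1, "f": 1},
    "p": {"p": 6, "b": 3, "f": 1},
    "m": {"m": 8, "b": 1, "v": 1},
}
ok = is_viseme_category(responses, {"b", "p", "m"})  # 27/30 = 90%
```

Here /b/, /p/, and /m/ are frequently confused with one another, so the within-cluster proportion (27 of 30 responses) clears the 70% threshold and the triple qualifies as one viseme category.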

To date, there has been no precise definition of the term viseme, but in general it has come to refer to a speech segment that is visually contrastive with another (Möttönen et al., 2000; Ezzat et al., 1999). Ezzat et al. (1999), for example, define a viseme as a static lip-shape image that is visually contrastive with another.

It is also important to point out that the map from phonemes to visemes is also one-to-many: the same phoneme can have many different visual forms. This phenomenon is termed coarticulation, and it occurs because the neighboring phonemic context in which a sound is uttered influences the lip shape for that sound (Ezzat et al., 2000).

Conversely, the current literature on viseme research indicates that the mapping between phonemes and visemes is many-to-one: many phonemes look alike visually and hence fall into the same visemic category. This is particularly true, for example, where two sounds are identical in manner and place of articulation but differ only in voicing (Ezzat et al., 2000).
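A many-to-one phoneme-to-viseme table can be pictured as a simple lookup. The class names and groupings below are a hypothetical English-oriented illustration (not the Arabic viseme set developed in this chapter): voiced/voiceless pairs sharing place and manner of articulation collapse to one viseme.

```python
# Hypothetical mapping; /p/, /b/, /m/ differ in voicing and nasality
# but present the same closed-lips appearance, so they share a viseme.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar",
    "k": "velar", "g": "velar",
}

def viseme_of(phoneme):
    """Return the viseme class for a phoneme, or None if unmapped."""
    return PHONEME_TO_VISEME.get(phoneme)
```

Because the map is many-to-one, distinct phonemes such as /f/ and /v/ return the same class, which is exactly why a lip reader cannot tell them apart without acoustic or contextual cues.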

Several viseme standards exist (Tiddeman et al., 2002), but they were developed for the English language. Since each language comprises a different phonetic set of sounds, visemes must be identified separately for each language. Möttönen et al. (2000), for example, developed a Finnish talking head, for which Finnish visemes had to be identified for each Finnish phoneme (Rafay et al., 2003; Ezzat et al., 1999).

Bastanfard et al. (2009) proposed a novel image-based method for grouping visemes in the Persian language that takes the coarticulation effect into account. In their work, visemes can be recognized in two ways: model-based and image-based. In the model-based approach, the lip is described by a set of parameters, with various geometrical factors specifying the lip shape; this approach has low accuracy and high execution time, and its iterative fitting may converge to a local minimum. In the image-based approach, lip images are used directly, and all processing is performed on the lip pixels (Bastanfard et al., 2009).

In addition, Bastanfard et al. (2009) investigate the coarticulation effect and the position of the phoneme within the syllable, using a new image-based approach for Persian viseme extraction and classification. The focus is on CV and CVC (consonant-vowel and consonant-vowel-consonant) combinations in Persian (Bastanfard et al., 2009).

Abou Zliekha et al. (2006) present an emotional audio-visual text-to-speech system for the Arabic language. The system is based on two components: an emotional audio text-to-speech system, which generates speech according to the input text and the desired emotion type, and an emotional visual model, which generates the talking head by forming the corresponding visemes. For their Arabic audio-visual speech synthesis system, they built an inventory of Arabic viseme classes and developed a phoneme-viseme mapping; thirteen Arabic visemes were identified (Abou Zliekha et al., 2006).
