Lip Motion Features for Biometric Person Recognition

Maycel Isaac Faraj (Halmstad University, Sweden) and Josef Bigun (Halmstad University, Sweden)
Copyright: © 2009 | Pages: 38
DOI: 10.4018/978-1-60566-186-5.ch017


The present chapter reports on the use of lip motion as a stand-alone biometric modality, as well as a modality integrated with audio speech for identity recognition, using digit recognition as a support. First, the authors estimate motion vectors from images of lip movements. The motion is modeled as the distribution of apparent line velocities in the movement of brightness patterns in the image. Then, they construct compact lip-motion features from the regional statistics of the local velocities. These features can be used alone or merged with audio features to recognize identity or the uttered digit. The authors present person recognition results on the XM2VTS database, which contains video and audio data of 295 people. Furthermore, they present results on digit recognition when it is used in a text-prompted mode to verify the liveness of the user. Such user challenges are intended to reduce the risk of replay attacks on the audio system.
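The pipeline described in the abstract (estimate local velocities from brightness patterns, then summarize them by regional statistics) can be illustrated with a minimal sketch. This is not the authors' exact method: it uses the normal-flow approximation from the brightness-constancy constraint and an assumed 3×3 grid of mouth-region cells, both chosen here only for illustration.

```python
import numpy as np

def normal_flow(frame1, frame2, eps=1e-6):
    """Estimate normal (line) velocities between two consecutive
    grayscale frames from the brightness-constancy constraint:
    the velocity component along the local brightness gradient is
    v_n = -I_t / |grad I|."""
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    gy, gx = np.gradient(f1)          # spatial gradients (central differences)
    gt = f2 - f1                      # temporal gradient
    grad_mag = np.sqrt(gx**2 + gy**2)
    return -gt / (grad_mag + eps)     # normal velocity per pixel

def regional_stats(v, grid=(3, 3)):
    """Compact lip-motion feature vector: mean and standard deviation
    of the local velocities in each cell of a grid over the mouth
    region (grid size is an assumption for this sketch)."""
    h, w = v.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = v[i * h // grid[0]:(i + 1) * h // grid[0],
                     j * w // grid[1]:(j + 1) * w // grid[1]]
            feats.extend([cell.mean(), cell.std()])
    return np.array(feats)
```

For a 3×3 grid this yields an 18-dimensional feature vector per frame pair; such vectors can then be concatenated with audio features for joint classification, along the lines the chapter describes.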
Chapter Preview


In speech recognition, two widely used terms are phoneme and viseme. The former is the basic linguistic unit, and the latter is the visually distinguishable speech unit (Luettin (1979)).1 Whereas the use of visemes has been prompted by machine recognition studies, and is therefore still in its early stages, the idea of phonemes is old. The science of phonetics has, for example, long played a major role in human language studies. Consonant letters complemented with vowels are approximations of phonemes, and the alphabet belongs to the greatest inventions of humanity.
