3D Lip Shape SPH Based Evolution Using Prior 2D Dynamic Lip Features Extraction and Static 3D Lip Measurements

Alfonso Gastelum, Patrice Delmas, Jorge Marquez
Copyright © 2009 | Pages: 26
DOI: 10.4018/978-1-60566-186-5.ch007


This chapter describes a new user-specific 2D-to-3D lip animation technique. 2D lip contour positions and the corresponding motion information are provided by a 2D lip contour extraction algorithm. Static face measurements are obtained from 3D scanners or stereovision systems. The data are combined to generate an initial subject-dependent 3D lip surface. The 3D lips are then modelled as a set of particles whose dynamic behaviour is governed by Smoothed Particle Hydrodynamics (SPH). A set of forces derived from an ellipsoidal muscle encircling the lips simulates the muscles controlling lip motion. The 3D lip model comprises more than 300 surface voxels and more than 1300 internal particles. The advantage of the particle system is that it can represent a more complex structure than previously introduced surface models.
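The particle dynamics described above can be illustrated with a minimal SPH sketch. This is not the chapter's implementation: the kernel choice (poly6 density, spiky pressure gradient), the simple linear equation of state, and the single external "muscle" force vector are all illustrative assumptions standing in for the ellipsoidal-muscle force field.

```python
import numpy as np

def poly6_kernel(r, h):
    """Poly6 smoothing kernel; zero beyond the support radius h (an assumed kernel choice)."""
    w = np.zeros_like(r)
    mask = r < h
    w[mask] = (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r[mask]**2) ** 3
    return w

def sph_densities(positions, mass, h):
    """Estimate each particle's density as a kernel-weighted sum over all particles."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return mass * poly6_kernel(dists, h).sum(axis=1)

def sph_step(positions, velocities, mass, h, k, rho0, ext_force, dt):
    """One explicit Euler SPH step: pressure forces plus an external (muscle-like) force."""
    rho = sph_densities(positions, mass, h)
    p = k * (rho - rho0)  # simple linear equation of state (illustrative)
    n = len(positions)
    acc = np.tile(ext_force, (n, 1)).astype(float)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = positions[i] - positions[j]
            d = np.linalg.norm(rij)
            if 1e-9 < d < h:
                # spiky-kernel gradient, the usual choice for SPH pressure forces
                grad = -(45.0 / (np.pi * h**6)) * (h - d) ** 2 * (rij / d)
                acc[i] += -mass * (p[i] + p[j]) / (2.0 * rho[j]) * grad / rho[i]
    velocities = velocities + dt * acc
    positions = positions + dt * velocities
    return positions, velocities
```

With a positive pressure (density above the rest density rho0), nearby particles repel, which is the mechanism that lets the internal particles resist compression while the muscle force deforms the lip surface.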
Chapter Preview


There has been a wealth of research and publications dealing with static and dynamic 2D lip-region study, extraction, and analysis over the last three decades, but only a limited set of publications deals with direct (without any a priori assumptions about the expected surfaces) 3D lip information extraction. Research has concentrated on dynamic 2D lip parameter extraction for model-based 3D animation, or on static 3D lip-information extraction.

Recently, the movie industry has developed cumbersome dynamic systems to, partially or fully, recover fine (in both resolution and accuracy) 3D information from the face. These systems are essentially marker based and use pattern projection, infra-red sensitive dyes, or a combination of both. Data acquisition relies on large arrays of cameras operating in parallel, and the data is extensively processed for computer-graphics reconstruction. This requires specialist hardware and has no practical applications outside character generation and animation for motion pictures.

Ahlberg (2001) used the front and side images of a face, and a generic 3D face mesh model (based on the original Candide face model) assuming cylindrical geometry. After a planar projection, a subset of 3D model meshes and corresponding image face features were manually mapped, allowing a complete registration of the 2D face texture onto the 3D face model. Specifically designed models for 3D lips were derived by Basu, Oliver, and Pentland (1998). First, a limited set of 3D lip surface points (painted on the mouth as black dots) was manually extracted from video sequences. Next, the 3D lip surface deformation manifold was restricted to statistically learned shapes via Principal Component Analysis (PCA). Reveret, Borel, and Badin (2000) used 3D lip surfaces interpolated from manually delineated 2D lip contours projected on a 3D torus. The allowed motion was restricted to deformations statistically learned (again using PCA) from a group of 23 French visemes.
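The PCA-based restriction used in both works above can be sketched as follows. This is a generic linear shape-space projection, not the authors' code: the number of modes, the random training data in the usage note, and the function names are all illustrative assumptions.

```python
import numpy as np

def learn_shape_space(shapes, n_modes):
    """Learn a linear shape space from training shapes (one flattened point set per row)."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # principal deformation modes = right singular vectors of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

def project_to_shape_space(shape, mean, modes):
    """Constrain an arbitrary shape to the learned manifold by projecting onto the modes."""
    coeffs = modes @ (shape - mean)
    return mean + modes.T @ coeffs
```

Restricting tracked contours to such a space is what keeps the recovered 3D lip shapes plausible even when the image measurements are noisy; for example, a space learned from 23 viseme shapes (as in Reveret et al.) would only admit deformations seen in that training set.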

Zhang, Liu, Adler, Cohen, Hanson, & Shan (2004) used a calibrated web-camera and structure from motion principles to semi-automatically (the user must manually select a small set of face feature points in two consecutive frames) create a low resolution textured 3D model of the human subject detected in the scene. Readers seriously interested in 3D face modelling analysis and synthesis should read Wen and Huang’s book (2004).
