Quasi-Facial Communication for Online Learning Using 3D Modeling Techniques

Wang Yushun, Zhuang Yueting
DOI: 10.4018/978-1-60566-934-2.ch012

Abstract

Online interaction with 3D facial animation offers an alternative to face-to-face communication in distance education, and 3D facial modeling is essential for establishing virtual educational environments. This article presents a novel 3D facial modeling solution that facilitates quasi-facial communication for online learning. Our algorithm builds 3D facial models from a single image, with the support of a 3D face database. First, we extract a set of feature points from the image and use them to automatically estimate the head pose parameters, taking the 3D mean face in our database as a reference model. After pose recovery, a similarity measurement function locates the neighborhood of the given image in the 3D face database; the scope of this neighborhood is determined adaptively by our cross-validation algorithm. The individual 3D shape is then synthesized by neighborhood interpolation, and texture mapping is performed based on the feature points. Experimental results show that our algorithm robustly produces 3D facial models from images captured in various scenarios, enhancing lifelikeness in distance learning.
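The neighborhood-interpolation step described above can be illustrated with a minimal sketch. The chapter does not give the exact similarity function or weighting scheme, so the Gaussian feature-space weights, the function name `interpolate_shape`, and its parameters below are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def interpolate_shape(query_feats, neighbor_feats, neighbor_shapes, sigma=1.0):
    """Hypothetical sketch of neighborhood interpolation.

    query_feats:     (d,)      feature vector extracted from the input image
    neighbor_feats:  (k, d)    feature vectors of the k database neighbors
    neighbor_shapes: (k, n, 3) 3D vertex coordinates of the k neighbor faces
    sigma:           assumed Gaussian bandwidth for similarity weighting
    """
    # Feature-space distance between the query and each neighbor.
    dists = np.linalg.norm(neighbor_feats - query_feats, axis=1)
    # Gaussian similarity weights, normalized to a convex combination.
    weights = np.exp(-dists**2 / (2.0 * sigma**2))
    weights /= weights.sum()
    # Synthesized shape: weighted blend of the neighbors' vertices.
    return np.tensordot(weights, neighbor_shapes, axes=1)
```

With this weighting, a query that closely matches one database entry reproduces that entry's shape, while a query between several entries yields a blend, which is the behavior the neighborhood-interpolation step relies on.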
