Depth Maps and Deep Learning for Facial Analysis

Paulo C. Brito, Elizabeth S. Carvalho
DOI: 10.4018/978-1-7998-0414-7.ch057


Gathering and analyzing multi-modal sensor data of human faces is an important problem in computer vision, with applications in investigation, entertainment, and security. However, due to the demanding nature of the problem, there is a lack of affordable, easy-to-use systems that offer real-time operation, annotation capability, 3D analysis, replay capability, and a frame rate high enough to detect facial patterns in working environments. In the context of an ongoing effort to develop tools that support monitoring and evaluating the human affective state in working environments, the authors investigate the applicability of a facial analysis approach to map and evaluate human facial patterns. The challenge is to interpret this multi-modal sensor data and classify it with deep learning algorithms while fulfilling the following requirements: annotation capability, 3D analysis, and replay capability. In addition, the authors want to continuously improve the system's output through a training process, in order to evaluate and refine different patterns of the human face.
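To make the pipeline described above concrete, the following is a minimal sketch of how a depth map (a 2D array of per-pixel distances, e.g. from an RGB-D sensor) might be normalized and fed to a classifier that outputs facial-pattern class probabilities. The array sizes, class count, and the single-layer softmax model are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_depth(depth):
    """Scale a raw depth map to [0, 1], ignoring invalid (zero) pixels."""
    valid = depth > 0
    lo, hi = depth[valid].min(), depth[valid].max()
    out = np.zeros_like(depth, dtype=float)
    out[valid] = (depth[valid] - lo) / (hi - lo)
    return out

def classify(depth, weights, bias):
    """Single-layer softmax classifier over a flattened depth map.

    A real system would use a deep network; one linear layer keeps the
    sketch self-contained while showing the same input/output contract.
    """
    x = normalize_depth(depth).ravel()
    logits = x @ weights + bias
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Toy 8x8 depth map (values in millimetres) and randomly initialised
# weights for 3 hypothetical facial-pattern classes.
depth = rng.uniform(400, 900, size=(8, 8))
weights = rng.normal(scale=0.01, size=(64, 3))
bias = np.zeros(3)

probs = classify(depth, weights, bias)
print(probs)
```

In a trained system the weights would be learned from annotated recordings, which is where the chapter's annotation and replay requirements come in: annotated sessions supply labels, and replay lets the training set be revisited and corrected.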
Chapter Preview


Affective computing (Lin, Pan, Wang, Lv & Sun, 2010) is the study and development of systems and devices that can recognize, interpret, process, and simulate human emotions. A key motivation for research in this area is the ability to simulate empathy: the system should infer a person's emotional state and adapt its behavior accordingly.

The way people engage in an activity has been studied from several perspectives in HCI and psychology. The term "engagement" denotes attentional and emotional involvement with a task. Engagement is not stable; it fluctuates throughout an interaction experience.
