Appraisal Inference from Synthetic Facial Expressions


Ilaria Sergi (Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland), Chiara Fiorentini (Swiss Center for Affective Sciences, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland), Stéphanie Trznadel (Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland) and Klaus R. Scherer (Swiss Center for Affective Sciences, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland)
Copyright: © 2016 |Pages: 17
DOI: 10.4018/IJSE.2016070103


Facial expression research largely relies on forced-choice paradigms that ask observers to choose a label describing the emotion expressed, assuming a categorical encoding and decoding process. In contrast, appraisal theories of emotion suggest that cognitive appraisal of a situation and the resulting action tendencies determine facial actions in a complex cumulative and sequential process. It is plausible to assume that, in consequence, the expression recognition process is driven by the inference of appraisal configurations that can then be interpreted as discrete emotions. To obtain first evidence with realistic but well-controlled stimuli, theory-guided systematic facial synthesis of action units in avatar faces was used, asking judges to rate 42 combinations of facial actions (action units) on 9 appraisal dimensions. The results support the view that emotion recognition from facial expression is largely mediated by appraisal-action tendency inferences rather than by direct categorical judgment. Implications for affective computing are discussed.
Article Preview


In the field of affective computing there is a strong emphasis on automatic recognition of emotion via facial expression (Gunes, 2010; Valstar, Mehu, Jiang, Pantic, & Scherer, 2012). This mirrors a long-standing interest in psychological work on the recognition of emotion, mostly in the face but also in voice, gestures, or movement. In general, actor portrayals of prototypical emotional expressions are used for recognition studies with lay observer-raters. The analyses are mostly performed by determining the percentage accuracy and/or computing confusion matrices for discrete basic emotions identified with emotion words such as sad, angry, fearful, or joyful (or, in some cases, using a dimensional approach in terms of positive vs. negative valence or low vs. high arousal of the respective expression). In this article, a different approach is proposed, arguing that the emotion recognition process may be quite different from a simple matching of faces with labels. Rather, it is suggested that it consists of a process of inference based on lower-level information. Concretely, the argument is that observers first infer the expressor's appraisal (i.e., the evaluation of the significance of a given event and the potential for action) and then use this information to deduce the likelihood of particular emotions or emotion blends. The study reported here represents a first step in validating this notion, seeking to obtain evidence that judges can reliably infer appraisals from facial action patterns systematically produced via synthesis in avatar faces. The results of this work provide important information on the nature of the underlying psychological processes as well as for attempts to develop automatic recognition and classification algorithms.
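To make the conventional scoring procedure mentioned above concrete, the following is a minimal sketch of how percentage accuracy and a confusion matrix are computed in a forced-choice recognition study. All labels and trial data here are invented for illustration; the study reported in this article collects appraisal ratings rather than this kind of categorical score.

```python
from collections import Counter

# Hypothetical emotion categories for a forced-choice paradigm.
EMOTIONS = ["sad", "angry", "fearful", "joyful"]

def accuracy(intended, judged):
    """Percentage of trials where the chosen label matches the intended emotion."""
    hits = sum(t == j for t, j in zip(intended, judged))
    return 100.0 * hits / len(intended)

def confusion_matrix(intended, judged, categories):
    """For each intended emotion, count how often observers chose each label."""
    matrix = {c: Counter() for c in categories}
    for truth, choice in zip(intended, judged):
        matrix[truth][choice] += 1
    return matrix

# Invented toy data: six portrayals and the labels observers picked.
intended = ["sad", "sad", "angry", "angry", "fearful", "joyful"]
judged   = ["sad", "fearful", "angry", "angry", "fearful", "joyful"]

print(round(accuracy(intended, judged), 1))  # 83.3 (5 of 6 trials correct)
cm = confusion_matrix(intended, judged, EMOTIONS)
print(dict(cm["sad"]))  # {'sad': 1, 'fearful': 1} -- one sad portrayal misread as fear
```

The off-diagonal counts (here, "sad" judged as "fearful") are exactly the confusions such matrices are designed to reveal; the appraisal-inference account discussed in this article would instead ask which appraisal dimensions observers rated for each action-unit combination.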
