An Interoperable Framework for Computational Models of Emotion


Enrique Osuna, Sergio Castellanos, Jonathan Hernando Rosales, Luis-Felipe Rodríguez
DOI: 10.4018/IJCINI.296257

Abstract

Computational models of emotion (CMEs) are software systems designed to emulate specific aspects of the human emotion process. The components of a CME interact with cognitive components of cognitive agent architectures to produce realistic behaviors in intelligent agents. However, in contemporary CMEs, the interaction between affective and cognitive components occurs in an ad hoc manner, which leads to difficulties when new affective or cognitive components must be added to the CME. This paper presents a framework that facilitates taking into account, in CMEs, the cognitive information generated by the cognitive components implemented in cognitive agent architectures. The framework is designed to allow researchers to define how cognitive information biases the internal workings of affective components. It is inspired by software interoperability practices that enable the communication and interpretation of cognitive information and standardize the cognitive-affective communication process by ensuring semantic communication channels used to modulate the affective mechanisms of CMEs.

Introduction

Computational models of emotion (CMEs) are software systems designed to imitate some aspects of the process of human emotions (Castellanos & Rodriguez, 2018). This type of computational model is usually developed to be included in the cognitive architecture of virtual agents so that these intelligent systems are capable of exhibiting affective behaviors in specific application domains (Caro et al., 2019; Rath et al., 2021). In general, CMEs are designed and implemented to provide virtual agents with mechanisms for evaluating stimuli, eliciting synthetic emotions, and generating emotional behaviors (Huang et al., 2017; Rodríguez & Ramos, 2014). It is common practice for the internal mechanisms of CMEs to be inspired by theories of human emotion originating in areas such as psychology and neuroscience. Thus, the development process of CMEs is supported by both theoretical and computational aspects. First, emotion theory provides explanations of the workings of human emotions that serve as guidelines for the design of the internal mechanisms, processes, phases, and architectures, among other elements of CMEs. Second, computational artifacts and practices from areas such as software engineering are used to obtain a working software implementation of such a model of human emotion and to ensure its correct technical functioning. The development process of contemporary CMEs reported in the literature generally follows the procedure depicted in Figure 1, which reflects the effort of researchers to obtain requirements from emotion theories and to generate a functional model (Rodríguez & Ramos, 2014).

Figure 1. Development process of CMEs

According to emotion theory, the underlying mechanisms of emotion processing are largely influenced by cognitive information resulting from cognitive functions such as attention, as well as by psychological constructs (e.g., an individual's personality and culture) (Jain & Asawa, 2015; Jha et al., 2013; Rath et al., 2021). Based on this evidence, the components of a CME should be designed so that cognitive information produced by components in cognitive agent architectures is taken into account. It is assumed that this strategy leads to a closer imitation of the human emotion process and ultimately allows the virtual agent to exhibit more realistic affective behavior (Jha et al., 2013; Xie et al., 2012; Yalcin & Dipaola, 2018). From a software system perspective, affective and cognitive components must therefore interact with each other to generate realistic emotions and, in turn, these emotions influence the functioning of cognitive processes such as the agent's decision-making and planning (Gavirangaswamy et al., 2019; Tieck et al., 2019). Nevertheless, this cognitive-affective relationship becomes highly complex, since sharing information between cognitive and affective components that may have been developed independently poses an important technical challenge. For instance, enabling data exchange between affective components in CMEs and cognitive components in cognitive agent architectures is not enough; it is also necessary to resolve semantic issues so that the exchanged data can be interpreted accurately.
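To make this idea concrete, the following minimal sketch illustrates one possible way such a semantically grounded exchange could work: a cognitive component publishes data labeled with a semantic tag shared by both sides, and a channel translates that data into a modulation of an affective component's internal parameters. All names used here (CognitiveMessage, SemanticChannel, appraisal_gain, the "attention.focus_level" tag) are hypothetical illustrations under assumed semantics, not the API of the framework presented in this article.

# A minimal sketch (hypothetical names, not the authors' API) of a semantically
# tagged channel between a cognitive component and an affective component.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class CognitiveMessage:
    """Datum produced by a cognitive component, labeled with a shared semantic tag."""
    tag: str      # e.g., "attention.focus_level", agreed upon by both sides
    value: float  # normalized payload, assumed here to lie in [0, 1]
    source: str   # name of the producing cognitive component


@dataclass
class AffectiveComponent:
    """Toy appraisal component whose internal parameters can be modulated."""
    appraisal_gain: float = 1.0

    def appraise(self, stimulus_intensity: float) -> float:
        # Emotion intensity depends on the stimulus and the (modulated) gain.
        return min(1.0, stimulus_intensity * self.appraisal_gain)


class SemanticChannel:
    """Routes cognitive messages to modulation rules registered per semantic tag."""

    def __init__(self, target: AffectiveComponent):
        self.target = target
        self.rules: Dict[str, Callable[[AffectiveComponent, float], None]] = {}

    def register(self, tag: str, rule: Callable[[AffectiveComponent, float], None]) -> None:
        # Researchers define how a given piece of cognitive information biases affect.
        self.rules[tag] = rule

    def deliver(self, message: CognitiveMessage) -> None:
        rule = self.rules.get(message.tag)
        if rule is None:
            return  # unknown tags are ignored rather than misinterpreted
        rule(self.target, message.value)


if __name__ == "__main__":
    affect = AffectiveComponent()
    channel = SemanticChannel(affect)
    channel.register("attention.focus_level",
                     lambda comp, v: setattr(comp, "appraisal_gain", 1.0 + v))
    channel.deliver(CognitiveMessage("attention.focus_level", 0.6, source="attention"))
    print(affect.appraise(0.5))  # 0.8: appraisal biased by the attention signal

In this sketch the shared tag vocabulary plays the role of the semantic communication channel: both components agree on what "attention.focus_level" means, so the affective component can interpret the data without knowing how the cognitive component was implemented.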
