Introduction
Computational models of emotion (CMEs) are software systems designed to imitate some aspects of the process of human emotions (Castellanos & Rodriguez, 2018). This type of computational model is usually developed to be included in the cognitive architecture of virtual agents so that these intelligent systems are capable of exhibiting affective behaviors in specific application domains (Caro et al., 2019; Rath et al., 2021). In general, CMEs are designed and implemented to provide virtual agents with mechanisms for evaluating a stimulus, eliciting synthetic emotions, and generating emotional behaviors (Huang et al., 2017; Rodríguez & Ramos, 2014). It is common practice for the internal mechanisms of CMEs to be inspired by theories of human emotion that originated in areas such as psychology and neuroscience. Thus, the development process of CMEs is supported by both theoretical and computational aspects. First, emotion theory provides explanations of the workings of human emotions that serve as guidelines for the design of the internal mechanisms, processes, phases, and architectures, among other elements of CMEs. Second, computational artifacts and practices from areas such as software engineering are used to produce working software that implements such a model of human emotion and to ensure correct technical functioning. The development process of contemporary CMEs reported in the literature generally follows the procedure depicted in Figure 1, which reflects researchers' efforts to derive requirements from emotion theories and to generate a functional model (Rodríguez & Ramos, 2014).
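The three mechanisms mentioned above (stimulus evaluation, emotion elicitation, and behavior generation) can be illustrated with a minimal sketch. All class and function names below are hypothetical and do not correspond to any CME cited in this article; the appraisal rule is a deliberately simplistic placeholder.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    label: str
    desirability: float  # -1.0 (harmful) .. 1.0 (beneficial); assumed appraisal variable

@dataclass
class Emotion:
    name: str
    intensity: float

class CME:
    """Hypothetical minimal CME: appraise -> elicit -> behave."""

    def appraise(self, stimulus: Stimulus) -> Emotion:
        # Toy appraisal: map the stimulus's desirability onto a valenced emotion.
        if stimulus.desirability >= 0:
            return Emotion("joy", stimulus.desirability)
        return Emotion("distress", -stimulus.desirability)

    def behavior(self, emotion: Emotion) -> str:
        # Toy behavior generation: express the emotion only above a threshold.
        if emotion.intensity > 0.5:
            return f"express {emotion.name}"
        return "neutral"

cme = CME()
emotion = cme.appraise(Stimulus("reward", 0.8))
print(cme.behavior(emotion))  # → express joy
```

In a full CME, each of these steps would be grounded in a specific emotion theory (e.g., an appraisal theory) rather than the ad hoc rules used here.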
Figure 1. Development process of CMEs
According to emotion theory, the underlying mechanisms of emotion processing are largely influenced by cognitive information resulting from cognitive functions such as attention, as well as by psychological constructs (e.g., an individual's personality and culture) (Jain & Asawa, 2015; Jha et al., 2013; Rath et al., 2021). Based on this evidence, the components of a CME must be designed so that cognitive information from components in cognitive agent architectures is taken into account. It is assumed that this strategy leads to closely imitating the process of human emotion and ultimately allows the virtual agent to exhibit highly realistic affective behavior (Jha et al., 2013; Xie et al., 2012; Yalcin & Dipaola, 2018). From a software system perspective, affective and cognitive components must therefore interact with each other in order to generate realistic emotions, and these emotions in turn influence the functioning of cognitive processes such as the agent's decision-making and planning (Gavirangaswamy et al., 2019; Tieck et al., 2019). Nevertheless, this cognitive-affective relationship becomes highly complex, since sharing information between cognitive and affective components that may have been developed independently poses an important technical challenge. For instance, enabling data exchange between affective components in CMEs and cognitive components in cognitive agent architectures is not enough; it is also necessary to resolve semantic issues so that the exchanged data can be interpreted accurately.
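The semantic issue raised above can be made concrete with a small hypothetical example: a cognitive component and a CME developed independently may label the same information differently, so a translation layer is needed before the exchanged data is meaningful to both sides. The event types, field names, and mapping below are illustrative assumptions, not part of any system cited here.

```python
# Output of an assumed cognitive attention component, in its own vocabulary.
cognitive_event = {"type": "threat_detected", "salience": 0.9}

# Mapping from the cognitive component's event types to the appraisal
# vocabulary an assumed CME expects (names are hypothetical).
SEMANTIC_MAP = {
    "threat_detected": {"appraisal_variable": "desirability", "sign": -1},
    "goal_achieved":   {"appraisal_variable": "desirability", "sign": +1},
}

def translate(event: dict) -> dict:
    """Translate a cognitive event into the CME's appraisal vocabulary."""
    entry = SEMANTIC_MAP[event["type"]]
    return {entry["appraisal_variable"]: entry["sign"] * event["salience"]}

print(translate(cognitive_event))  # → {'desirability': -0.9}
```

Without such an explicit mapping, the CME would receive syntactically valid data (a dictionary of numbers) whose meaning it cannot interpret, which is precisely the interoperability problem the text describes.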