Studying the Suitability of Discourse Analysis Methods for Emotion Detection and Interpretation in Computer-Mediated Educational Discourse

Thanasis Daradoumis, Marta María Arguedas Lafuente
DOI: 10.4018/978-1-4666-4426-7.ch006

Abstract

Conversation analysis (CA) and discourse analysis (DA) methods have been widely used to analyse classroom interaction in conventional educational environments and, to some extent, in e-learning environments, paying particular attention to the 'quality' of the discourse and the purposes it serves in its specific context. However, CA and DA methods seem to ignore emotion detection and interpretation when analysing learners' interaction in online environments. Effective regulation of emotion, motivation and cognition in social interaction has been shown to be crucial for achieving problem-solving goals. The aim of this chapter is to provide an in-depth study of the possibility of applying discourse analysis methods in e-learning contexts, with implications for emotion detection, interpretation and regulation. The result of this study shows whether a comprehensive approach that combines DA methodological solutions with constructivist strategies (e.g., cognitive dissonance) for emotion detection and interpretation can be elaborated and applied.

Introduction

According to Ortony et al. (1988), emotions are valenced affective reactions to events, agents or objects. Since emotions play such an essential role in human life, they cannot be left aside when modelling systems that interact with human beings. A new branch named "affective computing" emerged from this line of research in the late 1990s. This branch is in turn divided into two further branches. The first studies the mechanisms for recognising human emotions, or for expressing emotions by means of a computer, in human-machine interaction (Jacques & Vicari, 2007). The second investigates the simulation of emotions in machines (synthetic emotions) with the aim of finding out more about human emotions (Laureano-Cruces, 2006).

Many scientific studies have tried to understand what emotions are and how they arise in human beings. For instance, Fehr and Russell (1984, p. 464) state that everyone knows what an emotion is until asked to give a definition; then, it seems, no one knows. Some of the detection methods used merely classify emotions into categories. The four emotions that appear most frequently in the lists of theorists are fear, anger, sadness and happiness (Ekman & Friesen, 1971; Izard, 1977; Plutchik, 1980). The classification of the emotions defined as primary, that is to say, those not composed of other emotions, varies according to the theory taken as a reference. The list of models and theories that analyse basic emotions is very long. Feidakis et al. (2011), in a preliminary study attempting to classify models and theories of basic emotion, proposed ten basic emotions: anger, happiness, fear, sadness, surprise, disgust, love, anticipation, joy and trust. A minimal sketch of such a categorical approach applied to learners' messages is given below.
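The following Python sketch illustrates what such a discrete, category-based approach might look like when applied to learners' chat or forum messages. The keyword lexicon, the tag_message function and the example message are illustrative assumptions made for this sketch, not an implementation described in the chapter.

```python
from typing import Optional

# The ten basic emotions proposed by Feidakis et al. (2011).
BASIC_EMOTIONS = [
    "anger", "happiness", "fear", "sadness", "surprise",
    "disgust", "love", "anticipation", "joy", "trust",
]

# Illustrative keyword lexicon (assumption: a real system would use a
# validated affective lexicon or an annotated corpus, not this toy list).
LEXICON = {
    "anger":     {"annoyed", "unfair", "furious"},
    "happiness": {"glad", "great", "pleased"},
    "fear":      {"worried", "afraid", "anxious"},
    "sadness":   {"sad", "disappointed", "sorry"},
    "surprise":  {"unexpected", "wow", "surprised"},
}

def tag_message(message: str) -> Optional[str]:
    """Return the first basic emotion whose keywords occur in the message,
    or None if no category matches: a deliberately naive discrete approach."""
    words = set(message.lower().split())
    for emotion in BASIC_EMOTIONS:
        if LEXICON.get(emotion, set()) & words:
            return emotion
    return None

print(tag_message("I am worried we will not finish the group task"))  # -> 'fear'
```

As the chapter notes next, the weakness of such discrete labelling is that the categories stand in no relation to one another.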

According to Ballano (2011), there are also other methods that broaden the number of classes with secondary emotions (Abrilian et al., 2005) or mental states such as "concentrated," "interested," and "thoughtful" (Kapoor et al., 2007). However, all these approaches imply discrete representations with no relation to one another, and they are unable to reflect the wide range of complex emotions a human being can express.

Furthermore, the dimensions of emotions have also been studied. In the literature on learning theories and models, Hascher (2010) identifies the following dimensions: arousal, valence, control, intensity, duration, frequency of occurrence, time dimension, reference point and context. One possible way of recording these dimensions for an emotional episode in an online discussion is sketched below.
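The record below shows one possible encoding of these dimensions for a single emotional episode observed in an online discussion. The field names, value ranges and example categories are assumptions made for this sketch; Hascher (2010) names the dimensions but does not prescribe such an encoding.

```python
from dataclasses import dataclass

@dataclass
class EmotionEpisode:
    """One emotional episode described along the dimensions listed by Hascher (2010)."""
    valence: float        # negative .. positive, e.g. -1.0 to +1.0
    arousal: float        # calm .. highly activated, e.g. 0.0 to 1.0
    control: float        # perceived control over the situation, e.g. 0.0 to 1.0
    intensity: float      # subjective strength of the emotion, e.g. 0.0 to 1.0
    duration_s: float     # how long the episode lasted, in seconds
    frequency: int        # how often it recurred within the session
    time_dimension: str   # e.g. "anticipatory", "concurrent", "retrospective"
    reference_point: str  # e.g. "self", "task", "peer", "tutor"
    context: str          # e.g. "forum debate on a problem-solving task"

# Example: a brief, moderately intense episode of task-related anxiety.
episode = EmotionEpisode(
    valence=-0.6, arousal=0.7, control=0.4, intensity=0.6,
    duration_s=120.0, frequency=2,
    time_dimension="anticipatory", reference_point="task",
    context="forum debate on a problem-solving task",
)
print(episode)
```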

Starting from the dimensions of emotions, Ballano (2011) argues that certain researchers, such as Whissell (1989) and Plutchik (1980), prefer not to view affective states as independent, but rather as interrelated. These authors consider emotions as a bi-dimensional continuous space whose axes represent the evaluation and activation of each emotion.
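This bi-dimensional view can be sketched in code by placing labelled emotions as points in a continuous evaluation x activation plane and relating an observed state to its nearest neighbours. The coordinates below are rough illustrative placements chosen for this sketch, not values taken from Whissell (1989) or Plutchik (1980).

```python
import math

# Illustrative (evaluation, activation) coordinates in [-1, 1] x [-1, 1].
EMOTION_SPACE = {
    "joy":     ( 0.8,  0.5),
    "calm":    ( 0.5, -0.6),
    "sadness": (-0.7, -0.5),
    "fear":    (-0.6,  0.6),
    "anger":   (-0.7,  0.7),
}

def nearest_emotion(evaluation: float, activation: float) -> str:
    """Return the labelled emotion closest to an observed point in the plane,
    so affective states are related by distance instead of being isolated classes."""
    return min(
        EMOTION_SPACE,
        key=lambda name: math.dist((evaluation, activation), EMOTION_SPACE[name]),
    )

# A strongly negative, highly activated state reads as closest to anger here.
print(nearest_emotion(-0.65, 0.8))  # -> 'anger'
```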

These stimuli, and what a person is feeling at any given moment, can be captured by means of techniques such as facial recognition, recognition of movements and body language, capture of speech patterns and intonation, pupil dilation, heart rate and respiratory rate monitoring, as well as detection of typical smell patterns. Increasingly sophisticated devices (sensors) able to obtain such data can be found on the market without major difficulty. Nonetheless, in teaching/learning processes the least invasive techniques possible should be used.

Another part of the process is to give machines the ability to "understand" human emotions and make them capable of recognising what humans are feeling. The aim is to improve people's relationship with machines, make the interaction more flexible and offer a pleasant user interface, as well as to allow users to focus their attention on a specific element or situation and improve their decision-making by adapting to the context of each moment.

In order to do so, and in line with the objective of this chapter, we carried out an in-depth study of the possibility of applying discourse analysis methods in e-learning contexts, with implications for the detection, interpretation and regulation of emotions.
