Machine Learning and Sensor Data Fusion for Emotion Recognition

Eman M. G. Younis, Someya Mohsen Zaki, Essam Halim Houssein
Copyright: © 2023 |Pages: 30
DOI: 10.4018/978-1-7998-9220-5.ch159

Abstract

In this article, the authors investigate the development of sensor-data-fusion-based emotion detection models. They use direct and continuous sensor data to construct emotion prediction models, fusing environmental and physiological signals. The article integrates on-body physiological markers, surrounding sensory data, and emotion measurements to achieve the following goals: 1) collecting an in-the-wild dataset from multiple sensors; 2) using data fusion, feature fusion, and decision fusion for emotion recognition; 3) predicting emotional states by fusing environmental and physiological variables; and 4) developing subject-dependent emotion detection models. To achieve this, the authors conducted a real-world study "in the wild" with physiological and mobile sensors, with datasets collected from participants walking around Minia University campus. The authors compared the obtained results to choose the best-performing model. Results show that DT and RF significantly outperform SVM and KNN by 1% to 2% (p < 0.01), with an average accuracy of 0.97.
Chapter Preview

Introduction

Emotions have a massive influence on our lives, and negative emotions have become determinants of human health. Long-term unpleasant reactions are associated with health issues including migraines, asthma, ulcers, and heart disease (Kim, J., & André, E., 2008). The growing use of sensors and wireless networks has led to the development of low-cost, efficient wearable devices that collect and transfer data in real time over long periods (Kanjo, E., Younis, E. M., & Ang, C. S., 2019). These data sources provide an opportunity to create innovative algorithms for identifying human emotions, which can aid in the treatment of chronic diseases such as diabetes, asthma, and heart disease (Pollreisz, D., & TaheriNejad, N., 2017).

Researchers have made several attempts to integrate ML techniques with sensor datasets for automatic emotion identification (Busso, C., & Deng et al., 2004; Jerritta et al., 2011; Kanjo, E., Kuss, D. J., & Ang, C. S., 2017; Katsis et al., 2008). Many studies on automatic emotion identification have focused on visual, auditory, and movement data (e.g., facial expressions, body postures, speech) (Busso, C., & Deng et al., 2004; Jerritta et al., 2011; Katsis et al., 2008; Basiri, M., Schill, F., U. Lima, P., & Floreano, D., 2018; Kanjo, E., Al-Husain, L., & Chamberlain, A., 2015). With the growing availability of low-cost wearable sensors (e.g., Fitbit, Microsoft wristbands), research interest in using human physiological data for emotion identification has grown. Despite the capacity to sense a wide range of information (from human physiology to surroundings), automated human emotion categorization remains difficult due to the variability and diversity of human emotional manifestations (Plasqui, G., & Westerterp, K. R., 2007). In addition, earlier studies elicited specific emotions from controlled samples in lab settings using audio-visual stimuli (e.g., showing participants photos or videos, or asking them to complete designed tasks that induce emotional states) (Agrafioti, F., Hatzinakos, D., & Anderson, A. K., 2011). Despite its effectiveness, that sort of controlled study is limited to strictly controlled environments.

The authors used several standard ML techniques to capture the variability of multimodal data at the sensor and feature levels for mood categorization "in the wild" using smartphones and wristbands, based on integrating many sensors of various modalities: physiological (EDA, HR, body temperature, motion) and environmental (air pressure, environmental noise, UV).

The purpose of this chapter is to compare several supervised ML techniques (SVM, KNN, RF, DT) for classifying five distinct emotional states. The authors collected data from participants wandering around Minia University campus using physiological and mobile sensors in real-world settings to develop prediction models. Using sensor data, the authors applied an ML approach for emotion classification that incorporates a set of ML algorithms to achieve the following goals: 1) using on-body and environmental factors to predict emotional reactions; and 2) constructing a user-dependent model based on the various modalities associated with several sensors, using various ML techniques.
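A comparison of this kind can be sketched with scikit-learn. This is only an illustrative sketch, not the chapter's actual pipeline: the feature count, random labels, and cross-validation setup below are placeholders standing in for the real sensor features and the five emotional states.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))      # 7 stand-in sensor features (HR, EDA, noise, ...)
y = rng.integers(0, 5, size=500)   # 5 emotional states (placeholder labels)

# The four classifier families compared in the chapter.
models = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

On real labeled sensor data, the same loop yields the per-model accuracies that would be compared to select the best-performing model.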

The authors organize the rest of this chapter as follows. Section 2 presents background knowledge about affective computing, data fusion, and machine learning. Section 3 surveys related research. Section 4 presents the system description, including the data collection and tools used in the experiments. Section 5 presents the implementation framework and design of the proposed procedure, data preprocessing, and statistics. Results are presented and discussed in Section 6. Finally, the authors present conclusions and future work in Section 7.

Key Terms in this Chapter

Feature-Level Fusion: Intermediate-level data fusion used to select the best set of features for categorization. The best combination of features, such as EMG, respiration, skin conductance, and ECG, has been retrieved using feature-level fusion.
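A minimal sketch of feature-level fusion: features extracted per sensor stream are concatenated into a single vector per time window before classification. The sensor names, window count, and per-sensor feature counts here are illustrative assumptions, not the chapter's configuration.

```python
import numpy as np

n_windows = 100
eda_feats = np.random.rand(n_windows, 4)    # e.g. mean, std, min, max of EDA per window
hr_feats = np.random.rand(n_windows, 4)     # the same statistics for heart rate
noise_feats = np.random.rand(n_windows, 4)  # environmental noise features

# Concatenate the per-sensor feature matrices column-wise into one fused matrix.
fused = np.hstack([eda_feats, hr_feats, noise_feats])
print(fused.shape)  # (100, 12)
```

The fused matrix then feeds a single classifier, in contrast to decision-level fusion, where each stream gets its own model.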

Feature Selection: Selecting a subset of the dataset's variables to generate predictive models using machine learning algorithms.
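One common way to do this (an illustrative choice; the chapter does not prescribe a specific method) is univariate selection with scikit-learn's `SelectKBest`, which keeps the k columns most associated with the labels:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in data: 10 features, of which 4 are informative.
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=4, random_state=0)

# Keep the 4 features scoring highest on the ANOVA F-test.
selector = SelectKBest(f_classif, k=4)
X_sel = selector.fit_transform(X, y)
print(X_sel.shape)  # (200, 4)
```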

Data-Level Fusion: Low-level data fusion that gathers diverse data components from multiple sensors so they complement one another. During data collection, it is possible to incorporate other data sources, such as user self-reported emotions.
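At this level, raw samples from different sensors are aligned on timestamps and self-reported labels are attached to each row. The sketch below uses pandas `merge_asof` for the alignment; the sensor columns, timestamps, and labels are made up for illustration.

```python
import pandas as pd

# Raw streams from two sensors, sampled at slightly different times.
hr = pd.DataFrame({"t": [0.0, 1.0, 2.0], "hr": [72, 75, 71]})
eda = pd.DataFrame({"t": [0.1, 1.1, 2.1], "eda": [0.31, 0.35, 0.30]})
# Sparse self-reported emotion labels.
labels = pd.DataFrame({"t": [0.0, 2.0], "emotion": ["neutral", "happy"]})

# Align EDA to the nearest HR timestamp, then carry each self-report
# forward until the next one.
fused = pd.merge_asof(hr, eda, on="t", direction="nearest")
fused = pd.merge_asof(fused, labels, on="t", direction="backward")
print(fused)
```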

Machine Learning: A branch of artificial intelligence that enables machines to learn from data using algorithms.

Feature Engineering: The process of identifying and constructing the most relevant and important features for machine learning algorithms to create predictive models.

Decision-Level Fusion: High-level data fusion whose goal is to improve decision-making by integrating the results of several algorithms.
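The simplest integration rule is a majority vote over the labels emitted by the individual models. In this sketch, the per-model predictions are fabricated for illustration; the model names are hypothetical.

```python
from collections import Counter

# Each model (e.g. one per sensor modality) emits its own label.
predictions = {
    "hr_model": "happy",
    "eda_model": "happy",
    "env_model": "neutral",
}

# Majority vote: the most common label wins.
votes = Counter(predictions.values())
decision = votes.most_common(1)[0][0]
print(decision)  # happy
```

Weighted votes or averaging of class probabilities are common refinements of the same idea.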
