Incorporating Affective Computing Into an Interactive System With MakeyMakey: An Emotional Human-Computer Interaction Design

Liu Hsin Lan, Lin Hao-Chiang Koong, Liang Yu-Chen, Zeng Yu-cheng, Zhan Kai-cheng, Liu Hsin-Yueh
Copyright © 2022 | Pages: 15
DOI: 10.4018/IJOPCD.2022010102


People's motions and behaviors often ensue from positive or negative emotions. Triggered either subconsciously or intentionally, these fragmentary responses also reflect people's emotional vacillations at different times, although they are rarely noticed or discovered. This system incorporates affective computing into an interactive installation: while a user is performing an operation, the system instantaneously and randomly generates corresponding musical instrument sound effects and visual special effects. The system is intended to let users interact with their emotions through the installation, yielding a personalized digital artwork while also teaching them how emotions shape consciousness and personal behavior. At the end of the process, the project administers three questionnaires so that survey responses can enhance the integrity and richness of the system, and so that progressive modifications aligned with user suggestions can further increase its stability and precision.


The history of music is long, and the genres and constituent elements of music vary across countries. With its ubiquitous presence, music enriches human life and affects human emotions (Arjmand, Hohagen, Paton, & Rickard, 2017). In the past, the musical expression of emotions could only be conveyed by playing physical instruments or by singing. With the advancement of information technology, however, the manifestation of musical art can break free of the constraints of trained musicianship and reach out to the general public while returning to music's original intent. A system that offers simple operations and interfaces without compromising accuracy can hopefully present a user's most authentic feelings instantaneously, thus generating a personalized digital artwork (Geng & Cao, 2019). Combining an interactive installation with emotion-recognition technology, this system allows users to manipulate sound shadows; furthermore, it randomly generates corresponding sound effects to raise users' sense of participation in the operation and to facilitate their emotional release to some degree. Combined with augmented reality (AR) experiences, the system promotes the embodiment of musical performances, bringing users a brand new visual and auditory feast (Coulton, Smith, Murphy, Pucihar, & Lochrie, 2014).
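As a rough illustration of the interaction loop described above, the sketch below assumes the MakeyMakey board presents its electrodes as ordinary keyboard keys (its standard behavior) and maps each key to a pool of instrument samples from which one is drawn at random. The key names and sample file names are invented for this example, not taken from the paper.

```python
import random

# Hypothetical mapping from MakeyMakey inputs (emitted as ordinary keyboard
# events) to pools of instrument sound effects; all names are illustrative.
SAMPLE_POOLS = {
    "up":    ["piano_c4.wav", "piano_e4.wav", "piano_g4.wav"],
    "down":  ["drum_kick.wav", "drum_snare.wav"],
    "left":  ["violin_a3.wav", "violin_d4.wav"],
    "right": ["flute_g4.wav", "flute_b4.wav"],
}

def pick_effect(key: str, rng: random.Random = random) -> str:
    """Return a randomly chosen sample for the touched electrode."""
    pool = SAMPLE_POOLS.get(key)
    if pool is None:
        raise KeyError(f"no sample pool bound to input {key!r}")
    return rng.choice(pool)
```

In a real installation the returned file name would be handed to an audio engine; the random draw is what gives each operation the "instantaneous and random" sound effects the system describes.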

Facial recognition was chosen as the distinguishing criterion for affective computing over other signals. Compared with decision elements such as heart rhythm, body temperature, and skin conductance, facial recognition offers immediate, precise numerical variations and considerably high accuracy (Wegrzyn, Vogt, Kireclioglu, Schneider, & Kissler, 2017). This technique is therefore considered more suitable for the present study's support and augmentation. However, merely digitizing such physiological data and movements seems rather dull. This system instead materializes feelings at the mental level: it allows users to control the variation of sound shadows through an interactive installation, thereby heightening their sense of participation and making the system more stimulating and interesting to use (Duarte, Gonçalves, & Baranauskas, 2018).
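To make the pipeline concrete, the following sketch shows one way the per-frame emotion scores produced by a facial-recognition backend could be reduced to rendering parameters for the installation. The emotion labels, style table, and parameter values are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative post-processing: given emotion scores for one video frame
# (as produced by some facial-recognition backend), pick the dominant
# emotion and look up audio/visual parameters for the installation.
EMOTION_STYLES = {
    "happy":   {"instrument": "marimba", "tempo_bpm": 120, "hue": 45},
    "sad":     {"instrument": "cello",   "tempo_bpm": 60,  "hue": 220},
    "angry":   {"instrument": "drums",   "tempo_bpm": 140, "hue": 0},
    "neutral": {"instrument": "piano",   "tempo_bpm": 90,  "hue": 180},
}

def style_for(scores: dict) -> dict:
    """Map per-frame emotion scores to rendering parameters.

    Falls back to the neutral style for emotions without an entry.
    """
    dominant = max(scores, key=scores.get)
    return EMOTION_STYLES.get(dominant, EMOTION_STYLES["neutral"])
```

A lookup table like this is one simple way to keep the mapping from recognized emotion to sound and color deterministic while leaving the sample choice itself random, as described earlier.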

Our ultimate objective is to bring digital art closer to people's lives through the information technologies described above and to dispel misunderstandings that arise from disciplinary divides, gaps in knowledge and technology, or even age barriers. Digital art should reach beyond specific groups and become integrated into daily life. A system with simple operations and interfaces that does not sacrifice accuracy can, we hope, instantaneously present a user's most authentic feelings and generate a personalized digital artwork.

Therefore, our research focus addresses the following questions:

  1. How can the system automatically generate a tune that corresponds to a user's mindset by computing the emotional data the user enters at will?

  2. How can the musical attributes of the installation's input be converted into values the computer can calculate?

  3. How can the emotionality of music be judged, and which musical attributes can serve as a basis for that judgment?

  4. Can this system give users a deeper understanding of and familiarity with digital art?

  5. Can this system increase users' willingness and propensity for artistic creation?
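Questions 2 and 3 can be illustrated with a deliberately simple sketch. Tempo, mode, and loudness are commonly used proxies for musical emotion (fast tempo and major mode are conventionally associated with high arousal and positive valence); the weights below are illustrative placeholders, not fitted values from the study.

```python
def music_emotion(tempo_bpm: float, mode: str, mean_loudness: float) -> dict:
    """Very rough valence/arousal estimate from three musical attributes.

    mean_loudness is expected in [0, 1]. The 0.7/0.3 weights and the
    40-200 BPM tempo range are illustrative placeholders.
    """
    # Normalize tempo into [0, 1] over a 40-200 BPM range, then blend
    # with loudness to estimate arousal.
    tempo_norm = min(1.0, max(0.0, (tempo_bpm - 40) / 160))
    arousal = 0.7 * tempo_norm + 0.3 * mean_loudness
    # Mode as a crude valence proxy: major reads as positive, minor negative.
    valence = 0.7 if mode == "major" else 0.3
    return {"valence": valence, "arousal": round(arousal, 3)}
```

Turning attributes into numbers this way is what question 2 asks for; which attributes deserve to be in the formula at all, and with what weights, is exactly the substance of question 3.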
