An Automatic Mechanism to Recognize and Generate Emotional MIDI Sound Arts Based on Affective Computing Techniques

Hao-Chiang Koong Lin, Cong Jie Sun, Bei Ni Su, Zu An Lin
Copyright: © 2013 |Pages: 14
DOI: 10.4018/ijopcd.2013070104


All kinds of art can be represented in digital forms, and one of them is sound art, including orally transmitted ballads, classical music, religious music, popular music, and emerging computer music. Recently, affective computing has drawn considerable attention in academia; it spans two aspects: the physiological and the psychological. Through a variety of sensing devices, the authors can capture behaviors that express feelings and emotions, and may therefore not only identify but also understand human emotions. This work focuses on exploring and producing a Max/MSP computer program that generates emotional music automatically. It can also recognize the emotion expressed when users play MIDI instruments, and it creates corresponding visual effects. The authors pursue two major goals: (1) producing an art performance that combines dynamic visuals with auditory tunes, and (2) enabling computers to understand human emotions and interact through music by means of affective computing. The results of this study are as follows: (1) the authors design a mechanism that maps musical tones to recognized human emotions; (2) the authors develop a combination of affective computing and an automatic music generator; (3) the authors design a music system that can be played with a MIDI instrument and combined with other musical effects to enhance musicality; and (4) the authors assess and complete the emotion-discrimination mechanism so that mood music can be fed back accurately. The authors make computers simulate (or even possess) human emotion and thereby obtain a relevant basis for more accurate sound feedback. The authors use the System Usability Scale to analyze and discuss the usability of the system. The average score of each item is clearly higher than the neutral score (four points) for both the overall response and the musical performance of the "auto mood music generator."
Each part of the Interaction and Satisfaction Scale averages more than five points. Subjects are willing to accept this interactive work, which shows that it is usable and has the potential for further development.
Research Questions

Music, which we use here as a symbol, is an expression of sensibility, and "sound" is the medium through which users release it. Combining sound and emotion correctly yields good music. Through an art-creation mechanism with random real-time effects, we want the audience to join the scene, to feel something, and to experience an entirely different form of exhibition. Music accompanies us in particular ways; we expect to harness its power and function so that it resonates with audiences as well:

  1. How can we use Max/MSP/Jitter and music theory to build the automatic music-generation mechanism?

  2. Which parameters does Max/MSP/Jitter need when the sensor receives a variety of tunes?

  3. How do a MIDI instrument and Max/MSP/Jitter work together, and how can the parameters be captured during a performance?

  4. How do we express the feelings and ensure interaction between audiences and our mechanism?

  5. Can we characterize the melodies associated with different emotions, and thereby offer a whole new creative approach for emerging computer music?
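Questions 2 and 3 concern extracting parameters from a MIDI performance and mapping them to an emotion. The article does not publish its mapping rules, but the general idea can be illustrated with a minimal sketch based on the common valence-arousal model: tempo and note velocity as a rough proxy for arousal, and mode (major/minor) as a rough proxy for valence. The function name and all thresholds below are illustrative assumptions, not the authors' actual mechanism.

```python
# Hypothetical sketch: map simple MIDI performance features to one of
# four basic emotion labels on a valence-arousal grid. All thresholds
# are illustrative, not taken from the article.

def classify_emotion(tempo_bpm, mean_velocity, mode):
    """Classify a performance as happy, calm, angry, or sad.

    tempo_bpm     -- performance tempo in beats per minute
    mean_velocity -- average MIDI note-on velocity (0-127)
    mode          -- "major" or "minor" (rough proxy for valence)
    """
    # Arousal: fast or loud playing is treated as high-arousal.
    high_arousal = tempo_bpm >= 110 or mean_velocity >= 90
    # Valence: major mode is treated as positive, minor as negative.
    positive = (mode == "major")

    if positive and high_arousal:
        return "happy"   # high valence, high arousal
    if positive:
        return "calm"    # high valence, low arousal
    if high_arousal:
        return "angry"   # low valence, high arousal
    return "sad"         # low valence, low arousal

print(classify_emotion(140, 100, "major"))  # happy
print(classify_emotion(70, 50, "minor"))    # sad
```

In a Max/MSP/Jitter patch, the same logic would live downstream of a `notein` object, with the classification driving both the music generator and the visual effects.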
