An HSV-Based Visual Analytic System for Data Science on Music and Beyond

Carson K.S. Leung, Yibin Zhang
Copyright: © 2019 | Pages: 16
DOI: 10.4018/IJACDT.2019010105

Abstract

In the current era of big data, high volumes of a wide variety of valuable data—which may be of different veracities—can be easily generated or collected at a high speed in various real-life applications related to art, culture, design, engineering, mathematics, science, and technology. A data science solution helps manage, analyze, and mine these big data—such as musical data—for the discovery of interesting information and useful knowledge. Since “a picture is worth a thousand words,” a visual representation provided by the data science solution helps users visualize the big data and comprehend the mined information and discovered knowledge. This journal article presents a visual analytic system—which uses a hue-saturation-value (HSV) color model to represent big data—for data science on musical data and beyond (e.g., other types of big data).

Introduction

In the current era of big data, high volumes of a wide variety of valuable data—which may be of different veracities (e.g., precise data, imprecise and uncertain data)—can be easily generated or collected at a high velocity in various real-life artistic, cultural, design, engineering, mathematical, scientific, and technological applications. Music repositories, music and sound art, new media art, net art, performance art, visual arts, poems, video games, social networks, and the World Wide Web are some sources of big data. Embedded in these big data are rich sets of implicit, previously unknown, and potentially useful information and valuable knowledge, which can be discovered by data science solutions. In general, data science solutions apply big data mining, machine learning, high-performance computing, statistical methods, mathematical modelling, and other related tools for managing, analyzing, and visualizing data.

In general, music can be played, sung, and heard as a form of art or entertainment (e.g., in music concerts, orchestra performances, theater shows, radios, TVs, CD or MP3 players). It can also be used as therapy to improve or maintain people's health. In addition, it can play a key role in religious rituals or cultural ceremonies. With advances in technology, musical data in traditional forms (e.g., audio or video recordings, CDs) can be easily digitized, collected, and stored on the Internet. These musical data are examples of big data. Since music has played an important role in our lives, having the capability to manage these musical data and understand their contents is desirable (Li & Li, 2011). For instance, music data management helps users effectively manage huge volumes of data (e.g., search for and retrieve a particular song based on its lyrics). Music data mining (Li et al., 2011, 2014; Neubarth & Conklin, 2016; Martínez & Liern, 2017; Barkwell et al., 2018; Karydis et al., 2018) helps users discover implicit, previously unknown, and potentially useful information and valuable knowledge from musical data (e.g., find popular verses). Different music data mining techniques—such as lyric text mining, cognitive musicology, computational musicology, and computational music analysis (Meredith, 2016)—focus on mining different aspects of musical data, as listed below (a brief illustrative sketch follows the list):

  • Lyrics (e.g., verses, choruses);

  • Metadata (e.g., song title, band or singer’s name, album name);

  • Genre (e.g., classical, folk, jazz, blues, country, hip-hop, rock, metal or heavy music);

  • Social tags (e.g., hash tags); and/or

  • Acoustic features (e.g., harmony, intensity, melody, pitch, rhythm, tempo, timbre).
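
As a concrete illustration of these aspects, the following Python sketch shows one possible record structure for a song covering lyrics, metadata, genre, social tags, and acoustic features. The class and field names are illustrative assumptions, not the data model used by the proposed system.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Song:
    # Metadata (e.g., song title, band or singer's name, album name)
    title: str
    artist: str
    album: str
    # Genre label (e.g., "classical", "folk", "jazz")
    genre: str
    # Lyrics as an ordered list of verses or choruses, each a list of words
    lyrics: List[List[str]] = field(default_factory=list)
    # Social tags (e.g., hash tags) attached by listeners
    tags: List[str] = field(default_factory=list)
    # Acoustic features (e.g., tempo, pitch, intensity), keyed by feature name
    acoustic_features: Dict[str, float] = field(default_factory=dict)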

In the current journal article, we focus on lyric text mining, which aims to analyze musical lyrics—i.e., sets of words in verses or choruses that make up a song in a textual form—for interesting patterns such as frequently occurring sets of words in different music genres (i.e., popular words or phrases in a musical piece or song). The mined knowledge and useful information reveal characteristics of the musical pieces, the styles of songwriters (or music composers or lyricists), and the preferences of audiences or listeners. This, in turn, tells us more about the art and culture of songwriters in a particular region of the world and/or during a particular time period.
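
To make this concrete, the following Python sketch treats each verse as a transaction of words and reports word sets that occur in at least a user-specified number of verses. It is a naive, hedged illustration of lyric text mining as frequent pattern mining; the function name, parameters, and toy lyrics are assumptions and do not reproduce the algorithm used in this article.

from collections import Counter
from itertools import combinations
from typing import Dict, FrozenSet, Iterable, List


def frequent_word_sets(verses: Iterable[List[str]],
                       min_support: int = 2,
                       max_size: int = 2) -> Dict[FrozenSet[str], int]:
    # Treat each verse as a transaction: count how many verses contain
    # each word set of size 1..max_size, then keep the frequent ones.
    counts: Counter = Counter()
    for verse in verses:
        unique_words = sorted(set(word.lower() for word in verse))
        for size in range(1, max_size + 1):
            for word_set in combinations(unique_words, size):
                counts[frozenset(word_set)] += 1
    return {word_set: count for word_set, count in counts.items()
            if count >= min_support}


# Toy usage: "love", "me", and the pair {"love", "me"} occur in all three verses.
verses = [["love", "me", "do"],
          ["love", "love", "me", "do"],
          ["please", "love", "me"]]
print(frequent_word_sets(verses, min_support=3))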

Lyric text mining can be considered a special case of frequent pattern mining in which the data to be mined are musical lyrics. Over the past two decades, numerous frequent pattern mining algorithms (Rompré et al., 2017) have been designed and developed. Many of them focus on either functionality or efficiency, and they usually return the mining results in textual form (e.g., a very long list of frequent patterns). Consequently, users may not easily comprehend the mined knowledge and useful information from such a textual list. When compared with a textual representation of the data and the mined results, a visual representation (Keim et al., 2010; Tanaka et al., 2016; Ventocilla & Riveiro, 2017; Jentner & Keim, 2019) is more comprehensible to users.
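
As a hedged example of how an HSV-based visual representation could make such results easier to digest, the sketch below maps a pattern's support onto a hue ranging from red (infrequent) to green (frequent) with fixed saturation and value. The specific HSV mapping of the proposed system is defined in the remainder of the article; this particular scheme is only an assumed illustration.

import colorsys
from typing import Tuple


def support_to_rgb(support: float, max_support: float) -> Tuple[int, int, int]:
    # Normalize support into [0, 1], then map it onto hues from 0.0 (red)
    # to 1/3 (green); saturation and value stay fixed at 1.0.
    ratio = max(0.0, min(1.0, support / max_support))
    r, g, b = colorsys.hsv_to_rgb(ratio / 3.0, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)


# Usage: more frequent patterns appear greener, rarer ones redder.
print(support_to_rgb(10, 100))   # low support  -> reddish
print(support_to_rgb(90, 100))   # high support -> greenish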
