MixMash: An Assistive Tool for Music Mashup Creation from Large Music Collections

Catarina Maçãs, Ana Rodrigues, Gilberto Bernardes, Penousal Machado
Copyright: © 2019 | Pages: 21
DOI: 10.4018/IJACDT.2019070102

Abstract

This article presents MixMash, an interactive tool that streamlines music mashup creation by assisting users in finding compatible music within a large collection of audio tracks. It extends the harmonic mixing method by Bernardes, Davies, and Guedes with novel harmonic, rhythmic, spectral, and timbral similarity metrics. Furthermore, it addresses interface design limitations identified in the software implementation of the former model. A new user interface design based on cross-modal associations between musical content analysis and information visualisation is presented. In this graphic model, all tracks are represented as nodes in a force-directed graph, in which distances and edge connections convey their harmonic compatibility. In addition, a visual language is defined to enhance the tool's usability and foster creative endeavour in the search for meaningful music mashups.
Article Preview

Introduction

Mashup creation is a music composition practice strongly linked to various sub-genres of Electronic Dance Music (EDM) and the role of the DJ (Shiga, 2007). It entails the recombination of existing pre-recorded musical audio as a means of creative endeavour (Navas, 2014). This practice has been nurtured by growing media preservation mechanisms that allow users to access large collections of musical audio in digital format for their mixes (Vesna, 2007). However, the scale of these growing audio collections also raises the issue of retrieving musical audio that matches particular criteria (Schedl, Gómez, & Urbano, 2014). In this context, both industry and academia have been devoting effort to developing tools for computational mashup creation, which streamline the time-consuming and complex search for compatible musical audio.

Early research on computational mashup creation focused on rhythmic-only attributes, particularly those relevant to the temporal alignment of two or more musical audio tracks (Griffin, Kim, & Turnbull, 2010). Recent research (Davies, Hamel, Yoshii, & Goto, 2014; Gebhardt, Davies, & Seeber, 2016; Bernardes, Davies, & Guedes, 2018) has expanded the range of musical attributes under consideration towards harmonic- and spectral-driven attributes. The former aims to identify the degree of harmonic compatibility between musical audio tracks, commonly referred to as harmonic mixing. The latter aims to identify the spectral region occupied by a particular musical audio track across the frequency range (e.g., the concentration of energy in low, middle, and high frequency bands), which can then guide the spectral distribution of the mix.
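To make the spectral-driven attribute more concrete, the sketch below summarises the spectral region a track occupies as the share of its energy in low, middle, and high frequency bands. This is only an illustrative example, not the method used in the works cited above: the band_energy_profile function and the 250 Hz and 4 kHz band edges are assumptions chosen for the sketch.

```python
# Illustrative sketch (not the cited authors' implementation): summarise the
# spectral region a track occupies as the fraction of its energy falling in
# low, middle, and high frequency bands. Band edges are assumed values.
import numpy as np


def band_energy_profile(samples: np.ndarray, sample_rate: int,
                        low_cut: float = 250.0, high_cut: float = 4000.0):
    """Return the fraction of spectral energy in (low, mid, high) bands."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)

    low = spectrum[freqs < low_cut].sum()
    mid = spectrum[(freqs >= low_cut) & (freqs < high_cut)].sum()
    high = spectrum[freqs >= high_cut].sum()

    total = low + mid + high
    if total == 0:
        total = 1.0                                         # silent input guard
    return low / total, mid / total, high / total


if __name__ == "__main__":
    sr = 44100
    t = np.linspace(0, 1.0, sr, endpoint=False)
    # A synthetic "track": a bass tone plus a little broadband hiss.
    audio = np.sin(2 * np.pi * 110 * t) + 0.1 * np.random.randn(sr)
    print(band_energy_profile(audio, sr))                   # mostly low-band energy
```

A profile of this kind, computed per track, is the sort of descriptor that can guide how a mix is distributed across the frequency range.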

The interface design of early software implementations adopts a one-to-many mapping strategy between a user-defined track and a ranked list of compatible tracks shown to the user (Mixed in Key, n.d.; Native Instruments, n.d.; Davies et al., 2014). Recently, Bernardes et al. (2018) proposed an interface design that adopts a many-to-many mapping strategy, offering a global view of the compatibility between all tracks in a music collection and promoting serendipitous navigation (Figure 1). It represents each audio track in a collection as a graphical element in a navigable two-dimensional interface. Distances among these elements indicate harmonic compatibility, and additional graphic variables, such as colour and shape, convey rhythmic and spectral information relevant to mashup creation. By exposing users to the compatibility between all tracks in a collection, this interface design aims to provide an overview of the relations between tracks.

Figure 1.

Screenshot of the visualisation of the original MixMash interface representing (a) 50 musical audio tracks and (b) 200 musical audio tracks. Refer to Bernardes et al. (2018) for a detailed interpretation.

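As an illustration of the many-to-many mapping described above, the following sketch lays out a small collection with a force-directed algorithm so that more compatible tracks end up closer together. It is not MixMash's algorithm: networkx's spring_layout stands in for the force-directed graph, and the layout_collection function and the placeholder compatibility scores are assumptions made for the example.

```python
# Minimal sketch of the many-to-many mapping idea, assuming a precomputed
# pairwise compatibility score in [0, 1] for every track pair. networkx's
# spring_layout is used here as a stand-in force-directed layout, with edge
# weights acting as attraction so compatible tracks are drawn together.
import itertools
import networkx as nx


def layout_collection(tracks, compatibility):
    """Return 2-D positions where compatible tracks end up near each other.

    tracks        -- list of track identifiers
    compatibility -- callable (track_a, track_b) -> score in [0, 1]
    """
    graph = nx.Graph()
    graph.add_nodes_from(tracks)
    for a, b in itertools.combinations(tracks, 2):
        graph.add_edge(a, b, weight=compatibility(a, b))
    # Higher edge weights pull nodes closer in the spring layout.
    return nx.spring_layout(graph, weight="weight", seed=42)


if __name__ == "__main__":
    demo_tracks = ["track_a", "track_b", "track_c", "track_d"]
    keys = {"track_a": "C", "track_b": "C", "track_c": "G", "track_d": "F"}

    def score(a, b):
        # Placeholder compatibility: tracks sharing a key score high.
        return 1.0 if keys[a] == keys[b] else 0.2

    for track, (x, y) in layout_collection(demo_tracks, score).items():
        print(f"{track}: ({x:+.2f}, {y:+.2f})")
```

In the printed layout, the two tracks sharing a key land closer together than the others, which mirrors how a global, distance-based view of a collection can surface candidate mashup pairs at a glance.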
