Automatic Tagging of Audio: The State-of-the-Art


Thierry Bertin-Mahieux (Columbia University, USA), Douglas Eck (University of Montreal, Canada) and Michael Mandel (University of Montreal, Canada & Columbia University, USA)
Copyright: © 2011 |Pages: 19
DOI: 10.4018/978-1-61520-919-4.ch014

Abstract

Recently there has been a great deal of attention paid to the automatic prediction of tags for music and audio in general. Social tags are user-generated keywords associated with some resource on the Web. In the case of music, social tags have become an important component of "Web 2.0" recommender systems. There have been many attempts at automatically applying tags to audio for different purposes: database management, music recommendation, improved human-computer interfaces, estimating similarity among songs, and so on. Many published results show that this problem can be tackled using machine learning techniques; however, no method so far has been proven to be particularly suited to the task. First, it seems that no one has yet found an appropriate algorithm to solve this challenge. Second, the task definition itself is problematic. In an effort to better understand the task, and also to help new researchers bring their insights to bear on this problem, this chapter provides a review of the state-of-the-art methods for the automatic tagging of audio. It is divided into the following sections: goal, framework, audio representation, labeled data, classification, evaluation, and future directions. Such a division helps to highlight the commonalities and strengths of the different methods that have been proposed.
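The pipeline outlined in the abstract (audio representation, labeled data, classification) can be illustrated with a deliberately minimal sketch: each clip is reduced to a feature vector, and one binary decision rule per tag is learned from user-applied tags. The clips, features, tags, and the nearest-centroid rule below are all hypothetical stand-ins for the real audio features and machine-learning models surveyed in the chapter.

```python
# Minimal multi-label tagging sketch: one decision rule per tag.
# Features and tags are synthetic; real systems would use audio
# descriptors (e.g., spectral features) and stronger classifiers.
from math import dist

# Toy "audio features" for four training clips.
training_clips = {
    "clip_a": [0.9, 0.1],
    "clip_b": [0.8, 0.2],
    "clip_c": [0.1, 0.9],
    "clip_d": [0.2, 0.8],
}

# Social tags applied by users (multi-label: a clip can carry several).
training_tags = {
    "clip_a": {"rock", "loud"},
    "clip_b": {"rock"},
    "clip_c": {"jazz", "mellow"},
    "clip_d": {"jazz"},
}

def tag_centroids(clips, tags):
    """Mean feature vector of all training clips carrying each tag."""
    centroids = {}
    for t in set().union(*tags.values()):
        members = [clips[c] for c in clips if t in tags[c]]
        centroids[t] = [sum(xs) / len(members) for xs in zip(*members)]
    return centroids

def predict_tags(features, centroids, radius=0.5):
    """Apply every tag whose centroid lies within `radius` of the clip."""
    return {t for t, c in centroids.items() if dist(features, c) <= radius}

centroids = tag_centroids(training_clips, training_tags)
print(predict_tags([0.85, 0.15], centroids))  # tags shared by the nearby clips
```

The per-tag decision rule is the key structural point: because a clip may receive any subset of the vocabulary, autotagging is naturally framed as many binary problems (or one multi-label problem) rather than a single multi-class classification.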

Introduction

Many tasks require machines to hear in order to accomplish them. In the case of music, we would like computers to help us discover, manage, and describe the many new songs that become available every day. The goal is not necessarily to replace humans: the best description of an album is probably one written by a music expert. That being said, it is now impossible for any group of experts to listen to every piece on the Internet and summarize it so that others can discover it. For example, consider an online radio service like Pandora, which relies on expert annotation of songs, an approach that cannot scale to the whole of online music. Automatic tagging has also been evaluated in the MIREX contest (http://www.music-ir.org/mirex/2009/index.php/Main_Page). Though we applaud the hard work of the contest organizers, there is room for improvement. For example, the contest could have been defined more clearly in terms of evaluation, a point we discuss later in this chapter. One goal of our review is to bring together the many proposed evaluation methods and work towards a common framework for evaluation. In addition, we hope that this effort will help bring new researchers into this area by offering them a clear set of goals.

Note that the lack of a common goal does not mean that tag learning has been useless so far. For instance, Eck et al. (2008) and Barrington et al. (2009) both showed that automatically generated tags can improve music recommendation. Also, Turnbull et al. (Turnbull, Barrington, Torres & Lanckriet, 2008) explain how to manage a sound-effects database using automatic tagging.

This chapter focuses on the automatic tagging of music. However, given the vast and diverse set of tags that have been used and the absence of prior knowledge assumed about the audio, this work addresses the tagging of audio in general. For instance, someone working on a speech-versus-music classifier should also find the following methods and algorithms relevant.
