Emulating Subjective Criteria in Corpus Validation

Ignasi Iriondo, Santiago Planet, Francesc Alías, Joan-Claudi Socoró, Elisa Martínez
Copyright: © 2009 | Pages: 6
DOI: 10.4018/978-1-59904-849-9.ch083

Abstract

The use of speech in human-machine interaction is increasing as computer interfaces become more complex but also more usable. These interfaces draw on information obtained from the user through the analysis of different modalities and respond by means of different media. The origin of multimodal systems can be traced to their precursor, the “Put-That-There” system (Bolt, 1980), an application operated by speech and gesture recognition. Using speech as one of these modalities, both to receive commands from users and to provide spoken information, makes human-machine communication more natural. A growing number of applications use speech-to-text conversion and animated characters with speech synthesis. One way to improve the naturalness of these interfaces is to incorporate the recognition of the user's emotional state (Campbell, 2000). This generally requires the creation of speech databases with authentic emotional content that allow robust analysis. Cowie, Douglas-Cowie & Cox (2005) present several databases and note an increase in multimodal ones, and Ververidis & Kotropoulos (2006) describe 64 databases and their applications. When creating this kind of database, the main problem that arises is the naturalness of the locutions, which depends directly on the recording method: the recordings must be controlled without compromising the authenticity of the locutions. Campbell (2000) and Schröder (2004) propose four sources for obtaining emotional speech, ordered from less control but more authenticity to more control but less authenticity: i) natural occurrences, ii) provocation of authentic emotions under laboratory conditions, iii) emotions stimulated by means of prepared texts, and iv) acted speech, reading the same texts in different emotional states, usually performed by actors. On the one hand, corpora designed to synthesize emotional speech are based on studies centred on the listener, following the distinction made by Schröder (2004), because they model the speech parameters in order to transmit a specific emotion. On the other hand, emotion recognition implies studies centred on the speaker, because they relate the speaker's emotional state to the parameters of the speech. The validation of a corpus used for synthesis involves both kinds of studies: the former because the corpus will be used for synthesis, and the latter because recognition is needed to evaluate its content. The best validation system is the selection of the valid utterances of the corpus by human listeners; however, the large size of a corpus makes this process unaffordable.

Background

Emotion recognition has long been an interesting research field in human-machine interaction, as can be observed in Cowie et al. (2001). Some studies have examined the influence of emotion on speech signals, such as the work presented by Rodríguez et al. (1999). More recently, thanks to the increasing power of modern computers, which allows huge amounts of data to be analysed in relatively short times, machine learning techniques have been used to recognise emotions automatically from labelled expressive speech corpora. Most of these studies have been centred on a few algorithms and small sets of parameters.

However, more recent works have performed more exhaustive experiments testing different machine learning techniques and datasets, such as those described by Oudeyer (2003). All these studies aimed to achieve the best possible recognition rate, in many cases obtaining better results than those of subjective tests (Oudeyer, 2003; Planet, Morán & Formiga, 2006; Iriondo, Planet, Socoró & Alías, 2007). Nevertheless, many differences appear when the results of objective and subjective classifications are compared and, to our knowledge, there were no studies aiming to emulate these subjective criteria before the one carried out by Iriondo, Planet, Alías, Socoró & Martínez (2007).

Key Terms in this Chapter

Greedy Algorithm: Algorithm, usually applied to optimization problems, that builds a global solution to a problem (though not necessarily the optimal one) by choosing the locally optimal option at each iteration.

Backward Elimination Strategy: Greedy attribute selection method that evaluates the effect of removing one attribute from a dataset. The attribute whose removal most improves performance is deleted for the next iteration. The process begins with the full set of attributes and stops when no removal improves performance.
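
As a simple illustration of this greedy strategy (and of the greedy idea defined above), the following Python sketch removes, at each iteration, the attribute whose deletion most improves a generic scoring function. The evaluate callable, the attribute names and the toy scorer are hypothetical placeholders, not taken from the chapter.

```python
# Minimal sketch of greedy backward elimination. The evaluate() callable and
# the attribute names are illustrative placeholders.

def backward_elimination(attributes, evaluate):
    """Greedily remove attributes while some removal improves the score."""
    selected = list(attributes)
    best_score = evaluate(selected)
    while len(selected) > 1:
        # Try removing each attribute and keep the single removal that helps most.
        candidate, candidate_score = None, best_score
        for attr in selected:
            score = evaluate([a for a in selected if a != attr])
            if score > candidate_score:
                candidate, candidate_score = attr, score
        if candidate is None:      # no removal improves performance: stop
            break
        selected.remove(candidate)
        best_score = candidate_score
    return selected, best_score

# Toy usage: the score improves when "noisy" attributes are removed.
noisy = {"jitter"}
score = lambda attrs: 1.0 - 0.1 * sum(a in noisy for a in attrs)
print(backward_elimination(["pitch", "energy", "jitter"], score))
```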

Precision: Measure that indicates the percentage of correctly classified cases of one class with respect to the total number of cases that are classified (correctly or not) as members of that class. This measure reveals whether the classifier is assigning to a specific class cases that actually belong to other classes.

F1-Measure: Combination of the precision and recall measures of a classifier by means of their harmonic mean. Its expression is F1-measure = (2 × precision × recall) / (precision + recall).

Decision Trees: Classifier consisting of a tree structure. A test sample is classified by evaluating it at each node, starting at the root and choosing a specific branch depending on the result of each evaluation. The classification of the sample is the class assigned at the leaf node reached.
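
To illustrate how a test sample traverses such a structure, a minimal hand-built tree is sketched below; the attributes, thresholds and class labels are invented for the example and do not come from the chapter.

```python
# Minimal sketch: classifying a sample by walking a hand-built decision tree
# from the root node down to a leaf. All names and values are illustrative.

tree = {
    "test": ("mean_pitch", 180.0),       # attribute and threshold at this node
    "left": {"leaf": "neutral"},         # branch taken when value <= threshold
    "right": {
        "test": ("energy", 0.6),
        "left": {"leaf": "sad"},
        "right": {"leaf": "happy"},
    },
}

def classify(node, sample):
    """Follow branches from the root until a leaf assigns the class."""
    if "leaf" in node:
        return node["leaf"]
    attribute, threshold = node["test"]
    branch = "left" if sample[attribute] <= threshold else "right"
    return classify(node[branch], sample)

print(classify(tree, {"mean_pitch": 210.0, "energy": 0.8}))  # -> "happy"
```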

Forward Selection Strategy: Greedy attribute selection method that evaluates the effect of adding one attribute to a dataset. The attribute whose addition most improves performance is added for the next iteration. The process begins with no attributes and stops when adding new attributes provides no performance improvement.
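
The same greedy scheme can be written in the opposite direction, starting from an empty attribute set; the sketch below mirrors the backward elimination example above and again uses a placeholder evaluate function.

```python
# Minimal sketch of greedy forward selection; evaluate() and the attribute
# names are illustrative placeholders.

def forward_selection(attributes, evaluate):
    """Greedily add attributes while some addition improves the score."""
    selected, remaining = [], list(attributes)
    best_score = evaluate(selected)
    while remaining:
        candidate, candidate_score = None, best_score
        for attr in remaining:
            score = evaluate(selected + [attr])
            if score > candidate_score:
                candidate, candidate_score = attr, score
        if candidate is None:      # no addition improves performance: stop
            break
        selected.append(candidate)
        remaining.remove(candidate)
        best_score = candidate_score
    return selected, best_score
```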

Recall: Measure that indicates the percentage of correctly classified cases of one class with respect to the total number of cases that actually belong to that class. This measure reveals whether the classifier is missing cases that should be classified as members of a specific class.
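
Since the F1-measure defined above combines these two measures, the sketch below computes precision, recall and F1 for a single class from invented counts of true positives (tp), false positives (fp) and false negatives (fn).

```python
# Minimal sketch of precision, recall and F1 for one class; the counts are
# invented for illustration.

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)   # correct predictions among all cases predicted as the class
    recall = tp / (tp + fn)      # correct predictions among all cases that truly belong to it
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(precision_recall_f1(tp=40, fp=10, fn=20))  # -> (0.8, 0.666..., 0.727...)
```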

Naïve-Bayes: Probabilistic classifier based on Bayes' rule that assumes that all the attribute-value pairs defining a case are independent given the class.
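
A minimal sketch of this independence assumption is shown below: each class score is the class prior multiplied by per-attribute likelihoods, as if the attributes were independent given the class. All probabilities and attribute names are invented for the example.

```python
# Minimal sketch of the Naïve-Bayes decision rule with invented probabilities.

priors = {"happy": 0.5, "sad": 0.5}
likelihoods = {                          # P(attribute=value | class)
    "happy": {"pitch=high": 0.7, "tempo=fast": 0.8},
    "sad":   {"pitch=high": 0.2, "tempo=fast": 0.3},
}

def naive_bayes_score(cls, observed):
    score = priors[cls]
    for attribute_value in observed:     # independence: likelihoods multiply
        score *= likelihoods[cls][attribute_value]
    return score

observed = ["pitch=high", "tempo=fast"]
print(max(priors, key=lambda c: naive_bayes_score(c, observed)))  # -> "happy"
```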

SVM: Acronym of Support Vector Machines. SVMs are models able to separate members of classes whose boundaries are not linear. This is achieved by a non-linear transformation of the input data that maps it into a higher-dimensional space where the data can be separated by a maximum-margin hyperplane.
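
As a rough illustration (not taken from the chapter), the sketch below uses scikit-learn, assuming it is available, to fit an RBF-kernel SVM on an XOR-like toy problem that no straight line can separate; the data and parameters are illustrative only.

```python
# Minimal sketch of a non-linear SVM on an XOR-like toy problem, assuming
# scikit-learn is installed; data and parameters are invented for illustration.

from sklearn.svm import SVC

X = [[0, 0], [0, 1], [1, 0], [1, 1]]   # four points, not linearly separable by class
y = [0, 1, 1, 0]

# The RBF kernel implicitly maps the data into a higher-dimensional space,
# where a maximum-margin hyperplane can separate the two classes.
clf = SVC(kernel="rbf", gamma=2.0, C=10.0)
clf.fit(X, y)
print(clf.predict([[0.1, 0.9], [0.9, 0.9]]))  # expected: [1, 0]
```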
