Implementation of Machine Learning-Aided Speech Analysis for Speaker Accent Identification Applied to Audio Forensics


Vijayalakshmi G. V. Mahesh, Alex Noel Joseph Raj, Ruban Nersisson
DOI: 10.4018/978-1-6684-4558-7.ch008

Abstract

Accent recognition, a subset of speech recognition, is crucial in audio forensics because it helps establish the authenticity of speech that can be presented to the judicature as evidence. The challenge is to design a speaker accent identification system that can provide information reliable enough to meet the standards of a court of justice. This requires robust descriptors that represent the speech signal with good discrimination ability. This chapter proposes using Mel Frequency Cepstral Coefficients (MFCC), which represent human utterances by transforming frequencies from the linear scale to the Mel scale. The work utilized support vector machine, k-nearest neighbors (kNN), XGBoost, linear discriminant analysis, quadratic discriminant analysis, and decision tree algorithms to recognize the accent of the speaker. Experiments conducted on the accent recognition dataset demonstrated the ability of MFCC features and the kNN classifier to identify and discriminate six accents belonging to different speakers with good accuracy.

Introduction

With the advancements in artificial intelligence, signal processing, and machine learning algorithms, there is a growing demand for speech processing and speech/accent recognition applications in the banking sector, tourism industry, marketing, healthcare, law enforcement, forensics, language learning, and more. Further, the introduction of voice assistants built on AI and the Internet of Things has transformed speech recognition technology and changed the way humans interact with devices. Speech or accent recognition involves understanding and analyzing human speech and making the right decision.

Speech processing and speech/accent recognition play a significant role in audio forensics (AF). AF is the branch of forensic science in which audio serves as evidence for investigation as required for law enforcement. It involves (i) audio/speech data acquisition, (ii) analysis, (iii) interpretation, and (iv) evaluation as the major processing steps. An overview of the process is displayed in Fig. 1. The acquired data has to be preprocessed with filters and enhancement techniques to improve its quality. Preprocessing improves speech intelligibility by removing background noise, distortions, and artifacts, thus preserving the authenticity of the evidence. Further, it is important to note the key frequency components and the frequency range of the audio/speech signal in order to gather the information required to present it as evidence.

Figure 1. Overview of the audio forensic process
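
Although the chapter does not prescribe a specific preprocessing filter, a simple band-pass stage is one common way to suppress out-of-band noise before analysis. The sketch below, assuming SciPy and a mono WAV recording, keeps roughly the telephone speech band; the file name, cutoff frequencies, and filter order are illustrative assumptions, not values taken from the chapter.

# Minimal preprocessing sketch: a Butterworth band-pass filter that keeps
# the typical speech band (about 300 Hz to 3.4 kHz). File name, cutoffs and
# filter order are illustrative assumptions, not the chapter's settings.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

def bandpass_speech(signal, fs, low_hz=300.0, high_hz=3400.0, order=4):
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    # filtfilt runs the filter forward and backward, avoiding phase distortion
    return filtfilt(b, a, signal)

fs, audio = wavfile.read("evidence_clip.wav")   # hypothetical recording
audio = audio.astype(np.float64)
clean = bandpass_speech(audio, fs)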

Digital signal processing algorithms such as the Fourier transform, the wavelet transform, and Mel Frequency Cepstral Coefficients are well suited to frequency analysis of audio signals. This analysis later helps identify the suspect by identifying the speech and accent of the speaker. This chapter emphasizes applying frequency analysis to extract MFCC features from the speech signals, providing them to machine learning algorithms, and evaluating the output to interpret the result.
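
As a rough illustration of MFCC extraction, the sketch below uses the librosa library to compute frame-level coefficients and summarize them into one fixed-length vector per recording. The number of coefficients, the summary statistics, and the file name are assumptions made for demonstration; the chapter's exact settings may differ.

# MFCC extraction sketch using librosa. The sample rate, 13 coefficients and
# mean/std summary are common choices, not the chapter's exact configuration;
# "sample.wav" is a hypothetical file name.
import numpy as np
import librosa

def mfcc_features(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)                        # load and resample
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    # Collapse the frame-level coefficients into one fixed-length descriptor
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

features = mfcc_features("sample.wav")
print(features.shape)   # (26,): mean and std for each of the 13 coefficients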


A considerable amount of research has been carried out on speech recognition and on identifying the accent of the speaker. A critical review of the various methods used for accent-based speech recognition is provided in (Thandil and Basheer, 2020; Bhagath and Das, 2004). An automatic speech recognition framework can be developed based on acoustic-phonetic methods and pattern recognition (Jahangir et al., 2021).

The acoustic-phonetic method explores the acoustic properties of the speech signal, such as time-domain characteristics (the fundamental frequency, its duration, and the mean squared amplitude of the signal) or transform-domain characteristics such as the frequency spectrum, to represent and identify the speech. Pattern recognition methods (Rabiner, 1992), in contrast, involve extracting patterns from the speech signal, training on those patterns to generate a model, and finally adapting the model for speech recognition. Numerous approaches have been developed to extract the patterns or features, based on independent component analysis (ICA), principal component analysis (PCA), linear predictive coding (LPC), zero crossing detection (ZCD), the wavelet transform, Mel Frequency Cepstral Coefficients (MFCC), Gaussian mixture modeling (GMM), vector quantization (VQ), dynamic time warping (DTW), hidden Markov modeling (HMM), and deep learning architectures (Nassif et al., 2019), to be applied in speech recognition.
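
To make the extract-train-evaluate pattern-recognition pipeline concrete, the sketch below trains a kNN classifier (the classifier the abstract reports performing best) on feature vectors and measures its accuracy with scikit-learn. The placeholder data, split ratio, and value of k are illustrative assumptions, not the chapter's experimental setup.

# Pattern-recognition pipeline sketch: train a kNN classifier on MFCC-style
# feature vectors and evaluate it. The data below is a placeholder standing in
# for real MFCC descriptors of six accent classes; k=5 and the 80/20 split are
# illustrative choices, not the chapter's reported settings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 26))        # placeholder feature vectors (e.g. MFCC stats)
y = np.repeat(np.arange(6), 20)       # placeholder labels for six accents

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))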
