Improving the accuracy and efficiency of automatic speech recognition (ASR) systems has been a long-term goal of researchers developing natural-language man-machine communication interfaces. In the widely used statistical framework of ASR, a feature extraction technique is applied at the front-end for speech-signal parameterization, and hidden Markov models (HMMs) are used at the back-end for pattern classification. This chapter reviews classical and recent approaches to Markov modeling, and also presents an empirical study of a few well-known methods in the context of a Hindi speech recognition system. Various performance issues, such as the number of Gaussian mixtures, tied states, and feature reduction procedures, are also analyzed for a medium-sized vocabulary. The experimental results show that more than 90% accuracy can be achieved using advanced acoustic modeling techniques. These recent advanced models outperform the conventional methods and are well suited to HCI applications.
Human-computer interaction through a natural-language conversational interface plays an important role in making computers more usable for the common man. The success of such a speech-enabled man-machine communication interface depends mainly upon the performance of the automatic speech recognition (ASR) system. State-of-the-art ASR systems use a statistical pattern classification approach with two well-known phases: feature extraction and pattern classification.
In the ASR architecture, the feature extraction phase belongs to the front-end, which converts the recorded waveform into an acoustic representation known as feature vectors. The back-end covers the statistical models, such as acoustic models and language models, along with search methods and adaptation techniques for classification. The features are based on a time-frequency representation of the acoustic signal and are computed at regular intervals (e.g., every 10 ms). At the back-end, the feature vectors are decoded into linguistic units such as words, syllables, and phones with the help of hidden Markov models (HMMs). For classification, HMMs use either multivariate Gaussian mixtures or artificial neural networks to emit a state-dependent likelihood or posterior probability on a frame-by-frame basis.
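The frame-by-frame emission score mentioned above can be sketched as follows. This is a minimal, illustrative NumPy implementation of the per-frame log-likelihood under a diagonal-covariance Gaussian mixture (the function name and toy dimensions are assumptions for illustration, not part of the chapter's HTK setup):

```python
import numpy as np

def gmm_frame_loglik(frames, weights, means, variances):
    """Per-frame log-likelihood under a diagonal-covariance Gaussian
    mixture, as used for HMM state emission scores (sketch only).

    frames:    (T, D) feature vectors, one row per 10 ms frame
    weights:   (M,)   mixture weights, summing to 1
    means:     (M, D) component means
    variances: (M, D) diagonal covariances
    """
    T, D = frames.shape
    # log N(x; mu_m, diag(var_m)) for every frame/component pair
    diff = frames[:, None, :] - means[None, :, :]               # (T, M, D)
    log_norm = -0.5 * (D * np.log(2 * np.pi)
                       + np.sum(np.log(variances), axis=1))     # (M,)
    log_comp = log_norm - 0.5 * np.sum(diff**2 / variances, axis=2)  # (T, M)
    # log sum_m w_m N(...) computed via log-sum-exp for numerical stability
    a = log_comp + np.log(weights)
    amax = a.max(axis=1, keepdims=True)
    return (amax + np.log(np.sum(np.exp(a - amax),
                                 axis=1, keepdims=True))).ravel()

# toy usage: 5 frames of 13-dimensional features, 2-component mixture
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 13))
ll = gmm_frame_loglik(x, np.array([0.6, 0.4]),
                      rng.standard_normal((2, 13)),
                      np.ones((2, 13)))   # one log-likelihood per frame
```

In a decoder, one such score is computed per HMM state per frame and combined with transition and language-model probabilities during the search.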
This chapter reviews and compares the existing statistical techniques (i.e., various types of HMMs) that have been used for acoustic-phonetic modeling in ASR in the context of the Hindi language. The stochastic models are covered in three categories: conventional techniques, refinements, and recently proposed methods. Various experiments are performed in normal field conditions as well as in noisy environments, using the well-known tools HTK 3.4.1 (Cambridge University, 2011) and MATLAB. The scarcity of resources such as speech and text corpora is the major hurdle in speech research for Hindi, as for any other Indian language; no standard Hindi database is yet publicly available. Since databases from non-Indian languages cannot be used for Hindi (owing to language-specific effects), we have used a self-developed corpus that includes documents from popular Hindi newspapers. The system uses the PLP and PLP-RASTA techniques for feature extraction at the front-end.
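To make the front-end concrete, the sketch below shows the common first steps shared by PLP-style feature extractors: pre-emphasis, framing every 10 ms, Hamming windowing, and the short-time power spectrum. It is an illustrative assumption of typical parameter values (16 kHz sampling, 25 ms frames), not the chapter's exact HTK configuration; the full PLP/RASTA pipeline additionally applies Bark-scale warping, equal-loudness weighting, RASTA filtering, and cepstral analysis.

```python
import numpy as np

def preemphasize_and_frame(signal, fs=16000, frame_ms=25, hop_ms=10,
                           alpha=0.97, nfft=512):
    """First stages of a PLP-style front-end (sketch only)."""
    # pre-emphasis boosts high frequencies attenuated in speech production
    s = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    flen = int(fs * frame_ms / 1000)   # 400 samples at 16 kHz
    hop = int(fs * hop_ms / 1000)      # 160 samples -> one frame per 10 ms
    n = 1 + max(0, (len(s) - flen) // hop)
    idx = np.arange(flen)[None, :] + hop * np.arange(n)[:, None]
    frames = s[idx] * np.hamming(flen)
    # short-time power spectrum, the input to the Bark filter bank
    return np.abs(np.fft.rfft(frames, n=nfft, axis=1)) ** 2

# one second of (random) 16 kHz audio -> 98 frames of 257 spectral bins
spec = preemphasize_and_frame(
    np.random.default_rng(1).standard_normal(16000))
```

The 10 ms hop is what produces the regularly spaced feature vectors referred to earlier.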
The rest of the chapter is organized as follows. Section 2 presents the role of speech recognition in HCI, along with the ASR architecture and its working. The classical approach to acoustic-phonetic modeling is discussed in Section 3, covering the structure of the HMM, discrete and continuous HMMs, the choice of modeling unit, and pronunciation adaptation. Section 4 presents refinements (variable-duration HMMs and discriminative techniques) and advancements of the HMM, such as large-margin and soft-margin models (based on support vector machines), the dual-stream approach, and HMMs with wavelet networks, proposed by various researchers to overcome the limitations of the standard HMM. Feature extraction and reduction techniques are covered in Section 5. ASR challenges and optimization are explained in Section 6. Experimental results are analyzed in Section 7. Finally, conclusions are drawn in Section 8.