A Proposed Speech Discrimination Assessment Methodology Based on Event-Related Potentials to Visual Stimuli

Koji Morikawa (Panasonic Corporation, Japan), Kazuki Kozuka (Panasonic Corporation, Japan) and Shinobu Adachi (Panasonic Corporation, Japan)
Copyright: © 2012 |Pages: 17
DOI: 10.4018/jehmc.2012040102


Objective and quantitative assessment methods are needed for the fitting of hearing aid parameters. This paper proposes a novel speech discrimination assessment method using electroencephalograms (EEGs). The method utilizes event-related potentials (ERPs) to visual stimuli instead of the conventionally used auditory stimuli. A spoken letter is played through a speaker as an initial auditory stimulus. The same letter can then be visually displayed on a screen (match condition), or a different letter can be displayed (mismatch condition). The participant determines whether the two stimuli represent the same letter or not. A P3 component is elicited when the participant detects a match between the auditory and visual stimuli, and a late positive potential (LPP) component when a mismatch is detected. The hearing ability of each participant can be estimated objectively via analysis of these ERP components.
Article Preview


Monitoring a user's diagnostic state is one of the main applications in the e-Health field. Recent advancements in sensor devices and signal processing have made monitoring applications more common. Electroencephalogram (EEG) sensors are becoming smaller and more energy efficient. These developments permit EEG signals to be measured outside of hospitals.

This paper proposes a novel speech discrimination assessment method based on event-related potentials (ERPs) to visual stimuli. Hearing assessment is currently used for fitting hearing aid parameters (Kates, 2008), and is a key process in providing the user with suitable, well-tuned hearing aids. However, there are no well-established standardized methods of hearing assessment, and an audiologist has to subjectively assess each user's hearing ability based on their own training and experience. Quantitative and objective assessment methods are needed for precise and stable hearing aid fitting.

Conventionally, hearing ability is assessed mainly through oral interaction with an audiologist (ISO, 1996; American Speech-Language-Hearing Association, 1998). Such an assessment generally contains two elements: a hearing threshold evaluation and a speech discrimination assessment. Hearing threshold level (HTL) is assessed using an audiometer that relies on the user's explicit responses, typically via button press or oral response. HTL is the smallest auditory level of pure tone that a user can hear, and obtaining a precise measure takes considerable time and concentration. HTL is evaluated for both left and right ears and at several frequencies, such as 250 Hz, 500 Hz, 1 kHz, 2 kHz, and 4 kHz. The initial hearing aid fitting is executed based on the HTL results provided by the audiologist. There are various theories regarding the best approach to the initial fitting; NAL-NL1 (Dillon, 1999) is one such standard. This initial fitting enables the hearing aids to produce sound pressure levels that are over the user's hearing threshold. However, this approach is not sufficient, given that the HTL test does not assess speech discrimination: the speech signal consists of a mixture of many frequencies and changes constantly over time.
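The threshold search described above can be illustrated as a simple adaptive staircase. The sketch below loosely follows a "descend 10 dB when heard, ascend 5 dB when missed" rule; the function names, step sizes, and stopping criterion are illustrative assumptions, not the procedure specified by the paper or by any particular audiometry standard.

```python
def estimate_htl(present_tone, freq_hz, start_db=40, max_trials=50):
    """Sketch of an adaptive pure-tone threshold search for one ear/frequency.

    present_tone(freq_hz, level_db) -> True if the listener signals 'heard'.
    Returns an estimated HTL in dB, or None if no stable threshold emerges.
    """
    level = start_db
    hits = {}                          # level_db -> count of 'heard' responses
    for _ in range(max_trials):
        if present_tone(freq_hz, level):
            hits[level] = hits.get(level, 0) + 1
            if hits[level] >= 2:       # heard twice at this level: accept as HTL
                return level
            level -= 10                # heard: descend 10 dB
        else:
            level += 5                 # missed: ascend 5 dB
    return None
```

In practice this loop would be repeated per ear and per test frequency (250 Hz through 4 kHz), which is why a full HTL evaluation takes considerable time.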

Because the primary purpose of hearing aids is to facilitate conversation, speech discrimination should be a key component of any hearing aid assessment. A speech discrimination test evaluates the ability to correctly distinguish confusable words; speech discrimination is roughly assessed by oral interaction with an audiologist. The general assessment procedure (without the use of EEG) occurs on a word-by-word basis (Nilsson, Soli, & Sullivan, 1994), such that the user is instructed to listen to and repeat aloud whatever is heard or understood. Because each word has characteristic frequency features, the audiologist can use the user's errors to determine which frequencies need to be emphasized. In an alternative procedure, the user has to respond to speech played from a CD by speaking or writing, a time- and labor-intensive process. Additional tuning is done after the discrimination test, if necessary. However, a user's subjective responses are not always stable, which limits the reliability of this approach.
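The error-driven tuning logic above can be sketched as a tally of discrimination errors against each test word's dominant frequency band, so that bands accumulating many errors suggest where gain should be emphasized. The word list and band labels below are hypothetical illustrations, not taken from any standardized speech audiometry material.

```python
from collections import Counter

# Hypothetical mapping from test words to their dominant frequency band.
WORD_BANDS = {"ship": "high", "sip": "high", "thin": "high",
              "moon": "low", "ball": "low", "door": "low"}

def error_profile(responses):
    """responses: list of (presented_word, repeated_word) pairs.

    Returns a dict mapping frequency band -> number of discrimination errors,
    hinting at which bands a fitting might emphasize.
    """
    errors = Counter()
    for presented, repeated in responses:
        if presented != repeated:
            errors[WORD_BANDS[presented]] += 1
    return dict(errors)
```

A profile dominated by high-band errors, for instance, would point the audiologist toward boosting high-frequency gain at the tuning stage.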

An EEG measures potentials on the scalp surface, thereby reflecting neuronal activity in the brain. Since event-related potentials (ERPs) (Handy, 2004), which are a kind of EEG signal, are automatically elicited by external stimuli, a user does not have to answer explicitly by pressing a key or speaking. Assessment based on ERPs could therefore be a candidate approach for the objective, quantitative evaluation of hearing parameters.
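A minimal sketch of the kind of ERP analysis such an assessment implies: epochs time-locked to the visual stimulus are averaged per condition, and mean amplitude is compared in a late time window (roughly 300–600 ms, where P3/LPP components typically appear). The sampling rate, window bounds, and single-channel layout are assumptions for illustration, not parameters reported by the paper.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz

def mean_window_amplitude(epochs, t0_s, t1_s):
    """epochs: (n_trials, n_samples) array, time-locked to visual onset."""
    erp = epochs.mean(axis=0)                  # average over trials -> ERP
    i0, i1 = int(t0_s * FS), int(t1_s * FS)
    return erp[i0:i1].mean()                   # mean amplitude in the window

def discrimination_index(match_epochs, mismatch_epochs):
    """Contrast late-window amplitude between match and mismatch conditions.

    A larger difference suggests the participant detected the audio-visual
    contrast, i.e., discriminated the spoken letter.
    """
    p3 = mean_window_amplitude(match_epochs, 0.3, 0.6)
    lpp = mean_window_amplitude(mismatch_epochs, 0.3, 0.6)
    return abs(p3 - lpp)
```

Because trial averaging suppresses background EEG activity while preserving stimulus-locked components, no explicit response from the user is required: the index is computed entirely from recorded potentials.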
