Document Type
Article
Publication Date
9-15-2023
Journal / Book Title
Hearing Research
Abstract
Bimodal hearing, in which a contralateral hearing aid is combined with a cochlear implant (CI), provides greater speech recognition benefits than using a CI alone. Factors predicting individual bimodal patient success are not fully understood. Previous studies have shown that bimodal benefits may be driven by a patient's ability to extract fundamental frequency (f0) and/or temporal fine structure cues (e.g., F1). Both of these features may be represented in frequency following responses (FFR) to bimodal speech. Thus, the goals of this study were to: 1) parametrically examine neural encoding of f0 and F1 in simulated bimodal speech conditions; 2) examine objective discrimination of FFRs to bimodal speech conditions using machine learning; 3) explore whether FFRs are predictive of perceptual bimodal benefit. Three vowels (/ε/, /i/, and /ʊ/) with identical f0 were manipulated by a vocoder (right ear) and low-pass filters (left ear) to create five bimodal simulations for evoking FFRs: Vocoder-only, Vocoder +125 Hz, Vocoder +250 Hz, Vocoder +500 Hz, and Vocoder +750 Hz. Perceptual performance on the BKB-SIN test was also measured using the same five configurations. Results suggested that neural representations of the f0 and F1 FFR components were enhanced with increasing acoustic bandwidth in the simulated "non-implanted" ear. As spectral differences between vowels emerged in the FFRs with increased acoustic bandwidth, FFRs were more accurately classified and discriminated using a machine learning algorithm. Enhancement of f0 and F1 neural encoding with increasing bandwidth was collectively predictive of perceptual bimodal benefit on a speech-in-noise task. Given these results, the FFR may be a useful tool for objectively assessing individual variability in bimodal hearing.
DOI
10.1016/j.heares.2023.108853
Rights
This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)
Montclair State University Digital Commons Citation
Xu, Can; Cheng, Fan-Yin; Medina, Sarah; Eng, Erica; Gifford, René; and Smith, Spencer, "Objective discrimination of bimodal speech using frequency following responses" (2023). Department of Communication Sciences and Disorders Faculty Scholarship and Creative Works. 191.
https://digitalcommons.montclair.edu/communcsci-disorders-facpubs/191
Published Citation
Xu, Can, et al. “Objective Discrimination of Bimodal Speech Using Frequency Following Responses.” Hearing Research, vol. 437, Sept. 2023, p. 108853. DOI.org (Crossref), https://doi.org/10.1016/j.heares.2023.108853.