Pub Date: 2025-01-01 | Epub Date: 2025-04-23 | DOI: 10.1177/23312165251320156
Ryan M O'Leary, Arthur Wingfield, Michael J Lyons, Carol E Franz, William S Kremen
Over 430 million people worldwide experience disabling hearing loss, a condition that becomes more prevalent with age. Although the genetic component of hearing loss is well established, fewer data are available on how the genetic contributions to hearing loss change over time. We report pure-tone hearing thresholds at 500, 1,000, 2,000, 4,000, and 8,000 Hz from over 1,000 male twins comprising monozygotic (MZ) and dizygotic (DZ) pairs sampled from the United States-based Vietnam Era Twin Study of Aging (VETSA). Twins were tested in three waves, at average ages of 56 (wave 1), 62 (wave 2), and 68 (wave 3). Genetically informed structural equation models were used to estimate the genetic contributions. Genetic factors accounted for between 49.4% and 67.7% of the variance in hearing acuity for all frequencies at all three time points. There was no substantial change in the ratio of genetic to environmental contributions across the three time points or across individual acoustic frequencies. The stability of hearing acuity over time was moderately to highly attributable to genetic factors, whereas change in hearing acuity was better explained by unique, person-specific environmental factors. These results, from the largest-scale twin study of hearing acuity to date, replicate previous findings that hearing acuity in late life is significantly determined by genetic factors. The unique contribution of the present analysis is showing that the proportion of variance in hearing acuity attributed to genetics remains relatively consistent across 12 years.
Title: "Genetic and Environmental Contributions to Age-Related Hearing Loss: Results from a Longitudinal Twin Study." Trends in Hearing, vol. 29, 23312165251320156. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035256/pdf/
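The variance decomposition behind the heritability estimates above can be illustrated with Falconer's classic approximation, which recovers additive-genetic (A), shared-environment (C), and unique-environment (E) components from MZ and DZ twin-pair correlations. This is a back-of-envelope sketch of the logic, not the authors' structural equation models, and the correlation values below are hypothetical:

```python
def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Falconer's approximation of the ACE variance decomposition
    from MZ and DZ twin-pair correlations."""
    a2 = 2 * (r_mz - r_dz)   # additive-genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # shared-environment variance
    e2 = 1 - r_mz            # unique environment plus measurement error
    return {"A": a2, "C": c2, "E": e2}

# Hypothetical correlations in the range implied by ~50-68% heritability:
est = falconer_ace(r_mz=0.65, r_dz=0.35)
```

Full ACE models fit these components by maximum likelihood across all frequencies and waves simultaneously, but the sign logic is the same: MZ correlations exceeding DZ correlations signal genetic influence.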
Pub Date: 2025-01-01 | Epub Date: 2025-07-21 | DOI: 10.1177/23312165251359755
Larry E Humes, Sumitrajit Dhar, Jasleen Singh
The Abbreviated Profile of Hearing Aid Benefit (APHAB) has been one of the most frequently used patient-reported outcome measures (PROMs) since its inception 30 years ago. For the APHAB, single-valued 95% critical differences have been presented for the identification and interpretation of meaningful benefits in research and in the clinic. A narrative literature review of studies that used the global APHAB score as a hearing-aid outcome measure showed that the average benefit varied directly with the average unaided baseline score for each measure. Next, data from 584 older adults enrolled in our recently completed randomized controlled hearing-aid trial were examined. The same dependence of benefit scores on unaided baseline scores was observed in these data. Regression to the mean made relatively minor contributions to the observed dependence of APHAB scores on baseline unaided scores. These results indicate that the application of a single value for the 95% critical difference is not valid for the interpretation of APHAB scores. Rather, baseline-specific benefit criteria are needed. Based on these results, baseline-specific Minimal Detectable Differences (MDDs; or 95% critical differences) and Minimal Clinically Important Differences (MCIDs) using both distribution-based and anchor-based approaches were generated for the APHAB-global score.
Title: "Some Considerations for the Use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) as a Hearing-Aid Outcome Measure." Trends in Hearing, vol. 29, 23312165251359755. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12290275/pdf/
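The distribution-based Minimal Detectable Difference mentioned above has a standard textbook form: the standard error of measurement (SEM) scaled for a difference score and a confidence level. The sketch below shows that generic formula with hypothetical APHAB-global values for illustration; the paper's baseline-specific criteria are derived from its own data, not from these numbers:

```python
import math

def minimal_detectable_difference(sd: float, reliability: float,
                                  confidence_z: float = 1.96) -> float:
    """Distribution-based MDD (95% critical difference):
    z * SD * sqrt(1 - reliability) * sqrt(2)."""
    sem = sd * math.sqrt(1 - reliability)      # standard error of measurement
    return confidence_z * sem * math.sqrt(2)   # error of a test-retest difference

# Hypothetical values: SD = 15 points, test-retest reliability = 0.85
mdd = minimal_detectable_difference(sd=15.0, reliability=0.85)
```

A single-valued criterion like this assumes the SEM is constant across the score range, which is exactly the assumption the study's baseline-dependent results call into question.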
Audiological datasets contain valuable knowledge about hearing loss in patients, which can be uncovered using data-driven techniques. Our previous approach summarized patient information from one audiological dataset into distinct Auditory Profiles (APs). To obtain a better estimate of the audiological patient population, however, patient patterns must be analyzed across multiple, separated datasets and finally integrated into a combined set of APs. This study aimed to extend the existing profile generation pipeline with an AP merging step, enabling the combination of APs from different datasets based on their similarity across audiological measures. The 13 previously generated APs (N_A = 595) were merged with 31 newly generated APs from a second dataset (N_B = 1,272) using a similarity score derived from the overlapping densities of common features across the two datasets. To ensure clinical applicability, random forest models were created for various scenarios, encompassing different combinations of audiological measures. A new set of 13 combined APs is proposed, providing separable profiles that still capture detailed patient information from various test outcome combinations. The classification performance across these profiles is satisfactory. The best performance was achieved using a combination of loudness scaling, audiogram, and speech test information, while single measures performed worst. The enhanced profile generation pipeline demonstrates the feasibility of combining APs across datasets, which should generalize to all datasets and could lead to an interpretable global profile set in the future. The classification models maintain clinical applicability.
Title: "Integrating Audiological Datasets via Federated Merging of Auditory Profiles." Authors: Samira Saak, Dirk Oetting, Birger Kollmeier, Mareike Buhl. DOI: 10.1177/23312165251349617. Trends in Hearing, vol. 29, 23312165251349617. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12209579/pdf/
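A similarity score "derived from the overlapping densities of common features" can be made concrete with the overlapping coefficient: the shared area under two normalized histograms of the same feature. This is a minimal sketch of that general idea, not the authors' pipeline; the feature name, sample sizes, and distributions below are hypothetical:

```python
import numpy as np

def density_overlap(x_a, x_b, bins=50):
    """Similarity in [0, 1]: shared area of two normalized histograms
    of one common feature (a discrete overlapping coefficient)."""
    lo = min(np.min(x_a), np.min(x_b))
    hi = max(np.max(x_a), np.max(x_b))
    p, edges = np.histogram(x_a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(x_b, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    return float(np.sum(np.minimum(p, q)) * width)

rng = np.random.default_rng(0)
# Hypothetical pure-tone-average distributions (dB HL) for two profiles:
similar  = density_overlap(rng.normal(40, 8, 500), rng.normal(42, 8, 500))
distinct = density_overlap(rng.normal(40, 8, 500), rng.normal(75, 8, 500))
```

Averaging such per-feature overlaps across the audiological measures two profiles share yields a single merge score without ever pooling raw patient records, which is what makes a federated merging step possible.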
Pub Date: 2025-01-01 | DOI: 10.1177/23312165241306091
Khaled H A Abdel-Latif, Thomas Koelewijn, Deniz Başkent, Hartmut Meister
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal. This study aimed to propose a new VWP to examine the time course of speech segregation when competing sentences are presented and to collect pupil size data as a measure of listening effort. Twelve young normal-hearing participants were presented with competing matrix sentences (structure "name-verb-numeral-adjective-object") diotically via headphones at four target-to-masker ratios (TMRs), corresponding to intermediate to near perfect speech recognition. The VWP visually presented the number and object words from both the target and masker sentences. Participants were instructed to gaze at the corresponding words of the target sentence without providing verbal responses. The gaze fixations consistently reflected the different TMRs for both number and object words. The slopes of the fixation curves were steeper, and the proportion of target fixations increased with higher TMRs, suggesting more efficient segregation under more favorable conditions. Temporal analysis of pupil data using Bayesian paired sample t-tests showed a corresponding reduction in pupil dilation with increasing TMR, indicating reduced listening effort. The results support the conclusion that the proposed VWP and the captured eye movements and pupil dilation are suitable for objective assessment of sentence-based speech-on-speech segregation and the corresponding listening effort.
Title: "Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry." Trends in Hearing, vol. 29, 23312165241306091. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11726529/pdf/
Pub Date: 2025-01-01 | DOI: 10.1177/23312165251320794
Alexina Whitley, Timothy Beechey, Lauren V Hadley
Many of our conversations occur in nonideal situations, from the hum of a car to the babble of a cocktail party. Additionally, in conversation, listeners are often required to switch their attention between multiple talkers, which places demands on both auditory and cognitive processes. Speech understanding in such situations appears to be particularly demanding for older adults with hearing impairment. This study examined the effects of age and hearing ability on performance in an online speech recall task. Two target sentences, spoken by the same talker or different talkers, were presented one after the other, analogous to a conversational turn switch. The first target sentence was presented in quiet, and the second target sentence was presented alongside either a noise masker (steady-state speech-shaped noise) or a speech masker (another nontarget sentence). Relative to when the target talker remained the same between sentences, listeners were less accurate at recalling information in the second target sentence when the target talker changed, particularly when the target talker for sentence one became the masker for sentence two. Listeners with poorer speech-in-noise reception thresholds were less accurate in both noise- and speech-masked trials and made more masker confusions in speech-masked trials. Furthermore, an interaction revealed that listeners with poorer speech reception thresholds had particular difficulty when the target talker remained the same. Our study replicates previous research regarding the costs of switching nonspatial attention, extending these findings to older adults with a range of hearing abilities.
Title: "Who Said That? The Effect of Hearing Ability on Following Sequential Utterances From Varying Talkers in Noise." Trends in Hearing, vol. 29, 23312165251320794. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11851761/pdf/
Pub Date: 2025-01-01 | Epub Date: 2025-07-30 | DOI: 10.1177/23312165251359415
Laura K Holden, Rosalie M Uchanski, Noël Y Dwyer, Ruth M Reeder, Timothy A Holden, Jill B Firszt
The study aimed to improve outcomes in Nucleus cochlear implant (CI) recipients with single-sided deafness (SSD) by reducing interaural frequency and loudness mismatches through device programming. In Experiment 1a, a modified frequency allocation table (FAT) was created to better match the tonotopicity of the contralateral ear and reduce interaural frequency mismatch. Twenty experienced SSD-CI users completed localization and speech recognition tests with their everyday FAT. Tests were repeated after 6 weeks' use of the modified FAT. Participants compared both FATs for 2 weeks before being tested again with each. For 10 newly implanted SSD-CI recipients (Experiment 1b), Group A was programmed with the manufacturer's default FAT and Group B with the modified FAT at activation. Speech recognition and localization testing were completed after 6 weeks' use of each FAT. Participants then compared both FATs before testing with each. In Experiment 2, 15 experienced SSD-CI users were evaluated with their everyday program and a modified loudness program, which was created to obtain audibility of ∼20 dB HL from 0.25 to 6 kHz and balanced loudness between ears. Three test sessions occurred, resembling Experiment 1a. Experienced participants in Experiments 1a and 2 showed significant improvement in one speech-in-noise task with a modified program compared to the everyday program. Newly implanted recipients showed no significant difference in results between FATs. Results indicate that modified programs, created to reduce interaural mismatches, may improve outcomes. The first month after activation might be too early to compare FATs, as SSD-CI recipients are still adjusting to electric hearing.
Title: "Improving Outcomes of Single-Sided Deaf Cochlear Implant Users by Reducing Interaural Frequency and Loudness Mismatches through Device Programming." Trends in Hearing, vol. 29, 23312165251359415. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12317272/pdf/
Pub Date: 2025-01-01 | Epub Date: 2025-07-04 | DOI: 10.1177/23312165251356333
Robel Z Alemu, Alan Blakeman, Angela L Fung, Melissa Hazen, Jaina Negandhi, Blake C Papsin, Sharon L Cushing, Karen A Gordon
Spatial hearing in children with bilateral cochlear implants (BCIs) was assessed by: (a) comparing localization of stationary and moving sound, (b) investigating the relationship between sound localization and sensitivity to interaural level and timing differences (ILDs/ITDs), (c) evaluating effects of aural preference on sound localization, and (d) exploring head and eye (gaze) movements during sound localization. Children with BCIs (n = 42, M_Age = 12.3 years) with limited duration of auditory deprivation and peers with typical hearing (controls; n = 37, M_Age = 12.9 years) localized stationary and moving sound with unrestricted head and eye movements. Sensitivity to binaural cues was measured by a lateralization task with ILDs and ITDs. Spatial separation effects were measured by spondee-word recognition thresholds (SNR thresholds) when noise was presented in front (colocated/0°) or with 90° of left/right separation. BCI users had good speech reception thresholds (SRTs) in quiet but higher SRTs in noise than controls. Spatial separation of noise from speech revealed a greater advantage for the right ear across groups. BCI users showed increased errors localizing stationary sound and detecting moving sound direction compared to controls. Decreased ITD sensitivity occurred with poorer localization of stationary sound in BCI users. Gaze movements in BCI users were more random than those of controls for stationary and moving sounds. BCIs support symmetric hearing in children with limited duration of auditory deprivation and promote spatial hearing, albeit with impairment. Spatial hearing was thus considered to be "emerging." Remaining challenges may reflect disruptions in ITD sensitivity and ineffective gaze movements.
Title: "Children With Bilateral Cochlear Implants Show Emerging Spatial Hearing of Stationary and Moving Sound." Trends in Hearing, vol. 29, 23312165251356333. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12227942/pdf/
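The ITD cue probed by the lateralization task above has a well-known geometric approximation: Woodworth's spherical-head formula, which gives the arrival-time difference between the ears as a function of source azimuth. The sketch below is purely illustrative of the cue's magnitude (head radius and speed of sound are generic textbook values, not parameters from this study):

```python
import math

def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.0875,
                  speed_of_sound: float = 343.0) -> float:
    """Approximate interaural time difference (seconds) for a far-field
    source, using Woodworth's formula: ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source at 90 degrees azimuth yields roughly 0.65 ms for an adult-sized head
itd_90 = woodworth_itd(90.0)
```

Cochlear implant processors typically do not convey fine-structure timing faithfully, which is one reason reduced ITD sensitivity is a plausible driver of the localization deficits reported.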
Pub Date: 2025-01-01 | Epub Date: 2025-09-08 | DOI: 10.1177/23312165251376382
J Gerard G Borst, André Goedegebure
Individuals with tinnitus hear sounds that are not present in the external environment. Whereas hearing difficulties at frequencies near those matching the tinnitus pitch are a common complaint for individuals with tinnitus, it is unclear to what extent the internal tinnitus sounds interfere with the detection of external sounds. We therefore studied whether pure-tone detection at the estimated frequency corresponding to the tinnitus pitch (f_tp) was affected by confusion with the tinnitus percept. Signs of confusion would be a high false alarm rate or a shallower slope of the psychometric function for tone detection at f_tp. We selected participants with symmetric, tonal tinnitus, who were able to estimate its pitch consistently (n = 18). Another 18 participants matched for high-frequency hearing loss, age, and sex, but without tinnitus, served as the control group. For both groups, we measured the psychometric function for detecting long-duration tones, maximizing the likelihood for confusion with an external sound. We observed that false alarm rates for tinnitus participants were not higher for test tones at f_tp, nor were they higher than for the control group without tinnitus. Similar results were obtained for the slopes of the psychometric functions. Apparently, individuals with tinnitus are well able to discriminate between their own tinnitus and comparable external sounds. Our results indicate that (tonal) tinnitus does not interfere with the detection of soft sounds at the tinnitus pitch-matched frequency.
{"title":"Tonal Tinnitus Does Not Interfere with Tone Detection at the Tinnitus Pitch-Matched Frequency.","authors":"J Gerard G Borst, André Goedegebure","doi":"10.1177/23312165251376382","DOIUrl":"10.1177/23312165251376382","url":null,"abstract":"<p><p>Individuals with tinnitus hear sounds that are not present in the external environment. Whereas hearing difficulties at frequencies near those matching the tinnitus pitch are a common complaint for individuals with tinnitus, it is unclear to what extent the internal tinnitus sounds interfere with the detection of external sounds. We therefore studied whether pure-tone detection at the estimated frequency corresponding to the tinnitus pitch (f<sub>tp</sub>) was affected by confusion with the tinnitus percept. Signs of confusion would be a high false alarm rate or a shallower slope of the psychometric function for tone detection at f<sub>tp</sub>. We selected participants with symmetric, tonal tinnitus, who were able to estimate its pitch consistently (n = 18). Another 18 participants matched for high-frequency hearing loss, age, and sex, but without tinnitus, served as the control group. For both groups, we measured the psychometric function for detecting long-duration tones, maximizing the likelihood for confusion with an external sound. We observed that false alarm rates for tinnitus participants were not higher for test tones at f<sub>tp</sub>, nor were they higher than for the control group without tinnitus. Similar results were obtained for the slopes of the psychometric functions. Apparently, individuals with tinnitus are well able to discriminate between their own tinnitus and comparable external sounds. 
Our results indicate that (tonal) tinnitus does not interfere with the detection of soft sounds at the tinnitus pitch-matched frequency.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251376382"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618831/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145015303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
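The abstract above treats a raised false-alarm rate or a shallower psychometric-function slope as the signatures of confusion between the tinnitus percept and an external tone. A minimal sketch of such a yes/no psychometric function is shown below; the logistic form with guess and lapse parameters is a standard textbook model, and all parameter values here are invented for illustration, not taken from the study.

```python
import numpy as np

def psychometric(level_db, threshold, slope, guess=0.02, lapse=0.02):
    """Logistic psychometric function for yes/no tone detection.

    guess  = false-alarm floor (probability of "yes" with no tone),
    lapse  = lapse rate that caps the ceiling below 1.0,
    slope  = steepness at threshold (per dB); a shallower slope
             spreads "yes" responses over a wider level range.
    """
    core = 1.0 / (1.0 + np.exp(-slope * (level_db - threshold)))
    return guess + (1.0 - guess - lapse) * core

levels = np.array([10.0, 15.0, 20.0, 25.0, 30.0])  # tone levels in dB
p_steep = psychometric(levels, threshold=20.0, slope=1.0)
p_shallow = psychometric(levels, threshold=20.0, slope=0.3)
```

At threshold both curves pass through the midpoint between floor and ceiling; the shallow curve sits above the steep one below threshold and below it above threshold, which is exactly the pattern one would test for when comparing tinnitus and control groups.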
Pub Date: 2025-01-01 | Epub Date: 2025-10-16 | DOI: 10.1177/23312165251385017
John H Grose, Monica Folkerts, Emily Buss
This study compared the behavioral minimum audible angle (MAA) and the electrophysiological acoustic change complex (ACC) elicited by an azimuthal shift in sound location. To examine age effects, 63 participants with normal or near-normal hearing were divided into three age groups (Young, Mid-Aged, and Older). The stimuli were narrow bands of noise centered at 500 Hz to facilitate reliance primarily on binaural temporal cues. Putative spatial location was manipulated by means of head-related transfer functions under headphones. MAA results showed that performance was dependent on the reference location, with performance becoming poorer as the reference location shifted away from midline. The Young group had smaller MAAs than the Older group, and performance of the Mid-Aged group was intermediate. Measurement of the ACC was restricted to shifts away from midline, and results showed no ACC for shifts of 4.5° and 9° but present ACCs for shifts of 13.5°, 18°, and 36°. The robustness of the ACC, as measured with the intertrial phase coherence metric, grew with increasing azimuthal shift. For shifts of 13.5° and 18°, Young participants had more robust ACCs than Older participants. Although age-related deficits were found in both the MAA and the robustness of the ACC, no associations were observed at the individual level between MAA and ACC measures. Further work is necessary to evaluate the ACC elicited by shifts from off-midline reference locations before a firm conclusion can be reached that the ACC is not a viable objective proxy for the MAA.
{"title":"Minimum Audible Angle and the Acoustic Change Complex Elicited by Azimuthal Shifts in Low-Frequency Sounds: Effects of Age.","authors":"John H Grose, Monica Folkerts, Emily Buss","doi":"10.1177/23312165251385017","DOIUrl":"10.1177/23312165251385017","url":null,"abstract":"<p><p>This study compared the behavioral minimum audible angle (MAA) and the electrophysiological acoustic change complex (ACC) elicited by an azimuthal shift in sound location. To examine age effects, 63 participants with normal or near-normal hearing were divided into three age groups (Young, Mid-Aged, and Older). The stimuli were narrow bands of noise centered at 500 Hz to facilitate reliance primarily on binaural temporal cues. Putative spatial location was manipulated by means of head-related transfer functions under headphones. MAA results showed that performance was dependent on the reference location, with performance becoming poorer as the reference location shifted away from midline. The Young group had smaller MAAs than the Older group, and performance of the Mid-Aged group was intermediate. Measurement of the ACC was restricted to shifts away from midline, and results showed no ACC for shifts of 4.5° and 9° but present ACCs for shifts of 13.5°, 18°, and 36°. The robustness of the ACC, as measured with the intertrial phase coherence metric, grew with increasing azimuthal shift. For shifts of 13.5° and 18°, Young participants had more robust ACCs than Older participants. Although age-related deficits were found in both the MAA and the robustness of the ACC, no associations were observed at the individual level between MAA and ACC measures. 
Further work is necessary to evaluate the ACC elicited by shifts from off-midline reference locations before a firm conclusion can be reached that the ACC is not a viable objective proxy for the MAA.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251385017"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12536095/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145309540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
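The ACC robustness in the study above is quantified with the intertrial phase coherence (ITPC) metric. ITPC is the length of the mean resultant vector of per-trial phase angles at a given time-frequency point: 0 for uniformly random phases, 1 for perfectly aligned phases. The sketch below uses simulated phase angles, not study data, and a single scalar per trial rather than a full time-frequency decomposition.

```python
import numpy as np

def itpc(phases):
    """Intertrial phase coherence: magnitude of the mean of unit
    phasors exp(i*phase) across trials (0 = random, 1 = aligned)."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(0)
# Tightly clustered trial phases (a robust evoked response)...
aligned = rng.normal(0.0, 0.2, size=1000)
# ...versus uniformly random trial phases (no phase-locked response).
random_ = rng.uniform(-np.pi, np.pi, size=1000)
```

In practice the per-trial phases would come from a time-frequency transform of the EEG epochs around the azimuthal shift, and ITPC would be computed at each time-frequency point.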
Pub Date: 2025-01-01 | Epub Date: 2025-10-23 | DOI: 10.1177/23312165251374938
Thomas Biberger, Stephan D Ewert
The effect of complex acoustic environments (CAEs), typically comprising target and interfering sound sources as well as room reflections, on the speech reception of hearing-impaired (HI) listeners has been examined in several studies. However, little is known about the audio quality perception of HI listeners in such CAEs. Thus, this study assessed detection thresholds and suprathreshold audio quality ratings of listeners with very mild and moderate hearing loss (HL) for several distortions applied to speech and pink noise: nonlinear saturation, spectral ripples, level differences, and spatial position offsets. The stimuli were presented in acoustic scenes whose complexity was varied by manipulating room size in conjunction with reverberation time, and the number and spatial position of interfering sound sources. The strongest differences between listeners with very mild and moderate HL were observed in the presence of interfering sounds. In such situations, listeners with moderate HL had consistently higher distortion detection thresholds than listeners with very mild HL. Moreover, they rated audio quality lower for the masked than for the unmasked distorted targets, indicating difficulties in separating the target from the maskers. Significant correlations were found between the listeners' pure tone average (PTA) and distortion detection thresholds in situations with maskers. Thus, the PTA appears to be a suitable predictor of distortion detection thresholds for HI listeners in CAEs. The effect of reverberation strongly depended on the target (speech or pink noise) and the type of distortion.
{"title":"Audio Quality Perception of Hearing-Impaired Listeners in Complex Acoustic Environments.","authors":"Thomas Biberger, Stephan D Ewert","doi":"10.1177/23312165251374938","DOIUrl":"10.1177/23312165251374938","url":null,"abstract":"<p><p>The effect of complex acoustic environments (CAEs), typically comprising target and interfering sound sources as well as room reflections, on the speech reception of hearing-impaired (HI) listeners has been examined in several studies. However, little is known about the audio quality perception of HI listeners in such CAEs. Thus, this study assessed detection thresholds and suprathreshold audio quality ratings of listeners with very mild and moderate hearing loss (HL) for several distortions applied to speech and pink noise: nonlinear saturation, spectral ripples, level differences, and spatial position offsets. The stimuli were presented in acoustic scenes whose complexity was varied by manipulating room size in conjunction with reverberation time, and the number and spatial position of interfering sound sources. The strongest differences between listeners with very mild and moderate HL were observed in the presence of interfering sounds. In such situations, listeners with moderate HL had consistently higher distortion detection thresholds than listeners with very mild HL. Moreover, they rated audio quality lower for the masked than for the unmasked distorted targets, indicating difficulties in separating the target from the maskers. Significant correlations were found between the listeners' pure tone average (PTA) and distortion detection thresholds in situations with maskers. Thus, the PTA appears to be a suitable predictor of distortion detection thresholds for HI listeners in CAEs. 
The effect of reverberation strongly depended on the target (speech or pink noise) and the type of distortion.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251374938"},"PeriodicalIF":3.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12559647/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145356517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
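One of the distortion types studied above, nonlinear saturation, can be illustrated with a memoryless soft clipper: peaks are compressed toward the maximum output level, which adds harmonic distortion products. The `tanh` shape, drive value, and test tone below are an illustrative stand-in; the study's actual saturation characteristic and parameters are not specified here.

```python
import numpy as np

def saturate(x, drive=4.0):
    """Memoryless soft-clipping saturation.

    Higher drive pushes the signal further into the tanh knee,
    flattening peaks and adding odd harmonics; the output is
    scaled so full-scale input maps to full-scale output.
    """
    return np.tanh(drive * np.asarray(x)) / np.tanh(drive)

# Apply the distortion to a 440 Hz test tone at 0.8 full scale.
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
clean = 0.8 * np.sin(2 * np.pi * 440 * t)
distorted = saturate(clean)
```

Because the clipper boosts mid-level samples toward the rails, the distorted tone has higher peak amplitude and a flattened waveform; an FFT of `distorted` would show energy at odd multiples of 440 Hz that the clean tone lacks.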