Title:
Identifying the acoustic fingerprints of trigger sounds and predicting discomfort for misophonia.
Authors:
Clonan AC; Electrical and Computer Engineering, University of Connecticut, Storrs, CT 06269, United States; Biomedical Engineering, University of Connecticut, Storrs, CT 06269, United States; Institute of Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, United States.
Stevenson IH; Biomedical Engineering, University of Connecticut, Storrs, CT 06269, United States; Psychological Sciences, University of Connecticut, Storrs, CT 06269, United States; Institute of Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, United States.
Escabí MA; Electrical and Computer Engineering, University of Connecticut, Storrs, CT 06269, United States; Biomedical Engineering, University of Connecticut, Storrs, CT 06269, United States; Psychological Sciences, University of Connecticut, Storrs, CT 06269, United States; Institute of Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, United States. Electronic address: escabi@engr.uconn.edu.
Source:
Hearing research [Hear Res] 2026 Jan; Vol. 470, pp. 109478. Date of Electronic Publication: 2025 Nov 17.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Elsevier/North-Holland Biomedical Press
Country of Publication: Netherlands
NLM ID: 7900445
Publication Model: Print-Electronic
Cited Medium: Internet
ISSN: 1878-5891 (Electronic)
Linking ISSN: 0378-5955
NLM ISO Abbreviation: Hear Res
Subsets: MEDLINE
Imprint Name(s):
Original Publication: Amsterdam, Elsevier/North-Holland Biomedical Press.
Contributed Indexing:
Keywords: Auditory model; Diagnostic; Generalized perceptual regression; Misophonia; Sound sensitivity; Tolerance prediction; Trigger detection
SCR Disease Name:
misophonia
Entry Date(s):
Date Created: 20251213
Date Completed: 20260115
Latest Revision: 20260115
Update Code:
20260119
DOI:
10.1016/j.heares.2025.109478
PMID:
41389705
Database:
MEDLINE

Further Information

Human hearing is critical to everyday communication and the perception of natural auditory scenes. For individuals with misophonia, sounds commonly experienced in daily life can evoke severe discomfort and distress. Aversion is often described in terms of broad sound categories, such as bodily sounds, but it remains unclear which acoustic features cause specific sounds to be aversive or not, within the same category or across different individuals. Here, we explore whether bottom-up statistical sound features processed in the auditory periphery and midbrain can explain aversion to sounds. Using the Free Open-Access Misophonia Stimuli (FOAMS) dataset and a hierarchical model of the auditory system, we find that sound summary statistics can predict discomfort ratings in participants with misophonia. For each listener, the model produces individualized transfer functions that pinpoint the specific spectrotemporal modulations that contribute to sound aversion. Overall, the model explains 76% of the variance in discomfort ratings, and we find substantial differences across participants in which sound features drive aversion. A major advantage of this modeling approach is that it is sound-computable: perceptual ratings can be fit from, or predicted for, any sound. To illustrate applications of sound-computable models, we 1) extrapolate participants' ratings to a large set of untested environmental sounds and 2) develop personalized trigger detection that uses the listener's acoustic feature preferences to identify potential triggers in continuous audio. Model predictions identify many untested sound categories, absent from the original FOAMS set, that may also be aversive, and suggest that how aversive specific sounds are may vary substantially within some sound categories. In continuous audio, we show how sound-computable models can identify the timing of potential triggers within sound mixtures. Altogether, our results suggest that acoustic features, spectrotemporal modulations in particular, can be used in practice to characterize individualized patterns of aversion in participants with misophonia. Future perceptual studies using synthetic sounds and sound sets with more diverse acoustics will allow model predictions to be tested more broadly; however, sound-computable models may already have applications in the precision diagnosis and management of misophonia.
(Copyright © 2025. Published by Elsevier B.V.)
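
For a concrete picture of what "sound-computable" means here, the following is a minimal Python sketch, not the authors' implementation: it stands in for the paper's hierarchical auditory model and generalized perceptual regression with a crude modulation-spectrum feature extractor and a per-listener ridge regression. All function names and parameters (FFT size, the 8x8 pooling grid, the regularization range) are illustrative assumptions.

import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import RidgeCV

def modulation_features(audio, fs, n_fft=512, hop=128, n_bins=8):
    """Crude spectrotemporal-modulation summary: the 2-D FFT magnitude of the
    log-spectrogram, pooled into a fixed n_bins x n_bins grid so that sounds
    of different durations yield feature vectors of the same length."""
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    log_s = np.log(sxx + 1e-10)
    mod = np.abs(np.fft.rfft2(log_s - log_s.mean()))  # modulation spectrum
    pooled = np.zeros((n_bins, n_bins))
    rows = np.array_split(np.arange(mod.shape[0]), n_bins)
    cols = np.array_split(np.arange(mod.shape[1]), n_bins)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            pooled[i, j] = mod[np.ix_(r, c)].mean()
    return pooled.ravel()

def fit_listener_model(sounds, fs, ratings):
    """Fit one listener's discomfort ratings from modulation features."""
    X = np.vstack([modulation_features(s, fs) for s in sounds])
    return RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, np.asarray(ratings))

In this toy version, the fitted weights (model.coef_, reshaped to the pooling grid) play the role of the individualized modulation transfer function described in the abstract: they indicate which spectrotemporal modulation bands push a given listener's predicted discomfort up or down.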

Declaration of competing interest The authors declare the following interests which may be considered as potential competing interests: the authors have a patent pending related to the work in this study.
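
The abstract's second application, flagging potential triggers in continuous audio, could likewise be sketched as a sliding-window scan with a listener's fitted model. This continues the illustrative example above (it reuses modulation_features and a model returned by fit_listener_model); the window length, hop, and discomfort threshold are assumptions, not values from the paper.

def detect_triggers(model, audio, fs, win_s=1.0, hop_s=0.25, threshold=3.0):
    """Return onset times (seconds) of windows whose predicted discomfort
    exceeds the threshold (the rating scale itself is an assumption)."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    onsets = []
    for start in range(0, len(audio) - win + 1, hop):
        feats = modulation_features(audio[start:start + win], fs)
        if model.predict(feats[None, :])[0] > threshold:
            onsets.append(start / fs)
    return onsets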