Extracting Auditory Emotion in Noise: A Distributed Auxiliary Auditory Network Supporting Affect Processing of Non-Predictably Obscured Vocalisations
bioRxiv (2024)
Cognitive and Affective Neuroscience Unit
Abstract
Decoding affect information encoded within a vocally produced signal is a key part of daily communication. The acoustic channels that carry this affect information, however, are not uniformly distributed across spectrotemporal space, meaning that natural listening environments with dynamic, competing noise may unpredictably obscure some spectrotemporal regions of the vocalisation, reducing the information available to the listener. In this study, we use behavioural and functional MRI investigations first to assess which spectrotemporal regions of a human vocalisation contribute to affect perception in the listener, and then apply a reverse-correlation fMRI analysis to identify which structures underpin this perceptually challenging task when categorisation-relevant acoustic information is unmasked by noise. Our results show that, despite the challenging task and the non-uniformity of contributing spectral regions of affective vocalisations, a distributed network of non-primary auditory brain regions in the frontal cortex, basal ganglia, and lateral limbic regions supports affect processing in noise. Given the conditions for recruitment and the previously established functional contributions of these regions, we propose that this task is underpinned by a reciprocal network between frontal cortical and ventral limbic regions that assists in flexible adaptation and tuning to stimuli, while hippocampal and parahippocampal regions support the auditory system's processing of the degraded auditory information via associative and contextual processing.

Competing Interest Statement
The authors have declared no competing interest.