
Extracting Auditory Emotion in Noise: A Distributed Auxiliary Auditory Network Supporting Affect Processing of Non-Predictably Obscured Vocalisations

bioRxiv (2024)

Cognitive and Affective Neuroscience Unit

Abstract
Decoding affect information encoded within a vocally produced signal is a key part of daily communication. The acoustic channels that carry affect information, however, are not uniformly distributed across spectrotemporal space, meaning that natural listening environments with dynamic, competing noise may unpredictably obscure some spectrotemporal regions of the vocalisation, reducing the information available to the listener. In this study, we use behavioural and functional MRI investigations to first assess which spectrotemporal regions of a human vocalisation contribute to affect perception in the listener, and then apply a reverse-correlation fMRI analysis to identify which structures underpin this perceptually challenging task when categorisation-relevant acoustic information is unmasked by noise. Our results show that, despite the challenging task and the non-uniformity of contributing spectral regions of affective vocalisations, a distributed network of non-primary auditory brain regions in the frontal cortex, basal ganglia, and lateral limbic regions supports affect processing in noise. Given the conditions for recruitment and the previously established functional contributions of these regions, we propose that this task is underpinned by a reciprocal network between frontal cortical regions and ventral limbic regions that assists in flexible adaptation and tuning to stimuli, while hippocampal and parahippocampal regions support the auditory system's processing of the degraded auditory information via associative and contextual processing.

Competing Interest Statement
The authors have declared no competing interest.

Key Points: This paper examines how humans process affective information in noisy environments via a distributed auxiliary auditory network, finding that a network spanning frontal cortical, basal ganglia, and lateral limbic regions supports affect processing in noise.

Methods: Behavioural experiments and functional MRI (fMRI) were used to assess which spectrotemporal regions of human vocalisations contribute to affect perception, and a reverse-correlation fMRI analysis identified the brain structures that support this task when categorisation-relevant acoustic information is unmasked by noise.

Experiments: The experiments combined behavioural testing and fMRI scanning using human vocal signals as stimuli. Results showed that, despite the challenging task and the non-uniform spectral contributions of affective vocalisations, a distributed network of frontal cortical, basal ganglia, and lateral limbic regions supports affect processing in noise.