EEG-based Multimodal Representation Learning for Emotion Recognition

International Winter Conference on Brain-Computer Interface (2025)

Abstract
Multimodal learning has been a popular area of research, yet integrating electroencephalogram (EEG) data poses unique challenges due to its inherent variability and limited availability. In this paper, we introduce a novel multimodal framework that accommodates not only conventional modalities such as video, images, and audio, but also EEG data. Our framework is designed to flexibly handle varying input sizes while dynamically adjusting attention to account for feature importance across modalities. We evaluate our approach on a recently introduced emotion recognition dataset that combines data from three modalities, making it an ideal testbed for multimodal learning. The experimental results provide a benchmark for the dataset and demonstrate the effectiveness of the proposed framework. This work highlights the potential of integrating EEG into multimodal systems, paving the way for more robust and comprehensive applications in emotion recognition and beyond.
Key words
brain-computer interface, electroencephalogram, multimodal training, emotion recognition

Key points: The paper proposes a novel multimodal learning framework that combines electroencephalogram (EEG) data with conventional modalities such as video, images, and audio, improving the accuracy and effectiveness of emotion recognition.

Method: The study adopts a flexible multimodal framework that handles varying input sizes and dynamically adjusts attention to account for the importance of features from each modality.
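The paper does not provide an implementation on this page, but the following minimal PyTorch sketch illustrates one plausible reading of the described mechanism: per-modality linear projections absorb differing input sizes, and a softmax over per-modality scores supplies the dynamic importance weights used for fusion. All names (ModalityAttentionFusion, shared_dim, the feature sizes) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of attention-weighted multimodal fusion (not the authors' code).
# Each modality (EEG, video, audio) is projected to a shared dimension, and a learned
# attention distribution over modalities re-weights their contributions before fusion.
import torch
import torch.nn as nn


class ModalityAttentionFusion(nn.Module):
    def __init__(self, input_dims: dict, shared_dim: int = 128, num_classes: int = 4):
        super().__init__()
        # One projection per modality, so inputs of different sizes map to shared_dim.
        self.projections = nn.ModuleDict(
            {name: nn.Linear(dim, shared_dim) for name, dim in input_dims.items()}
        )
        # Scores each projected modality embedding; a softmax over modalities
        # yields the dynamic importance weights described in the abstract.
        self.attn_score = nn.Linear(shared_dim, 1)
        self.classifier = nn.Linear(shared_dim, num_classes)

    def forward(self, inputs: dict) -> torch.Tensor:
        # inputs: {modality_name: tensor of shape (batch, input_dims[name])}
        projected = [torch.tanh(self.projections[name](x)) for name, x in inputs.items()]
        stacked = torch.stack(projected, dim=1)   # (batch, n_modalities, shared_dim)
        scores = self.attn_score(stacked)         # (batch, n_modalities, 1)
        weights = torch.softmax(scores, dim=1)    # attention over modalities
        fused = (weights * stacked).sum(dim=1)    # (batch, shared_dim)
        return self.classifier(fused)


# Example usage with made-up feature sizes for EEG, video, and audio.
model = ModalityAttentionFusion({"eeg": 310, "video": 512, "audio": 128}, num_classes=4)
batch = {
    "eeg": torch.randn(8, 310),
    "video": torch.randn(8, 512),
    "audio": torch.randn(8, 128),
}
logits = model(batch)  # (8, 4) emotion-class logits
```

Under this reading, a missing or resized modality only requires changing (or dropping) its projection layer, while the softmax re-normalizes the remaining modality weights.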

Experiments: The proposed method is evaluated on a newly introduced three-modality emotion recognition dataset; the results demonstrate the effectiveness of the framework and provide a benchmark for the dataset.