Machine Learning Classification of Active Viewing of Pain and Non-Pain Images Using EEG Does Not Exceed Chance in External Validation Samples
Cognitive, Affective, & Behavioral Neuroscience (2025)
Abstract
Previous research demonstrated that machine learning (ML) could not effectively decode passive observation of neutral versus pain photographs from electroencephalogram (EEG) data. The present study therefore explored whether active viewing of neutral and pain stimuli, i.e., viewing that requires participants to engage in a task, improves ML performance. Random forest (RF) models were trained on cortical event-related potentials (ERPs) recorded during a two-alternative forced-choice paradigm in which participants judged the presence or absence of pain in photographs of facial expressions and action scenes. Sixty-two participants were recruited for the model development sample, and a within-subject temporal validation sample of 27 participants was also collected. In line with our previous research, three RF models were developed to classify faces versus scenes, neutral versus pain scenes, and neutral versus pain expressions. The results demonstrated that the RF successfully classified discrete categories of visual stimuli (faces and scenes) with accuracies of 78 …
Key words
Empathy, Electroencephalography, Event-related potential, Random forest, Faces
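For readers unfamiliar with the modeling setup, the sketch below illustrates in broad strokes the kind of pipeline the abstract describes: a random forest trained on per-trial ERP features for a binary neutral-versus-pain decision, then evaluated on held-out data. It assumes scikit-learn; the array shapes, feature contents, and split are placeholders, not the paper's actual data or parameters.

```python
# Minimal sketch of an RF classifier on ERP features (illustrative only;
# feature dimensions, labels, and hyperparameters are hypothetical).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical design: trials summarized by mean ERP amplitudes per
# electrode/time window (columns); rows are individual trials.
n_trials, n_features = 3000, 64
X = rng.normal(size=(n_trials, n_features))   # placeholder ERP features
y = rng.integers(0, 2, size=n_trials)         # 0 = neutral, 1 = pain

# Split the development sample into training and held-out test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

# Internal held-out accuracy; the paper's point is that performance must
# also be checked on a separate (e.g., temporal) validation sample.
print("held-out accuracy:", accuracy_score(y_test, rf.predict(X_test)))
```

On placeholder random features this prints chance-level accuracy (~0.5), which is exactly the external-validation outcome the title reports for the pain/non-pain contrasts.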