
Machine Learning Classification of Active Viewing of Pain and Non-Pain Images Using EEG Does Not Exceed Chance in External Validation Samples

Tyler Mari, S. Hasan Ali, Lucrezia Pacinotti, Sarah Powsey, Nicholas Fallon

Cognitive, Affective, & Behavioral Neuroscience (2025)

University of Liverpool

Abstract
Previous research has demonstrated that machine learning (ML) could not effectively decode passive observation of neutral versus pain photographs using electroencephalogram (EEG) data. Consequently, the present study explored whether active viewing, i.e., requiring participant engagement in a task, of neutral and pain stimuli improves ML performance. Random forest (RF) models were trained on cortical event-related potentials (ERPs) during a two-alternative forced-choice paradigm, in which participants determined the presence or absence of pain in photographs of facial expressions and action scenes. Sixty-two participants were recruited for the model development sample, and a within-subject temporal validation sample of 27 subjects was also collected. In line with our previous research, three RF models were developed to classify images into faces and scenes, neutral and pain scenes, and neutral and pain expressions. The results demonstrated that the RF successfully classified discrete categories of visual stimuli (faces and scenes) with accuracies of 78…
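The paper reports no code, but the pipeline it describes (RF classifiers trained on ERP features) maps onto a standard scikit-learn workflow. The sketch below is an illustrative assumption, not the authors' implementation: ERP epochs are flattened into trial-by-feature vectors (the 64-channel-by-50-sample shape and the random placeholder data are invented), and a random forest is scored with stratified cross-validation on the development sample.

```python
# Minimal sketch (not the authors' code): a random forest trained on
# flattened ERP feature vectors for a binary image-classification task.
# X_dev / y_dev are hypothetical arrays standing in for the development
# sample (trials x [channels * time points] ERP amplitudes).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 600, 64 * 50                   # e.g., 64 channels x 50 samples
X_dev = rng.standard_normal((n_trials, n_features))   # placeholder ERP features
y_dev = rng.integers(0, 2, n_trials)                  # 0 = neutral, 1 = pain

rf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(rf, X_dev, y_dev, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Stratified folds keep the class ratio stable across splits, which matters when cross-validated accuracy is later compared against a chance baseline.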
Keywords
Empathy, Electroencephalography, Event-related potential, Random forest, Faces

[Key points]: The study examined machine learning classification of pain versus non-pain images from EEG data under active-viewing conditions and found that performance did not exceed chance level.

[Methods]: Random forest models were trained on participants' cortical event-related potentials (ERPs) in a two-alternative forced-choice paradigm to discriminate between different types of visual stimuli.

[Experiments]: Sixty-two participants were recruited for the model development sample and 27 for the within-subject temporal validation sample. Three random forest models were developed on the ERP data to classify the images; the models classified the visual stimulus categories with some accuracy, but classification of pain empathy did not exceed chance level on the external validation samples.
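A finding that validation performance "does not exceed chance" is typically established by testing held-out accuracy against a chance baseline. As a minimal sketch under stated assumptions (this is not the paper's analysis code, and the balanced-classes chance rate of 0.5 and all array sizes are placeholders), the snippet below fits a random forest on a development sample, scores it on an external validation sample, and applies a binomial test against chance.

```python
# Hedged sketch: external validation of a fitted classifier against chance.
# X_dev/y_dev (development) and X_val/y_val (validation) are placeholders.
import numpy as np
from scipy.stats import binomtest
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_dev = rng.standard_normal((600, 3200))
y_dev = rng.integers(0, 2, 600)
X_val = rng.standard_normal((270, 3200))   # external validation trials
y_val = rng.integers(0, 2, 270)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_dev, y_dev)
n_correct = int((rf.predict(X_val) == y_val).sum())

# Under chance performance for a balanced two-class task, each prediction is
# correct with p = 0.5; a one-sided binomial test asks whether the observed
# hit count is significantly above that.
result = binomtest(n_correct, n=len(y_val), p=0.5, alternative="greater")
print(f"validation accuracy: {n_correct / len(y_val):.3f}, "
      f"p vs. chance: {result.pvalue:.3f}")
```

On random placeholder data like this, the test should (correctly) fail to reject chance-level performance, mirroring the paper's external-validation result for pain versus non-pain classification.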