Qualitative EEG Abnormalities Signal a Shift Towards Inhibition-Dominated Brain Networks. Results from the EU-AIMS LEAP Studies
2024
Department of Integrative Neurophysiology | Aspect Neuroprofiles BV | Department of Cognitive Neuroscience
Abstract
Qualitative EEG abnormalities are common in Autism Spectrum Disorder (ASD) and hypothesized to reflect disrupted excitation/inhibition (E/I) balance. To test this, we recently introduced a functional measure of network-level E/I ratio (fE/I). Here, we applied fE/I and other quantitative EEG measures to alpha oscillations from source-reconstructed data in the EU-AIMS compilation of 267 EEG recordings from children/adolescents and adults with ASD and 209 controls. We analyzed these quantitative measures alongside a qualitative evaluation of EEG abnormalities, ranging from slowing of activity to epileptiform patterns, aiming to replicate the findings of the SPACE-BAMBI study (Bruining et al., 2020). EEG abnormalities were identified in only a few adults and could not be statistically assessed in that group. Children/adolescents with ASD and EEG abnormalities exhibited lower relative alpha power and lower fE/I than those without abnormalities; however, EEG-abnormality scoring did not stratify the behavioral heterogeneity of ASD as assessed by clinical measures. Surprisingly, several controls presented with qualitative EEG abnormalities and showed an anatomical distribution of lower fE/I strikingly similar to that observed in the ASD group, suggesting a shift towards inhibition-dominated network dynamics in regions associated with altered sensory processing. The robustness of the association between EEG abnormalities and reduced fE/I was further supported by a re-analysis of the SPACE-BAMBI data in source space. Stratification by the presence of EEG abnormalities and their associated effects on network activity may help to understand physiological heterogeneity in neurodevelopment and the difficulties of implementing E/I-targeting treatments in unselected cohorts.
Competing Interest Statement
H.B., K.L.-H., and S.-S.P. are shareholders of Aspect Neuroprofiles BV, which develops physiology-informed prognostic measures for neurodevelopmental disorders. K.L.-H. has filed the patent claim (PCT/NL2019/050167) "Method of determining brain activity", with priority date 16 March 2018. T.C. has served as a paid consultant to F. Hoffmann-La Roche Ltd. and Servier, and has received royalties from Sage Publications and Guilford Publications. A.E.-A. is a paid consultant for Aspect Neuroprofiles BV. In the past three years, J.B. has been a consultant to, advisory board member of, and/or speaker for Takeda, Roche, Medice, Angelini, Neuraxpharm, and Servier; he is not an employee or stock shareholder of any of these companies and has received no other financial or material support, including expert testimony, patents, or royalties. P.G. and J.F.H. are full-time employees of F. Hoffmann-La Roche Ltd. T.B. served in an advisory or consultancy role for eye level, Infectopharm, Medice, Neurim Pharmaceuticals, Oberberg GmbH, and Takeda; he received conference support or speaker's fees from Janssen-Cilag, Medice, and Takeda, and royalties from Hogrefe, Kohlhammer, CIP Medien, and Oxford University Press. The remaining authors have no competing interests to declare. The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.