
Resting-state functional brain imaging evaluation of the default mode network in sleep bruxism patients

Huimin Jing, Wenjuan Yu, Sijia Wang, Cong Chen, Yifan Li, Yonglan Wang, Xin Li, Juan Zhang, Meng Liang

Chinese Journal of Tissue Engineering Research (2021)

Abstract
Background: Sleep bruxism is a common oral parafunction, but its etiology remains unclear. Many psychological questionnaire surveys of bruxism patients have suggested that bruxism is related to psychological factors, but the specific link and mechanism between the two are not clear. Objective: To analyze changes in the default mode network of sleep bruxism patients and to explore their resting-state brain network status. Methods: From November 2018 to May 2019, 20 sleep bruxism patients confirmed by polysomnography (bruxism group) and 20 asymptomatic volunteers matched for age, sex, and years of education (control group) underwent resting-state functional MRI on a 3.0 T MRI scanner between 20:00 and 23:00 at night. Independent component analysis was used to separate the resting-state brain networks, and the default mode network components were extracted for statistical analysis: a one-sample t-test was first used to create the network component template, and a two-sample t-test was then used to compare the default mode network components between groups. Results and Conclusion: Functional connectivity within the default mode network in the precuneus was weaker in the bruxism group than in the control group, and the difference was statistically significant (t = -3.319, P < 0.05), suggesting that the default mode network of sleep bruxism patients is abnormal in the resting state.
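The abstract describes a two-step statistical workflow: a one-sample t-test across all subjects to build a default mode network (DMN) template, followed by a two-sample t-test comparing the groups within that template. The following is a minimal sketch of that kind of group comparison, assuming each subject's DMN spatial map has already been extracted by independent component analysis; the array names, shapes, and thresholds are hypothetical illustrations and are not taken from the paper.

```python
# Hypothetical sketch of the two-step group comparison described in the abstract.
# Assumes per-subject DMN component maps have already been obtained via ICA;
# shapes and thresholds below are placeholders, not values from the study.
import numpy as np
from scipy import stats

# Placeholder inputs: per-subject DMN z-maps flattened to (n_subjects, n_voxels).
bruxism_maps = np.random.randn(20, 50000)   # 20 patients (placeholder data)
control_maps = np.random.randn(20, 50000)   # 20 matched controls (placeholder data)

# Step 1: one-sample t-test across all subjects to build a DMN template mask
# (voxels that load significantly on the DMN component at the group level).
all_maps = np.vstack([bruxism_maps, control_maps])
t_one, p_one = stats.ttest_1samp(all_maps, popmean=0.0, axis=0)
dmn_mask = p_one < 0.001                     # hypothetical template threshold

# Step 2: two-sample t-test within the template to compare groups voxel-wise.
t_two, p_two = stats.ttest_ind(bruxism_maps[:, dmn_mask],
                               control_maps[:, dmn_mask], axis=0)

# Voxels where within-DMN connectivity differs between groups (uncorrected here;
# a real analysis would apply multiple-comparison correction).
print("voxels with p < 0.05:", int(np.sum(p_two < 0.05)))
```

This only illustrates the statistical logic; the actual analysis in the paper was performed on fMRI component maps produced by group ICA, not on synthetic arrays.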