
Joint Transformer Architecture in Brain 3D MRI Classification: Its Application in Alzheimer’s Disease Classification

Scientific Reports (2024), SCI Zone 3

Erzurum Technical University | Louisiana State University Health Sciences Center Shreveport | Department of Pathology and Translational Pathobiology | Department of Pharmacology

Cited 3 | Views 8
Abstract
Alzheimer’s disease (AD), a neurodegenerative disease that mostly affects the elderly, slowly impairs memory, cognition, and daily tasks, and has long been one of the most debilitating chronic neurological disorders in people over 65. In this study, we investigated the use of a Vision Transformer (ViT) for magnetic resonance image (MRI) processing in the context of AD diagnosis. The ViT was used to extract features from MRIs, map them to a feature sequence, perform sequence modeling to preserve interdependencies, and classify the features with a time series transformer. The proposed model was evaluated on ADNI T1-weighted MRIs for binary and multiclass classification. Two data collections from the ADNI database, Complete 1Yr 1.5T and Complete 3Yr 3T, were used for training and testing. A random split allocated 60% of the samples for training and 20% each for testing and validation, giving sample sizes of (211, 70, 70) and (1378, 458, 458), respectively. The performance of the proposed model was compared with several deep learning models, including CNN with Bi-LSTM and ViT with Bi-LSTM. The proposed technique diagnoses AD with high accuracy (99.048% for binary and 99.014% for multiclass classification), precision, recall, and F-score, offering researchers an approach to more efficient early clinical diagnosis and intervention.
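To make the pipeline concrete, the sketch below shows one way such an architecture could be assembled in PyTorch: a ViT-style patch encoder embeds each 2D slice of the 3D MRI, the per-slice features form a sequence, and a second transformer encoder models that sequence and produces class logits. The module names, hyperparameters, and slice-wise decomposition are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch, assuming a slice-wise decomposition of the 3D volume;
# all names and hyperparameters are illustrative, not the paper's code.
import torch
import torch.nn as nn


class SliceViTEncoder(nn.Module):
    """Embed one 2D MRI slice with a ViT-style patch embedding + transformer."""

    def __init__(self, img_size=128, patch_size=16, dim=256, depth=4, heads=8):
        super().__init__()
        n_patches = (img_size // patch_size) ** 2
        # Patch embedding implemented as a strided convolution (standard ViT trick).
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                      # x: (B, 1, H, W)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = self.encoder(tokens + self.pos_embed)
        return tokens.mean(dim=1)              # one feature vector per slice


class VolumeSequenceClassifier(nn.Module):
    """Model the slice-feature sequence and classify the whole volume."""

    def __init__(self, dim=256, depth=2, heads=8, num_classes=2):
        super().__init__()
        self.slice_encoder = SliceViTEncoder(dim=dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.sequence_encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, volume):                 # volume: (B, S, 1, H, W), S slices
        b, s = volume.shape[:2]
        feats = self.slice_encoder(volume.flatten(0, 1)).view(b, s, -1)
        feats = self.sequence_encoder(feats)   # preserve inter-slice dependencies
        return self.head(feats.mean(dim=1))    # logits per class


# Usage: 2 volumes of 32 slices each -> logits of shape (2, 2).
logits = VolumeSequenceClassifier()(torch.randn(2, 32, 1, 128, 128))
```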
Key words
Alzheimer’s disease, MRI, Transfer learning, Sequence classification, Vision transformer

Key points: This study proposes a Vision Transformer-based 3D MRI classification architecture for the early diagnosis of Alzheimer’s disease, achieving highly accurate classification.

Methods: The study uses a Vision Transformer (ViT) to extract features from 3D MRIs, then performs feature-sequence modeling and classification with a time series transformer.

Experiments: The experiments used the Complete 1Yr 1.5T and Complete 3Yr 3T collections from the ADNI database with a random split, yielding training/test/validation sample sizes of (211, 70, 70) and (1378, 458, 458), respectively; the model reached 99.048% accuracy for binary classification and 99.014% for multiclass classification.
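To make the reported 60/20/20 split protocol concrete, here is a minimal scikit-learn sketch of a stratified random split; the subject list, labels, and random seed are placeholders, not the authors' actual splitting code.

```python
# Hedged sketch of a 60/20/20 random split, assuming 351 scans (1.5T collection)
# and placeholder binary labels; the exact counts depend on rounding and seed.
from sklearn.model_selection import train_test_split

subjects = list(range(351))          # placeholder scan IDs
labels = [i % 2 for i in subjects]   # placeholder binary labels (e.g. AD vs. CN)

# First carve out 60% for training, then split the remainder in half
# to obtain 20% test and 20% validation.
train, rest, y_train, y_rest = train_test_split(
    subjects, labels, train_size=0.6, stratify=labels, random_state=42)
test, val, y_test, y_val = train_test_split(
    rest, y_rest, test_size=0.5, stratify=y_rest, random_state=42)

print(len(train), len(test), len(val))   # roughly 210 / 70 / 71, close to (211, 70, 70)
```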