Free‐breathing Liver Fat and R2* Quantification Using Motion‐corrected Averaging Based on a Nonlocal Means Algorithm
Magnetic Resonance in Medicine (2020) | SCI Q2
Univ Wisconsin | GE Healthcare
Abstract
Purpose: To propose a motion‐robust chemical shift‐encoded (CSE) method with high signal‐to‐noise ratio (SNR) for accurate quantification of liver proton density fat fraction (PDFF) and R2*.

Methods: A free‐breathing, multi‐repetition 2D CSE acquisition with motion‐corrected averaging using nonlocal means (NLM) was proposed. PDFF and R2* quantified with 2D CSE‐NLM were compared in a digital phantom to two alternative 2D techniques: direct averaging and single acquisition (2D 1ave). Further, 2D NLM was compared in patients to 3D techniques (standard breath‐hold, free‐breathing, and navigated) and to the alternative 2D techniques. A reader study and quantitative analysis (Bland‐Altman, correlation analysis, paired Student's t‐test) were performed to evaluate image quality and assess PDFF and R2* measurements in regions of interest.

Results: In simulations, 2D NLM resulted in lower standard deviations (STDs) of PDFF (2.7%) and R2* (8.2 s⁻¹) compared to direct averaging (PDFF: 3.1%, R2*: 13.6 s⁻¹) and 2D 1ave (PDFF: 8.7%, R2*: 33.2 s⁻¹). In patients, 2D NLM resulted in fewer motion artifacts than 3D free‐breathing and 3D navigated acquisitions, less signal loss than 2D direct averaging, and higher SNR than 2D 1ave. Quantitatively, the STDs of PDFF and R2* for 2D NLM were comparable to those of 2D direct averaging (p > 0.05). 2D NLM reduced bias, particularly in R2* (−5.73 to −0.36 s⁻¹), compared to that arising in direct averaging (−3.96 to 11.22 s⁻¹) in the presence of motion.

Conclusions: 2D CSE‐NLM enables accurate mapping of PDFF and R2* in the liver during free breathing.
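The central idea of NLM‐based motion‐corrected averaging, weighting each free‐breathing repetition by local patch similarity to a reference before averaging, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name nlm_weighted_average, the choice of the first repetition as reference, and the parameter values (patch, h) below are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nlm_weighted_average(reps, ref_idx=0, patch=5, h=0.05):
    """Motion-weighted averaging of repeated 2D acquisitions (illustrative sketch).

    reps    : (N, H, W) float array of co-located magnitude images,
              one per free-breathing repetition (intensities ~[0, 1]).
    ref_idx : repetition used as the motion reference (assumption: first).
    patch   : side length of the local patch used for similarity.
    h       : smoothing parameter controlling how fast weights fall off.
    """
    ref = reps[ref_idx]
    w = np.empty_like(reps)
    for i, rep in enumerate(reps):
        # Mean squared patch difference to the reference, via a box filter.
        d2 = uniform_filter((rep - ref) ** 2, size=patch)
        # NLM-style weight: repetitions whose local patches are displaced
        # by respiratory motion get near-zero weight.
        w[i] = np.exp(-d2 / h**2)
    w /= w.sum(axis=0, keepdims=True)      # normalize weights per pixel
    return (w * reps).sum(axis=0)          # motion-weighted average

# Hypothetical usage: combine 6 repetitions of one echo image
# reps = np.abs(echoes)                   # shape (6, 256, 256), float
# avg  = nlm_weighted_average(reps)
```

The design intent this sketch tries to capture is the trade‐off described in the abstract: pixelwise similarity weights suppress motion‐displaced repetitions (avoiding the signal loss and R2* bias of direct averaging) while still pooling multiple repetitions for higher SNR than a single acquisition.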
Key words
liver, motion-corrected averaging, nonlocal means, proton density fat fraction, quantification, R2*