Deep Learning-Based Prediction of Monte Carlo Dose Distribution for Heavy Ion Therapy
Physics and Imaging in Radiation Oncology (2025)
Institute of Modern Physics
Abstract
Background and purpose: Current treatment planning system dose algorithms (TPSDose) lack accuracy, whereas Monte Carlo-simulated dose distributions (MCDose) are accurate but computationally intensive. We propose a deep learning (DL) model for rapid prediction of MCDose in heavy ion therapy (HIT).

Materials and methods: We developed a DL model, the Cascade Hierarchically Densely 3D U-Net (CHD U-Net), to predict MCDose from computed tomography (CT) images and TPSDose of 67 head-and-neck patients and 30 thorax-and-abdomen patients. We compared the results with those of two proton dose DL models (C3D and HD U-Net) and with TPSDose.

Results: Compared to TPSDose, the gamma passing rate (GPR) improved by 16 % under the 1 %/1 mm criterion. Notably, the model achieved GPRs of 99 % and 97 % under the clinically relevant 3 %/3 mm criterion across the whole dose distribution in patients. For head-and-neck patients, the GPRs of the C3D and HD U-Net models were 97 % and 85 % in the planning target volume (PTV) and 98 % and 97 % in the body, respectively. For thorax-and-abdomen patients, the GPRs of the C3D and HD U-Net models were 71 % and 51 % in the PTV and 95 % and 90 % in the body, respectively.

Conclusions: The proposed CHD U-Net predicts MCDose in a few seconds and outperforms the two alternative DL models. Owing to its MC-simulation-level accuracy, the predicted dose can replace TPSDose in the HIT clinical workflow, improving the accuracy of dose calculation and providing a valuable reference for quality assurance.
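The abstract describes the CHD U-Net only at a high level: a cascaded, densely connected 3D U-Net that maps CT images and TPSDose to a predicted MCDose. The following is a minimal PyTorch sketch of that general idea, not the authors' implementation; the dense blocks, channel counts, network depth, and the two-stage cascade interface are all illustrative assumptions.

import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    # Two densely connected 3D conv layers; inputs are concatenated along channels.
    def __init__(self, in_ch, growth=16):
        super().__init__()
        self.c1 = nn.Sequential(nn.Conv3d(in_ch, growth, 3, padding=1),
                                nn.InstanceNorm3d(growth), nn.ReLU(inplace=True))
        self.c2 = nn.Sequential(nn.Conv3d(in_ch + growth, growth, 3, padding=1),
                                nn.InstanceNorm3d(growth), nn.ReLU(inplace=True))
        self.out_ch = in_ch + 2 * growth

    def forward(self, x):
        y1 = self.c1(x)
        y2 = self.c2(torch.cat([x, y1], dim=1))
        return torch.cat([x, y1, y2], dim=1)

class TinyUNet3D(nn.Module):
    # One-level 3D U-Net built from dense blocks; outputs a single dose channel.
    def __init__(self, in_ch):
        super().__init__()
        self.enc = DenseBlock3D(in_ch)
        self.down = nn.MaxPool3d(2)
        self.mid = DenseBlock3D(self.enc.out_ch)
        self.up = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False)
        self.dec = DenseBlock3D(self.mid.out_ch + self.enc.out_ch)
        self.head = nn.Conv3d(self.dec.out_ch, 1, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.head(d)

class CascadeDosePredictor(nn.Module):
    # Two cascaded U-Nets: stage 2 refines stage 1's coarse dose estimate.
    def __init__(self):
        super().__init__()
        self.stage1 = TinyUNet3D(in_ch=2)   # channels: CT + TPSDose
        self.stage2 = TinyUNet3D(in_ch=3)   # channels: CT + TPSDose + stage-1 dose

    def forward(self, ct, tps_dose):
        x = torch.cat([ct, tps_dose], dim=1)
        coarse = self.stage1(x)
        refined = self.stage2(torch.cat([x, coarse], dim=1))
        return coarse, refined

# Shapes are (batch, channel, D, H, W); D/H/W must be even for the pooling step.
model = CascadeDosePredictor()
ct = torch.randn(1, 1, 32, 64, 64)
tps = torch.randn(1, 1, 32, 64, 64)
coarse, refined = model(ct, tps)

Supervising both the coarse and refined outputs against the MC ground truth is one common way to train such cascades, but the paper's actual training scheme is not stated in the abstract.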
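The GPR figures above come from gamma analysis, which scores each voxel of an evaluated dose against a reference dose under a combined dose-difference/distance-to-agreement criterion (e.g., 3 %/3 mm) and reports the fraction of voxels with gamma <= 1. Below is a minimal NumPy sketch of a global 3D gamma passing rate in the spirit of Low et al. (1998); the function name, its arguments, and the discrete brute-force search (no sub-voxel interpolation, wraparound at grid edges) are simplifying assumptions, not the paper's evaluation code.

import numpy as np

def gamma_passing_rate(ref, evl, spacing_mm, dose_pct=3.0, dta_mm=3.0,
                       low_dose_cutoff=0.10):
    # Global 3D gamma passing rate (after Low et al. 1998), brute-force search.
    # ref, evl: 3D dose arrays (Gy) on the same grid; spacing_mm: voxel size (dz, dy, dx).
    dd = dose_pct / 100.0 * ref.max()  # global dose-difference criterion in Gy
    radius = [int(np.ceil(2 * dta_mm / s)) for s in spacing_mm]  # search window in voxels

    gamma2 = np.full(ref.shape, np.inf)
    for dz in range(-radius[0], radius[0] + 1):
        for dy in range(-radius[1], radius[1] + 1):
            for dx in range(-radius[2], radius[2] + 1):
                d2 = ((dz * spacing_mm[0]) ** 2 + (dy * spacing_mm[1]) ** 2
                      + (dx * spacing_mm[2]) ** 2)
                if d2 > (2 * dta_mm) ** 2:
                    continue
                # np.roll wraps at the grid edges -- fine for a sketch, not for clinical QA.
                shifted = np.roll(evl, (dz, dy, dx), axis=(0, 1, 2))
                g2 = d2 / dta_mm ** 2 + (shifted - ref) ** 2 / dd ** 2
                gamma2 = np.minimum(gamma2, g2)

    mask = ref >= low_dose_cutoff * ref.max()  # exclude low-dose voxels, as is customary
    return float(np.mean(gamma2[mask] <= 1.0))

# Toy check: identical distributions should pass everywhere (GPR = 1.0).
dose = np.random.rand(20, 20, 20) * 2.0
print(gamma_passing_rate(dose, dose.copy(), spacing_mm=(2.0, 2.0, 2.0)))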
Key words
Deep learning, Heavy ion therapy, Dose prediction, Monte Carlo simulation, Analytical algorithm