Frequency and Classification of Addenda in Paediatric Neuroradiological Reports As Part of Quality Assurance
Clinical Radiology (2024)
Children's Hospital of Philadelphia
Abstract
AIM: To determine the frequency and classification of addenda to paediatric brain magnetic resonance imaging (MRI) reports.

MATERIALS AND METHODS: A retrospective review of the addenda to brain MRI reports from a large tertiary children's hospital was undertaken between January 2013 and December 2021, and a subset of these reports was used to classify addenda over 6-month periods (October to March) spanning 2018 to 2021. A radiology fellow and a medical doctor classified the addenda into previously published categories using their best judgement.

RESULTS: Of the 73,643 brain MRI reports over 9 years (108 months) included in the study, only 923 (1.25%) had addenda. There was a total of 13,615 brain MRI reports from the 6-month periods, of which only 179 (1.31%) had an addendum. The numbers of errors by category were: observational, 88/13,615 (0.65%); interpretational, 16/13,615 (0.12%); and non-observational and non-interpretative, 82/13,615 (0.6%). Notifications to the referring physician were made in 29/13,615 (0.21%).

CONCLUSIONS: The overall proportion of addenda to the brain MRI reports of children in the present study was low, at 1.25%. Categorisation of the addenda revealed the most common errors to be observational (0.65%), including under-reading in the region of interest (0.25%). Appropriate measures can now be introduced to minimise error-based addenda further and improve MRI diagnosis in children. Other paediatric practices may choose to follow suit in evaluating their addenda and errors to improve practice.

(c) 2024 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
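As a sanity check, the percentages reported in the Results follow directly from the stated counts. A minimal Python sketch recomputing them (all counts are taken from the abstract; variable names are illustrative, not from the paper):

```python
# Recompute the proportions reported in the abstract from the stated counts.

total_reports = 73_643        # brain MRI reports over 9 years (108 months)
reports_with_addenda = 923    # reports carrying at least one addendum

subset_reports = 13_615       # reports from the 6-month subset periods
subset_addenda = 179          # subset reports with an addendum
observational = 88            # observational errors
interpretational = 16         # interpretational errors
non_obs_non_interp = 82       # non-observational, non-interpretative addenda
notifications = 29            # notifications to the referring physician

def pct(numerator: int, denominator: int) -> str:
    """Format a proportion as a percentage with two decimal places."""
    return f"{100 * numerator / denominator:.2f}%"

print("Overall addendum rate:", pct(reports_with_addenda, total_reports))  # 1.25%
print("Subset addendum rate: ", pct(subset_addenda, subset_reports))       # 1.31%
print("Observational:        ", pct(observational, subset_reports))        # 0.65%
print("Interpretational:     ", pct(interpretational, subset_reports))     # 0.12%
print("Non-obs/non-interp:   ", pct(non_obs_non_interp, subset_reports))   # 0.60%
print("Notifications:        ", pct(notifications, subset_reports))        # 0.21%
```

Each printed value matches the corresponding figure in the abstract (the paper rounds 0.60% to 0.6%).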
Key words
Addendum, Magnetic Resonance Imaging (MRI), Radiology Report, Errors