Comparing Size Measurements of Simulated Colorectal Polyp Size and Morphology Groups when Using a Virtual Scale Endoscope or Visual Size Estimation: Blinded Randomized Controlled Trial
Digestive Endoscopy (2023)
Montreal Univ Hosp Res Ctr | Montreal Univ Hosp Ctr CHUM
Abstract
Objectives: The virtual scale endoscope (VSE) projects a virtual scale onto colorectal polyps, allowing real-time size measurements. We studied the relative accuracy of VSE compared with visual assessment (VA) for measuring simulated polyps of different size and morphology groups.
Methods: We conducted a blinded randomized controlled trial using simulated polyps within a colon model. Sixty simulated polyps were evenly distributed across four size groups (1–5, >5–9.9, 10–19.9, and ≥20 mm) and three Paris morphology groups (flat, sessile, and pedunculated). Six endoscopists measured polyp sizes using either VA or VSE, assigned by random allocation.
Results: A total of 359 measurements were completed. The relative accuracy of VSE was significantly higher than that of VA for all size groups >5 mm (P = 0.004, P < 0.001, P < 0.001). For polyps ≤5 mm, the relative accuracy of VSE was not significantly higher than that of VA (P = 0.186). The relative accuracy of VSE was significantly higher than that of VA for all morphology groups. VSE misclassified a lower percentage of >5 mm polyps as ≤5 mm (2.9%), ≥10 mm polyps as <10 mm (5.5%), and ≥20 mm polyps as <20 mm (21.7%) compared with VA (11.2%, 24.7%, and 52.3%, respectively; P = 0.008, P < 0.001, and P = 0.003).
Conclusion: The virtual scale endoscope had significantly higher relative accuracy for every polyp size group and morphology type apart from diminutive polyps. VSE enables the endoscopist to better classify polyps into correct size categories at the clinically relevant size thresholds of 5, 10, and 20 mm.
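The abstract reports relative accuracies and rates of misclassification across the 5, 10, and 20 mm thresholds but does not state the formulas used. The sketch below illustrates one common convention for these two quantities (relative accuracy as 100% minus the absolute percentage error, and misclassification as a measurement falling on the wrong side of a clinical size threshold); the definitions, function names, and example values are assumptions for illustration, not the study's stated methods.

```python
def relative_accuracy(measured_mm: float, true_mm: float) -> float:
    """Relative accuracy as a percentage: 100 - (|measured - true| / true) * 100.
    Assumed definition; the paper's exact formula is not given in the abstract."""
    return 100.0 - abs(measured_mm - true_mm) / true_mm * 100.0


def misclassified(measured_mm: float, true_mm: float, threshold_mm: float) -> bool:
    """True when the measurement places the polyp on the wrong side of a
    clinical size threshold, e.g. a >=10 mm polyp reported as <10 mm."""
    return (true_mm >= threshold_mm) != (measured_mm >= threshold_mm)


# Hypothetical example: a 12 mm polyp visually estimated at 9 mm.
print(relative_accuracy(9.0, 12.0))    # 75.0
print(misclassified(9.0, 12.0, 10.0))  # True: crosses the 10 mm threshold
```

Under this convention, a measurement can have a fairly high relative accuracy yet still misclassify the polyp at a threshold, which is why the study reports both metrics separately.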
Key words
colonic polyp, colonoscopy, colorectal cancer, endoscope, polyp size estimation