Abstract No. 198 AI-Powered Assistant for Procedure Request Routing in a Large Hospital System
Journal of Vascular and Interventional Radiology (2024)
Abstract
Within large hospital systems, difficulty with routing procedure requests to the appropriate team and covering provider can delay patient care and cause frustration for both radiologists and ordering clinicians. Moreover, the heterogeneity of interventional radiology practices adds complexity when requests could also be directed to non-vascular interventional teams or procedure teams from other specialties. Artificial intelligence (AI) large language models (LLMs) enable a wide range of capabilities across industries. This work demonstrates a proof-of-concept, LLM-based tool to route procedure requests to the appropriate teams. At a large academic hospital, existing teams, pager/phone numbers, and schedules were used to create text-based rules for procedure requests (Table 198.1). Using the OpenAI application programming interface (API) with Python, an LLM-based assistant was created to route procedure requests at specific days and times to the appropriate teams. Using the GPT-3.5 Turbo and GPT-4 models, 270 procedure requests were tested with randomly generated days and times. The estimated cost of each API request was recorded. The assistant correctly routed 82.2% of procedure requests using GPT-3.5 Turbo and 96.3% using GPT-4, at an average cost of $0.00068 per request for GPT-3.5 Turbo and $0.013 per request for GPT-4. For both models, the most common errors occurred in early morning requests, times at which multiple subspecialty division procedure services are covered by overnight resident phones. The GPT-3.5 Turbo model demonstrated lower accuracy when routing post-pyloric feeding tube placements, frequently routing them incorrectly to the interventional radiology service, a common error among clinicians in our clinical experience. This work demonstrates the feasibility of an accurate, low-cost AI-powered assistant to appropriately route procedure requests in a large academic hospital system.
Given the free-text input, the rules and teams can easily be adapted to different coverages or hospital systems. A similar approach may be used to help clinicians navigate a radiology phone tree, or as a tool to help reading room coordinators route requests effectively with decreased training.
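The abstract describes embedding free-text coverage rules in an OpenAI API call so the model can name the covering team for a given request, day, and time. A minimal sketch of that approach is below; the team names, pager numbers, coverage hours, and prompt wording are illustrative assumptions, not the authors' actual rule table or code.

```python
# Illustrative sketch (not the authors' implementation): route a procedure
# request with an LLM by embedding text-based coverage rules in the prompt.
# All teams, pager numbers, and hours below are hypothetical examples.
from datetime import datetime

COVERAGE_RULES = """\
- Vascular IR (pager 1111): angiography, embolization; weekdays 07:00-17:00.
- Non-vascular IR (pager 2222): drains, biopsies; weekdays 07:00-17:00.
- Fluoroscopy (pager 3333): post-pyloric feeding tubes; weekdays 08:00-16:00.
- Overnight resident (phone 4444): all services outside weekday daytime hours.
"""

def build_messages(request_text: str, when: datetime) -> list:
    """Assemble chat messages asking the model to name exactly one team."""
    system = (
        "You route procedure requests in a hospital. Using the coverage "
        "rules below, reply with the single covering team name only.\n\n"
        + COVERAGE_RULES
    )
    user = f"Request: {request_text}\nDay/time: {when:%A %H:%M}"
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

def route(request_text: str, when: datetime, client) -> str:
    """Send one routing query; `client` is an openai.OpenAI() instance."""
    resp = client.chat.completions.create(
        model="gpt-4",  # GPT-3.5 Turbo was cheaper but less accurate here
        messages=build_messages(request_text, when),
        temperature=0,  # deterministic routing
    )
    return resp.choices[0].message.content.strip()
```

To replicate the evaluation described above, `route()` would be called with each of the test requests at randomly generated days and times and the returned team compared against the rule table; because the rules live in plain text, adapting the assistant to a different coverage schedule or hospital only requires editing `COVERAGE_RULES`.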