System-level Trends in Ischemic Stroke Admissions after Adding Endovascular Stroke Capabilities in Community Hospitals
Journal of NeuroInterventional Surgery (2024)
Abstract
Background: There is substantial interest in adding endovascular stroke therapy (EST) capabilities in community hospitals. Here, we assess the effect of transitioning to an EST-performing hospital (EPH) on acute ischemic stroke (AIS) admissions in a large hospital system including academic and community hospitals.

Methods: From our prospectively collected multi-institutional registry, we collected data on AIS admissions at 10 hospitals in the greater Houston area from January 2014 to December 2022: one longstanding EPH (group A), three community hospitals that transitioned to EPHs in November 2017 (group B), and six community non-EPHs that remained non-EPH (group C). Primary outcomes were trends in total AIS admissions, large vessel occlusion (LVO) and non-LVO AIS, and tissue plasminogen activator (tPA) and EST use.

Results: Among 20 317 AIS admissions, median age was 67 (IQR 57–77) years, 52.4% were male, and median National Institutes of Health Stroke Scale (NIHSS) score was 4 (IQR 1–10). During the first 12 months after EPH transition, AIS admissions increased by 1.9% per month for group B, with non-LVO stroke increasing by 4.2% per month (P<0.001). A significant change occurred for group A at the transition point for all outcomes, with decreasing rates of admissions for AIS, non-LVO AIS, and LVO AIS, and decreasing rates of EST and tPA treatments (P<0.001).

Conclusion: Upgrading to EPH status was associated with a roughly 2% per month increase in AIS admissions during the first year post-transition for the upgrading hospitals, but with decreasing volumes and treatments at the established EPH. These findings quantify the impact on AIS admissions in hospital systems with increasing EST access in community hospitals.
Key words
Stroke, Intervention, Political