Are Community Oncology Practices with or Without Clinical Research Programs Different? A Comparison of Patient and Practice Characteristics.
JNCI Cancer Spectrum (2024)
Flatiron Health Inc
Abstract
Background: Expanding access to clinical trials in community settings is a potential approach to addressing disparities in accrual of historically underrepresented populations. However, little is known about the characteristics of practices that do not participate in research. We investigated differences in patient and practice characteristics of US community oncology practices with high vs low engagement in clinical research.

Methods: We included patients from a real-world, nationwide, electronic health record-derived, de-identified database who received active treatment for cancer at community oncology practices between November 1, 2017, and October 31, 2022. We assessed patient and practice characteristics and their associations with high vs low research engagement using descriptive analyses and logistic regression models.

Results: Of the 178 practices, 70 (39.3%) had high research engagement, treated 57.8% of the overall 568 540-patient cohort, and enrolled 3.25% of their patients in cancer treatment trials during the 5-year observation period (vs 0.27% enrollment among low-engagement practices). Practices with low vs high research engagement treated higher proportions of the following patient groups: ages 75 years and older (24.2% vs 21.8%), non-Latinx Black (12.6% vs 10.3%) or Latinx (11.6% vs 6.1%), within the lowest socioeconomic status quintile (21.9% vs 16.5%), and uninsured or with no documented insurance (22.2% vs 13.6%).

Conclusions: Patient groups historically underrepresented in oncology clinical trials are more likely to be treated at community practices with limited or no access to trials. These results suggest that investments to expand the clinical research footprint among practices with low research engagement could help address persistent inequities in trial representation.
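The Methods describe assessing associations between practice characteristics and high vs low research engagement using logistic regression. A minimal sketch of that kind of analysis is shown below; the covariates, coefficients, and simulated data are entirely hypothetical and are not taken from the study.

```python
# Hypothetical sketch of a practice-level logistic regression:
# outcome = high (1) vs low (0) research engagement,
# covariates = illustrative patient-mix shares per practice.
# All names and numbers here are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_practices = 1000

# Illustrative practice-level covariates (proportions of the patient panel)
pct_age_75_plus = rng.uniform(0.10, 0.40, n_practices)
pct_uninsured = rng.uniform(0.00, 0.30, n_practices)
X = np.column_stack([pct_age_75_plus, pct_uninsured])

# Simulate an outcome where a larger uninsured share lowers the odds
# of high engagement (a made-up effect, chosen only for the sketch)
logits = 1.0 - 8.0 * pct_uninsured
y = (rng.random(n_practices) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # per-unit odds ratios for each covariate
print(odds_ratios)
```

In a real analysis of this design, the fitted odds ratios (with confidence intervals, typically from a package such as statsmodels rather than scikit-learn) would quantify how each patient-mix characteristic relates to a practice's odds of high research engagement.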