Contrastive Learning with Transformer for Adverse Endpoint Prediction in Patients on DAPT Post-Coronary Stent Implantation
Frontiers in Cardiovascular Medicine (2025)
Mayo Clinic | University of Texas Health Science Center at Houston | University of Pennsylvania | University of Florida Health
Abstract
Background: Effective management of dual antiplatelet therapy (DAPT) following drug-eluting stent (DES) implantation is crucial for preventing adverse events. Traditional prognostic tools, such as rule-based methods or Cox regression, are widely used and easy to apply but tend to yield only moderate predictive accuracy within fixed timeframes. This study introduces a contrastive learning-based approach to improve prediction across multiple time intervals.

Methods: We used retrospective, real-world data from the OneFlorida+ Clinical Research Consortium. The study focused on two primary endpoints, ischemic and bleeding events, with prediction windows of 1, 2, 3, 6, and 12 months after DES implantation. Our approach first applies an auto-encoder to compress patient features into a condensed, more manageable representation. A Transformer architecture with multi-head attention then amplifies the most salient features, refining the representation for prediction. Finally, contrastive learning sharpens the model's discriminative power by maximizing intra-class similarity and inter-class separation. The model is optimized holistically with multiple loss functions so that its predictions align closely with ground-truth outcomes from several perspectives. We benchmarked performance against three state-of-the-art deep learning survival models: DeepSurv, DeepHit, and SurvTrace.

Results: The final cohort comprised 19,713 adult patients who underwent DES implantation and had more than 1 month of records after coronary stenting. Our approach showed superior predictive performance for both ischemic and bleeding events across the 1-, 2-, 3-, 6-, and 12-month windows, with time-dependent concordance (Ctd) index values ranging from 0.88 to 0.80 and from 0.82 to 0.77, respectively. It consistently outperformed DeepSurv, DeepHit, and SurvTrace, with statistically significant improvements in Ctd-index values in most evaluated scenarios.

Conclusion: The robust performance of our contrastive learning-based model underscores its potential to substantially improve DAPT management. By delivering precise predictions at multiple time points, the method supports adaptive, personalized therapeutic strategies in cardiology and thereby offers substantial value in improving patient outcomes.
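To make the Methods concrete, the sketch below gives one plausible PyTorch realization of the three stages the abstract describes: an auto-encoder that condenses patient features, a Transformer encoder layer whose multi-head attention amplifies the salient dimensions, and a supervised contrastive loss that maximizes intra-class similarity while separating classes, combined with reconstruction and event-prediction terms as a multi-loss objective. The class name, layer sizes, five-window risk head, and temperature are illustrative assumptions, not the authors' published configuration.

```python
# A minimal sketch of the three-stage pipeline described in the Methods,
# assuming a PyTorch implementation. All hyperparameters are guesses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DAPTRiskEncoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 64, n_heads: int = 4):
        super().__init__()
        # Stage 1: auto-encoder that compresses raw patient features.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_features)
        )
        # Stage 2: Transformer layer whose multi-head attention re-weights
        # the latent representation (treated here as a length-1 sequence).
        self.attn = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=n_heads, batch_first=True
        )
        # Risk head: one logit per prediction window (1/2/3/6/12 months).
        self.risk_head = nn.Linear(latent_dim, 5)

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)                       # condensed representation
        x_hat = self.decoder(z)                   # reconstruction target
        h = self.attn(z.unsqueeze(1)).squeeze(1)  # attention-refined embedding
        return z, x_hat, h, self.risk_head(h)

def supervised_contrastive_loss(h, labels, temperature=0.1):
    """Stage 3: pull same-outcome patients together, push classes apart."""
    h = F.normalize(h, dim=1)
    sim = h @ h.t() / temperature                      # pairwise similarity
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pos.fill_diagonal_(0.0)                            # exclude self-pairs
    sim = sim - 1e9 * torch.eye(len(h), device=h.device)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

# Toy batch: the combined objective mirrors the abstract's "multiple loss
# functions" (reconstruction + contrastive + per-window event prediction).
model = DAPTRiskEncoder(n_features=128)
x, y = torch.randn(32, 128), torch.randint(0, 2, (32,))
z, x_hat, h, risk = model(x)
loss = (F.mse_loss(x_hat, x)
        + supervised_contrastive_loss(h, y)
        + F.binary_cross_entropy_with_logits(risk[:, 0], y.float()))
loss.backward()
```

Summing the three terms is one common way to realize the "holistic optimization" the abstract mentions; the paper may weight or schedule the losses differently.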
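The Results summarize discrimination with the time-dependent concordance (Ctd) index. For reference, the sketch below shows a minimal, unweighted Antolini-style estimator: a pair of patients is comparable when one patient's observed event strictly precedes the other's follow-up time, and concordant when the model ranks that earlier-event patient as higher risk at the event time. IPCW censoring weights are omitted, and the inputs are toy values; this is not the paper's evaluation code.

```python
# A minimal, unweighted Antolini-style estimator of the time-dependent
# concordance (Ctd) index, for illustration only.
import numpy as np

def ctd_index(event_times, event_observed, risk_at_time):
    """risk_at_time(i, t) -> predicted risk for patient i at time t."""
    concordant, comparable = 0.0, 0
    n = len(event_times)
    for i in range(n):
        if not event_observed[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if event_times[i] < event_times[j]:
                comparable += 1
                ri = risk_at_time(i, event_times[i])
                rj = risk_at_time(j, event_times[i])
                concordant += 1.0 if ri > rj else 0.5 if ri == rj else 0.0
    return concordant / comparable if comparable else float("nan")

# Toy usage: earlier events carry higher toy risks, so Ctd = 1.0.
times = np.array([2.0, 5.0, 8.0])
events = np.array([True, True, False])
risks = np.array([0.9, 0.5, 0.1])  # time-constant risks for simplicity
print(ctd_index(times, events, lambda i, t: risks[i]))
```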
Key words
dual antiplatelet therapy, contrastive learning, transformer, predictive modeling, adverse endpoint, drug-eluting coronary artery stent implantation, survival analysis