Phase IIb Randomized, Blinded, Controlled Trial Evaluating Neoadjuvant PD-1 Blockade Combined with A2B5+ Glioma Stem-Like Cell Lysate-Loaded DC Vaccine in Recurrent Glioblastoma (IDH1/2 -).
Bone Marrow Transplantation (2024) | SCI Zone 3
Huashan Hospital | Saint John's Health Center | Shanghai Clinical Research Center
Abstract
Background: Our previous study showed that an A2B5+ glioma stem-like cell lysate-loaded dendritic cell vaccine (GSC-DCV) extended survival in recurrent glioblastoma (GBM). Our earlier work also indicated that A2B5+ GSCs, which are enriched in tumor-specific antigens such as URGCP, can activate CD8+ T cells via dendritic cell antigen presentation. Recent research on recurrent GBM suggests that neoadjuvant anti-PD-1 (aPD-1) monoclonal antibody (mAb) therapy further extends survival. This study assessed the safety and efficacy of the combined therapeutic approach.

Methods: Patients with recurrent GBM (IDH1/2 -) who had completed radiotherapy and chemotherapy were enrolled. All patients received aPD-1 mAbs before surgery. After surgery, patients were randomly assigned to a monotherapy arm (aPD-1 mAbs plus placebo) or a combination-therapy arm (aPD-1 mAbs plus GSC-DCV), with treatment given every 3-6 weeks until disease progression or intolerable toxicity. The primary endpoint was overall survival (OS); secondary endpoints included progression-free survival (PFS) and treatment-related adverse events (trAEs).

Results: A total of 21 patients were randomly assigned to the monotherapy (n=11) and combination-therapy (n=10) arms, and patient characteristics were well balanced. Median OS was 8.2 months in the monotherapy arm versus a significantly longer 22.7 months in the combination arm (HR, 0.2774; 95% CI, 0.0828 to 0.9291; P=0.0376). Multivariate Cox analysis confirmed that the combination therapy was an independent prognostic factor in recurrent GBM (HR, 9.911; 95% CI, 1.520 to 64.623; P=0.016). PFS did not differ significantly between the two arms. Notably, post-progression survival (PPS) was substantially longer in the combination group than in the monotherapy group (7.8 vs. 1.4 months; P=0.0266). The prolonged survival in the combination group was associated with continuous treatment, as cessation was followed by short-term tumor progression. In a subgroup analysis by tumor burden, combination therapy was significantly more effective in the low-tumor-burden cohort (median OS, 23.2 vs. 9.6 months; P=0.0162), whereas no such difference between cohorts was observed in the monotherapy group. High URGCP expression was significantly positively associated with OS in patients receiving combination therapy (R²=0.8608, P=0.0061). Grade 1-2 trAEs occurred in 27.3% of patients in the monotherapy arm and 50.0% in the combination-therapy arm; no grade 3 or higher trAEs occurred.

Conclusions: The combination therapy was safe and well tolerated. Although PFS did not improve significantly, the combination arm achieved a substantially longer OS than the monotherapy arm. Notably, the combination therapy was particularly effective in patients with a low tumor burden treated with long-term, multi-course therapy.

Clinical trial information: NCT04888611.
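As a rough illustration of the survival statistics reported above (per-arm median OS, the between-arm comparison, and the adjusted Cox hazard ratio), the sketch below runs the standard Kaplan-Meier, log-rank, and Cox proportional-hazards analyses with the open-source Python package lifelines. The data frame and every column name (os_months, event, combination, low_burden) are invented placeholders, not the trial's dataset; this is a minimal sketch of the analysis type, not the authors' actual code.

# Minimal sketch (assumed schema): Kaplan-Meier medians, a log-rank test, and
# a small multivariate Cox model, as commonly reported for trials like this.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: OS in months, death indicator (1 = event),
# arm indicator (1 = aPD-1 + GSC-DCV), and a baseline tumor-burden covariate.
df = pd.DataFrame({
    "os_months":   [8.2, 6.1, 9.5, 5.0, 22.7, 25.0, 18.3, 30.1],
    "event":       [1, 1, 1, 1, 1, 0, 1, 0],
    "combination": [0, 0, 0, 0, 1, 1, 1, 1],
    "low_burden":  [1, 0, 1, 0, 1, 0, 1, 1],
})

# Kaplan-Meier estimate and median OS for each arm.
kmf = KaplanMeierFitter()
for arm, grp in df.groupby("combination"):
    kmf.fit(grp["os_months"], grp["event"], label=f"combination={arm}")
    print(f"combination={arm}: median OS =", kmf.median_survival_time_)

# Log-rank comparison of the two OS curves.
mono, combo = df[df["combination"] == 0], df[df["combination"] == 1]
result = logrank_test(mono["os_months"], combo["os_months"],
                      event_observed_A=mono["event"],
                      event_observed_B=combo["event"])
print("log-rank p =", result.p_value)

# Multivariate Cox model: hazard ratio for the combination arm adjusted for
# tumor burden (a small penalizer keeps the fit stable on tiny toy data).
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="os_months", event_col="event")
cph.print_summary()  # exp(coef) column gives HRs with 95% CIs and p-values

With the real per-patient data and the trial's actual covariates, the exp(coef) row for the arm indicator would correspond to the adjusted hazard ratio quoted in the Results.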