TIGIT Expression in Renal Cell Carcinoma Infiltrating T Cells is Variable and Inversely Correlated with PD-1 and LAG3.
Cancer Informatics (2024)
Abstract
Immune checkpoint inhibitors have revolutionized the treatment of renal cell carcinoma (RCC), but many patients do not respond to therapy and the majority develop resistant disease over time. Thus, there is an increasing need for alternative immunomodulating agents. The co-inhibitory molecule T-cell immunoglobulin and ITIM domain (TIGIT) may play a role in resistance to approved immune checkpoint inhibitors and is being investigated as a potential therapeutic target. The purpose of this study was to quantify TIGIT positivity in tumor-infiltrating T cells in RCC. We employed tissue microarrays containing specimens from primary RCC tumors, adjacent normal renal tissue, and RCC metastases to quantify TIGIT within tumor-infiltrating CD3+ T cells using quantitative immunofluorescence analysis. We also compared these results to TIGIT+ CD3+ T-cell levels in four other tumor types (melanoma, non-small cell lung, cervical, and head and neck cancers). We did not observe significant differences in TIGIT positivity between primary RCC tumors and patient-matched metastatic samples. We found that the degree of TIGIT positivity in RCC is comparable to that in lung cancer but lower than that in melanoma, cervical, and head and neck cancers. Correlation analysis comparing TIGIT positivity to previously published, patient-matched spatial proteomic data from our group revealed a negative association between TIGIT and the checkpoint proteins PD-1 and LAG3. Our findings support careful evaluation of TIGIT expression on T cells in primary or metastatic RCC specimens for patients who may be treated with TIGIT-targeting antibodies, as increased TIGIT positivity might be associated with a greater likelihood of response to therapy.
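To make the correlation step concrete, the sketch below shows one way such an analysis could be run in Python. This is an illustration only: the arrays, sample values, and variable names are hypothetical placeholders (the paper's actual measurements come from quantitative immunofluorescence on tissue microarrays and matched spatial proteomic data), and Spearman's rank correlation is offered as a plausible choice for non-normally distributed marker data, not as the authors' confirmed method.

```python
# Illustrative sketch only; all values below are hypothetical, not study data.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-specimen measurements: fraction of CD3+ T cells scoring
# TIGIT+, and patient-matched proteomic signal for PD-1 and LAG3 (arbitrary units).
tigit_pos_fraction = np.array([0.12, 0.35, 0.08, 0.51, 0.27, 0.19, 0.44, 0.30])
pd1_signal = np.array([8.1, 4.2, 9.7, 2.9, 5.5, 7.3, 3.6, 5.0])
lag3_signal = np.array([6.4, 3.1, 7.9, 2.2, 4.8, 6.0, 2.7, 4.1])

# Spearman's rank correlation is robust to non-normal marker distributions;
# a negative rho would mirror the inverse TIGIT/PD-1 and TIGIT/LAG3
# associations reported in the abstract.
for name, signal in [("PD-1", pd1_signal), ("LAG3", lag3_signal)]:
    rho, p = spearmanr(tigit_pos_fraction, signal)
    print(f"TIGIT vs {name}: rho={rho:.2f}, p={p:.3f}")
```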
Key words
TIGIT, Renal cell carcinoma, Kidney cancer, Immune checkpoint inhibitors, LAG3, PD-1