Bioinformatic analysis of circular RNA circRTN4IP1 in hepatocellular carcinoma and its association with prognosis
Chinese Journal of Hepatic Surgery (Electronic Edition) (2021)
Abstract
Objective: To investigate the expression of circular RNA circRTN4IP1 in hepatocellular carcinoma (HCC) and its association with prognosis.

Methods: The GEO datasets GSE97332, GSE94508, and GSE78520 were analyzed with the limma package in R to jointly screen differentially expressed circular RNAs. The miRNAs binding circRTN4IP1 were predicted with the CSCD website, and their target genes were jointly predicted with the TargetScan, miRDB, and miRTarBase databases, followed by Gene Ontology (GO) functional and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses. The STRING online tool was used to analyze target-gene function and the protein-protein interaction (PPI) network and to screen hub genes, whose association with prognosis was assessed with the Kaplan-Meier Plotter database. Tumor tissues and paired adjacent tissues were collected from 60 HCC patients who underwent surgical resection at Sun Yat-sen Memorial Hospital of Sun Yat-sen University between August 2016 and June 2018; tissue circRTN4IP1 expression was measured by quantitative real-time RT-PCR and analyzed against the patients' clinicopathological features and prognosis. Relative circRTN4IP1 expression between the two groups was compared with the t test, rates were compared with the χ2 test, and survival was analyzed with the Kaplan-Meier method and log-rank test.

Results: GEO dataset analysis identified circRTN4IP1 as significantly upregulated in HCC, and 60 binding miRNAs and 581 target genes were predicted. GO and KEGG analyses showed that the target genes mainly participate in biological processes including transcription, translation, protein ubiquitination, and the cell cycle, and in signaling pathways including p38 MAPK, PI3K/AKT, FoxO, and TGF-β/Smad. Eight hub genes were screened; HCC patients with high expression of UBE2D1, H2AFX, UBE2V1, LRRC41, or POLR2D had significantly shorter overall survival than those with low expression (HR = 1.61, 1.93, 1.72, 1.88, 1.51; P < 0.05). Quantitative RT-PCR showed a mean relative circRTN4IP1 expression of 0.759 ± 0.020 in HCC tumor tissue, significantly higher than 0.625 ± 0.024 in adjacent tissue (t = 8.385, P < 0.05). circRTN4IP1 expression in HCC tissue was associated with AFP level (χ2 = 4.267, P < 0.05), and disease-free survival was significantly worse in the high-expression group than in the low-expression group (χ2 = 5.055, P < 0.05).

Conclusions: circRTN4IP1 is highly expressed in HCC and may participate in regulating HCC initiation and progression by upregulating the target genes UBE2D1, H2AFX, UBE2V1, LRRC41, and POLR2D; it is a potential prognostic marker and therapeutic target.
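The tumor-versus-adjacent comparison above uses a paired design (t = 8.385 across 60 matched pairs). A minimal sketch of the underlying paired t statistic, in pure Python with hypothetical expression values (the actual analysis was done in R, and the data shown here are illustrative only):

```python
import math
from statistics import mean, stdev

def paired_t(tumor, adjacent):
    """Paired t statistic for matched tumor vs. adjacent-tissue expression.

    t = mean(d) / (sd(d) / sqrt(n)), where d are the per-patient
    tumor-minus-adjacent differences and n is the number of pairs.
    """
    diffs = [t_val - a_val for t_val, a_val in zip(tumor, adjacent)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical relative-expression values for three matched pairs:
t_stat = paired_t([1.0, 2.0, 3.0], [0.5, 1.0, 2.5])
```

A positive t statistic indicates higher mean expression in tumor tissue; the corresponding P value would be read from the t distribution with n − 1 degrees of freedom.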