
Research progress on the roles and mechanisms of miRNA-146b in various diseases

Journal of Jiangsu University (Medicine Edition), 2018

Abstract
miRNA-146b (miR-146b; its human homolog is also known as hsa-miR-146b-5p) and miRNA-146a are both members of the miRNA-146 family. Murine miR-146b is encoded by the MIRNA146B gene located at 19qC3 on chromosome 19, whereas murine miR-146a is encoded by the MIRNA146A gene located at 11qA5 on chromosome 11; the two differ structurally by only two nucleotides at the 3′ end, a region with little influence on target recognition [1]. Consequently, miR-146a and miR-146b can bind the same transcripts to achieve the same translational repression and share similar regulatory pathways. miR-146b can inhibit the proliferation, invasion, and metastasis of tumor cells, and its expression is significantly altered in papillary thyroid carcinoma (PTC), breast cancer, prostate cancer, diffuse large B-cell lymphoma (DLBCL), and glioma. In addition, miR-146b plays a key role in infectious diseases such as pediatric otitis media and in degenerative diseases such as Alzheimer's disease and retinal macular degeneration. This article reviews the dual roles of miRNA-146b in these diseases and the mechanisms involved.