Effects of TFPI on Myocardial Ischemia-Reperfusion Injury in Rats and a Preliminary Exploration of Its Mechanism
CNKI (2023)
Abstract
Objective: To investigate the effects of tissue factor pathway inhibitor (TFPI) on myocardial ischemia-reperfusion (I/R) injury in rats and on hypoxia-reoxygenation (H/R) injury in cardiomyocytes, and to explore the underlying mechanism through changes in cardiomyocyte apoptosis.

Methods: In vivo, a rat myocardial I/R model was established in SD rats by reversible occlusion of the left anterior descending coronary artery using in-situ ligation. Rats were randomly divided into a control group, an I/R group, and an I/R+rTFPI group. Three days after reperfusion, HE staining was used to observe morphological changes in myocardial tissue, TTC staining to assess infarct size, scanning transmission electron microscopy to evaluate myocardial ultrastructural injury, and Western blot to measure Bcl-2, Bax, and cleaved caspase-3 protein expression in myocardial tissue of each group. In vitro, primary cardiomyocytes from neonatal SD rats were cultured by trypsin digestion and differential adhesion. Cardiomyocyte I/R injury was simulated with the MIC101 system, and an in vitro hypoxia/reoxygenation (H/R) model was established by 2 h of hypoxia followed by 12 h of reoxygenation. Cardiomyocytes were divided into a control group, an H/R group, and an H/R+rTFPI (10 μg/L) group. Cell viability was measured by CCK-8 assay, apoptosis rate by TUNEL staining, and Bax, Bcl-2, and cleaved caspase-3 protein levels by Western blot.

Results: In vivo, the rat myocardial I/R model was successfully established. HE staining showed more severe cardiomyocyte necrosis in the I/R group than in the control group, and less necrosis in the I/R+rTFPI group than in the I/R group. TTC staining showed that infarct size in the I/R+rTFPI group was reduced by 39.76% compared with the I/R group (P<0.05). Scanning transmission electron microscopy showed more severe apoptosis and injury in the I/R group than in the control group, and less in the I/R+rTFPI group than in the I/R group. Western blot showed that 3 days after reperfusion, Bcl-2 expression in the myocardial tissue of the I/R group was 53.43% lower than in the control group (P<0.05), while Bax and cleaved caspase-3 were 29.05% and 73.25% higher, respectively (P<0.05); in the I/R+rTFPI group, Bcl-2 expression was 55.01% higher than in the I/R group (P<0.05), while Bax and cleaved caspase-3 were 13.77% and 24.25% lower, respectively (P<0.05). In vitro, CCK-8 assay showed that cell viability in the H/R group was 29.70% lower than in the control group (P<0.05), and viability in the H/R+rTFPI group was 19.77% higher than in the H/R group (P<0.05). TUNEL staining showed that the apoptosis rate in the H/R group was 56.76% higher than in the control group, and 24.55% lower in the H/R+rTFPI group than in the H/R group (P<0.05). Western blot showed that in the H/R group, Bcl-2 expression was 46.92% lower and Bax expression 41.90% higher than in the control group (P<0.05), while cleaved caspase-3 expression was 2.68-fold higher (P<0.05); in the H/R+rTFPI group, Bcl-2 expression was 28.24% higher than in the H/R group (P<0.05), and Bax and cleaved caspase-3 expression were 26.34% and 57.60% lower, respectively (P<0.05).

Conclusion: TFPI significantly attenuates myocardial I/R injury and cardiomyocyte H/R injury, and this effect is related to its inhibition of cardiomyocyte apoptosis.