
An experimental study of continuous renal replacement therapy in series with a low-flow membrane oxygenator for the treatment of severe acute respiratory distress syndrome

Journal of Medical Postgraduates (2019)

Abstract
Objective: Mechanical ventilation alone sometimes fails to adequately improve oxygenation or remove excess CO2 from the blood in severe acute respiratory distress syndrome (ARDS). This study examined the efficacy of continuous renal replacement therapy (CRRT) connected in series with a veno-venous (V-V) low-flow, low-resistance membrane oxygenator (ECMO) in clearing inflammatory mediators, improving oxygenation, and lowering blood CO2 in dogs with ARDS complicated by hypercapnia.

Methods: ARDS with hypercapnia was induced in 30 healthy adult male mongrel dogs by intravenous injection of oleic acid, and the animals were randomized (random number table) into a sham operation group, an ECMO group, and a combined treatment group (n=10 each). The sham group received invasive mechanical ventilation only; the ECMO group received low-flow extracorporeal membrane oxygenation; the combined group additionally received V-V CRRT connected in series with the low-flow membrane oxygenator. Heart rate (HR), mean arterial pressure (MAP), cardiac output (CO), and blood TNF-α and IL-6 concentrations were measured at five time points: model establishment (T0) and hours 1, 3, 6, and 9 on support (T1, T3, T6, T9). In the ECMO and combined groups, the oxygenation index (OI), PaCO2, and body temperature were measured at five time points: T0; hours 3, 6, and 9 on support; and 3 h after treatment was stopped (T12). Within-group and between-group differences were compared for all of these indices.

Results: Compared with the sham group at T6 and T9, HR in the ECMO and combined groups was lower at the same time points (P<0.05); within both the ECMO and combined groups, HR at T6 and T9 was lower than at T0 (P<0.05). HR in the combined group was lower than in the ECMO group at T6 and T9 (P<0.05). HR, MAP, and CO did not differ among the three groups at T0 (P>0.05). In the combined group, IL-6 at T3, T6, and T9 (276.13±8.32, 262.04±7.15, and 259.33±7.31 ng/L) was lower than at T0 (341.13±18.78 ng/L) (P<0.05), and TNF-α at T3, T6, and T9 (50.14±1.75, 50.45±1.81, and 48.03±1.24 ng/L) was lower than at T0 (64.07±3.03 ng/L) (P<0.05). IL-6 and TNF-α in the combined group were lower than in the ECMO group at T3, T6, and T9 (P<0.05), and lower than in the sham group at the same time points (sham IL-6: 343.76±21.97, 345.91±19.89, and 340.34±22.17 ng/L; sham TNF-α: 68.10±2.96, 67.31±3.01, and 70.34±3.35 ng/L) (P<0.05). Compared with the sham group at T9 and T12, OI in the combined group was lower at the corresponding time points (P<0.05); compared with the sham group at T3, T6, and T9, PaCO2 in the ECMO and combined groups was lower at the corresponding time points (P<0.05). Within the combined group, OI at T9 and T12 was lower than at T6 (P<0.05); within both the ECMO and combined groups, PaCO2 at T3, T6, and T9 was lower than at T0 (P<0.05).

Conclusion: A system of V-V CRRT connected in series with a low-resistance membrane oxygenator can safely and effectively clear inflammatory mediators in patients with ARDS complicated by hypercapnia (in the absence of severe pulmonary infection and with good baseline cardiopulmonary function), markedly reduce hypercapnia, and improve oxygenation to some extent, thereby improving therapeutic outcomes.
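
The abstract reports only group means, standard deviations, group sizes (n=10), and P-value thresholds; the statistical test used is not stated. As a rough plausibility check, the between-group comparisons can be recomputed from those summary statistics. The minimal Python sketch below does this for the sham vs. combined-group IL-6 comparison at T3 (343.76±21.97 vs. 276.13±8.32 ng/L), assuming an independent two-sample Welch t-test, which is an assumption for illustration, not the authors' stated method.

# Plausibility check of the reported sham vs. combined-group IL-6
# difference at T3, using only the summary statistics given in the
# abstract. The authors' actual test is not stated; a Welch
# (unequal-variance) two-sample t-test is assumed here.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=343.76, std1=21.97, nobs1=10,  # sham group, IL-6 at T3 (ng/L)
    mean2=276.13, std2=8.32, nobs2=10,   # combined group, IL-6 at T3 (ng/L)
    equal_var=False,                     # Welch's t-test
)
print(f"t = {t:.2f}, p = {p:.2e}")

Under this assumption, t is roughly 9.1 and p is far below 0.05, consistent with the significance level the abstract reports for this comparison.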