Comparison of Adaptive and Non-Adaptive Pacing Modes on Time-to-Peak dP/dt in Multipoint Pacing or Standard Biventricular Pacing with Different Degrees of Intraventricular Fusion

EP Europace (2021)

Oslo University Hospital Rikshospitalet

Abstract
Funding Acknowledgements
Type of funding sources: public grant(s), national budget only. Main funding source: Norwegian South-East Health Authorities.

Background
We have investigated the timing of the peak left ventricular (LV) pressure rise, the time to peak dP/dt (Td), as a marker of resynchronization that can be measured during implantation to detect effective resynchronization. Td links the time domain (dyssynchrony) to the mechanical domain (pressure): the dyssynergic muscular contractions resulting from electrical dyssynchrony delay pressure development and hence the timing of peak dP/dt. Td shortens with resynchronization.

Purpose
In this study we investigated acute changes in Td by comparing LV pacing with fusion of intrinsic right ventricular (RV) conduction (Adaptive, A) against pacing of both RV and LV (Non-Adaptive, NA), with and without multipoint pacing (MPP) and with different intraventricular pacing delays (RV-LV).

Methods
Nineteen patients in sinus rhythm with LBBB undergoing CRT implantation were studied. Pressures were measured with an indwelling LV pressure catheter. Td was calculated as the time from the onset of pacing to peak dP/dt and averaged over 10 consecutive beats at each stage of pacing. Quadripolar LV pacing leads were positioned in what was considered an optimal mid/basal posterolateral/lateral branch of the coronary sinus, and sequential pacing (DDD) was performed. Adaptive and Non-Adaptive pacing was delivered at the distal LV electrode [LVdist], at the proximal electrode [LVprox], and at both electrodes as multipoint pacing [MPP]. VV-timing: LV pacing was timed relative to QRS onset (resulting from either intrinsic activation or RV pacing; mean ± SD): (1) LVonly, −76 ± 21 ms before QRS activation, with minimal fusion with RV activation; (2) Pre, −28 ± 14 ms before QRS activation; (3) Post, 12 ± 15 ms after QRS activation. Linear mixed models were used for statistics on the pooled data. Results are estimated marginal means ± SEM, and only significant (P < 0.05) changes are reported.

Results
Average Td (pooled data) was 173 ± 2 ms with RVP, 144 ± 0.4 ms with MPP, and 150 ± 0.4 ms with BIVP. Analyzing the interaction between pacing mode (A, NA), VV-timing (LVonly, Pre, Post), and electrode (LVdist, LVprox, MPP) across all interventions, Td was shorter (p < 0.01) with A(Post) for all electrode combinations: [LVdist] 143 ± 4 ms, [LVprox] 140 ± 4 ms, and [MPP] 134 ± 4 ms, whereas Td with A(Pre) was shorter with [MPP] only (139 ± 4 ms). A(Post)[MPP] provided a shorter Td than the other Adaptive modes (p < 0.01). NA(Post)[MPP] at 145 ± 4 ms and NA(Post)[LVdist] at 146 ± 4 ms provided the shortest Td (p < 0.01) of the Non-Adaptive pacing modes, and Td with NA(Post)[MPP] was shorter (p < 0.01) than with all other NA pacing modes.

Conclusion
Td shortens the most with LV MPP timed to near-simultaneous intrinsic RV activation, indicating a beneficial mechanical effect of Adaptive MPP compared with standard biventricular pacing.
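The Td measurement described in the Methods (time from pacing onset to peak dP/dt, averaged over 10 consecutive beats) can be sketched in code. This is a minimal illustration, not the authors' implementation: the function names, the sampled-pressure representation, and the use of a simple numerical gradient for dP/dt are all assumptions for the sketch.

```python
import numpy as np

def time_to_peak_dpdt(pressure, fs, pace_onset_idx):
    """Td for one beat: time from pacing onset to peak dP/dt, in ms.

    pressure      : 1-D array of LV pressure samples for one beat (mmHg)
    fs            : sampling frequency (Hz)
    pace_onset_idx: sample index of the pacing stimulus
    (Illustrative names; the abstract does not specify an implementation.)
    """
    dpdt = np.gradient(pressure) * fs           # dP/dt in mmHg/s
    peak_idx = int(np.argmax(dpdt))             # sample of peak dP/dt
    return (peak_idx - pace_onset_idx) / fs * 1e3  # convert s -> ms

def mean_td(beats, fs, onsets):
    """Average Td over consecutive beats (the study averaged 10 beats)."""
    return float(np.mean([time_to_peak_dpdt(p, fs, i)
                          for p, i in zip(beats, onsets)]))
```

For example, a sigmoid pressure upstroke sampled at 1 kHz with its steepest rise 150 ms after the pacing stimulus yields Td ≈ 150 ms.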