Detection of Copy Number Variants Associated with Late-Onset Conditions in ∼16200 Pregnancies: Parameters for Disclosure and Pregnancy Outcome
Journal of Medical Genetics (2023)
Hadassah Med Org | Hebrew Univ Jerusalem | Beilinson Med Ctr | Shaare Zedek Med Ctr | Hadassah Med Ctr
Abstract
Background: Copy number variants (CNVs) associated with late-onset medical conditions are rare but important secondary findings in chromosomal microarray analysis (CMA) performed during pregnancy. Here, we critically review cases at two tertiary centres to assess the criteria that guide the disclosure of such findings, and develop a disclosure decision tool (DDT) aimed at facilitating disclosure decisions. Parental decisions on receiving CNVs associated with risks for late-onset conditions were also recorded.
Methods: Prenatal CMAs performed at the Hadassah and Shaare Zedek Medical Centers from November 2013 to October 2021 were reviewed for CNVs associated with late-onset conditions. The proposed DDT uses a five-parameter scoring system that considers the severity, median age of onset, penetrance, degree of genotype-phenotype understanding, and actionability of the finding.
Results: Of 16 238 prenatal CMAs, 16 (0.1%) harboured CNVs associated with late-onset conditions, 15 of which were disclosed. Outcome information was available for 13 of the 16 pregnancies, all of which continued to delivery.
Conclusions: Our suggested DDT will help clinicians quantitatively weigh the variables associated with CNVs of this type and arrive at a well-considered clinical decision regarding disclosure. Although the prevalence of such findings in the prenatal setting is currently low, it is expected to rise with the increasing use of non-invasive CMA testing and whole exome and genome sequencing.
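The abstract describes the DDT only at a high level: five parameters are scored and combined to support a disclosure decision. As a rough illustration of how such an additive scoring tool might be structured, here is a minimal Python sketch. The 0-2 scale per parameter, the disclosure threshold, and all names (DDTScore, recommend_disclosure, DISCLOSURE_THRESHOLD) are illustrative assumptions, not taken from the paper, which defines its own clinical scales and cut-off.

```python
from dataclasses import dataclass

@dataclass
class DDTScore:
    """Hypothetical encoding of the five DDT parameters (0-2 scale assumed)."""
    severity: int             # 0 = mild ... 2 = severe
    age_of_onset: int         # 0 = late median onset ... 2 = early
    penetrance: int           # 0 = low ... 2 = high
    genotype_phenotype: int   # 0 = poorly understood ... 2 = well established
    actionability: int        # 0 = no intervention available ... 2 = clearly actionable

    def total(self) -> int:
        # Simple additive combination; the paper may weight parameters differently.
        return (self.severity + self.age_of_onset + self.penetrance
                + self.genotype_phenotype + self.actionability)

DISCLOSURE_THRESHOLD = 6  # illustrative cut-off, not taken from the paper

def recommend_disclosure(score: DDTScore) -> bool:
    """Return True if the aggregate score favours disclosing the CNV."""
    return score.total() >= DISCLOSURE_THRESHOLD

# Example: a severe, highly penetrant, well-characterised, actionable finding.
finding = DDTScore(severity=2, age_of_onset=1, penetrance=2,
                   genotype_phenotype=2, actionability=2)
print(recommend_disclosure(finding))  # True
```

The sketch only shows how five such scores might combine additively into a single disclosure recommendation; the clinical definitions behind each score are those given in the paper itself.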