
eQTL Analysis of Key Genes for Monoterpene Synthesis in Table Grape Berries

wf (2022)

Abstract
[Objective] Through eQTL mapping of key monoterpene-synthesis genes in table grape berries and mining of candidate genes, this study aimed to deepen understanding of the regulatory mechanism of monoterpene synthesis and to lay a foundation for breeding new Muscat-flavored grape cultivars and for germplasm improvement.

[Method] An F1 population derived from the cross 'Moldova' × 'Ruidu Xiangyu' and its two parents were used as materials; berry samples were collected at véraison and at maturity. Real-time quantitative PCR (qPCR) was used to measure the expression of seven monoterpene-pathway genes (VvDXS1, VvDXS3, VvDXR, VvHDR, VvLiner, VvTerp and VvGermD), yielding expression-trait phenotype data. eQTL mapping of the expression traits was performed with MapQTL 6.0 using the interval mapping method. Markers linked to the eQTLs were anchored to genomic regions, and genes in those regions were annotated against the Ensembl Plants and NCBI databases. A grape whole-genome microarray was used to profile the expression of candidate genes in parental berry samples at different developmental stages.

[Result] Expression levels of the seven monoterpene-synthesis genes showed continuous distributions in the F1 population, characteristic of quantitative inheritance, and the expression levels of the genes were significantly correlated with one another. At véraison, 13 eQTLs were detected for the seven expression traits, located mainly on chromosomes 1, 6, 14, 16, 17, 10 and 12, with phenotypic variance explained ranging from 12.2% to 23.5%. Among these, the eQTLs on chromosome 14 (qDXS1-v14, qHDR-v14-1 and qTerp-v14) covered the same genetic interval, 57.582–76.979 cM, and qLiner-v10, qTerp-v10 and qGermD-v10 co-localized to the same interval on chromosome 10. At maturity, 16 eQTLs were detected, mainly on chromosomes 1, 6, 12, 8, 13 and 19; qDXS1-m6-2, qDXR-m6-2, qLiner-m6 and qGermD-m6 co-localized to the 139.212–143.161 cM interval on chromosome 6. Mapping the ratio of each gene's expression at maturity to its expression at véraison detected 18 eQTLs, located on chromosomes 1, 3, 7, 10, 12, 15 and 19; qDXS1-r12-1, qDXR-r12-1, qHDR-r12, qLiner-r12 and qGermD-r12 on chromosome 12 covered the same interval, 6.330–6.967 cM. Annotation of eQTL regions co-localized for multiple expression traits yielded 90 transcription-factor genes, from which expression-profile and correlation analyses finally identified 11 candidate genes. Four candidates (VIT_06s0009g01380, VIT_14s0006g02290, VIT_12s0028g01170 and VIT_15s0046g00290) are related to hormone-signaling regulation; one candidate (VIT_12s0028g01110) encodes a phytochrome-interacting factor related to light response; the remaining candidates encode Myb- or WRKY-family transcription factors or proteins of unknown function.

[Conclusion] Across the two developmental stages, 37 eQTLs linked to the expression traits of monoterpene-synthesis genes were detected, located mainly on chromosomes 6, 10, 12 and 14. Based on gene annotation and expression profiling, 11 possible candidate genes were identified, including VIT_06s0009g01380 and VIT_14s0006g02290; these candidates are highly correlated with the expression of multiple monoterpene genes.
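The abstract derives its expression phenotypes from real-time qPCR but does not state the quantification model. A minimal sketch, assuming the standard Livak 2^-ΔΔCt method with a single internal reference gene and a calibrator sample (all gene names and Ct values below are hypothetical illustrations, not data from the study):

```python
def relative_expression(ct_target, ct_reference,
                        ct_target_calibrator, ct_reference_calibrator):
    """Livak 2^-ddCt relative quantification.

    ct_target / ct_reference: Ct values of the gene of interest and the
    internal reference gene in the sample being measured.
    ct_*_calibrator: the same two Ct values in the calibrator sample.
    """
    delta_sample = ct_target - ct_reference
    delta_calibrator = ct_target_calibrator - ct_reference_calibrator
    return 2.0 ** -(delta_sample - delta_calibrator)

# Hypothetical Ct values for VvDXS1 in one F1 individual, normalized to a
# reference gene and calibrated against a reference sample.
print(relative_expression(24.1, 18.3, 26.0, 18.5))  # 2^-(5.8 - 7.5) ≈ 3.25
```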
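The study mapped the expression traits by interval mapping in MapQTL 6.0, which is not reproduced here. As a rough illustration of the underlying idea only, the sketch below substitutes a simpler single-marker regression scan: for each marker, a least-squares fit gives R² (the phenotypic variance explained, cf. the reported 12.2%–23.5%) and an approximate LOD score. The 0/1 genotype coding, array shapes, and toy data are assumptions, not the study's pipeline.

```python
import numpy as np

def single_marker_eqtl_scan(genotypes, expression):
    """Naive single-marker scan for one expression trait.

    genotypes : (n, m) array of 0/1 marker codes in the mapping population
    expression: (n,) array of expression-trait values (e.g. log-scale qPCR)
    Returns per-marker LOD scores and phenotypic variance explained (PVE).
    """
    n, m = genotypes.shape
    y = expression - expression.mean()
    ss_total = (y ** 2).sum()
    lod = np.zeros(m)
    pve = np.zeros(m)
    for j in range(m):
        x = genotypes[:, j] - genotypes[:, j].mean()
        ss_x = (x ** 2).sum()
        if ss_x == 0.0:              # monomorphic marker carries no signal
            continue
        beta = (x * y).sum() / ss_x  # least-squares slope
        r2 = 1.0 - ((y - beta * x) ** 2).sum() / ss_total
        pve[j] = r2
        lod[j] = -(n / 2.0) * np.log10(1.0 - r2)  # classical LOD approximation
    return lod, pve

# Hypothetical toy data: 100 individuals, 50 markers; marker 7 drives the trait.
rng = np.random.default_rng(0)
G = rng.integers(0, 2, size=(100, 50))
expr = 0.8 * G[:, 7] + rng.normal(size=100)
lod, pve = single_marker_eqtl_scan(G, expr)
print(lod.argmax(), round(pve[lod.argmax()], 3))  # should typically recover marker 7
```

In practice a genome-wide significance threshold (e.g. from permutations) would decide which LOD peaks count as eQTLs; interval mapping additionally evaluates positions between markers.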
Key words
grape, monoterpenes, key genes, eQTL