Effects of different experimental conditions on the in vitro transformation efficiency of Neisseria gonorrhoeae
Chinese Journal of Dermatology (2024)
Third Affiliated Hospital of Sun Yat-sen University
Abstract
Objective: To investigate the effects of different experimental conditions on the in vitro transformation efficiency of Neisseria gonorrhoeae (gonococcus).

Methods: The penA gene was amplified using DNA from N. gonorrhoeae strain D as the template, the penA-A501V gene (carrying a resistance mutation) was generated by overlap extension PCR, and the N. gonorrhoeae penA-H041 gene (resistance-positive control) was purchased. Plasmids carrying penA, penA-A501V, or penA-H041 were constructed and transformed into Escherichia coli DH5α competent cells, and clones were amplified on blue-white screening medium. Plasmids were extracted with a TaKaRa (Japan) plasmid extraction kit and with a TIANGEN Biotech (Beijing) high-purity plasmid miniprep kit, respectively. Using the penA plasmid, penA-A501V plasmid, penA-H041 plasmid, penA-A501V gene, and penA-H041 gene as substrates, recipient N. gonorrhoeae strains A43 and A49 were transformed by the direct incubation method, and the transformation rate (CFU/µg) and the minimum inhibitory concentration (MIC) of ceftriaxone (CRO) were determined. Using the penA-A501V and penA-H041 genes as substrates, recipient strains A43 and A49 were transformed by the liposome method, and the transformation efficiency and ceftriaxone MIC were observed.

Results: Using strain D DNA as the template, 1 749-bp penA and penA-A501V genes were successfully obtained. After plasmid extraction and electrophoresis, the plasmids carrying penA, penA-A501V, or penA-H041 extracted with the TaKaRa kit were mainly concentrated at about 2 500 bp and migrated faster, whereas those extracted with the TIANGEN kit were mainly concentrated at about 5 000 bp and migrated more slowly. With direct incubation, the MIC for strain A43 carrying wild-type penA was 0.001 μg/ml; it rose to 0.002 μg/ml after transformation with the penA plasmid, to 0.004 μg/ml in transformants of either the penA-A501V plasmid or the penA-A501V gene, and to 0.512 μg/ml after transformation with the positive-control penA-H041 plasmid or gene. After incubation with each substrate, the ceftriaxone MIC for strain A49 was the same as the blank control, 0.016 μg/ml. With liposome-mediated transformation, the ceftriaxone MIC was 0.002 μg/ml for strain A43 (wild-type penA) and 0.016 μg/ml for strain A49; after transformation with the penA-A501V gene, the MIC of strain A43 rose to 0.004 μg/ml, while that of strain A49 remained 0.016 μg/ml; after transformation with the positive-control penA-H041 gene, the MICs of both A43 and A49 rose to 0.250 μg/ml, i.e., 125 and 15.6 times the respective blank controls.

Conclusions: Direct incubation with plasmid substrates achieves good transformation efficiency for N. gonorrhoeae strain A43; different plasmid extraction methods may affect transformation efficiency; for strain A49, whose transformation efficiency by direct incubation is low, the liposome method can be used.
Key words
Neisseria gonorrhoeae; Transformation, bacterial; Plasmids; Ceftriaxone; Drug resistance; penA gene; Minimum inhibitory concentration
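For reference, the fold-change figures quoted in the Results follow directly from the reported MIC values. The short Python sketch below is ours, not from the paper; the MIC numbers are taken from the abstract and the function name is hypothetical.

    # Minimal sketch: recompute the MIC fold changes reported for the
    # liposome-mediated penA-H041 transformants (values from the abstract).

    def mic_fold_change(mic_transformant: float, mic_control: float) -> float:
        """Fold increase in ceftriaxone MIC relative to the untransformed control."""
        return mic_transformant / mic_control

    # Both strains reached an MIC of 0.250 ug/ml after penA-H041 transformation.
    print(mic_fold_change(0.250, 0.002))  # A43 baseline 0.002 ug/ml -> 125.0
    print(mic_fold_change(0.250, 0.016))  # A49 baseline 0.016 ug/ml -> 15.625 (~15.6)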