Design of a Diblock-Based Membraneless Organelle System for Metabolic Process Control
Chemical Engineering Journal (2025)
State Key Laboratory of Food Science and Resources
Abstract
Biomolecular condensates are unique structures capable of regulating cellular metabolism and physiology through phase separation within a confined subcellular milieu. Recent studies have highlighted the potential of intrinsically disordered regions (IDRs) and helical structures to condense into liquid droplets that function as membraneless organelles. Owing to their spatial demarcation and their ability to colocalize or sequester biological molecules, these artificial compartments are a compelling means of spatially organizing biosynthetic pathways in prokaryotes, which lack organelles. In this study, we investigated the self-association capacity of IDRs and putative helices derived from two distinct phase-separating proteins, PodJ and PopZ, respectively. Through modular combination of these two blocks, a diblock copolymer model with enhanced condensation capability was engineered. Furthermore, the physicochemical properties of the resulting condensates can be fine-tuned by adjusting the two components and the valency of the IDR-helix module. As a proof of concept, the scaffold was used to orchestrate and augment metabolic flux toward the production of several human milk oligosaccharide products in E. coli. This versatile system holds promise for partitioning the bacterial cytoplasm into distinct subcellular zones and for compartmentalizing proteins of interest in engineered cells, with future synthetic biology applications.
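As a back-of-the-envelope rationale for why module valency tunes condensation, a minimal sketch from standard Flory-Huggins theory (a textbook framework commonly applied to condensates, not an equation taken from this paper): the mixing free energy per lattice site for a polymer of effective length $N$ at volume fraction $\phi$ is

$$f(\phi) = \frac{\phi}{N}\ln\phi + (1-\phi)\ln(1-\phi) + \chi\,\phi(1-\phi),$$

which predicts demixing above a critical interaction parameter

$$\chi_c = \frac{1}{2}\left(1 + \frac{1}{\sqrt{N}}\right)^{2}.$$

Because $\chi_c$ decreases as $N$ grows, increasing the valency of the IDR-helix module (a larger effective $N$) lowers the threshold for phase separation, consistent with valency serving as a tuning knob for condensation propensity.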
Key words
Biomolecular condensate, Intrinsically disordered region, Phase separation, Metabolic compartmentalization