
A Comprehensive Benchmarking Study of Protein Structure Alignment Tools Based on Downstream Task Performance

bioRxiv (2025)

The Hong Kong University of Science and Technology (Guangzhou)

Abstract
In this study, we investigate the performance of nine protein structure alignment tools by analyzing how their alignment results influence three downstream biological tasks: homology detection, phylogeny reconstruction, and function inference. These tools include (1) traditional sequential methods using both 3D and 2D structure representations, (2) non-sequential methods, (3) flexible methods, and (4) deep-learning methods. The canonical sequence alignment methods, the Needleman-Wunsch algorithm and BLASTp, are used as baselines. We show that accuracy in downstream tasks can be uncorrelated with alignment quality metrics such as TM-score and RMSD, highlighting a discrepancy between alignment results and the purposes for which they are used. We identify scenarios in which structure alignment outperforms sequence alignment. In homology detection, structure-based methods are substantially better than sequence alignment. In phylogeny reconstruction, structure-based methods generally outperform sequence-based methods on the filtered dataset of proteins sharing low sequence similarity. Moreover, we show that structure information improves the overall performance of these tools when combined with sequence information in phylogeny reconstruction and function inference. We also measure the running time and CPU/GPU memory consumption of these tools for large numbers of queries. Our study suggests that biological problems previously addressed with sequence-based methods using only sequence information could be further improved by using structure information alone or together with sequence information. The trade-off between task accuracy and speed is the major consideration in developing new alignment tools for downstream tasks. We recommend both TMalign and KPAX for these tasks because of their good balance between running time and memory consumption and their relatively good and stable accuracy in downstream tasks. In tasks that require a large number of pairwise comparisons, such as homology detection and function inference, traditional methods outperform deep-learning methods at the cost of long running times, and Foldseek is the best choice for achieving relatively high accuracy in a reasonable time.

Competing Interest Statement
The authors have declared no competing interest.
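For context, the alignment quality metrics named in the abstract are not defined there; the sketch below gives their conventional definitions, assuming the standard TM-score normalization by the target protein length (an assumption, since the paper may normalize differently).

```latex
% TM-score of a structural superposition, normalized by the target length L_target.
% d_i is the distance between the i-th pair of aligned residues after superposition,
% L_ali is the number of aligned residue pairs, and the maximum is over superpositions.
\mathrm{TM\mbox{-}score}
  = \max\left[\frac{1}{L_{\mathrm{target}}}
      \sum_{i=1}^{L_{\mathrm{ali}}}
      \frac{1}{1+\bigl(d_i/d_0(L_{\mathrm{target}})\bigr)^{2}}\right],
\qquad
d_0(L_{\mathrm{target}}) = 1.24\,\sqrt[3]{L_{\mathrm{target}}-15}-1.8

% Root-mean-square deviation over N aligned residue pairs with coordinates x_i, y_i.
\mathrm{RMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\lVert \mathbf{x}_i-\mathbf{y}_i\rVert^{2}}
```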
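The Needleman-Wunsch baseline is the classical global sequence alignment algorithm; below is a minimal sketch assuming simple match/mismatch scores and a linear gap penalty. The function name and scoring parameters are illustrative only; the paper's baseline presumably uses a protein substitution matrix (e.g. BLOSUM62) and affine gap penalties, which this simplified version omits.

```python
# Minimal Needleman-Wunsch global alignment sketch (illustrative, not the paper's setup).
def needleman_wunsch(seq1, seq2, match=1, mismatch=-1, gap=-2):
    """Return (score, aligned_seq1, aligned_seq2) under a linear gap penalty."""
    n, m = len(seq1), len(seq2)
    # Dynamic-programming score matrix; first row/column are pure gap penalties.
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap

    # Fill: best of diagonal (match/mismatch), up (gap in seq2), left (gap in seq1).
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if seq1[i - 1] == seq2[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)

    # Traceback from the bottom-right corner to recover one optimal alignment.
    a1, a2 = [], []
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (
            match if seq1[i - 1] == seq2[j - 1] else mismatch
        ):
            a1.append(seq1[i - 1]); a2.append(seq2[j - 1]); i -= 1; j -= 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + gap:
            a1.append(seq1[i - 1]); a2.append("-"); i -= 1
        else:
            a1.append("-"); a2.append(seq2[j - 1]); j -= 1
    return dp[n][m], "".join(reversed(a1)), "".join(reversed(a2))


if __name__ == "__main__":
    # Usage example on two short illustrative peptide strings.
    score, a, b = needleman_wunsch("HEAGAWGHEE", "PAWHEAE")
    print(score)
    print(a)
    print(b)
```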