
Scalable Hamming Distance Computation Using Accelerated Matrix Transformations

Rabab Alomairy, Qinglei Cao, Hatem Ltaief, David Keyes, Alan Edelman

ISC High Performance 2025 Research Paper Proceedings (40th International Conference) (2025)

Computer Science & Artificial Intelligence Laboratory | Department of Computer Science

Abstract
The Hamming distance, a fundamental measure of dissimilarity between data points, plays a crucial role in various fields, including error detection, machine learning, and genomic sequence alignment, where it is commonly used for identifying mismatches in nucleotide or protein sequences. This work introduces two implementations for computing Hamming distances for sequence alignment: synchronous and asynchronous matrix-based approaches. While most existing implementations rely on vector-based methods due to their simplicity and ease of use, they are not efficient for large-scale data. Our work focuses on enhancing performance by introducing matrix-based implementations that significantly improve computational efficiency and scalability. Our asynchronous implementation showcases Julia for sequential task flow and PaRSEC for parameterized task graph execution models on homogeneous and heterogeneous architectures. CPU computations use INT8 GEMM from oneMKL, while GPU implementations employ Tensor/Matrix Core INT8 GEMM from cuBLAS/hipBLAS and 1-bit TensorOps GEMM capabilities from CUTLASS. For constructing bitmask matrices on GPUs, we develop both a naive CUDA implementation using global memory and an optimized implementation utilizing shared memory at the warp level, with the optimized version achieving a 5X speedup over the naive approach. The results demonstrate significant performance improvements, with the asynchronous matrix-based implementation achieving up to 284X speedup over the vector-based approach on CPUs, while the asynchronous GPU-enabled implementation on A100 GPUs delivers a 15X speedup compared to the CPU matrix-based approach and a three orders of magnitude improvement over the CPU vector-based approach. Furthermore, the asynchronous implementation of PaRSEC scales well on up to 256 nodes of Summit and Frontier. These advancements highlight the scalability and efficiency of matrix-based Hamming distance computation, leveraging GPU acceleration and advanced asynchronous execution, paving the way forward for large-scale genomic sequence alignment and data analysis.
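To make the matrix formulation concrete, below is a minimal Julia sketch of the general idea behind a matrix-based Hamming distance: sequences are one-hot encoded into an Int8 bitmask matrix, and all pairwise distances fall out of a single GEMM. This is an illustrative assumption of how such a kernel can be structured, not the authors' implementation; the encoding layout, the function names (bitmask_encode, hamming_matrix), and the use of Julia's generic matrix product in place of the INT8 GEMM routines from oneMKL/cuBLAS/hipBLAS are assumptions.

    # Minimal sketch (not the paper's implementation): matrix-based Hamming
    # distance between equal-length DNA sequences via a one-hot (bitmask)
    # encoding and one matrix product.
    const NUC_INDEX = Dict('A' => 1, 'C' => 2, 'G' => 3, 'T' => 4)

    # Encode n sequences of length L into an n x 4L Int8 bitmask matrix:
    # columns 4(j-1)+1 through 4j one-hot encode the j-th nucleotide.
    function bitmask_encode(seqs::Vector{String})
        n, L = length(seqs), length(seqs[1])
        B = zeros(Int8, n, 4L)
        for (i, s) in enumerate(seqs), (j, c) in enumerate(s)
            B[i, 4(j - 1) + NUC_INDEX[c]] = Int8(1)
        end
        return B
    end

    # One GEMM yields pairwise match counts; Hamming distance is L minus that.
    # On the hardware targeted in the paper this product would map to an INT8
    # GEMM; here plain Julia matrix multiplication stands in for it.
    function hamming_matrix(seqs::Vector{String})
        B = bitmask_encode(seqs)
        L = length(seqs[1])
        matches = Int.(B) * Int.(B)'   # widen to Int to avoid Int8 overflow
        return L .- matches            # n x n matrix of Hamming distances
    end

    # Tiny usage example
    D = hamming_matrix(["ACGTACGT", "ACGTTCGA", "TTTTACGT"])
    @show D

Casting the computation as one large matrix product is what lets vendor INT8 GEMM kernels and the task-based runtimes do the heavy lifting, which is the source of the speedups over the vector-based approach reported in the abstract.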
Key words
Hamming Distance, Matrix Multiply, Genomics, 1-bit Integer, Julia, PaRSEC, CUTLASS, Mixed-Precision, GPU
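For the 1-bit integer formulation referenced in the keywords and in the abstract's mention of CUTLASS 1-bit TensorOps GEMM, a rough scalar analogue in Julia is sketched below: the one-hot bitmask is packed into UInt64 words and a pair of sequences is compared with XOR plus popcount. This is an illustrative sketch under an assumed packing layout and names (pack_bits, hamming_1bit), not the paper's GPU kernel.

    # Minimal sketch (not the paper's kernel): 1-bit variant of the bitmask
    # idea, packing the one-hot bits into UInt64 words and comparing two
    # sequences with XOR + popcount.
    function pack_bits(seq::String)
        nuc = Dict('A' => 0, 'C' => 1, 'G' => 2, 'T' => 3)
        nbits = 4 * length(seq)
        words = zeros(UInt64, cld(nbits, 64))
        for (j, c) in enumerate(seq)
            bit = 4 * (j - 1) + nuc[c]           # 0-based index of the set bit
            words[bit ÷ 64 + 1] |= UInt64(1) << (bit % 64)
        end
        return words
    end

    # With one-hot encoding every mismatching position flips exactly two bits,
    # so Hamming distance = popcount(xor) ÷ 2.
    function hamming_1bit(a::String, b::String)
        wa, wb = pack_bits(a), pack_bits(b)
        return sum(count_ones(x ⊻ y) for (x, y) in zip(wa, wb)) ÷ 2
    end

    @show hamming_1bit("ACGTACGT", "ACGTTCGA")   # 2 mismatching positions

The same XOR-and-popcount reduction, applied across whole tiles of packed sequences, is essentially what 1-bit tensor-core GEMM performs, which is why the abstract pairs the bitmask construction with CUTLASS's 1-bit TensorOps GEMM.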