OSAMamba: an Adaptive Bidirectional Selective State Space Model for OSA Detection
IEEE Transactions on Instrumentation and Measurement (2025)
Abstract
As the two most representative classic network models, Convolutional Neural Networks (CNNs) and Transformers have been widely applied to obstructive sleep apnea (OSA) detection in recent years. However, traditional CNN models are limited by their inherent receptive field: the receptive field is tied to the fixed convolution kernel size, so the ability to extract global feature information is limited, which constrains further performance improvement. Transformers, in turn, suffer from the computational complexity of the self-attention mechanism, which grows quadratically with context length; this incurs a very high computational overhead and hinders deployment on devices with limited computing resources. To address these problems, this paper proposes an adaptive bidirectional selective state space model for OSA detection, termed OSAMamba. The main novelty of the proposed method lies in two aspects: the development of a lightweight multi-scale efficient aggregation (LMSEA) module, and the proposal of an adaptive bidirectional selective state space model (ABSM). To expand the model's receptive field and capture effective temporal features with a very low number of parameters, the LMSEA module combines a partial convolution (PConv)-based multi-scale strategy with a convolutional block attention module (CBAM). The ABSM module reduces the computational cost of the model and improves its deployability by using a frequency-domain enhancement strategy to fuse the effective time-domain features extracted by an adaptive bidirectional Mamba (ABi-Mamba) with linear complexity with the frequency-domain features extracted by a frequency-domain enhancement module (FEM).
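The two ingredients the abstract names can be illustrated compactly. Below is a minimal NumPy sketch of (a) the PConv idea, where convolution is applied to only a fraction of the channels and the rest pass through untouched, which is what keeps the parameter count low, and (b) a crude frequency-domain enhancement that filters features in the rFFT domain before fusing them back with the time-domain branch. The function names, the channel ratio, and the number of retained frequency bins are illustrative assumptions, not values from the paper.

```python
import numpy as np

def partial_conv1d(x, kernel, ratio=0.25):
    """PConv sketch: convolve only the first `ratio` fraction of channels
    (cheap receptive-field growth); remaining channels pass through as-is.
    x: (channels, time) array; kernel: odd-length 1-D filter.
    `ratio=0.25` is an illustrative choice, not the paper's setting."""
    c = x.shape[0]
    cp = max(1, int(c * ratio))          # number of channels actually convolved
    pad = len(kernel) // 2
    out = x.copy()
    for ch in range(cp):
        padded = np.pad(x[ch], pad, mode="edge")
        out[ch] = np.convolve(padded, kernel, mode="valid")  # same-length output
    return out

def frequency_enhance(x, keep=8):
    """FEM-style sketch: keep the lowest `keep` rFFT bins per channel,
    zero the rest, and invert -- a toy frequency-domain feature filter.
    The real FEM is learned; this fixed low-pass is only illustrative."""
    X = np.fft.rfft(x, axis=-1)
    X[..., keep:] = 0
    return np.fft.irfft(X, n=x.shape[-1], axis=-1)

def fuse(time_feat, freq_feat):
    """Simplest possible fusion of the two branches: elementwise sum."""
    return time_feat + freq_feat
```

In this sketch the time-domain branch stands in for the ABi-Mamba path and the rFFT filter for the FEM path; the additive fusion is a placeholder for whatever learned fusion the paper actually uses.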
Extensive experiments on the Apnea-ECG dataset show that, of all compared methods, the proposed method achieves the best per-segment detection accuracy of 91.91%, surpassing the state-of-the-art (SOTA) TFFormer by 0.31%. It also achieves a remarkable accuracy of 100% with the lowest mean absolute error (MAE) of 2.43 in per-record detection.
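The per-segment/per-record distinction above follows the usual Apnea-ECG protocol: each one-minute ECG segment is classified as apnea or normal, then a recording-level label is derived by aggregating the per-minute predictions and thresholding the apnea count (or the derived apnea-hypopnea index). The sketch below shows that aggregation step; the threshold value is an illustrative assumption, not taken from the paper.

```python
def per_record_decision(segment_preds, threshold=5):
    """Aggregate per-minute apnea predictions (1 = apnea, 0 = normal)
    into a recording-level label. A recording is flagged OSA when the
    apnea-minute count reaches `threshold`; the threshold here is an
    illustrative stand-in for the clinical AHI cutoff."""
    apnea_minutes = sum(segment_preds)
    label = "OSA" if apnea_minutes >= threshold else "Normal"
    return label, apnea_minutes
```

Under this scheme, the per-record MAE reported above measures how far the predicted apnea-minute count is from the annotated count, while the per-record accuracy measures the final OSA/Normal label.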
Keywords
Obstructive sleep apnea, Lightweight Multi-Scale Efficient Aggregation module, Adaptive Bidirectional Selective State Space Model, Frequency Domain Enhancement Module, Electrocardiogram (ECG)