
SLBAF-Net: Super-Lightweight Bimodal Adaptive Fusion Network for UAV Detection in Low Recognition Environment

Multimedia Tools and Applications (2023)

Southeast University

Abstract
Unmanned aerial vehicle (UAV) detection has significant research value in military and civilian applications. However, traditional object detection algorithms commonly lack satisfactory accuracy and robustness due to intense illumination changes and the extremely small size of UAVs in remote sensing images against the sky background. This paper proposes SLBAF-Net, a super-lightweight bimodal network with adaptive fusion of visible-light and infrared images, for UAV detection under complex illumination and weather conditions. To handle complex illumination environments and meet the low computing requirements of airborne computers, a super-lightweight bimodal UAV detection network inspired by YOLO's network structure is developed. To fuse bimodal features more effectively, the bimodal adaptive fusion module (BAFM) is proposed, which performs an adaptive fusion of visible and infrared feature maps to improve detection robustness in complex environments. To verify the superiority of our method, we build a complex dual-modal UAV dataset and conduct comprehensive comparison experiments with various state-of-the-art object detection networks. The experimental results show that the proposed SLBAF-Net outperforms the other algorithms in detection performance and robustness in harsh environments, with a precision of 0.909 and a recall of 0.912. Moreover, SLBAF-Net meets the real-time requirements of airborne computers, and the network size is only 5.6 MB.
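The adaptive bimodal fusion idea described in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical interpretation, not the paper's actual BAFM: it assumes each modality's feature map is weighted per channel by a softmax over global-average-pooled responses before summing, so the network can lean on the infrared branch when the visible branch is uninformative (e.g., at night) and vice versa.

```python
import numpy as np

def adaptive_fusion(vis_feat, ir_feat):
    """Illustrative bimodal adaptive fusion (assumed design, not the
    published BAFM): softmax-gated per-channel weighting of visible
    and infrared feature maps of shape (N, C, H, W)."""
    # Global average pooling per modality: (N, C, H, W) -> (N, C)
    vis_gap = vis_feat.mean(axis=(2, 3))
    ir_gap = ir_feat.mean(axis=(2, 3))
    # Softmax across the two modality descriptors, per channel
    logits = np.stack([vis_gap, ir_gap], axis=0)            # (2, N, C)
    exp = np.exp(logits - logits.max(axis=0, keepdims=True))
    w = exp / exp.sum(axis=0, keepdims=True)                # weights sum to 1
    # Broadcast channel weights back over the spatial dimensions
    w_vis = w[0][..., None, None]                           # (N, C, 1, 1)
    w_ir = w[1][..., None, None]
    return w_vis * vis_feat + w_ir * ir_feat

# The fused map keeps the input shape
vis = np.random.rand(1, 8, 16, 16)
ir = np.random.rand(1, 8, 16, 16)
fused = adaptive_fusion(vis, ir)
print(fused.shape)  # (1, 8, 16, 16)
```

Because the two channel weights are non-negative and sum to one, the fused response is a convex combination of the two modality responses at every position.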
Key words
UAV detection, Bi-modal network, Adaptive fusion, Infrared image, SLBAF-Net