Monopulse DOA Estimation for High-Speed Targets in Frequency Agile Radar with Phase Shifter Mismatch
IEEE Transactions on Aerospace and Electronic Systems (2025)
Beijing Institute of Radio Measurement
Abstract
The frequency agile radar (FAR) is widely recognized for its robust anti-jamming capabilities in complex electromagnetic environments. However, achieving accurate monopulse direction-of-arrival (DOA) estimation with FAR remains challenging, particularly under short dwell times, where phase shifter mismatch (PSM) exacerbates the issue. This paper proposes a targeted procedure for monopulse DOA estimation in FAR that accounts for the impact of PSM. First, we introduce a modified FAR-Radon Fourier Transform (FAR-RFT) algorithm to enhance the extraction of matched-filtering samples for high-speed targets, laying the groundwork for the subsequent DOA estimation procedures. We then present two distinct methods for monopulse DOA estimation with FAR: the modified monopulse method enables rapid DOA estimation at low beam pointing angles, while the Maximum Likelihood Estimation (MLE) method ensures high accuracy across all beam pointing angles. Furthermore, we discuss fast implementation techniques tailored to the MLE method. Additionally, leveraging extensive simulation experiments based on real-world electromagnetic data and rigorous theoretical analyses, we examine critical aspects such as the Cramér-Rao Lower Bound (CRLB) for DOA estimation under FAR with PSM and the theoretical computational complexities of our algorithms. Our results demonstrate the superior performance of the proposed methods in diverse scenarios. Particularly noteworthy is the accuracy achieved by the modified monopulse and MLE methods, which closely approaches the CRLB at medium to high signal-to-noise ratios.
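As background for the abstract's sum-and-difference-beam discussion, the classical monopulse principle can be sketched as follows. This is a generic phase-comparison (split-aperture) monopulse for an idealized uniform linear array, not the paper's modified monopulse method, and it ignores PSM and noise; all array parameters are assumptions chosen for illustration. For this geometry the ratio of the difference channel to the sum channel is purely imaginary, and its imaginary part is approximately linear in the off-boresight angle, so dividing by a calibrated monopulse slope recovers the DOA.

```python
import numpy as np

# Illustrative phase-comparison monopulse sketch (NOT the paper's method,
# no PSM, no noise). A 16-element half-wavelength ULA (assumed parameters)
# forms sum and difference beams; Im(delta/sigma) is ~linear in the
# off-boresight angle, so dividing by the monopulse slope k_m gives the DOA.

N = 16                      # number of elements (assumption)
d = 0.5                     # element spacing in wavelengths (assumption)
theta0 = 0.0                # beam pointing angle, radians (assumption)
n = np.arange(N)

def steer(theta):
    """Array steering vector for angle theta (radians)."""
    return np.exp(1j * 2 * np.pi * d * n * np.sin(theta))

w_sum = steer(theta0)                     # sum-beam weights
split = np.where(n < N // 2, 1.0, -1.0)   # +1 left half, -1 right half
w_dif = split * steer(theta0)             # difference-beam weights

def mono_ratio(theta):
    """Imaginary part of the difference/sum channel ratio for a
    noise-free snapshot from direction theta."""
    x = steer(theta)
    s = np.vdot(w_sum, x)                 # sum channel (vdot conjugates w)
    dchan = np.vdot(w_dif, x)             # difference channel
    return np.imag(dchan / s)

# Calibrate the monopulse slope k_m by a central difference at boresight.
eps = 1e-4
k_m = (mono_ratio(theta0 + eps) - mono_ratio(theta0 - eps)) / (2 * eps)

# A target 1 degree off boresight, well inside the ~6 degree beamwidth.
theta_true = np.deg2rad(1.0)
theta_hat = theta0 + mono_ratio(theta_true) / k_m
print(np.rad2deg(theta_hat))              # close to 1.0 deg (small bias
                                          # from the tan() nonlinearity)
```

The linear approximation holds only within roughly the 3 dB beamwidth, which is why the abstract's modified monopulse method is described as suited to low beam pointing angles while the MLE method covers all pointing angles.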
Key words
Frequency agile radar, maximum likelihood estimation, sum-and-difference beams, monopulse radar, long-time coherent integration