
Detection, Tracking, and Characterization of Small, Faint Targets at GEO Distances Using the Magdalena Ridge Observatory 2.4-Meter Telescope

W. H. Ryan, E. V. Ryan

AMOS (2019)

Abstract
As time progresses, satellites launched into the GEO region have become smaller and smaller, making the ability to detect and track decimeter-sized targets at these distances increasingly difficult, yet important for determining operational status, revealing changes, and identifying and characterizing these objects. Previously, we demonstrated that by using the Magdalena Ridge Observatory's (MRO's) 2.4-meter telescope, we could detect debris and other objects in GEO at visible magnitudes as faint as V~20 or fainter in single images, and were able to derive reliable and accurate astrometry. We also established that strategic shifting and summing of individual images, based on the anticipated motion of the target, extends this magnitude limit somewhat. Initially, since objects in geostationary orbit typically move about 15 arc-seconds per second with respect to sidereal motion, we limited exposure times to half a second or less to avoid significant trailing and analyzed the photometric signatures using circular apertures. For the current work, we explore techniques using elliptical apertures and extend individual exposures to push our detection limits to visible magnitude V~21 and fainter. We investigate the limitations in accuracy inherent in this approach and examine the relative practicalities of utilizing longer individual integration times versus software shifting and summing of shorter exposures. We also explore the magnitude, and hence size, limitations that these techniques imply for the characterization of artificial objects when studying their temporal photometric and spectroscopic signatures.
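The shift-and-add approach described in the abstract can be illustrated with a minimal sketch: each short exposure is shifted to compensate for the target's predicted motion before co-adding, so the target's signal accumulates while staying confined to a single point. This is a hypothetical illustration, not the authors' pipeline; the function name, the integer-pixel shifting via `np.roll`, and the pixel-rate parameter (which would be derived from the ~15 arcsec/s GEO rate and the detector plate scale) are all assumptions for the sketch.

```python
import numpy as np

def shift_and_add(frames, rate_xy, dt):
    """Co-add short exposures after shifting each frame to
    compensate for the target's predicted motion.

    frames  : sequence of 2-D numpy arrays (individual exposures)
    rate_xy : (dx, dy) predicted target motion in pixels/second
              (hypothetical parameter; in practice derived from
              the ~15 arcsec/s GEO rate and the plate scale)
    dt      : time between frame start times, in seconds
    """
    stack = np.zeros_like(frames[0], dtype=float)
    for i, frame in enumerate(frames):
        # Shift frame i opposite to the target's accumulated motion
        # so the target lands on the same pixel in every frame.
        sx = int(round(-rate_xy[0] * i * dt))
        sy = int(round(-rate_xy[1] * i * dt))
        stack += np.roll(frame, shift=(sy, sx), axis=(0, 1))
    return stack
```

A real implementation would use sub-pixel interpolation (e.g. `scipy.ndimage.shift`) rather than integer rolls, and would mask frame edges where the roll wraps around, but the sketch captures why the summed signal of a faint mover grows linearly with the number of frames while the per-frame trailing stays short.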