Monitoring the Pointing of the Large Size Telescope Prototype Using Star Reconstruction in the Cherenkov Camera
International Cosmic Ray Conference (ICRC), 2021
Abstract
The first Large-Sized Telescope (LST-1) proposed for the forthcoming Cherenkov Telescope Array (CTA) started operating in 2019 in La Palma. The large structure of LST-1, with a 23 m mirror dish diameter, requires strict control of the deformations that could affect its pointing accuracy and overall performance. According to the CTA specifications, conceived e.g. to resolve the fine structure of galactic sources, the LST post-calibration pointing accuracy must be better than 14 arcseconds. To fulfill this requirement, the telescope pointing precision is monitored with two dedicated CCD cameras located at the dish center. The analysis of their images allows the different systematic deformations of the structure to be disentangled. In this work, we investigate a complementary approach that makes it possible to monitor the pointing of the telescope during the acquisition of sky data. After cleaning the events of Cherenkov showers, the reconstructed positions of the stars imaged in the camera field of view are compared with their expected positions from star catalogues. This provides a direct measurement of the telescope pointing, which can be used to cross-check the other methods and to monitor in real time the optical properties of the telescope and the pointing corrections applied by the bending models. In addition, this method does not rely on dedicated hardware or observations. In this contribution we will illustrate this analysis and show results based on simulations of LST-1.
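The sketch below illustrates the idea described in the abstract, not the authors' actual pipeline: catalogue stars are projected onto the camera focal plane for the nominal telescope pointing and compared with reconstructed star centroids to estimate a pointing offset. The site coordinates, focal length, star list, projection model, and the "reconstructed" centroids are assumptions chosen for illustration; only numpy and astropy are used.

```python
# Illustrative sketch (assumed approach): compare expected and reconstructed
# star positions in the camera plane to estimate a pointing offset.

import numpy as np
from astropy import units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

# Approximate LST-1 site (Observatorio del Roque de los Muchachos, La Palma)
SITE = EarthLocation(lat=28.76 * u.deg, lon=-17.89 * u.deg, height=2200 * u.m)
FOCAL_LENGTH = 28.0 * u.m  # nominal LST focal length


def sky_to_camera(star, pointing, obstime):
    """Project a star onto the focal plane with a simple pinhole model
    centred on the nominal pointing direction (tangent-plane approximation)."""
    altaz = AltAz(location=SITE, obstime=obstime)
    star_aa = star.transform_to(altaz)
    point_aa = pointing.transform_to(altaz)
    # Angular offsets of the star with respect to the optical axis
    dalt = (star_aa.alt - point_aa.alt).to(u.rad)
    daz = (star_aa.az - point_aa.az).to(u.rad) * np.cos(point_aa.alt)
    # Pinhole projection: positions in metres on the camera plane
    return FOCAL_LENGTH * np.tan(daz), FOCAL_LENGTH * np.tan(dalt)


# Hypothetical observation: nominal pointing near the Crab Nebula and two
# bright catalogue stars inside the camera field of view
obstime = Time("2021-02-15T22:00:00")
pointing = SkyCoord(ra=83.63 * u.deg, dec=22.01 * u.deg)
stars = SkyCoord(ra=[84.41, 81.57] * u.deg, dec=[21.14, 21.59] * u.deg)

x_exp, y_exp = sky_to_camera(stars, pointing, obstime)

# Hypothetical reconstructed star centroids (e.g. fitted to the night-sky
# background spots remaining after Cherenkov-shower cleaning); a fake
# 2 mm / -1 mm systematic shift is injected here for illustration
x_rec = x_exp + 2e-3 * u.m
y_rec = y_exp - 1e-3 * u.m

# Mean focal-plane displacement converted into an angular pointing offset
dx = np.mean(x_rec - x_exp)
dy = np.mean(y_rec - y_exp)
offset = np.sqrt(dx**2 + dy**2) / FOCAL_LENGTH * u.rad
print(f"estimated pointing offset: {offset.to(u.arcsec):.1f}")
```

In practice the comparison would be done per star and over many events, so that both a global pointing offset and slow drifts of the optical properties can be tracked during regular data taking.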