The Radial Interplanetary Field Strength at Sunspot Minimum As Polar Field Proxy and Solar Cycle Predictor
The Astrophysical Journal Letters (2024)
Naval Research Laboratory
Abstract
The minimum value of the geomagnetic aa index has served as a remarkably successful predictor of solar cycle amplitude. This value is reached near or just after sunspot minimum, when both the near-Earth solar wind speed and interplanetary magnetic field (IMF) strength fall to their lowest values. At this time, the heliospheric current sheet is flattened toward the heliographic equator and the dominant source of the IMF is the Sun's axial dipole moment, which, in turn, has its source in the polar fields. As recognized previously, the success of aa_min as a solar-cycle precursor provides support for dynamo models in which the sunspots of a given cycle are produced by winding up the poloidal field built up during the previous cycle. Because they are highly concentrated toward the poles by the surface meridional flow, the polar fields are difficult to measure reliably. Here we point out that the observed value of the radial IMF strength at solar minimum can be used to constrain the polar field measurements, and that this parameter, which is directly proportional to the Sun's axial dipole strength, may be an even better solar cycle predictor than geomagnetic activity.
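The stated proportionality between the radial IMF strength and the axial dipole strength can be sketched as follows. This is a minimal estimate, not the paper's derivation: it assumes a latitude-independent |B_r| (as observed by Ulysses), a dipole-dominated corona at minimum, and a potential-field source-surface picture with source-surface radius R_ss; the symbols D_axial and R_ss are illustrative, not taken from the paper.

```latex
% Open flux from the (latitude-independent) radial field at any
% heliocentric distance r; flux conservation makes it constant with r:
\Phi_{\mathrm{open}} = 4\pi r^{2}\,|B_r(r)| .

% If the coronal field at minimum is dominated by the axial dipole
% (moment D_axial), the open field at the source surface scales with it:
|B_r(R_{\mathrm{ss}})| \propto D_{\mathrm{axial}} ,

% so mapping the flux out to 1 AU gives the claimed proportionality:
|B_r(1\,\mathrm{AU})|
  = |B_r(R_{\mathrm{ss}})|
    \left(\frac{R_{\mathrm{ss}}}{1\,\mathrm{AU}}\right)^{2}
  \propto D_{\mathrm{axial}} .
```

On this reading, a measurement of |B_r| at 1 AU near minimum constrains the axial dipole, and hence the polar fields, independently of polar magnetograph calibration.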
Key words
Solar magnetic fields, Interplanetary magnetic fields, Solar dynamo, Solar cycle, Sunspot cycle, Solar wind