Precise Dynamical Masses and Orbital Fits for β Pic b and β Pic c
The Astronomical Journal (2021)
University of California, Santa Barbara | University of Edinburgh | European Space Agency (ESA)
Abstract
We present a comprehensive orbital analysis of the exoplanets β Pic b and c that resolves previously reported tensions between the dynamical and evolutionary mass constraints on β Pic b. We use the Markov Chain Monte Carlo orbit code orvara to fit 15 years of radial velocities and relative astrometry (including recent GRAVITY measurements), absolute astrometry from Hipparcos and Gaia, and a single relative radial velocity measurement between β Pic A and b. We measure model-independent masses of 9.3(+2.6/−2.5) M_Jup for β Pic b and 8.3 ± 1.0 M_Jup for β Pic c. These masses are robust to modest changes in the input data selection. We find a well-constrained eccentricity of 0.119 ± 0.008 for β Pic b, and an eccentricity of 0.21(+0.16/−0.09) for β Pic c, with the two orbital planes aligned to within ~0.5°. Both planets' masses are within ~1σ of the predictions of hot-start evolutionary models and exclude cold starts. We validate our approach on N-body synthetic data integrated using REBOUND. We show that orvara can account for three-body effects in the β Pic system down to a level ~5 times smaller than the GRAVITY uncertainties. Systematics in the masses and orbital parameters from orvara's approximate treatment of multiplanet orbits are a factor of ~5 smaller than the uncertainties we derive here. Future GRAVITY observations will improve the constraints on β Pic c's mass and (especially) eccentricity, but improved constraints on the mass of β Pic b will likely require years of additional radial velocity monitoring and improved precision from future Gaia data releases.
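The Keplerian orbit models that codes like orvara fit to radial velocities and astrometry all reduce, at each epoch, to solving Kepler's equation M = E − e·sin(E) for the eccentric anomaly E. The sketch below is a minimal, standard-library illustration of that step (it is not orvara's actual implementation): a Newton iteration for E and the conversion to the true anomaly, evaluated at β Pic b's fitted eccentricity e = 0.119 from the abstract. The mean anomaly value is illustrative only.

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for the
    eccentric anomaly E (radians) via Newton's method."""
    E = M if e < 0.8 else math.pi  # standard starting guess
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E
    raise RuntimeError("Kepler solver did not converge")

def true_anomaly(E, e):
    """Convert eccentric anomaly E to the true anomaly nu."""
    return 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                            math.sqrt(1.0 - e) * math.cos(E / 2.0))

# beta Pic b eccentricity from the orvara fit: e = 0.119 +/- 0.008
e_b = 0.119
M = 1.0  # mean anomaly in radians (illustrative value, not from the paper)
E = solve_kepler(M, e_b)
nu = true_anomaly(E, e_b)
print(f"E = {E:.6f} rad, nu = {nu:.6f} rad")
```

At β Pic b's low eccentricity the Newton iteration converges in a handful of steps; the same solver is called once per planet per epoch when evaluating a model orbit against the data.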