Precision Mass Measurements of 74-76Sr Using the Multiple-Reflection Time-of-Flight Mass Spectrometer at TITAN
Physical Review C (2025)
TRIUMF | University of Calgary | Justus Liebig University Giessen | University of Edinburgh | McGill University | GSI Helmholtzzentrum für Schwerionenforschung GmbH | Oak Ridge National Laboratory | Facility for Rare Isotope Beams | Argonne National Laboratory | Northwestern University
Abstract
We report precision mass measurements of 74-76Sr performed with the TITAN multiple-reflection time-of-flight mass spectrometer. This is the first mass measurement of 74Sr, and it improves the mass precision of both 75Sr and 76Sr, which were previously measured using storage-ring and Penning-trap methods, respectively. These results complete the A = 74, T = 1 isospin triplet and refine the A = 75, T = 1/2 isospin doublet, which are the heaviest experimentally evaluated triplet and doublet to date. The new data allow us to evaluate coefficients of the isobaric multiplet mass equation for the first time at A = 74 and with increased precision at A = 75. With the improved precision for 75Sr, we confirm the recent measurement reported by CSRe that was used to remove a staggering anomaly in the doublets. New ab initio valence-space in-medium similarity renormalization group calculations of the T = 1 triplet are presented at A = 74. We also investigate the impact of the new mass data on the reaction flow of the rapid proton capture process in type I x-ray bursts using a single-zone model.
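For context on the coefficient evaluation mentioned above: the isobaric multiplet mass equation (IMME) expresses the mass excesses of an isospin multiplet as a quadratic in the isospin projection, ME(A, T, Tz) = a + b·Tz + c·Tz², so a complete T = 1 triplet (Tz = -1, 0, +1) determines a, b, and c exactly. The sketch below illustrates that solve; the mass-excess values are hypothetical placeholders, not the measured A = 74 data from the paper.

```python
import numpy as np

def imme_coefficients(tz, mass_excess):
    """Solve ME(Tz) = a + b*Tz + c*Tz**2 for (a, b, c).

    For a T = 1 triplet, the three mass excesses at Tz = -1, 0, +1
    fix the three coefficients exactly (a 3x3 linear system).
    """
    # Design matrix with columns 1, Tz, Tz^2
    A = np.column_stack([np.ones_like(tz), tz, tz**2])
    return np.linalg.solve(A, mass_excess)

# Hypothetical mass excesses in keV, for illustration only
tz = np.array([-1.0, 0.0, 1.0])
me = np.array([-8000.0, -12000.0, -23000.0])
a, b, c = imme_coefficients(tz, me)
```

With all three triplet members measured, any deviation from this quadratic form (e.g. a cubic d·Tz³ term) would signal isospin-symmetry breaking beyond the two-body Coulomb picture, which is why completing the multiplet matters.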