Low-input Breeding Potential in Stone Pine, a Multipurpose Forest Tree with Low Genome Diversity.
G3 (Bethesda, Md.) (2025)
Abstract
Stone pine (Pinus pinea L.) is an emblematic tree species of the Mediterranean basin, with high ecological and economic relevance due to the production of edible pine nuts. Breeding programmes to improve pine nut production started decades ago in Southern Europe but have been hindered by the near absence of polymorphisms in the species' genome and the lack of suitable genomic tools. In this study, we assessed new genomic resources for stone pine and their utility for breeding and sustainable use, using a commercial SNP array (5,671 SNPs). First, we confirmed the accurate clonal identification and identity check of 99 clones from the Spanish breeding programme. Second, we successfully estimated genomic relationships in clonal collections, information needed for low-input breeding and genomic prediction. Third, we applied this information to genomic prediction of the total number of cones unspoiled by pests and their weight, measured in three Spanish clonal tests. Genomic prediction accuracy depended on the trait under consideration and possibly on the number of genotypes included in the test. Predictive ability (ry) was significant for mean cone weight in all three clonal tests, but for the number of cones it was significant in only one test. Combining the new SNP array with the phenotyping of commercially relevant traits in genomic prediction models proved very promising for identifying superior clones for cone weight. This approach opens new perspectives for early selection.
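The abstract relies on two standard quantitative-genetics steps: estimating a genomic relationship matrix (GRM) from SNP genotypes, and measuring predictive ability (ry) as the correlation between genomically predicted and observed phenotypes in held-out clones. The sketch below illustrates these steps with a VanRaden-style GRM and a minimal GBLUP predictor on simulated data; it is not the authors' pipeline, and all data, sample sizes, and the heritability value are hypothetical assumptions for illustration.

```python
import numpy as np

def vanraden_grm(M):
    """VanRaden-style genomic relationship matrix from a genotype
    matrix M (n clones x m SNPs, coded 0/1/2 minor-allele counts)."""
    p = M.mean(axis=0) / 2.0                # estimated allele frequencies
    Z = M - 2.0 * p                          # center each SNP by 2p
    denom = 2.0 * np.sum(p * (1.0 - p))      # scales diagonal toward 1
    return Z @ Z.T / denom

def gblup_predict(G, y, train, test, h2=0.5):
    """Minimal GBLUP: predict test-set breeding values from training
    phenotypes, with lambda = (1 - h2) / h2 as the shrinkage ratio.
    h2 = 0.5 is an assumed heritability, not an estimate from the paper."""
    lam = (1.0 - h2) / h2
    mu = y[train].mean()
    Gtt = G[np.ix_(train, train)]
    # u_test = G_test,train (G_train,train + lam I)^-1 (y_train - mu)
    alpha = np.linalg.solve(Gtt + lam * np.eye(len(train)), y[train] - mu)
    return mu + G[np.ix_(test, train)] @ alpha

# Simulated toy data (hypothetical: 100 clones, 500 SNPs)
rng = np.random.default_rng(42)
n, m = 100, 500
M = rng.integers(0, 3, size=(n, m)).astype(float)
beta = rng.normal(0.0, 0.05, m)              # simulated SNP effects
y = M @ beta + rng.normal(0.0, 1.0, n)       # phenotype = genetics + noise

G = vanraden_grm(M)
idx = rng.permutation(n)
train, test = idx[:80], idx[20:] if False else (idx[:80], idx[80:])[1]
train = idx[:80]
pred = gblup_predict(G, y, train, test)
r_y = np.corrcoef(pred, y[test])[0, 1]       # predictive ability (ry)
```

In practice, ry is usually averaged over repeated cross-validation folds, and its significance is judged against the null of zero correlation; here a single 80/20 split keeps the sketch short.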