
The Relative Importance of Forced and Unforced Temperature Patterns in Driving the Time Variation of Low-Cloud Feedback

Journal of Climate (2024)

Center for Climate Systems Research

Abstract
Atmospheric models forced with observed sea-surface temperatures (SSTs) suggest a trend toward a more-stabilizing cloud feedback in recent decades, partly due to the surface cooling trend in the eastern Pacific (EP) and the warming trend in the western Pacific (WP). Here we show model evidence that the low-cloud feedback has contributions from both forced and unforced feedback components, and that its time variation arises in large part through changes in the relative importance of the two over time, rather than through variations in forced or unforced feedbacks themselves. Initial-condition large ensembles (LEs) suggest that the SST patterns are dominated by unforced variations for 30-year windows ending prior to the 1980s. In general, unforced SSTs are representative of an ENSO-like pattern, which corresponds to weak low-level stability in the tropics and less-stabilizing low-cloud feedback. Since the 1980s, the forced signals have become stronger, outweighing the unforced signals for the 30-year windows ending after the 2010s. Forced SSTs are characterized by relatively uniform warming with an enhancement in the WP, corresponding to a more-stabilizing low-cloud feedback in most cases. The time-evolving SST pattern due to this increasing importance of forced signals is the dominant contributor to the recent stabilizing shift of low-cloud feedback in the LEs. Using single-forcing LEs, we further find that if only greenhouse gases evolve with time, the transition to the domination of forced signals occurs 10-20 years earlier compared to the LEs with full forcings, which can be understood through the compensating effect between aerosols and greenhouse gases.
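The forced-versus-unforced decomposition described above is commonly done by treating the ensemble mean of an initial-condition large ensemble as the forced signal and each member's deviation from it as unforced variability, then comparing the two over rolling 30-year windows. The sketch below is a minimal illustration of that general idea using synthetic data; the array shapes, variable names, and use of NumPy are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

# Illustrative sketch (not the paper's exact procedure): split ensemble SSTs into
# a forced component (ensemble mean) and unforced residuals, then compare the
# magnitudes of their 30-year trends as the window end date advances.

rng = np.random.default_rng(0)
n_members, n_years = 40, 100                      # hypothetical 40-member large ensemble
sst = rng.normal(size=(n_members, n_years)).cumsum(axis=1) * 0.05  # synthetic SST anomalies (K)

forced = sst.mean(axis=0)                          # forced signal: ensemble mean
unforced = sst - forced                            # unforced variability: member deviations

def window_trend(series, start, length=30):
    """Least-squares linear trend (K per year) over a fixed-length window."""
    years = np.arange(length)
    return np.polyfit(years, series[start:start + length], 1)[0]

# Ask when the forced 30-year trend begins to outweigh typical unforced trends,
# analogous to the transition to forced-signal dominance discussed in the abstract.
for end in (50, 75, 100):
    start = end - 30
    f = abs(window_trend(forced, start))
    u = np.mean([abs(window_trend(unforced[m], start)) for m in range(n_members)])
    print(f"window ending year {end}: |forced trend| = {f:.3f}, mean |unforced trend| = {u:.3f}")
```

In a real analysis the same comparison would be made per grid point on observed-period SST fields from the large ensembles, but the ensemble-mean/residual split and the rolling-window trend comparison are the core of the idea.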
Key words
Atmospheric Dynamics