
Least-cost and 2 °C-Compliant Mitigation Pathways Robust to Physical Uncertainty, Economic Paradigms, and Intergenerational Cost Distribution

Environmental Research: Climate (2024)

Univ Paris Saclay

Abstract
Each run of an integrated assessment model produces a single mitigation pathway consistent with stated objectives (e.g. a maximum temperature) and optimizing some objective function (e.g. minimizing the total discounted costs of mitigation). Even though models can be run thousands of times, it is unclear how built-in assumptions constrain the final set of pathways. Here we aim to broadly explore the space of possible mitigation scenarios for a given mitigation target, and to characterize the sets of pathways that are (near-)optimal, taking uncertainties into account. We produce an extensive set of CO2 emission pathways that stay below 2 °C of warming using a reduced-form climate-carbon model with 1000 different physical states. We then identify 18 sets of quasi 'least-cost' mitigation pathways, under six assumptions about cost functions and three different cost minimization functions embodying different visions of intergenerational cost distribution. A first key outcome is that the absence or presence of inertia in the cost function plays a pivotal role in the resulting set of least-cost pathways. Second, despite inherent structural differences, we find common pathways across the 18 combinations in 96% of the physical states studied. Interpreting these common pathways as robust both economically and in terms of intergenerational distribution, we shed light on some of their characteristics, even though these robust pathways differ for each physical state.
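To make the role of inertia concrete, the sketch below scores an emission pathway by its total discounted mitigation cost, with an optional penalty on year-to-year changes in abatement. This is a hypothetical reduced form for illustration only (the quadratic cost, the `inertia_weight` parameter, and the example pathways are assumptions, not the paper's actual cost functions): it shows why a pathway that delays action and then cuts sharply can look cheap without inertia but costly once inertia is penalized.

```python
def discounted_cost(emissions, baseline, discount_rate=0.03, inertia_weight=0.0):
    """Total discounted mitigation cost of an emission pathway (illustrative).

    Hypothetical reduced form: the annual cost is quadratic in abatement
    (baseline minus emissions), plus an optional 'inertia' penalty on the
    year-to-year change in abatement. The paper's actual cost functions
    differ; this only sketches the structure being compared.
    """
    abatement = [b - e for b, e in zip(baseline, emissions)]
    total = 0.0
    prev = abatement[0]
    for t, a in enumerate(abatement):
        discount = (1.0 + discount_rate) ** (-t)           # discount factor for year t
        total += discount * (a * a + inertia_weight * (a - prev) ** 2)
        prev = a
    return total

baseline = [10.0] * 5                   # GtCO2/yr with no mitigation
gradual = [9.0, 8.0, 7.0, 6.0, 5.0]    # abatement ramps up smoothly
abrupt  = [10.0, 10.0, 5.0, 5.0, 5.0]  # no action at first, then a sharp cut

# Without inertia, pathways are scored only on abatement levels;
# with inertia, the abrupt pathway's sudden jump is penalized.
print(discounted_cost(abrupt, baseline, inertia_weight=0.0))
print(discounted_cost(abrupt, baseline, inertia_weight=2.0))
```

Ranking pathways under both settings of `inertia_weight` mirrors the paper's observation that including or excluding inertia changes which pathways come out as least-cost.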
Key words
mitigation costs, inertia, Paris Agreement, IAM, simple climate models