Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora

Computing Research Repository (CoRR), 2022

University of Southern California | AWS AI Labs

Cited 128 | Views 129
Abstract
Pretrained language models (PTLMs) are typically learned over a large, static corpus and further fine-tuned for various downstream tasks. However, when deployed in the real world, a PTLM-based model must deal with data distributions that deviate from what the PTLM was initially trained on. In this paper, we study a lifelong language model pretraining challenge where a PTLM is continually updated so as to adapt to emerging data. Over a domain-incremental research paper stream and a chronologically-ordered tweet stream, we incrementally pretrain a PTLM with different continual learning algorithms, and keep track of the downstream task performance (after fine-tuning). We evaluate PTLM's ability to adapt to new corpora while retaining learned knowledge in earlier corpora. Our experiments show distillation-based approaches to be most effective in retaining downstream performance in earlier domains. The algorithms also improve knowledge transfer, allowing models to achieve better downstream performance over the latest data, and improve temporal generalization when distribution gaps exist between training and evaluation because of time. We believe our problem formulation, methods, and analysis will inspire future studies towards continual pretraining of language models.
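The abstract highlights distillation-based continual learning as the most effective way to retain earlier-domain performance while adapting to new corpora. Below is a minimal sketch of that general idea, not the authors' released code: it assumes Hugging Face transformers and PyTorch, a hypothetical `batches` iterable of dicts with masked `input_ids`, `attention_mask`, and MLM `labels`, and illustrative loss weights.

```python
# Sketch: distillation-regularized continual pretraining of a masked LM.
# Assumptions (not from the paper): roberta-base backbone, AdamW, and the
# temperature/alpha values below; the paper's exact objective may differ.
import copy
import torch
import torch.nn.functional as F
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("roberta-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
temperature, alpha = 2.0, 0.5   # distillation temperature and weight (assumed)

def pretrain_on_corpus(model, batches):
    """One incremental stage: adapt to a new corpus while distilling from a
    frozen snapshot of the model taken before this stage, to limit forgetting."""
    teacher = copy.deepcopy(model).eval()          # frozen copy holding earlier knowledge
    for p in teacher.parameters():
        p.requires_grad_(False)
    model.train()
    for batch in batches:
        out = model(**batch)                       # MLM loss (labels given) and logits
        with torch.no_grad():
            t_logits = teacher(input_ids=batch["input_ids"],
                               attention_mask=batch["attention_mask"]).logits
        # New-corpus MLM loss plus KL distillation toward the pre-update model.
        kd = F.kl_div(F.log_softmax(out.logits / temperature, dim=-1),
                      F.softmax(t_logits / temperature, dim=-1),
                      reduction="batchmean") * temperature ** 2
        loss = out.loss + alpha * kd
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    return model

# Stream usage: each element of `corpus_stream` is one domain or time slice.
# for batches in corpus_stream:
#     model = pretrain_on_corpus(model, batches)
```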
Key words
Language Modeling, Pretrained Models, Language Understanding, Neural Machine Translation, Machine Translation
Chat Paper

Key points: This paper formulates a lifelong pretraining challenge in which a pretrained language model (PTLM) is continually updated to adapt to emerging data. Different continual learning algorithms are studied on a domain-incremental research-paper stream and a chronologically ordered tweet stream to evaluate the PTLM's ability to adapt to new corpora.

Methods: The PTLM is incrementally pretrained with different continual learning algorithms, and downstream task performance is tracked after fine-tuning.

Experiments: Experiments on the domain-incremental research-paper stream and the chronologically ordered tweet stream show that distillation-based approaches are most effective at retaining downstream performance on earlier domains. These algorithms also improve knowledge transfer, allowing models to achieve better downstream performance on the latest data, and improve temporal generalization when distribution gaps exist between training and evaluation due to time.
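The evaluation described above tracks downstream performance on every domain after each pretraining stage, which is what makes forgetting and transfer measurable. A hedged sketch of such a protocol is shown below; `pretrain_stage`, `fine_tune`, and `evaluate` are hypothetical callables standing in for the experiment harness, not functions from the paper.

```python
# Sketch of a stage-by-stage evaluation loop: after each incremental pretraining
# stage, fine-tune a copy of the current LM on every domain's downstream task and
# record the score, so forgetting (earlier domains) and transfer (latest domain)
# can be tracked across the stream.
import copy

def track_downstream(model, corpus_stream, tasks, pretrain_stage, fine_tune, evaluate):
    history = []                                   # history[t][domain] = score after stage t
    for corpus in corpus_stream:
        model = pretrain_stage(model, corpus)      # continual pretraining on the new corpus
        scores = {}
        for domain, (train_set, test_set) in tasks.items():
            clf = fine_tune(copy.deepcopy(model), train_set)   # fresh fine-tune per task
            scores[domain] = evaluate(clf, test_set)
        history.append(scores)
    return history
```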