CKDF-V2: Effectively Alleviating Representation Shift for Continual Learning with Small Memory

IEEE Transactions on Neural Networks and Learning Systems (2025)

Research Center for Graph Computing | Institute of Automation

Abstract
In continual learning (CL), the newly arrived data are often out-of-distribution with respect to the previous ones, causing a drastic representation shift (RS) when the old model is updated on the new data and leading to catastrophic forgetting. In this work, we propose feature boosting calibration (FBC) to tackle this problem. Specifically, an expanded module is trained on all classes, both old and new, to discover critical features missed by the original (old) model. Then, an FBC network (FBCN) is trained to exploit these missed features to calibrate the old representations. Because the missed features supply additional information for distinguishing between the old and new classes, FBCN produces calibrated representations with more transferable features, thus alleviating the RS. Moreover, because only a small memory is available to store samples of the previously learned classes, the data are severely imbalanced between the old and new classes. To cope with this problem, we propose blockwise knowledge distillation (BWKD), which splits the softmax layer into blocks according to class frequency and then distills each block separately, effectively resolving the data imbalance. Building upon these two improvements, we propose a two-stage training framework for CL, named CKDF-V2, an enhanced version of the cascaded knowledge distillation framework (CKDF). Furthermore, we integrate it with a task-token expansion method to develop a novel ViT-based approach for CL built on the vision transformer (ViT). Extensive experiments show that both the convolutional neural network (CNN)-based and ViT-based CKDF-V2 obtain favorable results across multiple CL benchmarks.
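To make the BWKD idea concrete, below is a minimal PyTorch-style sketch of splitting the softmax layer into frequency-based class blocks and distilling each block separately. The function name blockwise_kd_loss, the two-block partition, and the temperature value are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def blockwise_kd_loss(student_logits, teacher_logits, class_blocks, T=2.0):
    # Hypothetical sketch: instead of distilling one softmax over all classes
    # (where the frequent new classes dominate the normalization), the class
    # dimension is split into blocks grouped by class frequency, and a separate
    # KL-divergence term is computed over the softmax restricted to each block.
    loss = 0.0
    for block in class_blocks:
        s = F.log_softmax(student_logits[:, block] / T, dim=1)
        t = F.softmax(teacher_logits[:, block] / T, dim=1)
        loss = loss + F.kl_div(s, t, reduction="batchmean") * (T * T)
    return loss / len(class_blocks)

# Example: 20 old classes held in a small memory and 5 newly arriving classes.
old_block = list(range(0, 20))
new_block = list(range(20, 25))
student_logits = torch.randn(8, 25)
teacher_logits = torch.randn(8, 25)
print(blockwise_kd_loss(student_logits, teacher_logits, [old_block, new_block]))

Because each block is renormalized on its own, the distillation signal for the rarely sampled old classes is no longer drowned out by the abundant new-class logits, which is the imbalance the abstract describes.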
Key words
Blockwise knowledge distillation (BWKD), continual learning (CL), feature boosting calibration (FBC), representation shift (RS)
Chat Paper

Highlights: This paper proposes CKDF-V2, which effectively alleviates the representation shift problem in continual learning through feature boosting calibration (FBC) and blockwise knowledge distillation (BWKD), and further extends the approach to the vision transformer (ViT).

Methods: CKDF-V2 consists of two main components. The first is feature boosting calibration: an expanded module is trained on all old and new classes to discover critical features missed by the original model, and an FBC network then uses these features to calibrate the old representations. The second is blockwise knowledge distillation, which splits the softmax layer into blocks by class frequency and distills each block separately to resolve the data imbalance.
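A rough sketch of how such a calibration network might be wired, assuming the old backbone is frozen, the expanded module has already been trained on old plus new classes, and the FBCN fuses the two feature sets by concatenation followed by a small MLP; the class name FBCN, the fusion strategy, and all layer sizes are assumptions for illustration rather than the paper's actual architecture.

import torch
import torch.nn as nn

class FBCN(nn.Module):
    # Hypothetical calibration network: it takes features from the frozen old
    # backbone together with features from an expanded module trained on all
    # (old + new) classes, and outputs a calibrated old representation.
    def __init__(self, old_dim, exp_dim, out_dim):
        super().__init__()
        self.calibrate = nn.Sequential(
            nn.Linear(old_dim + exp_dim, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, old_feat, exp_feat):
        return self.calibrate(torch.cat([old_feat, exp_feat], dim=1))

# Usage sketch with assumed 512-d features from each branch.
old_feat = torch.randn(8, 512)   # from the frozen old backbone
exp_feat = torch.randn(8, 512)   # from the expanded module (old + new classes)
calibrated = FBCN(old_dim=512, exp_dim=512, out_dim=512)(old_feat, exp_feat)
print(calibrated.shape)          # torch.Size([8, 512])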

Experiments: The authors evaluate the method on multiple continual learning benchmarks, with datasets including Split-MNIST, CIFAR-100, and Tiny-ImageNet; the results show that both the CNN-based and ViT-based CKDF-V2 achieve favorable performance.