Continual Learning with Pre-Trained Models: A Survey

Abstract

Nowadays, real-world applications often face streaming data, which requires the learning system to absorb new knowledge as data evolves. Continual Learning (CL) aims to achieve this goal while overcoming catastrophic forgetting of former knowledge when learning new concepts. Typical CL methods build the model from scratch to grow with incoming data. However, the advent of the pre-trained model (PTM) era has sparked immense research interest, particularly in leveraging PTMs' robust representational capabilities for CL. This paper presents a comprehensive survey of the latest advancements in PTM-based CL. We categorize existing methodologies into three distinct groups, providing a comparative analysis of their similarities, differences, and respective advantages and disadvantages. Additionally, we offer an empirical study contrasting various state-of-the-art methods to highlight concerns regarding fairness in comparisons. The source code to reproduce these evaluations is available at: https://github.com/sun-hailong/LAMDA-PILOT.
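To make the core idea concrete, below is a minimal sketch of one representative PTM-based CL strategy: keep the pre-trained backbone frozen and accumulate a class-mean prototype per class, classifying by cosine similarity. Because nothing is gradient-updated, earlier classes are never overwritten, which sidesteps catastrophic forgetting. This illustrates only one of the method families the survey covers, not the full taxonomy; the use of timm and the specific ViT checkpoint are assumptions for illustration.

```python
import torch
import timm  # assumed available; any frozen pre-trained feature extractor works

# Load a pre-trained ViT as a frozen feature extractor.
# num_classes=0 makes timm return pooled embeddings instead of logits.
backbone = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)


class PrototypeClassifier:
    """Stores one mean-embedding prototype per class; no gradient updates,
    so prototypes of earlier tasks are never overwritten (no forgetting)."""

    def __init__(self):
        self.prototypes = {}  # class id -> L2-normalized mean embedding

    @torch.no_grad()
    def fit_task(self, images, labels):
        # Extract frozen features for the current task's data and
        # average them per class to form (or refresh) prototypes.
        feats = torch.nn.functional.normalize(backbone(images), dim=1)
        for c in labels.unique().tolist():
            self.prototypes[c] = feats[labels == c].mean(0)

    @torch.no_grad()
    def predict(self, images):
        # Classify by cosine similarity to all prototypes seen so far,
        # across every task encountered in the stream.
        feats = torch.nn.functional.normalize(backbone(images), dim=1)
        classes = sorted(self.prototypes)
        protos = torch.stack([self.prototypes[c] for c in classes])
        sims = feats @ protos.T
        return torch.tensor(classes)[sims.argmax(1)]
```

In use, `fit_task` is called once per incoming task with that task's images and labels, and `predict` evaluates over all classes observed so far; stronger PTM-based methods (e.g., prompt- or adapter-based ones) add lightweight trainable modules on top of this frozen-backbone recipe.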

Date
Aug 13, 2025 12:00 PM — 12:30 PM
Location
Online (Zoom)
Reza Rahimi Azghan
Grad Research Associate

I am a Ph.D. student at Arizona State University. I work as a Graduate Research Associate at the Embedded Machine Intelligence Lab (EMIL) under the supervision of Dr. Hassan Ghasemzadeh.