
Nowadays, real-world applications often face streaming data, which requires the learning system to absorb new knowledge as data evolves. Continual Learning (CL) aims to achieve this goal while overcoming catastrophic forgetting of former knowledge when learning new tasks. Typical CL methods build the model from scratch to grow with incoming data. However, the advent of the pre-trained model (PTM) era has sparked immense research interest, particularly in leveraging PTMs’ robust representational capabilities for CL. This paper presents a comprehensive survey of the latest advancements in PTM-based CL. We categorize existing methodologies into three distinct groups, providing a comparative analysis of their similarities, differences, and respective advantages and disadvantages. Additionally, we offer an empirical study contrasting various state-of-the-art methods to highlight concerns regarding fairness in comparisons. The source code to reproduce these evaluations is available at: https://github.com/sun-hailong/LAMDA-PILOT.