Continual Learning

Literature on Continual Learning (also called Lifelong / Incremental / Cumulative Learning).

Other Awesome Lists

Courses & Tutorials

Surveys

  • Continual Lifelong Learning with Neural Networks: A Review. German I. Parisi, et al. Neural Networks 2019. [Paper]

    A survey of continual learning approaches: regularization, dynamic architectures, and rehearsal.

  • Three Scenarios for Continual Learning. Gido M. van de Ven and Andreas S. Tolias. arXiv 2019. [Paper] [Code]

    A survey of three scenarios for continual learning: task-, domain-, and class-incremental learning.

  • Continual Learning: A Comparative Study on How to Defy Forgetting in Classification Tasks. Matthias De Lange, et al. arXiv 2019. [Paper]

  • Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges. Timothée Lesort, et al. Inf. Fusion 2020. [Paper]

Theses

  • Continual Learning with Deep Architectures. Vincenzo Lomonaco. University of Bologna, 2019. [Thesis]

  • Continual Learning with Neural Networks. Sayna Ebrahimi. UC Berkeley, 2020. [Thesis]

  • Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes. Timothée Lesort. ENSTA-ParisTech, 2020. [Thesis]

Blogs & Communities

Approaches

Regularization

Impose constraints on updates to the network weights, penalizing changes to parameters that were important for previous tasks.
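
For concreteness, here is a minimal sketch of one such constraint, an EWC-style quadratic penalty; the function name and the precomputed `old_params` / `importance` dictionaries are illustrative, not code from any of the papers below.

```python
import torch

def regularization_penalty(model, old_params, importance, lam=100.0):
    """EWC-style penalty: lam/2 * sum_i F_i * (theta_i - theta*_i)^2.

    `old_params` snapshots the weights after the previous task; `importance`
    holds a per-parameter importance estimate (EWC uses the diagonal of the
    Fisher information). Both are assumed to be precomputed.
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# In the training loop for a new task:
#   loss = task_loss + regularization_penalty(model, old_params, importance)
```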

  • Learning without Forgetting. Zhizhong Li and Derek Hoiem. ECCV 2016. [Paper] [Code] (LwF)

  • Overcoming Catastrophic Forgetting in Neural Networks. James Kirkpatrick, et al. PNAS 2017. [Paper] (Elastic Weight Consolidation, EWC)

  • Continual Learning Through Synaptic Intelligence. Friedemann Zenke, et al. ICML 2017. [Paper] [Code] (Intelligent Synapses, IS)

  • Online Structured Laplace Approximations For Overcoming Catastrophic Forgetting. Hippolyt Ritter, et al. NIPS 2018. [Paper]

  • Improving and Understanding Variational Continual Learning. Siddharth Swaroop, et al. NIPS 2018 Continual Learning Workshop. [Paper] [Code]

  • Memory Aware Synapses: Learning What (Not) to Forget. Rahaf Aljundi, et al. ECCV 2018. [Paper] [Code]

  • Task Agnostic Continual Learning Using Online Variational Bayes. Chen Zeno, et al. arXiv 2018. [Paper] [Code]

  • Task-Free Continual Learning. Rahaf Aljundi, et al. CVPR 2019. [Paper]

  • Online Continual Learning with Maximally Interfered Retrieval. Rahaf Aljundi, et al. NIPS 2019. [Paper]

  • Uncertainty-based Continual Learning with Adaptive Regularization. Hongjoon Ahn, et al. NIPS 2019. [Paper] [Code]

  • Efficient continual learning in neural networks with embedding regularization. Jary Pomponi, et al. Neurocomputing 2020. [Paper]

  • Uncertainty-guided Continual Learning with Bayesian Neural Networks. Sayna Ebrahimi, et al. ICLR 2020. [Paper] [Code]

  • Continual Learning with Node-Importance based Adaptive Group Sparse Regularization. Sangwon Jung, et al. CVPR 2020 Workshop on Continual Learning in Computer Vision. [Paper]

  • SOLA: Continual Learning with Second-Order Loss Approximation. Dong Yin, et al. arXiv 2020. [Paper]

  • CPR: Classifier-Projection Regularization for Continual Learning. Sungmin Cha, et al. ICLR 2021. [Paper] [Code]

Rehearsal

Extra Memory

Use extra memory to store data from previous tasks and rehearse it while learning new tasks.
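
A minimal sketch of this idea, using reservoir sampling so the buffer stays a uniform sample of everything seen so far (a common choice in experience-replay baselines; the class below is illustrative, not from any specific paper):

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples, filled by reservoir sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # stored (x, y) pairs from earlier tasks
        self.seen = 0    # total number of examples observed so far

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a random slot so each seen example survives
            # with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

# Training on a new task: mix a replay batch into each new batch, e.g.
#   batch = new_batch + buffer.sample(len(new_batch))
```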

  • Gradient Episodic Memory for Continual Learning. David Lopez-Paz and Marc'Aurelio Ranzato. NIPS 2017. [Paper] [Code]

  • iCaRL: Incremental Classifier and Representation Learning. Sylvestre-Alvise Rebuffi, et al. CVPR 2017. [Paper] [Code]

  • Variational Continual Learning. Cuong V. Nguyen, et al. ICLR 2018. [Paper] [Code] (VCL)

  • Experience Replay for Continual Learning. David Rolnick, et al. NIPS 2019. [Paper]

  • Continual Learning with Bayesian Neural Networks for Non-Stationary Data. Richard Kurle, et al. ICLR 2020. [Paper]

  • Graph-Based Continual Learning. Binh Tang and David S. Matteson. ICLR 2021. [Paper]

  • Gradient Projection Memory for Continual Learning. Gobinda Saha, et al. ICLR 2021. [Paper] [Code]

Generative Replay

Mimic past data with generative models (GANs, VAEs, etc.) and replay the generated samples instead of stored ones.
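
A minimal sketch in the spirit of deep generative replay (Shin et al., NIPS 2017): pseudo-inputs are sampled from a frozen copy of the previous generator and pseudo-labeled by the previous solver. The `old_generator.sample` and `old_solver` interfaces are assumptions for illustration.

```python
import torch

def mixed_batch(x_new, y_new, old_generator=None, old_solver=None):
    """Combine real new-task data with replayed pseudo-data."""
    if old_generator is None:
        return x_new, y_new                        # first task: nothing to replay
    with torch.no_grad():
        x_rep = old_generator.sample(len(x_new))   # pseudo-inputs from a GAN/VAE
        y_rep = old_solver(x_rep).argmax(dim=1)    # pseudo-labels from the old solver
    return torch.cat([x_new, x_rep]), torch.cat([y_new, y_rep])

# Both the solver and the generator are then trained on the mixed batch.
```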

  • Continual Learning with Deep Generative Replay. Hanul Shin, et al. NIPS 2017. [Paper]

  • FearNet: Brain-Inspired Model for Incremental Learning. Ronald Kemker and Christopher Kanan. ICLR 2018. [Code]

  • Generative Models from the perspective of Continual Learning. Timothée Lesort, et al. IJCNN 2019. [Paper] [Code]

  • Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning. Oleksiy Ostapenko, et al. CVPR 2019. [Paper] [Code]

  • Continual Unsupervised Representation Learning. Dushyant Rao, et al. NIPS 2019. [Paper] [Code]

    Continual learning without task boundaries via dynamic expansion (Dirichlet process) and generative replay (VAE).

Dynamic Expansion

Increase network capacity to handle new tasks without affecting previously learned parts of the network.
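
A much-simplified sketch of the freeze-and-grow pattern: freeze everything learned so far and add a fresh task-specific head. Progressive Neural Networks additionally add lateral connections from the frozen columns; class and method names below are illustrative.

```python
import torch.nn as nn

class ExpandableNet(nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, n_classes)])

    def add_task(self, n_classes):
        # Freeze all previously learned parameters so old tasks are untouched...
        for p in self.parameters():
            p.requires_grad_(False)
        # ...then grow a fresh head; only its parameters remain trainable.
        self.heads.append(nn.Linear(self.trunk[0].out_features, n_classes))

    def forward(self, x, task_id):
        return self.heads[task_id](self.trunk(x))
```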

  • Net2Net: Accelerating Learning via Knowledge Transfer. Tianqi Chen, et al. ICLR 2016. [Paper]

  • Progressive Neural Networks. Andrei A. Rusu, et al. arXiv 2016. [Paper]

  • Expert Gate: Lifelong Learning with a Network of Experts. Rahaf Aljundi, et al. CVPR 2017. [Paper] [Re-implementation]

  • Random Path Selection for Continual Learning. Jathushan Rajasegaran, et al. NIPS 2019. [Paper] [Code]

  • Compacting, Picking and Growing for Unforgetting Continual Learning. Steven C. Y. Hung, et al. NIPS 2019. [Paper] [Code]

    Gradual model pruning (compacting) → train a 0-1 mask to select reusable weights from previous tasks (picking) → dynamically expand for the new task (growing).

  • Continual Unsupervised Representation Learning. Dushyant Rao, et al. NIPS 2019. [Paper] [Code]

    Continual learning without task boundaries via dynamic expansion (Dirichlet process) and generative replay (VAE).

  • Continual Learning with Adaptive Weights (CLAW). Tameem Adel, et al. ICLR 2020. [Paper]

  • A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning. Soochan Lee, et al. ICLR 2020. [Paper] [Code]

Task Free

  • Task Agnostic Continual Learning Using Online Variational Bayes. Chen Zeno, et al. arXiv 2018. [Paper] [Code]

  • Task-Free Continual Learning. Rahaf Aljundi, et al. CVPR 2019. [Paper]

  • Continual Unsupervised Representation Learning. Dushyant Rao, et al. NIPS 2019. [Paper] [Code]

    Continual learning without task boundaries via dynamic expansion (Dirichlet process) and generative replay (VAE).

  • Reconciling Meta-Learning and Continual Learning with Online Mixtures of Tasks. Ghassen Jerfel, et al. NIPS 2019. [Paper]

  • Continuous Meta-Learning without Tasks. James Harrison, et al. arXiv 2019. [Paper] [Code] [OpenReview]

    Integrate a Bayesian online changepoint detection algorithm with existing meta-learning approaches to enable meta-learning in task-unsegmented settings.

  • A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning. Soochan Lee, et al. ICLR 2020. [Paper] [Code]

  • Task Agnostic Continual Learning via Meta Learning. Xu He, et al. ICML 2020 LifelongML Workshop. [Paper]

+ Meta Learning

See also a separate list of literature on Meta Learning.

  • Meta Continual Learning. Risto Vuorio, et al. arXiv 2018. [Paper]

    Train an RNN as an optimizer that leverages information from both current and previous tasks to learn to preserve previously learned parameters.

  • Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference. Matthew Riemer, et al. ICLR 2019. [Paper] [Code] (Meta-Experience Replay, MER)

    Combine Reptile (a meta-learning algorithm) with experience replay to learn rapidly from current experience while preserving past knowledge (see the sketch after this list).

  • Meta-Learning Representations for Continual Learning. Khurram Javed and Martha White. NIPS 2019. [Paper] [Code] [Poster]

  • Reconciling Meta-Learning and Continual Learning with Online Mixtures of Tasks. Ghassen Jerfel, et al. NIPS 2019. [Paper]

  • Continuous Meta-Learning without Tasks. James Harrison, et al. arXiv 2019. [Paper] [Code] [OpenReview]

    Integrate a Bayesian online changepoint detection algorithm with existing meta-learning approaches to enable meta-learning in task-unsegmented settings.

  • Task Agnostic Continual Learning via Meta Learning. Xu He, et al. ICML 2020 LifelongML Workshop. [Paper]

  • La-MAML: Look-ahead Meta Learning for Continual Learning. Gunshi Gupta, et al. NIPS 2020. [Paper] [Code]
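
As referenced above, a much-simplified sketch of the MER-style update: take several SGD steps on mixed current-plus-replay batches, then apply a Reptile meta-update by moving the weights only part of the way toward the result. The paper also interleaves replay within batches; names and hyperparameters here are illustrative.

```python
import torch
import torch.nn.functional as F

def mer_outer_step(model, optimizer, mixed_batches, gamma=0.1):
    """`mixed_batches`: mini-batches mixing new data with replayed examples."""
    before = {k: p.detach().clone() for k, p in model.named_parameters()}
    for x, y in mixed_batches:        # inner SGD steps on mixed batches
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Reptile meta-update: theta <- theta + gamma * (theta_k - theta).
    with torch.no_grad():
        for k, p in model.named_parameters():
            p.copy_(before[k] + gamma * (p - before[k]))
```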

+ Reinforcement Learning

  • Reinforced Continual Learning. Ju Xu and Zhanxing Zhu. NIPS 2018. [Paper] [Code]

  • Experience Replay for Continual Learning. David Rolnick, et al. NIPS 2019. [Paper]

+ Generative Modeling

  • Lifelong GAN: Continual Learning for Conditional Image Generation. Mengyao Zhai, et al. ICCV 2019. [Paper]

  • Generative Models from the perspective of Continual Learning. Timothée Lesort, et al. IJCNN 2019. [Paper] [Code]

Bayesian

  • Variational Continual Learning. Cuong V. Nguyen, et al. ICLR 2018. [Paper] [Code] (VCL)

  • Online Structured Laplace Approximations For Overcoming Catastrophic Forgetting. Hippolyt Ritter, et al. NIPS 2018. [Paper]

  • Improving and Understanding Variational Continual Learning. Siddharth Swaroop, et al. NIPS 2018 Continual Learning Workshop. [Paper] [Code]

  • Task Agnostic Continual Learning Using Online Variational Bayes. Chen Zeno, et al. arXiv 2018. [Paper] [Code]

  • Continual Unsupervised Representation Learning. Dushyant Rao, et al. NIPS 2019. [Paper] [Code]

    Continual learning without task boundaries via dynamic expansion (Dirichlet process) and generative replay (VAE).

  • Reconciling Meta-Learning and Continual Learning with Online Mixtures of Tasks. Ghassen Jerfel, et al. NIPS 2019. [Paper]

  • Uncertainty-based Continual Learning with Adaptive Regularization. Hongjoon Ahn, et al. NIPS 2019. [Paper] [Code]

  • Continuous Meta-Learning without Tasks. James Harrison, et al. arXiv 2019. [Paper] [Code] [OpenReview]

    Integrate a Bayesian online changepoint detection algorithm with existing meta-learning approaches to enable meta-learning in task-unsegmented settings.

  • Task Agnostic Continual Learning via Meta Learning. Xu He, et al. ICML 2020 LifelongML Workshop. [Paper]

  • A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning. Soochan Lee, et al. ICLR 2020. [Paper] [Code]

  • Continual Learning with Bayesian Neural Networks for Non-Stationary Data. Richard Kurle, et al. ICLR 2020. [Paper]

  • Uncertainty-guided Continual Learning with Bayesian Neural Networks. Sayna Ebrahimi, et al. ICLR 2020. [Paper] [Code]

New Settings

  • Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning. Massimo Caccia, et al. arXiv 2020. [Paper] [Code]

  • Wandering Within a World: Online Contextualized Few-Shot Learning. Mengye Ren, et al. arXiv 2020. [Paper]

  • Compositional Language Continual Learning. Yuanpeng Li, et al. ICLR 2020. [Paper] [Code]

    Continual learning in NLP for seq2seq-style tasks.

Workshops