Continual Learning Using Only Large Language Model Prompting
Published 20 Dec 2024 in cs.CL and cs.AI | (2412.15479v1)
Abstract: We introduce CLOB, a novel continual learning (CL) paradigm in which an LLM is treated as a black box. Learning proceeds incrementally through verbal prompting alone: CLOB neither fine-tunes any part of the LLM nor adds any trainable parameters, making it particularly suitable for LLMs accessible only via APIs. We also propose a new CL technique, called CIS, based on incremental summarization, which additionally overcomes the LLM's input length limit. Experiments show that CIS outperforms the baselines by a very large margin.
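The incremental-summarization idea behind CIS can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual method: `call_llm` is a hypothetical stand-in for any black-box LLM API, and the prompt templates and the 100-word cap are illustrative choices.

```python
# Sketch of prompt-only continual learning via incremental summarization
# (CIS-style). No model parameters are updated; all "learning" lives in
# compact per-class text summaries carried inside the prompt.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a black-box LLM API call.

    A real implementation would send `prompt` to the model (e.g., over
    HTTP) and return its reply; this stub just echoes the last line.
    """
    return prompt.splitlines()[-1][:200]

class CISLearner:
    def __init__(self) -> None:
        # class label -> running natural-language summary of that class
        self.summaries: dict[str, str] = {}

    def learn_task(self, labeled_examples: list[tuple[str, str]]) -> None:
        """Fold each new example into its class summary via prompting.

        Keeping every summary short is what lets all classes seen so far
        fit inside the LLM's input length limit at classification time.
        """
        for text, label in labeled_examples:
            old = self.summaries.get(label, "")
            prompt = (
                "Update the class summary to incorporate the new example. "
                "Keep the summary under 100 words.\n"
                f"Current summary: {old}\n"
                f"New example: {text}"
            )
            self.summaries[label] = call_llm(prompt)

    def classify(self, text: str) -> str:
        """Classify by prompting with all accumulated class summaries."""
        listing = "\n".join(f"{lbl}: {s}" for lbl, s in self.summaries.items())
        prompt = (
            "Given these class summaries, output only the best label.\n"
            f"{listing}\n"
            f"Text: {text}\n"
            "Label:"
        )
        return call_llm(prompt)
```

Because each task only appends to or rewrites short summaries, new classes can arrive incrementally without revisiting earlier tasks' raw data.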