Generality and adaptability across multiple tasks in LLM-based systems

Establish methods that enable Large Language Model (LLM)-based systems to achieve both generality and adaptability across multiple tasks, allowing continual acquisition of new abilities without degrading previously learned capabilities.

Background

The review concludes that despite significant progress on specialized LLM-based systems, achieving broad generality and adaptability across many tasks remains unresolved. Incremental learning is proposed as a means to bridge this gap by enabling models to acquire new knowledge over time while preserving existing capabilities.

The authors argue that current approaches often rely on periodic batch updates and external system components rather than true, real-time incremental updates to the core models, underscoring the open nature of this objective.
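To make the contrast concrete, the following is a minimal sketch of one common incremental-update pattern: mixing each batch of new-task examples with replayed examples from earlier tasks to limit forgetting. This is an illustrative assumption, not the paper's method; `ReplayBuffer`, `incremental_update`, and the `model_update_fn` callback are hypothetical names, and the actual model update (e.g. a gradient step) is abstracted away.

```python
import random


class ReplayBuffer:
    """Fixed-size store of past examples, filled via reservoir sampling
    so every example ever seen has an equal chance of being retained."""

    def __init__(self, capacity=100, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            # Replace a stored item with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        return self.rng.sample(self.items, min(k, len(self.items)))


def incremental_update(model_update_fn, new_examples, buffer, replay_ratio=0.5):
    """One incremental step: mix new-task data with replayed old data,
    apply a single (hypothetical) model update, then remember the new data."""
    n_replay = int(len(new_examples) * replay_ratio)
    batch = list(new_examples) + buffer.sample(n_replay)
    model_update_fn(batch)  # e.g. one fine-tuning step on the mixed batch
    for ex in new_examples:
        buffer.add(ex)
    return batch
```

In a batch-update regime, by contrast, old and new data would be pooled and the model periodically retrained from that pool; the sketch above instead updates after every small increment while the replay buffer carries earlier tasks forward.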

References

"However, the goals of generality and adaptability across multiple tasks remain an open problem."

Towards Incremental Learning in Large Language Models: A Critical Review (Jovanovic et al., 2024, arXiv:2404.18311), Section 4 (Conclusion)