Globally optimal prompt tuning under an extremely limited parameter space
Develop an optimization procedure that finds a globally optimal configuration of learnable prompts in prompt-based continual learning with frozen pre-trained models (such as Vision Transformer or CLIP backbones), specifically under the extremely limited prompt parameter space used by methods like L2P, DualPrompt, and CODA-Prompt. Because only a small pool of prompt vectors is trainable while the backbone stays frozen, standard gradient descent offers no guarantee beyond local optimality, and achieving global optimality for the prompt parameters in this restricted setting remains unresolved.
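To make the constraint concrete, below is a minimal PyTorch sketch of an L2P-style prompt pool placed in front of a frozen backbone. All sizes (D, POOL, LEN, TOPK), the toy transformer encoder, and the mean-pooled query are illustrative assumptions, not the published architectures; the point is that only the pool's keys and prompts are trainable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes; real methods use a pre-trained ViT/CLIP encoder.
D, POOL, LEN, TOPK = 64, 10, 5, 3

class PromptPool(nn.Module):
    """L2P-style pool: key-query matching selects a few learnable prompts."""
    def __init__(self):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(POOL, D))          # matching keys
        self.prompts = nn.Parameter(torch.randn(POOL, LEN, D))  # learnable prompts

    def forward(self, query):  # query: (B, D), e.g. a frozen [CLS] feature
        sim = F.cosine_similarity(query[:, None], self.keys[None], dim=-1)
        idx = sim.topk(TOPK, dim=-1).indices                    # (B, TOPK), discrete
        picked = self.prompts[idx]                              # (B, TOPK, LEN, D)
        return picked.flatten(1, 2)                             # (B, TOPK*LEN, D)

# Toy frozen "backbone" standing in for a pre-trained ViT; only prompts train.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True),
    num_layers=2,
)
for p in backbone.parameters():
    p.requires_grad_(False)

pool = PromptPool()
tokens = torch.randn(8, 16, D)               # (B, seq, D) patch embeddings
query = tokens.mean(dim=1)                   # crude stand-in for a frozen query
x = torch.cat([pool(query), tokens], dim=1)  # prepend the selected prompts
out = backbone(x)

trainable = sum(p.numel() for p in pool.parameters())
frozen = sum(p.numel() for p in backbone.parameters())
print(f"trainable prompt params: {trainable}, frozen backbone params: {frozen}")
```

In this sketch the trainable budget is a few thousand parameters against a frozen backbone orders of magnitude larger, and the discrete top-k prompt selection makes the objective non-smooth in the keys. Both properties illustrate why the open problem above cannot be settled by ordinary local gradient-based tuning.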
References
Additionally, finding a global optimal solution within the extremely limited parameter space of prompts remains an open optimization challenge.
— LibContinual: A Comprehensive Library towards Realistic Continual Learning
(2512.22029 - Li et al., 26 Dec 2025) in Subsection "Prompt-based storage", Section 4.2 (Investigation of Assumption of Unregulated Memory Resources)