Globally optimal prompt tuning under an extremely limited parameter space

Develop an optimization procedure that finds a globally optimal configuration of learnable prompts in prompt-based continual learning with frozen pre-trained models (such as Vision Transformer or CLIP backbones), specifically under the constraint of the extremely limited prompt parameter space used by methods such as L2P, DualPrompt, and CODA-Prompt. The goal is to resolve the open optimization challenge of achieving global optimality for prompt parameters within this highly restricted setting.

Background

Within the storage-centric analysis of continual learning, prompt-based methods (e.g., L2P, DualPrompt, CODA-Prompt) store a small number of learnable tokens to condition a frozen pre-trained model. This paradigm is highly memory-efficient and achieves strong performance in online scenarios by updating only lightweight prompt parameters instead of the full backbone.
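The mechanism of storing a small pool of learnable tokens and using them to condition a frozen backbone can be sketched as follows. This is a minimal L2P-style lookup in NumPy; the dimensions, the cosine-similarity key matching, and the top-k value are toy assumptions for illustration, not the exact configuration of any of the cited methods:

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 8    # token embedding size (toy value)
POOL_SIZE = 5    # number of prompts in the pool
PROMPT_LEN = 2   # tokens per prompt
TOP_K = 2        # prompts selected per input

# The only learnable state: a pool of prompts plus one key per prompt.
# Everything else (the backbone) stays frozen.
prompt_pool = rng.standard_normal((POOL_SIZE, PROMPT_LEN, EMBED_DIM))
prompt_keys = rng.standard_normal((POOL_SIZE, EMBED_DIM))

def select_and_prepend(x_tokens, query):
    """Match a query feature against the prompt keys, pick the top-k
    prompts, and prepend their tokens to the input sequence; the
    frozen backbone would then process the extended sequence."""
    sims = (prompt_keys @ query) / (
        np.linalg.norm(prompt_keys, axis=1) * np.linalg.norm(query) + 1e-8)
    top_idx = np.argsort(sims)[-TOP_K:]
    selected = prompt_pool[top_idx].reshape(-1, EMBED_DIM)
    return np.concatenate([selected, x_tokens], axis=0)

# Toy input: 4 patch tokens and a query feature (e.g., a [CLS] embedding).
x = rng.standard_normal((4, EMBED_DIM))
q = rng.standard_normal(EMBED_DIM)
extended = select_and_prepend(x, q)
# The sequence grows by TOP_K * PROMPT_LEN prompt tokens.
```

The memory footprint here is just `POOL_SIZE * (PROMPT_LEN + 1) * EMBED_DIM` scalars, which is what makes the paradigm storage-efficient relative to storing raw exemplars or backbone weights.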

However, the effectiveness of prompt-based approaches depends heavily on the underlying pre-trained model, and the prompt parameter space is intentionally very small. This creates a challenging optimization landscape: despite the small number of parameters, finding a globally optimal prompt configuration that consistently yields strong performance remains unresolved. The paper explicitly notes this as an open optimization challenge, motivating research into algorithms capable of achieving global optimality in such constrained prompt spaces.
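To make the scale of the problem concrete, the sketch below shows one strategy that only becomes affordable because the prompt space is so small: multi-start search with local refinement over a toy nonconvex surrogate loss. The objective, dimensionality, and hyperparameters are all assumptions for illustration; this is not the optimization procedure of any cited method, and it offers no global-optimality guarantee, which is precisely the gap the problem statement targets:

```python
import numpy as np

rng = np.random.default_rng(1)

DIM = 4  # tiny prompt parameter space (toy value)

def loss(p):
    """Toy nonconvex surrogate for a prompt-tuning loss: a quadratic
    bowl plus an oscillatory term that creates many local minima."""
    return np.sum(p ** 2) + 0.5 * np.sum(np.cos(5.0 * p))

def multi_start_search(n_starts=20, steps=200, lr=0.05, eps=1e-4):
    """Random restarts followed by finite-difference gradient descent.
    Exhaustively restarting like this is feasible only because the
    parameter space is tiny; it still cannot certify global optimality."""
    best_p, best_l = None, np.inf
    for _ in range(n_starts):
        p = rng.uniform(-2.0, 2.0, DIM)
        for _ in range(steps):
            # Central finite-difference gradient estimate.
            g = np.zeros(DIM)
            for i in range(DIM):
                e = np.zeros(DIM)
                e[i] = eps
                g[i] = (loss(p + e) - loss(p - e)) / (2 * eps)
            p -= lr * g
        l = loss(p)
        if l < best_l:
            best_p, best_l = p, l
    return best_p, best_l

best_p, best_l = multi_start_search()
```

On this surrogate, restarts reliably escape the many local minima, but the method scales exponentially with dimension and yields no optimality certificate, illustrating why a principled globally optimal procedure for constrained prompt spaces remains open.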

References

"Additionally, finding a global optimal solution within the extremely limited parameter space of prompts remains an open optimization challenge."

LibContinual: A Comprehensive Library towards Realistic Continual Learning (2512.22029 - Li et al., 26 Dec 2025) in Subsection "Prompt-based storage", Section 4.2 (Investigation of Assumption of Unregulated Memory Resources)