General solution to prompt injection attacks
Establish a general, provable solution that prevents prompt injection attacks across large language model–integrated, agentic AI systems, ensuring that malicious instructions embedded in prompts or external data cannot induce unauthorized behavior.
References
OpenAI's CISO acknowledged that "prompt injection remains an unsolved problem".
— Authenticated Workflows: A Systems Approach to Protecting Agentic AI
(2602.10465 - Rajagopalan et al., 11 Feb 2026) in Section 1 (Introduction)