Reliable general-purpose defense against prompt injection
Develop a reliable, general-purpose defense against prompt injection that works consistently across diverse large language model (LLM) and agentic application contexts, and that remains effective against adaptive adversaries who actively adjust their attack strategies.
References
Despite many proposals by the academic community, there is still no (reliable) solution for prompt injection that works consistently in all contexts. In the general case, especially against adaptive adversaries, it continues to be an open problem.
— When AIOps Become "AI Oops": Subverting LLM-driven IT Operations via Telemetry Manipulation
(Pasquini et al., arXiv:2508.06394, 8 Aug 2025), Section "Securing AIOps", Subsection "AIOpsShield"