Compliance mechanisms for EU AI Act Article 15(5) attack-prevention requirement

Determine the concrete technical and organizational measures by which providers and deployers of machine-learning, neural-network, and large language model systems can comply with Article 15(5) of Regulation (EU) 2024/1689, which requires prevention of unique attacks against such AI systems (its third subparagraph names data poisoning, model poisoning, adversarial examples or model evasion, and confidentiality attacks), and specify how these measures can be implemented in practice to meet the "where appropriate" standard.

Background

The paper analyzes EU cybersecurity law and highlights Article 15 of the AI Act, noting that Article 15(5) requires preventing unique attacks on AI systems (including ML, neural networks, and LLMs). It emphasizes that this mandate introduces technology-specific obligations aligned with security best practices, but acknowledges uncertainty about how compliance will be achieved, since defending against several of these attack classes remains a hard technical problem.
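To make the attack classes concrete, the following sketch shows a minimal adversarial-example (evasion) attack using the fast gradient sign method, one of the attack types Article 15(5) expects providers to address. This is an illustration only, not drawn from the paper: the toy classifier, the input data, and the perturbation budget epsilon are assumptions chosen for brevity.

```python
# Minimal FGSM evasion sketch (illustrative; the toy model and data are
# assumptions, not taken from the paper or the Regulation).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for a deployed AI system.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)  # benign input
y = torch.tensor([1])                       # its true label

# Fast gradient sign method: perturb the input in the direction that
# maximally increases the model's loss, within an epsilon budget.
loss_fn(model(x), y).backward()
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The attack needs only gradient access and a small perturbation, which is part of why outright prevention, as opposed to detection and mitigation, is considered technically hard.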

This creates an open compliance question: while the legal requirement is clear, the precise methods (tools, processes, and safeguards) needed to satisfy Article 15(5) are not yet established and may require further technical development and consensus.
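As one example of what a candidate technical measure might look like, the sketch below applies adversarial training, a widely cited mitigation for evasion attacks, to the same kind of toy model: each batch is augmented with FGSM-perturbed copies so the model learns to classify them correctly. Whether measures like this meet the "where appropriate" standard is precisely the open question; the synthetic task and all hyperparameters here are illustrative assumptions.

```python
# Adversarial training sketch (illustrative; the synthetic task, epsilon,
# and training schedule are assumptions, not a compliance recipe endorsed
# by the paper or the Regulation).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1

for step in range(200):
    x = torch.randn(64, 20)
    y = (x.sum(dim=1) > 0).long()  # synthetic binary labels

    # Craft FGSM-perturbed variants of the current batch.
    x_pert = x.clone().requires_grad_(True)
    loss_fn(model(x_pert), y).backward()
    x_adv = (x_pert + epsilon * x_pert.grad.sign()).detach()

    # Train jointly on clean and adversarial inputs.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```

Adversarial training improves robustness within the trained perturbation budget but does not prevent attacks outright, which illustrates the gap between Article 15(5)'s prevention language and the current technical state of the art.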

References

Article 15(5), part 3, is worth focussing on because it mandates that all types of unique attacks on machine-learning, neural-network or LLM-based AI should be prevented where appropriate. How this is complied with going forward is unknown, as several of these are hard problems to solve in a technical context.

Large Language Models as a (Bad) Security Norm in the Context of Regulation and Compliance (2512.16419 - Ludvigsen, 18 Dec 2025) in Section 4.1 (Statutory Law)