Adoption of fully automated AI systems for developer code advice in security contexts

Determine whether organizations that develop and maintain cybersecurity software will transition from relying primarily on internal documentation (such as wikis and internal guides) to deploying fully automated artificial intelligence systems that provide concrete code advice to developers in security-related development workflows.

Background

In its discussion of non-LLM AI, the paper notes that systems capable of concretely advising developers on code are relevant to security but are not currently prevalent; many organizations instead rely on internal guides and wikis. The paper also raises empirical concerns that LLM-based coding assistants (e.g., GitHub Copilot) may not be ready for reliable use in security-critical contexts, which frames the uncertainty about future organizational practices.

This uncertainty reflects broader questions about the readiness and trustworthiness of AI tools for secure software development, and about whether industry practice will shift from human-maintained documentation to automated AI advisors in the near term.

References

"More often than not, companies will have internal guides, 'wikis', and the like, not fully automated systems. Whether this is changing going forward is currently unknown."

Large Language Models as a (Bad) Security Norm in the Context of Regulation and Compliance (arXiv:2512.16419, Ludvigsen, 18 Dec 2025), Section 2.2 (Other types of AI)