
Securing Agentic AI: Threat Modeling and Risk Analysis for Network Monitoring Agentic AI System

Published 12 Aug 2025 in cs.CR and cs.AI | (2508.10043v1)

Abstract: Combining LLMs with autonomous agents in network monitoring and decision-making systems introduces serious security risks. This research applies the MAESTRO framework, a seven-layer threat modeling architecture, to expose, evaluate, and mitigate vulnerabilities in agentic AI. A prototype agent system was built in Python using LangChain and WebSocket telemetry, comprising inference, memory, parameter-tuning, and anomaly-detection modules. Two practical threat cases were confirmed: (i) resource exhaustion through a traffic-replay denial-of-service attack, and (ii) memory poisoning through tampering with the historical log file maintained by the agent. Both scenarios produced measurable performance degradation, delayed telemetry updates and increased computational load, as a result of poor system adaptation. A multilayered defense-in-depth approach is recommended, combining memory isolation, planner validation, and real-time anomaly response. These findings verify that MAESTRO is viable for operational threat mapping, prospective risk scoring, and resilient system design. The authors highlight the importance of memory integrity enforcement, monitoring of adaptation logic, and cross-layer communication protection to guarantee agentic AI reliability in adversarial settings.

Summary

  • The paper applies the MAESTRO framework, a seven-layer threat modeling architecture, to the security analysis of agentic AI systems.
  • It employs structured threat modeling to address unique vulnerabilities like instruction manipulation and memory corruption.
  • Experimental validations via DoS and memory poisoning tests confirm the framework's efficacy and highlight areas for future improvement.

Security Considerations for Agentic AI Systems

The research presented in "Securing Agentic AI: Threat Modeling and Risk Analysis for Network Monitoring Agentic AI System" describes the application of the MAESTRO framework to a prototype network monitoring agent. The framework is designed to address and mitigate security concerns specific to agentic AI systems that incorporate LLMs for network monitoring tasks. The paper emphasizes the necessity of a multilayered security approach to manage emerging threats that traditional models such as STRIDE and PASTA fail to adequately address.

MAESTRO Framework and System Architecture

The MAESTRO architecture is structured into seven layers, each addressing a specific aspect of the agentic AI system, from foundation models to the broader agent ecosystem (Figure 1).

Figure 1: Complete seven-layer MAESTRO architecture.

  1. Foundation Models (Layer 1): This layer involves pre-trained and fine-tuned LLMs responsible for the core reasoning capabilities of the agent.
  2. Data Operations (Layer 2): Encompasses all the necessary data pipelines for collecting and managing network performance data.
  3. Agent Frameworks (Layer 3): Responsible for the orchestration and decision-making processes, integrating various subsystems to enable agent actions.
  4. Deployment and Infrastructure (Layer 4): Details containerized environments and endpoint interactions which support the agent's deployment.
  5. Evaluation and Observability (Layer 5): Monitoring and assessment procedures that ensure the system's ongoing performance and integrity.
  6. Security and Compliance (Layer 6): Covers the protocols and standards ensuring the system's regulatory compliance.
  7. Agent Ecosystem (Layer 7): The interaction interface for the agent with other systems and human operators.
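The layered decomposition above lends itself to a simple lookup structure, so that threats identified during modeling can be tagged with the layer(s) they affect. A minimal sketch in Python: the layer names come from the paper, while the `tag_threat` helper and its output shape are illustrative assumptions, not the paper's tooling.

```python
# The seven MAESTRO layers as a lookup table (names taken from the paper).
MAESTRO_LAYERS = {
    1: "Foundation Models",
    2: "Data Operations",
    3: "Agent Frameworks",
    4: "Deployment and Infrastructure",
    5: "Evaluation and Observability",
    6: "Security and Compliance",
    7: "Agent Ecosystem",
}

def tag_threat(name: str, layers: list[int]) -> dict:
    """Attach the affected MAESTRO layers to a named threat (illustrative)."""
    return {"threat": name, "layers": [MAESTRO_LAYERS[n] for n in layers]}
```

For example, memory corruption touches both the data pipelines (Layer 2) and the agent's decision logic (Layer 3), so it would be tagged with both.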

Threat Landscape and Considerations

The agent's architecture significantly broadens its attack surface, demanding meticulous threat modeling strategies. The research outlines several potential threats associated with agentic AI systems.

  1. Instruction Manipulation: Alteration of input prompts to maliciously redirect agent behavior.
  2. Goal Manipulation: Subtle shifts in the agent's objectives due to ambiguous feedback or injected telemetry data.
  3. Memory Corruption: Injections or alterations in the memory/context data that inform future agent decisions.
  4. Resource Exhaustion: Attacks that flood the system with data, thereby degrading its performance.
  5. Multi-Agent Exploitation: Activities that leverage shared memory to subvert multiple agents.

These threats necessitate a methodical mapping of vulnerabilities across MAESTRO's strata to formulate effective mitigation strategies.
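The abstract also mentions prospective risk scoring. One common way to rank threats like those above is a likelihood x impact product; the sketch below uses that generic heuristic with a 1-5 scale. Both the scale and the example scores are assumptions for illustration, not values taken from the paper.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Generic likelihood x impact heuristic, both on a 1-5 scale (assumed)."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

# Hypothetical (likelihood, impact) pairs for the five threats above.
threats = {
    "Instruction Manipulation": (4, 5),
    "Goal Manipulation": (3, 4),
    "Memory Corruption": (4, 5),
    "Resource Exhaustion": (5, 3),
    "Multi-Agent Exploitation": (2, 5),
}

# Rank threats from highest to lowest score to prioritize mitigations.
ranked = sorted(threats, key=lambda t: risk_score(*threats[t]), reverse=True)
```

A ranking like this helps decide which MAESTRO layers deserve mitigation effort first.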

Mitigation Strategies

The deployment of multilayered defenses is pivotal to ensuring robust agent security. Key strategies include (Figure 2):

Figure 2: Prevention strategies for securing an autonomous agent across the MAESTRO layers.

  1. Input Validation and Sanitization: Prevention techniques to guard against erroneous agent instructions.
  2. Memory Isolation and Contextual Integrity: Focus on fortifying memory-related operations to avert data poisoning.
  3. Secure Tool Access: Utilization of capability-based control access to limit agent actions to trusted operations.
  4. Anomaly Detection Mechanisms: Real-time monitoring to swiftly identify and roll back potential system disruptions.
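The first strategy, input validation and sanitization, can be sketched concretely: reject telemetry messages carrying unexpected fields, wrong types, or out-of-range values before they reach the agent's planner. The field names and schema below are hypothetical, a minimal example of the technique rather than the paper's implementation.

```python
# Hypothetical telemetry schema: allowed field names and their expected types.
ALLOWED_FIELDS = {"timestamp": float, "latency_ms": float, "packet_loss": float}

def validate_telemetry(msg: dict) -> bool:
    """Accept a message only if it matches the schema exactly and all
    values are non-negative floats; everything else is rejected."""
    if set(msg) != set(ALLOWED_FIELDS):
        return False  # unexpected or missing fields
    for field, typ in ALLOWED_FIELDS.items():
        value = msg[field]
        if not isinstance(value, typ) or value < 0:
            return False  # wrong type or out-of-range value
    return True
```

Validating at the boundary like this narrows the instruction-manipulation and goal-manipulation attack surfaces, since injected telemetry never reaches the planner.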

Experimental Validation and Observations

Two high-impact test cases validate the vulnerabilities and security risk assessment of the agentic AI system:

  1. Resource Exhaustion Test: Simulates a Denial-of-Service (DoS) scenario that validates the system's susceptibility to excessive load through TCP replay.
  2. Memory Poisoning Test: Validates the impact of corrupted historical data on the agent's decision-making pipeline, highlighting the potential for cascading failures if memory management is compromised (Figure 3).

    Figure 3: System dashboard after the DoS attack.
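The memory poisoning test motivates the memory-integrity enforcement the authors recommend. One standard way to realize it, shown here as a hedged sketch rather than the paper's method, is to tag each history entry with an HMAC keyed by a secret the agent holds, and to verify the tag before replaying history into the planner. The key handling is deliberately simplified.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"agent-memory-key"  # in practice, loaded from a secrets manager

def seal(entry: dict) -> dict:
    """Attach an HMAC-SHA256 tag to a memory entry before persisting it."""
    payload = json.dumps(entry, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"entry": entry, "tag": tag}

def verify(sealed: dict) -> bool:
    """Recompute the tag and compare in constant time; any tampering with
    the stored entry invalidates the tag."""
    payload = json.dumps(sealed["entry"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sealed["tag"], expected)
```

Under this scheme, the log-file tampering used in the memory poisoning test would be detected at load time, letting the agent quarantine the corrupted entries instead of adapting to them.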

Limitations and Future Work

The current system exhibits limitations in scalability and resilience under certain conditions, such as single-node deployments and lack of end-to-end cryptographic controls. Future research directions include:

  • Multi-Agent Coordination: Enhancing trust and consensus across distributed agents.
  • Adversarial Robustness: Strengthening defense mechanisms against adversarial manipulations at all MAESTRO layers.
  • Cross-Layer Security Orchestration: Improving dynamic response strategies and security controls throughout the operational layers.

Conclusion

The MAESTRO framework provides a comprehensive approach to modeling and mitigating threats within agentic AI systems, emphasizing the importance of cross-layer analysis and multilayered defense strategies. Experimental results underscore the necessity for enhanced security measures tailored to the adaptive nature of agentic AI, setting the direction for future advancements in AI security frameworks.
