Legal liability for harms caused by MCP-integrated AI agents

Determine how legal liability should be allocated when a Model Context Protocol (MCP)-integrated AI agent causes damage, specifying the respective responsibilities of the AI model developer, the MCP server provider, and the deploying organization, in order to guide regulatory compliance and governance.

Background

MCP-enabled agents can take high-impact actions (e.g., modifying databases, sending messages), raising complex liability issues when harms occur.

The paper highlights that existing legal frameworks are still adapting to AI, and suggests that shared-liability models and due-care obligations may emerge; however, the concrete allocation of responsibility remains unresolved and needs clarification.

References

If an AI agent integrated via MCP causes damage (for example, by deleting customer data or leaking confidential information), who is liable? Is it the developer of the AI model, the provider of the MCP server, or the organization deploying the agent? This is an open legal question.

Systematization of Knowledge: Security and Safety in the Model Context Protocol Ecosystem  (2512.08290 - Gaire et al., 9 Dec 2025) in Section 7.5 Regulatory, Legal, and Ethical Considerations