- The paper examines how stochastic, dynamic, fluid autonomy in agentic AI redefines legal standards for authorship, inventorship, and liability.
- It highlights the use of reinforcement learning, large language models, and planning algorithms to enable AI’s long-term, creative decision-making.
- The study discusses potential legal adaptations, including hybrid attribution and strict liability models, to address intertwined human-AI contributions.
Stochastic, Dynamic, Fluid Autonomy in Agentic AI: Implications for Authorship, Inventorship, and Liability
Abstract
The paper examines the emergence of agentic AI, characterized by stochastic, dynamic, and fluid autonomy, and its impact on existing legal frameworks concerning authorship, inventorship, and liability. Unlike traditional generative AI, agentic AI autonomously executes complex tasks and adapts strategies, producing outputs that are probabilistically varied, contextually informed, and dynamically evolving. This results in an intricate intertwining of human and AI contributions, presenting formidable challenges to legal constructs that necessitate clear attribution of agency, creativity, and responsibility.
Introduction
Agentic AI advances traditional AI capabilities by pursuing long-term goals, making decisions, and orchestrating multi-step processes autonomously. Unlike conventional intelligent agents constrained to narrow tasks, agentic AI systems utilize reinforcement learning, LLMs, and sophisticated planning algorithms to achieve higher-level autonomy and creativity. Examples such as OpenAI's DeepResearch showcase their potential to conduct extensive research autonomously, moving beyond mere support tools to proactive problem solvers. This agentic nature, however, challenges conventional frameworks for authorship, inventorship, and liability, as the entanglement of human and machine efforts becomes inextricable.
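The loop described above — pursuing a goal through multi-step planning, stochastic action selection, and adaptive re-planning — can be sketched in simplified form. This is a toy illustration only, not the architecture of DeepResearch or any real system; every name here (ToyAgent, plan, adapt) is hypothetical.

```python
import random

# Illustrative sketch of an agentic control loop: the agent holds a
# multi-step plan, chooses actions stochastically (mirroring the sampling
# behavior of LLM-based agents), observes results, and revises its plan.
# All names are hypothetical; no real framework's API is assumed.

class ToyAgent:
    def __init__(self, goal, seed=None):
        self.goal = goal
        self.rng = random.Random(seed)
        self.plan = ["search", "draft", "revise"]  # initial multi-step plan
        self.history = []

    def choose_action(self):
        # Stochastic choice: identical states can yield different actions,
        # so two runs on the same goal may diverge ("probabilistically varied").
        weights = [0.5, 0.3, 0.2][: len(self.plan)]
        return self.rng.choices(self.plan, weights=weights, k=1)[0]

    def observe(self, action):
        # Toy environment feedback; a real agent would call tools or APIs here.
        return f"result-of-{action}"

    def adapt(self, action, observation):
        # Dynamic re-planning: completed steps are dropped, and the plan
        # can grow in response to what was observed.
        self.plan.remove(action)
        if "search" in observation and "verify" not in self.plan:
            self.plan.append("verify")

    def run(self, max_steps=10):
        step = 0
        while self.plan and step < max_steps:
            action = self.choose_action()
            obs = self.observe(action)
            self.history.append((action, obs))
            self.adapt(action, obs)
            step += 1
        return self.history

trace = ToyAgent(goal="summarize prior art", seed=1).run()
for action, obs in trace:
    print(action, "->", obs)
```

The point of the sketch is structural: because action selection is sampled and the plan mutates in response to observations, the trajectory — and hence the output — is not fixed in advance by any human instruction, which is precisely what strains attribution-based legal doctrines.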
Authorship
Agentic AI blurs the line between human creativity and AI contribution, undermining traditional copyright doctrines that hinge on human-centric creativity. While some jurisdictions maintain strict human authorship requirements, others have adopted more flexible perspectives. The inability to parse contributions in intertwined creative processes challenges attribution models. Existing proposals like hybrid attribution and dynamic royalties presuppose a clear demarcation of contributions, which is often infeasible in recursive human-AI interactions. Consequently, legal frameworks may need to treat AI outputs as functionally equivalent to human-produced works for practical reasons, focusing instead on the originality and transformative nature of the final work.
Inventorship
Patent law traditionally requires human conception and reduction to practice. Agentic AI, capable of autonomously generating novel solutions, challenges this requirement. The DABUS case highlights the global debate over AI inventorship, with jurisdictions mostly rejecting non-human inventors. However, agentic AI operates along a continuum of human involvement, ranging from no participation to full collaborative partnership. Determining inventorship becomes problematic when AI-generated inventive concepts outstrip the human input that traditionally defined the mental act of conception.
Liability
Agentic AI complicates liability frameworks by merging human control with fluid AI autonomy. The resultant "responsibility gap" and "moral crumple zone" phenomena challenge established legal doctrines. Users may face liability for unforeseeable AI actions, while manufacturers struggle to anticipate the breadth of potential harms from AI's evolving capabilities. Traditional models based on user control and manufacturer foreseeability may prove inadequate. Novel liability schemes, such as strict liability and sector-specific compensation funds, may be necessary to address these complexities equitably.
Conclusion
The intricate interplay between agentic AI and human users destabilizes foundational legal doctrines across authorship, inventorship, and liability. The blurred boundaries and recursive interactions inherent in agentic AI warrant new legal frameworks embracing functional equivalence. By focusing on outcomes rather than origins, this approach sidesteps the impractical task of attributing distinct contributions, offering a pragmatic pathway to legal coherence in an era increasingly defined by machine autonomy.