Generative & Agentic AI
- Generative and Agentic AI are distinct paradigms: generative AI produces stateless outputs from prompts, while agentic AI maintains persistent state and plans multi-step actions.
- Agentic AI systems employ closed-loop architectures integrating perception, planning, and execution to adapt proactively in real-world scenarios.
- The emergence of agentic AI challenges legal, economic, and creative frameworks by blurring IP boundaries, diffusing accountability, and reshaping market dynamics.
Agentic AI encompasses systems capable of maintaining internal state, pursuing long-horizon objectives, planning multi-step workflows, and autonomously executing complex actions in real or digital environments. This paradigm extends foundational generative AI—notably transformer-based large language and multimodal models that perform stateless text/image/code generation—by embedding agency, persistent memory, tool use, and proactive adaptation. Where generative AI operates in single-turn, prompt–response fashion, agentic AI is defined by autonomy, proactivity, and a closed-loop perception–reasoning–action cycle. The deployment of agentic systems challenges prevailing legal, economic, and creative frameworks, exposes accountability gaps, and transforms the architecture of markets, governance, and human–AI collaboration (Mukherjee et al., 1 Feb 2025).
1. Formal Definitions and Distinctive Properties
Generative AI systems are stateless conditional generators: for a prompt $x$, content is produced by sampling $y \sim p_\theta(y \mid x)$, with no persistent goal or environment state. Agentic AI, in contrast, is formalized via planning models (MDP-style abstraction), comprising:
- State space $S$, action space $A$, transition function $T: S \times A \to S$, and utility function $U: S \times A \to \mathbb{R}$.
- Agent policy $\pi: S \to A$ maximizing cumulative rewards across horizon $H$, i.e. $\pi^* = \arg\max_\pi \mathbb{E}\left[\sum_{t=0}^{H} U(s_t, a_t)\right]$.
- Persistence: the agent maintains state over time and adapts actions in response to a changing environment.
- Degree of autonomy can be indexed on a continuum, from fully user-driven to fully self-directed operation.
Agentic AI thus structurally diverges from generative AI, exhibiting autonomy in planning and execution, proactive subgoal setting, and dynamic interaction with its operational context (Mukherjee et al., 1 Feb 2025, Schneider, 26 Apr 2025, Ren et al., 2 Jul 2025, Ali et al., 29 Oct 2025).
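The stateless/stateful contrast above can be illustrated with a toy sketch: a generative call that depends only on its prompt, versus a greedy agentic rollout over a small deterministic MDP. The five-state environment, goal state, and utility function here are illustrative assumptions, not a model from the cited papers.

```python
# Toy contrast: stateless generative call vs. agentic rollout over an MDP.
ACTIONS = [-1, +1]  # move left / move right over states 0..4; goal is state 4

def transition(s, a):
    """Deterministic transition T(s, a) -> s'."""
    return max(0, min(4, s + a))

def utility(s, a):
    """U(s, a): negative distance of the successor state to the goal."""
    return -abs(4 - transition(s, a))

def generative(prompt):
    """Stateless conditional generation: output depends only on the prompt."""
    return f"response to {prompt!r}"

def agentic_rollout(s0, horizon=10):
    """Greedy policy pi(s) = argmax_a U(s, a), executed over a horizon."""
    s, total = s0, 0.0
    for _ in range(horizon):
        a = max(ACTIONS, key=lambda act: utility(s, act))
        total += utility(s, a)
        s = transition(s, a)
        if s == 4:  # goal reached: stop acting
            break
    return s, total
```

The generative function returns the same output for the same prompt every time; the agentic rollout accumulates state and utility across steps, which is exactly the structural divergence the formalism captures.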
2. Architectural Frameworks and Workflow Orchestration
Agentic systems are composed from modular components within a closed control loop:
- High-level agent loop: initialization of state from user objective; cycles of perception, belief state update, planning (multi-step/PDDL, RL, or tree search), action selection, environment execution, and iterative re-planning (Mukherjee et al., 1 Feb 2025, Nowaczyk, 10 Dec 2025).
- Workflow formalism: a workflow model orchestrates sub-goals $g_1, \dots, g_n$, dependencies encoded as a directed acyclic graph (DAG), and event triggers for adaptive re-planning.
- Core modules: goal manager, planner, tool router, executor (sandbox), memory (multi-tier), verifier/critic, safety monitor (supervisor), telemetry/audit.
- Assurance mechanisms: schema-constrained tool calls, strongly typed action interfaces, simulate-before-actuate hooks, least-privilege tokenization, runtime governance, and deterministic logging for replay and post-mortem analysis (Nowaczyk, 10 Dec 2025).
For agentic design, reliability emerges from disciplined interfaces, layered assurances (plan–verifier, router–simulator, executor–supervisor loops), memory hygiene, and transactionality (Nowaczyk, 10 Dec 2025, V et al., 18 Jan 2026).
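The closed control loop and its layered assurances can be sketched minimally as follows. Function names (`perceive`, `plan`, `verify`, `execute`) and the toy counter environment are assumptions for illustration, not a specific framework's API; the verifier gate stands in for the simulate-before-actuate hooks described above.

```python
# Minimal closed-loop agent skeleton: perceive -> plan -> verify -> execute.

def perceive(env, state):
    """Update the agent's belief state from the environment."""
    state["observation"] = env["value"]
    return state

def plan(state, goal):
    """Propose the next action as a bounded step toward the goal."""
    delta = goal - state["observation"]
    return {"op": "increment", "amount": min(delta, 1)}

def verify(action, state, goal):
    """Verifier/critic gate: reject actions that would overshoot the goal."""
    return state["observation"] + action["amount"] <= goal

def execute(env, action):
    """Sandboxed executor: apply the vetted action to the environment."""
    env["value"] += action["amount"]

def run_agent(env, goal, max_steps=20):
    state = {"observation": None}
    log = []  # deterministic action log for replay and post-mortem analysis
    for step in range(max_steps):
        state = perceive(env, state)
        if state["observation"] >= goal:
            break
        action = plan(state, goal)
        if verify(action, state, goal):  # simulate-before-actuate gate
            execute(env, action)
            log.append((step, action["op"]))
    return env["value"], log
```

Each iteration passes through the plan–verifier and executor pair, and the appended log gives the deterministic trail needed for replay.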
3. Legal, Economic, and Creative Implications
Agentic autonomy disrupts established intellectual property (IP), liability, and market regimes:
- IP and authorship: traditional generative systems vest IP claims in the human author of the prompt; agentic AI blurs ownership between user and service provider, since works “lacking human authorship” may be non-copyrightable. This creates ambiguity in attribution and monetization (Mukherjee et al., 1 Feb 2025).
- Moral crumple zone: expanding autonomous agency diffuses accountability, with end-users and developers becoming liability buffers for algorithmic missteps. Regulatory proposals include strict provider liability, mandatory human-in-the-loop review for high-impact actions, or recognition of algorithmic personhood (Mukherjee et al., 1 Feb 2025).
- Algorithmic markets: symmetrically deployed buyer and seller agents can converge on pricing strategies that amplify tacit collusion and sustain supra-competitive equilibria; the welfare consequences of such dynamics can be modeled formally (Mukherjee et al., 1 Feb 2025, Immorlica et al., 2024, Rothschild et al., 21 May 2025).
- Agentic economy: assistant and service agents programmatically execute transactions; the efficiency gains depend on protocol standardization, market openness, and governance structure. Walled gardens centralize power/profit; open agentic webs enable broader democratization but reduce platform rents (Rothschild et al., 21 May 2025).
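The pricing dynamics behind the tacit-collusion concern can be sketched with two simple adaptive agents playing a repeated duopoly game. The demand function, price grid, and bandit-style learners below are stylized assumptions for illustration; they are not the models analyzed in the cited papers.

```python
import random

random.seed(0)

PRICES = [1, 2, 3, 4]  # shared price grid; 1 is the competitive (Bertrand) price
COST = 0

def demand(p_own, p_rival):
    """Stylized duopoly demand: the low price takes the market, ties split it."""
    if p_own < p_rival:
        return 10
    if p_own == p_rival:
        return 5
    return 0

def profit(p_own, p_rival):
    return (p_own - COST) * demand(p_own, p_rival)

def simulate(rounds=2000, eps=0.1):
    """Two epsilon-greedy agents each track average profit per posted price."""
    q = [{p: 0.0 for p in PRICES} for _ in range(2)]
    n = [{p: 0 for p in PRICES} for _ in range(2)]
    picks = [PRICES[0], PRICES[0]]
    for _ in range(rounds):
        picks = [
            random.choice(PRICES) if random.random() < eps
            else max(PRICES, key=lambda p, i=i: q[i][p])
            for i in range(2)
        ]
        for i in range(2):
            r = profit(picks[i], picks[1 - i])
            n[i][picks[i]] += 1
            q[i][picks[i]] += (r - q[i][picks[i]]) / n[i][picks[i]]
    return picks  # final posted prices
```

Whether such symmetric learners settle at the competitive price or at a supra-competitive one depends on the learning rule and demand structure; this sensitivity is precisely what motivates the empirical market studies and anti-collusion designs discussed in Section 6.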
4. Design Workflows and Human-AI Collaboration
In creative and professional domains, agentic AI manifests in collaborative architectures:
- Authority allocation: a five-dimensional framework (Cognitive Complexity, Degree of Collaboration, Creative Agency, Responsibility, Involvement) supports explicit partitioning of agency between human and AI, facilitating negotiation at each task stage (Wadinambiarachchi et al., 25 Sep 2025).
- Hybrid paradigms: routine, repetitive, and managerial tasks are delegated to agentic roles (Work Coordinator, Resource Steward, Guardian, Reframer, Creative Catalyst), whereas critical ideation, nuanced judgment, and intent specification remain in human control (Wadinambiarachchi et al., 25 Sep 2025).
- Multi-modal and continuous context: agentic systems operate beyond text prompt input, enabling iterative fidelity switching, tracking user preferences, and maintaining opt-in guardrails for privacy and intent management.
- Evaluation and impact: empirical studies of satisfaction, creativity, trust; prototyping of multi-modal, adaptive interfaces (Wadinambiarachchi et al., 25 Sep 2025).
This framework extends to software, manufacturing, and sensing/communication environments via modular agents with profiling, memory, planning, action, and multi-modal fusion modules (Ren et al., 2 Jul 2025, Xie et al., 17 Dec 2025).
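The five-dimensional authority-allocation framework lends itself to an explicit encoding. The field names below follow the dimensions named above, but the numeric thresholds and the three-way allocation rule are assumptions for illustration, not values from the cited study.

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    """One task scored on the five authority-allocation dimensions (0..1)."""
    cognitive_complexity: float    # 0 = routine, 1 = highly complex
    degree_of_collaboration: float
    creative_agency: float         # 0 = mechanical, 1 = core ideation
    responsibility: float          # 0 = low stakes, 1 = high impact
    involvement: float             # desired level of human involvement

def allocate(task: TaskProfile) -> str:
    """Partition agency: delegate routine low-stakes work, keep judgment human."""
    if task.responsibility > 0.7 or task.creative_agency > 0.7:
        return "human-led"        # critical ideation and high-impact decisions
    if task.cognitive_complexity < 0.3 and task.involvement < 0.3:
        return "agent-delegated"  # routine, repetitive, managerial tasks
    return "hybrid"               # negotiated human-AI collaboration
```

Making the allocation rule explicit in this way is what enables the stage-by-stage negotiation of agency the framework describes.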
5. Governance, Transparency, and Normative Alignment
The algorithmic society arising from agentic proliferation requires new oversight models:
- Transparency: provenance logs, cryptographically hashed unified audit trails for tamper evidence and causality tracing (Mukherjee et al., 1 Feb 2025, Nowaczyk, 10 Dec 2025).
- Auditability and verification: periodic third-party audits, red-team stress tests, simulation-based scenario replay; scenario-based verification for worst-case behaviors (Mukherjee et al., 1 Feb 2025, Nowaczyk, 10 Dec 2025).
- Stakeholder governance: councils define domain-specific autonomy thresholds, mandate impact assessments, separate consumer- and supplier-facing agents to minimize conflicts of interest, and enforce disclosure requirements and competition-policy updates (Mukherjee et al., 1 Feb 2025).
- Normative alignment: agentic reward functions can internalize user and societal values, but they risk mis-specification, “value drift,” or reinforcement of proxy objectives. Participatory governance is necessary for dynamic, context-aware value alignment (Mukherjee et al., 1 Feb 2025).
Mechanisms include mandatory human sign-off for critical actions, adaptive scaling of agent autonomy, agentic personhood registration, and harmonization of international standards (Mukherjee et al., 1 Feb 2025, Ali et al., 29 Oct 2025).
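A cryptographically hashed audit trail of the kind described above can be sketched as a simple hash chain: each entry commits to the digest of its predecessor, so any tampered entry breaks verification. This is a minimal illustration of the tamper-evidence property, not a standard format; field layout and function names are assumptions.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder digest for the first entry

def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"event": event, "prev": prev_hash}
    # Canonical serialization (sorted keys) so the digest is reproducible.
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify_chain(log):
    """Recompute every digest; any modified or reordered entry fails."""
    prev = GENESIS
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```

Because each hash commits to the full prior chain, such a log supports both tamper evidence and the causality tracing needed for post-mortem analysis.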
6. Open Questions and Future Research Agendas
Unsolved challenges drive the agentic AI research frontier:
- Autonomy benchmarking: quantification and certification of optimal autonomy levels (depth, adaptivity, decision points) for diverse domains (Mukherjee et al., 1 Feb 2025, Schneider, 26 Apr 2025).
- Legal standing and liability: feasibility of AI agent legal personhood, hybrid liability frameworks balancing user, provider, and autonomous agent culpability (Mukherjee et al., 1 Feb 2025).
- IP, creative agency, and revenue sharing: new frameworks for partial authorship, licensing agent-generated works, crediting agentic modes (Mukherjee et al., 1 Feb 2025).
- Empirical market studies: real-world observations of agentic collusion, design of anti-collusion algorithms (randomization, “no-harm” constraints) (Mukherjee et al., 1 Feb 2025).
- Participatory oversight and value alignment: inclusion of consumer advocates, scenario-based social norm updating for long-horizon goals (Mukherjee et al., 1 Feb 2025).
Technical extensions sought include neuro-symbolic hybrids, robust memory and retrieval, scalable verification, adversarial tool security protocols, open-ended learning, and formal specification/evaluation frameworks (Ali et al., 29 Oct 2025, Nowaczyk, 10 Dec 2025, V et al., 18 Jan 2026).
7. Summary Table: Generative AI vs Agentic AI
| Capability | Generative AI | Agentic AI |
|---|---|---|
| Reasoning | Reactive, single-shot | Iterative, multi-step, planning |
| Autonomy | User-driven, stateless | Self-directed, persistent state, adaptive |
| Execution | No action, content generation | Autonomous action, tool use, environment |
| Context | No memory, no feedback | Maintains state, updates with feedback |
| Collaboration | No multi-agent coordination | Roles, negotiation, multi-agent interaction |
This survey reflects the foundational, architectural, legal, economic, and societal transformations induced by agentic AI. As these systems mature, rigorous evaluation, governance, and interdisciplinary approaches will be required to balance autonomy with accountability, ensure alignment with dynamic values, and maintain trust and fairness in emergent algorithmic societies (Mukherjee et al., 1 Feb 2025, Wadinambiarachchi et al., 25 Sep 2025, Rothschild et al., 21 May 2025, Nowaczyk, 10 Dec 2025, V et al., 18 Jan 2026).