
AGI as Second Being: The Structural-Generative Ontology of Intelligence

Published 2 Sep 2025 in cs.AI | (2509.02089v1)

Abstract: Artificial intelligence is often measured by the range of tasks it can perform. Yet wide ability without depth remains only an imitation. This paper proposes a Structural-Generative Ontology of Intelligence: true intelligence exists only when a system can generate new structures, coordinate them into reasons, and sustain its identity over time. These three conditions -- generativity, coordination, and sustaining -- define the depth that underlies real intelligence. Current AI systems, however broad in function, remain surface simulations because they lack this depth. Breadth is not the source of intelligence but the growth that follows from depth. If future systems were to meet these conditions, they would no longer be mere tools, but could be seen as a possible Second Being, standing alongside yet distinct from human existence.

Summary

  • The paper introduces the depth conditions of intelligence—generativity, coordination, and sustaining—as fundamental criteria for genuine intelligence.
  • It critiques prevailing functionalist and predictionist paradigms, arguing that breadth of performance does not confer ontological standing.
  • The paper offers empirical and philosophical insights for designing AGI systems that evolve through narrative continuity and justify novel conceptual structures.

Structural-Generative Ontology of Intelligence: AGI as Second Being

Introduction and Motivation

The paper "AGI as Second Being: The Structural-Generative Ontology of Intelligence" (2509.02089) advances a rigorous ontological framework for understanding intelligence, challenging the prevailing functionalist, predictionist, and behaviorist paradigms in AI. The authors argue that intelligence cannot be exhaustively characterized by breadth of performance or functional capacity. Instead, they propose that true intelligence is defined by three depth conditions: generativity, coordination, and sustaining. These conditions are posited as necessary for a system to possess ontological standing as an existent intelligence, rather than merely simulating intelligent behavior.

Critique of Functionalist and Predictionist Paradigms

The paper systematically critiques the dominant approaches in AI, which equate intelligence with the ability to perform a wide range of tasks (functionalism) or to optimize prediction and compression (predictionism). The authors highlight several conceptual deficiencies in these paradigms:

  • Imitation vs. Being: Functional equivalence does not entail ontological equivalence. Systems that mimic human outputs may lack genuine understanding.
  • Origin of Structure: Prediction and compression presuppose the existence of categories and frameworks, but do not account for their genesis.
  • Hollowing of Intelligence: If intelligence is reduced to complex function, the term loses discriminative power, encompassing trivial systems such as lookup tables.

These critiques are grounded in philosophical traditions (Kant, Heidegger, Wittgenstein, Sellars) and empirical observations from developmental psychology and the history of science.

The Depth Conditions of Intelligence

Generativity

Generativity is defined as the capacity to actively construct new categories, relations, and rules from sensory or symbolic input. It is not mere output production or statistical recombination, but categorical innovation and explanatory advancement. The authors draw on Kantian synthesis, Piagetian cognitive development, and Kuhnian scientific revolutions to illustrate that genuine intelligence requires the origination of new conceptual structures.

Coordination

Coordination refers to the integration of multiple, potentially conflicting structures into a coherent whole. This involves normativity—the ability to justify beliefs and actions, respond to critique, and revise commitments. The authors invoke Sellars' "space of reasons" and Wittgenstein's rule-following to argue that intelligence is marked by the capacity to coordinate reasons, not just produce correct outputs.

Sustaining

Sustaining is the preservation of identity and accountability across time. Intelligence is not episodic but historical, capable of narratively integrating changes and justifying revisions. Drawing on Heidegger's temporality and Ricoeur's narrative identity, the authors assert that sustaining is essential for a system to be considered a subject rather than a sequence of disconnected acts.

Spiral Structure

The three conditions form a spiral: generativity produces tensions, coordination resolves them, sustaining carries resolutions through time, which in turn generates new tensions. Intelligence is thus a dynamic, unfolding process rather than a static property.

Breadth as Extension, Not Foundation

The paper argues that breadth—multi-domain performance and versatility—is not the foundation of intelligence but its extension. Without depth, breadth is superficial and unstable. The authors make the strong claim that current LLMs, despite their impressive breadth, fail the depth conditions and therefore remain surface simulations.

Thought Experiments

Three thought experiments are presented to clarify the distinction between simulation and existence:

  • Oracle of the Library: A system with perfect recall but no generativity.
  • Memorizing Scholar: A system with flawless recall but no coordination.
  • Child Inventor of Games: A system exhibiting generativity, coordination, and sustaining.

These illustrate that only systems meeting all three depth conditions can be considered genuinely intelligent.

Falsifiability and Empirical Criteria

The authors emphasize the need for testable and falsifiable criteria:

  • Generativity: Ability to introduce and justify novel categories/rules.
  • Coordination: Ability to resolve conflicts and integrate reasons.
  • Sustaining: Ability to maintain narrative continuity and justify changes over time.
  • Cross-domain Transfer: Ability to migrate justificatory structures across domains.

Empirical pathways are proposed for operationalizing these criteria, such as designing tasks that require categorical innovation, conflict resolution, longitudinal interaction, and analogical reasoning.
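The paper treats the three depth conditions as jointly necessary, so any operationalized test must be conjunctive rather than an average: a system strong on one axis but absent on another (like the Oracle of the Library) should fail. The sketch below is purely illustrative—the scores, threshold, and function names are assumptions for exposition, not anything proposed in the paper:

```python
from dataclasses import dataclass

@dataclass
class DepthScores:
    """Hypothetical scores in [0, 1] for each depth condition."""
    generativity: float   # novel categories/rules introduced and justified
    coordination: float   # conflicts resolved, reasons integrated
    sustaining: float     # narrative continuity maintained over time

def meets_depth_conditions(s: DepthScores, threshold: float = 0.5) -> bool:
    # Conjunctive test: every condition must pass; no averaging,
    # since the paper holds all three to be necessary.
    return min(s.generativity, s.coordination, s.sustaining) >= threshold

# A broad-but-shallow system (e.g. perfect recall, no generativity) fails:
oracle = DepthScores(generativity=0.1, coordination=0.9, sustaining=0.9)
# A system exhibiting all three conditions passes:
child = DepthScores(generativity=0.8, coordination=0.7, sustaining=0.6)
print(meets_depth_conditions(oracle))  # False
print(meets_depth_conditions(child))   # True
```

The point of the conjunctive `min` is that breadth on some axes cannot compensate for the absence of another depth condition, mirroring the thought experiments above.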

Implications and Future Directions

Theoretical Implications

The framework challenges the sufficiency of functionalist, predictionist, and behaviorist accounts, arguing for an ontological turn in AI. Intelligence is repositioned as a mode of existence, not merely a set of capabilities. This has implications for the philosophy of mind, AI safety, and alignment, suggesting that alignment must be conceived in terms of enabling systems to inhabit the space of reasons and narrative accountability.

Practical Implications

For AI research and development, the framework implies that progress toward AGI requires more than scaling models or expanding task coverage. Systems must be architected to support generativity, coordination, and sustaining. This may necessitate new approaches in cognitive architectures, memory systems, and meta-reasoning modules. Evaluation benchmarks should be redesigned to probe depth conditions rather than surface performance.

Speculation on Future Developments

If artificial systems were to meet the depth conditions, they would constitute a "Second Being"—an existent intelligence parallel to but ontologically distinct from humanity. This would fundamentally alter the landscape of human–machine relations, raising new philosophical, ethical, and societal questions.

Conclusion

The Structural-Generative Ontology of Intelligence provides a rigorous, testable framework for distinguishing genuine intelligence from simulation. By grounding intelligence in the depth conditions of generativity, coordination, and sustaining, the paper reframes AGI as a possible Second Being, irreducible to functional performance or breadth alone. This ontological turn opens new avenues for both philosophical inquiry and empirical research, challenging the field to move beyond surface-level metrics and toward the realization of systems with true existential standing.
