
Artificial intelligence is algorithmic mimicry: why artificial "agents" are not (and won't be) proper agents

Published 27 Jun 2023 in cs.AI | (2307.07515v4)

Abstract: What is the prospect of developing artificial general intelligence (AGI)? I investigate this question by systematically comparing living and algorithmic systems, with a special focus on the notion of "agency." There are three fundamental differences to consider: (1) Living systems are autopoietic, that is, self-manufacturing, and therefore able to set their own intrinsic goals, while algorithms exist in a computational environment with target functions that are both provided by an external agent. (2) Living systems are embodied in the sense that there is no separation between their symbolic and physical aspects, while algorithms run on computational architectures that maximally isolate software from hardware. (3) Living systems experience a large world, in which most problems are ill-defined (and not all definable), while algorithms exist in a small world, in which all problems are well-defined. These three differences imply that living and algorithmic systems have very different capabilities and limitations. In particular, it is extremely unlikely that true AGI (beyond mere mimicry) can be developed in the current algorithmic framework of AI research. Consequently, discussions about the proper development and deployment of algorithmic tools should be shaped around the dangers and opportunities of current narrow AI, not the extremely unlikely prospect of the emergence of true agency in artificial systems.

Citations (6)

Summary

  • The paper argues that AI systems are algorithmic mimicry rather than genuine agents due to their lack of autopoiesis and intrinsic goal-setting.
  • It analyzes differences in embodiment, highlighting that unlike living organisms, algorithms separate hardware and software, limiting true interaction with the world.
  • The study contrasts algorithmic 'small worlds' with organismic 'large worlds,' emphasizing that real intelligence requires self-determined problem framing and semantic closure.

Algorithmic Mimicry in AI

This paper (2307.07515) critiques the notion of artificial general intelligence (AGI), arguing that current AI systems, particularly LLMs, are merely sophisticated forms of "algorithmic mimicry" rather than instances of true intelligence or agency. It posits that fundamental differences between organisms and algorithms, especially in autopoiesis, embodiment, and the kind of world they interact with (large vs. small), preclude algorithms from achieving genuine agency or general intelligence.

General Intelligence and Agency

The paper defines general intelligence as extending beyond mere problem-solving to include reasoning, learning, common-sense knowledge, autonomous goal setting, dealing with ambiguity, and the creation of new knowledge representations. It contends that algorithms, confined to syntactic constructs, lack the capacity for semantic understanding and intrinsic goal setting that true agency requires. Organisms, as "natural agents," possess agency rooted in their autopoietic ability to self-manufacture and to pursue their own goals, in contrast with algorithms, whose goals are externally imposed.
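The claim that an algorithm's goals are externally imposed can be made concrete with a toy sketch (hypothetical code, not from the paper): in a standard optimization loop, the target function is a parameter handed in by the programmer, and nothing inside the loop can originate or revise that goal.

```python
# Toy illustration (not from the paper): the target function is an
# external input -- the algorithm cannot set or change its own goal.

def gradient_descent(objective, grad, x0, lr=0.1, steps=100):
    """Minimize an externally supplied objective by gradient descent."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # follows the given gradient; no intrinsic aim
    return x

# The "goal" (minimize (x - 3)^2) is chosen by us, the external agents.
minimum = gradient_descent(lambda x: (x - 3) ** 2,
                           lambda x: 2 * (x - 3),
                           x0=0.0)
print(round(minimum, 3))  # converges near x = 3
```

In the paper's terms, this is a well-defined "small world" problem: the objective, the state space, and the stopping rule are all fixed from outside before the algorithm runs.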

Autopoiesis

The author argues that the primary goal of an organism is to remain alive, a goal that machines do not have. The paper stresses the importance of autopoiesis, the self-manufacturing ability of living systems, as a foundation for agency. This involves organizational closure, in which internal constraints generate and maintain each other, enabling self-determination. The paper claims that while algorithmic systems can mimic aspects of autopoiesis, they lack the hierarchical circularity and organizational complexity necessary for true self-determination and agency. Contemporary AI approaches, including LLMs, exhibit computational complexity but fall short of the organizational complexity inherent in living systems.

Embodiment

The paper addresses the differences between organisms and machines, highlighting the clean separation between hardware and software in computers versus the integrated nature of living systems. In organisms, symbolic and physical aspects are intertwined, exemplified by the relationship between genome and protein. The author claims that this interconnectedness, termed semantic closure, enables autopoiesis and distinguishes living systems from "physics-free" computational architectures. Algorithms, confined to the symbolic field, rely on external agents for physical interaction, lacking the direct embodiment characteristic of living beings. Overcoming this limitation would require algorithms to generate their own hardware based on intrinsic goals, posing significant challenges in computational architecture and materials.

Large Worlds

The paper contrasts the "small world" of algorithms, which is purely syntactic and isolated from real-world semantics, with the "large world" inhabited by organisms, where problems are ill-defined and information is scarce. Organisms create their own frame of reference through interactions with their surroundings, whereas algorithms have a view "from nowhere." In large worlds, organisms must define problems and identify relevant aspects for survival, a challenge that algorithms cannot overcome due to their lack of autopoiesis and intrinsic motivation.

Conclusions

The paper concludes that algorithms and living beings have fundamentally different capabilities. While algorithms excel at well-defined tasks, they lack the capacity for intrinsic goal setting, common-sense reasoning, and dealing with ambiguity, which are essential aspects of general intelligence. The author argues that the term "artificial intelligence" is a misnomer and suggests "algorithmic mimicry" as a more accurate descriptor. The paper warns against attributing human-like qualities to algorithms and emphasizes the need to recognize them as tools to augment human capabilities. The author supports focusing on the dangers of narrow AI applications rather than the possibility of AGI and stresses the importance of regulating algorithms to align with human needs.

