
Do Zombies Understand? A Choose-Your-Own-Adventure Exploration of Machine Cognition

Published 1 Mar 2024 in cs.CL (arXiv:2403.00499v2)

Abstract: Recent advances in LLMs have sparked a debate on whether they understand text. In this position paper, we argue that opponents in this debate hold different definitions for understanding, and particularly differ in their view on the role of consciousness. To substantiate this claim, we propose a thought experiment involving an open-source chatbot $Z$ which excels on every possible benchmark, seemingly without subjective experience. We ask whether $Z$ is capable of understanding, and show that different schools of thought within seminal AI research seem to answer this question differently, uncovering their terminological disagreement. Moving forward, we propose two distinct working definitions for understanding which explicitly acknowledge the question of consciousness, and draw connections with a rich literature in philosophy, psychology and neuroscience.

References (42)
  1. Multilingual summarization with factual consistency evaluation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3562–3591, Toronto, Canada. Association for Computational Linguistics.
  2. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 610–623.
  3. Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5185–5198, Online. Association for Computational Linguistics.
  4. Bruno Bouzy and Tristan Cazenave. 2001. Computer Go: An AI oriented survey. Artificial Intelligence, 132(1):39–103.
  5. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
  6. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
  7. Sparks of artificial general intelligence: Early experiments with GPT-4.
  8. D. Chalmers. 1996. The conscious mind: in search of a fundamental theory.
  9. David J Chalmers. 1995. Facing up to the problem of consciousness. Journal of consciousness studies, 2(3):200–219.
  10. Noam Chomsky. 2002. Syntactic structures. Mouton de Gruyter.
  11. Paul M Churchland and Patricia Smith Churchland. 1990. Could a machine think? Scientific American, 262(1):32–39.
  12. The pascal recognising textual entailment challenge. In Machine Learning Challenges Workshop.
  13. Stanislas Dehaene and Mariano Sigman. 2012. From a single decision to a multi-step algorithm. Current opinion in neurobiology, 22(6):937–945.
  14. Simone Gozzano. 1995. Consciousness and understanding in the chinese room.
  15. $q^2$: Evaluating factual consistency in knowledge-grounded dialogues via question generation and question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7856–7870, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
  16. Geoffrey Jefferson. 1949. The mind of mechanical man. British Medical Journal, 1(4616):1105.
  17. Robert Kirk. 1974. Sentience and behaviour. Mind, pages 43–60.
  18. Neural correlates of consciousness: progress and problems. Nature Reviews Neuroscience, 17:307–321.
  19. SummaC: Re-visiting NLI-based models for inconsistency detection in summarization. Transactions of the Association for Computational Linguistics, 10:163–177.
  20. Christopher D. Manning. 2022. Human language understanding & reasoning. Daedalus, 151:127–138.
  21. Gary Marcus. 2022. Nonsense on stilts.
  22. John McCarthy. 1990. Chess as the Drosophila of AI.
  23. The strength of weak integrated information theory. Trends in Cognitive Sciences, 26:646–655.
  24. Factscore: Fine-grained atomic evaluation of factual precision in long form text generation. ArXiv preprint, abs/2305.14251.
  25. Melanie Mitchell and David C. Krakauer. 2022. The debate over understanding in ai’s large language models. Proceedings of the National Academy of Sciences of the United States of America, 120.
  26. Thomas Nagel. 1974. What is it like to be a bat? The philosophical review, 83(4):435–450.
  27. Meghan O’Gieblyn. 2021. God, human, animal, machine: Technology, metaphor, and the search for meaning. Anchor.
  28. Anat Perry. 2023. Ai will never convey the essence of human empathy. Nature Human Behaviour, 7:1808 – 1809.
  29. Steven T. Piantadosi and Felix Hill. 2022. Meaning without reference in large language models. ArXiv preprint, abs/2208.02957.
  30. John Searle. 2010. Why dualism (and materialism) fail to account for consciousness. Questioning nineteenth century assumptions about knowledge, III: Dualism, pages 5–48.
  31. John R Searle. 1980. Minds, brains, and programs. Behavioral and brain sciences, 3(3):417–424.
  32. Claude E Shannon. 1950. Programming a computer for playing chess. Philosophical Magazine, 41(314):256–275.
  33. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489.
  34. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. ArXiv preprint, abs/1712.01815.
  35. Daniel Stoljar. 2024. Physicalism. In Edward N. Zalta and Uri Nodelman, editors, The Stanford Encyclopedia of Philosophy, Spring 2024 edition. Metaphysics Research Lab, Stanford University.
  36. Integrated information theory: from consciousness to its physical substrate. Nature Reviews Neuroscience, 17:450–461.
  37. Alan Turing. 1953. Digital computers applied to games. In B. V. Bowden, editor, Faster than thought, pages 286–310. Sir Isaac Pitman & Sons, Ltd., London.
  38. Alan M. Turing. 1950. Computing machinery and intelligence. Mind, LIX:433–460.
  39. Michael Tye. 2021. Qualia. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, Fall 2021 edition. Metaphysics Research Lab, Stanford University.
  40. Erik Van Der Werf. 2004. AI techniques for the game of Go. Citeseer.
  41. Robert Van Gulick. 2022. Consciousness. In Edward N. Zalta and Uri Nodelman, editors, The Stanford Encyclopedia of Philosophy, Winter 2022 edition. Metaphysics Research Lab, Stanford University.
  42. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.

Summary

  • The paper introduces a thought experiment with the chatbot $Z$, challenging the idea that consciousness is required for functional understanding.
  • It employs the philosophical zombie analogy to contrast observable performance in NLP with the presence of genuine conscious experience.
  • The paper outlines future research directions, suggesting that insights from neuroscience may further illuminate AI cognition debates.

Do Zombies Understand? Exploring Machine Cognition Through a Philosophical Lens

Introduction to the Thought Experiment

Recent advancements in LLMs have reignited discussions around machine understanding and cognition. A central debate is whether these models can be said to truly "understand" in any meaningful sense, a question that inevitably leads to considerations of consciousness and subjective experience. In exploring this debate, the paper introduces a thought experiment centered around a hypothetical chatbot, termed $Z$, which demonstrates unprecedented success across all possible NLP benchmarks, despite ostensibly lacking any form of consciousness. The experiment serves to probe the role of consciousness in machine understanding, drawing on differing perspectives within AI research and adjacent fields.

Background on Philosophical Zombies

Philosophical zombies, a concept derived from metaphysical discussions, are beings indistinguishable from humans in their behavior but devoid of conscious experience. Applied to AI, the paper posits an equivalent entity, $Z$, that mirrors human linguistic performance without the subjective experience, framing a discourse on whether such an entity genuinely understands. This analogy leverages historical philosophical debates, notably Chalmers' work on consciousness, to scrutinize AI's potential for understanding.

Understanding Without Consciousness

The paper delineates a functional definition of understanding, reliant solely on observable performance rather than subjective experience. Under this paradigm, a model like $Z$ achieves understanding by performing tasks at a human or superhuman level, detached from any consciousness. Historical benchmarks of AI prowess, such as chess and Go, exemplify the shifting of goalposts that accompanies technological progress, making $Z$ an ultimate benchmark for functional understanding. This perspective views understanding as attainable through incremental progress on NLP tasks, positioning $Z$ as the epitome of a chatbot that functionally understands every aspect of human language without consciousness.

The Necessity of Conscious Experience

In contrast, another school of thought insists on consciousness as indispensable for genuine understanding. This perspective, deeply rooted in philosophical and cognitive science traditions, demands both functional competence and subjective experience. The paper references key discussions, including Turing’s contemplations and Searle’s Chinese Room argument, to emphasize that AI, irrespective of its functional successes, lacks understanding if devoid of consciousness. This viewpoint advocates for a conscious understanding definition and suggests that future AI research might glean insights from neuroscience, particularly theories regarding the neural correlates of consciousness.

Addressing Alternative Perspectives

The paper acknowledges and responds to potential objections to its premise, including the relevance of $Z$'s implementation details and the conceivability of $Z$ itself. It debates these points within the scopes of both functional and conscious understanding, ultimately reinforcing the bifurcation of these viewpoints while inviting further discourse on what constitutes understanding in AI.

Conclusive Thoughts and Future Directions

In conclusion, the paper does not seek to arbitrate between these divergent perspectives but rather to explicate the underlying assumptions about consciousness and understanding in current AI debates. By delineating two distinct definitions of understanding, it opens avenues for more nuanced research agendas and theoretical discussions. Additionally, the paper suggests that this framework can be extended to other aspects of cognition in AI, such as empathy, further exploring the significance of consciousness in attributing human-like traits to machines.

Limitations and Invitations for Future Discussion

The paper’s approach, grounded in philosophical analysis and thought experiments, inherently faces limitations in empirically addressing the complexities of consciousness and machine understanding. It acknowledges potential overlooked viewpoints and encourages a broader engagement with these foundational questions, signaling an ongoing dialogue within the AI research community on the nature of machine cognition and the elusive role of consciousness therein.
