Coordination Using Common Knowledge in Generative AI Agents

Determine whether generative AI agents implemented with large language models coordinate rationally using common knowledge, as defined in epistemic frameworks for distributed systems, or whether they instead rely primarily on weaker group-knowledge notions such as mutual knowledge, timestamped common knowledge, or probabilistic common knowledge. Additionally, ascertain the extent to which each form of knowledge is used in practical multi-agent deployments.

Background

The paper adopts a view-based epistemic framework for knowledge in distributed systems and discusses common knowledge versus weaker forms such as mutual knowledge or probabilistic common knowledge. The authors note that simultaneous coordination and common knowledge are mutually dependent in theory, but acknowledge that weaker notions often suffice in practice.
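For concreteness, the standard modal-logic definitions behind this distinction can be sketched as follows (using the usual $K_i$/$E_G$ notation, which may differ from the paper's view-based presentation): writing $K_i \varphi$ for "agent $i$ knows $\varphi$" and $G$ for the group of agents,
\[
E_G \varphi \;=\; \bigwedge_{i \in G} K_i \varphi,
\qquad
C_G \varphi \;=\; \bigwedge_{k \ge 1} E_G^{\,k} \varphi .
\]
Mutual knowledge is a single application of $E_G$ (everyone knows $\varphi$), while common knowledge $C_G \varphi$ requires every finite level of nesting (everyone knows that everyone knows that $\ldots$), which is why it is strictly stronger and harder to attain in practice.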

Within this context, the authors explicitly flag uncertainty about how LLM agents actually coordinate in practical settings, making it an open question whether they use full common knowledge or rely on weaker substitutes.
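The gap between mutual and common knowledge that this question turns on can be made concrete in a toy finite partition model (in the style of Aumann's framework; this sketch is illustrative and not from the paper). Each agent's knowledge is given by a partition of possible worlds; an event is common knowledge exactly when iterating the "everyone knows" operator reaches a fixpoint containing the actual world:

```python
# Illustrative sketch (not from the paper): mutual vs. common knowledge
# in a finite partition model. Worlds are integers; each agent's
# knowledge is a partition (list of cells) of the set of worlds.

def everyone_knows(event, partitions):
    """Worlds where every agent knows `event`: for each agent, the cell
    containing the world must be a subset of `event`."""
    return {
        w for w in event
        if all(next(cell for cell in p if w in cell) <= event
               for p in partitions)
    }

def common_knowledge(event, partitions):
    """Common knowledge as a greatest fixpoint: iterate the
    'everyone knows' operator E until it stabilises."""
    current = set(event)
    while True:
        nxt = everyone_knows(current, partitions)
        if nxt == current:
            return current
        current = nxt

# Worlds {1, 2, 3}; the event phi holds at worlds {1, 2}.
agent_a = [{1}, {2, 3}]     # A distinguishes world 1 from {2, 3}
agent_b = [{1, 2}, {3}]     # B distinguishes {1, 2} from world 3
parts = [agent_a, agent_b]

print(everyone_knows({1, 2}, parts))    # mutual knowledge holds at world 1
print(common_knowledge({1, 2}, parts))  # common knowledge holds nowhere
```

At world 1 both agents know the event (mutual knowledge), yet B cannot rule out world 2, where A does not know it, so no level beyond the first survives and common knowledge fails everywhere. This is the structural reason weaker notions such as probabilistic or timestamped common knowledge are studied as practical substitutes.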

References

Note that it is an interesting open question whether in practice, generative AI agents coordinate rationally using common knowledge, and to what extent they rely on weaker concepts of group knowledge~\citep{halpern_knowledge_2000,monderer_approximating_1989,thomas_psychology_2014}.

Secret Collusion among Generative AI Agents: Multi-Agent Deception via Steganography (2402.07510 - Motwani et al., 2024), Appendix A.1 (View-Based Knowledge)