DAWN: Designing Distributed Agents in a Worldwide Network
Abstract: The rapid evolution of LLMs has transformed them from basic conversational tools into sophisticated entities capable of complex reasoning and decision-making. These advancements have driven the development of specialized LLM-based agents for diverse tasks such as coding and web browsing. As these agents grow more capable, a robust framework that facilitates global communication and collaboration among them toward advanced objectives becomes increasingly critical. Distributed Agents in a Worldwide Network (DAWN) addresses this need with a versatile framework that integrates LLM-based agents with traditional software systems, enabling agentic applications across a wide range of use cases. DAWN allows distributed agents worldwide to register and be easily discovered through Gateway Agents, while collaborations among them are coordinated by a Principal Agent equipped with reasoning strategies. DAWN offers three operational modes: No-LLM Mode for deterministic tasks, Copilot Mode for augmented decision-making, and LLM Agent Mode for autonomous operation. Additionally, DAWN safeguards agent collaborations globally through a dedicated safety, security, and compliance layer, protecting the network against attackers and adhering to stringent security and compliance standards. Together, these features make DAWN a robust network for deploying agent-based applications across various industries.
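The registration-and-discovery flow described above can be illustrated with a minimal sketch. All names here (`GatewayAgent`, `PrincipalAgent`, `Mode`, and the capability strings) are hypothetical illustrations, not DAWN's actual API: agents register with a Gateway Agent, and a Principal Agent, configured with one of the three operational modes, discovers agents by capability when planning a task.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    """The three operational modes described in the abstract."""
    NO_LLM = auto()     # deterministic tasks, no LLM in the loop
    COPILOT = auto()    # LLM-augmented decision-making with human oversight
    LLM_AGENT = auto()  # fully autonomous LLM-driven operation


@dataclass
class AgentRecord:
    """A distributed agent's registry entry (hypothetical schema)."""
    name: str
    capabilities: set[str]


class GatewayAgent:
    """Sketch of a Gateway Agent: a registry where agents register
    and are discovered by capability."""

    def __init__(self) -> None:
        self._registry: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._registry[record.name] = record

    def discover(self, capability: str) -> list[str]:
        # Return the names of all registered agents offering the capability.
        return [r.name for r in self._registry.values()
                if capability in r.capabilities]


class PrincipalAgent:
    """Sketch of a Principal Agent: coordinates a collaboration by
    mapping each required capability to the agents that provide it."""

    def __init__(self, gateway: GatewayAgent, mode: Mode) -> None:
        self.gateway = gateway
        self.mode = mode

    def plan(self, required: list[str]) -> dict[str, list[str]]:
        return {cap: self.gateway.discover(cap) for cap in required}


if __name__ == "__main__":
    gateway = GatewayAgent()
    gateway.register(AgentRecord("coder", {"coding"}))
    gateway.register(AgentRecord("browser", {"web-browsing"}))
    principal = PrincipalAgent(gateway, Mode.NO_LLM)
    print(principal.plan(["coding", "web-browsing"]))
```

In a real deployment the registry would be a distributed service and `plan` would invoke the Principal Agent's reasoning strategy; this sketch only shows the shape of the register/discover/coordinate interaction.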